/* Copyright 2003-2013 Joaquin M Lopez Munoz.
* Distributed under the Boost Software License, Version 1.0.
* (See accompanying file LICENSE_1_0.txt or copy at
* http://www.boost.org/LICENSE_1_0.txt)
*
* See http://www.boost.org/libs/multi_index for library home page.
*/
#ifndef BOOST_MULTI_INDEX_DETAIL_SERIALIZATION_VERSION_HPP
#define BOOST_MULTI_INDEX_DETAIL_SERIALIZATION_VERSION_HPP
#if defined(_MSC_VER)
#pragma once
#endif
#include <boost/config.hpp> /* keep it first to prevent nasty warns in MSVC */
#include <boost/serialization/split_member.hpp>
#include <boost/serialization/version.hpp>
namespace boost{
namespace multi_index{
namespace detail{
/* Helper class for storing and retrieving a given type serialization class
* version while avoiding saving the number multiple times in the same
* archive.
* Behavior undefined if template partial specialization is not supported.
*/
template<typename T>
struct serialization_version
{
  serialization_version():
    value(boost::serialization::version<serialization_version>::value){}

  serialization_version& operator=(unsigned int x){value=x;return *this;}

  operator unsigned int()const{return value;}

private:
  friend class boost::serialization::access;

  BOOST_SERIALIZATION_SPLIT_MEMBER()

  template<class Archive>
  void save(Archive&,const unsigned int)const{}

  template<class Archive>
  void load(Archive&,const unsigned int version)
  {
    this->value=version;
  }

  unsigned int value;
};
} /* namespace multi_index::detail */
} /* namespace multi_index */
namespace serialization {
template<typename T>
struct version<boost::multi_index::detail::serialization_version<T> >
{
  BOOST_STATIC_CONSTANT(int,value=version<T>::value);
};
} /* namespace serialization */
} /* namespace boost */
#endif
|
{
"pile_set_name": "Github"
}
|
179 F.2d 806
86 U.S.App.D.C. 70
DE BOBULA v. MACONDRAY et al.
No. 10158.
United States Court of Appeals District of Columbia Circuit.
Argued Dec. 8, 1949. Decided Jan. 3, 1950.
Mr. Titus de Bobula, pro se.
Mr. Samuel F. Beach, Washington, D.C., with whom Mr. Leslie C. Garnett, Washington, D.C., was on the brief, for appellees. Mr. Karl Kindleberger, Washington, D.C., also entered an appearance for appellees.
Before EDGERTON, PRETTYMAN and WASHINGTON, Circuit Judges.
PER CURIAM.
1
Plaintiff brought this action against twelve defendants, four of whom are the appellees here. The complaint, drawn by the plaintiff himself, appears to be based on allegations of wrongful and abusive eviction from the premises occupied by the plaintiff as tenant, damages being sought against the marshal who executed the eviction, and others. The four defendant-appellees were the owners of the premises, and landlords of the plaintiff. A motion for summary judgment was made on behalf of the four defendant-appellees, among the supporting documents being the affidavit of one appellee that the marshal executed the writ of restitution 'solely on his own responsibility and without instructions from the proponent and her daughters, or their agents.' Plaintiff filed an answering affidavit, denying the statement just quoted and alleging that the marshal proceeded upon instructions from appellees' attorney and from their real estate agent.
2
The District Court granted the motion for summary judgment and plaintiff appealed. Reading the complaint and its exhibits along with the affidavits of the parties and the other relevant papers, we consider that at least two genuine issues of material fact were in controversy, i.e., whether the alleged wrongful conduct of the marshal was procured or authorized by appellees' attorney or real estate agent, or both, and if so, whether such action was within the scope of authority. In this posture of the case, appellees are not entitled to summary judgment. That being the sole question before us, we do not rule on any other.
3
Reversed.
|
{
"pile_set_name": "FreeLaw"
}
|
The Women's Initiative for Nonsmoking (WINS) XI: age-related differences in smoking cessation responses among women with cardiovascular disease.
Smoking cessation has immediate health benefits; however, the efficacy of smoking cessation interventions among older adults and women has received limited research attention. The original Women's Initiative for Nonsmoking (WINS) study was a randomized controlled trial that tested the efficacy of a smoking cessation intervention for Bay Area women hospitalized with cardiovascular disease. The current study, which used the WINS dataset, compares participants 62 and older with those younger than 62 years. The sample (n=277) contained 136 older smokers and 141 younger smokers. At the 6-month follow-up, 52.1% of older smokers had quit smoking compared with 40.6% of younger smokers. At the 12-month follow-up, 52.0% of older smokers had quit smoking compared with 38.1% of younger smokers. The difference at 12 months was statistically significant, and a Kaplan-Meier survival analysis further supported these findings. Clinicians should include older smokers in smoking assessments and smoking cessation interventions.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Henry Macintosh
Henry Maitland Macintosh (10 June 1892 – 26 July 1918) was a Scottish track and field athlete and winner of a gold medal in the 4 × 100 metres relay at the 1912 Summer Olympics.
Macintosh was born in Kelso and educated at Glenalmond College and Corpus Christi College, Cambridge. A sprinter, he was eliminated at the Stockholm Olympic Games in the first round of the 100 metres and did not finish his semi-final of the 200 metres. Running the second leg for the British 4 × 100 m relay team, he won a gold medal despite the team finishing second behind the United States in the semi-final: the United States was later disqualified for a faulty baton change, and the German team, world record holders and main favourites, made the same mistake in the final.
In 1913, Macintosh served as president of the Cambridge University Athletics Club, won the Scottish title, and equalled the British record over 100 yards. He ran his last competition in 1914 and left for South Africa. After the start of World War I he was commissioned into the Argyll and Sutherland Highlanders. He died of wounds at age 26, holding the rank of captain, and was buried in Senlis French National Cemetery.
See also
List of Olympians killed in World War I
References
Category:1892 births
Category:1918 deaths
Category:People from Kelso, Scottish Borders
Category:Sportspeople from the Scottish Borders
Category:Scottish male sprinters
Category:Scottish soldiers
Category:Olympic athletes of Great Britain
Category:Olympic gold medallists for Great Britain
Category:Athletes (track and field) at the 1912 Summer Olympics
Category:Scottish Olympic medallists
Category:British Army personnel of World War I
Category:British military personnel killed in World War I
Category:Argyll and Sutherland Highlanders officers
Category:People educated at Glenalmond College
Category:Alumni of Corpus Christi College, Cambridge
Category:Medalists at the 1912 Summer Olympics
Category:Olympic gold medalists in athletics (track and field)
|
{
"pile_set_name": "Wikipedia (en)"
}
|
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
class DynamicLossScaler(object):

    def __init__(
        self, init_scale=2.**15, scale_factor=2., scale_window=2000,
        tolerance=0.05, threshold=None, min_loss_scale=1e-4
    ):
        self.loss_scale = init_scale
        self.scale_factor = scale_factor
        self.scale_window = scale_window
        self.tolerance = tolerance
        self.threshold = threshold
        self._iter = 0
        self._last_overflow_iter = -1
        self._last_rescale_iter = -1
        self._overflows_since_rescale = 0
        self.min_loss_scale = min_loss_scale

    def scale(self, outputs):
        return self.loss_scale * outputs

    def update(self):
        if (self._iter - self._last_overflow_iter) % self.scale_window == 0:
            self.loss_scale *= self.scale_factor
            self._last_rescale_iter = self._iter
        self._iter += 1

    def _decrease_loss_scale(self):
        self.loss_scale /= self.scale_factor
        if self.threshold is not None:
            self.loss_scale = max(self.loss_scale, self.threshold)

    def check_overflow(self, grad_norm):
        # detect inf and nan
        if grad_norm == float('inf') or grad_norm != grad_norm:
            # overflow has occurred
            prev_scale = self.loss_scale
            iter_since_rescale = self._iter - self._last_rescale_iter
            self._last_overflow_iter = self._iter
            self._overflows_since_rescale += 1
            pct_overflow = self._overflows_since_rescale / float(iter_since_rescale)
            if pct_overflow >= self.tolerance:
                self._decrease_loss_scale()
                self._last_rescale_iter = self._iter
                self._overflows_since_rescale = 0
            if self.loss_scale <= self.min_loss_scale:
                # Use FloatingPointError as an uncommon error that parent
                # functions can safely catch to stop training.
                self.loss_scale = prev_scale
                raise FloatingPointError((
                    'Minimum loss scale reached ({}). Your loss is probably exploding. '
                    'Try lowering the learning rate, using gradient clipping or '
                    'increasing the batch size.'
                ).format(self.min_loss_scale))
            self._iter += 1
            raise OverflowError('setting loss scale to: ' + str(self.loss_scale))
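The schedule the class above implements can be traced in a minimal, self-contained sketch (toy parameter values chosen for brevity, not the class's defaults): the loss scale is halved on a step whose gradients overflow, and doubled after every `scale_window` clean steps. The overflow test itself relies on NaN being the only float that compares unequal to itself.

```python
# Toy trace of dynamic loss scaling, mirroring the logic of update() and
# check_overflow() above (assumed small scale_window for brevity).
loss_scale, scale_factor, scale_window = 2.0 ** 15, 2.0, 4
last_overflow_iter = -1
for step in range(12):
    overflowed = (step == 5)   # pretend step 5 produced inf/nan gradients
    if overflowed:
        loss_scale /= scale_factor
        last_overflow_iter = step
    elif (step - last_overflow_iter) % scale_window == 0:
        loss_scale *= scale_factor

# inf/nan detection as used in check_overflow(): NaN != NaN
bad_norm = float('nan')
assert bad_norm != bad_norm
```

In the real class, the `OverflowError` raised at the end of `check_overflow` is the signal a caller catches to skip the optimizer step and retry with the reduced scale.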
|
{
"pile_set_name": "Github"
}
|
2. The Field of the Invention
The present invention relates to the manufacture of semiconductor devices. More particularly, the present invention is directed to methods employing etchants for etching oxides of silicon during the manufacture of a semiconductor device such that the selectivity of the etchant is low. The methods of the present invention are also useful in removing contamination other than silicon oxides typically encountered in semiconductor manufacturing process flows, such as polymer residues, while providing low selectivity.
3. The Relevant Technology
In the continuing quest for ever denser DRAM devices, the problem of forming for each memory cell capacitors having both sufficiently large capacitance to preserve a charge between refresh cycles and sufficiently small size to allow further reductions in circuit dimensions has become increasingly acute. Dimensional tolerances in capacitor formation have thus tended to become a yield-limiting and density-limiting factor in DRAM devices.
Clean processes are a significant source of decreased dimensional control in the formation of capacitor structures. Removal of native oxides and other types of oxide contamination is required at various steps during capacitor formation. A short dip in a dilute solution of hydrofluoric acid (HF), such as a 100:1 volumetric ratio of water to 49% HF solution, is typically employed for this purpose. Problems arise because the dilute HF solution also significantly and even preferentially attacks doped silicon dioxide such as BPSG in which the capacitor structures are formed and defined, resulting in decreased control of critical dimensions associated with the capacitor. A less selective process is thus needed to remove native oxides and other types of oxide contamination during capacitor formation without excessively attacking doped silicon dioxide such as BPSG.
A dilute HF solution is also typically employed to remove native oxide or other oxide contamination at process steps during which a refractory metal silicide such as titanium silicide is exposed to the solution. This may occur, for example, in a clean step prior to the formation of spacers around a gate stack that includes a refractory metal silicide layer, or during a clean step prior to filling a contact to a gate stack that includes a refractory metal silicide. As dimensions of gate stacks decrease, this use of dilute HF solution creates problems because the refractory metal silicide layer is preferentially etched by the dilute HF solution, such that where dimensional tolerances are small, the refractory metal silicide layer may be seriously damaged or even completely destroyed. A less selective process is thus needed to remove native oxide and other types of oxide contamination during gate formation and contact formation without excessively attacking refractory metal silicides.
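For concreteness, the 100:1 dilution described above works out to roughly half a percent HF. The back-of-the-envelope calculation below is illustrative only (simple volumetric mixing, ignoring density differences between the stock solution and water):

```python
# Approximate HF concentration of a 100:1 (water : 49% HF) dilute clean.
parts_water, parts_stock = 100.0, 1.0
stock_hf_pct = 49.0
final_hf_pct = stock_hf_pct * parts_stock / (parts_water + parts_stock)
# roughly 0.49% HF
```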
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
/*
* Copyright (C) 2013-2015 RoboVM AB
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.bugvm.apple.audiotoolbox;
import com.bugvm.rt.bro.Struct;
import com.bugvm.rt.bro.annotation.StructMember;
public class AudioQueueProcessingTapMutableFlags extends Struct<AudioQueueProcessingTapMutableFlags> {

    public AudioQueueProcessingTapFlags get() {
        return new AudioQueueProcessingTapFlags(getValue());
    }

    public void set(AudioQueueProcessingTapFlags flags) {
        setValue((int)flags.value());
    }

    @StructMember(0)
    private native int getValue();

    @StructMember(0)
    private native void setValue(int value);
}
|
{
"pile_set_name": "Github"
}
|
Succinic anhydrides are valuable reactive intermediates that find use in an array of applications.
For example, their copolymerization with epoxides or diols yields biodegradable polyesters. Anhydrides are also useful intermediates in organic synthesis, since they can be readily ring opened to diacids or other succinate derivatives; some examples of which include biologically active natural products, pharmaceuticals, and metalloprotease inhibitors.
Substituted succinic anhydrides have previously been synthesized by a number of methods, most often by the dehydration of the corresponding diacid or from maleic anhydride via Diels-Alder or Ene reactions. They have also been made by metal catalyzed carbonylation of alkynes, alkenoic acids, and lactones; however, most of these catalytic reactions proceeded either in low yield, with significant side products, or without demonstrating substrate generality or product stereochemical purity. Thus, the development of more efficient and stereoselective syntheses remains an important goal.
As disclosed in U.S. Pat. No. 6,852,865 our group has developed a class of well-defined bimetallic catalysts of the general type [Lewis acid]+[M(CO)x]− for the ring-expanding carbonylation of strained heterocycles. We have found that related catalysts can carbonylate β-lactones to succinic anhydrides in high yields while preserving stereochemical purity. Given the many syntheses of enantiomerically pure epoxides and the recent advances in epoxide carbonylation to β-lactones, subsequent carbonylation of these lactones constitutes a versatile two-step method for the stereoselective synthesis of succinic anhydrides (Scheme 1).
This method would be far more synthetically useful if the two steps could be consolidated, eliminating the requirement for isolation and purification of potentially toxic lactone intermediates, saving time and catalyst, and increasing overall yield. The present invention provides such a methodology.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
Attention VW Golf buyers!
Buyers wanting a Volkswagen Golf with a small petrol engine had better get in quick – VW has halted production of some of the most popular engine variants in the range.
Our two favourite versions, the 120bhp 1.4 TSI and the 113bhp 1.6 FSI, are no longer available to factory order - that means you can't tailor them to your own bespoke specifications.
However, if you're a bit more flexible on the paint colour and the options fitted, Volkswagen says you can still buy these engines from stock reserves – cars that have already been built, but not bought.
That leaves the Golf range with a choice of just three petrol engines that you can order from scratch.
There's the 79bhp 1.4, which is hideously underpowered. There are another two versions of the 1.4 TSI engine (giving either 138bhp or 168bhp), and although they are much, much better, they make the Golf way too expensive.
1.9 TDI now cheaper
To compensate for the huge gap left in the range, Volkswagen has dropped the price of the 105bhp 1.9 diesel version.
Prices now start at £14,305. The rest of the diesel range remains the same. There's a non-turbo 2.0 with 74bhp, and two 2.0 turbodiesels that give either 138bhp or 168bhp.
There's more bad news for fans of sporty Golfs – production has also been halted on the excellent GTI and R32 hot hatch versions. Again, you can buy them from Volkswagen's stock, but no more factory orders are being taken.
The Golf's engine range shouldn't suffer these gaps for too long, however, because the new Mk VI Golf is due to arrive at the end of the year.
|
{
"pile_set_name": "Pile-CC"
}
|
In vehicles, and especially automobiles, it is known to provide door handles that are moveable between a first, stowed condition in which the handle is essentially flush with the exterior surface of the door and generally inaccessible to a user, and a second, deployed condition in which the handle is pivoted so that a portion thereof extends away from the door so as to be accessible to, and actuatable by, a user to open the vehicle door. Presently, such “flush-mount” handles are manually pivotable, requiring the user to physically depress one area of the handle in order to pivot the handle from the stowed to the deployed positions thereof.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
I don't like Kate having cancer at all, mainly because I fear that it's their way of axing Lauren Koslow. The character of Daniel is a joke and a bust, and it appalls me the way they have Kate in this random corner just secluded as if she's his own personal story prop. Hate it! Meanwhile, can we please bring back her inner power-hungry bitch side? What happened to her wardrobe? The Kate I'm seeing now just doesn't even feel like the Kate I knew eight years ago.
ITA agree with this. i think they destroyed SC initially with all the 'world-famous' doctor propping and surfer dude talk. pairing him with chelsea was pretty much the final nail in the coffin. i'm not interested in his romance with kate, because frankly, he's already been set up as a total douchebag who targets ill women patients to sleep with. how am i supposed to root for him, even though he's finally in a an appropriate relationship? in a word, i can't.
Now, if they would EXPLORE this particular syndrome he's got and USE it, then they'd have a story...this targeting ill women. I mean, so many viewers note this (according to boards)...why not take it and run with it and give us a Daniel story with a background we can sink our teeth into? But..........no.
ICAM. I couldn't figure out why Date bothers me so much since Daniel is finally in an appropriate relationship. But you nailed it.
And you're right. If they actually address this issue, it would make a great story. But that would mean giving up on Daniel as a "hero." And I don't see them doing that.
Although I would hate for the actor to lose his job, I wish they would just write off Daniel Jonas. I think he's been a complete waste of time. It's not the actor's fault; it's that the writers have made a total mess of his character from day one. There are still interesting things they could do with him, but I don't care. Cure Kate, send Daniel off to some surfer's paradise, and give her a story with someone who deserves her.
|
{
"pile_set_name": "Pile-CC"
}
|
Barack Obama may be the former leader of the free world, but that doesn’t mean that he’s above the challenges that every parent experiences and that includes the struggles that come with sending a child off to college.
As the inaugural guest on David Letterman's new, six-episode Netflix series, My Next Guest Needs No Introduction, the Dad-in-chief divulged that, like most people, he sometimes has issues when it comes to assemble-yourself furniture. According to Obama, when he and the rest of the family moved his daughter Malia into her new college digs (she's currently a freshman at Harvard), he was feeling pretty emotional about his firstborn moving out, so Malia suggested that he help her put together a lamp that she had gotten for her desk.
While the lamp initially appeared to be easy to put together, Obama ran into some challenges assembling it.
“I was basically useless. Everyone had seen me crying and misting up for basically the previous three weeks, so Malia, who’s very thoughtful, she goes, ‘Dad, you know, I’ve got this lamp in this box, could you put the desk lamp together?’ I said, ‘Sure.’ It should have taken five minutes or three minutes and it had one of those little tools. It only had like four parts and I’m just sitting there, toiling at this thing for half an hour and meanwhile, Michelle has finished scrubbing and she’s organizing closets and I was just pretty pathetic.”
While Obama noted that dropping off Malia at college was like having “open-heart surgery,” he also divulged that it’s been easier because of technology since the two text on an almost daily basis, with Malia checking in on him and sending him lots of heart emojis.
Write to Cady Lang at cady.lang@timemagazine.com.
|
{
"pile_set_name": "OpenWebText2"
}
|
- -10. Is t even?
True
Suppose 2*x - 7*x = -60. Let o = -3 + x. Let r = 14 - o. Is r a multiple of 2?
False
Let o(w) = w**2 - 7*w - 6. Let l be 74/8 - (-2)/(-8). Is 4 a factor of o(l)?
True
Let s = 18 + -45. Let n = s - -68. Let p = 4 + n. Is p a multiple of 18?
False
Let d(y) = 8*y - 1. Is d(23) a multiple of 23?
False
Let q(k) = k - 4*k + 3 + 2*k. Let g be q(-3). Suppose -g - 27 = -3*v. Is v a multiple of 11?
True
Suppose 744 = 8*n + 256. Is 19 a factor of n?
False
Let m(y) = 2*y - 2*y + y + 12. Does 12 divide m(0)?
True
Suppose 0 = -a + 2*z + 14, -3*z + 0*z = -3*a + 51. Let v be 6/15 + 452/a. Let g = 49 - v. Does 26 divide g?
True
Let x = 6 + -4. Let g be x/6 + 35/(-15). Is 1*-1*28/g a multiple of 12?
False
Let x = 33 + -18. Does 5 divide x?
True
Let r be (71/(-3))/(5/(-30)). Let s = r - 84. Is 20 a factor of s?
False
Let v(h) = -h**3 - 9*h**2 + h - 21. Is 10 a factor of v(-10)?
False
Let p(u) = 3*u**2 + 6*u + 20. Is p(-5) a multiple of 13?
True
Let u = 32 + 0. Let p = u + 3. Does 13 divide p?
False
Suppose 0 = 2*o + 3*o - 5. Let i(a) = 18*a**3. Is i(o) a multiple of 9?
True
Let b(v) = v**2 + 3*v + 3. Let s be b(-2). Is 16 a factor of 30 + s + -1 + 2?
True
Let h(c) = 2*c**3 - 3*c - 3*c**3 - 4 + 1 + 6*c**2 + 4. Suppose -5*b + 21 = 1. Is h(b) a multiple of 17?
False
Let u(w) = 5*w + 28*w**2 - 4*w + 2 - 3. Is u(1) a multiple of 14?
True
Let d(x) = -x**2 - 7*x - 3. Let u be d(-6). Let v be 1/u*(-1 + 22). Let g = v - -6. Is g a multiple of 13?
True
Suppose -8 = -5*z + 7. Suppose -51 = -0*t + z*t. Let u = t + 26. Does 9 divide u?
True
Let w(k) be the second derivative of 1/2*k**2 - 1/6*k**3 + 0 + 3*k. Is 5 a factor of w(-4)?
True
Let s(m) = -m**3 + 4*m**2 - 2*m. Let g be s(3). Suppose n = -g*n. Suppose -a + 9 = 5*q, 5*a - 2*q - 126 = -n*q. Is 12 a factor of a?
True
Suppose -2*x + 638 = 4*m - 0*x, -m = -3*x - 149. Suppose -2*u - u = 2*f - 94, 3*f = -5*u + m. Does 17 divide u?
True
Suppose 5*x + 5*f - f - 4 = 0, -5*x - f + 1 = 0. Is 1 + 42 - (3 + x) a multiple of 6?
False
Suppose 5 = -2*l + 3*l - b, -4*l + 3*b + 18 = 0. Suppose 5 = t + x, -2*t - l*x + 5 = -4. Suppose 4*h = -5*u + t + 2, -4*u = -2*h - 22. Does 3 divide u?
False
Suppose 16 = -2*u - 12. Suppose 0 = 5*a - b + 151, 3*a - 2*b = a - 62. Let x = u - a. Does 8 divide x?
True
Let y be 0 - 1 - (-1 + -3). Let f = -1 + y. Suppose 56 = f*r + 2*r. Is 5 a factor of r?
False
Suppose 6*r - 4*r + 8 = 0. Let u = r + 7. Does 3 divide u?
True
Suppose 0 = -i + 11 - 1. Does 19 divide (59/(-2))/((-5)/i)?
False
Let m = 5 + -1. Is 22 a factor of (m - (-3)/(-1))*45?
False
Suppose -3*c + 4*c - 2 = 0. Suppose -c*o + 8 = 2*o. Suppose -s - s + 26 = v, 2*s = -o*v + 22. Does 14 divide s?
False
Let y be (-1)/(-2) - 540/(-8). Suppose 0*j - y = -4*j. Is j a multiple of 7?
False
Let f = -1 - 6. Suppose -3*c + 4*i = 68, 0 = 5*c - 2*i + 17 + 101. Let t = f - c. Does 15 divide t?
False
Suppose -r - 432 = -9*r. Is 9 a factor of r?
True
Let f = 28 - 9. Is 12 a factor of f?
False
Let h be (3 - -2 - 3) + 2. Suppose -5*n + 0*c + 485 = 3*c, 356 = 4*n - h*c. Suppose 3*g - n = -g - 2*q, 0 = 3*g + 2*q - 69. Does 20 divide g?
False
Let b be 2 + 2*(-9)/6. Let f be (-1290)/(-24) - b/4. Suppose 0*c = -2*c + f. Is c a multiple of 16?
False
Suppose -4*r = -5*n - 563, -4*r + 2*r + 2*n = -284. Is r a multiple of 37?
False
Let o = -3 + 5. Suppose -o*z = 2 - 20. Suppose p - 21 = z. Is p a multiple of 19?
False
Let m = -3 - -5. Let p be (-28 - 0) + -4 + m. Is ((-6)/(-9))/((-2)/p) a multiple of 4?
False
Let z be (-10)/25 + 153/(-5). Let x = 1 - z. Suppose 2*w - 34 = -2*k, 4*k + x + 17 = 5*w. Does 13 divide w?
True
Let r(b) = 5*b**2 - 1. Let k be r(1). Suppose 0 = -5*f + k*f + 15. Is f a multiple of 12?
False
Suppose -5*o + 5*j = -15, 2*j + 5 - 13 = -5*o. Let h be (-1 - (0 - -1)) + 4. Suppose -1 - 5 = -o*l - h*y, -l + 2*y = -3. Is 2 a factor of l?
False
Let u(s) = 7*s**3 + 4*s**2 - s + 4. Let k be u(3). Suppose 5*m - k - 174 = 0. Suppose x - m = -3*x. Is 20 a factor of x?
True
Let f(w) = w**3 - w**2 - w. Let a be f(2). Let o be (0 + 0 + 0)/a. Suppose -j - 63 = -x - o*j, 120 = 2*x + 4*j. Is x a multiple of 20?
False
Does 13 divide 11/(-22)*72/(-1)?
False
Let p = 217 + -78. Is p a multiple of 15?
False
Let r = -11 + 7. Is 30 + r/((-8)/6) a multiple of 11?
True
Let o(t) = 6*t**3 - 2*t**2 - 2. Let d be o(2). Let w = d - 10. Let y = 40 - w. Is 8 a factor of y?
False
Let q(p) = 7 + 5*p - 3*p + 5*p - 4*p. Let m(o) = -2*o - 6. Let u(t) = 4*m(t) + 3*q(t). Is 3 a factor of u(9)?
True
Suppose -4*a = 37 - 1. Does 12 divide (-6)/27 + (-119)/a?
False
Let q be (-1)/4 + 52/16. Let r(d) = 3*d + 5 + 2*d**2 - q*d**2 - 10*d. Does 6 divide r(-4)?
False
Let s(p) = 3*p**3 + p**2 + p + 2. Let j be s(2). Let k = j + -10. Let o = k + 11. Is o a multiple of 9?
False
Let u be 94/3 - (-1)/(-3). Let z = 2 + -20. Let j = z + u. Does 6 divide j?
False
Let z(h) = h**2 - 2*h - 1. Let w be z(-1). Suppose 2*k - 67 = -5*l, 5*k - 2*k = -w*l + 18. Is 2 a factor of (-8)/10*l/(-6)?
True
Let g = -137 + 267. Suppose -5*f - h + g = 4*h, 4*h = -5*f + 129. Is f a multiple of 7?
False
Let p = -19 + 4. Let t = p - -39. Does 8 divide t?
True
Let y(u) = u**2 - u + 2. Suppose 5*k = r - 0*r + 14, 4*r = k - 18. Let m = -6 - r. Does 8 divide y(m)?
True
Suppose 0 = -4*x + 2*l + 62, x - 2*l - 5 = 12. Is x a multiple of 15?
True
Let h(v) = 7 - 10*v - 7. Let l be h(-2). Suppose -f + l = 4. Is f a multiple of 6?
False
Let o = 0 + 2. Let r be (-60)/(-18)*3/o. Suppose -r*x + 43 = j, 22 = 2*j - x - 31. Does 14 divide j?
True
Let k = -8 - -31. Is k a multiple of 14?
False
Let j(q) = -2*q + q + 3 + q**2 + 16. Does 5 divide j(0)?
False
Let l = 93 - 84. Is l a multiple of 3?
True
Let q = 27 - 7. Does 8 divide q?
False
Suppose 9 = 3*b + y, 2*y + 0*y - 4 = b. Suppose -69 = -n - b*n. Let f = n + 26. Is 13 a factor of f?
False
Let h(k) = -k**3 + 8*k**2 + 6. Let d = 27 + -19. Does 5 divide h(d)?
False
Let v be (1 - (-4)/(-3))*-6. Suppose -7*n + 30 = -v*n. Suppose -79 = -3*d - y, 5*y + n = 2*d - 41. Is d a multiple of 11?
False
Suppose 4*o = -3*o + 420. Is 30 a factor of o?
True
Suppose -q - 4 = -3*q. Let v(m) be the third derivative of m**5/15 - m**3/6 + m**2. Is 15 a factor of v(q)?
True
Let g = 153 - 141. Let y(x) = x**3 - 4*x - 7*x + 1 - 11*x**2 + 5. Is 9 a factor of y(g)?
True
Suppose -6*t + 10*t = 56. Is t a multiple of 12?
False
Let x be (3 - 1) + (13 - 1). Suppose -n + x = n. Let i = n - -3. Is i a multiple of 5?
True
Let d(x) = -x**3 + 4*x**2 - 1. Let f be d(4). Let l(h) = -5*h - 1. Is 2 a factor of l(f)?
True
Suppose 2*a - 133 = -11. Is a a multiple of 15?
False
Suppose 0 = 2*f + 3*f - 215. Does 10 divide f?
False
Suppose -3*f - 3*f = -420. Is 14 a factor of f?
True
Suppose 82 = l + 30. Is 13 a factor of l?
True
Let s be ((-4)/(-6))/((-2)/48). Does 3 divide (-19)/(-4) - (-12)/s?
False
Suppose 0 = 2*d - 4*d - 6, 5*x - d = 8. Suppose 2*v - x = -p, -5*v - p = -10*v - 8. Is 6 a factor of v + (-7)/(2/(-2))?
True
Suppose 0 = -22*j + 18*j + 384. Is j a multiple of 14?
False
Let u = 10 - 19. Does 7 divide (6/4)/(u/(-84))?
True
Suppose -3*x - 2*x = -30. Let b = -8 + x. Is 24 - b/4*-4 a multiple of 12?
False
Let a(d) = -33*d. Let i be a(1). Does 8 divide (i/15 + 3)*20?
True
Let t(g) = -g**2 + 4*g + 7. Let f = 9 + -4. Let i be t(f). Suppose -140 = -5*s - i*r, -4*s + 3*r = -5*s + 15. Is s a multiple of 15?
True
Suppose 0 = 3*f + 5*w - 71, 4*f - 3*w + 0 - 114 = 0. Does 8 divide f?
False
Let h(i) = 22*i**2 + 4*i - 3. Is h(-3) a multiple of 61?
True
Let d = -48 + 111. Is 13 a factor of d?
False
Suppose 5*b - 6 = -5*l - 51, -2*b = l + 22. Let z = -1 - b. Suppose -j + 2*m = -10, -3*j + 4*m + 14 + z = 0. Does 4 divide j?
False
Suppose -5*g + 584 = -0*o + o, -2*o = -4*g + 456. Is 26 a factor of g?
False
Let u(b) = b**2 - 4. Let l be u(3). Suppose 3*v = 3*d + 2*d - 107, d + l*v = -1. Is d a multiple of 7?
False
Suppose 6*w - 37 - 47 = 0. Does 3 divide w?
False
Suppose 14*w = 9*w + 330. Is 11 a factor of w?
True
Let l = -27 - -36. Suppose d + 3*q - 25 = -6, d + q - l = 0. Is 3 a factor of d?
False
Let k be 67/5 + (-2)/5. Let u = k - 4. Suppose -2*i + 5*j + 17 = 0, j = -2*i - 2*j + u. Does 4 divide i?
False
Is 21 a factor of 39 + 1/((-4)/(-4))?
False
Suppose 2*p = 3*m - 6, -5 = -m - 3*p - 3. Suppose -m*x + 5*r + 20 = x,
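The divisibility answers above are mechanically checkable; a small sketch verifying two of the problems as stated:

```python
# "Let x = 33 + -18. Does 5 divide x?"  -> True
x = 33 + -18
# "Let q = 27 - 7. Does 8 divide q?"    -> False
q = 27 - 7
results = (x % 5 == 0, q % 8 == 0)
```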
|
{
"pile_set_name": "DM Mathematics"
}
|
Pietro da Pietri
Pietro da Pietri (1663 – 1708, 1716, or 1721) was an Italian painter of the late-Baroque period, active mainly in Rome.
Born in Rome, he was a pupil of the painter Giuseppe Ghezzi, then of Angelo Massarotti, then assisted in the studio of Carlo Maratta. He is also known as Pietro Antonio da Pietri, Pietro dei Pietri, and Pietro de' Pietri. He painted an altarpiece of the Virgin with Saints for Santa Maria in Via Lata.
References
Category:1663 births
Category:18th-century deaths
Category:Artists from Rome
Category:17th-century Italian painters
Category:Italian male painters
Category:18th-century Italian painters
Category:Italian Baroque painters
Category:Pupils of Carlo Maratta
|
{
"pile_set_name": "Wikipedia (en)"
}
|
Japanese researchers will launch a project this year to resurrect the long-extinct mammoth, using cloning technology to bring the ancient pachyderm back to life in about five years' time, a report said Monday.
The researchers will try to revive the species by obtaining tissue this summer from the carcass of a mammoth preserved in a Russian research laboratory, the Yomiuri Shimbun reported.
"Preparations to realize this goal have been made," Akira Iritani, leader of the team and a professor emeritus of Kyoto University, told the mass-circulation daily.
Under the plan, the nuclei of mammoth cells will be inserted into an elephant's egg cells from which the nuclei have been removed to create an embryo containing mammoth genes, it said.
The embryo will then be inserted into an elephant's womb in the hope that the animal will eventually give birth to a baby mammoth. Researchers hope to achieve their aim within five to six years, the Yomiuri said.
The team, which has invited a Russian mammoth researcher and two US elephant experts into the project, has already established a technique to extract DNA from frozen cells.
The researchers had once given up similar plans after nuclei in the cells of mammoth skin and muscle tissue were damaged by ice crystals and proved unusable.
However, another Japanese researcher, Teruhiko Wakayama of the Riken Centre for Developmental Biology, succeeded in cloning a mouse from the cells of another that had been kept in deep-freeze for 16 years.
Based on Wakayama's techniques, Iritani's team devised a method to extract the nuclei of mammoth eggs without damaging them.
"If a cloned embryo can be created, we need to discuss, before transplanting it into the womb, how to breed [the mammoth] and whether to display it to the public," Iritani said.
"After the mammoth is born, we will examine its ecology and genes to study why the species became extinct and other factors."
More than 80 percent of all mammoth finds have been dug up in the permafrost of the vast Sakha Republic in eastern Siberia. The most perfectly preserved remains of the Ice Age mammals still have hair and internal organs.
|
{
"pile_set_name": "OpenWebText2"
}
|
And now Alsip has been given the Christmas present of a lifetime. Police found the old Mustang at the Department of Motor Vehicles in Salinas last September. Apparently a man attempted to have it registered after owning it for 23 years. Sounding not at all suspicious, authorities began to investigate whether the car had been stolen. "It had been out of the system for so long that it came back with no file," according to the California Highway Patrol. "The officer did some digging and found out the car was stolen in 1986." And so 28 years after its disappearance, the Mustang was returned to its rightful owner, who plans to have it fixed up and start driving it again.
|
{
"pile_set_name": "OpenWebText2"
}
|
Social Media
Wikipedia's New Love Button Will Let You Send Kittens or Beer to Others
Wikipedia will launch a "Love" button next week that will let users send their messages of appreciation to other users through virtual cats, beer and more.
The experimental feature, called WikiLove, is scheduled to make its debut on the English version of Wikipedia on June 29. The goal of the product, according to a blog post by the Wikimedia Foundation, is to encourage more users to edit and engage with more articles by providing positive reinforcement.
"The drive for quality and reliability has led to the development of sophisticated automation mechanisms that aid in socializing new users to Wikipedia’s norms, policies and conventions," the company explained in its announcement. "The act of expressing appreciation for other users, by contrast, is a largely manual effort. Whether it’s welcoming new users, inviting users to participate in specific topics or discussions, recognizing effort using barnstars and trophies, or just sending a whimsical note, expressing appreciation is not an activity that is facilitated by the software — in spite of its known importance for people’s likelihood to want to edit."
The WikiLove feature is relatively straightforward and simple. Starting next week, a heart icon will appear next to the watchlist star icon on user pages. Clicking it will bring up the WikiLove menu, where users can send their appreciation to someone by sending them a virtual gift and a quick note. Barnstars (think of them like Foursquare badges), Beer and Kittens are the primary WikiLove gifts a user can send, but users also have the option to create their own.
Once a user sends someone a little WikiLove, an update appears on his or her user talk page, where messages and photos of "A beer for you!" or "A kitten for you!" could potentially flood it.
The addition of this simple feature makes a lot of sense. Wikipedia is currently dominated by a very small group of editors, and its future depends on getting more people to participate in the article creation and editing process. Wikipedia's research has concluded that there has been an increase in warnings and criticism and a decrease in praise over the years, a troubling trend to say the least.
While the feature won't be launching for another week, you can try it out on the Wikimedia prototype site if you want to play around with WikiLove a couple days early.
|
{
"pile_set_name": "Pile-CC"
}
|
It has recently been found, as disclosed in pending U.S. patent application No. 855,517, now U.S. Pat. No. 4,822,590 that singular molecular layers of layer-type transition metal dichalcogenides, such as MoS.sub.2, TaS.sub.2 and WS.sub.2, can be prepared by intercalating such compounds with lithium and then reacting the intercalated compound with water. This gives rise to a suspension of single molecular layers of the transition metal dichalcogenides in water.
Attempts have been made in the past to produce sheet-like forms of metal dichalcogenides as revealed, for example, in U.S. Pat. No. 4,299,892 to Dines and Chianelli. Here, an amorphous transition metal dichalcogenide product is prepared by low temperature non-aqueous precipitation of the compound from mixtures of the metal salts. The amorphous products are converted into sheets of metal dichalcogenides referred to in the patent as having a "rag-like" structure by controlled heating at temperatures between 250.degree. and 400.degree. C. However, neither the end product, nor the intermediate product, are oriented films or sheets, that is films or sheets wherein the crystalline c-axes of single layers of the metal dichalcogenide are aligned.
U.S. Pat. No. 4,647,386 to Jamieson discloses an intercalated transition metal based solid lubricating composition. A transition metal dichalcogenide is intercalated with a metal, preferably a coinage metal.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
Spoken words can make the invisible visible-Testing the involvement of low-level visual representations in spoken word processing.
The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" → picture of a bottle) versus incongruent ("bottle" → picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400 ms after word onset and decays at 600 ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, that is, what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
SAN ANTONIO — One day after a bystander’s cellphone video was released that appeared to show sheriff’s deputies fatally shooting a Hispanic man who had his hands raised in surrender, officials here voted Tuesday to finance additional body cameras for deputies in the field, as federal authorities said they had opened an investigation into whether the man’s civil rights had been violated.
The action by commissioners of Bexar County, which includes San Antonio, will eventually put cameras on all uniformed officers on the street, part of a program that has been underway for more than a year. The two deputies involved in the shooting on Friday were not wearing body cameras. Only about eight deputies have them, all motorcycle officers, officials said. The commissioners approved nearly $1 million for hundreds of body and dashboard cameras for the department.
Before the vote, it was clear that the shooting and the video were on the minds of the County Commissioners Court. One commissioner, Tommy Calvert, questioned a sheriff’s official about the department’s use-of-force policy and training. Nelson W. Wolff, the county’s top elected official, asked sheriff’s officials to prepare a presentation comparing their policies with those of other law enforcement agencies.
|
{
"pile_set_name": "OpenWebText2"
}
|
Impact of a standardized titration protocol with carvedilol in heart failure: safety, tolerability, and efficacy-a report from the GESICA registry.
Grupo de Estudio de la Sobrevida en la Insuficiencia Cardiaca en Argentina (GESICA) studied whether a standardized protocol for the initiation and titration of the beta-blocker carvedilol in a multicenter, open-label program would optimize beta-blocker use in heart failure (HF) patients. The program included: (1) the carvedilol initiation and titration period, and (2) long-term follow-up at 6 and 12 months. Of 1299 patients in the registry, 504 were excluded due to current therapy; of the remaining 795 eligible patients, 293 were excluded due to contraindications. Of the included patients with follow-up data (n = 316), 93.3% tolerated carvedilol initiation and 47.7% of the patients reached the target dose of 50 mg/day for a mean dose of 39 mg/day. Rates were comparable in the elderly (n = 83), of which 53% achieved a target dose for a mean dose of 43.08 mg/day. This protocol improved therapy rates and achieved target doses quickly (average of 4 visits). Concomitant medications did not have to be adjusted and there were low withdrawal rates (10%) and hospital admissions (7.2%) for HF. Patients were able to maintain carvedilol therapy at 6 and 12 months. These results indicate that a standardized titration protocol, as used in GESICA, for the initiation and titration of beta-blockers is well tolerated and may improve beta-blocker use in carefully selected heart failure patients.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
(41 - 209))
50
Evaluate 9 + -5 + -2 - (-17 - (10 - 14)).
15
Calculate 3 + 10 + -5 + (-361 - -353).
0
Calculate 130 + (43 - (-130 - -416)).
-113
Calculate 16 - (70 + 51 + -99).
-6
Calculate (12 - -9) + -14 - -19 - ((0 - 3) + -7).
36
Evaluate 6 + -14 - (-53 + 1 - -150 - (0 + 10)).
-96
Calculate -5 - (17 + 24 + (16 - 33)).
-29
What is the value of 21 + 44 + 6 - (-9 - ((-2 - -14) + -7))?
85
Calculate 97 - (106 + -97 + (-1 - -4) + -11).
96
What is the value of (-14 - (-18 - -16)) + -14 + 91?
65
Evaluate 56 + 1 + -1 + (-8 + -1 + 6 - -8).
61
What is -19 - (23 + -27) - -5 - -18?
8
What is 153 - ((-27 + -13 - -58) + -17)?
152
Evaluate 44 + -16 + -57 + -1.
-30
What is (-1 - 21 - (-195 + 140) - 4) + -20?
9
(-4 - (-32 + -27)) + 66
121
Calculate -4 + -3 + (13 + 9 - (0 + 5)).
10
What is the value of 14 + (-4 - (-6 + 0) - (53 + -31))?
-6
(18 - 10) + 43 + -17 - 40
-6
-25 + (19 - -4) - (-15 + -102 + 9)
106
Calculate -92 + (-4 - -83) + (2 + -3 - -59).
45
Evaluate 14 - (-42 - (61 - 51 - (-3 + 13))).
56
Calculate (9 - -1) + (-70 - (23 + -78)).
-5
What is the value of 728 - 735 - (0 - (70 + -1 + 2) - -3)?
61
Calculate 60 + -99 + (44 - 61).
-56
What is the value of -122 - (-28 - 36 - 111)?
53
What is -197 + 295 + 22 - (32 + 1)?
87
27 + 2 + (0 + 7 - -2) - (24 - 17)
31
What is the value of (-1 - (4 - 0) - (97 + -113)) + -69?
-58
Evaluate (-14 - -13 - -20) + -83 + 22.
-42
-18 + -17 + 15 + -6 + -12 + 25
-13
Calculate 3 + (-3 - 3) - (7229 - 7219).
-13
Calculate -64 + 5 + 14 + -6 - (8 + -4 + -10).
-45
-39 + 28 + -29 + 0
-40
Calculate -107 + -2 + -3 + (12 - 18) + 22.
-96
Calculate -13 + (-8 - 3 - (15 + -112 + 16)).
57
Evaluate -6 + (24 - 4) + (1 - -4) + -11 + -24.
-16
Evaluate (2 + -16 - 3) + 278 + -215.
46
What is -2 + 101 - (20 + (11 - (2 + 11)))?
81
What is the value of -12 + 0 + (110 - 53 - 40)?
5
What is -51 - -1 - ((-61 - -62 - (1 + 1)) + 14)?
-63
What is the value of 66 - 30 - (11 + -14 - ((0 - -5) + -9))?
35
What is the value of -27 + (5 - -43 - (-53 + 79))?
-5
11 + -6 + -6 + -1 + -29 + 0
-31
(1 + -1 - 1) + (37 - -20 - 57 - 55)
-56
Calculate 3 + -27 + (11 - 18) + -6 + (10 - 0).
-27
Evaluate -60 - ((0 - 42) + 5 + 21).
-44
Evaluate -303 - -311 - (1 + 48).
-41
61 - (47 + (12 - -32))
-30
Evaluate -97 + 276 + -25 + (7 - 30).
131
Calculate 28 + -8 - (-14 + 4).
30
2 + 22 + 50 + -32 + -8
34
Calculate 53 + 106 + (-92 - 77).
-10
What is the value of (1 - 1 - -2) + 9 + (-9 - (-37 + 15))?
24
What is the value of (13 - (-1 - 14)) + 97 + -118?
7
What is (-24 - -92) + -32 - 0 - -2?
38
What is (58 - (-2 - -2)) + 83 + -328 + 232?
45
What is the value of -2 + (-10 - (2 + -3) - (-2 + 6 + -1))?
-14
1 - 32 - (1 - 9 - (412 - 412))
-23
Calculate 40 - (-47 + 59 + 11).
17
Evaluate 4 + 14 + (-5 - (6 + (-8 - 8))).
23
7 + 5 + -3 - 83 - -19
-55
Calculate -1316 - -1296 - (5 + -11 - (5 - 0 - 2)).
-11
What is 0 + -16 + -4 + -22?
-42
Calculate 14 - 7 - (5 + -3 - -8) - -42.
39
What is (-5 + 4 - -16) + -309 + 291 - -15?
12
Evaluate -40 - (0 + -52 - (853 - 864)).
1
Calculate 6 - (-2 - (-2 + 7 + -28)).
-15
What is -21 + 3 + (22 + 7 - 15)?
-4
Evaluate (-17 - (83 + -50)) + 67.
17
7 - ((-1 - -4) + 146 + -129)
-13
-19 + -2 - -3 - (-57 + 38 + 27)
-26
Evaluate -198 - -40 - (-237 + 46).
33
Evaluate 9 - ((-20 - -36) + (-12 - 11)).
16
Calculate -3 + 2 + 1069 + -1062.
6
What is the value of (1 + 11 - (-36 - 0 - -47)) + (-4 - 0)?
-3
What is 3 + -2 - (22 + (2 - -13)) - -1 - 2?
-37
Calculate -103 + 47 + 4 + -97.
-149
What is -12 - 27 - -26 - (58 + 3)?
-74
Calculate (17 - -16) + 11 + (21 - (13 + -8)).
60
What is -6 + -4 + 12 - 2 - 19 - (18 + -13)?
-24
Calculate 14 - 60 - (-106 - -13) - -33.
80
Calculate -33 + (0 - 31 - (-916 + 854)).
-2
Calculate 16 - (-14 + 33) - (-20 + 4) - (-61 - -1).
73
Calculate 30 + -40 + -35 + (-1 - 9).
-55
Evaluate -2 + -7 + (0 - (25 + -38)) + 18.
22
Evaluate 42 + -90 + 48 + (0 + 1 + 32 - 1).
32
Calculate 27 + 9 + -59 - (2 - 14).
-11
What is the value of -15 + (-40 - -43) + 17 + -9 + 2?
-2
(-167 - -160) + (-26 - -9) + -7
-31
What is the value of 0 + -3 + 10 + -1 - (1535 - 1551)?
22
Evaluate -21 + (71 - (61 + -45)).
34
What is the value of (-212 - -100) + (0 + (3 + 0 - -2) - -2)?
-105
Evaluate 41 - (-26 + (16 - 3 - 6)).
60
What is (-13 - 0) + (-2 - -54 - (2 - -10))?
27
Calculate 38 + -117 + 7 + 11.
-61
What is 1 + (-16 + (9 - 1) - -10 - 0) - -3?
6
Evaluate -97 + -2 + -304 + 458.
55
10 + (6 + -16 - -8 - (-9 - -34))
-17
What is -98 + (-28 - -6 - -13) + -3 + 1 - 14?
-123
Evaluate ((-10 - 2) + 19 - (41 + -3)) + (29 - 39).
-41
Calculate 250 + -372 - (-53 - 4).
-65
Evaluate 5 - (-3 - (-22 + -11 + -1 + -2 + 2)).
-26
1 + 27 - 58 - -64 - ((0 - -3) + 2)
29
What is 145 + (-68 - (-1 + -27)) + 17?
122
What is (-11 - (-1 + -14 + (0 - -42))) + 11?
-27
Evaluate (13 - (-11 - -28)) + (2 - 0) + -37.
-39
What is the value of 1141 + -1113 + -1 + -33?
-6
What is -13 + (11 - (23 + -4 + 6))?
-27
Calculate (16 - (29 - 32)) + -15 + -1.
3
What is -4 + ((11 - 4) + -11 - -14) - (12 - 1)?
-5
6 - ((-13 - -20 - 19) + -19)
37
1 + -12 - (-2 - 0 - (-18 - (-109 - -27)))
55
Calculate -259 - -20 - -141 - (-4 - -32 - 2).
-124
Evaluate 87 + -43 + 1 + -57.
-12
-52 + (22 - 8) + 23 - -46
31
Calculate -4 - (17 + -8) - ((12 - -6) + -85 - -16).
38
What is the value of -50 - (-59 + -63 + 89)?
-17
What is the value of 1 + (-1 - -2) - (-86 + 85) - 41 - -11?
-27
-35 - -1 - -45 - (-5 - 65 - -7)
74
Calculate 3 + (3 - 2 - (-19 + 30)) - -39.
32
What is (-6 - 30) + -1 + 8 + -17?
-46
What is the value of 4 + -11 + (-8 - -20) - (-2 + 12 + 85)?
-90
What is -3 - -2 - (-4470 + 4378)?
91
Calculate (92 + 4 - -1) + 0 - (-16 - 5 - -32).
86
What is -133 - ((-26 - -10) + 30 + -4 + -6 + -2)?
-135
Evaluate 58 - (-18 + -6 + 26 + 9 + -27).
74
Evaluate (34 - -3 - (-36 - -58)) + -5.
10
Evaluate 37 - (-71 - -32 - -152).
-76
22 + (-3 - 53) + -36 + 7
-63
Evaluate -1 + 35 + ((9 - 5) + -1 - (-4 + 7)).
34
What is (54 - (-66 + 118) - (-1 - -12)) + -7?
-16
What is the value of 21 + -111 + 74 + 4 + 1 + (1 - -148)?
138
Calculate -19 + -19 + 11 + 4.
-23
Evaluate (410 + -417 - (-2 + 5)) + (0 - -2) - 29.
-37
6 + (8 - (-2 - -4) - -2 - 0)
14
Calculate (-11 - -68 - 17) + 0.
40
30 - (50 + -48 - (-1 - 12))
15
What is -1 + -4 + (7 - -18) + (5 - -2)?
27
Calculate ((-150 - -159) + 1 + -9 - -4) + -15.
-10
-21 - (-9 - 3) - 4
-13
What is the value of 1 + 16 - 23 - (-1 + 4)?
-9
-5 + (16 - 17 - 13 - -7)
-12
Calculate 6 + (19 - 32) + -9 - (5 - (-2 + -1)).
-24
What is the value of -1 + (-102 - 14) + 62?
-55
What is the value of (-12 + 18 + 22 - 22) + 10?
16
What is 71 + 0 + (6 + 6 - -11 - 22)?
72
(-83 - -78) + 75 + -43
27
(38 - (13 - -20) - 13) + 0 + -42
-50
Calculate -485 + 487 - (104 + 1).
-103
What is the value of 68 - (44 + -53 + 4 + (-7 - -4))?
76
Evaluate 13 + -5 - (14 - (-16 + -13 + 29 + 6)).
0
Calculate (-58 - -2 - (-2 - 6)) + 31.
-17
What is -3 + (6 - -6) + -1 + 0 - (-1435 - -1451)?
-8
What is 70 - 0 - ((-1987 - -2000) + 2 + -4 + -7)?
66
What is the value of 127 + -117 - (-5 - -30)?
-15
-2 + -46 + 3 + 1 + 8 - -10
-26
What is the value of -1654 - -1537 - (-93 + 0)?
-24
-1 + -20 + (69 - (-218 - -267))
-1
What is the value of -226 - (-242 - -20) - (2 - 115)?
109
What is 46 + (48 - 9 - 83)?
2
Evaluate -72 + 12 + 23 + -39 + (-1 - -24).
-53
What is the value of -107 + 151 + 47 + -97?
-6
What is the value of 132 - (-18 + (122 - 82))?
110
What is -3 + (9 - (9 - (7 + 12)) - -15)?
31
What is (-29 - -65) + (1 - (9 - (31 - 14)))?
45
Calculate (-17 - (-5 + (-22 - -5))) + 27 + -100.
-68
(-8 - (11 + -17)) + (-4 - (5 + 0))
-11
What is the value of (11401 - 11316) + (1 - 139)?
-53
-109 - 42 - (-44 + -10)
-97
Calculate 150 + (-35 - -15) + (4 - (10 + 15)).
109
What is -41 - (-52 - -8) - (-1 + (2 - (-3 + 4)))?
3
1 + -18 + (9 + 9 + -13 - -2) + 31
21
What is the value of 87 - (84 + 5) - (-1 + 1)?
-2
260 - ((126 - 127) + 120)
141
Evaluate -16 + (-5 - -7) - (-485 - -453).
18
What is the value of (47 - 64) + 52 - 8?
27
(64 - 1) + 8 + 0 + (16 + -1 - 32)
54
What is the value of -19 - (-4 + 0 + -4 + 14 - (1 + 9))?
-15
Calculate 0 + -76 - (1995 + -2126).
55
Evaluate 93 + -94 + 32 + -7.
24
Evaluate 57 + (79 - (7 + 74)).
55
What is the value of -33 + (42 - 24) + -11 + -1 + 0?
-27
Calculate -13 + -1 + (36 - 19).
3
11 + -104 + (-21 - -23) - (2 - (-4 - -1))
-96
Calculate 20 - 4 - 27 - 2 - -24.
11
What is th
|
{
"pile_set_name": "DM Mathematics"
}
|
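The DM Mathematics items above pair nested integer arithmetic prompts with stated answers. As a quick sanity check, such expressions can be evaluated programmatically; the sketch below (the `safe_eval` helper and the sampled items are illustrative, not part of the dataset) walks the Python AST instead of calling `eval`:

```python
import ast
import operator

# Operators permitted when evaluating the arithmetic prompts above.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.USub: operator.neg}

def safe_eval(expr: str) -> int:
    """Evaluate a nested integer arithmetic expression without eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported node: {node!r}")
    return walk(ast.parse(expr, mode="eval"))

# A few (expression, stated answer) pairs taken from the record above.
items = [
    ("9 + -5 + -2 - (-17 - (10 - 14))", 15),
    ("130 + (43 - (-130 - -416))", -113),
    ("16 - (70 + 51 + -99)", -6),
]
for expr, stated in items:
    assert safe_eval(expr) == stated
```

Restricting the walker to addition, subtraction, multiplication, and unary minus keeps the evaluator safe even on untrusted dataset text.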
1. Field of the Invention
The present invention relates to an image processor for reproducing a still image, a method of controlling the same, and a storage medium, and particularly to an image processor that performs correction of defective pixels when synthesizing a plurality of still images, a method of controlling the same, and a storage medium.
2. Description of the Related Art
In general, an image pickup apparatus for shooting, recording and reproducing a still image is equipped with an image processor, by which the synthesizing of a plurality of still images is sometimes performed. When a plurality of still images are synthesized, it is necessary to perform so-called defective pixel correction on defective pixels.
For example, in one exposure operation, an operation for reading out a video signal (image signal) from an image pickup device is performed a plurality of times to thereby record a plurality of video signals obtained by the reading operation. Further, when the plurality of video signals are subjected to synthesizing processing (e.g. addition processing) to thereby generate one video signal, the defective pixel correction is performed on each of the plurality of video signals (see e.g. Japanese Patent Laid-Open Publication No. 2001-326850).
However, in the method described in Japanese Patent Laid-Open Publication No. 2001-326850, although the defective pixel correction is performed on an individual video signal, no level is indicated with reference to which some of the image data items forming the video signal are determined to be defective pixels. Therefore, for example, assuming that a minor defective pixel below the level exists in each image data item, when the plurality of image data items are accumulated by the addition processing, this causes accumulation of the defective pixel data items. As a result, this brings about a problem that even when the defective pixel correction is performed on each individual video signal, it is sometimes impossible to prevent degradation of image quality.
Further, it is known that along with an increase in the number of pixels and an increase in the sensitivity of an image pickup apparatus, such as a digital camera, an image pickup device, particularly a CMOS image sensor suffers from RTS (random telegraph signal) noise generated from transistors that read out pixels, which results in generation of white spot noise in an image.
This causes a problem that if a plurality of still images are synthesized in an image area in which such RTS noise is generated, a large amount of white spot noise is generated within a screen, which degrades image quality.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
Unibet Casino gives away a share of £700 on Tuesdays
Do you know a person who would not like to get some extra cash for doing simply nothing? Certainly not! That’s why the new promotion of Unibet Casino will surely be of great interest to everyone.
There is nothing new in the fact that in the arsenal of Unibet there is a great selection of awesome slot machines, brilliant table games and an exciting live casino. This casino is also a place where numerous promotions, tournaments, giveaways and other events that motivate you to play take place on a regular basis.
Currently, on Tuesdays, Unibet gives away £700 to its loyal customers and a chance to win a share of the prize pool.
Would you like to give it a try? Then you are to follow some steps to get a generous reward at the end!
Cash Drops on Tuesdays
Gambling at Unibet is real fun, and gambling there while receiving a reward for it is double the fun! These are the steps to follow:
Step 1: Go to Unibet Casino on Tuesday and opt in through the Casino Offers page.
Step 2: Wager at least £10 on the qualifying online slots to get a chance to participate in the Cash Drop prize draw.
Step 3: Play from your mobile device and wager only £5 to qualify for the promo!
Step 4: If your name is drawn, enjoy the cash prizes in your account the following day.
Blackjack has not been one of the favorites in Las Vegas casinos due to small margins and a tiny house percentage. It could even have disappeared in the 1980s, when the notorious MIT Blackjack Team was at its pomp.
The situation might have become even worse when poker started to attract mo
If you have not heard yet, we are glad to inform you that Hasbro and Evolution Gaming united and created a real money game of Monopoly. It appeared in February and to promote this launch Mr Green casino introduced a promotion this April. We will not talk too much about the game itself, as you can re
Would you like to overcome a fierce dragon and save the princess? If so, the Steam Tower slot is right for you! Read our review, learn where to get free spins on the Steam Tower slot! Play for free and develop your strategy!
Now you can play the Playboy slot! Read our review, learn where to get free spins on the Playboy slot. Try for free and develop your strategy! Pay attention to the scatters Kimi, Sofia, Ashley and Jillian!
|
{
"pile_set_name": "Pile-CC"
}
|
/**
* Copyright 2019 LinkedIn Corporation. All rights reserved.
* Licensed under the BSD 2-Clause License. See the LICENSE file in the project root for license information.
* See the NOTICE file in the project root for additional information regarding copyright ownership.
*/
package com.linkedin.datastream.testutil;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.linkedin.datastream.metrics.BrooklinMetricInfo;
import com.linkedin.datastream.server.DatastreamTask;
import com.linkedin.datastream.server.providers.CheckpointProvider;
/**
* An in-memory implementation of {@link CheckpointProvider}
*/
public class InMemoryCheckpointProvider implements CheckpointProvider {
private static final Logger LOG = LoggerFactory.getLogger(InMemoryCheckpointProvider.class);
private final Map<DatastreamTask, Map<Integer, String>> _cpMap = new HashMap<>();
@Override
public List<BrooklinMetricInfo> getMetricInfos() {
return null;
}
@Override
public void unassignDatastreamTask(DatastreamTask task) {
_cpMap.remove(task);
}
@Override
public void updateCheckpoint(DatastreamTask task, int partition, String checkpoint) {
if (!_cpMap.containsKey(task)) {
_cpMap.put(task, new HashMap<>());
}
_cpMap.get(task).put(partition, checkpoint);
}
@Override
public void flush() {
}
@Override
public Map<Integer, String> getSafeCheckpoints(DatastreamTask task) {
return _cpMap.get(task);
}
@Override
public Map<Integer, String> getCommitted(DatastreamTask datastreamTask) {
if (_cpMap.containsKey(datastreamTask)) {
return _cpMap.get(datastreamTask);
} else {
return new HashMap<>();
}
}
}
|
{
"pile_set_name": "Github"
}
|
Tuesday, March 3, 2009
Don't You Love Surprises LIVE Giveaway!!! GIVEAWAY IS CLOSED
I'm feeling bad about not posting yesterday and I want to make it up to you. I have decided to host a "Live" Giveaway. Here is how this works. To win, start posting as many comments to this post as you wish. Each comment has to answer any of the following:
1. What would you change about my blog?
2. What do you like or love about my blog?
3. What would you like to see more of on my blog?
4. Is there anything you would like me to add to my blog?
5. Is there anything you would like me to stop posting on my blog?
Okay, you can make as many comments as you like, but your comments have to answer any or all of the questions listed above. I want to make my blog better for you - therefore I need you to tell me what you want.
Some time today (it's a surprise) I will use Random.org to choose a random comment. The winner will be announced both on twitter and here on my blog. Feel free to tweet about this too.
I bet you are wondering what you are going to win, but I'm not telling you. I will tell you it is a $25 gift certificate - but you won't know where until it is over. SURPRISE!!!
I love your I Hate Meatloaf Recipe, i hate meatloaf too and hate to cook it, i cant wait to try yours out it actually looks like something i might enjoy! And i'm sure my hubs would love it too!!! Also i think you have a really cute header and button!!!
I would love to see more recipes! I am learning how to cook, because i did not know my way around a kitchen until i got married and am still new to it, but i love to find new recipes and try them out!!
I think the pink background is very pretty, but I don't think it says "you." You have such a warm caring personality, the plain pink is not you. It needs to show your personality. Like in the backgrounds I sent you the other day. :)
I love your blog.. its so colorful, i love reading it. I would agree maybe the only thing i would change is like the person above me suggests is just a pic :) That and more posts! :) Your blog is what i want mine to be :D
Just to clarify a little bit, I'd love to see more giveaways for dog/cat lovers. There's a ton out there for kids/babies/moms but dogs/cats..not so much. I see you have 3 of your own (so do I) so I'd love to win some great products for pets, too.
I like the format of your whole blog. It's easy to navigate throughout it, and I like your colors. Like the pink in the background is light enough so that it looks nice but at the same time it's really easy to read everything. Also, on your sidebar I love your SassyMom button!
|
{
"pile_set_name": "Pile-CC"
}
|
Q:
LabelEncoder within Lambda function
I'm working with the Ames, Iowa housing data, and I'd like to use a LabelEncoder within a lambda function to label encode my string values in my categorical features while skipping over the NaN values (so I can impute them later). This is what I have so far:
train['Fireplace Qu'].apply(lambda x: LabelEncoder(x).fit_transform if type(x) != np.float else x)
But it throws this error:
TypeError: object() takes no parameters
Any help would be greatly appreciated - trying to figure out a way to impute categorical data.
A:
Let us use factorize
pd.Series(pd.factorize(df.group)[0]).replace(-1,np.nan)
Out[141]:
0 NaN
1 NaN
2 0.0
3 0.0
4 NaN
5 NaN
6 NaN
7 NaN
8 1.0
dtype: float64
Or
df.loc[df.group.notnull(),'group']=df.group.astype('category').cat.codes
Data input
group
0 NaN
1 NaN
2 a
3 a
4 NaN
5 NaN
6 NaN
7 NaN
8 b
|
{
"pile_set_name": "StackExchange"
}
|
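The accepted workaround in the exchange above (factorize, then map the -1 sentinel back to NaN) can be packaged into a self-contained sketch; the toy `group` column mirrors the answer's "Data input" section:

```python
import numpy as np
import pandas as pd

# Toy frame matching the answer's "Data input": NaNs interleaved with labels.
df = pd.DataFrame({"group": [np.nan, np.nan, "a", "a", np.nan,
                             np.nan, np.nan, np.nan, "b"]})

# pd.factorize assigns the sentinel -1 to missing values; mapping -1 back
# to NaN keeps those rows available for later imputation.
codes, _ = pd.factorize(df["group"])
df["group_encoded"] = pd.Series(codes, index=df.index).replace(-1, np.nan)

print(df["group_encoded"].tolist())
# non-missing rows become 0.0 and 1.0 (order of first appearance);
# missing rows stay NaN
```

This is the point of the answer: `pd.factorize` tolerates missing values out of the box, sidestepping `LabelEncoder`'s difficulties with NaN entirely.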
List of crime films of the 1960s
A list of crime films released in the 1960s.
Notes
References
Crime films
1960s
|
{
"pile_set_name": "Wikipedia (en)"
}
|
Bend and helical twist associated with a symmetric internal loop from 5S ribosomal RNA.
We have used gel electrophoretic mobility measurements to investigate the conformation of the symmetric eubacterial loop E sequence of 5S rRNA (seven nucleotides in each strand). The loop strongly retarded the gel mobility of duplex RNAs containing it. In contrast, only asymmetric A5.An or U5.Un internal loops (n not equal to 5) strongly affected duplex RNA gel mobility. A phasing experiment, in which an A2 bulge and loop E were placed in the same duplex RNA and the number of base pairs between them varied, showed that loop E has a permanent bend and is torsionally stiff. A second phasing experiment substituting loop E for duplex sequences between two A2 bulges measured the helical twist associated with loop E; it is about 30 degrees (+/- 15 degrees) overwound compared to a duplex RNA of the same number of bases. Ribosomal protein L25 specifically recognizes loop E but had little or no effect on the twist of the loop. These results suggest that loop E adopts a specific, roughly helical structure.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
186 Ariz. 409 (1996)
923 P.2d 875
Lee Anne FLOYD, a married woman, Plaintiff-Appellant,
v.
Anthony DONAHUE and Jane Doe Donahue, husband and wife, Defendants-Appellees.
No. 1 CA-CV 95-0460.
Court of Appeals of Arizona, Division 1, Department B.
September 3, 1996.
*411 Wade F. Waldrip, Harland E. Carey, Phoenix, for Plaintiff-Appellant.
Raymond, Greer & Sassaman, P.C. by Randy L. Sassaman, Leonard D. Greer, Michael J. Raymond, Phoenix, for Defendants-Appellees.
OPINION
LANKFORD, Judge.
On this appeal from the dismissal of a complaint, we consider whether the statute of limitations bars a plaintiff's claims that her father sexually molested her from the time she was twelve years old. We hold that her claims of childhood abuse are barred, but that she may sue for acts occurring less than two years before she filed this action.
Although the trial court dismissed the complaint, it considered evidentiary matters in ruling on the motion to dismiss. The court thereby treated the motion as one for summary judgment. See Ariz. R. Civ. P. 12(b). Accordingly, we view the evidence in the record favorably to Floyd, the person against whom summary judgment was granted. Hill-Shafer Partnership v. Chilson Family Trust, 165 Ariz. 469, 472, 799 P.2d 810, 813 (1990). In addition, we determine de novo whether genuine issues of material fact exist and whether the trial court correctly applied the law. Gonzalez v. Satrustegui, 178 Ariz. 92, 97, 870 P.2d 1188, 1193 (App. 1993).
Appellant Lee Anne Floyd was born in December 1958. Beginning in about 1970, when Floyd was twelve years old, Appellee Anthony Donahue began sexually abusing her.[1] The abuse during her minority included numerous acts of inappropriate touching, exhibitionism, oral sex, and attempted intercourse. Donahue warned Floyd not to tell her mother, and Floyd feared that telling her mother would result in the breakup of the family.
In addition to the more egregious forms of abuse perpetrated from 1970 through 1974, Floyd claims Donahue used familial hugs and other opportunities as occasions for additional abuse, including open-mouthed kisses and thrusting his groin against Floyd in a sexually suggestive manner. This behavior continued into Floyd's adulthood until the day before her mother died on September 2, 1992.
Floyd filed her complaint on June 28, 1994. This date was more than seventeen years after her eighteenth birthday, but less than two years after her mother's death. After Floyd filed an amended complaint, Donahue moved to dismiss it based on the statute of limitations. The trial court granted the motion.
In general, the statute of limitations defense is disfavored; courts prefer to resolve cases on their merits. Gust, Rosenfeld & Henderson v. Prudential Ins. Co., 182 Ariz. 586, 590, 898 P.2d 964, 968 (1995). However, statutes of limitations serve the important public policy functions of protecting defendants and the courts from stale claims and from the evidentiary problems such claims generate, and protecting defendants from economic and psychological insecurity. Ritchie v. Grand Canyon Scenic Rides, 165 Ariz. 460, 464, 799 P.2d 801, 805 (1990).
In Arizona, a plaintiff must file suit for personal injuries "within two years after the cause of action accrues." Ariz. Rev. Stat. Ann. ("A.R.S.") § 12-542 (1992). Floyd argues that because of the unique issues involved in adults' claims against persons who sexually abused them as children, the statute should not apply. In a recent case, however, the Arizona Supreme Court applied A.R.S. section 12-542 to adult victims' claims against persons who sexually abused them as children. Florez v. Sargeant, 185 Ariz. 521, 524-25, 917 P.2d 250, 253-54 (1996). We therefore reject Floyd's argument that no statute of limitations applies to her claims.
*412 Because most of the alleged abuse in this case occurred when Floyd was a minor, the running of the limitations period on the acts occurring during childhood was tolled until she reached age eighteen. A.R.S. § 12-502(A) (1992). Floyd's claims as to the childhood abuse are facially untimely because she filed suit more than two years after her eighteenth birthday. Floyd has the burden to show some ground for tolling the statute of limitations. Ulibarri v. Gerstenberger, 178 Ariz. 151, 155, 871 P.2d 698, 702 (App. 1993).
Floyd offers several theories[2] in arguing that her claims are not time-barred. Some of these questions were resolved in Florez, in which the Arizona Supreme Court ruled that the claims of two persons sexually abused as children were barred by the statute of limitations. We will first briefly address those issues resolved in Florez, and then address an unresolved issue: May a victim sue for childhood sexual abuse beyond the two-year limitations period when the perpetrator continues to engage in unwanted sexually suggestive touching until within the two-year period?
Floyd contends that accrual of her cause of action was delayed until psychological counseling made her aware of the extent of injuries and of the causal connection between the abuse and her emotional problems. We disagree.
In Arizona, a claim accrues when a "plaintiff knows or, in the exercise of reasonable diligence, should know the facts" underlying that claim. Gust, Rosenfeld, 182 Ariz. at 588, 898 P.2d at 966; accord Kowske v. Life Care Centers of Am., 176 Ariz. 535, 537, 863 P.2d 254, 256 (App. 1993). The discovery rule delays accrual until the plaintiff has reason to know "by the exercise of reasonable diligence" that defendant harmed her. Mayer v. Good Samaritan Hospital, 14 Ariz. App. 248, 252, 482 P.2d 497, 501 (App. 1971).
The discovery rule did not render Floyd's claims timely. In Florez, the Arizona Supreme Court held that when adult victims knew who had abused them, what the abusers had done, and that this abuse had caused them injury, they could have filed their claims. 185 Ariz. at 527-29, 917 P.2d at 256-58. The record reveals that Floyd remembered her father's abuse, and was aware that this abuse had injured her. After she became an adult, Floyd began counseling to help her deal with psychological problems resulting from Donahue's abuse. In 1983, she began treatment at the Center Against Sexual Assault ("CASA"). In 1986 or 1987 she again sought counseling, and in 1993 sought marital counseling for problems associated with the past abuse. In 1993 Floyd also began counseling sessions with Kim Whiting, a counselor specializing in sexual abuse cases. Once Floyd had reason to know her father's abuse caused her injury, which occurred at the latest by 1983, her cause of action accrued. The discovery rule thereafter had no effect on the limitations period.[3] Because the limitations period for the childhood abuse had expired when plaintiff commenced the action, the complaint was properly dismissed.
Floyd also argues that because her father used his parental authority and threats against Floyd to keep her silent, and because his acts caused the mental impairment that prevented Floyd from timely filing *413 suit, Donahue should be estopped from asserting the statute of limitations. Although Floyd states that she is not claiming an "unsound mind" disability, her grounds for estoppel differ little from the "unsound mind" argument rejected in Florez. In Florez, the Arizona Supreme Court held that post-traumatic stress syndrome did not toll the statute of limitations unless its effects rendered the plaintiff "incapable of carrying on the day-to-day affairs of human existence." 185 Ariz. at 526, 917 P.2d at 255. There is no such evidence in this case, and we therefore decline to apply estoppel.
Floyd also asserts estoppel on this basis: She failed to file because she feared confronting her father would hurt her mother and lead to the breakup of her family. There is no evidence of concealment in this case. Compare Ulibarri, 178 Ariz. at 156, 871 P.2d at 703 (post-hypnotic suggestion to forget sexual conduct). In the absence of evidence of concealment, a specific threat or demonstrable duress, we decline to apply estoppel. Floyd no longer lived with her father, and she alleges merely that she forbore filing suit to protect her mother from unpleasant information that may have changed her mother's opinion about her husband. See Rigazio v. Archdiocese of Louisville, 853 S.W.2d 295, 297 (Ky. Ct. App. 1993) (abuser's telling victim not to tell anyone about abuse not sufficient to constitute obstruction or concealment); Franke v. Geyer, 209 Ill. App.3d 1009, 154 Ill.Dec. 710, 713, 568 N.E.2d 931, 934 (1991) (to show equitable estoppel, victim must show that the abuser's actions caused her to forbear filing suit); cf. Roer v. Buckeye Irrigation Co., 167 Ariz. 545, 547, 809 P.2d 970, 972 (App. 1991) (non-committal acts insufficient to establish estoppel). Accordingly, we reject the contention that defendant is estopped from raising the statute of limitations defense.
Floyd also claims that her father's continuing acts of sexually suggestive, offensive behavior into her adulthood extends the limitation period. The last such act occurred the day before her mother died and within the two year statutory limitations period. Floyd contends that all the conduct is linked and that the most recent acts sweep all prior conduct within the limitations period.[4] She further argues that although these later acts were not as nefarious as those Donahue committed during her childhood, these acts severely aggravated her depression and the post-traumatic stress syndrome caused by past abuse.
We agree that under certain conditions a tort is continuous, and in such cases the limitations period does not commence until the date of the last tortious act. See Garcia v. Sumrall, 58 Ariz. 526, 533, 121 P.2d 640, 643 (1942) (trespass to property). However, the continuing tort rule does not apply here because each claimed act is a separate assault causing separate as well as cumulative injury. See, e.g., Doe v. Doe, 671 So.2d 466, 469-70 (La. App. 1995) (separate acts of sexual abuse not a continuing tort); Hertel v. Sullivan, 261 Ill. App.3d 156, 198 Ill.Dec. 574, 578, 633 N.E.2d 36, 40 (1994) (continuing tort rule does not extend to sexual abuse claim); Davis v. Bostick, 282 Or. 667, 580 P.2d 544, 547-48 (1978) (separate but repeated acts of spousal abuse did not fall within continuing tort rule; claims for abuse occurring outside of limitations period barred). Cf. Mardis v. Robbins Tire & Rubber Co., 669 So.2d 885, 888 (Ala. 1995) (acts of sexual harassment occurring more than two years before suit time-barred); Doe v. Roe, ___ Ariz. ___, ___, ___ P.2d ___, ___, 1996 WL 445314, 222 Ariz. Adv. Rep. 17, 22 (App. Aug. 8, 1996) (Lankford, J., dissenting) (each act of abuse is a separate tort); but cf. Giuliani v. Stuart Corp., 512 N.W.2d 589, *414 595 (Minn. Ct. App. 1994) (sexual harassment claim timely if "at least one incident of harassment occurred within limitations period"). In this case, each of Donahue's acts against Floyd was a separate tort, and Floyd cannot assert claims for abuse occurring outside the two year limitation period.
We hold that Floyd's claims of abuse occurring more than two years from the date she filed are time-barred. We note, however, that Floyd claims that at least one act of offensive, sexually suggestive touching occurred within two years before she filed suit, and that this touching aggravated her preexisting psychological problems and thereby caused additional injury. Because this act occurred within the statutory period, we remand the claims filed within two years for further proceedings.
WEISBERG, P.J., and VOSS, J., concur.
NOTES
[1] Because we have assumed for purposes of this appeal that Donahue molested Floyd as alleged in the amended complaint, we find no merit to Floyd's argument that she should have been allowed to depose Donahue before resolution of the limitation issue.
[2] We note that Floyd does not claim that she was under the disability of "unsound mind" for purposes of tolling the statute of limitation. Accordingly, we need not address Donahue's argument that Floyd is attempting to tack disabilities in violation of A.R.S. section 12-503 (1992).
[3] Floyd argues that discovery of her cause of action was triggered by her son's reporting that Donahue told him not to tell his mother about something. However, the record shows that Floyd did not repress all memory of abuse and it is therefore unclear what memory her son's statement "triggered."
Floyd also argues that the trial court erred in disregarding the affidavit of her expert on the issue. We have reviewed the record de novo to determine whether an issue of material fact exists. We agree with the trial court that Floyd's concessions that she always remembered the abuse and that she sought counseling at least by 1983 to deal with psychological problems resulting from that abuse resolves the question. The expert's testimony did nothing to change that result because she merely opined that Floyd had discovered the full impact of Donahue's abuse after her mother died and after she began counseling.
[4] We reject Donahue's argument that Floyd waived this argument by failing to raise it in the trial court. In the amended complaint, made more specific than the original complaint because the trial court granted Donahue's motion for more definite statement, Floyd specifically alleged that Donahue's "harmful and offensive touching" continued until September 1992. Although Floyd did not use the words "continuing tort" in her written memoranda, fairly read these documents assert that Donahue's inappropriate conduct continued from 1970 through 1992, and that Floyd's claims were not separate claims for each alleged assault but were claims for her injuries resulting from the cumulative effects of those assaults. This is sufficient to raise the issue.
|
{
"pile_set_name": "FreeLaw"
}
|
Ultrasonic diagnosis of oral and neck malignant lymphoma.
A series of 14 patients with nodal and extranodal non-Hodgkin's lymphoma of the oral and neck region was analyzed by ultrasonogram evaluation. Eight nodal lymphomas and six extranodal lymphomas commonly exhibited almost completely similar ultrasonographic findings, specifically, clear delineation of the boundary echo and a homogeneous, weak internal echo, the so-called pseudo-liquid-like images. The results derived from our study suggest that ultrasonic diagnosis is also helpful in evaluating patients with lymphoma during the initial diagnosis and initial treatment like other diagnostic imaging modalities.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Getting to college can be tough. Especially for athletes. However, many athletes out there don’t understand that they can use the power of the internet to get their name out there. So, we’ve provided a step-by-step guide on how to get recruited by college coaches so that you can be seen.
According to the NCAA, it’s extremely hard to play collegiate athletics. And as sports become more and more popular, the opportunity to compete as an NCAA athlete shrinks each year.
Depending on your sport, the percentages of kids who earn some form of college scholarship can fluctuate, but the average is around 2% of high school students. Plus, more and more parents and athletes are trying to find out how to get recruited so that the NCAA is in the future for them.
So if only 2% make it, and you want to make it yourself, you need to do everything in your power to make sure you become recognized. That’s how to get recruited by college coaches.
You can’t sit back and assume that college coaches are going to contact you. College coaches are extremely busy. Especially the coaches that don’t have the ability to hire elaborate staffs and have to do the recruiting between them and their assistant coach.
Related: 8 Traits That Your Kid Needs To Become An Elite Athlete
If you want to figure out how to get recruited by these schools, you first need to assume that none of these coaches have ever heard of you, let alone watched you play. And, it’s your job to introduce yourself to them and keep them up to speed on everything about you in a professional, mature manner.
If you’re nervous about contacting college coaches, you should be. You only have one chance to make a first impression. If you come off as a showboat, arrogant, or even desperate, you will have a hard time trying to recover in the eyes of a potential coach that you are interested in playing for.
|
{
"pile_set_name": "OpenWebText2"
}
|
443 F.2d 368
UNITED STATES of America, Appellee,v.James Arthur O'NEAL, Appellant.
No. 26211.
United States Court of Appeals, Ninth Circuit.
May 28, 1971.
Donald R. Shaw (argued), of Tonkoff, Dauber & Shaw, Yakima, Wash., for appellant.
Carroll D. Gray (argued), Asst. U. S. Atty., Dean C. Smith, U. S. Atty., Spokane, Wash., for appellee.
Before JERTBERG, ELY and KILKENNY, Circuit Judges.
PER CURIAM:
1
Appellant was found guilty of refusing to submit to induction into the Armed Forces in violation of 50 U.S.C. App. § 462. He appeals. We affirm.
2
On March 27, 1967, after registration with his local draft board, appellant was classified I-A. Subsequently, he furnished additional information and applied for a IV-D classification. The local board refused to reopen.
3
(1) Relying upon Mulloy v. United States, 398 U.S. 410, 90 S.Ct. 1766, 26 L.Ed.2d 362 (1970); Miller v. United States, 388 F.2d 973 (9th Cir. 1967), and similar authorities, appellant claims that he made a prima facie case for reopening. The lower court thought otherwise and we agree. To qualify for a IV-D classification, a registrant must show that he is pursuing a full time course of instruction at a recognized theological or divinity school. 50 U.S.C. App. § 456(g). There is nothing in the record indicating that appellant was pursuing such a course, or that the alleged school was recognized.
4
(2) Although our decision on point (1) would, under ordinary circumstances, dispose of the appeal, we feel we should express our views on appellant's second contention. He urges that the actions of the board in his case amounted to a de facto reopening within the rules stated in Mulloy and Miller. In each of those cases, the registrant presented a prima facie case and the conduct of the board was tantamount to a reopening. Here, the board granted appellant a "courtesy interview," but refused to reopen. On the record before us, we hold that the actions of the board did not rise to the dignity of a de-facto reopening. Consequently, the rules in Mulloy and Miller are inapplicable. United States v. Price, 427 F.2d 162, 163 (9th Cir. 1970); United States v. Bowen, 423 F.2d 266, 267 (9th Cir. 1969).
5
We have considered, but found without merit, other points raised by appellant.
6
Affirmed.
|
{
"pile_set_name": "FreeLaw"
}
|
A flood of renewable capacity in the European Union is forcing member countries to consider grid upgrades that offer a more substantial power supply management role to distribution system operators.
With the final numbers now in, European grid operators and regulators report that almost 90% of all new power coming online in the European Union (EU) last year came from renewable sources. This trend is anything but over: Of the 24.5 GW of new capacity built across the EU in 2016, 21.1 GW—or 86%—was from wind, solar, biomass, and hydro, eclipsing the previous record of 79% in 2014.
Also, for the first time, wind farms now account for more than half of installed capacity in the region, according to data from trade group WindEurope. Wind energy has now also overtaken coal as the EU’s second largest source of power capacity after natural gas—though due to the technology’s intermittent nature, coal still meets more of the bloc’s actual electricity demand. Gas, because of its expense, remains primarily as a back up to maintain grid integrity.
1. A gust for the grid. The influx of renewable power in Europe may require an expansion of its grid and upgrades to its distribution networks, along with measures such as smart grid technologies and demand response. Courtesy: Tennet
Not surprisingly, Germany installed the most new wind capacity in 2016. However, France, the Netherlands, Finland, Ireland, and Lithuania all set new records for wind farm installations too. Although total capacity added was 3% lower than in 2015, a surge in offshore wind farms—which are twice as expensive as those built on land—saw investment from nearly every European country (including in Britain, which has opted to leave the EU), hitting a record €27.5 billion.
Integrating Renewables
With renewables booming, one of the biggest challenges for European distribution systems operators (DSOs) is integrating all these new intermittent sources that are coming online, which often flood into existing grid systems. According to the European Commission (EC), to keep up the pace with the estimated renewable expansion, €400 billion worth of new distribution network investments will be needed across Europe by 2020.
Going forward, if electric vehicles catch on in Europe (so far they have not), even more complexity could be added to the situation. Also challenging the status quo: As the price of solar continues to fall and small-scale installs remain popular, increasing amounts of “prosumers” have also come into the market.
These developments are birthing new, innovative, tech-heavy smart meters and smart grids, which are now being built across Europe. Throughout the region, the distribution grid will have to rapidly adapt as the wind gush is upending how energy needs to flow.
Overall, integration of decentralized sources along with robust demand-side management/response may allow DSOs to become central platforms for the energy transition by coordinating between energy producers and consumers—attempting to play something of a neutral referee. DSOs have to straddle the boundaries between existing and emerging fields, particularly flexibility, energy storage, data handling, and analysis, providing real-time information as well as helping to analyze where markets are heading—all, perhaps, while subtly helping involved parties move in concert.
A Flurry of Mergers and Acquisitions
Over the next three years, many of Europe’s top utilities are planning to invest tens of billions of euros to catch up with the green energy revolution. One simple growth path is through acquisitions. This is driving a flurry of takeovers by tech and engineering firms of niche, smart-energy innovators, according to reports by Reuters and other news organizations.
“Everywhere in the supply chain of power there is disruption going on,” said Bruce Jenkyn-Jones, co-head of listed equities at Impax Asset Management, which focuses on investments in environmental markets and resource efficiency.
The massive volume of renewables coming online has translated into a need for intelligent information technology (IT) systems that can balance out demand and supply swings while meeting energy and carbon emissions targets. Industrial and technology providers like Siemens, ABB, General Electric (GE), and others have become key players in this transformation, often working in close partnership with grid developers and operators to manage the grid’s evolution.
Merger and acquisition activity is also moving forward in storage and smart meter providers, both of which are key sectors in securing access to customers and, more importantly, their data, to help the utilities tailor their power purchases and save costs. According to Reuters, three major German meter makers—Techem, Ista, and Qundis—are up for sale, and in France, energy conglomerate Total recently purchased battery maker Saft Groupe for €950 million.
Across the pond, Oracle took over Opower, a maker of utility software, in a bid to reap key markets, especially emerging European ones. Other niche players being showcased include U.S. smart meter maker Itron, which relies on Europe for more than a third of its sales.
Utilities Prioritize Network Upgrades
Together, major European utilities may spend more than €40 billion over the next three years to upgrade their networks, according to ongoing investment plans. This includes replacing old cables, buying new smart meters, and putting new IT in place. These investments follow almost a decade of steep losses for major power producers like E.ON and RWE, which hemorrhaged many billions of euros in value as their fossil fuel systems were marginalized by increasingly less-expensive renewables.
Also in France, national utility EDF is moving quickly to install smart meters as that nation looks to increase renewable systems. The country’s future, however, is clouded by upcoming elections that could change its energy direction back towards nuclear. But if a left-leaning coalition government comes into office, then renewables will likely continue to grow there as well.
Goldman Sachs estimates power producers might invest more than €60 billion by 2025 to digitalize their grids. In Europe, big conglomerates, including ABB and Siemens, are so far seen as the leading integrated providers of smart grid technology and hardware, simply because they already cover a wide range of sectors, including IT.
“Sometimes it is hard to draw the line between IT and industrials. A company like Siemens is a bit of both,” Frederic Fayolle, senior fund manager at Deutsche Asset Management, said. GE in November bought Bit Stew Systems and Wise.io to expand its platform for industrial internet applications, which connect big machines such as power plants to databases and analytical software.
Utilities, however, will likely stay on the sidelines rather than become developers themselves. “I doubt that a utility can compete with Siemens, GE, nor with Google and Apple,” said Oskar Tijs, senior investment analyst at NN Investment Partners. “On the grid side, the utilities will be mostly clients of technology companies.”
New Technology, New Partnerships: E.ON and Siemens
Meanwhile, a partnership between E.ON and Siemens is breaking new ground as it develops smart metering technology in Germany. The collaboration forms part of E.ON and Siemens’ efforts to contribute towards Germany’s plan to replace, process, and systemically convert a large numbers of meters. Additionally, Siemens is providing its smart grid platform EnergyIP for integration with E.ON’s existing EniM program.
According to Siemens, the integration will allow E.ON network operators ease and optimal integration of smart metering infrastructure into IT systems to allow meter data acquisition for improved grid management and customer billing. “By integrating EnergyIP into our systems, we will be able to offer our customers the best possible advice and support with regard to the smart meter rollout,” said Paul-Vincent Abs, metering managing director at E.ON.
Looking ahead, “with implementation of our EnergyIP smart grid application platform, E.ON Metering is prepared for future rollout scenarios as a metering point operator and, when it comes, to smart meter gateway administration,” said Ute Redecker, head of the Siemens Digital Grid business unit in Germany. The integration will allow the analytics application to utilize various big data options for administering smart meter gateways and meter data processing for external market participants on the German market.
Siemens hopes the development will yield a rich treasure of finely granulated data for functionalities including energy theft and overloaded distribution equipment detections through grid load analysis, grid incident analysis, and end-customer consumption load analysis. In other words, Siemens will have a clear insight not only into E.ON’s grid, but a much better platform on which to build going forward, as other grid operators tackle the demands of a renewable energy backbone. The inclusion of the big data option will also allow for the creation of load forecasts for different levels in the distribution grid as well as an analysis of distributed energy resources.
EU Steps Up Funding for New Energy Infrastructure
In February, the EC announced that it would fund almost half a billion euros in selected power, smart grid, and gas energy infrastructure projects through the Connecting Europe Facility, the EU’s funding scheme for infrastructure. An ongoing project—valued at a total €5.35 billion—has been allocated to trans-European energy infrastructure between 2014 and 2020. This year, more than a third of the selected projects are in the energy sector, winning support of €176 million.
Another EC–funded project is a €40.25 million investment in Tennet’s “SuedLink” smart grid system. The project is planned to connect wind power generated in breezy northern Germany with consumer and industrial centers in the nation’s economically booming south. To do this and get buy-in from conservatives in Bavaria annoyed by the “unsightliness” of so many new wind towers, the project will require over 700 kilometers of new, buried underground, high-voltage cables—the first system of its kind on such a large scale. Despite everything else Germany has done, this represents the nation’s largest energy infrastructure project to date.
One of the challenges the nation faces is that Germany has an ambitious objective: to get 80% of its power supply from renewables by mid-century. To do that, not only do existing power grids have to be upgraded, but also political compromises have to be made. SuedLink satisfies what has been something of an imbroglio that has gridlocked future renewable development in the nation.
The north has offshore wind, but the south has the load that needs it. Sitting in between have been a lot of folks already shocked by the transformation of Germany’s landscape to accommodate the Energiewende. New transmission lines across conservative Bavaria have long been politically unpopular and local opposition has been a serious bottleneck.
German energy companies are pouring resources into digitalization across the entire value chain of the industry, writes Bernward Janzing in the Handelsblatt, a German language business newspaper. Both old, established companies and new, creative start-ups are jumping on board, he writes. “Digitalisation is one of the biggest topics in the energy industry,” Janzing quoted Stefan Kapferer, head of the German Association of Energy and Water Industries, as saying.
Interflex: Demonstrating a Grid Revolution
Another new initiative announced this year, again with key participation by E.ON, is the new European smart grid project, InterFlex, which aims to explore new ways of using various forms of flexibilities to optimize electric power systems on a local scale.
Coordinated by the University of Aachen, the project focuses on the interoperability of systems, the replicability of solutions, and on the possible resulting business models. Twenty industrial partners, including utilities, manufacturers, and research centers, are involved in the project, which has a budget of €23 million and seeks to apply smart grid technologies at an industrial scale to achieve a high penetration of renewables.
Part of the biggest current EU Research and Innovation program, Horizon 2020, InterFlex is scheduled to run three years. During that time, project participants will investigate the interactions between flexibilities provided by energy market players and the distribution grid, with a particular focus on energy storage, smart charging of electric vehicles, demand response, islanding, grid automation, and the integration of different types of energy carriers (gas, heat, electricity). Project findings will allow consortium members to replicate the demonstrated solutions and business models. Their overarching goal is to further develop advanced monitoring, local energy control, and flexibility services at the EU level.
Six projects (Figure 2) are slated for demonstration by five European distribution companies—CEZ Distribuce (Czech Republic), Enedis (France), E.ON (Sweden), Enexis (The Netherlands), and Avacon (Germany). The demonstration project in Germany will be implemented by Avacon, a German grid operator belonging to the E.ON group. Avacon will manage a centralized platform of flexibilities and distributed energy resources in a rural area to use energy only where it is generated in order to relieve congestion on the distribution grid.
2. A continental experiment. InterFlex, a three-year-long European smart grid demonstration, kicked off in January 2017, seeking to investigate interactions between flexibilities provided by energy market players and the distribution grid. It will involve six demonstrations hosted by five European distribution companies as shown here. Courtesy: Trialog
E.ON subsidiary Sverige, will undertake two projects. One is in Malmö, Sweden, designed to study energy integration, using the heat inertia of buildings as a flexibility measurement to achieve more optimized and environmentally friendly production throughout a distributed energy system. The other is in southern Sweden. It will explore ways of operating part of a distribution grid on a stand-alone basis (“islanding”).
NiceGrid—a demonstration project located in Nice, France—will be spearheaded by Enedis. It is pioneering peer-to-peer energy exchanges between solar photovoltaic installations and storage suppliers, allowing the integration of intermittent renewable energy into the distribution grid to be maximized.
CEZ Distribuce will lead another project to use grid automation and energy storage to integrate decentralized renewable energy within the distribution grid. Smart functions for electric vehicle charging stations will also be developed as a source of flexibility.
Finally, the Enexis demonstrator project, in Eindhoven, Netherlands, proposes a multiservice approach to harnessing local flexibilities. It will use stationary storage and electric vehicle batteries, and involve distribution system operators, charge point operators for electric vehicles, and other relevant parties. ■
—Lee Buchsbaum (www.lmbphotography.com), a former editor and contributor to Coal Age, Mining, and EnergyBiz, has covered coal and other industrial subjects for nearly 20 years and is a seasoned industrial photographer.
Q:
What does 'ido-everywhere' actually do?
When reading about ido, we are instructed to add this to .emacs:
(ido-everywhere t)
The doc says that it will "Toggle use of Ido for all buffer/file reading."
What does it mean? Everything seems to work whether ido-everywhere is set or not.
A:
ido-everywhere function
(define-minor-mode ido-everywhere
"Toggle use of Ido for all buffer/file reading.
With a prefix argument ARG, enable this feature if ARG is
positive, and disable it otherwise. If called from Lisp,
enable the mode if ARG is omitted or nil."
:global t
:group 'ido
(remove-function read-file-name-function #'ido-read-file-name)
(remove-function read-buffer-function #'ido-read-buffer)
(when ido-everywhere
(add-function :override read-file-name-function #'ido-read-file-name)
(add-function :override read-buffer-function #'ido-read-buffer)))
It overrides read-file-name-function (https://www.gnu.org/software/emacs/manual/html_node/elisp/Reading-File-Names.html), read-buffer-function (ftp://ftp.gnu.org/old-gnu/Manuals/elisp-manual-20-2.5/html_chapter/elisp_20.html).
You can see the effect when you try File->Open File from the menu bar.
With ido-everywhere disabled, it opens a graphical dialog, but with ido-everywhere enabled it shows the file list in the minibuffer (ido style).
The effect can be seen wherever these overridden functions are used.
Orienting system
The brain pathway that orients visual attention to a stimulus is referred to as the orienting system. There are two main types of visual orientation: covert (exogenous), which occurs when a salient environmental change causes a shift in attention, and overt (endogenous), which occurs when the individual makes a conscious decision to orient attention to a stimulus. During a covert orientation of attention, the individual does not physically move, and during an overt orientation of attention the individual's eyes and head physically move in the direction of the stimulus.
Information acquired through covert and overt visual orientations travels through the norepinephrine system, indirectly affecting the ventral visual pathway. The four specific brain regions involved in this process are the frontal eye field, the temporoparietal junction, the pulvinar, and the superior colliculus. The frontal eye field is involved in goal-driven eye movements and can inhibit stimulus-driven eye movements. The temporoparietal junction appears to be involved in location-cueing tasks, and individuals with lesions in this area have difficulty with attentional reorienting. The pulvinar is located posterior to the thalamus, and its role in the orienting system is still being researched; however, it is thought to be involved in covert orienting. Finally, the superior colliculus provides information about the location of the stimuli to which attention is directed.
References
Category:Visual system
Category:Central nervous system
Category:Attention
Q:
React Performance Issues in Firefox?
I'm experiencing some performance issues with a react application that I developed. These issues specifically (or most notably) occur with Firefox (both FF developer 77.0b7 and FF 76.0.1).
When using this application in Firefox, CPU usage gets extremely high, and my fans start spinning up to very high speeds. I get about 15-19fps in firefox according to the performance tools in FF. I get roughly 60fps in Chrome and Safari.
These issues occur when I begin typing into the input field, and get worse as the input gets longer (which makes sense)
The application is available here:
https://text-to-aura-generator.netlify.app/
Source code available here: https://github.com/paalwilliams/Text-to-Aura/tree/master/src
I'm almost certain that this is something I'm doing incorrectly, or that I've written the code inefficiently, but that isn't necessarily supported by the stark performance difference between browsers. Is Chrome just that much better at handling React/constant rerenders?
I know that this is a broad question, but I honestly don't understand what is happening here, or necessarily how to troubleshoot it beyond the developer tools. Any input or thoughts would be greatly appreciated.
A:
The problem is your application is rendering too fast. In your particular case, there a few ways to improve that.
Every time you update the state, React needs to re-render your application, so updating the state within a loop is usually a bad idea.
Also, you are using useState 3 times, but only colors should be there, as App actually needs to re-render to reflect the changes there. The other two pieces of state (text and hex) are only being used to pass data from the handleChange to the callback inside useEffect.
You can restructure your code to:
Avoid updating the state within a loop.
Use a simple variable instead of state.
Use useCallback to define a function with that logic that is not re-created on each render, as that forces TextInput to re-render as well.
Throttle this callback using something like this:
import { useCallback, useEffect, useRef } from 'react';
export function useThrottledCallback<A extends any[]>(
callback: (...args: A) => void,
delay: number,
deps?: readonly any[],
): (...args: A) => void {
const timeoutRef = useRef<number>();
const callbackRef = useRef(callback);
const lastCalledRef = useRef(0);
// Remember the latest callback:
//
// Without this, if you change the callback, when setTimeout kicks in, it
// will still call your old callback.
//
// If you add `callback` to useCallback's deps, it will also update, but it
// might be called twice if the timeout had already been set.
useEffect(() => {
callbackRef.current = callback;
}, [callback]);
// Clear timeout if the components is unmounted or the delay changes:
useEffect(() => window.clearTimeout(timeoutRef.current), [delay]);
return useCallback((...args: A) => {
// Clear previous timer:
window.clearTimeout(timeoutRef.current);
function invoke() {
callbackRef.current(...args);
lastCalledRef.current = Date.now();
}
// Calculate elapsed time:
const elapsed = Date.now() - lastCalledRef.current;
if (elapsed >= delay) {
// If already waited enough, call callback:
invoke();
} else {
// Otherwise, we need to wait a bit more:
timeoutRef.current = window.setTimeout(invoke, delay - elapsed);
}
}, deps);
}
If the reason to use useEffect is that you were not seeing the right values when updating colors, try using the version of setState that takes a callback rather than the new value, so instead of:
setColors([...colors, newColor]);
You would have:
setColors(prevColors => [...prevColors, newColor]);
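To see concretely why the functional updater form matters, here is a framework-free sketch. Note that `makeState` below is a deliberately simplified stand-in for a state setter, not React itself — it only illustrates why building on a captured snapshot loses updates while the functional form does not:

```javascript
// Minimal stand-in for a state container -- NOT React, just an illustration
// of why the functional updater form matters when updates build on prior state.
function makeState(initial) {
  let state = initial;
  return {
    get: () => state,
    set: (update) => {
      state = typeof update === 'function' ? update(state) : update;
    },
  };
}

const colors = makeState([]);

// Anti-pattern: spreading a stale snapshot captured before the loop.
// Every iteration starts from the same (empty) snapshot, so only the
// last update survives.
const stale = colors.get();
['red', 'green'].forEach((c) => colors.set([...stale, c]));
console.log(colors.get()); // ['green']

// Functional form: each update receives the latest state.
colors.set([]);
['red', 'green'].forEach((c) => colors.set((prev) => [...prev, c]));
console.log(colors.get()); // ['red', 'green']
```

In real React the stale-snapshot problem shows up whenever a closure captures `colors` and then calls `setColors` several times before a re-render.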
2. As is customary around this time of year, the bubble is essentially a Rorschach test. Nearly every team has a middling RPI, SOS, 2-3 good wins, and a blah loss. I won't argue adamantly for any of my bubble teams (or against teams that just missed) - that will mostly be sorted out over the next 8 weeks, anyways.
3. One of my largest discrepancies from the Bracket Matrix (bracketmatrix.com) is Alabama - they have seven Column 1 + 2 wins, including 2-2 vs Column 1, more than any other team near the cutline.
4. If this bracket were the real thing, I'd imagine there would be more than a couple pool entrants who would take three or even four 4-seeds to the Final Four. Hell of a (storied) group with Kansas, Xavier, Kentucky, and North Carolina.
5. On the other hand, this group of 12- and 13-seeds looks positively frightening. Of course, most of them will probably lose in their conference tournaments, as is annual custom, and I'll cry softly into a pillow as the auto-bid pool dilutes itself.
6. I still have Notre Dame clinging tenuously to their spot in the bracket - they've maintained enough respectability despite currently missing Matt Farrell and Bonzie Colson (that UNC win would have been sweet, though). Minnesota, on the other hand, has not, having just gotten blitzed by a combined 57 points in their last two games (and they've lost their last three).
That's all for now, we hope to have periodic updates to this in the lead-up to Selection Sunday.
Molecular cloning, characterization and expression analysis of grass carp (Ctenopharyngodon idellus) NF45 (ILF2) cDNA, a subunit of the nuclear factor of activated T-cells (NF-AT).
NF45 (ILF2) and NF90 (ILF3) regulate the IL-2 gene transcription via interaction with the antigen receptor response element. Much work on NF45 has been done in humans and other mammals, while little has been done in fish. In the present study, we have cloned and characterized the full-length cDNA of NF45 in grass carp (Ctenopharyngodon idellus). The grass carp NF45 cDNA of 1563bp contains a short 5'UTR of 24bp, a 3'UTR of 375bp and an open reading frame of 1164bp coding for a protein of 387 aa with a predicted molecular mass of 42.8kDa. The encoded protein shares 86.3-96.7% identities with other homologues. RT-PCR was optimized to estimate the expression level of NF45 in grass carp. The results showed that NF45 is constitutively expressed in most selected tissues, including head kidney, spleen, heart, brain, liver, and gill, although low levels were observed in spleen, liver and gill. The ubiquitous expression of NF45 is consistent with a postulated role in gene regulation at the level of transcription. Stimulating the fish with PHA significantly up-regulated the expression of NF45 in most tissues examined, which potentially indicated that NF45 was involved in the immune responses triggered by PHA.
// 20.2.2.7 Math.atanh(x)
var $export = require('./$.export');
$export($export.S, 'Math', {
atanh: function atanh(x){
return (x = +x) == 0 ? x : Math.log((1 + x) / (1 - x)) / 2;
}
});
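Outside the core-js module system, the same logarithmic identity can be checked directly. This standalone sketch (not part of core-js) verifies that the formula round-trips through `Math.tanh`:

```javascript
// atanh via the same logarithmic identity: atanh(x) = ln((1+x)/(1-x)) / 2
function atanh(x) {
  return (x = +x) == 0 ? x : Math.log((1 + x) / (1 - x)) / 2;
}

console.log(atanh(0));              // 0 (the early return preserves signed zero)
console.log(atanh(0.5));            // ~0.5493
console.log(Math.tanh(atanh(0.5))); // ~0.5 -- round-trips through tanh
```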
Clinical research evidence and clinical practice.
An informal survey among colleagues turns up the far-from surprising information that the average patient contact involves at least three or four judgements and decisions: judgements about aetiology and prognosis, decisions about diagnosis and therapy, and sometimes discussions about costs and side-effects. And so it goes: 20 patients a day, 60 decisions; 100 patients a week, 300 decisions. Who makes these decisions, the doctor or the patient? What factors govern the final choice in each case? Evidence-based medicine (EBM) is the factor getting a lot of attention these days, but clinical decisions depend on many different elements. Good doctors have always made use of experience and judgement as well as the best available evidence.
Q:
make app universally compatible for device with and without camera
I face a strange problem,
My application works perfectly on devices with or without a camera; only a few functionalities are not available if you don't have a camera.
After uploading my app to the play store, the play store excluded some devices without a camera in which the app actually works fine!
By using this permission:
<uses-permission android:name="android.permission.CAMERA"/>
the Play Store automatically excludes devices without a camera.
Has anybody faced similar problems?
Sorry if this is a duplicate (I hope it is not).
A:
From the docs:
In some cases, the permissions that you request through <uses-permission> can affect how your application is filtered by Google Play.
If you request a hardware-related permission — CAMERA, for example — Google Play assumes that your application requires the underlying hardware feature and filters the application from devices that do not offer it.
To control filtering, always explicitly declare hardware features in <uses-feature> elements, rather than relying on Google Play to "discover" the requirements in <uses-permission> elements. Then, if you want to disable filtering for a particular feature, you can add an android:required="false" attribute to the <uses-feature> declaration.
So, just add this to your manifest:
<uses-feature android:name="android.hardware.camera" android:required="false"/>
Katikkiro Charles Peter Mayiga has released his third book titled Uganda: 7-Key Transformation Idea where he offers solutions to transform Uganda and Africa generally from the third world to the first one.
His other two books are King on the Throne; the Story of the Restoration of the Kingdom of Buganda, and Buganda ku Ntikko. Both these two books were centred on Buganda Kingdom. The new book looks at Uganda and offers solutions to its development challenges.
Written in a conversational tone with tons of humour and everyday examples, Katikkiro Mayiga identifies the rule of law, good governance, health, education, agriculture, public infrastructure and the settlement of the Buganda question as key areas that need to be tackled to transform the country.
“This book is a conversation I wish to share with you because, together, we must find solutions to the endemic problems of ignorance, poverty and disease in our country, Uganda,” Mayiga writes. “It contains my views on laws, politics, institutions, management, social issues, culture, and economics, all of which, and more, can turn into tools of transformation,” he adds.
On the rule of law, Mayiga says that “the community that does not have a reliable and effective judicial system is always restive and can never enjoy a peaceful environment.”
He adds that the precursor to economic stimulation and growth is the belief that the state is possessed of institutions that can prevent lawlessness or that can address injustices whenever they occur. A serious investor will be keen to invest in a country where he knows rules are respected and they apply to everybody.
Mayiga further argues that the emancipation of Uganda will be through agriculture, animal husbandry, and commerce as the bedrock to a sustainable economy. With good governance, other areas like tourism and industrialisation will fall in place to propel the growth further ahead. “The people must be educated, to be able to appreciate this, but also to implement the requisite plans,” he explains.
Transformation of a community requires that all efforts must be geared towards a singular objective or focus, he says. “If our comparative advantage is agriculture – which it actually is – then enhancement of the citizens’ capital base must be related to agriculture. In that sense, the ‘assistance’ that is envisaged under the SACCO schemes must be to promote particular agro-based activities,” he explains.
Marketing is identified as another tenet that spurs economic growth. “It broadens the public’s awareness for any item, hence boosting sales. Good sales translate into earnings and growth. If anyone realises values from whatever they do, they will do it over and over again,” Mayiga argues.
The major problem is the absence of a systematic marketing mechanism, he argues. A farmer with a 10-bag harvest cannot deliver them on his own, let alone knowing about the availability of demand for them elsewhere.
Mayiga identifies federalism as key in solving the Buganda question and lead the country to the true path of transformation. “Federalism decongests the centre, and enables each region to pick its development priorities but within the overall national policy guidelines. Today, unlike in the past when regions had a bigger stake in management of public issues, development is concentrated in and around Kampala,” he writes.
The book is already available at Aristoc Bookshop, Bulange Building Reception, Buganda Shop at Muganzirwazza, and Majestic Brands offices at Bulange, Mengo at Shs40,000.
For further information, contact Denis Jjuuko on 0758111409.
Q:
LocationRequest constructor is marked as internal
I'm trying to set up location updates in my Android app using com.google.android.gms:play-services-location:12.0.0, but I'm getting the following error:
LocationRequest constructor is marked as internal and should not be accessed from apps
My location updates request looks like this:
locationClient.requestLocationUpdates(
new LocationRequest()
.setInterval(5000)
.setFastestInterval(1000)
.setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY),
locationCallback,
null
);
I have followed the docs and the example, which do it the same way. If I'm not supposed to call new LocationRequest(), then what is the proper way to do it?
A:
Use the static method LocationRequest.create().
LocationRequest locationRequest = LocationRequest.create();
locationRequest.setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY);
locationRequest.setInterval(5000);
locationRequest.setFastestInterval(1000);
A:
The LocationRequest initialization procedure has changed in the latest Google Play Services dependencies (> 12.0.0). Now you can use the static create() method to initialize it, e.g.
LocationRequest request = LocationRequest.create();
---
abstract: 'Being able to automatically and quickly understand the user context during a session is a main issue for recommender systems. As a first step toward achieving that goal, we propose a model that observes in real time the diversity brought by each item relatively to a short sequence of consultations, corresponding to the recent user history. Our model has a complexity in constant time, and is generic since it can apply to any type of items within an online service (*e.g.* profiles, products, music tracks) and any application domain (e-commerce, social network, music streaming), as long as we have partial item descriptions. The observation of the diversity level over time allows us to detect implicit changes. In the long term, we plan to characterize the context, *i.e.* to find common features among a contiguous sub-sequence of items between two changes of context determined by our model. This will allow us to make context-aware and privacy-preserving recommendations, to explain them to users. As this is an on-going research, the first step consists here in studying the robustness of our model while detecting changes of context. In order to do so, we use a music corpus of 100 users and more than 210,000 consultations (number of songs played in the global history). We validate the relevancy of our detections by finding connections between changes of context and events, such as ends of session. Of course, these events are a subset of the possible changes of context, since there might be several contexts within a session. We altered the quality of our corpus in several manners, so as to test the performances of our model when confronted with sparsity and different types of items. The results show that our model is robust and constitutes a promising approach.'
author:
-
-
-
title: 'Toward a Robust Diversity-Based Model to Detect Changes of Context'
---
User Modeling; Diversity; Context; Real-Time Analysis of Navigation Path; Recommender Systems
Introduction
============
Despite their growing success in industry (e-commerce, social networks, VOD, music streaming platforms) and their impressive predictive performances [@Simpson:2014], two major user concerns frequently show up about recommender systems in online services. First, people are more and more preoccupied by privacy issues. To maintain a good trust level, we should thus provide models and algorithms that offer the best compromise between quality of recommendations, ethics as regards data collection [@Cranor:2005], and users’ policy [@Knijnenburg:2013]. Second, recommendations are still too often made out of context. Recommending is not only a question of maximizing the accuracy, but also providing relevant items at the right time in the good manner [@Jones:2010]. This is the reason why the literature about context-aware recommender systems is increasing fast [@Hariri:2014].
Starting from these observations, we wondered what could possibly be the necessary and sufficient data to understand as quickly as possible the user context, and then to adapt recommendations. As regards privacy, Cranor suggests to favor methods where personal data are transient (*i.e.* deleted after the task or the session) [@Cranor:2005]. The system should also rely on item profiles, rather than user profiles. Thus, it is reasonable to study the short history of recently consulted items, and see what are the common features or differences that could explain or characterize the current user context. This line of reasoning implies that we have a precise description of each item available in the online service, or at least an exhaustive set of description attributes like those we have in product catalogs, but for every type of items (music tracks, social network profiles of users and companies, …).
Besides these considerations, Castagnos *et al.* took an interest in the role of diversity within the user decision-making process [@Castagnos:2010]. They provide two interesting conclusions within the frame of e-commerce applications. On one hand, the diversity in recommender systems seems to significantly improve user satisfaction, and is correlated to the intention to buy. On the other hand, the user need for diversity evolves over time, and should be carefully controlled to provide the correct amount of diversity and novelty. Bringing too much diversity risks to transform recommendations into novelty. Recent works confirmed that satisfaction is negatively dependent on novelty [@Ekstrand:2014], and badly-used diversity can lead users to mistrust the system [@Castagnos:2013]. Finally, in [@Castagnos:2010], we showed that recommender systems should increase the diversity level at the end of a session to make users more confident in their buying decisions. Yet, predicting when the session will end is not an easy task.
This conclusion led us to ask if we could take the opposite view: would it be possible to monitor the diversity level within user sequences of consultations over time, and find connections between variations of diversity and changes of context? Through an exploratory research, we proposed the first model that measure the diversity brought by each consulted item, relatively to a short user history [@LHuillier:2014]. We showed that variations of diversity often match with ends of session. However, these conclusions were made *a posteriori*, *i.e.* by analyzing the whole sequence of consultations for each user, and then knowing how each session ended and how the next session started. Furthermore, our model was built by considering that all consulted items were of the same type. As an example, if the active user is listening to music, it should be possible to measure the diversity between each pair of items.
In this paper, we want to bring this model a step further. First, we aim at investigating if it allows us to predict ends of session in real time, without knowing what happens next. Then, we will test the robustness of our model, by reconsidering our strong hypothesis according to which we always have a complete description of items. We will thus evaluate the performances of our model when we have sparse data about items. At last, we will extend our model to a situation where the active user consults different types of items (*e.g.* music tracks, social network profiles, ...). In this case, it is not always possible to measure the diversity between items, since their attributes may be different. Thank to a corpus of more than 210,000 consultations, we show that the performances of our system remain stable up to 60% of missing diversity measures.
The rest of this paper is organized as follows: Section \[related-work\] offers an overview of the literature as regards diversity and context in recommender systems. Section \[model\] is dedicated to the presentation of our model and our hypotheses about its robustness to sparsity and diversification of types of items. Section \[experiment\] presents and discusses its performances.
Related Work
============
Diversity in Recommender Systems {#diversity}
--------------------------------
Diversity has long been proven to improve the interactions between users and recommender systems [@McGinty:2003]. This dimension is considered in two different ways in the literature. Some analyze the impact of diversity on users’ behavior, while others integrate diversity in machine learning algorithms of recommender systems.
Diversity has first been defined by Smyth and McClave [@Smyth:2001] as the opposite dimension to similarity. More precisely, this measure quantifies the dissimilarity within a set of items. Thus, diversifying recommendations consists in determining the best set of items that are highly similar to the users’ known preferences while reducing the similarity between those recommendations. A classification of diversity has been proposed by Adomavicius and Kwon [@Adomavicius:2012]. It distinguishes individual diversity and aggregated diversity, depending on if we are interested in generating recommendations to individuals, or to groups of users. Here, we focus on individual diversity.
Many works focus on controlling the diversity level brought by recommender systems. Diversity was initially dedicated to content-based algorithms, especially in the case we have attribute values for each item. We distinguish 3 practices: we can compute the diversity between two items [@Smyth:2001], the diversity within a set of items [@Ziegler:2005], or the relative diversity brought by a single item relatively to a set of items [@Smyth:2001] (see Equation \[eq:reldiv\]). These metrics have then been used in content-based filtering to reorder the recommendation list, according to a diversity criterion [@Bradley:2001; @Zhang:2008]. In addition to these content-based algorithms, some works have focused on a way to integrate diversity in collaborative filtering [@Ziegler:2005; @Said:2012].
In parallel to the integration of diversity in recommender systems, many user studies took interest in the role and perception of diversity. McGinty and Smyth showed that diversity improves the efficiency of recommendations [@McGinty:2003]. Many works showed that diversity is perceived by users [@Zhang:2008; @Lathia:2010; @Jones:2010], and positively correlated to user satisfaction [@Castagnos:2013; @Ekstrand:2014]. Nevertheless, it came out that the user need for diversity evolves over time and diversity should not be integrated in the same way at each recommendation stage [@McGinty:2003; @Castagnos:2010]. At last, recent works focus on how the amount of diversity should be provided by recommender systems [@Hasan:2014].
Contrary to this literature, we do not want to adapt the amount of diversity in recommendations. We aim at observing the natural diversity level within users’ navigation path to infer their context. Thus, the following subsection will be dedicated to this notion of context.
Context-Aware Recommender Systems {#context}
---------------------------------
Integrating the context into the recommendation process is an increasing research field known as `CARS`, an acronym for Context-Aware Recommender Systems. In their state of the art, Adomavicius *et al.* present several approaches, like contextual modeling and pre/post-filtering methods, for using contextual factors in order to adapt recommendations to the users' context [@Adomavicius:2011b]. Contextual factors are all the information which can be gathered and used by a system to determine and characterize the current context of the user. For example, a system can use the location of the user to adapt the recommendation [@Kaminskas:2013]. The most important drawback of these kinds of systems lies in the fact that they are invasive, by using personal information, and most often require a complex representational model. For example, such systems can use ontologies to determine the user context [@Chen:2014]. Yet, such an ontology cannot be transferred from one domain to another. As Adomavicius and Tuzhilin explain in their conclusion, “most of the work on CARS has focused on the representation view of the context and the alternative methods have been underexplored” [@Adomavicius:2011b]. This fact has also been highlighted by Hariri *et al.*, who have developed a `CARS` based on users' feedback on items presented in an interactive recommender system [@Hariri:2014]. Even if this approach dynamically adapts to changes of context, it requires user effort to obtain the users' feedback on which the system is based. We thus aim at proposing a similar method having the same objectives, but more transparent for users, by relying on item profiles and users' navigation path. In the following, we propose to distinguish two different types of context: explicit context and implicit context.
Explicit context is close to the definition of contextual factors, that is to say physical context, social context, interaction media context and modal context are different kinds of explicit context [@Adomavicius:2011b]. Conversely, implicit context will refer to the common characteristics shared by the consulted items during a certain time lapse. The motivation behind this notion is that detecting implicit context does not increase user involvement, enhances the privacy and can be used in any application domain without heavy modifications.
Model and Hypotheses {#model}
====================
Overview
--------
As explained above, the role of our model is to monitor the diversity level within users’ navigation path over time, and then derive their implicit context. Concretely, each time a user consults a new item, we compute the added value of this item – called `target` – relatively to a short history (*i.e.* the $k$ previously consulted items) as regards to diversity. To provide a better understanding of our model, we will rely on an example shown in Figure \[fig:dance\]. Let us imagine an online service that allows users to listen to music, and to browse different kinds of profiles like we can do on social networks (profiles of other users, profiles of artists, information about record companies and so on). For each user, we can then pay attention to his/her sequence of consultations. In this example, we understand that there might be several contexts within a session, and several ways to classify them.
![Overview of our model.[]{data-label="fig:dance"}](images/dance.eps){width="50.00000%"}
One strength of our model is that it allows us to measure in real time the diversity brought by each item, for each attribute independently, and for the whole set of attributes. Thus, it can be configured to detect and characterize various kinds of implicit contexts, or to cut the navigation path at some points where diversity reaches the highest levels (*i.e.* what we called the changes of implicit context). In the rest of this article, we will give meaning to these changes of implicit context, by verifying that they match with some events such as ends of session in many cases. But, of course, there can be several successive implicit contexts, and several changes of context, within a session. Let us notice that, in the case where we want to force the detection of events and to optimize the characterization of the implicit context according to user’s expectations, all we have to do is to complete a learning phase to find the optimal weight of each attribute within our computation of the diversity over time. The quality of our model has been demonstrated in [@LHuillier:2014]. However, the purpose of this paper is to test the robustness of our model in the case where we have sparse data within item descriptions, that is to say detecting the same changes of implicit context with less data. We see two different scenarios which can explain sparse data. Either we have a single type of items (for example music tracks), but an incomplete description of each item, which is often the case in real applications. Or the users’ navigation path are made of different types of items, and there may be a partial overlap of attributes between items. In Figure \[fig:dance\], common attributes between items are displayed on the same line.
Formalism
---------
Before evaluating the robustness of our model, we present it more formally and introduce some notations. We call $U$ ={*$u_{1}$, $u_{2}$,..., $u_{n}$*} the set of users; $u$ refers to the active user. $I$ ={*$i_{1}$, $i_{2}$,..., $i_{m}$*} is the whole set of consulted items. The recent user history of size $k$ at time $t$, called $C_{k,t}^{u}$, can be written as a sequence of items $<c_{t-k}^{u}$, ..., $c_{t-2}^{u}$, $c_{t-1}^{u}>$. At last, $A_i$ = {*$a_{1}$, $a_{2}$,..., $a_{h}$*} is the set of attributes of an item $i$. Let us note that each consulted item, such as $c_{t}^{u}$, refers to an item $i$ of the set $I$.
Our model is a Markov model. At each time-step (*i.e.* each time the active user consults a new item), it computes the relative diversity brought by the newly consulted item $c_{t}^{u}$ relative to $C_{k,t}^{u}$. To do so, we took strong inspiration from the formula proposed by Smyth and McClave [@Smyth:2001] (see Equation \[eq:reldiv\]). The only difference here is that we count the number of times $s$ when the similarity between the target item $c_{t}^{u}$ and one of the items in the history $C_{k,t}^{u}$ can be computed. As the active user can browse different types of items, there may be situations where there are no common attributes between two items, and no way to compute the similarity between this pair of items (*i.e.* it returns NaN). Consequently, $s$ is included in $[0;k]$. [$$\begin{gathered}
\label{eq:reldiv}
\raggedright{
\scriptstyle RD(c_{t}^{u},C_{k,t}^{u})~=
\begin{cases}
&\scriptstyle ~\text{NaN~if $C_{k,t}^{u}$~=~$\emptyset$ or if $s~=~0$,}\\
&\scriptstyle ~\frac{\sum_{j=1..k}(1-sim(c_{t}^{u},c_{t-j}^{u}))}{s}~\text{otherwise.}
\end{cases}
}\end{gathered}$$ ]{}
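To make this computation concrete, a minimal Python sketch of Equation \[eq:reldiv\] could look as follows; the item identifiers and the external `sim` function are assumptions made for illustration, not part of our formalism:

```python
import math

def relative_diversity(target, history, sim):
    """Relative diversity of `target` w.r.t. the k items in `history`
    (Equation eq:reldiv). `sim` returns a similarity in [0, 1], or NaN
    when two items share no comparable attribute."""
    if not history:
        return math.nan
    total, s = 0.0, 0
    for past in history:
        value = sim(target, past)
        if not math.isnan(value):  # count only the computable pairs
            total += 1.0 - value
            s += 1
    return math.nan if s == 0 else total / s
```

Skipping NaN pairs in both the numerator and the divisor $s$ is what keeps the measure defined when only part of the history is comparable to the target.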
Measuring RD (Equation \[eq:reldiv\]) involves computing the similarity between each pair of items, using Equation \[eq:sim\]. In this equation, the function $sim_{a}$ computes the similarity between two items relative to a specific attribute $a$, and $\alpha_{a}$ is the weight of this attribute $a$ in the computation of the similarity. In this paper, since we mainly want to test the robustness of our model as regards sparse data, we use a naive approach where each weight $\alpha_{a}$ is equal to 1. But we could parameterize these weights to adapt our model, according to the kind of changes of implicit context and/or the kind of events we want to detect. [$$\begin{gathered}
\label{eq:sim}
\raggedright{
\scriptstyle sim(c_{t}^{u},c_{t-j}^{u})~=
\begin{cases}
&\scriptstyle ~\text{NaN~if $(A_{c_{t}^{u}}\cap{}A_{c_{t-j}^{u}})~=~\emptyset$, $c_{t}^{u}.a~=~\emptyset$, or $c_{t-j}^{u}.a~=~\emptyset$,}\\
&\scriptstyle ~\frac{\sum_{a\in{}A_{c_{t}^{u}}\cap{}A_{c_{t-j}^{u}}} (\alpha_{a}~*~sim_{a}(c_{t}^{u},c_{t-j}^{u}))}{\sum_{a\in{}A_{c_{t}^{u}}\cap{}A_{c_{t-j}^{u}}} \alpha_{a}}~\text{otherwise.}
\end{cases}
}\end{gathered}$$ ]{}
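A possible rendering of Equation \[eq:sim\] in Python follows; representing items as attribute dictionaries and passing the per-attribute similarity functions explicitly are choices made for the example:

```python
import math

def similarity(item_a, item_b, sim_per_attr, weights):
    """Weighted average similarity over the attributes shared by two items
    (Equation eq:sim). Items are dicts mapping attribute names to values;
    `sim_per_attr` maps attribute names to similarity functions; `weights`
    holds the alpha_a coefficients (all equal to 1 in the naive setting)."""
    num, den = 0.0, 0.0
    for a in set(item_a) & set(item_b):
        if item_a[a] is None or item_b[a] is None:
            continue  # missing value for this attribute: skip it
        num += weights[a] * sim_per_attr[a](item_a[a], item_b[a])
        den += weights[a]
    return math.nan if den == 0.0 else num / den
```

Returning NaN when no common attribute is usable is what feeds the $s$ counter of Equation \[eq:reldiv\].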
In Equation \[eq:sim\], $i.a$ refers to the values (or set of values) of an attribute $a$ for a given item $i$. Starting from here, we developed 5 generic formulas to compute similarities per attribute, according to the type of attribute we have. If the values $i.a$ are expressed under the form of a list (*e.g.* the attribute “similar artists” for a song), we will use Equation \[eq:sima1\]. $$\label{eq:sima1}
sim_{a}(c_{t}^{u},c_{t-j}^{u})=\frac{card(c_{t}^{u}.a\cap c_{t-j}^{u}.a)}{min(card(c_{t}^{u}.a), card(c_{t-j}^{u}.a))}
$$
If the values $i.a$ correspond to intervals (*e.g.* the attribute “period of activity of a singer”), we will use Equation \[eq:sima2\]. $$\label{eq:sima2}
sim_{a}(c_{t}^{u},c_{t-j}^{u})=\frac{card(c_{t}^{u}.a\cap c_{t-j}^{u}.a)}{max(card(c_{t}^{u}.a), card(c_{t-j}^{u}.a))}$$
If $i.a$ have binary values (*e.g.* the mode of a song), we will use Equation \[eq:sima3\]. [$$\begin{gathered}
\raggedright{
\scriptstyle sim_{a}(c_{t}^{u},c_{t-j}^{u})~=
\begin{cases}
&\scriptstyle ~1~\text{if}~c_{t-j}^{u}.a~=~c_{t}^{u}.a\text{,} \\
&\scriptstyle ~0~\text{otherwise.}\hspace*{5em}
\end{cases}
\label{eq:sima3}}\end{gathered}$$ ]{}
If $i.a$ take numerical values (*e.g.* user ratings), we will use Equation \[eq:sima4\]. $$\label{eq:sima4}
sim_{a}(c_{t}^{u},c_{t-j}^{u})=e^{-10*\left(\frac{c_{t}^{u}.a-c_{t-j}^{u}.a}{max_{a} - min_{a}}\right)^2}$$
At last, if $i.a$ expresses coordinates (*e.g.* the localization of two artists), we will use Equation \[eq:sima5\]. $$\label{eq:sima5}
sim_{a}(c_{t}^{u},c_{t-j}^{u})=1~-~\frac{distance(c_{t}^{u},c_{t-j}^{u})}{max_{distance}}$$
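The five attribute-level measures could be sketched in Python as follows; where the text leaves conventions open (interval representation, distance function), the choices below are illustrative assumptions:

```python
import math

def sim_list(a, b):
    """Lists of values (e.g. similar artists): overlap over the smaller list
    (Equation eq:sima1)."""
    a, b = set(a), set(b)
    return len(a & b) / min(len(a), len(b))

def sim_interval(a, b):
    """Intervals given as (start, end) pairs (e.g. years of activity):
    overlap length over the larger interval length (Equation eq:sima2)."""
    overlap = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    return overlap / max(a[1] - a[0], b[1] - b[0])

def sim_binary(a, b):
    """Binary values (e.g. the mode of a song) (Equation eq:sima3)."""
    return 1.0 if a == b else 0.0

def sim_numeric(a, b, lo, hi):
    """Numerical values (e.g. tempo), normalized by the attribute range
    [lo, hi] (Equation eq:sima4)."""
    return math.exp(-10 * ((a - b) / (hi - lo)) ** 2)

def sim_coordinates(a, b, max_distance):
    """Coordinates (e.g. artist location): 1 minus normalized distance
    (Equation eq:sima5); Euclidean distance is an assumption here."""
    return 1.0 - math.dist(a, b) / max_distance
```

Each function returns a value in $[0,1]$, so they can be combined directly in the weighted average of Equation \[eq:sim\].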
Finally, we consider that there is a change of implicit context if the 4 conditions of Equation \[eq:detection\] are met. $\tau$ allows us to focus on relative diversity measures $RD(c_{t}^{u},C_{k,t}^{u})$ that exceed a given threshold. $$\begin{gathered}
\label{eq:detection}
RD(c_{t-1}^{u},C_{k,t-1}^{u})\neq\text{NaN}~\text{and}~RD(c_{t}^{u},C_{k,t}^{u})\neq\text{NaN}\\
\text{and}~RD(c_{t-1}^{u},C_{k,t-1}^{u}) < RD(c_{t}^{u},C_{k,t}^{u})~\text{and}~RD(c_{t}^{u},C_{k,t}^{u}) > \tau\end{gathered}$$
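The four conditions of Equation \[eq:detection\] translate directly into a boolean test; this sketch assumes the two relative diversity values have already been computed:

```python
import math

def is_context_change(rd_prev, rd_curr, tau):
    """Change of implicit context (Equation eq:detection): both relative
    diversity values are computable, diversity is increasing, and the
    current value exceeds the threshold tau."""
    return (not math.isnan(rd_prev) and not math.isnan(rd_curr)
            and rd_prev < rd_curr and rd_curr > tau)
```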
Hypotheses
----------
The scientific question is now to test whether our model is robust in a realistic situation where: (1) we do not know what will happen after the current time $t$, and (2) we have sparse data as regards item descriptions. For these reasons, we will make 3 assumptions that will be discussed in Section \[experiment\].
This assumption was not considered in our preliminary work in [@LHuillier:2014], since we were analyzing variations of diversity *a posteriori* on the whole user’s navigation path, knowing the consultations at each time. We will thus check how many ends of session we can retrieve by only using data available at time $t$, even if this does not lower the interest and relevance of our other detections, as explained above (see Subsection \[overview\]).
Considering that we have a single type of items, we expect to retrieve the same number of events and changes of implicit context.
In this scenario, the attributes may be different from one type of items to another, leading to another form of sparsity.
Experiments {#experiment}
===========
In this section, we present 3 experiments we developed to validate these assumptions.
In the first experiment (**H1**), we test the ability of our model to detect changes of implicit context in real time, and check if the detected contexts can be correlated with particular events such as ends of session. However, unlike our exploratory research [@LHuillier:2014], our new model only uses data available at the current time $t$ (that is to say, we do not look at how diversity evolves beyond the current time). Indeed, our previous model looked for local maxima on the curve of relative diversity and thereby used information unavailable at time $t$ to detect changes of context. In real situations, only present and past information is available. That is one of the reasons that motivated us to extend our model (the other one being the consultation of different types of items). The principle of our model remains quite similar to [@LHuillier:2014]; however, the inputs used to detect changes of context are different.
For each consulted item, we compute the corresponding values of relative diversity. As relative diversity can be computed for each attribute, there are as many relative diversity values as attributes. In this paper, we set the relative diversity of the current item to the average of all relative diversities per attribute. From now on, when we talk about the relative diversity value of an item, we refer to the average relative diversity calculated from all the attributes of this item relative to the history (Equation \[eq:sim\]). Inside a given context, we assume that the relative diversity of each item is fairly constant and low, but that it suddenly increases when a change of implicit context occurs. This increase is due to the fact that different contexts do not share the same characteristics (*i.e.* the same attribute values). Our model aims to detect these peaks of relative diversity over time. To achieve this, it checks at each time-step whether the conditions of Equation \[eq:detection\] are satisfied. In this case, we assume that $c_{t}^{u}$ is the first item of a new implicit context. For each new implicit context detected, we check if $c_{t}^{u}$ corresponds to the beginning of a new session.
In the second experiment (**H2**), we put our model to the test by deleting information from our corpus. Indeed, data sparsity is a well-known problem in the field of recommender systems, and we want to know how our model copes with it. In [@LHuillier:2014], we used a complete dataset (*i.e.* with no missing information about items), but that is rarely the case in real situations. For instance, in a musical corpus, we could have the song title and artist name for each track, while information like the release date, the popularity, or the keywords may be missing. Thus, we want to test if:
- our model is able to compute a relative diversity value, even if some pieces of information about attributes are not known;
- our model is robust to missing information and still performs well for detecting changes of context.
To answer these questions, we randomly delete attribute values in our dataset until we reach an intended rate of sparsity. We test the performance of our model for rates of sparsity between 1 and 99%. Because of this random deletion, some similarity measures between two items, or even some relative diversity measures, cannot be computed. As soon as we can compute the similarity on at least one attribute for at least one pair of items (the target item and one of the items within the history), a value of relative diversity can be set for the target. Otherwise, if we cannot compute any similarity per attribute on any pair of items, we set the relative diversity of the target to *NaN*. Let us note that we set the diversity to *NaN* because a value of 0 would indicate that there is no diversity brought by the current item, not that the diversity cannot be calculated. Of course, we do not consider NaN values as changes of context (see Equation \[eq:detection\]).
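The controlled deletion can be sketched as follows; blanking values with `None` in place is an illustrative choice, not part of the protocol:

```python
import random

def sparsify(items, rate, seed=0):
    """Randomly blank attribute values until roughly `rate` (in [0, 1]) of
    all values are missing, mirroring the controlled deterioration above.
    Items are dicts of attribute -> value; missing values become None."""
    rng = random.Random(seed)
    cells = [(i, a) for i, item in enumerate(items) for a in item]
    for i, a in rng.sample(cells, round(rate * len(cells))):
        items[i][a] = None
    return items
```

Fixing the seed makes each degradation level reproducible across executions.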
In the last experiment (**H3**), the purpose is to examine the consequences of having several types of items in our dataset on context detection performance. Indeed, the previous experiments were run with a single type of items, but in practice this may not always be the case. When the target item and the history items are of the same type (*i.e.* music), the relative diversity can be computed on all attributes for all items (except when there are missing data). However, when these types change from one consultation to another, the relative diversity can only be computed for common attributes (see Figure \[fig:dance\]). Since our initial dataset contained a single type of items (songs), we modified it in order to test our third hypothesis. The criteria for simulating the different types of items were as follows. First, a number of types of items is determined, and each item is randomly assigned to a type. Afterward, for each type of items, we randomly select a subset of $x$ attributes (from the whole set of attributes) that will characterize these items. Another parameter, called $y$, corresponds to the minimum number of attributes in common with all the other types of items. Let us note that the common attributes between pairs of types of items are not necessarily the same (*i.e.* ($A_{type1}\cap{}A_{type2})\neq(A_{type2}\cap{}A_{type3})$). In this way, we can artificially obtain a dataset composed of different types of items, with only a few attributes in common.
For instance, if the initial dataset contains 7 attributes ($A=\{a_{1},a_{2},a_{3},a_{4},a_{5},a_{6},a_{7}\}$) and we want to create 3 types with $x=4$ and $y=2$, we may randomly get this kind of situation: $A_{type1}=\{a_{1},a_{4},a_{6},a_{7}\}$, $A_{type2}=\{a_{1},a_{2},a_{3},a_{4}\}$, and $A_{type3}=\{a_{2},a_{3},a_{4},a_{6}\}$. In that case, $A_{type1}\cap{}A_{type2}=\{a_{1},a_{4}\}$, $A_{type1}\cap{}A_{type3}=\{a_{4},a_{6}\}$, and $A_{type2}\cap{}A_{type3}=\{a_{2},a_{3},a_{4}\}$.
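Since the exact drawing procedure is not specified above, a simple rejection-sampling sketch is one possible way to obtain types satisfying both parameters $x$ and $y$:

```python
import random

def make_item_types(attributes, n_types, x, y, seed=0):
    """Randomly draw `x` attributes for each of `n_types` item types,
    rejecting the draw until every pair of types shares at least `y`
    attributes. Rejection sampling is an assumption made for this sketch;
    the paper does not fix the exact procedure."""
    rng = random.Random(seed)
    while True:
        types = [set(rng.sample(attributes, x)) for _ in range(n_types)]
        if all(len(s & t) >= y
               for i, s in enumerate(types) for t in types[i + 1:]):
            return types
```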
Material
--------
In order to test our different hypotheses, we based our evaluation on a musical dataset. This choice was made because musical items offer many advantages. First, musical items have their own consultation time, that is to say, the time spent consulting a song does not vary from one user to another. Second, metadata on songs can be easily retrieved using specialized services like Echonest[^1] or Musicbrainz[^2]. At last, users frequently listen to several songs consecutively, contrary to a movie corpus for example. Our dataset contains 212,233 plays listened to by 100 users. We obtained these consultations by using the Last.fm[^3] API to collect listening events from 28 June 2005 to 18 December 2014. Our dataset is made of 41,742 unique tracks, performed by 5,370 unique artists. In order to create the sessions for all the users, we assumed that a session is composed of a sequence of consultations without any interruption longer than 15 minutes. When this threshold is reached, we consider that the user started a new session. According to this criterion, we computed 22,212 sessions with an average of 9.6 consultations per session (42.71 min per session). Then, using the Echonest API, we gathered metadata on these songs. For each song, we retrieved 13 attributes: 7 of them are specific to songs, and 6 are related to artists.
- song attributes: duration, tempo, mode, hotttness, danceability, energy and loudness;
- artist attributes: hotttness, familiarity, similar artists (10 artists names), terms, years of activity, and location of the artist (geographical coordinates).
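The 15-minute rule used above to build the sessions can be sketched as follows; timestamps are assumed to be sorted chronologically per user:

```python
from datetime import timedelta

def split_sessions(timestamps, gap=timedelta(minutes=15)):
    """Split a chronologically sorted list of listening timestamps into
    sessions: a new session starts whenever two consecutive plays are
    separated by more than `gap` (the 15-minute threshold)."""
    sessions = []
    for t in timestamps:
        if sessions and t - sessions[-1][-1] <= gap:
            sessions[-1].append(t)  # same session: gap is small enough
        else:
            sessions.append([t])    # gap exceeded: open a new session
    return sessions
```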
Table \[tab:corpus\] summarizes the values of the attributes.
Results and Discussion {#results}
----------------------
**Results as regards the first experiment (H1).** Previously, we presented Equation \[eq:detection\], which allows our model to determine whether the current consultation is the start of a new implicit context. In order to fix the threshold $\tau$, we calculated the mean and the standard deviation of all values of relative diversity for all users within our corpus.
In Table \[tab:statistique\_rd\], we can notice that the standard deviation is quite high compared to the mean of the relative diversity. This means that users’ relative diversity over time takes a large range of values. We cannot know *a priori* the best value for $\tau$, since we do not know how many implicit contexts are present in our dataset. However, we previously assumed that diversity is fairly low within a given context and increases when a change of context occurs. This assumption can easily be confirmed *a posteriori*, by noticing that the average level of relative diversity for consultations that correspond to a session opening ($average=0.36, standard deviation=0.13$) is much higher than that of other consultations ($average=0.21, standard deviation=0.16$). We finally decided to set $\tau$ to the global average of relative diversity within our dataset ($0.23$), so as to favor the detection of consultations above the average rate, without fixing this threshold too high, since there might be a significant increase of diversity after a long period of decrease (leading to values near the global average). When relative diversity exceeds this threshold and all the conditions of Equation \[eq:detection\] are satisfied, we consider that there is a change of implicit context. The results are reported in Table \[tab:detection\_naive\].
In total, our model detects 51,795 changes of implicit context. Among those changes of context, the number of sessions detected is substantial, since our model detects more than 63% of the sessions. This significant overlap between changes of context and events indicates that our model remains efficient when we only use information available at the current time (*i.e.* without considering consultations at time $t+1$ and beyond), since we can easily justify/explain these changes of context by an end of session. This means that, when the explicit context changes (at least as regards the time dimension[^4], since there is a temporal gap between two sessions), the songs listened to in those two explicit contexts usually do not share common characteristics (since they are in different implicit contexts).
We can also note that there are 37,743 changes of implicit context which do not match changes of session. This is not a surprising result and can be explained in a simple manner: there can exist more than one implicit context within a session. We can easily imagine the case where a user starts listening to calm, down-tempo songs and suddenly switches to energetic, rapid-tempo songs within the same session. As a conclusion to these results, we can say that our model seems to perform well by detecting potentially interesting points within the navigation path, which correspond to changes of implicit context according to our definition and can often be confirmed by changes of explicit context (events). But, as a perspective, we need to confront these results with real users, in order to study how they perceive and accept these implicit contexts, before using them as a support for recommender systems. Also, let us recall that we can easily change every parameter of our model (weights of attributes, size of history, value of the threshold $\tau$, ...) after a learning phase, to match users’ expectations and maximize the acceptance and adoption rates.
**Results as regards the second experiment (H2).** In order to understand how our model performs with a lack of data, we operated a controlled deterioration of our corpus. By controlled, we mean that the amount of missing data (that is to say, missing attribute values for the songs) was fixed for each execution. We monitored the number of sessions and implicit contexts detected while progressively deteriorating the corpus, percent by percent (see Figure \[fig:degradation\_session\]).
![Performance of our model against sparsity[]{data-label="fig:degradation_session"}](images/degradation.eps){width="50.00000%"}
From Figure \[fig:degradation\_session\], we can see that the performance of our model is quite stable up to 60% of missing data. These results highlight the fact that our model can perform well, even with a large and realistic amount of missing data.
**Results as regards the third experiment (H3).** Based on some popular social networks like Facebook[^5], LinkedIn[^6], or Yupeek[^7], we observed that the number of different types of items is usually around 4. That is why we decided to create 4 types of items from our initial corpus. On this basis, we tested different combinations of the number of attributes per item $x$ and the number of common attributes $y$. For each combination, we computed the number of sessions and implicit contexts detected. The results are presented in Table \[tab:types\_differents\]. These values result from 10 executions, with the intent to limit bias due to the random selection of attributes. Indeed, depending on the attributes selected for each type of items, the performance could vary, as some attributes may be more representative than others in the detection of implicit contexts.
From Table \[tab:types\_differents\], we can observe that performance is quite good even if the number of attributes per type of items $x$ is low. Moreover, the higher the number of common attributes between types of items $y$, the more changes of session and implicit contexts we detect. We see that the standard deviation is high when both the number of attributes $x$ and the number of common attributes $y$ are low. This confirms that all attributes do not have the same impact in detecting changes of implicit context. It can be supposed that a difference between the energy values of two songs is more characteristic of a change of context than a variation of the artist’s location. Adapting the weight of each attribute in the calculation of the relative diversity for a given item is a perspective for future work.
Conclusions and Future Work {#conclusion}
===========================
Our model allows us to monitor the natural diversity contained in users’ navigation paths over time and, although part of on-going research, already presents many strengths for characterizing user context. First, it has a complexity in constant time since, at each time-step, we only compute relative diversity on a fixed and small history. This makes our model highly scalable. In addition, it preserves privacy, since it does not require personal information about the active user (even if it can make use of information that other users accept to share, as shown in Figure \[fig:dance\]) and allows the navigation path beyond the recent history to be forgotten. At last, it is generic, since our equations fit any kind of attributes and do not require an ontology to put words on the context. One of the questions addressed in this paper was to check our ability to predict changes of implicit context at time $t$, without knowing what will happen next. So as to give meaning to the implicit contexts detected by our model, we tried to find a match with explicit factors and events such as ends of session. Our results showed a significant overlap between changes of implicit context and ends of session. This reinforces our conviction that this model highlights interesting points within users’ navigation paths. First, it allows us to anticipate ends of session, and will thus be useful to adapt recommendations when users are close to reaching a decision. Second, the changes of implicit context detected by our model that do not match events are very promising results for being, in the long term, able to formally characterize the user context and provide context-aware recommendations that respect privacy. Another purpose of this paper was to test the robustness of our model when confronted with sparse data.
We distinguished two different scenarios: a single type of items with incomplete descriptions, and several types of items with small intersections of attributes. In both cases, the performance of our model remained stable in tough conditions, with up to about 60% of missing data.
Among our perspectives, we aim at confronting our model with real users, so as to measure their perception and acceptance of implicit contexts. We expect to map implicit and explicit contexts so as to reach the same performance as systems based on explicit contexts, but with a deeper consideration of privacy issues. Finally, by characterizing implicit contexts, we will be able to explain recommendations based on them and provide new interaction modes to make user decisions easier.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was financed by the region of Lorraine and the Urban Community of Greater Nancy, in collaboration with the Yupeek company.
[1]{}
G. Adomavicius and Y. Kwon. Improving aggregate recommendation diversity using ranking-based techniques. , 24(5):896–911, 2012.
G. Adomavicius, B. Mobasher, F. Ricci, and A. Tuzhilin. Context-aware recommender systems. , pages 67–80, 2011.
A. L’Huillier, S. Castagnos, and A. Boyer. Understanding Usages by Modeling Diversity over Time. In [*ACM Conference on User Modelling, Adaptation and Personalization (UMAP)*]{}, 2014.
K. Bradley and B. Smyth. Improving recommendation diversity. In [*Irish Conference on Artificial Intelligence and Cognitive Science*]{}, AICS’01, pages 85–94, San Francisco, USA, 2001.
S. Castagnos, A. Brun, and A. Boyer. When diversity is needed... but not expected! In [*International Conference on Advances in Information Mining and Management (IMMM)*]{}, pages 44–50, 2013.
S. Castagnos, N. Jones, and P. Pu. Eye–tracking product recommenders’ usage. In [*RecSys*]{}, pages 29–36, 2010.
G. Chen and L. Chen. Recommendation based on contextual opinions. In [*User Modeling, Adaptation, and Personalization, UMAP ’14*]{}, pages 61–73. Springer, 2014.
L. F. Cranor. Hey, that‘s personal! In L. Ardissono, P. Bruna, and A. Mitrovic, editors, [*User Modeling 2005*]{}, volume 3538 of [*Lecture Notes in Computer Science*]{}, pages 4–4. Springer Berlin Heidelberg, 2005.
M. D. Ekstrand, F. M. Harper, M. C. Willemsen, and J. A. Konstan. User perception of differences in recommender algorithms. In [*Proceedings of the 8th ACM Conference on Recommender Systems*]{}, RecSys ’14, pages 161–168, New York, USA, 2014.
N. Hariri, B. Mobasher, and R. Burke. Context adaptation in interactive recommender systems. In [*Proceedings of the 8th ACM Conference on Recommender Systems*]{}, RecSys ’14, pages 41–48, New York, NY, USA, 2014. ACM.
M. Hasan, A. Kashyap, V. Hristidis, and V. Tsotras. User effort minimization through adaptive diversification. In [*Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*]{}, KDD ’14, pages 203–212, New York, NY, USA, 2014. ACM.
N. Jones. . PhD thesis, École polytechnique de Lausanne, July 2010.
M. Kaminskas, F. Ricci, and M. Schedl. Location-aware music recommendation using auto-tagging and hybrid matching. In [*Proceedings of the 7th ACM Conference on Recommender Systems*]{}, RecSys ’13, pages 17–24, New York, USA, 2013.
B. P. Knijnenburg, A. Kobsa, and H. Jin. Dimensionality of information disclosure behavior. , 71(12):1144 – 1162, 2013.
N. Lathia, S. Hailes, L. Capra, and X. Amatriain. Temporal diversity in recommender systems. In [*Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval*]{}, SIGIR ’10, pages 210–217, New York, USA, 2010.
L. McGinty and B. Smyth. On the role of diversity in conversational recommender systems. In [*Proceedings of the Fifth International Conference on Case–Based Reasoning*]{}, pages 276–290. Springer, 2003.
A. Said, B. Kille, J. Brijnesh, and S. Albayrak. Increasing diversity through furthest neighbor–based recommendation. In [*Proceedings of the Workshop on Diversity in Document Retrieval*]{}, WSDM’12, Seattle, USA, 2012.
C. Simpson. Amazon will sell you things before you know you want to buy them. The Wire, 2014.
B. Smyth and P. McClave. Similarity vs. diversity. In [*Proceedings of the 4th International Conference on Case–Based Reasoning: Case–Based Reasoning Research and Development*]{}, ICCBR ’01, pages 347–361, London, UK, 2001.
M. Zhang and N. Hurley. Avoiding monotony: Improving the diversity of recommendation lists. In [*Proceedings of the 2008 ACM Conference on Recommender Systems*]{}, RecSys ’08, pages 123–130, New York, NY, USA, 2008. ACM.
C.-N. Ziegler, S. M. McNee, J. A. Konstan, and G. Lausen. Improving recommendation lists through topic diversification. In [*Proceedings of the 14th International Conference on World Wide Web*]{}, pages 22–32, New York, NY, USA, 2005. ACM.
[^1]: http://developer.echonest.com/
[^2]: https://musicbrainz.org/
[^3]: http://www.lastfm.fr/
[^4]: Among other common explicit context factors such as localization, mood, people nearby and so on.
[^5]: https://www.facebook.com/
[^6]: https://www.linkedin.com/
[^7]: http://yupeek.com/
There is no term I dislike more, and none that gets my proverbial hackles up more than “the greater good.” I hear it a lot in Pagan circles. I hear it a lot in the interfaith circles in which I move and work too. In both cases, it’s used almost inevitably as a universal panacea when the speaker is about to abrogate any sense of personal responsibility. Again and again, I’ve seen it used as a justification for moral cowardice. Again and again I’ve seen it used not only to excuse thoughtlessness or laziness, but to grant such questionable behaviors the moral high ground. Not only do I consider this term nine times out of ten a moral cop-out, but I also consider it an incredibly dangerous sensibility, one that can be used –and historically has been used—to justify incredible cruelties.(1)
We live in a society that does not encourage personal challenge. It does not encourage anyone to live an examined life. Instead, we’re encouraged – by the media, by the Christian-dominated culture, by our corporate sponsors (yes, I’m being sarcastic) – to stay numb and dumb. We live in a culture that raises personal mediocrity to a high art. Worst of all, we live in a culture that, courtesy of the new age movement, fetishizes ‘feeling’ over personal obligations, and un-thought-out pleasure over any sense of personal responsibility. All of this (and more) contributes to the moral laxity that all too often creeps into our communities, so much so that not challenging ourselves to moral excellence has become the norm. I remember, years ago, a Heathen man and kindred leader telling me most avidly that it was “ok” to be “mediocre.” He believed it too. I was appalled.
Before going any further, I think it is important that I define my use of certain terms like ‘moral” and ‘virtue.” The word “morality” comes from the Latin and implies something about one’s conduct or manner of behaving.(2) This has evolved into a branch of philosophy dealing with questions of good and evil, right and wrong. Ethics is related to morality in that it examines and categorizes various concepts of morality, the nature of right and wrong, the origins of moral theories, and the ways in which a moral decision might be reached. Ethics are, to my mind, the practical application of moral principles. ‘Virtue’ also comes from the Latin and refers to specific qualities of moral excellence as well as the ongoing process of their development.(3) In no way am I using either term to refer to sexual repression or social prudery, as I have occasionally heard them misused. In my use of both ‘morality’ and ‘virtue,’ I am specifically referring to the development of one’s character.
That being said, the questions inherent in the use of the term ‘the greater good’ are most definitely moral ones. Who gets to determine what that greater good is? About whose greater good are we talking? To whom do the benefits of this greater good go? My colleague Sarenth put it thusly:
“The Greater Good is usually not; it is, in fact, an appeal to the lowest common denominator in that it neither challenges individuals in terms of personal responsibility, nor does it hold larger society accountable for securing its own Good, as this Good is balanced on the back of a few who may never see the benefits of their sacrifice.”(4)
Whenever I hear someone allude to “the greater good,” I grow very wary. Oddly enough, in interfaith settings at least, I often hear it said in prayers. In Pagan settings, it tends to come up in magic or energy work, particularly healing work, and I can think of no worse places in which to abrogate personal responsibility. It is a facile term, one that is far, far too easy to use, and therein lies precisely its danger.
When I hear someone claim “the greater good” as the excuse for their decision (or more often their lack of one), I also know that I am very likely dealing with someone who, while inevitably well-meaning, has not yet shaken themselves free of the monotheistic paradigm, the paradigm that gave us colonialism, the doctrine of discovery, and endless bloodshed. Why? Well, talking about the greater good presupposes that there is a singularity, in other words one greater good. That is not too far from the belief that there is one and only one true way. It presupposes a tremendous arrogance on the part of the one making the decision as to what the greater good might be – often unconscious arrogance, but arrogance nonetheless. Who gets to determine this? Who or what is going to be sacrificed?
I’ve also found that quite often the real motivation is fear. One will do or not do a thing in order to maintain the status quo, to keep themselves from personal discomfort, or from having to make a clear-cut decision in a given situation. It does not matter what decision is morally correct; convenience takes precedence. In our spiritual lives this can come up in many surprising small ways. Perhaps you are a Pagan woman whose devotion to the Gods requires dressing a certain way, or doing a particular ritual one day a week. Perhaps your boyfriend objects to the time this takes away from him. What do you do? (My answer: bye-bye, boyfriend.) Perhaps you are in school and you see someone being harassed because they are gay, or overweight, or unpopular, or a particular ethnicity. What do you do? Do you speak up, or do you stay silent and by your silence collaborate with the bullying? Someone asks you if you’re Pagan. You are. What do you say? Do you have the courage and commitment to claim that space publicly for yourself?
I hadn’t ever really conceptualized this until quite recently. I’m going to go off on a tangent for a moment, but have no fear, it will lead me back to the point at hand, I promise. Lately, several times in fact over the past month, women have come to me in some way, shape, or form asking my advice over what to do if their boyfriends or spouses didn’t approve of their religion or certain practices in their religion. My point of view is simple: I am committed to my Gods and ancestors. This is the central facet of my life. Anyone coming into my life, or wishing to be part of it had best understand that. If someone makes it an issue, or in any way gets between me and my spiritual Work, or causes me to expend unnecessary emotional energy on the matter, they will be out of my life post haste. I have lived by this rule for over twenty years. After all, one is either committed to one’s Gods or one is not; and if one is, then there is no excuse for allowing one’s practices to be compromised. In every instance, the woman in question thanked me and complimented my strength but it was clear that she did not think she could ever find it in herself to do the same, even if she wanted to do so. In every instance I was deeply bothered by this well meaning and sincere compliment. It was only recently that I realized why.
It’s not a question of strength.
It has nothing to do with being strong. It’s a matter of commitment and choosing to hold to one’s personal (and spiritual) commitments every day. It’s personal choice, nothing more. Moreover, to dismiss it as "strength," in the way that these women did (with emotional overtones that said very clearly that it was beyond their ability to conceive of such "strength" within themselves, because they did not conceive of themselves as strong, which is heartbreaking in and of itself), is to place the very idea of personal commitment and, yes, personal strength outside of one’s personal potentiality. It is to deny that one could possibly be strong and/or committed to something too. It makes these qualities something that others do. That is very sad. In part, though, I think this comes from the expectation that strength, courage, moral excellence, and any other virtue that one could possibly mention are inborn graces, suddenly springing up in a person’s character whole and in full bloom, when nothing could be further from the truth.
Instead, qualities like personal strength are born out of very small, everyday, seemingly very mundane choices. They are developed and honed through constant effort and mindfulness. They are exercised through attention to the small choices that each one of us has to make every day. They’re polished through failure and learning how to come back to center afterwards; and they exist always in an agonistic exchange with their opposite: one who has courage knows terror all the time, one who is strong daily confronts weakness, and the most compassionate person might struggle with depression or the urge to wall themselves off from the pain of the world. Strength doesn’t just happen; it’s the result of years of making those small and seemingly insignificant choices in ways that lead toward a greater sense of one’s capabilities and personal commitments. There is nothing grand about it. It’s choosing to get up and do that weekly ritual when you are tired and inconvenienced. It’s choosing not to buy from X brand, owned by fundamentalist Christians. It’s choosing to make that phone call to the friend fighting cancer, even though you feel awkward and uncomfortable and don’t know what to say. It’s something that everyone can aspire to, which does not, I might add, translate into it being something that is easy to acquire.
I also think that this love affair with the idea of the greater good stems from a deep discomfort with conflict. One can speak of the greater good, of leaving things to the greater good, or of doing this "for the greater good of all" without feeling as though one has made any challenging decision. It removes the possibility that any conflict might arise as a direct consequence of taking a particular stand or making a particular choice. Lack of decision becomes the de facto decision. In the interfaith community particularly I see this cropping up a great deal. There’s an underlying discomfort with taking a clearly defined moral stance outside of something akin to "love and light for all." Conflict and disagreement, which can be powerfully fertile ground from which new ideas and shared endeavors might grow, are eschewed out of a fear that they might mean "being judgmental." Taking a moral stance on any issue at all is viewed as being unfairly judgmental and as such is discouraged on a very deep, fundamental level; all of which leads to moral impotence.
I very strongly believe that our Gods and ancestors call us to make a stand…large or small, we are called upon to be people of substance. Sometimes this means making the uncomfortable or inconvenient or terrifying choices because they are the morally correct choices to make. This means being willing to take a moral stance and yes, to make a personal judgment. One can do that without expecting that everyone else will follow suit: one can believe a thing passionately, without demanding that every other person bow down and believe the same (this, by the way, is one of the essential differences between monotheism and polytheism). One can be judgmental without being cruel.
Is there ever a time when one must consider ‘the greater good’ beyond the abstract? I believe so. Warriors confront it, but they don’t call it ‘the greater good.’ They call it ‘awful necessity.’ In this vein, Gandhi led his people in revolt against the governing power and transformed a nation. Martin Luther King, Malcolm X, and many other brave men and women bucked the status quo and in some cases laid down their lives for the greater good of their people. Winston Churchill allowed British cities to be bombed shortly before D-Day, taking no measures to move people to safety. Why? Because had he taken preventive measures, he would have revealed to the Germans that the Allies had broken their codes, the plans for D-Day would have been for naught, and the war might have dragged on far, far longer, costing thousands more lives. He made the decision to stay mute and continue plans for the offensive that helped end the war, in service to the greater good. One might question what all of these instances have in common. They have very little if anything to do with one’s personal comfort. They are in no way self-serving, and that is the key. Of course, this presupposes that one knows oneself well enough to acknowledge one’s deepest motivations, and to know when one is in fact being self-serving. But that is part of our spiritual work too, part of what I believe we are each obligated to explore. It goes back to the maxim said to have been carved above the entrance to the temple of Apollo at Delphi: know thyself. No one said this task was easy.
Notes:
The American government thought it was serving ‘the greater good’ when it tore Native American children away from their parents and enslaved them in Christianizing schools: "kill the Indian to save the man" was the saying of choice. Charlemagne surely thought he was serving the greater good when he slaughtered my Saxon ancestors for refusing to convert to Christianity. The "pro-life" man who shoots a doctor for providing care to women certainly thinks he’s serving the greater good too.
I hope that my readers will forgive the unfortunate brevity of this particular column. I’ve had precious little time to sit down and write this week, and only now, barely a handful of hours before it’s due, am I sitting down to write my second “F” column. This week’s column, thanks to a number of conversations that I’ve been having over the past few days with my students, deals with one of (in my opinion) the most magnificent of the Elemental Powers: fire.
In addition to honoring the Holy Powers and reverencing the ancestors, the polytheistic traditions of our pre-Christian forebears, traditions that we are working very hard to restore and renew, very often had one other important component in common: honoring the elemental forces. In Western metaphysical traditions, occultism, Wicca, and many branches of Paganism, the primary Elemental tribes are Earth, Air, Fire, and Water. (My tradition would also include Ice as one of the fundamental elements, different at its very core, in its nature, and in the way that one must interact with it, from water.) These were the forces that sustained and continually transformed our world and to which we owed a measure of respect and gratitude. Sadly, this awareness of the importance of this ongoing, reciprocal relationship is one of the many crucial things lost during the conversion to monotheism, and we have yet to recover from that loss. It’s one thing to say that the earth is alive, after all, but quite another to really comprehend what that means in a way that impacts every single second and aspect of one’s life. But we’ll get to that in a bit.
When we deal with the Elemental Powers in contemporary polytheisms, unlike in the culture in which our ancestors lived and loved and worshipped, it is painfully easy to forget that these Powers are living, sentient, elder Beings. Part of the issue is that we rarely, if ever, are exposed in our lives to the full force of any of the primal elements. Very few of us live the type of life that is immediately and glaringly dependent on their regard. It’s easy in our world to take as our model for any given Elemental Power only that to which we have direct access, to think only of the gentlest, tamest, most civilized and human-friendly facet of any given element when we engage in our ritual praxis. Thus when we honor fire, we may think only of the candle flame or hearth-fire; when we honor water, it is with a bowl or chalice of water; or we light incense for air, and so forth. There’s nothing inherently wrong about this, so long as one understands that the Elemental Powers are so much more. Fire is not just the hearth-fire that gives warmth and light, but also the raging wildfire that devours the forest and maybe your home as well. It’s the inferno that steals the lives of the brave. It’s lava and the volcano blast that has buried cities and changed the course of civilizations; it’s electricity, and lightning, and the vibrant power of the sun. It has a thousand faces of which we know only a handful, and the same holds true with water. Water isn’t just the chalice of tap water, but the tsunami. Earth isn’t just the soil that nourishes the seed but also the earthquake that destroys a city. Air isn’t just that which we breathe, but also the fury of the hurricane gale and everything in between. Nor should there be any moral judgment on any of these manifestations. They simply are part and parcel of these magnificent beings.
It’s important to keep all of that in mind, in part because the Elemental Powers were not brought into existence to pander to us, or to make us comfortable, and pretending that only the civilized aspects of an element exist does not and cannot make it so. It can lead to a certain spiritual complacency.
While this week’s article is about fire, in the greater sense it’s about what it means to live as an animist, knowing that every tree, every stone, every flicker of fire, every breath of the wind is alive, sentient, and ancient. That knowledge changes everything.
First of all, fire was absolutely essential to our ancestors. Without fire we very likely would never have made it out of the Neolithic era. Partnering with fire enabled us to cook our food, develop crafts like pottery, glass-work, and metal-work, and lay the foundations for building civilization. Fire governs the arts of war too, but when it is channeled and engaged with properly, it is tremendously creative and positive in the blessings it bestows. Without fire, our ancestors would have died during the last ice age. Fire sustained us as a species.
Moreover, in my tradition at least, we did not have to steal fire. It was given to us, part and parcel of how the worlds were made. The nation of fire chose to partner with us from the very beginning. My sister works in animation and a couple of years ago she introduced me to an animated series called “The Last Airbender.” While I found the rather dogged pacifism of the main character annoying (it was a children’s series for Nickelodeon after all), I found the theory of the elements quite sound. Moreover, in the series, the writers refer to the earth nation, fire nation, air nation, and water nation. I heard that and thought “Yeah. Exactly.” So I am shamelessly stealing this, though prior to this I would often refer to them as ‘tribes.’ I like using “nation” to describe them. Why? The term ‘nation’ implies a conscious unity of force and self-identification. Moreover, it implies a cohesive culture, language, and cultural awareness. It also, perhaps most importantly, emphasizes both the independence and the sentience of these Beings. To my way of thinking, ‘nation’ is precisely the appropriate term. (For those with kids, by the way, the animated series the “Last Airbender” is very Pagan friendly. Avoid Shyamalan’s movie like the plague). But I digress.
At its metaphysical core, fire is one of our most primal conduits to our ancestors. Every fire that we light, every fire that we encounter, every fire that *is* remains part of that first fire kindled by our very oldest ancestor. It is part of every fire that was and every fire that will be. The Elemental Powers are always in constant communication with each other. They do not cease to exist simply because they cease to be in our world. In many respects, the Elemental Powers are our eldest ancestors, existing as they did before humanity and sustaining us as they do. From a cosmological perspective within the Northern Tradition, they definitely hold this honored position. Acknowledging that and working that knowledge into one’s practices is the first step in re-awakening to the type of animism by which our ancestors lived. It’s the first step in healing a thread long sundered, the first step in restoring an awesome and important responsibility that our ancestors so long ago understood.
I was giving a workshop recently on Northern Tradition shamanism, and, while shamanism is not accessible to everyone (it’s a vocation and calling—one is taken up and owned by Gods or spirits), there are some techniques that everyone can safely do and maybe even should be doing (ancestor veneration being a definite should, in my opinion). I had promised at the start of the class to teach a couple of these techniques so that everyone present could take something practical home with them that they would then have the option of incorporating into their personal practices. The last thing that I talked about was honoring the Elements. I didn’t give any specific techniques. Instead, I talked to the class about how the world was alive and aware. The fire, wind, water, earth, soil itself, grass, trees, mountains, and every single stone was alive and aware. Understand that and the specific techniques will follow.
Because really, once you realize, truly realize to the very core of your being that everything is alive, everything is awake, everything is sentient, your relationship with everything in your world can change. It changes the way that one chooses to engage with the world and everyone and everything in it.
Elements act according to their natures. Therefore I will praise fire for the beauty, strength, warmth, and terror it bestows. I will praise it for sustaining our ancestors. I will praise it for its brightness. I will praise it for its heat. I will praise it for carrying our offerings to the Gods. Hail to fire who consecrates. Hail to fire who renders holy all that it encircles. Hail to fire, who illuminates the way of the dead. Hail to fire who unlocks ancestral memory. I will honor fire. I will set out offerings to this glorious nation. May it always and ever be praised.
I have two links of note for my readers today. Firstly, Rev. Allyson Szabo, Hellenic correspondent for the Keene Examiner, interviewed me on the topic of prayer. I was very pleased with how the interview came out, and Allyson asked some very insightful questions. That link may be found here:
The book itself may be found at www.asphodelpress.com or www.amazon.com. That's probably all for today, folks, but stay tuned for this week's Pagan Blog Project post (Friday), and in the meantime, have a wonderful Ostara/Eostre celebration.
My adopted mother used to say that ‘fear is never a good motivator.’ While she was correct, I often advise both myself and my clients that, on the other hand, fear can be an excellent teacher. In fact, it is often the first and most fierce ‘teacher’ that many of us will encounter, one that dogs us again and again throughout our lives. The question is whether and how well one learns to work with fear, and to use it, without baring one’s neck to its bite.
I think that, amongst other things, this is the essence of warrior work. It’s also an aspect of warrior work that everyone can, in some way, touch and incorporate into their lives. We all had ancestors who were warriors after all and we’re here because of them. Our lines survived because of those men and women who made the hard and often violent choices, and in doing so, learned to stare down fear. We’re here because of those men and women who, whether they wanted to or not, learned to dance with fear. If they can do it, each and every one of us, warrior or no, can learn to do the same.
Most of us from my generation who were into sci-fi as children are familiar with the series “Dune.” I actually never saw the movie, but I did come across the following mantra against fear when I read the book (apologies to Dune fans if I’m misquoting slightly. I haven’t read the book in a very long time and I don’t have a copy on hand). Not only did I find it excellent advice when I first read it, but its wisdom has stayed with me through the years. So, with apologies to Frank Herbert:
“I must not fear. Fear is the mind-killer. Fear is the little death that brings total obliteration. I will face my fear. I will permit it to pass over me and through me. And when it has gone past, I will turn the inner eye to see its path. Where the fear has gone there will be nothing. Only I will remain.”
I remember it having only read the book once, at least twenty-five years ago, which gives you some idea of how powerful I found this. In fact, I think this little mantra very neatly encapsulates the ‘medicine’ of fear: it has the capacity to show us our truest selves.
Nowhere is this more fully expressed than in the realm of the spiritual.
In some respects, fear is a biological, evolutionary response. It’s our first warning that something is amiss, that danger threatens. In animals and humans alike, fear often leads to a ‘fight or flight’ response. In a situation of active, actual danger that’s the appropriate response. It’s a survival mechanism. In spiritual terms however, we all too often take that very natural survival mechanism and use it to reinforce our prejudices, stereotypes, to strengthen the walls of the neat little restrictive boxes into which we’ve placed ourselves (or sometimes by circumstances allowed ourselves to be placed). We use it to avoid growth…which can be terrifying. We use it to avoid engagement with the Holy Powers, which can lead to tremendous, unavoidable change in every part of our lives. We use it as an excuse. We use it to avoid the consequences of spiritual commitments and obligations. Most of all, and saddest of all, we use it to avoid becoming fully realized human beings.
This isn’t just a spiritual phenomenon. How many people do you know who avoid doing something—something they may really want to do (be it going on a trip that involves plane travel, or going on a date, or going back to school, or trying for a particular job) because they are afraid? Fear tells us to evaluate a situation and then provides us with an opportunity to stretch ourselves and to grow. In the most difficult situations it also provides us with an opportunity to find our courage and develop our characters in unexpected and sometimes glorious ways. It can reveal our potentiality. It should never, ever, ever be used to limit who we are and who we have the potential to become. This is a misunderstanding of its medicine, though it is, given our society, an understandable one.
We live in a world replete with an amazing degree of moral cowardice and ennui. It’s to be expected that our communities have inherited that as well. Most of all, in our society women are raised on fear. We drink it in with our mother’s milk, and I’m not just talking about fear for one’s physical safety. I cannot begin to count the number of students and clients (inevitably women) I have had who have struggled spiritually because of this: they were being called to greater, more deeply rooted commitment to the Gods and wanted it with everything in themselves, and yet fought against it equally strongly. Why? Because they were afraid: afraid that they would become too independent and thus be perceived as unfeminine, afraid that their mates would leave them, afraid that their lives would change, afraid that they would have to be responsible for themselves in ways they’d never had to be before, afraid of being different, afraid people would dislike them, be angry at them, not approve, afraid that they would not be strong enough to stand against that disapproval or to make the journey; afraid, for all of this and a thousand other reasons; and all too often, rather than acknowledge and face that fear, they gave in to it and allowed limitation to dictate their spiritual lives (in ways which I’m sure mirrored their mundane lives, the shadow side I suppose of the metaphysical maxim "as above, so below"). I find that tremendously sad.
I will admit, steeped in warrior medicine as I am, it is not something that I fully comprehend on a personal level. Fear is a warrior’s constant companion. I am afraid every day of my life in some way, sometimes over things I can control and sometimes over things I can’t, but one learns early on to deal with it and move beyond it. It simply is, and, like pain, is often irrelevant to doing the task at hand. I have long deplored our society’s idolization of emotional responses as a reason to do or not do a thing, but nowhere more than here. What’s more, I do not know how to minister well to those who stubbornly and knowingly cling to fear and who fear becoming anything but weak. I do not know what it is like to be in that skin, and my heart goes out to people of any gender who struggle with this, because I have come to see fear as a tremendous ally, a blessing in disguise.
Fear is the herald of courage and courage is an absolutely necessary component for a truly engaged spiritual life. It is in no way the absence of fear. Rather it exists in that liminal space with fear, forever partnered. One of the first things that spiritual work asks is that we willingly explore the power of the space these two things share, both within ourselves and without as well. Fear teaches us that we are stronger than we think. It hones that strength from which so many other spiritual virtues flow. It allows us to live productive, whole lives spiritually and otherwise. It teaches us commitment, perhaps to ourselves most of all.
What’s more, personal virtues like courage don’t spring forth fully evolved out of nothing. We are not born with them. Not even those of us who carry warrior medicine are born with them! These things must be cultivated, carefully, challenge by challenge, day by day, year by year. In this way, they weave their unfolding ever more fully into the fabric of one’s life. There is tremendous security in learning to overcome the stranglehold fear can take on one’s spirit: once you have done so, once you know that the world will not end and you will survive in the face of your most frightening fear, it loses its power and you are free. It is a powerful thing to learn what will not break you. (That is actually one of the mysteries of ordeal.)
Strength and courage are cultivated through facing and overcoming small daily challenges. That holds true in one’s spiritual life just as much as in the most mundane of activities. Engaged spirituality begins with a certain surrender, with opening up, with consciously sought out vulnerability. That is terrifying, perhaps the most terrifying thing a person will ever do. It’s also an ongoing act of commitment that requires tremendous, ongoing courage. In this way, living an authentic devotional life requires one to taste, at least a little, of warrior medicine; because commitment is difficult and learning to trust the Gods is difficult, and having those deeply sacred moments of direct interaction can be terrifying. It changes everything. Most of all, it changes us and how we fit into the lives we had before we embarked upon this road. Sometimes it changes us in ways that cannot easily be undone (nor should they be).
I should point out that reclaiming our indigenous traditions takes tremendous courage in our world. Becoming Pagan or Heathen takes tremendous courage. It violates the Christian dominated status-quo. Developing a relationship with the Gods takes tremendous courage. It often meets with violent opposition in our religious communities (as counter-intuitive as that may seem). I always tell my clients: you have tasted courage before in your life. Be proud of that. It is no small thing at all.
In fact, gythia K.C. Hulsman (in speaking of Heathenry) pointed out to me recently that “ours is a religion wherein we're told that we ought to be proud in our actions and choices. We’re told that we should be truthful and frithful in our community dealings. Yet how many are fearful of sharing their personal practices?” She’s right too. Every week I get emails thanking me for my work, explaining how much it helped the writer spiritually but ending with “please don’t tell anyone I contacted you. I’m afraid of the response I’d get from my community.” Worse, more than once, I have had people contact me telling me they were giving up on the community because of the hostility to engaged spirituality. This is a travesty.
It is, however, a reality we must deal with: sometimes our communities are the biggest blockages to spirituality that we will face. In such situations, staying the course can require tremendous courage. Moreover, sometimes the challenge occurs even closer to home: sometimes the first spiritual challenge we face (after shaking off the mental and spiritual chains of monotheistic dominance) happens in our homes, with loved ones: parents, spouses, friends. Sometimes we must work to be very strong when those we love object to our spiritual practices and even attempt to interfere. People often think that courage is developed by the big things, expansive once-in-a-lifetime happenings, when in reality it’s the everyday grind where we find our deepest challenges. We don’t have to look far; fear dogs us.
Moreover, I think the Gods and ancestors challenge us. I think They challenge us to grow, to evolve, to become stronger, more fully developed human beings. They challenge us over and over at each new point in our spiritual life and how we respond to those challenges determines how useful we shall be to Them. It determines where we will be taken from that point on. It determines what blessings we are able to accept.
The secret is to not stop when you have made one small step forward but to persevere and keep going further and deeper. Hanging above my desk, I have a couple of quotes, both by Eleanor Roosevelt:
“We gain strength, and courage, and confidence by each experience in which we really stop to look fear in the face... we must do that which we think we cannot do.”
She also said that ‘what is to give light must endure burning.’ She was right too. Fire purifies. It hones. It anneals. So does working through fear. It is essential. There is no spiritual growth without it.
As of today, March 15, 2012, I (Galina Krasskova) am taking over as gythia (priest) of Ironwood Kindred (IWK). IWK will become, along with Urdabrunnr kindred, part of my spiritual House, with all the rights and obligations that entails. For members, not much will change, save that the primary administrative locus of the group has shifted to NY state. Etinmoot will still occur thanks to the gracious hospitality of Cauldron Farm in MA (the announcement of dates and programming is due to go up on the IWK site shortly) and Loki and Angurboda remain the Patron Gods of the kindred.
Running a kindred (or coven, or iseum, lyceum, or ile) is hard and grueling work. I would like to extend my appreciation and thanks to gythia Elizabeth Vongvisith, who not only founded IWK, but ran it efficiently and well for seven years. That is no mean feat. I’d also like to recognize and acknowledge her wisdom in knowing when it was time to say goodbye. Knowing when to step down is one of the hardest graces a priest/ess will ever be called upon to cultivate. Priestcraft is difficult. I was taught many years ago when I was first ordained that the average life of a priest/ess is about five years, if one is lucky. After that, if one doesn’t take a break, one might well be pushing burnout. I have found, in over twenty years of practice, that to be largely true. With that said, Elizabeth has earned recognition for both her dedication and commitment. She will remain a valued member of IWK and I tip my proverbial hat to her.
Anyone with any questions should contact me at tamyris at earthlink.net. Those interested in learning more about IWK may see this site: http://ironwoodkindred.wordpress.com/ (this site will eventually be edited and updated to reflect the change in leadership). Those interested in learning more about Urdabrunnr Kindred should see http://urda.seika.org/.
Please note, both kindreds maintain a ‘no assholes allowed’ policy. Those interested in harassing, whining, moaning, complaining, or slandering us about our inclusion of the Jotnar in our devotional work are quite respectfully encouraged to piss off. Inflammatory emails will be summarily deleted. Threats will be reported to the police. Have a nice day.
I am very happy to announce that Bibliotheca Alexandrina has released its newest devotional: "Queen of the Sacred Way," a devotional anthology to Persephone.
I know many of my readers have been waiting for this one for a long time. (For those wondering at my involvement, it is very, very minimal: I have one small piece in the book, a poem written close to twenty years ago.) This promises to be a beautiful offering to the Hellenic Goddess of the Underworld. So check it out, folks. I'm waiting with bated breath for their upcoming Hermes devotional. :)
"Queen of the Sacred Way" may be found here: http://neosalexandria.org/bibliotheca-alexandrina/current-titles/queen-of-the-sacred-way-a-devotional-anthology-in-honor-of-persephone/
This tells you where and how to order. It will be on amazon.com soon, but (as I well know from my own publishing exploits) they can take a few weeks to update their online catalog. Congrats to those of you involved in the collecting, editing, and publishing. This is a beautiful book.
By happy coincidence, just as we’re coming to the letter ‘E’ in the Pagan Blog Project, we’re also drawing close to Eostre. This is the Pagan and Heathen celebration of A) the spring equinox and the coming of warm weather and lighter days and B) the Goddesses of spring and renewal, among them Eostre, Who gave Her name not only to our celebration (also sometimes called Ostara after another Goddess of spring, Who might be the same Goddess as Eostre but with a different regional name or Who might be a completely separate Deity) but to the Christian Easter as well. (Come on, folks, did you really think the eggs and bunnies had anything to do with Jesus?) Moreover, the spring equinox is one of the three holy tides that we know for sure were celebrated throughout Northern Europe. Other holidays might vary from region to region, tribe to tribe, but these three remained consistent (the other two being the Solstices).
Until very recently, I never gave much thought to Eostre. I considered it a “lesser” holiday and would often allow it to pass with the most minimal of observance. I far preferred the cold season, the time from Winternights in October through Yule, the time when the Wild Hunt was said to ride, the time given over to the ancestors, the Mothers, and Odin. All of that changed, however, when I moved out of New York City. Suddenly, I found myself ensconced in the seasonal rhythms. I found myself unable to ignore the cycles of the land, the feel of the soil as it prepared for winter slumber, and moreover as it began to awaken in the spring. It was all around me and, as I now had a parcel of land for which I was responsible, I began to sit up and take notice. With that miniature epiphany, I found myself coming to crave Eostre with a deep physical and spiritual ache. I began to long for this holy tide and suddenly it didn’t seem ‘lesser’ at all. Suddenly it seemed crucial, valuable, a doorway marking the transition between death and rebirth, winter and summer, darkness and light, fallow and fertile. Suddenly my entire relationship to this Goddess, this celebration, and this time of year was transformed.
Eostre really is a magical time. Its power lies in part in its liminality. The equinoxes are liminal times: the earth is neither fully awakened nor still fully asleep. They mark periods of transition, of awakening, of initiation. They mark the passage through a place, a time, a state of being that is neither one thing nor the other and such places are tremendously important for us spiritually and emotionally. They are the places wherein we are given the opportunity to open up and grow a little ourselves, to move beyond our baggage, to reach out and, since we are speaking of things associated with the spring equinox, to embrace the light. They give us a chance to reawaken our passions and reorder our priorities. Liminal times and places not only provide a chance to drink deeply of the sacred, but they allow us the opportunity to remake ourselves: our hearts, our minds, our spirits, in the wake of that sacred drink as we go forth, through the passage of the holy tide, into new life, new birth, new growth and hopefully, greater awareness. Moreover, Eostre being what it is, we can do so with joy because over and above anything else, this is a holy tide resplendent with joy.
You can feel the potentiality bubbling up in the land, like laughter too long impishly suppressed. You can feel it in the life that is bursting into flower everywhere at this time. It’s palpable if one stops long enough to listen; and to feel. Eostre is all about sensation—a riotous exaltation of the senses and our capacity to enjoy them. It’s about taking that pause, that breath, and it’s also about blessing the creativity – the fertility not necessarily of body, though that is certainly part of this season, but of mind, heart, and spirit. This holy tide is about opening up and stretching our wings after the enforced constraint of winter. It’s about the grace of being alive, awake, and capable of feeling joy. It’s about sensuality---however one chooses to express or embrace it, Eostre is, in some very deep way, about sensuality, a glorious celebration of the sensorium which, after the fallow time of winter, has the chance to glory in the gifts brought in the wake of the brightening land.
The spring equinox also knits together two very disparate seasonal experiences. It stands holding a place between the fallow, resting cold of winter and the burgeoning heat of summer. It’s a doorway between barrenness and fertility, always leaning toward the latter as the fall equinox leans toward the quiet pause of the former. Here, the elegance of autumn is replaced by the unbounded delight of spring. There’s a momentum awakened and unleashed here that reaches its apex at midsummer. There’s a drive, a joyous exuberance.
We have several Deities traditionally associated with this time. The first, of course, is Eostre and Her continental cousin Ostara. Then of course there is Hreðe, of Whom I’ve written quite recently on my blog (http://krasskova.weebly.com/1/post/2012/02/adorations-to-hree.html), and finally we have the moon God Mani and the Sun Goddess Sunna. Given that Eostre proper is a day when light and darkness are equally balanced, it’s quite appropriate to give special honors to our celestial Deities too. Folks wanting to learn more about these two Deities can look here: http://krasskova.weebly.com/the-house-of-the-moon.html or check out my book “Day Star and Whirling Wheel,” available here: http://www.asphodelpress.com/devotionals.html. I’m afraid, other than a few sparse references in the surviving sources, there’s not much else out there about these Deities. Still, that shouldn’t stop us from honoring Them. In fact, it provides a great impetus for throwing ourselves into celebration of these Holy Powers, unburdened as we are by any artificial constraints of ‘lore.’
I’ve also written about Ostara/Eostre before. Those articles can be found, in no particular order, here:
In the meantime, with Eostre less than two weeks away, I leave you with a series of Adorations to this most delightful of Goddesses.
28 Adorations to Eostre
I adore You, Goddess of spring.
I adore You, Goddess of the wet and fertile field.
I adore You, Ever-brightening Dawn.
I adore You, Who hides Your mysteries in liminal places.
I adore You, Rebirth.
I adore You, Renewal.
I adore You, aching tug of awakening hungers.
I adore You, Goddess of adolescence.
I adore You, Goddess of bursting bloom.
I adore You, Goddess of the new season.
I adore You, Goddess of New Growth.
I adore You, Who awakens the womb of the earth.
I adore You, Who brings fertility.
I adore You, laughing dawnlight.
I adore You, Who looses the hare.
I adore You, Who quickens the belly.
I adore You, Who fills the egg with life.
I adore You, Holder of all potentiality.
I adore You, Who opens the passage from winter to summer.
I adore You, Whose gentle caress causes winter to yield its sway.
I adore You, Who sweeps away the cold with a kiss of light.
I adore You, Alluring One.
I adore You, Who delights in the rising cock.
I adore You, Who delights in the wet cunt.
I adore You, Goddess of playful delight.
I adore You, friend of Mani.
I adore You, friend of Sunna.
I adore You, Eostre.
May You be hailed at this time, as cold turns to warmth, darkness to light, winter to summer, fallow land to fertile growth.
And to all my readers, may this coming holy tide, Eostre, Ostara, the Spring Equinox, be kind to you all. May it fill your homes and hearts with joy and may the works of your hands prosper. Happy Eostre.
(The picture below is by Mary Ann Glass. The eggs were painted by her mother, Evelyn Tron Glass. Holiday cards with this image are available. The photographer may be contacted at http://maryannglassphotos.blogspot.com/.)
If you would like to show your support and appreciation for the work that I and House Sankofa are doing, there are several things you can do.
1. Buy my books. You can find a list of my current publications here: http://amzn.to/YtlrLq. This takes you to an Amazon page listing my books.
2. Have me do a divination for you, or order a gift certificate for a friend.
3. Have me write a prayer or series of Adorations for the Deity of your choice. I take commissions at $15/prayer. Money goes toward offerings and shrine maintenance.
4. Buy something from my marketplace page.
5. Make a donation via PayPal. I have an account at tamyris at earthlink.net. Thank you.
Galina and her kindred donate quarterly to The Big Sur Land Trust. See the links page.
The crystal structure of pectate lyase Pel9A from Erwinia chrysanthemi.
The "family 9 polysaccharide lyase" pectate lyase L (Pel9A) from Erwinia chrysanthemi comprises a 10-coil parallel beta-helix domain with distinct structural features including an asparagine ladder and aromatic stack at novel positions within the superhelical structure. Pel9A has a single high-affinity calcium-binding site strikingly similar to the "primary" calcium-binding site described previously for the family Pel1A pectate lyases, and there is strong evidence for a common second calcium ion that binds between enzyme and substrate in the "Michaelis" complex. Although the primary calcium ion binds substrate in subsite -1, it is the second calcium ion, whose binding site is formed by the coming together of enzyme and substrate, that facilitates abstraction of the C5 proton from the saccharide in subsite +1. The role of the second calcium is to withdraw electrons from the C6 carboxylate of the substrate, thereby acidifying the C5 proton, facilitating its abstraction and resulting in an E1cb-like anti-beta-elimination mechanism. The active site geometries and mechanism of Pel1A and Pel9A are closely similar, but the catalytic base is a lysine in the Pel9A enzymes as opposed to an arginine in the Pel1A enzymes.
Ontario just dropped a staggering $11 billion to start building the first-ever high-speed commuter train line. The trains will travel at 250 km/hour to get travellers from London to Union Station in a little over an hour. Considering how far people are willing to commute these days, this train seems like a godsend.
Premier Kathleen Wynne says a high speed rail line is on track to carry passengers between London and Toronto by 2025 with the help of an $11 billion investment from the province.
https://t.co/zkXnH279ou — 980 CFPL London News (@AM980News) April 6, 2018
The plans, announced by Premier Kathleen Wynne, are apparently well underway. Station stops have already been decided. They include London, Kitchener, Guelph, Union Station and Pearson Airport. The second phase of the project would include stops in Windsor and Chatham.
Via Dreamstime/Yinan Zhang
As far as commute times and travel opportunities in Ontario, this train line could be a game changer -- but not everyone's on board. There's definitely concern that, because of how expensive the train line will be to build and operate, it may not be a realistic option for everyday commuting. Can the government keep the ticket prices low enough so people can get to work without going broke?
Big news for @RegionWaterloo. #OntarioBudget 2018 promises $11 billion towards phase one of high-speed rail between Toronto to @CityKitchener and on to London! Major vote of confidence in importance of #TOWRCorridor for Ontario's future! pic.twitter.com/PtLWufvIRA — Berry Vrbanovic (@berryonline) March 28, 2018
In one article, a transportation policy expert says that there are only two similar high-speed rail lines in the world that actually make any money now that they're up and running. One is in Tokyo and the other is in Paris. In Ontario, especially with station stops in tiny cities like Windsor and Chatham, the loss of money could be devastating.
But, the government has already committed to the $11 billion investment. Service for the train line could begin as early as 2025. While there are lots of promises that this project will change commuting in Ontario for the better, there are also tons of unanswered questions.
Source: Macleans, Global, Government of Ontario
Stereoselective acylation of the E,E-vinylketene silyl N,O-acetal and its application to the synthesis of khafrefungin.
Stereoselective acylation of the E,E-vinylketene silyl N,O-acetal possessing a chiral auxiliary has been achieved by using acid anhydrides and SnCl4. Acid anhydrides having alkyl chains gave the adducts in excellent stereoselectivity. The formal synthesis of khafrefungin has been accomplished by the methodology.
Produce
The avocado market remains well supported. Demand is surging this week for the upcoming Super Bowl. Further, there was little to no growth in avocado imports from Mexico last week. History suggests that avocado prices may remain firm. The Hass 48 count avocado market has averaged flat to higher during February six of the last nine years. Tomato supplies from Mexico remain deficient. U.S. tomato imports last week were 15.5% less than prior year. Tomato supplies are expected to improve soon. Tomato prices have averaged below January during February in six of the last eight years.
Grains
The soybean oil and palm oil futures markets have fallen sharply during the last week due in a large part to consumption concerns in Asia. The coronavirus has dampened celebration events and food consumption for the Chinese New Year holiday. There could be more downside potential in food oil prices in the near term.
Dairy
The spot butter market this week is the lowest in 39-months. Per the USDA, butter inventories in December were 5.9% larger than the prior year, but it was the smallest build for the month since 2016. Butter prices may still fall in the near term, but some long-term forward buying can be considered. The cheese markets are experiencing weakness, but cheese block prices are still inflated for January. Domestic cheese stocks on December 31st were down 2.2% from 2018 and it was the largest drawdown for the month since 1987, signaling solid usage. This year’s seasonal price declines for cheese may be tame.
Beef
Mid-January packer production schedules surprised the industry, with beef output last week jumping 2.5% over the previous week and were 7.3% more than last year. Wholesale beef sales are robust, with movement in the more deferred time frames remaining the most active. Despite the recent sell-off on the live-cattle futures, anticipate beef merchandising to remain active into the late-winter and early-spring. Seasonally, beef prices are expected to move mostly sideways into mid-February before escalating into the early spring, but this year’s seasonal upside development could be here earlier than history has shown.
Pork
Pork production last week was up 9.4% (wow) and was 9.9% over last year. Packers are likely pulling on hogs to fill schedules, but we’re not looking for the strong year-over-year output gains to continue. Still, amid larger production, wholesale pork prices are rising, with bellies and hams leading the way. The USDA pork cutout is up more than 13% (yoy), but larger supplies may pressure prices lower later this year (Q4). Anticipate U.S. pork export interest to be robust, but the current coronavirus problem is casting some doubt on those expectations.
Poultry
Chicken production remains robust, with chickens processed recently up 4% (yoy), but heavier bird weights have boosted ready-to-cook supplies, with the 6-week production total up 8% from last year. While data continues to suggest that near-term broiler production schedules will remain robust, producer margins are starting to fade amid struggling breast meat prices. In fact, breast meat prices have been fading counter seasonally throughout early 2020, and this week was the lowest for any week in Q1 since at least 2000. Longer-term contracting could be considered. Upside price action for wings and leg quarters have slowed, but strength should resume deeper into the winter.
Seafood
The salmon markets have been firm during the last several weeks due in part to smaller imports from Chile. During November, total U.S. salmon imports were up 2.1% from prior year with farmed filet imports up just .1%. Total salmon (10.7%) and salmon filet (5.4%) imports from Chile during the month were both lower than 2018. U.S. salmon imports could improve in the coming months. This may temper any seasonal upside in prices.
Oil
The petroleum markets are under selling pressure due largely to the coronavirus outbreak mostly in China, which is tempering petroleum demand, and flight schedules have been reduced. Expect more volatility in the petroleum markets.
Posted by samzenpus on Monday February 26, 2007 @03:58PM
from the get-cooking dept.
honestpuck writes "When reading the foreword of Rails Cookbook I felt a strong kinship with Zed Shaw; I too have fond memories of the first edition of Perl Cookbook and the way I relied on it once I'd taken the training wheels off. Since that one I have relied on several of the O'Reilly Cookbook series. It is only when I discard the early tutorial and dive in the deep end with a "cookbook" on my desk that I really start to learn proficiency." Read the rest of honestpuck's review.
Rails Cookbook
author: Rob Orsini
pages: 514
publisher: O'Reilly
rating: 7
reviewer: honestpuck
ISBN: 0596527314
summary: For programmers who know something about web development but are early in their use of Rails
I felt timorous and unsure when I finished Agile Web Development with Rails, a marvelous tutorial that introduced me to my first real web development framework (I must have enjoyed it, I just bought the second edition). Since I have volunteered to develop a fairly large and complex web application in Rails I awaited the arrival of my copy of Rails Cookbook with hopeful anticipation and bated breath.
Rob Orsini, his fellow contributors (15 in all) and the team at O'Reilly have once again delivered. Compared to the previous titles in the series I've owned Rails Cookbook seems to have fewer recipes but as it is tackling an entire application framework and some serious issues, some of the solutions and discussions run a lot longer. The book is targeted at programmers who know something about web development but are early in their use of Rails, though it should be helpful to all Rails developers.
The book starts with tackling issues of installation and getting development tools installed in the first two chapters. Despite already deploying a couple of simple Rails apps I found that there was the odd useful tip in these chapters. The book then covers each of the three main sections of Rails; Active Record, Action View and Action Controller. The rest of the book goes on with large chapters on testing, Javascript, debugging, performance and hosting and deployment. Along the way it also covers REST, Action Mailer, security, plug-ins and graphics.
The extremely large section on Active Record was to me the most useful. I seem to spend an inordinate percentage of my Rails coding time with Active Record and it contains a large part of Rails power so I appreciated the size of this chapter. By contrast the chapter on graphics is almost entirely unread.
It seems obvious that this book should be compared to Pragmatic's Rails Recipes. The first point of difference is that Rails Cookbook covers installation and setup. The second point is that 'Recipes' covers Rails 1.1 while 'Cookbook' targets the brand new Rails 1.2. As a project fairly new on the scene, Rails is a fast-moving target, so the six months between the two books makes a difference. Both books have excellent coverage of the various aspects of Rails, with a great deal of overlap. 'Recipes' has more, shorter pieces while 'Cookbook' tends towards longer pieces with more discussion. 'Cookbook' is also more general, with more of its recipes likely to be useful in every Rails project you write.
The style is different between the two. Here 'Cookbook' comes off second best: it feels as though it was tightly edited by a number of hands and ends up lacking personality, functional but cold compared to 'Recipes'. The writing, however, is good. It's easily read; at times it feels like a good textbook. The layout is clean, and it is easy to find the information you need from each recipe when you want it.
With almost all "cookbook" style books I seem to be left feeling that a number of the recipes are just a little too obvious and covered well in beginner tutorials. There is some of this in Rails Cookbook, most notably the first two chapters, but overall the book will be useful to any beginner to intermediate Rails programmer. Personally I had a couple of moments where I read a tip and wanted to scream as it demonstrated and explained in a few short sentences and half a page of code what had taken me hours to discover for myself.
The "Cookbook" series all seem to be books worth the price and shelf space. This one is no exception. I'd give it three out of five with an extra half for its timely information on Rails 1.2 and would recommend it for all Rails programmers from the absolute beginner through to all but the most experienced. If you already have a copy of 'Recipes' and are happy with it then you might want to stick with that till either volume is updated for the next major revision of Rails, otherwise you will almost certainly appreciate a copy of Rails Cookbook.
Then you'll _never_ touch Python! I have a co-worker who loves it and whenever I say something like, "I'm going to write a small program around a few syscalls and some low level bit twiddling in C" his response is always "Oh, you know Python can do that, right? And it'll be faster!"
It actually looks like a decent language, but he's turned me off to it (and it seems to grab a lot of the things I don't like from perl [disclaimer: I like perl] and very few of the ones that I do). I'll learn it sometime, but I
I'm a geek who uses technology in the service of the arts, but I've never been a programmer. Can anyone recommend a language in which I can learn the basics of programming, but is still powerful enough for me to do useful things? I'm comfortable with a soldering iron and piano, but the mysteries of writing code have always been outside my ken. I don't want to become a professional, I'd just like to know what to do when I have to write a script and make simple web apps. The only real programmers I know
......... no, seriously, help this guy out. I'm not about to recommend PERL or Java or SQL, even though that's really all I'm semi-literate in.
And out of those, only java might come close to the requirements. I'd like to know what you guys recommend as well
C'mon, I was just about to recommend Java. It's a good language to start off with since there's a ton of information out there, it's a flexible language that you can use for tons of different use cases, and it's a good introduction to OO programming, which if you're starting from scratch, is the way I'd go. I would stay away from stuff like PERL and PHP since the former is just an explosion at the punctuation factory, and the latter could help instill bad practices from the get-go since it's so easy to h
I'd like to append that perl is a good language for people who are heavy command line users. It was originally designed as bash+, and it still fills that niche quite well. Its OO is truly horrid though, and for the one who was asking, I'd agree, perl is bad for him.
Python's definitely a good fit. It's not my favorite language in the world, but it's got sensible syntax that isn't too alien (the indent thing really won't bite you unless you have a really crappy editor or you copy and paste code). And it also has loads of libraries, good support for all major platforms (OSX support is so-so, but pyobjc is nice), and a lot of people who can help you out (#python on irc.freenode.net for example).
Here's the bad part: there are virtually no decent introductory programming texts for Python or most other languages. Most of them tacitly assume you know some other language and gloss over basic things like structuring a program with control flow, functional (de)composition, and proper use of objects, or they make a hash out of trying to introduce them. To that end, I'd really recommend Structure and Interpretation of Computer Programs (aka SICP) as a learning text, but dear lord is it tedious and didactic. It's also going to teach you an abstract way of thinking that doesn't really map to Python's practical structures (i.e. you're never going to use tail recursion generally, let alone an amb operator).
I really love smalltalk for playing around with programming, not so much for the language itself (it's just okay) but for the way that you don't think about "compiling" or "modules" so much as you just have objects that you fiddle around with, and your changes just happen. Unfortunately, Squeak is such a poorly-documented disastrous hodgepodge that I can't recommend it to new programmers.
So it's kind of a desert out there for decent introductory texts, but a language with good community support and mostly helpful people will be a big boost, and python does stand out.
I don't want you to think I asked the question and don't appreciate your taking the time to respond. I'm looking for some python books now. I guess I need some kind of compiler or libraries or something, so I'm going to search those out right now. Say, do I need to work in Linux to write python? I don't mind, I've got a machine around here I can use for Linux. I was waiting for the UbuntuStudio anyway, so I will have such a system soon.
Python works great on windows as well as any unix. I recommend unix systems in general for development, but it's mostly a matter of using what you're comfortable with. Most tutorials are going to assume you're on Linux. And I see you've already discovered where you can get the interpreter from python.org. Ubuntu comes with python, but not necessarily a full install -- you'll probably want to install python2.5 from apt anyway.
Here's a dopey question: I've got to use my Windows system for music and video production. I will set up an Ubuntu box next weekend, but I'm between projects and would like to spend some time playing with Python. If I install Python and the Win32all library and an IDE (maybe Boa Constructor or one of those), there won't be anything that runs using resources when I'm not actually working with Python, right? I mean, there won't be any libraries or little thingies running in the background that will take resourc
Python won't install any extra daemons or services out of the box, and no python package you install is going to do that either. The only additional resource it'll suck up is disk space.Boa Constructor is actually pretty awful. PyDev for Eclipse is decent -- yeah, it's a Java IDE, but it's a good plugin. Now THAT will eat resources when you run it. For a lighter alternative, you can always use emacs if you can get used to its weirdness, and it has a great python mode (never thought I'd call emacs slim).
Idle comes bundled with the Windows installer. I use that all the time. It is pretty simple, a colourized text editor mostly, but it doesn't have the pokey feel of Eclipse. About my only complaint about Idle is that it doesn't have any line numbering down the left side of the window. Instead the line numbers are in the lower right corner in a box which makes me look away from my code. It might be just me though. A good intro to Idle follows: http://hkn.eecs.berkeley.edu/~dyoo/python/idle_intro/index.html [berkeley.edu]
The answer is of course Ruby. You can start with Chris Pine's Learn to Program [pine.fm] which walks you through the basics of Ruby programming, or for a more psychedelic way of learning programming there is always Why's poignant guide to ruby [poignantguide.net]. Then when you get advanced enough (you've read the pickaxe and possibly the ruby way) you can start working on the bi-weekly Ruby Quiz [rubyquiz.com].

/ruby fanboy
Ruby or Python are probably the best places to start; they're not going to be suitable for every type of programming you do, but they might teach you some good habits before you learn other languages with horrific syntax and dangerous pointers, and you can get going very quickly.For Ruby, there's Chris Pine's Learn to Program [pine.fm] or Why the Lucky Stiff's Poignant Guide to Ruby [poignantguide.net] which is whimsical, but does pretty much the same thing, or just go to Try Ruby [hobix.com] and type help.
Go straight to Objective C on OS X. Apple's development tools (e.g. XCode with Interface Builder, free) make it pretty easy to get started and the language is well established so you can find books, etc. More importantly, you'll be programming in an environment with first-rate multimedia support.
I held off learning python for almost two years for this very reason (meeting someone who was slightly too enthusiastic in their advocacy). Turns out though that it actually *is* quite a nifty language;)
People don't make a big enough fuss about the interactive interpreter, partly because it's hard to describe why it's so useful. Try it out though, starting with diveintopython.org.
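The kind of quick, exploratory feedback loop the interactive interpreter gives you is easy to show. Here is a minimal Python sketch of the sort of introspection you might type straight into a session (the `str` examples are arbitrary, chosen only for illustration):

```python
# dir() lists an object's attributes -- handy for discovering an API
# without leaving the prompt.
methods = [name for name in dir(str) if not name.startswith("_")]
print("str has", len(methods), "public methods, e.g.", methods[:3])

# The text behind help() is also available programmatically via __doc__.
print(str.upper.__doc__.splitlines()[0])

# And you can test an idea immediately, with no compile step.
print("spam".upper())  # SPAM
```

Pasting lines like these one at a time into the `>>>` prompt is the workflow being praised: inspect, read the docs, try it, all in one sitting.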
Make Rails thread-safe so the only option isn't to run multiple applications sucking up even more memory, and I might be inclined to think it is a useful framework. WTF?! No, I don't want to run a "pack of mongrels" or more than one FCGI process. Give me a break. Rails is awesome for developers (cause it's easy) but from a system and resource POV it's atrocious. And I thought I'd never find something that I disliked as much as PHP (due to security concerns; again, don't get me wrong, it has its place).
I think this has been reviewed once or twice before on Slashdot... but I digress.
I agree that this is a fantastic book, as it shows you some incredibly slick stuff you can do using Rails. But unless you already have somewhat of an understanding of Ruby then I'd strongly recommend getting a separate reference book just for Ruby by itself. O'Reilly makes one of those, too. :)
> then I'd strongly recommend getting a separate> reference book just for Ruby by itself.
The Ruby Way [amazon.com] is an excellent book for that, plus, the author, Hal Fulton, is a nice guy. And his RubyForge user account name [rubyforge.org] is "hal9000", for which he gets additional points.
I had this book for about 26 hours before I returned it, I was deeply displeased by the repetition from the existing work, Rails Recipes. All the cookbook entries about model relationships, polymorphic associations, etc, were lifted straight from Rails Recipes, right down to using Magazines, Readers and Subscriptions as the example objects.
And, while the book has a shiny "Rails 1.2" badge on the cover, very little of it had anything to do with Rails 1.2 whatsoever, there were only a handful of recipes in the very back which dealt with the new features.
Plus, was it really necessary to burn 3 pages talking about how to join a discussion group of fellow Rails developers? If you're a web developer and you can't find an online community to discuss the language/framework, you need more help than Rob's book is able to offer...
Jaredbpd,
I am sorry to disappoint you but nothing was lifted from Rails Recipes. I would challenge you to find a recipe that mixes join models and polymorphism in Rails Recipes. I wrote the last 2 recipes in that chapter and that code is taken straight out (in simplified form) of one of my applications. I actually wrote the join model and polymorphism one and then I was asked to write one just about polymorphism as an introduction to the concepts. The editor thought that my recipe was a little too advanced.
Also, that blog post has a ton of errors. Here's one: "If you want to write a Web application in Ruby, there is only one solution. Only one. Ruby on Rails." Hm, how about Camping [hobix.com] or Nitro [nitroproject.org]?
Rails scales perfectly well, just the same as any other shared-nothing approach.
You don't think the fact that people naturally think imperatively and not functionally has no bearing on the situation? When it's the difference between lisp, which was marginally easier to develop with once you understood it, or cobol, which was easier to understand, but harder to develop with once you knew it, which do you think people would choose?
Cobol more than nine times out of ten...and with the increase in coders, there was an increase in available code. Pathways to solve common problems were made a
I don't know how people "naturally" think; for me grokking functional programming was one of those eureka! moments and it really felt natural. I still write a lot of functional stuff even in procedural/OO languages; Ruby lends itself well to that. But I agree with your larger point. One of the things that really blew me away about RoR was seeing someone implement a really good declarative security mechanism [writertopia.com] in about 300 lines. The Java version (JAAS) is an entire library that I never was able to fully figur
Yes, the algorithm is confusing. However, the C version is longer, but not *that* much longer. It also doesn't have any comically-long symbol names like multiple-value-bind. Granted, that's not an intrinsic problem with Lisp, only with ANSI Common Lisp, but prefix math is also difficult for humans to deal with. I don't know if it would be any easier if we learned math that way from the start, but I suspect it's just not how the human brain works. I like Lisp and Paul Graham's essays almost had me won over, b
I read that almost a year ago and it's a great post. I agree with most of it yet still use Rails every day. Notice in it he's giving an opinion about the future of Rails. However, right now it's an amazingly productive framework to develop in if you're targeting startup web applications. To me the development tools for Rails are like a holy grail. Coming from .NET and Windows, switching to all Mac for Rails has given me a sort of coding nirvana that I didn't think possible. Developing code, writing tests, reusing ge
First let me disclose that I have worked with Java web apps using Kiva, JRun, ATG Dynamo, Tomcat, WebSphere, Spring Framework, JSEE. I have also worked with Perl+CGI, a smattering of ColdFusion, plain old PHP, Drupal, and CakePHP (a Rails-like PHP framework). I am currently working on deploying a Rails app. So what is my take on Ruby/Rails?
Briefly here are the pros/cons as I see it:
pros:
1) Ruby feels good to program in. Like Perl, PHP, or LISP you can
lay out data using the language itself. S
I own both titles ("Rails Recipes" by Chad Fowler) and IMO neither does a great job -- there's nothing in this book not already covered by the definitive DT/DHH Agile Rails Development book. About the only redeeming value was the information on Mongrel and the detailed instructions to get Rails installed and running on all the different platforms. Speaking of which, though, devoting a good chunk of paper real estate to installation seems to be space better left for meatier topics. Especially on a top
By Sajjad Din, M.Sc., P.Geo.
Generally speaking, remediation or reclamation is a process in which consultants, contractors, construction managers, engineers and scientists conduct a series of steps to return contaminated land to its original, pre-human-activity state, in terms of the concentrations of various compounds in the soil and groundwater.
A risk assessment scientifically assesses the potential risk that exists for humans, plants, wildlife and the natural environment from exposure to a contaminant. The purpose of a risk assessment is to develop site-specific standards that will allow proposed uses, such as residential, to take place on the property.
Standards and Regulations
In Ontario, remediation may only require clean up to certain standards as outlined in the Soil, Ground Water and Sediment Standards for Use under Part XV.1 of the Environmental Protection Act. In many circumstances, achieving cleanup to these standards may be physically or chemically difficult due to various factors such as existing infrastructure, current land use, depth of impacts, soil types or shallow bedrock. For example, impacted groundwater may not be treatable due to it being at depths greater than 20 metres within the bedrock. Cleanup cost for said groundwater may be astronomical and hence beyond the financial reach of the owner or interested parties.
Prior to completing a remediation program, a remedial options feasibility assessment should be carried out in order to assess which cleanup method or methods would be most suitable and effective. This may also be called a comparative analysis, and it is used to prepare a Remediation Action Plan.
As part of this process, the options of natural attenuation and/or a risk assessment should be considered. Natural attenuation considers the allowance for the natural breakdown and reduction in contaminants over time. This is verifiable by monitoring, i.e., testing of samples collected in the field. Risk assessments consider existing natural barriers or placing artificial administrative or engineered barriers between human occupants of a property and the contaminants present.
Municipalities want to curb urban sprawl, and brownfields redevelopment is one way to do this. The site assessment process is a significant part of urban brownfield development. However, once Phase I and II site assessments are complete and the areal and vertical extent of impacts have been defined, depth to bedrock determined, soil characteristics verified, hydraulic conductivity and groundwater flow direction ascertained, it is still not always clear what remedial approach would be best suited to a site. This is because there may still be uncertainties regarding subsurface biochemical oxygen demand and other soil and groundwater characteristics.
A major factor in determining remedial methods is the type of pollutant and its potential mobility and reactivity. Most contaminants can be broken down into volatile organic compounds, petroleum hydrocarbons, metals and inorganics, pesticides/herbicides and polycyclic aromatic hydrocarbons. The type of contaminants found should be considered when deciding on whether to simply risk assess. This is because the mobility and the fate and transport of them will determine the level of risk to the local human and ecological populations.
Remediation Cost
With remediation one can aspire to end up with a property that meets applicable standards. Ideally, all impacted material is removed or treated. This allows the site owner or developer to carry out any type of land use they wish. Also, lending institutions are more willing to front the funds to purchase such a property.
However, remediation work may cost millions, depending on the level and extent of contaminants of concern on site, soil types, existing buildings, infrastructure and groundwater conditions. The remedial approach may involve both ex situ and in situ treatment methods. Additionally, there would likely be disruption to site activities.
Though certain costs are inherently a part of any risk assessment process, this route may be far more cost-effective and time-efficient. With remediation projects, specifically in situ, there is an inherent uncertainty in the amount of time required for completion. The process would require study of which site-specific standards should be developed to allow contaminants to remain on site in higher concentrations than would otherwise be permitted under the generic Ministry standards.
For risk assessment, a team of professionals with diverse skills in science and engineering would develop specific standards based on the current and proposed site use. This would involve an assessment of site conditions such as geology, concentrations of contaminants, human occupancy, building structures (existing and proposed) and infrastructure. Based on these parameters, acceptable levels of contaminants left in place are determined. Additionally, certain engineering controls such as sub-slab ventilation, concrete barriers, solidification/stabilization, Waterloo Barriers and underground slurry walls are introduced into the assessment calculations.
In the long run, a wiser use of both remedial and risk assessment options will be required for greater urban redevelopment and revitalization. Too many contaminated properties lie unused that could be redeveloped if a risk assessment is conducted and the risk posed by on site contamination is deemed to be acceptable.
Sajjad Din, M.Sc., P.Geo., is a part time professor at Seneca and Centennial Colleges and also a consultant with Toronto Inspection Ltd. (References are available upon request)
Low Tie Compared to High Tie Vascular Ligation of the Inferior Mesenteric Artery in Rectal Cancer Surgery Decreases Postoperative Complications Without Affecting Overall Survival.
The aim of this study was to determine the clinical impact of low tie ligation (LT) of the inferior mesenteric artery (IMA) below the left colic artery versus high tie ligation (HT) at the origin of the IMA in patients undergoing rectal cancer surgery. Between January 2005 and December 2017, all consecutive patients who underwent rectal resection for non-metastatic cancer were retrospectively included. Patients who had LT were compared to those who had HT. Overall, 200 patients were identified (101 HT and 99 LT). The postoperative 30-day mortality rate was nil in both groups. There was a significantly higher rate of severe postoperative complications (Clavien-Dindo III-IV) in HT versus LT patients (18.8% vs. 9.1%, p=0.048). Median follow-up was 38.5 months; overall survival at 5 years was 91.5%, with no difference between the two groups (90.1% vs. 92.9%, HT vs. LT, p=0.640). LT ligation of the IMA significantly decreased the severe postoperative complication rate without affecting recurrence-free or overall survival.
Thymic extracellular matrix in human malnutrition.
Previous studies have shown that malnutrition severely affects both lymphoid and epithelial components of the thymus. Yet, few data are available concerning the extracellular matrix (ECM) of the thymic microenvironment in malnutrition. We studied by histological, ultrastructural, and immunohistochemical means thymuses obtained in necropsies from 19 malnourished children. We observed a consistent increase in the intralobular ECM-containing network which could be ascertained histologically by the dense reticulin staining. This abnormally dense ECM network contained fibronectin, laminin, and type IV collagen. Importantly, the enhancement of thymic ECM in malnourished individuals positively correlated with the degree of thymocyte depletion. This correlation may represent a cause-effect relationship in which the contact of thymocytes with abnormally high amounts of thymic ECM triggers and/or enhances programmed cell death.
---
abstract: 'We show that the technique known as concatenated continuous dynamical decoupling (CCD) can be applied to a trapped-ion setup for a robust implementation of the quantum Rabi model in a variety of parameter regimes. These include the case where the Dirac equation emerges, and the limit in which a quantum phase transition takes place. We discuss the applicability of the CCD scheme in terms of the fidelity between different initial states evolving under an ideal quantum Rabi model and their corresponding trapped-ion realization, and demonstrate the effectiveness of noise suppression of our method.'
address: 'Institut für Theoretische Physik and IQST, Albert-Einstein Allee 11, Universität Ulm, 89069 Ulm, Germany'
author:
- 'Ricardo Puebla, Jorge Casanova and Martin B. Plenio'
title: A robust scheme for the implementation of the quantum Rabi model in trapped ions
---
[*Keywords*]{}: Dynamical decoupling, trapped ions, Rabi model\
Introduction
============
Quantum coherence is an essential prerequisite to observe and exploit the intriguing phenomena in the quantum realm [@Nielsen00]. Indeed, technologies relying on those quantum properties are expected to surpass their classical counterparts in efficiency and performance. This new generation of quantum technologies encompasses a large diversity of possible applications which include quantum simulation [@Feynman82], quantum metrology [@Giovannetti04], quantum communication [@Gisin07] and quantum sensing [@Wu16], all of them requiring the preservation of quantum coherence for their correct functioning. In this respect, the loss of quantum coherence, or simply decoherence, is a crucial limitation as it occurs due to the unavoidable interaction of the quantum system with an uncontrolled environment as well as to the presence of experimental imperfections. Hence, the long-time maintenance of the quantum coherence of an evolving system is highly desired although its realization constitutes a formidable task.
During the past decades considerable efforts have been invested in the development of theoretical schemes to circumvent, as much as possible, the effect of noise on the system with the goal of prolonging coherence times. Among them we find techniques such as decoherence-free subspaces [@Lidar13], quantum error correction [@Lidarbook13], or dynamical decoupling [@Souza12]. These are methods designed to handle specific noise scenarios, and present different benefits concerning noise suppression. In particular, dynamical decoupling constitutes a promising tool to handle non-Markovian noise, and it is the central object of study in this article. In its continuous wave configuration, the effect of dynamical decoupling corresponds to the creation of a dressed basis with an energy gap such that, under certain circumstances that will be later developed, the effect of noise is suppressed. In addition, this technique allows for a [*concatenated*]{} configuration known as *concatenated continuous decoupling* (CCD) [@Cai:12] that consists in applying concurrently different driving fields to eliminate further sources of noise, including those from imperfect driving fields themselves. Standard dynamical decoupling has been theoretically proposed in its continuous [@Bermudez:12; @Lemmer:13njp; @Cohen15; @Mikelsons15] and pulsed [@Souza12; @Carr54; @Meiboom58; @Casanova15] configurations. Furthermore, these techniques have already been used in both radio frequency and Penning traps in [@Timonei11; @Tan13] (continuous case) and in [@UyS09; @Biercuk09; @Biercuk09bis; @Biercuk09bisbis] (pulsed case) as a method to suppress noise on the registers and to drive robust single- and two-qubit gates. Furthermore, dynamical decoupling has been used to explore different models involving spin-spin interactions [@Cohen15bis].
On the other hand, the CCD scheme has experimentally demonstrated its feasibility to preserve the coherence of an isolated nitrogen-vacancy center in diamond [@Cai:12]. However, the convenience and possible benefits of the CCD method in an ion trap platform for quantum simulation purposes has not been proven yet.
In the present article we show how to apply the CCD scheme in a trapped-ion setting for a robust implementation of the paradigmatic quantum Rabi model that describes the interaction between a two-level system and one bosonic field mode. Despite its apparent simplicity, this model exhibits a rich variety of physics, ranging from the relativistic Dirac equation [@Lamata07; @Gerritsma09; @Casanova10r; @Gerritsma11] to critical phenomena as it can undergo a second-order quantum phase transition [@Hwang:15; @Puebla:16]. We demonstrate that, within the CCD scheme, high fidelities can be achieved and maintained during long evolution times in an ion trap setup in the presence of different noise sources and realistic conditions. While an experimental verification of such a scheme in an ion trap is still required, the present theoretical results are promising and open the door to the study of robust and noise-resilient trapped-ion quantum simulations.
We exemplify and support by means of detailed numerics the applicability of the CCD scheme realizing the quantum Rabi model in three different parameter regimes. First, the case where the energy splitting of the two-level system matches the motional frequency and the rotating-wave approximation can be applied. In this situation the Jaynes-Cummings model [@Jaynes63] emerges and we can observe Rabi oscillations. Second, the realization of the Dirac equation [@Lamata07; @Gerritsma09; @Casanova10r; @Gerritsma11] whose main hallmark is the Zitterbewegung, and finally, the extreme parameter regime [@Casanova10] required to witness critical dynamics as a consequence of the emergence of a second-order quantum phase transition in the limit of strong coupling [@Hwang:15; @Puebla:16]. Additionally, we discuss possible drawbacks in the CCD scheme and identify particular situations where the method does not lead to an improved performance.
The present article is organized as follows. In Sec. \[sec:OU\] we introduce the Ornstein-Uhlenbeck stochastic process [@Orstein:30; @Wang:45], which we will use to model fluctuations in the trapped-ion setting as well as in the externally applied control fields. In Sec. \[sec:ccd\] the CCD scheme is presented and explained. Furthermore, we show how CCD adapts to trapped-ion Hamiltonians, giving rise to a noise-protected quantum Rabi model, in Sec. \[sec:TI\], while specific examples and their numerical simulations are shown in Sec. \[sec:num\]. Finally, we summarize the main conclusions in Sec. \[sec:conc\].
Stochastic fluctuations: Ornstein-Uhlenbeck process {#sec:OU}
==================================================
A quantum system loses its quantum coherence due to an uncontrolled interaction with the environment. Such an interaction introduces stochastic noise or fluctuations in the system that we will model as an Ornstein-Uhlenbeck (OU) stochastic process [@Orstein:30; @Wang:45; @Gillespie:96]. This effective description successfully reproduces the exponential decay of the quantum coherence due to dephasing noise as measured by Ramsey interferometry [@Wineland:98], as well as the behavior of a quantum system under fluctuations in the intensity of the applied radiation [@Cai:12]. Moreover, as we will see later on, it also allows one to vary the width of the spectral density, which quantifies the amount of power per unit of frequency. In this manner the OU process can describe different noise scenarios, and thus, it has been extensively used in the literature [@Bermudez:12; @Lemmer:13njp; @Bermudez:13; @Lemmer:13].
An OU process is characterized by two parameters, namely, $\tau$ and $c$, relaxation or correlation time and diffusion constant, respectively. While the former fixes the time in which the noise is correlated, the latter is proportional to the noise amplitude. A stochastic variable $X(t)$ that obeys an OU process has an exact update formula [@Gillespie:96], $$\label{eq:OU}
X(t+\Delta t)=X(t)e^{-\Delta t/\tau}+\left[\frac{c\tau}{2}\left(1-e^{-2\Delta t/\tau}\right)\right]^{1/2}N(t),$$ for an arbitrary value of $\Delta t$. The term $N(t)$ stands for a temporally uncorrelated normally distributed random variable, i.e., $\overline{N(t)}=0$ and $\overline{N(t)N(t')}=\delta(t-t')$, where the overline denotes the stochastic average. The OU process is Gaussian, and hence, fully determined by its first and second moments, $$\begin{aligned}
\overline{X(t)}&=0 \\
\sigma^2[X(t)]&=\frac{c\tau}{2}\left(1-e^{-2t/\tau}\right),\end{aligned}$$ where $\sigma^2[X]$ denotes the variance of $X$, and thus, $\sigma[X]$ its standard deviation. The power spectrum or spectral density, $S_X(f)$, characterizes the nature of the noise, since it measures the amount of power per unit of frequency of $X(t)$ at a frequency $f$. The stochastic variable $X(t)$ can be written in Fourier series as $X(t)=\sum_n P_n e^{2\pi i f_n t}$ for $t\in[0,T]$ where $P_n$ are the corresponding Fourier coefficients at frequency $f_n$. Then, the spectral density can be defined in the $T\rightarrow \infty$ limit, as shown in [@Wang:45], as $S_X(f_n)=\lim_{T\rightarrow \infty} \frac{1}{T}\left|P_n \right|^2$. The spectral density will be of importance in the next section, Sec. \[sec:ccd\], for the understanding of the noise decoupling efficiency of the CCD method. Indeed, for the particular case of an OU process, $S_X(f)$ can be analytically calculated giving rise to [@Wang:45] $$\label{eq:Sxf}
S_X(f)=\frac{c\tau^2}{1+4\pi^2\tau^2f^2}.$$ Therefore, the relaxation time $\tau$ sets a boundary in the frequency domain between [*white*]{} noise, i.e. $S_X(f)\propto f^0$, and [*Brownian*]{} or [*red*]{} noise, i.e. $S_X(f)\propto f^{-2}$. This *crossover* frequency $f_{cr}$ can be estimated as $S_X(f_{cr})/S_X(0)=1/2$, that is, $f_{cr}=1/(2\pi\tau)$.
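Because Eq. (\[eq:OU\]) is exact for any step size $\Delta t$, sampling OU trajectories numerically is straightforward. The sketch below (plain NumPy; the values of $c$, $\tau$ and $\Delta t$ are illustrative choices of ours, not taken from the text) draws an ensemble of trajectories with $X(0)=0$ and checks that the long-time variance approaches the stationary value $c\tau/2$:

```python
import numpy as np

def ou_trajectories(tau, c, dt, n_steps, n_traj, rng):
    """Sample OU trajectories with X(0) = 0 using the exact update rule
    X(t+dt) = X(t) e^{-dt/tau} + [c tau/2 (1 - e^{-2 dt/tau})]^{1/2} N."""
    decay = np.exp(-dt / tau)
    kick = np.sqrt(0.5 * c * tau * (1.0 - decay**2))
    traj = np.empty((n_steps + 1, n_traj))
    traj[0] = 0.0
    for k in range(n_steps):
        traj[k + 1] = traj[k] * decay + kick * rng.standard_normal(n_traj)
    return traj

rng = np.random.default_rng(1)
tau, c, dt = 50e-6, 1e10, 1e-6     # illustrative: tau in s, c in [X]^2/s, dt in s
traj = ou_trajectories(tau, c, dt, n_steps=2000, n_traj=4000, rng=rng)
stationary_var = 0.5 * c * tau     # predicted long-time variance c*tau/2
empirical_var = traj[-1].var()     # t = 2 ms = 40 tau, effectively stationary
print(empirical_var / stationary_var)   # close to 1
```

The same routine can be reused to generate the dephasing traces $\delta_m(t)$ discussed below; only $c$ and $\tau$ change.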
In Fig. \[fig:FFT\] we show a typical trajectory of an OU process for a fluctuating variable $\delta_m(t)$ and its Fourier transform. Note that $S_X(f_n)\propto \left|P_n\right|^2$.
Here we are interested in magnetic-field fluctuations or simply dephasing noise, which can be written as $H=\delta_m(t)/2 \ \sigma_z$ where $\delta_m(t)$ follows Eq. (\[eq:OU\]). The coherence time of the system depends then on the properties of $\delta_m(t)$. For example, consider an initial state ${{\left|\textstyle{\uparrow}\right\rangle}_x}$ at $t=0$, i.e. $\sigma_x{{\left|\textstyle{\uparrow}\right\rangle}_x}=+{{\left|\textstyle{\uparrow}\right\rangle}_x}$, evolving under $H=\delta_m(t)/2 \ \sigma_z$, then it is easy to prove that $$\label{eq:sx}
{\left\langle\textstyle{\sigma_x(t)}\right\rangle}=e^{-\frac{1}{2}\overline{\varphi^2(t)}},$$ where $\varphi(t)=\int_0^t ds \ \delta_m(s)$ is the time integral of the stochastic variable $\delta_m(t)$ and $\overline{\varphi^2(t)}$ its autocorrelation function that can be written as [@Gillespie:96] $$\label{eq:phi2}
\overline{\varphi^2(t)}=c\tau^2\left[t-\tau\left(\frac{3}{2}-2e^{-t/\tau}+\frac{1}{2}e^{-2t/\tau} \right)\right].$$ The coherence time $T_2$ is defined as the time instant at which ${\left\langle\textstyle{\sigma_x(T_2)}\right\rangle}=e^{-1}$. Hence, from Eq. (\[eq:phi2\]) and (\[eq:sx\]) it follows that $$\label{eq:c}
c=\frac{4e^{2T_2/\tau}}{\tau^2\left(4e^{T_2/\tau}\tau-\tau+e^{2T_2/\tau}(2T_2-3\tau) \right)}$$ that is, for a given $\tau$ and a coherence time $T_2$, the diffusion constant can be determined. Nevertheless, depending on whether the noise is fast, i.e. with short memory, meaning $\tau\ll T_2$, or slow, i.e. with long memory, which corresponds to $\tau\gtrsim T_2$, the coherence decays differently. Indeed, exponential decay is achieved when $\tau\ll T_2$ which is the typical scenario in ion traps [@Wineland:98]. In this case Eq. (\[eq:c\]) acquires a simpler form: $T_2\approx2/c\tau^2$. In contrast, for slow noise a Gaussian decay is observed. In Fig. \[fig:coherences\] we plot ${\left\langle\textstyle{\sigma_x(t)}\right\rangle}$ as a function of the evolution time $t$ for an initial state ${{\left|\textstyle{\uparrow}\right\rangle}_x}$ evolved under fast and slow noise, considering $T_2=3$ ms, $\tau=50 \ \mu$s and $\tau=5$ ms, and $c$ obtained according to Eq. (\[eq:c\]). We can observe how the numerical stochastic average ${\left\langle\textstyle{\sigma_x(t)}\right\rangle}$ agrees with the exact expression in Eq. (\[eq:sx\]).
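Equations (\[eq:phi2\]) and (\[eq:c\]) can be cross-checked numerically: inserting the diffusion constant returned by Eq. (\[eq:c\]) into the phase autocorrelation must give $\langle\sigma_x(T_2)\rangle=e^{-1}$ exactly, and in the fast-noise regime $\tau\ll T_2$ the shorthand $T_2\approx 2/c\tau^2$ should be recovered. A minimal sketch using the $T_2=3$ ms and $\tau=50\ \mu$s values of Fig. \[fig:coherences\]:

```python
import numpy as np

def diffusion_constant(T2, tau):
    """Eq. (eq:c): diffusion constant c that yields coherence time T2
    for OU dephasing with correlation time tau."""
    e1 = np.exp(T2 / tau)
    e2 = np.exp(2.0 * T2 / tau)
    return 4.0 * e2 / (tau**2 * (4.0 * e1 * tau - tau
                                 + e2 * (2.0 * T2 - 3.0 * tau)))

def phi2(t, tau, c):
    """Eq. (eq:phi2): autocorrelation of the accumulated phase phi(t)."""
    return c * tau**2 * (t - tau * (1.5 - 2.0 * np.exp(-t / tau)
                                    + 0.5 * np.exp(-2.0 * t / tau)))

T2, tau = 3e-3, 50e-6                        # 3 ms coherence, 50 us correlation
c = diffusion_constant(T2, tau)
coherence = np.exp(-0.5 * phi2(T2, tau, c))  # Eq. (eq:sx) evaluated at t = T2
print(coherence, np.exp(-1.0))               # the two values coincide
print(2.0 / (c * tau**2))                    # fast-noise shorthand, close to T2
```

The first printout confirms the algebra connecting Eqs. (\[eq:sx\])-(\[eq:c\]); the second shows that the fast-noise approximation is already accurate at the few-percent level for $\tau/T_2\approx 1/60$.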
Concatenated Continuous Decoupling (CCD) {#sec:ccd}
========================================
In this section we explain the technique known as dynamical decoupling in a concatenated scheme (CCD) [@Cai:12] that corresponds to the addition of several continuous decoupling fields. Note that the use of continuous fields, not pulsed, will be maintained throughout the article. Consider a situation where the Hamiltonian is $H=\omega_0(t)/2\ \sigma_z$ where $\omega_0(t)=\omega_0+\delta_m(t)$ with $\delta_m(t)$ the stochastic fluctuation of $\omega_0$, which strongly affects the quantum coherence of the system. Then, in order to eliminate its effects a continuous driving field with Rabi frequency $\Omega$ is introduced. This situation is described by the Hamiltonian $$\label{eq:ccd1}
H=\frac{\omega_0}{2}\sigma_z+\frac{\delta_m(t)}{2}\sigma_z+\Omega\cos(\omega t)\sigma_x.$$ In an interaction picture w.r.t. $\omega_0/2\sigma_z$ we have $$H^I=\frac{\delta_m(t)}{2}\sigma_z +\frac{\Omega}{2}\left[\sigma^+\left(e^{i(\omega_0+\omega)t}+e^{i(\omega_0-\omega)t}\right)+\textrm{H.c.}\right],$$ thus, selecting $\omega=\omega_0$ and invoking the rotating-wave approximation (RWA), the previous Hamiltonian (in the case $\Omega\ll\omega_0$) reads $$H^I\approx \frac{\delta_m(t)}{2}\sigma_z+\frac{\Omega}{2}\sigma_x.$$ The first term on the r.h.s. of the above equation produces no transition in the basis $\left\{{{\left|\textstyle{\uparrow}\right\rangle}_x},{{\left|\textstyle{\downarrow}\right\rangle}_x}\right\}$ as long as the fluctuating term, $\delta_m(t)$, has vanishing Fourier coefficients, $|P_n|\ll 1$, in the vicinity of frequencies $f_n\approx \Omega$. In other words, to protect the system against the noise, the Rabi frequency $\Omega$ must lie in the region in which the noise spectrum is negligible. In this manner, transitions in the dressed basis $\left\{{{\left|\textstyle{\uparrow}\right\rangle}_x},{{\left|\textstyle{\downarrow}\right\rangle}_x}\right\}$ caused by the stochastic term $\delta_m(t)/2 \ \sigma_z$ carry an energy penalty and can be neglected. We will denote this first step as the [*first layer*]{} of protection, since only one additional driving has been introduced. From a more rigorous point of view, the noise elimination is achieved after the application of an RWA on each of the noise components as a consequence of the presence of the term $\frac{\Omega}{2} \sigma_x$. In addition, because the RWA behaves slightly differently depending on the initial state of the system, the proposed method inherits this dependence. Note however the existence of certain states whose evolution under the noise and the Hamiltonian gives rise only to a global phase.
For such dark states, introducing a first layer deteriorates the coherent evolution since, in the rotated basis, noisy terms are able to produce transitions. In \[subsec:QRM\] we will comment more about this scenario and show an example. Now one should also consider that the Rabi frequency $\Omega$ is not completely stable and represents another source of fluctuations, that is, $\Omega \equiv \Omega [1 + \delta_\Omega(t) ]$ with $\delta_\Omega(t)$ another stochastic fluctuation with a small amplitude. However, the CCD scheme offers the possibility to further protect the system against $\delta_\Omega(t)$ with a [*second layer*]{} by introducing one additional driving to cancel $\delta_\Omega(t)$ [@Cai:12].
In Fig. \[fig:schemeCCD\] we sketch the main idea behind the effectiveness of dynamical decoupling to cancel interfering stochastic processes. In Fig. \[fig:schemeCCD\] (b) the evolution of the coherences as a function of the evolution time is plotted for three different drivings. The success depends on the properties of the noise (a): when the Rabi frequency of the driving does not exceed the crossover frequency of the noise ($\Omega_1<f_{cr}$) no protection is achieved. On the contrary, as the Rabi frequency gets larger, $\Omega_{2,3}\gtrsim f_{cr}$, the quantum coherence is preserved during longer times since transitions due to the original noise occur with a smaller probability in the new dressed basis. This shows the crucial interplay between noise properties and driving frequencies in a dynamical decoupling scheme. Then, one can apply the same criteria to cancel further fluctuations of additional drivings fields in the CCD scheme. Note that the same techniques can be applied to other noise models that present a similar behavior, i.e. models exhibiting a spectral density that vanishes for asymptotically large frequencies.
![[Schematic representation of the CCD scheme. In (a) the normalized power spectrum of the noise is plotted. Depending on the Rabi frequency $\Omega_i$ of the additional pulse, different evolution of the coherences is observed (b). As sketched in (c), the original basis suffers dephasing. Then, if the introduced $\Omega_i$ is small compared to the characteristic frequency of the noise, there is essentially no protection, while $\Omega_i\gtrsim f_{cr}$ coherence times are enhanced significantly as the noise term $\delta(t)\sigma_z$ is not enough to produce transitions in the new dressed basis. Noise parameters are $\tau=50 \ \mu$s and $T_2=3$ ms, while the Rabi frequencies $\Omega_1=2\pi\times0.5 \ {\textrm{kHz}}$, $\Omega_2=2\pi\times 5 \ {\textrm{kHz}}$ and $\Omega_3=2\pi\times50 \ {\textrm{kHz}}$, and $\omega_0\gg\Omega_{3}$ such that a RWA can be safely applied.]{}[]{data-label="fig:schemeCCD"}](fig3){width="1\linewidth"}
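The protection mechanism of Fig. \[fig:schemeCCD\] can be reproduced with a small Monte-Carlo experiment: a qubit prepared in $|\!\uparrow\rangle_x$ is evolved under $H=\delta_m(t)/2\,\sigma_z+\Omega/2\,\sigma_x$ with $\delta_m(t)$ an OU process, once without the drive and once with $\Omega=\Omega_3=2\pi\times 50$ kHz. In the sketch below (plain NumPy; the trajectory count, time step, and the fast-noise shorthand $c\approx 2/\tau^2 T_2$ are our own illustrative choices), each piecewise-constant noise step is propagated with the exact $2\times 2$ unitary; the undriven coherence at $t=T_2$ decays to roughly $e^{-1}$, while the driven one stays close to 1:

```python
import numpy as np

def mean_sx(omega, tau, c, t_final, dt, n_traj, seed):
    """Average <sigma_x(t_final)> over OU-noise trajectories for a qubit
    prepared in |up>_x and evolved under H = delta(t)/2 sz + omega/2 sx."""
    rng = np.random.default_rng(seed)
    decay = np.exp(-dt / tau)
    kick = np.sqrt(0.5 * c * tau * (1.0 - decay**2))
    delta = np.zeros(n_traj)
    a = np.full(n_traj, 1.0 / np.sqrt(2.0), dtype=complex)  # |0> amplitude
    b = np.full(n_traj, 1.0 / np.sqrt(2.0), dtype=complex)  # |1> amplitude
    for _ in range(int(round(t_final / dt))):
        hz, hx = delta / 2.0, omega / 2.0
        e = np.sqrt(hz**2 + hx**2)
        cos_ = np.cos(e * dt)
        sine = dt * np.sinc(e * dt / np.pi)   # sin(e dt)/e, safe at e = 0
        # U = cos(e dt) I - i sin(e dt)/e (hz sz + hx sx), exact per step
        a, b = (cos_ * a - 1j * sine * (hz * a + hx * b),
                cos_ * b - 1j * sine * (hx * a - hz * b))
        delta = delta * decay + kick * rng.standard_normal(n_traj)
    return float(np.mean(2.0 * (np.conj(a) * b).real))  # <sigma_x>

tau, T2 = 50e-6, 3e-3
c = 2.0 / (tau**2 * T2)                       # fast-noise relation T2 ~ 2/(c tau^2)
bare = mean_sx(0.0, tau, c, T2, 1e-6, 300, seed=7)
dressed = mean_sx(2 * np.pi * 5e4, tau, c, T2, 1e-6, 300, seed=7)
print(bare, dressed)   # ~e^-1 without the drive, near 1 with it
```

The same routine, with a second drive added in the rotated frame, could be extended to illustrate the second layer; here only the first layer is shown.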
Trapped-ion Hamiltonian and CCD {#sec:TI}
===============================
Consider a trapped ion with its internal electronic structure described by $\omega_I/2 \ \sigma_z$ and $\nu{a^{\dagger}a}$ representing the motional mode energy, with $\nu$ the trap frequency. The interaction created by laser irradiation is captured in the term $\Omega_j/2\sigma_x\left[e^{i(k_j\hat{x}-\omega_jt-\phi_j)}+\textrm{H.c.}\right]$. Hence, under the influence of applied radiation the trapped-ion Hamiltonian reads [@Leibfried:03] $$\label{eq:TIH}
H=\frac{\omega_I}{2}\sigma_z+\nu {a^{\dagger}a}+\sum_j \frac{\Omega_j}{2}\sigma_x\left[e^{i(k_j\hat{x}-\omega_jt-\phi_j)} +\textrm{H.c.} \right].$$ where $k_j$ is the wave vector of each laser field, $\omega_j$ its frequency, $\phi_j$ an initial phase, $\Omega_j$ the Rabi frequency of the $j$th laser, and $\hat{x}$ the ion position operator.
Before starting with further developments, let us introduce some typical values of the parameters in the previous equation according to the state-of-the-art in experiments with $^{40}\rm{Ca}^{+}$ [@Gerritsma09; @Gerritsma11]. Here, the axial trap frequency is $\nu = 2 \pi \times 1.36$ MHz, $\omega_I$ is in the optical regime at $729$ nm, i.e. $\omega_I =2\pi \times 4\cdot 10^{14}$ Hz, and the Rabi frequency is typically on the order of several kHz [@Gerritsma09; @Gerritsma11]. Additionally, we should consider the coherence time of the internal levels of the ions as the main limiting factor that affects the quality of the experiments with $^{40}\rm{Ca}^+$ [@Gerritsma09; @Gerritsma11]. As we already commented, this is caused by magnetic-field fluctuations which give rise to a coherence time $T_2 \approx 3$ ms, see [@Gerritsma11]. We will consider this value throughout the present article. Note however that, by using a cryogenic setup [@Brandl:16], a longer coherence time of $T_2\approx18$ ms has already been achieved. Additionally, laser-intensity fluctuations are present in any realistic ion trap experiment, while the laser frequency $\omega_j$ and phase $\phi_j$ can be very accurate. Although these magnetic and intensity fluctuations are the main limiting factor for the coherence time of the system, there are still other sources of noise which will not be considered here as they will produce significant effects only on time scales significantly longer than $T_2=3$ ms. In this respect, phonon dephasing has been measured with an incidence of a few ${\textrm{Hz}}$ [@Kaler03]. This provides a limit on the time scale over which the dynamics can be observed, which is, approximately, two orders of magnitude larger than the one we could consider if the magnetic noise is not eliminated. Concerning the heating rate, it can be estimated that, on average, one phonon is gained in $\sim 100$ ms [@Kaler03], or in $\sim 500$ ms for a cryogenic setup [@Brandl:16].
Furthermore, the lifetime of the qubit for the D$_{5/2}$ state of $^{40}$Ca$^+$ is $\sim 1$s [@Kaler03].
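Both the magnetic-field fluctuation $\delta_m(t)$ and the laser-intensity fluctuations discussed below are modeled as Ornstein-Uhlenbeck (OU) processes, whose parametrization is fixed in Sec. \[sec:OU\]. As an illustration (not part of the original derivation), a minimal exact-update sampler for such a process can be sketched as follows; the function name and interface are ours, with `tau` the correlation time and `c` the diffusion constant.

```python
import numpy as np

def ou_trajectory(tau, c, dt, n_steps, x0=0.0, rng=None):
    """Exact-update sampling of an Ornstein-Uhlenbeck process
    dx = -(x/tau) dt + sqrt(c) dW, returning n_steps+1 samples."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n_steps + 1)
    x[0] = x0
    decay = np.exp(-dt / tau)
    # exact one-step noise std; the stationary variance is c*tau/2
    s = np.sqrt(0.5 * c * tau * (1.0 - decay**2))
    for n in range(n_steps):
        x[n + 1] = x[n] * decay + s * rng.standard_normal()
    return x
```

The stationary standard deviation is $\sqrt{c\tau/2}$; for instance, a relative intensity noise of amplitude $p$ corresponds to $c=2p^2/\tau$, as used later in the text.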
Regarding the trapped-ion Hamiltonian, in the interaction picture w.r.t. $H_0=\frac{\omega_I}{2}\sigma_z+\nu {a^{\dagger}a}$, it reads $$\begin{aligned}
\label{eq:Hi_orwa}
H^{I}&=e^{i(\frac{\omega_I}{2}\sigma_z+\nu {a^{\dagger}a})t}H_1e^{-i(\frac{\omega_I}{2}\sigma_z+\nu {a^{\dagger}a})t}\nonumber \\ &\approx \sum_j \frac{\Omega_j}{2}\left[\sigma^+e^{i\eta_j(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t})}e^{i(\omega_I-\omega_j)t-i\phi_j}+\textrm{H.c.} \right],\end{aligned}$$ where we have already performed the optical RWA, i.e., we neglect the terms that rotate at frequency $\omega_I+\omega_j$ (counter-rotating terms). Since $\omega_j$ will be chosen such that $\omega_j\approx \omega_I$, and because $\Omega_j\ll \omega_I+\omega_j$, this approximation can be safely carried out. We denote $\Delta_j=\omega_I-\omega_j$; thus, choosing $\Delta_j=0$, $\nu$ or $-\nu$ one arrives at a [*carrier*]{}, [*red-sideband*]{} or [*blue-sideband*]{} interaction, respectively, when the system is adjusted to lie within the Lamb-Dicke regime ($\eta_j\sqrt{ {\left\langle\textstyle{(a+{a^{\dagger}})^2}\right\rangle} }\ll 1$). Here, the Lamb-Dicke parameter is $\eta_j=k_j x_0$, where $x_0=(2m\nu)^{-1/2}$, with $m$ the mass of the ion and $\hbar=1$ throughout the whole article; thus, $\hat{x}=x_0\left(a+{a^{\dagger}}\right)$. Finally, we would like to remark that all the numerical simulations of trapped-ion Hamiltonians presented in this article have been performed after the optical RWA and without further assumptions.
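The carrier and sideband structure discussed above can be checked numerically. The following sketch (ours, not from the original text) builds $e^{i\eta(a+{a^{\dagger}})}$ in a truncated Fock basis and compares its matrix elements with the Lamb-Dicke expansion of Eq. (\[eq:exp\_LD\]): the carrier element $\langle n|\cdot|n\rangle \approx 1-\eta^2(n+1/2)$ and the first-sideband element $\langle n+1|\cdot|n\rangle \approx i\eta\sqrt{n+1}$.

```python
import numpy as np

def ld_operator(eta, dim):
    """exp(i*eta*(a + a^dagger)) in a truncated Fock basis, via
    eigendecomposition of the Hermitian quadrature a + a^dagger."""
    a = np.diag(np.sqrt(np.arange(1, dim)), 1)   # annihilation operator
    q = a + a.conj().T
    w, v = np.linalg.eigh(q)
    return (v * np.exp(1j * eta * w)) @ v.conj().T

eta, dim, n = 0.06, 40, 2
D = ld_operator(eta, dim)
carrier = D[n, n]          # ~ 1 - eta^2 (n + 1/2), error O(eta^4)
sideband = D[n + 1, n]     # ~ i*eta*sqrt(n+1),    error O(eta^3)
print(abs(carrier - (1 - eta**2 * (n + 0.5))))   # small
print(abs(sideband - 1j * eta * np.sqrt(n + 1))) # small
```

For $\eta=0.06$ both deviations are far below the percent level, illustrating why, in the Lamb-Dicke regime, each resonance addresses a single transition.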
CCD for a single trapped-ion setup {#subsec:CCDRabi}
----------------------------------
We discuss now how to employ a CCD scheme in a single trapped-ion setup. In [@Pedernales:15] it is demonstrated that, by using two traveling waves to excite the red- and blue-sideband transitions, and by properly setting the parameters $\Omega_{1,2}$, $\phi_{1,2}$ and $\omega_{1,2}$, the Rabi model can be simulated in a variety of parameter regimes, which includes the Dirac equation as a particular case. However, the presence of different noise sources could significantly deteriorate its realization. Therefore, a noise-resilient implementation is desirable to enhance coherence control and fidelity. For that reason, in the following we apply a CCD scheme to a single trapped-ion setup. We use the [*first layer*]{} (\[subsubsec:1layer\]) to tackle the dephasing noise, as it is the main limiting factor for the coherence time of the system, while the [*second layer*]{} is introduced to handle laser-intensity fluctuations (\[subsubsec:2layer\]).
### First layer {#subsubsec:1layer}
In order to achieve the Rabi model within the CCD scheme, we apply an extra laser, denoted by the subscript $a$, with the objective of introducing a term $\Omega_a \cos(\omega_I t)\sigma_x$ into the dynamics. This is accomplished by setting $\omega_a=\omega_I$ (resonant with the frequency splitting of the ion), $\phi_a=0$ and a Rabi frequency $\Omega_a\ll \omega_I$. Then, the trapped-ion Hamiltonian in a rotating frame w.r.t. $H_0=\omega_I/2 \ \sigma_z +\nu{a^{\dagger}a}$ and after the optical RWA reads $$\begin{aligned}
\label{eq:hifirst}
H^I_1=\frac{\delta_m(t)}{2}\sigma_z&+\frac{\Omega_a}{2}\left[\sigma^+e^{i\eta_a(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t})}+\textrm{H.c.} \right]+ \nonumber \\
&+\sum_{j}\frac{\Omega_j}{2}\left[\sigma^+e^{i\eta_j(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t})}e^{i(\Delta_j t-\phi_j)}+\textrm{H.c.}\right],\end{aligned}$$ where $\delta_m(t)$ follows an OU process and is responsible for the dephasing noise, $\Delta_j=\omega_I-\omega_j$ is the detuning and $\eta_j$ the Lamb-Dicke parameter of the $j$th laser. Note that the additional laser $a$ has zero detuning, $\Delta_a=0$, which ensures a carrier interaction (i.e. a term proportional to $\sigma_x$) within the Lamb-Dicke regime, where the remaining terms, i.e. those with a linear dependence on the Lamb-Dicke parameter, can be averaged out thanks to the condition $\Omega_a\eta \ll \nu$. Hence, only the first term of the following expansion is considered, $$\begin{aligned}
\label{eq:exp_LD}
e^{i\eta(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t})}=I&+i\eta\left(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t} \right)-\nonumber\\
&-\frac{\eta^2}{2}\left(2{a^{\dagger}a}+1+a^2e^{-2i\nu t}+({a^{\dagger}})^2 e^{2i\nu t}\right)+\mathcal{O}(\eta^3).\end{aligned}$$ In this way the additional continuous driving $a$ provides a dressed spin basis, $\left\{{{\left|\textstyle{\uparrow}\right\rangle}_x},{{\left|\textstyle{\downarrow}\right\rangle}_x}\right\}$, in which the system is protected against the magnetic-field fluctuation, or dephasing noise, $\delta_m(t)/2 \ \sigma_z$, as long as $\Omega_a$ fulfills the criteria given in Sec. \[sec:ccd\]. The magnetic-field fluctuation can then be eliminated, and the Hamiltonian (\[eq:hifirst\]) becomes $$\begin{aligned}
\label{eq:H1Ieff}
H^{I}_1\approx \frac{\Omega_a}{2}\sigma_x +\sum_{j}\frac{\Omega_j}{2}\left[\sigma^+e^{i\eta_j(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t})}e^{i(\Delta_j t-\phi_j)}+\textrm{H.c.}\right].\end{aligned}$$ Furthermore, by properly choosing the detunings and phases, $\Delta_j$ and $\phi_j$, a tunable Rabi model can be obtained from the previous effective Hamiltonian. This is accomplished by setting two lasers $j=1,2$ with $\Delta_1=\nu-\xi$ and $\Delta_2=-\nu+\xi$ (detuned red and blue sidebands), for which only the terms at first order in $\eta$ ($\eta_{1,2}=\eta$) of the expansion in Eq. (\[eq:exp\_LD\]) survive, provided that $\xi\ll \nu$ and $\Omega_j\ll \nu$; that is, we apply the vibrational RWA. Finally, the Rabi model is achieved when the interaction term is orthogonal to the free-energy term of the two-level system, which in this case is $\sigma_x$. Therefore, it suffices to set the phases $\phi_1=\phi_2=0$ and the Rabi frequencies $\Omega_{1,2}=\Omega$, $$\begin{aligned}
H^{I}_1\approx \frac{\Omega_a}{2}\sigma_x -\frac{\Omega\eta}{2}\sigma_y\left(ae^{-i\xi t}+{a^{\dagger}}e^{i\xi t} \right).\end{aligned}$$ The previous Hamiltonian corresponds to a Rabi model in a rotating frame w.r.t. $\xi{a^{\dagger}a}$, i.e. $$\begin{aligned}
\label{eq:Hi_final1}
H_R= \frac{\Omega_a}{2}\sigma_x +\xi {a^{\dagger}a}-\frac{\Omega\eta}{2}\sigma_y\left(a+{a^{\dagger}}\right).\end{aligned}$$ We remark that the previous effective Hamiltonian is only valid under both the optical and vibrational RWAs, within the Lamb-Dicke regime, and when $\Omega_a$ is such that the noise $\delta_m(t)$ has a vanishingly small component at that frequency.
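The parameter mapping of the first layer can be collected in a small helper. The sketch below is ours; the $0.1\,\nu$ validity margins are a rough stand-in for the conditions $\Omega_j\ll\nu$, $\xi\ll\nu$ and $\Omega_a\eta\ll\nu$ used in the derivation, not thresholds quoted in the text.

```python
import math

def first_layer_params(omega_a, omega_laser, eta, xi, nu=2 * math.pi * 1.36e6):
    """Effective quantum Rabi parameters of Eq. (eq:Hi_final1) for the
    first layer, plus a rough validity check of the RWAs used above."""
    qubit = omega_a                       # effective two-level splitting Omega_a
    mode = xi                             # effective bosonic frequency xi
    coupling = eta * omega_laser / 2.0    # effective coupling eta*Omega/2
    ok = (omega_laser < 0.1 * nu) and (xi < 0.1 * nu) and (eta * omega_a < 0.1 * nu)
    return qubit, mode, coupling, ok
```

With the values quoted later in Sec. \[sec:num\] ($\Omega=2\pi\times20.83$ kHz, $\eta=0.06$, $\Omega_a=\xi=2\pi\times5$ kHz) this yields an effective coupling of $2\pi\times0.625$ kHz and all conditions satisfied.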
Under the same approximations, the Dirac equation can be obtained. The corresponding Hamiltonian of the $(1+1)$-dimensional Dirac equation [@Lamata07; @Casanova10r] reads $H_D=c_D \hat{p} \sigma_x +m_D c_D^2 \sigma_z$, where $c_D$ is the speed of light, $m_D$ the mass of the spin-$\frac{1}{2}$ particle, and $\hat{p}$ the momentum operator. To realize such a Hamiltonian from Eq. (\[eq:H1Ieff\]), we select $\Delta_1=\nu$, $\Delta_2=-\nu$ (red and blue sideband), $\phi_1=3\pi/2$, $\phi_2=\pi/2$ considering $\eta_{1,2}=\eta$ and $\Omega_{1,2}=\Omega$ (together with $\Delta_a=0$ and $\phi_a=0$). Then, Eq. (\[eq:H1Ieff\]) reads $$\begin{aligned}
H_1^I\approx \frac{\Omega_a}{2}\sigma_x +\eta\Omega\sigma_y\hat{p},\end{aligned}$$ where $\hat{p}=i({a^{\dagger}}-a)/2$. This is equivalent to the Dirac equation with the following parameters $c_D=\eta\Omega$ and $m_D=\Omega_a/(2\eta^2\Omega^2)$.
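As a quick consistency check of this mapping (our arithmetic, using laser settings quoted later in Sec. \[sec:num\]), the dimensionless mass parameter $r=m_Dc_D=\Omega_a/(2\eta\Omega)$ indeed comes out close to the value $r=2$ used in the Dirac simulations:

```python
import math

# First-layer settings as quoted in Sec. "Numerical results" (assumed here)
eta = 0.06
Omega = 2 * math.pi * 20.8e3     # rad/s, sideband lasers
Omega_a = 2 * math.pi * 5e3      # rad/s, carrier driving

c_D = eta * Omega                          # simulated "speed of light"
m_D = Omega_a / (2 * eta**2 * Omega**2)    # simulated mass
r = m_D * c_D                              # r = Omega_a / (2*eta*Omega)
print(r)  # close to 2 for these settings
```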
### Second layer {#subsubsec:2layer}
Once the main source of noise, magnetic-field fluctuations, is overcome by means of the first layer, the following step consists in facing laser-intensity fluctuations, which can still spoil quantum coherence. The intensity of the $j$th laser is now modeled as $\Omega_j(t)=\Omega_j\left( 1+\delta_{\Omega_j}(t) \right)$, where $\Omega_j$ is the desired Rabi frequency and $\delta_{\Omega_j}(t)$ describes a small stochastic fluctuation. Such fluctuations will be present for all the lasers used in the setup. That is, the laser intensities are not completely stable, but fluctuate around their mean values $\Omega_j$. We characterize these fluctuations as an OU process with $\tau_{\Omega}=1$ ms following [@Haffner08], and an amplitude of $0.1\%$ ($p=0.001$) of the laser intensity $\Omega_j$. Thus, one can characterize this as $\sigma[\delta_\Omega]=p$, which leads to $c_{\Omega}=2p^2/\tau_{\Omega}$. Note that the laser-amplitude noise is chosen to be slow compared to $\delta_m(t)$. This can be seen as a technological requirement, as otherwise the noise might not be easily handled within the CCD scheme, as we will discuss later on.
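As a quick consistency check of this parametrization (our arithmetic, not from the text), the stationary standard deviation of an OU process with diffusion constant $c_\Omega=2p^2/\tau_\Omega$ indeed reproduces $\sigma[\delta_\Omega]=p$:

```python
import math

p = 0.001                # 0.1 % relative amplitude fluctuation
tau_Omega = 1e-3         # correlation time of the intensity noise, in s
c_Omega = 2 * p**2 / tau_Omega
# stationary standard deviation of an OU process is sqrt(c*tau/2)
sigma = math.sqrt(c_Omega * tau_Omega / 2)   # equals p by construction
```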
In this way, once $\delta_m(t)/2 \ \sigma_z$ is overcome, the main fluctuation in Eq. (\[eq:H1Ieff\]) appears in the free-energy term of the two-level system (i.e. as dephasing noise). Note that the rest of the Rabi frequencies, $\Omega_j$, are multiplied by a Lamb-Dicke parameter, which reduces the influence of the errors introduced into the system by their fluctuating character. Therefore, we can proceed as for the first layer to deal with the term $\Omega_a\delta_{\Omega_a}(t)/2 \ \sigma_x$. To eliminate its contribution, an additional continuous driving, denoted by the subscript $b$, is introduced, but with a time-dependent Rabi frequency $2\Omega_b\cos(\Omega_a t)$. The Hamiltonian describing this situation in a rotating frame w.r.t. $H_0=\omega_I/2 \ \sigma_z +\nu{a^{\dagger}a}$ reads $$\begin{aligned}
H^I_2\approx \frac{\delta_m(t)}{2}\sigma_z&+\frac{\Omega_a}{2}\sigma_x +\frac{\Omega_a\delta_{\Omega_a}(t)}{2}\sigma_x+\nonumber \\
&+\sum_{j}\frac{\Omega_j}{2}\left[\sigma^+e^{i\eta_j(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t})}e^{i(\Delta_j t-\phi_j)}+\textrm{H.c.}\right]\nonumber \\
&+\frac{2\Omega_b\cos(\Omega_at)}{2}\left[\sigma^+e^{i\eta_b(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t})}e^{-i\phi_b}+\textrm{H.c.}\right],\end{aligned}$$ where we have already fixed $\Delta_b=0$. For simplicity, we only write down explicitly the fluctuations $\delta_m(t)$ and $\delta_{\Omega_a}(t)$, although all the functions $\delta_{\Omega_j}(t)$ have been taken into account in our numerical simulations, see the next Section. As we need a carrier orthogonal to $\sigma_x$ for $\Omega_b$, we select $\phi_b=\pi/2$, which leads to $\Omega_b\cos(\Omega_at)\sigma_y$. Now we move to a rotating frame w.r.t. $\Omega_a/2 \ \sigma_x$, obtaining $$\begin{aligned}
H^{II}_2&=e^{i\frac{\Omega_a}{2}\sigma_xt}H^I_2e^{-i\frac{\Omega_a}{2}\sigma_xt}\nonumber \\&\approx\frac{\delta_m(t)}{2}\left[\cos(\Omega_at)\sigma_z+\sin(\Omega_at)\sigma_y\right]+\frac{\Omega_a\delta_{\Omega_a}(t)}{2}\sigma_x+\nonumber \\
&+\frac{\Omega_b}{2}\left[\cos^2(\Omega_at)\sigma_y-\cos(\Omega_at)\sin(\Omega_at)\sigma_z\right]+\nonumber\\&+\sum_j\frac{\Omega_j}{2}\left[e^{i\frac{\Omega_a}{2}\sigma_xt}\sigma^+e^{-i\frac{\Omega_a}{2}\sigma_xt} e^{i\eta_j(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t})}e^{i(\Delta_j t-\phi_j)}+\textrm{H.c.}\right].\end{aligned}$$ The spin raising and lowering operators have contributions of $\sigma_x$ and $\sigma_y$, i.e. $\sigma^{\pm}=\frac{1}{2}(\sigma_x\pm i\sigma_y)$, so that in a rotating frame with respect to $\Omega_a/2 \ \sigma_x$ the $\sigma_y$ component rotates at frequencies $\pm\Omega_a$ while $\sigma_x$ is unaffected. We then invoke the RWA to average out those rotating terms, which is valid under the assumption $\Omega_b\ll\Omega_a$. The free-energy term of the effective two-level system is now given by $\sigma_y$, and hence the new dressed spin basis is $\left\{{\left|\textstyle{\uparrow}\right\rangle}_y,{\left|\textstyle{\downarrow}\right\rangle}_y \right\}$. In this basis the fluctuating term $\Omega_a\delta_{\Omega_a}(t)/2 \ \sigma_x$ can be neglected following the same arguments given in Sec. \[sec:ccd\], as can $\delta_m(t)$. Hence, the Hamiltonian can be approximated by $$\begin{aligned}
\label{eq:H2IIeff}
H_2^{II}\approx \frac{\Omega_b}{2}\sigma_y+\sum_j\frac{\Omega_j}{2}\left[\frac{\sigma_x}{2}e^{i\eta_j(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t})}e^{i(\Delta_j t-\phi_j)}+\textrm{H.c.}\right].\end{aligned}$$ We can summarize the operating regime of the second layer as $\Omega_b\ll\Omega_a\ll\omega_I$. Additionally, $\Omega_a$ has to be large enough to ensure decoupling from $\delta_m(t)$; this condition is $\Omega_a\gtrsim 1/(2\pi\tau_m)$ or, in other words, $\Omega_a$ has to be larger than the crossover frequency, see Sec. \[sec:OU\]. At the same time, and following the same arguments, $\Omega_b$ needs to handle the fluctuation $\Omega_a\delta_{\Omega_a}(t)/2 \ \sigma_x$, and hence $\Omega_b\gtrsim 1/(2\pi\tau_{\Omega})$, which implies the relation $\tau_{\Omega} \gg \tau_m$. Thus, both the intensity of the noise and the RWA ($\Omega_b\ll\Omega_a$) play a decisive role in successfully applying a second layer of protection in the CCD scheme.
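The frequency hierarchy above lends itself to a simple programmatic check. The sketch below is ours; the factor-of-ten RWA margin is an illustrative choice, not a threshold fixed by the text.

```python
import math

def second_layer_ok(Omega_a, Omega_b, tau_m, tau_Omega, rwa_margin=10.0):
    """Check the second-layer operating conditions: Omega_b << Omega_a
    (RWA, margin is our choice) and each driving above the crossover
    frequency f_cr = 1/(2*pi*tau) of the noise it must decouple from."""
    f_m = 1.0 / (2 * math.pi * tau_m)        # crossover of delta_m(t)
    f_O = 1.0 / (2 * math.pi * tau_Omega)    # crossover of delta_Omega_a(t)
    return (Omega_a >= rwa_margin * Omega_b
            and Omega_a / (2 * math.pi) > f_m
            and Omega_b / (2 * math.pi) > f_O)
```

With the noise parameters quoted in Sec. \[sec:num\] ($\tau_m=50\ \mu$s, $\tau_\Omega=1$ ms), the settings $\Omega_a=2\pi\times200$ kHz, $\Omega_b=2\pi\times5$ kHz pass the check, while $\Omega_a/\Omega_b=5$ does not.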
We note that now we may use only one traveling wave to produce the Rabi-like interaction. Setting $\Delta_1=+\nu-\xi$, $\phi_1=3\pi/2$, we arrive at $$\begin{aligned}
\label{eq:H2R}
H_2^{II}\approx \frac{\Omega_b}{2}\sigma_y-\frac{\Omega_1 \eta_1}{4}\sigma_x\left(ae^{-i\xi t}+{a^{\dagger}}e^{i\xi t} \right),\end{aligned}$$ after using the vibrational RWA. The previous equation is equivalent to the Rabi model in a rotating frame w.r.t. $\xi{a^{\dagger}a}$, $$\begin{aligned}
H_R=\frac{\Omega_b}{2}\sigma_y+\xi {a^{\dagger}a}-\frac{\Omega \eta}{4}\sigma_x\left(a+{a^{\dagger}}\right).\end{aligned}$$ As in the case of the first layer, the Dirac equation can be realized in a straightforward manner. Choosing $\Omega_1=\Omega$, $\eta_1=\eta$, $\Delta_1=\nu$ and $\phi_1=\pi$ the Eq. (\[eq:H2IIeff\]) reduces to $$\label{eq:H2D}
H_2^{II}\approx \frac{\Omega_b}{2}\sigma_y+\frac{\eta \Omega}{2}\sigma_x\hat{p},$$ which is equivalent to the Dirac Hamiltonian with $c_D=\eta\Omega/2$ and $m_D=2\Omega_b/(\eta^2\Omega^2)$. Note that the effective Hamiltonians given in Eqs. (\[eq:H2R\]) and (\[eq:H2D\]) are valid under a number of approximations, as for the first layer. Additionally, we now require $\Omega_b\ll\Omega_a$ due to the RWA, but at the same time $\Omega_b$ must still be large enough to decouple from the noisy term $\Omega_a\delta_{\Omega_a}(t)\sigma_x$.
Numerical results {#sec:num}
=================
Here we present numerical simulations of the previously derived effective Hamiltonians. We compare the usefulness of the CCD scheme against the bare realization, denoted here as the [*zeroth layer*]{} (see for example [@Pedernales:15] and \[ap:1\] for a derivation), i.e., the case in which no protection against noise is provided. We explore two physical regimes of the realized quantum Rabi model, namely, the paradigmatic resonant case to observe Rabi oscillations, and the limiting case where a quantum phase transition takes place [@Hwang:15; @Puebla:16]. Then, we present the evolution of a Dirac particle. We emphasize that all the numerical simulations involving trapped-ion Hamiltonians have been carried out after the optical RWA, without further approximations.
The bare realization or [*zeroth layer*]{} is accomplished by two lasers $$\begin{aligned}
\label{eq:H0sim}
H_0^I=\frac{\delta_m(t)}{2}\sigma_z&+\frac{\Omega_1(1+\delta_{\Omega_1}(t))}{2}\left[\sigma^+e^{i\eta_1\left(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t} \right)}e^{i(\Delta_1t-\phi_1)}+ \textrm{H.c.}\right]+\nonumber \\&+\frac{\Omega_2(1+\delta_{\Omega_2}(t))}{2}\left[\sigma^+e^{i\eta_2\left(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t} \right)}e^{i(\Delta_2t-\phi_2)}+ \textrm{H.c.}\right],\nonumber\\\end{aligned}$$ while the first layer involves an additional laser for protection purposes, $$\begin{aligned}
\label{eq:H1sim}
H_1^I=\frac{\delta_m(t)}{2}\sigma_z&+\frac{\Omega_1(1+\delta_{\Omega_1}(t))}{2}\left[\sigma^+e^{i\eta_1\left(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t} \right)}e^{i(\Delta_1t-\phi_1)}+ \textrm{H.c.}\right]\nonumber \\
&+\frac{\Omega_2(1+\delta_{\Omega_2}(t))}{2}\left[\sigma^+e^{i\eta_2\left(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t} \right)}e^{i(\Delta_2t-\phi_2)}+ \textrm{H.c.}\right]\nonumber\\
&+\frac{\Omega_a(1+\delta_{\Omega_a}(t))}{2}\left[\sigma^+e^{i\eta_a\left(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t} \right)}e^{i(\Delta_at-\phi_a)}+ \textrm{H.c.}\right].\nonumber\\\end{aligned}$$ Finally, the second layer adds a time-dependent Rabi frequency, $$\begin{aligned}
\label{eq:H2sim}
H_2^{I}=\frac{\delta_m(t)}{2}\sigma_z&+\frac{\Omega_1(1+\delta_{\Omega_1}(t))}{2}\left[\sigma^+e^{i\eta_1\left(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t} \right)}e^{i(\Delta_1t-\phi_1)}+ \textrm{H.c.}\right]\nonumber \\
&+\frac{\Omega_a(1+\delta_{\Omega_a}(t))}{2}\left[\sigma^+e^{i\eta_a\left(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t} \right)}e^{i(\Delta_at-\phi_a)}+ \textrm{H.c.}\right]\nonumber\\
&+\frac{2\Omega_b\cos(\Omega_at)(1+\delta_{\Omega_b}(t))}{2}\left[\sigma^+e^{i\eta_b\left(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t} \right)}e^{i(\Delta_bt-\phi_b)}+ \textrm{H.c.}\right].\nonumber\\\end{aligned}$$ The effective magnetic-field fluctuation is described by $\delta_m(t)$, as shown in Sec. \[sec:OU\] and \[sec:ccd\], with parameters $\tau_m=50 \ \mu$s and $T_2=3$ ms. Note that distinct experimental setups may suffer different magnetic-field fluctuations, and thus $\tau_m$ may differ. In this respect, depending on the correlation time $\tau_m$, our scheme can be adapted to suppress magnetic-field fluctuations by properly setting the Rabi frequencies $\Omega_j$, as discussed in Sec. \[sec:ccd\]. However, for a too short noise correlation time, i.e. in the limit of Markovian noise $\tau_m/T_2\rightarrow 0$, the tunability of the simulated Rabi models using the CCD scheme is reduced, as the Rabi frequency must fulfill $\Omega_a>1/(2\pi\tau_m)$ to ensure decoupling. We recall that the characteristic frequency from which the spectral density starts to decay as $1/f^2$ is $f_{cr}=1/(2\pi\tau_m)$, and therefore $\Omega_a>f_{cr}$, as explained in Sec. \[sec:ccd\]. In addition, the fluctuation of the $j$th laser's amplitude, denoted $\delta_{\Omega_j}(t)$, is parametrized by $\tau_\Omega=1$ ms and $c_\Omega=2p^2/\tau_\Omega$, as it describes a relative amplitude fluctuation with $p=0.1\%$. We have considered equal noise for the lasers with intensities $\Omega_1$ and $\Omega_2$, i.e. $\delta_{\Omega_1}(t)=\delta_{\Omega_2}(t)$, while the fluctuations of the rest are completely independent. However, we have also performed simulations with uncorrelated noise between $\Omega_1$ and $\Omega_2$, and no significant differences were observed. In all the simulations, the trap frequency has been chosen as $\nu=2\pi\times1.36 \ {\textrm{MHz}}$, and the Lamb-Dicke parameters as $\eta_{1,2}=0.06$ and $\eta_{a,b}=0.01$ [@Gerritsma09; @Gerritsma11].
Quantum Rabi model realization {#subsec:QRM}
------------------------------
Here we present the numerical simulations of the trapped-ion Hamiltonian realizing the quantum Rabi model to observe the paradigmatic Rabi oscillations. The simulated quantum Rabi model in the $i$th layer can be written as $$\label{eq:simR}
H_{R,i}=\frac{\tilde{\Omega}_i}{2}\sigma^i_{\tiny{\textrm{TLS}}}+\tilde{\omega}_i{a^{\dagger}a}-\tilde{\lambda}_i\sigma_{\perp}^i\left(a+{a^{\dagger}}\right),$$ where $\sigma_{\tiny{\textrm{TLS}}}^i$ and $\sigma_{\perp}^i$ stand for the Pauli matrices of the free energy term of the two-level system and the orthogonal direction of the interaction, respectively. The parameters used to simulate this model using Eqs. (\[eq:H0sim\]), (\[eq:H1sim\]) and (\[eq:H2sim\]) are gathered in Table \[tab:1\], as well as their relation with the effective frequencies given in Eq. (\[eq:simR\]), $\tilde{\Omega}_i$, $\tilde{\omega}_i$ and $\tilde{\lambda}_i$. Note that $\Omega_{1,2}=\Omega$ and $\eta_{1,2}=\eta$ for zeroth and first layer.
|                                  | Zeroth layer                     | First layer            | Second layer               |
|----------------------------------|----------------------------------|------------------------|----------------------------|
| $\Delta_1$                       | $\nu+\delta_1$                   | $\nu-\omega_1$         | $\nu-\omega_2$             |
| $\Delta_2$                       | $-\nu+\delta_2$                  | $-\nu+\omega_1$        | —                          |
| $\Delta_a$                       | —                                | $0$                    | $0$                        |
| $\Delta_b$                       | —                                | —                      | $0$                        |
| $\phi_{1,2}$                     | $3\pi/2$                         | $3\pi/2$               | $3\pi/2$                   |
| $\phi_a$                         | —                                | $0$                    | $0$                        |
| $\phi_b$                         | —                                | —                      | $\pi/2$                    |
| $\sigma_{\tiny{\textrm{TLS}}}^i$ | $\sigma_z$                       | $\sigma_x$             | $\sigma_y$                 |
| $\sigma_{\perp}^i$               | $\sigma_x$                       | $\sigma_y$             | $\sigma_x$                 |
| $\tilde{\Omega}_i$               | $\frac{1}{2}(\delta_2+\delta_1)$ | $\Omega_a$             | $\Omega_b$                 |
| $\tilde{\omega}_i$               | $\frac{1}{2}(\delta_2-\delta_1)$ | $\omega_1$             | $\omega_2$                 |
| $\tilde{\lambda}_i$              | $\frac{\eta\Omega}{2}$           | $\frac{\eta\Omega}{2}$ | $\frac{\eta_1\Omega_1}{4}$ |
In order to achieve the same effective model, regardless of the layer, we will introduce dimensionless constants to define a target Hamiltonian. These are $R\equiv\tilde{\Omega}_i/\tilde{\omega}_i$ and $g\equiv2\tilde{\lambda}_i/(\tilde{\omega}_i\sqrt{R})$. Hence, fixing $R$ and $g$, $H_{R,i}/\tilde{\omega}_i$ represents the same effective quantum Rabi model.
We set $\tilde{\omega}_{0,1,2}=\tilde{\Omega}_{0,1,2}=2\pi\times 5 \ {\textrm{kHz}}$ to simulate a resonant case $R=1$, and a dimensionless coupling constant $g=1/4$. This implies that: (i) for $H_0^I$, i.e. for the bare realization, $\delta_2=2\pi\times10 \ {\textrm{kHz}}$, $\delta_1=0$ and $\Omega_{1,2}=2\pi\times20.83 \ {\textrm{kHz}}$; (ii) for $H_1^I$ (first layer) $\omega_1=2\pi\times 5 \ {\textrm{kHz}}$, $\Omega_a=2\pi\times 5 \ {\textrm{kHz}}$ and $\Omega_{1,2}=2\pi\times20.83 \ {\textrm{kHz}}$; (iii) for $H_2^I$ (second layer) $\omega_2=2\pi\times 5 \ {\textrm{kHz}}$, $\Omega_b=2\pi\times 5 \ {\textrm{kHz}}$ and $\Omega_a=40\Omega_b=2\pi\times200 \ {\textrm{kHz}}$, $\Omega_1=2\pi\times41.67 \ {\textrm{kHz}}$.
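The laser Rabi frequencies quoted above follow directly from the target parameters $(R,g)$ and the relations of Table \[tab:1\]. The helper below (ours, for illustration) reproduces the numbers of the enumeration, e.g. $\Omega_{1,2}=2\pi\times20.83$ kHz for the zeroth and first layers and $\Omega_1=2\pi\times41.67$ kHz for the second layer.

```python
import math

def laser_rabi_freq(R, g, omega_eff, eta, layer):
    """Laser Rabi frequency realizing the target model H_R/omega_eff with
    ratio R = Omega_eff/omega_eff and g = 2*lambda_eff/(omega_eff*sqrt(R)),
    using the coupling relations of Table 1."""
    lam = g * omega_eff * math.sqrt(R) / 2.0    # effective coupling lambda
    if layer in (0, 1):
        return 2.0 * lam / eta                  # lambda = eta*Omega/2
    return 4.0 * lam / eta                      # lambda = eta_1*Omega_1/4

kHz = 2 * math.pi * 1e3
print(laser_rabi_freq(1, 0.25, 5 * kHz, 0.06, 0) / kHz)  # ~20.83
print(laser_rabi_freq(1, 0.25, 5 * kHz, 0.06, 2) / kHz)  # ~41.67
```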
We illustrate how the CCD scheme improves the realization of the Rabi model by means of the fidelity between the wavefunction of the ideal Rabi model, ${\left|\textstyle{\psi_{R,i}(t)}\right\rangle}$, and that of its noisy trapped-ion realization, ${\left|\textstyle{\psi_i(t)}\right\rangle}$, for the $i$th layer of protection, which reads $$\begin{aligned}
F_i(t)=\left| \left< \psi_{R,i}(t)\right| \left.\psi_i(t) \right> \right|.\end{aligned}$$ We will also compare the oscillations of the excited-state population of the qubit, given by ${\left\langle\textstyle{\sigma_{\tiny{\textrm{TLS}}}^i+1}\right\rangle}/2$, for both the ideal model and the trapped-ion realization with different noise contributions.
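For pure states represented as vectors in the same truncated basis, this figure of merit is a one-liner; the following sketch (ours) is the form used conceptually throughout the numerical comparisons.

```python
import numpy as np

def fidelity(psi_ideal, psi_real):
    """F(t) = |<psi_ideal|psi_real>| for pure states given as complex
    state vectors in the same (truncated) basis; normalization enforced."""
    psi_ideal = psi_ideal / np.linalg.norm(psi_ideal)
    psi_real = psi_real / np.linalg.norm(psi_real)
    return abs(np.vdot(psi_ideal, psi_real))
```

Note that the absolute value makes $F$ insensitive to global phases, as it should be.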
In Fig. \[fig:TIRabiCCD\] the improvement achieved by applying the CCD scheme is clearly demonstrated for two different initial states, ${\left|\textstyle{\psi(0)}\right\rangle}={\left|\textstyle{0}\right\rangle}{\left|\textstyle{\uparrow}\right\rangle}_{\tiny{\textrm{TLS}}}$ and ${\left|\textstyle{\psi(0)}\right\rangle}={\left|\textstyle{0}\right\rangle}{\left|\textstyle{\uparrow}\right\rangle}_{\perp}$, where $\sigma_{\tiny{\textrm{TLS}}}{\left|\textstyle{\uparrow}\right\rangle}_{\tiny{\textrm{TLS}}}=+{\left|\textstyle{\uparrow}\right\rangle}_{\tiny{\textrm{TLS}}}$ and $\sigma_{\perp}{\left|\textstyle{\uparrow}\right\rangle}_{\perp}=+{\left|\textstyle{\uparrow}\right\rangle}_{\perp}$. On the contrary, there are specific situations in which the CCD scheme can deteriorate the desired realization. In particular, if the considered initial state is parallel to both the magnetic noise $\delta_m\sigma_z$ and the Hamiltonian (i.e. when we deal with a dark state), applying the CCD scheme is counterproductive, since it converts a source of noise that originally just gives rise to a global phase into an orthogonal noise producing transitions and distorting the dynamics. This is the case for ${\left|\textstyle{\psi(0)}\right\rangle}={\left|\textstyle{0}\right\rangle}{\left|\textstyle{\downarrow}\right\rangle}_{\tiny{\textrm{TLS}}}$ in the Rabi model when $g\ll 1$, i.e. when the Jaynes-Cummings model arises. As we see in Fig. \[fig:TIRabiCCD\_SdownZ\_F\], for $R=1$ and $g=1/4$ the fidelity of the first layer is noticeably worse than that of an unprotected realization, while the second layer performs just as well as the original. This reveals that the CCD scheme does not necessarily lead to an improved realization; it depends on several factors which have to be taken into account beforehand.
Critical dynamics of the superradiant quantum phase transition in the Rabi model
--------------------------------------------------------------------------------
In order to illustrate the versatility of the CCD scheme, we analyze the realization of a time-dependent Rabi Hamiltonian in the ultra-strong coupling regime. In this respect, it has been recently shown that the Rabi model (Eq. (\[eq:simR\])) undergoes a quantum phase transition in the $R=\Omega/\omega_0\rightarrow\infty$ limit at the critical point $g_c=2\lambda_c/\sqrt{\Omega\omega_0}=1$, despite consisting only of a single two-level system and a single-mode bosonic field [@Hwang:15]. For finite $R$, critical behavior is revealed in the form of *finite-frequency* scaling functions, in an approach that is equivalent to finite-size scaling in traditional phase transitions [@Fisher:72; @Botet:82]. As shown in [@Puebla:16], the presence of the quantum phase transition can be observed with a single trapped ion that interacts with one of its vibrational modes. This can be achieved by resorting to non-equilibrium universal scaling functions [@Acevedo:14; @Puebla:16] in terms of the expectation value $\left<\sigma^i_{\tiny{\textrm{TLS}}}\right>$ of Eq. (\[eq:simR\]), which can be measured with high fidelity in a trapped-ion system [@Myerson08; @Burrell10]. To obtain such non-equilibrium universal scaling functions one can proceed as follows. Prepare an initial state ${\left|\textstyle{\psi(0)}\right\rangle}={\left|\textstyle{0}\right\rangle}{\left|\textstyle{\downarrow}\right\rangle}_{\tiny{\textrm{TLS}}}$ at $g=0$ for a fixed $R$, such that $\sigma_{\tiny{\textrm{TLS}}}^i{\left|\textstyle{\downarrow}\right\rangle}_{\tiny{\textrm{TLS}}}=-{\left|\textstyle{\downarrow}\right\rangle}_{\tiny{\textrm{TLS}}}$, and then quench continuously in a time $\tau_Q$ the coupling constant $g$ until $g=g_c=1$ is reached. 
Then, at $g(\tau_Q)=1$ for a frequency ratio $R$ we calculate the quantity ${\left\langle\textstyle{\sigma_{\tiny{\textrm{TLS}}}^i}\right\rangle}_R(\tau_Q,R)=\left|\left<\psi(\tau_Q)\left|\sigma_{\tiny{\textrm{TLS}}}^i \right|\psi(\tau_Q)\right>-{\left\langle\textstyle{\sigma_{\tiny{\textrm{TLS}}}^i}\right\rangle}_{GS}(R) \right|$, where ${\left\langle\textstyle{\sigma_{\tiny{\textrm{TLS}}}^i}\right\rangle}_{GS}(R)$ is the ground-state expectation value of $\sigma_{\tiny{\textrm{TLS}}}^i$ at $g=1$ and $R$. The non-equilibrium universal function is found as $S(T)=R^{\mu}{\left\langle\textstyle{\sigma_{\tiny{\textrm{TLS}}}^i}\right\rangle}_R$ where $T\equiv R^{-\gamma/(\mu(1+\zeta))}\tau_Q$. The critical exponents are $\mu=2/3$, $\gamma=1$ and $\zeta=1/2$ [@Hwang:15; @Puebla:16]. Note however that the driving time $\tau_Q$ cannot be arbitrarily short, since $S(T)$ is obtained assuming adiabatic dynamics away from the critical point. On the other hand, in an ion-trap realization, the duration of the dynamics to reconstruct $S(T)$ is severely restricted due to the presence of various sources of noise [@Puebla:16].
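A small worked check (ours) makes the rescaling explicit: with the quoted exponents the combination $\gamma/(\mu(1+\zeta))$ evaluates exactly to $1$, so the scaling variable reduces to $T=\tau_Q/R$ while the deviation is rescaled by $R^{2/3}$.

```python
from fractions import Fraction

# critical exponents quoted in the text
mu, gamma, zeta = Fraction(2, 3), Fraction(1), Fraction(1, 2)
exponent = gamma / (mu * (1 + zeta))   # gamma / (mu * (1 + zeta)) = 1
# hence T = R**(-1) * tau_Q = tau_Q / R, and S(T) = R**(2/3) * <sigma_TLS>_R
```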
Here, by applying the CCD scheme, we offer a way to overcome these noise sources, which facilitates the observation of universal scaling functions, and we illustrate that the CCD scheme remains valid in an extreme parameter regime, even when quench dynamics is considered. Note however that, due to the large desired value of $R$, the second layer is expected to fail, since $R\propto \Omega_b$ but $\Omega_b\ll\Omega_a$ is required to fulfill the RWA. Hence, for this specific case the approximations leading to the quantum Rabi model break down.
Fig. \[fig:TIQPTuniv\] shows the universal non-equilibrium function $S(T)$ as a function of the rescaled driving time $T$. The solid black line corresponds to the ideal quantum Rabi model, while the points correspond to the trapped-ion realization using first-layer protection with $R=50$ (circles) and $R=100$ (squares) for $0.02\leq\tau_Q\leq 8.6$ in units of $2\pi/\tilde{\omega}_i$. In the inset, the results using the zeroth and second layers are plotted. Observe the remarkable improvement compared to the zeroth layer, and the failure of the second layer as $\Omega_b$ becomes comparable to $\Omega_a$. The simulation parameters are $\tilde{\omega}_{0,1}=2\pi\times 1 \ {\textrm{kHz}}$, $\tilde{\omega}_2=2\pi\times400 \ {\textrm{Hz}}$, while $\tilde{\Omega}_i=R\tilde{\omega}_i$. For the second layer $\Omega_{a}$ is set to $2\pi\times200 \ {\textrm{kHz}}$, and hence $\Omega_a/\Omega_b=10$ and $5$ for $R=50$ and $100$, respectively, which already anticipates the expected failure of the RWA. Additionally, the quench is attained by tuning the laser intensities linearly in time from $0$ to $\Omega_f$. For the zeroth and first layers, $\Omega_f$ is $2\pi\times 117.8 \ {\textrm{kHz}}$ and $2\pi\times 166.7 \ {\textrm{kHz}}$ for $R=50$ and $R=100$, respectively. For the second layer $\Omega_f$ amounts to $2\pi\times 94.3 \ {\textrm{kHz}}$ and $2\pi\times 133.3 \ {\textrm{kHz}}$ for $R=50$ and $R=100$, respectively.
Dirac equation realization in a trapped-ion setting
---------------------------------------------------
The parameters to realize the Dirac equation, $H_{D,i}/c_D=r\sigma_{\tiny{\textrm{TLS}}}^i+\hat{p}\sigma_{\perp}^i$ with $r\equiv m_Dc_D$, using Eqs. (\[eq:H0sim\]), (\[eq:H1sim\]) and (\[eq:H2sim\]) are gathered in the Table \[tab:2\].
|                                  | Zeroth layer       | First layer          | Second layer               |
|----------------------------------|--------------------|----------------------|----------------------------|
| $\Delta_1$                       | $\nu+\delta$       | $\nu$                | $\nu$                      |
| $\Delta_2$                       | $-\nu+\delta$      | $-\nu$               | —                          |
| $\Delta_a$                       | —                  | $0$                  | $0$                        |
| $\Delta_b$                       | —                  | —                    | $0$                        |
| $\phi_{1}$                       | $\pi$              | $3\pi/2$             | $\pi$                      |
| $\phi_2$                         | $0$                | $\pi/2$              | —                          |
| $\phi_a$                         | —                  | $0$                  | $0$                        |
| $\phi_b$                         | —                  | —                    | $\pi/2$                    |
| $\sigma_{\tiny{\textrm{TLS}}}^i$ | $\sigma_z$         | $\sigma_x$           | $\sigma_y$                 |
| $\sigma_{\perp}^i$               | $\sigma_x$         | $\sigma_y$           | $\sigma_x$                 |
| $m_Dc_D^2$                       | $\frac{\delta}{2}$ | $\frac{\Omega_a}{2}$ | $\frac{\Omega_b}{2}$       |
| $c_D$                            | $\eta\Omega$       | $\eta\Omega$         | $\frac{\eta_1\Omega_1}{2}$ |
In order to observe the paradigmatic Zitterbewegung [@Gerritsma09], we calculate the expectation value of the position operator $\hat{x}=(a+{a^{\dagger}})$ as a function of time for an initial state ${\left|\textstyle{\psi(0)}\right\rangle}$ that is an eigenstate of $\sigma_{\perp}^i$ (in particular we consider ${\left|\textstyle{\uparrow}\right\rangle}_{\perp}$). We then set values of $m_D$ and $c_D$, or equivalently, of $r$. Note that the presented scheme for the first and second layers does not allow for a realization of the strict massless limit, $r=0$, since $r$ is proportional to $\Omega_{a}$ or $\Omega_b$, and $\Omega_{a,b}=0$ does not provide a Hamiltonian protected against fluctuations, while in the zeroth layer $r$ is simply proportional to the detuning $\delta$. Nevertheless, for $r>0$, the CCD scheme still improves the simulated Dirac equation, as we illustrate in the following.
We set $r=2$, (i) $\delta=2\pi\times5 \ {\textrm{kHz}}$, (ii) $\Omega_a=2\pi\times5 \ {\textrm{kHz}}$, (iii) $\Omega_b=2\pi\times5 \ {\textrm{kHz}}$ and $\Omega_a=2\pi\times200 \ {\textrm{kHz}}$. This implies (i) for Eq. (\[eq:H0sim\]) $\Omega_{1,2}=2\pi\times20.8 \ {\textrm{kHz}}$ and $\Delta_{1,2}=\pm\nu+\delta$, (ii) for Eq. (\[eq:H1sim\]) $\Omega_{1,2}=2\pi\times 20.8 \ {\textrm{kHz}}$ and (iii) for Eq. (\[eq:H2sim\]) $\Omega_1=2\pi\times41.7 \ {\textrm{kHz}}$. In Fig. \[fig:TIDiracCCD\] we plot the fidelity $F_{0,1,2}(t)$ (a) and the position expectation value ${\left\langle\textstyle{x(t)}\right\rangle}$ (b) as a function of time. The fidelity corresponds to $F_i(t)=\left| \left<\psi_{D,i}(t)\right| \left.\psi_{i}(t) \right>\right|$, where ${\left|\textstyle{\psi_i(t)}\right\rangle}$ and ${\left|\textstyle{\psi_{D,i}(t)}\right\rangle}$ are the wavefunctions of the trapped-ion realization and of the ideal Dirac equation for the $i$th layer, respectively. Note that the final time corresponds to $t=3(2\pi/c_D)=2.4$ ms. The improvement is clearly shown in Fig. \[fig:TIDiracCCD\]. The second layer performs worse than the first one at longer times, mainly due to laser-amplitude fluctuations and the breakdown of the RWA (note that $\Omega_a=40\Omega_b$). Nevertheless, at shorter times the simulation of the Dirac equation in the second layer is considerably enhanced. Finally, we comment that access to the motional variables is achieved by, for example, adding a second ion to the trap and computing the time derivative of the qubit expectation value [@Gerritsma09; @Gerritsma11], see \[ap:2\] for more details. In principle, this protocol requires preparing the ancillary ion in a certain quantum state, which we select as parallel to the magnetic noise $\delta_m(t)$. Hence, during the realization of the dynamics this ion is not affected by external fluctuations, while, for the reconstruction of the time derivatives, a fast evolution is required. 
In this manner, the noise has only a small effect on the reconstruction of $\langle x(t)\rangle$.
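The Zitterbewegung signal itself can be checked with a back-of-the-envelope numerical sketch. For a single momentum component $p$ the Dirac Hamiltonian reduces to the $2\times2$ matrix $H = m_Dc_D^2\,\sigma_z + c_Dp\,\sigma_x$, which is traceless with $H^2=E^2\mathbb{1}$, so $e^{-iHt}=\cos(Et)\mathbb{1}-i\sin(Et)H/E$. The pure-Python snippet below is illustrative only: the units are arbitrary (not the experimental parameters above), a fixed-$p$ spinor is a simplification of the motional state used in the text, and the function names are ours. It evolves a spin-up spinor and prints the velocity-like observable $\langle\sigma_x(t)\rangle$:

```python
import math

def evolve(m, c, p, t, psi0=(1.0 + 0j, 0.0 + 0j)):
    """Exact evolution under H = m*sigma_z + c*p*sigma_x (hbar = 1, m = m_D c_D^2).

    H is traceless with H^2 = E^2 * I, hence exp(-iHt) = cos(Et) I - i sin(Et) H/E.
    """
    E = math.hypot(m, c * p)
    cs, sn = math.cos(E * t), math.sin(E * t)
    H = ((m, c * p), (c * p, -m))
    U = [[cs * (i == j) - 1j * sn * H[i][j] / E for j in range(2)] for i in range(2)]
    return (U[0][0] * psi0[0] + U[0][1] * psi0[1],
            U[1][0] * psi0[0] + U[1][1] * psi0[1])

def sigma_x_mean(psi):
    """<sigma_x> = 2 Re(psi_up* psi_down), proportional to the Dirac velocity."""
    return (2 * psi[0].conjugate() * psi[1]).real

m, c, p = 1.0, 1.0, 2.0        # arbitrary illustrative units, not experimental values
E = math.hypot(m, c * p)
for t in (0.0, math.pi / (2 * E), math.pi / E):
    print(f"t = {t:.3f}  <sigma_x> = {sigma_x_mean(evolve(m, c, p, t)):+.4f}")
```

The printed values follow the closed form $\langle\sigma_x(t)\rangle=2mcp\,\sin^2(Et)/E^2$: an oscillation at frequency $2E$ whose amplitude vanishes in the massless limit, consistent with the absence of Zitterbewegung at $r=0$.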
Summary {#sec:conc}
=======
In the present article we demonstrate that concatenated continuous dynamical decoupling (CCD) can be applied to a trapped-ion setup for a robust realization of the quantum Rabi model. We show that the use of the CCD scheme can significantly improve the coherence times and fidelities of quantum simulations in ion-trap experiments. We exemplify this by means of numerical simulations exploiting the rich physics of the quantum Rabi model in three completely different parameter regimes.
This work is supported by an Alexander von Humboldt Professorship, the EU STREP project EQUAM, the ERC Synergy grant BioQ and the CRC TRR21. The authors acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant no. INST 40/467-1 FUGG. J. C. acknowledges support from the Alexander von Humboldt Foundation.
Zeroth layer realization of the quantum Rabi model {#ap:1}
==================================================
Here we briefly recall the procedure for realizing the Rabi model and the Dirac equation without resorting to the CCD scheme, as shown in [@Pedernales:15].
A tunable quantum Rabi model can be realized as follows. The trapped-ion Hamiltonian, in the rotating frame with respect to $\omega_I/2\sigma_z+\nu{a^{\dagger}a}$ and after the optical RWA, reads $$\begin{aligned}
\label{eq:H0I}
H_0^I=&\frac{\delta_m(t)}{2}\sigma_z+\frac{\Omega_1}{2}\left[\sigma^+e^{i\eta_1\left(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t} \right)}e^{i(\Delta_1t-\phi_1)}+ \textrm{H.c.}\right]\nonumber \\
&+\frac{\Omega_2}{2}\left[\sigma^+e^{i\eta_2\left(ae^{-i\nu t}+{a^{\dagger}}e^{i\nu t} \right)}e^{i(\Delta_2t-\phi_2)}+ \textrm{H.c.}\right].\end{aligned}$$ Now, choosing frequency detunings such that $\Delta_1=\nu+\delta_1$ and $\Delta_2=-\nu+\delta_2$, together with $\Omega_{1,2}=\Omega$, $\eta_{1,2}=\eta$ and $\phi_{1,2}=3\pi/2$, we obtain $$\begin{aligned}
H_0^I&=\frac{\delta_m(t)}{2}\sigma_z-\frac{\eta\Omega}{2}\left[\sigma^+\left(ae^{i\delta_1t}+{a^{\dagger}}e^{i\delta_2t}\right) +\textrm{H.c.}\right]\\
&=\frac{\delta_m(t)}{2}\sigma_z-\frac{\eta\Omega}{2}\left[(\sigma^+e^{i\tilde{\Omega}_0t}+\sigma^-e^{-i\tilde{\Omega}_0t})(ae^{-i\tilde{\omega}_0t}+{a^{\dagger}}e^{i\tilde{\omega}_0t}) \right],\end{aligned}$$ which corresponds to a Rabi model in a rotating frame with respect to $\tilde{\Omega}_0/2\sigma_z+\tilde{\omega}_0{a^{\dagger}a}$, where $\tilde{\Omega}_0=(\delta_1+\delta_2)/2$ and $\tilde{\omega}_0=(\delta_2-\delta_1)/2$.
The Dirac equation is realized in a straightforward manner by choosing $\delta_{1,2}=\delta$, $\phi_1=\pi$, $\phi_2=0$, $\eta_{1,2}=\eta$ and $\Omega_{1,2}=\Omega$. Then Eq. (\[eq:H0I\]) adopts the following form $$\begin{aligned}
H_0^I\approx \frac{\delta_m(t)}{2}\sigma_z+\eta\Omega\left[\sigma^+e^{i\delta t}+\sigma^-e^{-i\delta t}\right]\hat{p},\end{aligned}$$ where $\hat{p}=i({a^{\dagger}}-a)/2$. The previous Hamiltonian is then equivalent to the Dirac Hamiltonian $H_D=\frac{\delta}{2}\sigma_z+\eta\Omega\sigma_x\hat{p}$ in a rotating frame with respect to $\delta/2\sigma_z$ (omitting fluctuations). Thus, $c_D=\eta\Omega$ and $m_Dc^2=\delta/2$.
Measurement of vibrational operators {#ap:2}
====================================
After the system evolution within the CCD scheme, the final state is $|\psi(t')\rangle$. We can then use another ion, initialized in the state ${{\left|\textstyle{\uparrow}\right\rangle}}$, which therefore does not suffer from the action of the noisy term $\delta_m(t)/2 \sigma_z^A$, where $\sigma_i^A$ are the Pauli operators of the ancillary ion; hence, it does not require CCD protection. After the final time $t'$, a short evolution of duration $t$ of the form $U = e^{-i\Omega t \sigma_x^A \hat{x}}$ is applied to the state ${\left|\textstyle{\psi(t')}\right\rangle}{{\left|\textstyle{\uparrow}\right\rangle}}$. It is then straightforward to show that $$\partial_t \langle \sigma_y^A \rangle\bigg|_{t=0} = 2\Omega \langle \psi(t') | \hat{x} | \psi(t') \rangle.$$
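For completeness, with $\hbar=1$ this relation follows from a short Heisenberg-picture computation (the overall sign depends on the Pauli-operator conventions, which we do not fix here): $$\partial_t \langle \sigma_y^A \rangle\big|_{t=0} = i\big\langle [\Omega\, \sigma_x^A \hat{x},\, \sigma_y^A] \big\rangle = i\Omega \langle \hat{x} \rangle \big\langle [\sigma_x^A, \sigma_y^A] \big\rangle = -2\Omega \langle \hat{x} \rangle \langle \sigma_z^A \rangle,$$ where the expectation values factorize because ${\left|\textstyle{\psi(t')}\right\rangle}{{\left|\textstyle{\uparrow}\right\rangle}}$ is a product state and $[\sigma_x,\sigma_y]=2i\sigma_z$. Since the ancilla starts in a $\sigma_z$ eigenstate, $|\langle\sigma_z^A\rangle|=1$ and the magnitude of the initial slope directly yields $2\Omega\langle\psi(t')|\hat{x}|\psi(t')\rangle$.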
References {#references .unnumbered}
==========
Nielsen M A and Chuang I L [*Quantum Computation and Quantum Information*]{} (Cambridge University Press, Cambridge, England, 2000)
Feynman R P 1982 *Int. J. Theor. Phys.* [**21**]{} 467
Giovannetti V, Lloyd S and Maccone L 2004 *Science* [**306**]{} 1330
Gisin N and Thew R 2007 *Nature Photonics* [**1**]{} 165
Wu Y, Jelezko F, Plenio M B and Weil T 2016 *Angew. Chem. Int. Ed.* [**55**]{} 6586
Lidar D A 2012 *Adv. Chem. Phys.* [**154**]{} 295
Lidar D A and Brun T A *Quantum Error Correction* (Cambridge University Press, 2013)
Souza A M, Álvarez G A and Suter D 2012 *Phil. Trans. R. Soc. A* [**370**]{} 4748
Cai J M, Naydenov B, Pfeiffer R, McGuinness L P, Jahnke K D, Jelezko F, Plenio M B and Retzker A 2012 *New J. Phys.* [**14**]{} 113023
Bermudez A, Schmidt P O, Plenio M B and Retzker A 2012 *Phys. Rev. A* [**85**]{}(4) 040302
Lemmer A, Bermudez A and Plenio M B 2013 *New J. Phys.* [**15**]{} 083001
Cohen I, Weidt S, Hensinger W K and Retzker A 2015 *New J. Phys.* [**17**]{} 043008
Mikelsons G, Cohen I, Retzker A and Plenio M B 2015 *New J. Phys.* [**17**]{} 053032
Carr H Y and Purcell E M 1954 *Phys. Rev.* [**94**]{} 630
Meiboom S and Gill D 1958 *Rev. Sci. Instrum.* [**29**]{} 688
Casanova J, Wang Z Y, Haase J F and Plenio M B 2015 *Phys. Rev. A* [**92**]{} 042304
Timoney N, Baumgart I, Johanning M, Varón A F, Plenio M B, Retzker A and Wunderlich Ch 2011 *Nature* [**476**]{} 185
Tan T R, Gaebler J P, Bowler R, Lin Y, Jost J D, Leibfried D and Wineland D J 2013 *Phys. Rev. Lett.* [**110**]{} 263002
Uys H, Biercuk M J and Bollinger J J 2009 arXiv:0904.0036
Biercuk M J, Uys H, VanDevender A P, Shiga N, Itano W M and Bollinger J J 2009 arXiv:0906.0398
Biercuk M J, Uys H, VanDevender A P, Shiga N, Itano W M and Bollinger J J 2009 *Phys. Rev. A* [**79**]{} 062324
Biercuk M J, Uys H, VanDevender A P, Shiga N, Itano W M and Bollinger J J 2009 *Nature* [**458**]{} 996
Cohen I, Richerme P, Gong Z-X, Monroe C and Retzker A 2015 *Phys. Rev. A* [**92**]{} 012334
Lamata L, León J, Schätz T and Solano E 2007 *Phys. Rev. Lett.* [**98**]{} 253005
Gerritsma R, Kirchmair G, Zähringer F, Solano E, Blatt R and Roos C F 2010 *Nature* [**463**]{} 68
Casanova J, García-Ripoll J J, Gerritsma R, Roos C F and Solano E 2010 *Phys. Rev. A* [**82**]{} 020101(R)
Gerritsma R, Lanyon B P, Kirchmair G, Zähringer F, Hempel C, Casanova J, García-Ripoll J J, Solano E, Blatt R and Roos C F 2011 *Phys. Rev. Lett.* [**106**]{} 060503
Hwang M J, Puebla R and Plenio M B 2015 *Phys. Rev. Lett.* [**115**]{} 180404
Puebla R, Hwang M J, Casanova J and Plenio M B 2016 arXiv:1607.03781
Jaynes E T and Cummings F W 1963 *Proc. IEEE* [**51**]{} 89
Casanova J, Romero G, Lizuain I, García-Ripoll J J and Solano E 2010 *Phys. Rev. Lett.* [**105**]{} 263603
Uhlenbeck G E and Ornstein L S 1930 *Phys. Rev.* [**36**]{} 823
Wang M C and Uhlenbeck G E 1945 *Rev. Mod. Phys.* [**17**]{} 323
Gillespie D T 1996 *Phys. Rev. E* [**54**]{}(2) 2084
Wineland D J, Monroe C, Itano W M, Leibfried D, King B E and Meekhof D M 1998 *J. Res. Natl. Inst. Stand. Technol.* [**103**]{} 259
Bermudez A, Bruderer M and Plenio M B 2013 *Phys. Rev. Lett.* [**111**]{}(4) 040601
Lemmer A, Bermudez A and Plenio M B 2015 *Proceedings of the International School of Physics "Enrico Fermi", Course 189*, edited by M. Knoop, I. Marzoli and G. Morigi
Leibfried D, Blatt R, Monroe C and Wineland D J 2003 *Rev. Mod. Phys.* [**75**]{} 281
Brandl M F *et al* 2016 arXiv:1607.04980
Schmidt-Kaler F, Gulde S, Riebe M, Deuschle T, Kreuter A, Lancaster G, Becher C, Eschner J, Häffner H and Blatt R 2003 *J. Phys. B: At. Mol. Opt. Phys.* [**36**]{} 623
Pedernales J S, Lizuain I, Felicetti S, Romero G, Lamata L and Solano E 2015 *Scientific Reports* [**5**]{} 15472
Häffner H, Roos C F and Blatt R 2008 *Physics Reports* [**469**]{} 155
Fisher M E and Barber M N 1972 *Phys. Rev. Lett.* [**28**]{} 1516
Botet R, Jullien R and Pfeuty P 1982 *Phys. Rev. Lett.* [**49**]{} 478
Acevedo O L, Quiroga L, Rodríguez F J and Johnson N F 2014 *Phys. Rev. Lett.* [**112**]{} 030403
Myerson A H, Szwer D J, Webster S C, Allcock D T C, Curtis M J, Imreh G, Sherman J A, Stacey D N, Steane A M and Lucas D M 2008 *Phys. Rev. Lett.* [**100**]{} 200502
Burrell A H, Szwer D J, Webster S C and Lucas D M 2010 *Phys. Rev. A* [**81**]{} 040302
Togolese footballer Francis Kone won an award after his quick-thinking on the pitch saved an opponent's life.
High on a shelf in an unassuming living room on the outskirts of the Czech city Brno sits a trophy with the words 'THE BEST - FIFA' on it.
It's a modest place for such a prestigious award yet its equally-humble owner hopes its symbolism can change one of football's - and society's - greatest scourges.
Francis Kone was awarded the Fifa prize last month after the Togo international, then playing for Slovacko FC, saved the life of Bohemians' Martin Berkovec while playing a Czech league game in February.
He did so by reacting quickest after the Bohemians goalkeeper was knocked out by one of his own players - putting his fingers inside Berkovec's mouth to stop him from swallowing his tongue.
Although it may not strictly meet medical guidelines, the act has become something of a speciality for Kone who has saved four players this way.
Yet the presence of TV cameras this time around meant he has since shot to global fame.
Nonetheless, he still didn't expect to receive the annual Fifa Fair Play Award, his name glittering brightly alongside the likes of Cristiano Ronaldo, Gianluigi Buffon and Zinedine Zidane during October's ceremony in London.
"I was very surprised. It was like a dream," said the Ivorian-born Kone, who has won two caps for Togo because of his mother's nationality.
Over the shock, the 26-year-old now believes the award can help combat the scourge of racism - the striker suffered racial abuse from some Bohemian fans during the opening 30 minutes of February's match in Prague.
"It is a message. Fair play means something like this too - to stop racism, this is fair play," he said.
"I know what they said because they called my club to apologise. They said thanks because I saved one of their players. They apologised because they were saying bad things, like monkey and many other things.
"It's not normal to treat a person as like a monkey - it's incorrect. Football is fair play and I showed them that. Football is not what they are doing with racism. They have to stop it - for me, that's the message."
Kone (left) was labelled a hero after his actions in Prague's Dolicek Stadium on 25 February
Kone's heroic actions did not just change the attitudes of some Bohemians fans, he says, but nearly all the people he meets in a country where he has become a local celebrity.
"Now there is more respect because the way they looked at me and treated me before is not the same as the way as they look at me and treat me now. Now there is more friendliness, a better relationship. For them, I am famous."
One of Kone's proudest moments came when he travelled with FC Zbrojovka Brno, who he joined in July, to the Czech capital last month to play at the home of reigning champions Slavia Prague.
"I was warming up and I could hear the fans say 'it's that guy who saved Martin' and after that, they started clapping and chanting 'Francis' - not my fans, but the fans of Slavia," he says with a sense of wonder.
"The feeling made me like 'Oh My God' and I was not concentrating on warming up - I was away - because it was amazing, very amazing."
Martin Berkovec was quick to praise Kone after the incident and the pair have stayed in touch
From racial abuse to widespread applause, it's quite a leap, quite a journey.
After starting out at FC Bibo in Ivory Coast (the same academy where his late friend Cheick Tiote came through), Kone played in Thailand, Oman, Portugal and Hungary before moving to Czech Republic.
He says he was racially abused in only his third game in European football, when his Portuguese side Olhanense played at the home of two-time European champions Porto in December 2013.
A combination of the abuse and shock - "it was very scary" - meant he almost walked away from football there and then, but he stayed on and now believes he can play a vital role.
"I am a Christian, I pray a lot, I fast a lot so I think God is trying to give me a message. Maybe I am a little angel sent by him. There are some people now who see me as an angel - I have a lot of messages like this. I say 'I am just Kone Francis and it is God who is doing this, not me.'"
Like many, Kone is flummoxed that he has saved the lives of four people in a similar way, describing it as "not normal - but something extra-normal."
After hearing how two people in his neighbourhood died in such fashion, he was inspired to learn from an older friend who stopped another death by preventing the individual from swallowing their tongue.
Francis Kone has dedicated his Fifa Fair Play award to his mother, Akoudji Yawavi Victorine
Kone says he has since saved players' lives in Thailand (2011) and twice in Abidjan (2013 and 2015), despite the physical pain that can come his way.
He has scars on his fingers and surprisingly high up his hand, explaining that people who are slipping away find a lot of power "because they are fighting for their lives" - the problem being that any potential saver tends to have to slip his fingers between teeth that often want to clench tightly shut.
"Martin bit me but the second guy bit me so much - it was horrible," he says. "All his mouth was blood - blood from his mouth, blood from my fingers."
Francis Kone (right) and his agent pose with the Togo international's Fifa Fair Play Award
Nonetheless, he still wasted no time in helping Martin and the pair have stayed in touch, meeting twice despite their conversation being limited by language.
"Unfortunately my English is not that good, or it's as good as his Czech, but we talk somehow so we go for a coffee," Berkovec, who now plays for MFK Karvina, told BBC Sport.
"I remember only what happened before the collision and then I blacked out. The following day I saw what had happened, through the media, and it was horrible. I thank him for saving my life.
"He also got the (Fifa) prize for it so I congratulate him, even though it would be much better if that hadn't happened at all."
Despite the knock-on effects of his actions in south-eastern Prague in February and his belief in the message the Fifa awards sends to racists, Kone shares the sentiment.
"I don't want this to happen again - really - because it's so dangerous and so horrible," explains one of football's quiet heroes. "But if it happens, I have to do what I can do."
Edward Grochowicz
Edward Grochowicz (born May 21, 1939 in Warsaw, died March 8, 2014) was a Polish photographer.
Life and career
He graduated from the Technical School of Photography in Warsaw. In 1961 he became a member of the Creative Group "Stodoła 60" (from 1964, "Group ST-60"). He was a long-term vice-president of the Association of Polish Artists Photographers and chairman of the College of Appraisers at the City Hall of Warsaw. He also chaired the appeals board at the Ministry of Culture and Art, served on the ministry's policy council and scholarship committee, and was a co-founder and member of the Polish Culture Foundation.
He participated in more than 200 national and international exhibitions, among others: the 2nd International Exhibition of Photography (Warsaw 1961), "Photographers seeking" (Warsaw 1971), National Photography Exhibition of X Biennale of Polish Landscape (Kielce 1987). Won many awards, among others the Medal of Merit of Culture. Awarded by the International Federation of Photographic Art with titles of honor: Artiste FIAP (AFIAP) and Excellence FIAP (EFIAP).
Edward Grochowicz's photographs are currently held in the Archives of the KARTA Center.
References
Category:1939 births
Category:2014 deaths
Category:Polish photographers
Category:People from Warsaw
Background {#Sec1}
==========
Diabetes is a chronic disease responsible for high rates of morbidity and mortality, which can largely be attributed to atherosclerosis and cardiovascular disease \[[@CR1]\]. It is estimated that type II diabetes doubles the risk of cardiovascular disease even after adjustment for other cardiovascular risk factors \[[@CR2]\]. Despite the increasing rate of treatment of diabetic patients with statins and glucose-lowering drugs achieving target glycated hemoglobin (HBA1C) and low-density lipoprotein (LDL) levels \[[@CR3]\], another strategy for effective management of diabetes lies in addressing the disease process at an earlier stage \[[@CR1]\]. Prediabetes is a collective term that encompasses individuals with glucose levels lower than the cutoff levels for diabetes but too high to be considered normal. It is the term used for individuals with impaired fasting glucose (IFG) and/or impaired glucose tolerance (IGT) and/or HbA1C levels ranging from 5.7 to 6.4% \[[@CR3]\]. Prediabetes is not an uncommon condition, with an estimated worldwide prevalence of 343 million individuals expected to rise to 471 million by 2035 \[[@CR4]\]. Prediabetes is a serious clinical condition that not only increases the risk of developing diabetes but also increases the burden of cardiovascular disease risk. Compared to normoglycemic individuals, patients with prediabetes show a 20% higher risk of developing cardiovascular disease (CVD) \[[@CR5]\]. Prediabetes is a toxic state in which both micro- and macrovascular complications of diabetes can manifest \[[@CR6]\]. Prompt diagnosis and proper management of prediabetes are necessary to prevent progression to diabetes mellitus and to prevent the microvascular and macrovascular complications that manifest early in the prediabetic state \[[@CR7]\].
Aim of the work {#Sec2}
===============
To observe the effect of prediabetes on the severity of coronary artery disease in patients undergoing elective coronary angiography.
Methods {#Sec3}
=======
The current study was carried out at the cardiology department at a university hospital.
Inclusion criteria {#Sec4}
------------------
Patients who were admitted for elective coronary angiography and/or PCI starting from September 2017 to August 2018.
Exclusion criteria {#Sec5}
------------------
No exclusion criteria were applied.
After an informed written consent, all patients involved in the study were subjected to:
**A- History taking and examination** with special emphasis on age, sex, risk factors for coronary artery disease (smoking, HTN, DM, dyslipidemia, positive family history for premature CVDs), history of CKD detected either by reduction in GFR or high serum creatinine, history of prior percutaneous coronary intervention (PCI) or coronary arteries bypass grafting (CABG), or acute coronary syndrome (ACS).
**B- Laboratory tests**: Level of HBA1C and serum creatinine on admission.
**C- Estimation of renal function**: eGFR was estimated using the MDRD formula:

eGFR = 186 × (serum creatinine)^−1.154^ × (age)^−0.203^ × (1.210 if black) × (0.742 if female) \[[@CR8]\]
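In code, the four-variable MDRD formula is a direct transcription (creatinine in mg/dL, eGFR in mL/min/1.73 m²; the function and variable names below are illustrative, not part of the study's software):

```python
def egfr_mdrd(creatinine_mg_dl, age_years, female=False, black=False):
    """Four-variable MDRD estimate of GFR in mL/min/1.73 m^2."""
    egfr = 186.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if black:
        egfr *= 1.210
    if female:
        egfr *= 0.742
    return egfr

# e.g. a 57-year-old non-black male with serum creatinine 1.0 mg/dL
print(round(egfr_mdrd(1.0, 57), 1))  # about 81.9
```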
**D- Interventional data**: Number of vessels affected and atherosclerotic burden of CAD assessed by the Gensini score \[[@CR9]\]. For patients undergoing PCI, additional data were collected regarding the number, type, and total length of stents used.
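For reference, the Gensini score assigns each lesion a severity value (1, 2, 4, 8, 16, and 32 points for 25%, 50%, 75%, 90%, 99%, and total occlusion, respectively) multiplied by a segment-importance factor, summed over all lesions \[[@CR9]\]. The sketch below is illustrative only: the segment weights shown are a commonly quoted subset (e.g., 5 for the left main), and the full published weighting table should be consulted for actual scoring:

```python
# Severity points per lesion by diameter-stenosis threshold
SEVERITY = ((25, 1), (50, 2), (75, 4), (90, 8), (99, 16), (100, 32))

# Illustrative subset of segment multipliers (not the complete table)
SEGMENT_WEIGHT = {
    "left_main": 5.0,
    "lad_proximal": 2.5,
    "lad_mid": 1.5,
    "lcx_proximal": 2.5,
    "rca_proximal": 1.0,
}

def severity_points(stenosis_pct):
    """Map a % diameter stenosis (assumed to be a scored lesion) to severity points."""
    for threshold, points in SEVERITY:
        if stenosis_pct <= threshold:
            return points
    raise ValueError("stenosis percentage cannot exceed 100")

def gensini_score(lesions):
    """Sum severity x segment weight over (segment, stenosis%) lesion tuples."""
    return sum(severity_points(pct) * SEGMENT_WEIGHT[seg] for seg, pct in lesions)

# Example: a 75% proximal LAD lesion plus a 50% proximal RCA lesion
print(gensini_score([("lad_proximal", 75), ("rca_proximal", 50)]))  # 4*2.5 + 2*1.0 = 12.0
```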
The studied patients were divided according to HbA1C level to 3 groups:
1- Group A: Normoglycemic patients (HBA1C \< 5.7%)
2- Group B: Prediabetic patients (HBA1C 5.7--6.4%)
3- Group C: Diabetic patients (HBA1C \> 6.4%) \[[@CR10]\]
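The grouping rule amounts to two threshold checks on HbA1c; a one-function transcription (function name is ours):

```python
def glycemic_group(hba1c_pct):
    """Assign the study group from an HbA1c value (%) using the cutoffs above."""
    if hba1c_pct < 5.7:
        return "A"  # normoglycemic
    if hba1c_pct <= 6.4:
        return "B"  # prediabetic
    return "C"      # diabetic

print([glycemic_group(x) for x in (5.2, 6.0, 7.5)])  # ['A', 'B', 'C']
```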
Statistical analysis {#Sec6}
--------------------
Data were collected and revised on a PC, then tabulated and statistically analyzed using SPSS 17 software. Parametric numerical data were summarized as mean ± standard deviation (SD) and range, while nonparametric numerical data were summarized as the median. Student's t-test was used to assess the statistical significance of the difference between two study group means. The Mann--Whitney test (U test) was used to assess the statistical significance of the difference of a nonparametric variable between two study groups. The chi-squared test was used to examine the relationship between two qualitative variables, and Fisher's exact test was used instead when the expected count was less than 5 in more than 20% of cells.
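As an illustration of the chi-squared computation (the statistic is the sum of (O − E)²/E over the contingency cells, with E = row total × column total / grand total), a stdlib-only sketch applied to the sex distribution across the three study groups reproduces the reported test value; the function is ours (SPSS was the actual tool used), and the counts are taken from the results tables:

```python
def chi_squared(table):
    """Pearson chi-squared statistic for an r x c contingency table (rows = groups)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for row, rt in zip(table, row_totals):
        for obs, ct in zip(row, col_totals):
            expected = rt * ct / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Male/female counts in groups A, B, C: 171/228, 132/177, 213/326
sex = [[171, 57], [132, 45], [213, 113]]
print(round(chi_squared(sex), 3))  # 7.823, matching the reported test value
```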
Results {#Sec7}
=======
Patients were divided into group A (normoglycemic group, *N* = 228), group B (prediabetes group, *N* = 177), and group C (diabetic group, *N* = 326). Prediabetics represented 24% of the study population (Table [1](#Tab1){ref-type="table"}).

Table 1. Group classification

| HBA1C | Group A (*n* = 228; 31.2%) | Group B (*n* = 177; 24.2%) | Group C (*n* = 326; 44.6%) |
|---|---|---|---|
| Mean ± SD | 5.25 ± 0.24 | 6.00 ± 0.22 | 8.92 ± 1.60 |
| Range | 4.5--5.6 | 5.7--6.4 | 6.5--13 |
Among patients with HBA1C in the prediabetic range, only 8 patients were known to be prediabetic and on medical treatment. Among the diabetic group, 7% of patients were newly diagnosed, meaning that newly diagnosed prediabetics and diabetics together represent 26% of the study population.
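The 26% figure can be checked by direct arithmetic (8 of the 177 prediabetics were previously known; 7% of the 326 diabetics were newly diagnosed):

```python
new_prediabetics = 177 - 8               # prediabetics not previously diagnosed
new_diabetics = round(0.07 * 326)        # about 23 patients
share = (new_prediabetics + new_diabetics) / 731
print(f"{share:.0%}")  # 26%
```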
Demographic and clinical characteristics {#Sec8}
----------------------------------------
There was no significant difference regarding age among the three groups, yet group C showed a lower prevalence of male gender and a lower prevalence of smoking. Both the diabetic and the prediabetic groups showed a significantly higher prevalence of HTN. The normoglycemic group showed a stronger family history of CAD (Table [2](#Tab2){ref-type="table"}).

Table 2. Demographic and clinical characteristics of the groups

| | Group A (No. = 228) | Group B (No. = 177) | Group C (No. = 326) | Test value | *P* value | Sig. | P1 | P2 | P3 |
|---|---|---|---|---|---|---|---|---|---|
| Age (years) | 56.68 ± 9.21 | 57.10 ± 9.84 | 58.26 ± 8.87 | 2.166 | 0.115 | NS | -- | -- | -- |
| Sex (male) | 171 (75.0%) | 132 (74.6%) | 213 (65.3%) | 7.823 | 0.020 | S | 0.924 | 0.015 | 0.033 |
| Smoking | 114 (50.0%) | 99 (55.9%) | 111 (34.0%) | 26.588 | 0.000 | S | 0.236 | 0.000 | 0.000 |
| HTN | 84 (36.8%) | 90 (50.8%) | 198 (60.7%) | 30.649 | 0.000 | S | 0.005 | 0.000 | 0.032 |
| Dyslipidemia | 120 (52.6%) | 86 (49.0%) | 165 (50.6%) | 0.66 | 0.7 | NS | -- | -- | -- |
| Known CKD | 12 (5.3%) | 9 (5.1%) | 10 (3.1%) | 2.002 | 0.367 | NS | -- | -- | -- |
| Family history of CAD | 97 (42.5%) | 36 (20.3%) | 82 (25.2%) | 28.804 | 0.000 | S | 0.000 | 0.000 | 0.224 |

*P* value > 0.05, nonsignificant; *P* value < 0.05, significant; *P* value < 0.01, highly significant. \*: Chi-squared test; •: one-way ANOVA test. P1: *P* value group A vs group B; P2: *P* value group A vs group C; P3: *P* value group B vs group C.
Assessment of renal function {#Sec9}
----------------------------
On comparing the three groups, there was no significant difference regarding the mean eGFR or prevalence of CKD (Table [3](#Tab3){ref-type="table"}).

Table 3. Assessment of renal function

| | Group A (No. = 228) | Group B (No. = 177) | Group C (No. = 326) | Test value | *P* value | Sig. |
|---|---|---|---|---|---|---|
| Creatinine, mean ± SD | 1.03 ± 0.26 | 1.05 ± 0.25 | 1.02 ± 0.29 | 0.980• | 0.376 | NS |
| Creatinine, range | 0.6--1.6 | 0.6--1.9 | 0.5--2.9 | | | |
| eGFR, mean ± SD | 78.86 ± 23.31 | 76.92 ± 23.28 | 78.41 ± 24.12 | 0.361• | 0.697 | NS |
| eGFR, range | 35--142 | 37--142 | 25--149 | | | |
| CKD | 54 (23.7%) | 33 (18.6%) | 75 (23.0%) | 1.711\* | 0.425 | NS |

*P* value > 0.05, nonsignificant; *P* value < 0.05, significant; *P* value < 0.01, highly significant. \*, Chi-squared test; •, one-way ANOVA test. P1: Group A vs group B; P2: Group A vs group C; P3: Group B vs group C.
Prior history of ischemia {#Sec10}
-------------------------
There was no significant difference in history of PCI or CABG prior to the current procedure between the different groups with significantly higher prevalence of prior ACS in patients with prediabetes (Table [4](#Tab4){ref-type="table"}).

Table 4. History of CAD among the different groups

| | Group A, No. (%) | Group B, No. (%) | Group C, No. (%) | Test value\* | *P* value | Sig. | P1 | P2 | P3 |
|---|---|---|---|---|---|---|---|---|---|
| Prior PCI | 42 (18.4%) | 39 (22.0%) | 75 (23.0%) | 1.747 | 0.417 | NS | -- | -- | -- |
| Prior CABG | 9 (3.9%) | 3 (1.7%) | 15 (4.6%) | 2.784 | 0.249 | NS | -- | -- | -- |
| Prior ACS | 81 (36.0%) | 93 (52.5%) | 114 (35.0%) | 16.543 | 0.000 | S | 0.001 | 0.803 | 0.000 |

*P* value > 0.05, nonsignificant; *P* value < 0.05, significant; *P* value < 0.01, highly significant. \*: Chi-squared test. P1: Group A vs group B; P2: Group A vs group C; P3: Group B vs group C.
Interventional data {#Sec11}
-------------------
Regarding the type of procedure performed, group A showed a lower rate of PCI compared to group C. Both group B and group C showed a larger number of vessels with significant disease when compared to group A. LM disease was significantly more frequent in groups B and C when compared to group A. Group B showed a more complex coronary anatomy, with a Gensini score higher than that of group A and comparable to that of group C. The type of stent used was similar among the different groups. The length of stents used was greater in the prediabetic group than in the normoglycemic group, denoting longer lesions (Table [5](#Tab5){ref-type="table"}, Figs. [1](#Fig1){ref-type="fig"} and [2](#Fig2){ref-type="fig"}).

Table 5. Interventional data among the different groups

| | Group A (No. = 228) | Group B (No. = 177) | Group C (No. = 326) | Test value | *P* value | Sig. | P1 | P2 | P3 |
|---|---|---|---|---|---|---|---|---|---|
| Procedure: CA | 99 (43.4%) | 66 (37.3%) | 116 (35.6%) | 14.507\* | 0.024 | S | 0.215 | 0.009 | 0.662 |
| Procedure: CA + PCI | 69 (30.3%) | 57 (32.2%) | 93 (28.5%) | | | | | | |
| Procedure: CA + PTCA | 3 (1.3%) | 0 (0.0%) | 0 (0.0%) | | | | | | |
| Procedure: PCI | 57 (25.0%) | 54 (30.5%) | 117 (35.9%) | | | | | | |
| No. of vessels: 0 | 42 (18.4%) | 18 (10.2%) | 38 (11.7%) | 41.574\* | 0.000 | S | 0.000 | 0.000 | 0.435 |
| No. of vessels: 1 | 102 (44.7%) | 54 (30.5%) | 87 (26.7%) | | | | | | |
| No. of vessels: 2 | 48 (21.1%) | 66 (37.3%) | 111 (34.0%) | | | | | | |
| No. of vessels: 3 | 36 (15.8%) | 39 (22.0%) | 87 (26.7%) | | | | | | |
| No. of vessels: 4 | 0 (0.0%) | 0 (0.0%) | 3 (0.9%) | | | | | | |
| LM disease | 11 (4.8%) | 21 (11.8%) | 34 (10.4%) | 7.418 | 0.0245 | S | 0.009 | 0.01 | 0.6 |
| Gensini score, median (IQR) | 35.75 (24--64.5) | 66 (49--94) | 65 (36--96) | 72.404≠ | 0.000 | S | 0.000 | 0.000 | 0.967 |
| Gensini score, range | 0--152 | 0--135 | 0--156 | | | | | | |
| Type of stent: none | 0 (0.0%) | 3 (2.7%) | 3 (1.4%) | 6.393\* | 0.172 | NS | -- | -- | -- |
| Type of stent: BMS | 0 (0.0%) | 3 (2.7%) | 3 (1.4%) | | | | | | |
| Type of stent: DES | 120 (100.0%) | 105 (94.6%) | 204 (97.1%) | | | | | | |
| No. of stents: 0 | 3 (2.4%) | 3 (2.7%) | 3 (1.4%) | 22.963\* | 0.003 | S | 0.103 | 0.000 | 0.331 |
| No. of stents: 1 | 78 (63.4%) | 54 (48.6%) | 90 (42.9%) | | | | | | |
| No. of stents: 2 | 39 (31.7%) | 48 (43.2%) | 99 (47.1%) | | | | | | |
| No. of stents: 3 | 0 (0.0%) | 3 (2.7%) | 15 (7.1%) | | | | | | |
| No. of stents: 4 | 3 (2.4%) | 3 (2.7%) | 3 (1.4%) | | | | | | |
| Stent length, median (IQR) | 33 (21.5--50) | 42 (32.5--59) | 48 (28--66) | 16.055≠ | 0.000 | S | 0.004 | 0.000 | 0.500 |
| Stent length, range | 12--110 | 12--96 | 10--147 | | | | | | |

*P* value > 0.05, nonsignificant; *P* value < 0.05, significant; *P* value < 0.01, highly significant. \*, Chi-squared test; •, one-way ANOVA test. P1: Group A vs group B; P2: Group A vs group C; P3: Group B vs group C.

Fig. 1. CAD severity among different groups represented by Gensini score (median and IQR)

Fig. 2. Length of stents used among different groups
Discussion {#Sec12}
==========
Our study included 731 patients who presented to our university hospital to undergo elective coronary angiography for the diagnosis and treatment of CAD from September 2017 to August 2018. We aimed to evaluate the effect of prediabetes on angiographic outcomes in those patients. One hundred and seventy-seven patients were prediabetic, constituting 24% of the study population. A similar prevalence of prediabetes was demonstrated among elective PCI patients and ACS patients in various registries \[[@CR11], [@CR12]\]. Patients with prediabetes had the same age range as diabetics and normoglycemic subjects, yet female gender was more prevalent among the diabetic group. This can be explained by the findings of Kodama et al. \[[@CR13]\] suggesting that cardiovascular risk in the diabetic population is higher among women than in men. Although Kataoka et al. \[[@CR14]\] and Choi et al. \[[@CR11]\] found no significant difference in age between normoglycemic and prediabetic groups, the results of both showed male preponderance across the different groups. There were more smokers in the prediabetes group compared to diabetics (Choi et al.) \[[@CR11]\]. However, we found that smoking was not significantly different between normoglycemic patients and prediabetics. There was a parallel increase in the prevalence of hypertension with increasing HBA1C. This can be attributed to insulin resistance promoting both hypertension and diabetes (Sowers) \[[@CR15]\] or to a myriad of genetic and environmental factors contributing to the development of both diabetes and hypertension \[[@CR16]\]. Choi et al. \[[@CR11]\] likewise demonstrated a higher prevalence of hypertension among prediabetic patients than normoglycemic patients undergoing elective PCI, and Zhang et al. \[[@CR17]\] demonstrated that hypertension was more common in prediabetics than in normoglycemic subjects, and in the diabetic group more than in the prediabetic group.
Patients with prediabetes had a prevalence of dyslipidemia comparable to that of diabetics and normoglycemic subjects. Nakamura et al. \[[@CR18]\] demonstrated that among CAD patients, prediabetics and diabetics showed a higher prevalence of dyslipidemia, yet this was evident in postprandial lipid levels and not in the fasting lipid levels used as the standard screening test. Similarly, Açar et al. \[[@CR12]\] found no difference in the prevalence of dyslipidemia between prediabetic, normoglycemic, and diabetic subjects. The prevalence of CKD was not significantly different among the three groups, although diabetes is a well-known comorbid risk factor for CKD \[[@CR19]\] and CKD pathophysiology begins in prediabetic subjects \[[@CR20]\]. These results are similar to those of Zhang et al. \[[@CR17]\] and Choi et al. \[[@CR11]\], who found no significant difference in the prevalence of CKD among CAD patients. This can be attributed to the fact that CKD hinders both pharmacological and interventional treatment of cardiovascular disease and increases the risk of contrast-induced nephropathy with worsening of renal function, so the management plan of CAD in CKD patients is directed towards more conservative management \[[@CR21], [@CR22]\]. Prediabetic subjects showed coronary artery involvement by a more aggressive atherosclerotic process, resulting in CAD severity that was significantly higher than in normoglycemic subjects and comparable to that in diabetic subjects. The number of coronary arteries with significant disease was higher in the prediabetic group than in the normoglycemic group, yet there was no significant difference when compared with the diabetic group. This is similar to the findings of Santos et al. \[[@CR23]\], who demonstrated that among patients with CAD confirmed by angiography, prediabetes was more commonly associated with multivessel disease. In addition, Açar et al.
\[[@CR12]\] found that among patients presenting with acute coronary syndrome, diabetic and prediabetic patients showed a significantly higher prevalence of three-vessel disease when compared to normoglycemic patients. The complexity of CAD assessed by the Gensini score was higher in prediabetic than in normoglycemic subjects and comparable with that in diabetics. This is similar to the results of Açar et al. \[[@CR12]\], where patients with prediabetes and diabetes showed a more complex coronary anatomy than normoglycemic subjects, with a higher proportion of patients with three-vessel disease and higher CAD severity assessed by both SYNTAX and Gensini scores. This is in accordance with the results of Kataoka et al. \[[@CR14]\]; both the prediabetes group and the diabetes group showed a higher Gensini score when compared to those without diabetes. The glycemic state did not affect the type of stent used, with drug-eluting stents (DESs) used in most patients across the three groups. This goes hand in hand with Choi et al. \[[@CR11]\], as all patients of the different groups received DESs. When comparing the length of stents used among the different groups, both prediabetics and diabetics required significantly longer stents than normoglycemic patients. This can be attributed to the findings of De Rosa et al. \[[@CR24]\], who assessed plaque characteristics in stable CAD patients and demonstrated that both prediabetes and diabetes were associated with a higher and longer plaque burden. Zhang et al. \[[@CR17]\] assessed OCT data regarding non-infarct-related plaques in patients presenting with ACS and found that raised HBA1C in prediabetic subjects was associated with a more complex and active plaque structure, with longer lipid length, higher lipid content, thinner fibrous cap, higher macrophage infiltration, wider lipid arc, and more calcification than in normal subjects, but was comparable to diabetic subjects. HBA1C was independently associated with significantly higher lipid length.
Those results agree with the findings of Kataoka et al. \[[@CR14]\], who demonstrated that both prediabetes and diabetes were associated with a higher average lesion length in patients with CAD assessed by quantitative coronary angiography. Similarly, Choi et al. \[[@CR11]\] found significantly longer lesions in prediabetics than in normoglycemic subjects.
Conclusion {#Sec13}
==========
Prediabetes is not merely a step on the way to diabetes; it is a stage of diabetes that shows a similar atherosclerotic disease progression, causing more complex coronary anatomy and requiring a greater number of longer stents. Yet this stage is often overlooked. Prediabetes confers a high but modifiable cardiovascular risk. Rigorous lifestyle intervention and medical treatment can reduce the risk of conversion to diabetes, promote regression to normoglycemia, and lessen the cardiovascular disease burden in this population.
ACS
: Acute coronary syndrome
ANOVA
: Analysis of variance
BMS
: Bare metal stent
CA
: Coronary angiography
CABG
: Coronary artery bypass grafting
CAD
: Coronary artery disease
CKD
: Chronic kidney disease
CVD
: Cardiovascular disease
DES
: Drug eluting stent
DM
: Diabetes mellitus
eGFR
: Estimated glomerular filtration rate
GFR
: Glomerular filtration rate
HTN
: Hypertension
IFG
: Impaired fasting glucose
IGT
: Impaired glucose tolerance
IQR
: Interquartile range
LDL
: Low-density lipoprotein
LM
: Left main
MDRD
: Modification of diet in renal disease
No.
: Number
OCT
: Optical coherence tomography
PCI
: Percutaneous coronary intervention
SD
: Standard deviation
**Publisher's Note**
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Not applicable.
AM, SA, and AS performed the angiographies and angioplasties and analyzed and interpreted the patient data. MTZ, AM, and AS were major contributors in writing the manuscript. All authors read and approved the final manuscript.
No financial support or scholarship was received.
All data and equipment were available at Ain Shams University.
Ethics approval and consent to participate
This study was approved by the Ethical Committee of Ain Shams University. All procedures in the study were in accordance with the 1975 Helsinki Declaration, as updated in 2013. Informed consent was obtained from all participants included in the study.
Not applicable.
The authors declare that they have no competing interests.
|
{
"pile_set_name": "PubMed Central"
}
|
# Generated by vio0 dhclient
search c.symbolic-datum-552.internal.
nameserver 169.254.169.254
nameserver 10.240.0.1
lookup file bind
|
{
"pile_set_name": "Github"
}
|
Likely to step in as acting secretary is Deputy Secretary David Bernhardt, a former oil and gas lobbyist who has played a key role in many regulatory rollbacks during Zinke’s time as agency chief.
In a statement posted to Twitter, Zinke said he is “extremely proud of all the good work” he and Trump accomplished but “cannot justify spending thousands of dollars defending myself and my family against false allegations.”
“It is better for the President and Interior to focus on accomplishments rather than fictitious allegations,” he added.
|
{
"pile_set_name": "OpenWebText2"
}
|
Detecting and deterring employee theft.
Physician group practices can limit their vulnerability to employee theft by taking steps to detect theft when it occurs and to deter future occurrences. Steps for detecting theft include being wary of an employee's refusal to take earned time off, conducting periodic credit checks on employees, rotating employees' duties, and conducting impromptu reviews of the practice's finances. Steps for deterring theft include routing the practice's checks to a lock box; reviewing cash reports; reconciling checks with deposit statements; separating employees' duties; reviewing bank, credit card, and ATM statements; setting the tone for prudent financial management; and reporting cases of theft when they occur.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Wearables have quickly become our generation’s water purifiers. If you are over 35, you know what I’m talking about. Everyone had one. Or two. Or three, and they all ended up in your garage. Loads of promise, but in practice, not a lot of use. While the iWatch is getting all the recent news cycle attention, a recent global patent filing by Google promises to turn wearables into something that can really make a difference in our lives: A wrist-worn watch that fights cancer. And the healthcare and tech community is taking notice.
According to the World Intellectual Property Organization (WIPO), the filing entitled "Nanoparticle Phoresis" describes a watch that could destroy cancer cells in a person's blood by automatically modifying or destroying one or more targets in the blood that have an adverse health effect, e.g. by destroying enzymes, hormones, proteins, cells and other things that can adversely affect a person's health.
Google X has been aiming at life sciences moonshots, and this is no exception. If successful, it figures to shift the paradigm in how cancer is treated. So how would it work?
Well, given that Google X recently revealed that it was developing a pill that could detect cancer cells by "painting" them with nanoparticles, making them magnetic, the idea could be for the watch to draw those magnetic particles – and the cells they are attached to – through the bloodstream to the wrist, where the watch destroys them.
|
{
"pile_set_name": "Pile-CC"
}
|
Outcomes of the Arterial Switch Operation in Children Less Than 2.5 Kilograms.
Children with body weight less than 2.5 kg who undergo the arterial switch operation (ASO) represent a challenging group. We sought to determine outcomes of patients with weight less than 2.5 kg at ASO at a single institution. All patients who underwent an ASO with biventricular repair and weighed less than 2.5 kg at time of surgery were identified from the hospital database and reviewed retrospectively. From 1983 to 2014, 870 patients underwent an ASO with biventricular repair at our institution. At the time of ASO, 31 patients (3.6%, 31 of 870) weighed less than 2.5 kg (mean 2.1; median 2.1; range, 1.1 to 2.4). Twenty-nine patients underwent an ASO for d-transposition of the great arteries, and 2 patients had an ASO for Taussig-Bing anomaly. Mean age at operation was 16 days (median 11; range, 3 to 66). There were 6 hospital deaths (19%, 6 of 31) among patients weighing less than 2.5 kg compared with a hospital mortality of 1.9% (16 of 839) among patients weighing more than 2.5 kg (p < 0.0001). Mortality for children weighing 2.0 kg or less was 50% (5 of 10) compared with a mortality of 2.8% (1 of 21) for children weighing more than 2.0 kg but less than 2.5 kg. Four patients (13%, 4 of 31) required reoperation during hospital admission. Follow-up was available for 24 survivors (96%, 24 of 25). Mean follow-up was 13.2 years (median 11.9; range, 6 months to 25 years). There were no late deaths. Two patients (8%, 2 of 24) required late reoperation. No patient had more than mild neoaortic valve regurgitation, and all survivors were in New York Heart Association class I at last follow-up. Early mortality for children weighing less than 2.5 kg undergoing the ASO remains high; however, most of the mortality occurred in children weighing 2.0 kg or less. Long-term outcomes for survivors are excellent.
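The headline comparison above is a simple difference of proportions; a quick sketch reproducing the reported mortality rates (counts taken directly from the abstract):

```python
def mortality_pct(deaths, n):
    """In-hospital mortality rate as a percentage."""
    return 100.0 * deaths / n

under_2_5kg = mortality_pct(6, 31)   # ASO patients < 2.5 kg
over_2_5kg = mortality_pct(16, 839)  # ASO patients >= 2.5 kg
under_2_0kg = mortality_pct(5, 10)   # subgroup weighing 2.0 kg or less
print(round(under_2_5kg, 1), round(over_2_5kg, 1), round(under_2_0kg, 1))
# matches the reported 19%, 1.9%, and 50%
```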
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Monday, April 25, 2016
Can BMW Fend Off The Charge of the Tesla Model 3? Part 2
In last week's post, we looked at the impact that Tesla's Model S has had on the sales of competing vehicles in the large luxury segment in the US. That set the table for the question of whether or not the Model 3 can have equal or perhaps even greater success in the entry level, premium segment when it hits the streets sometime in the end of 2017 or early 2018. That segment has been owned by BMW's 3-Series for decades, and BMW isn't going to just give it up without a fight.
But what exactly can they do? The Model 3 has captured the imagination of the public and Tesla has received over 400,000 reservations in the first three weeks since the reservation process opened. That staggering number has undoubtedly caused a few sleepless nights for product planners at various OEMs. In fact, if we look at the theory of Diffusion of Innovations, the interest in the Model 3 would absolutely prove that the electric vehicle market has now moved beyond the innovators and early adopters, and we are now well into the early majority phase. That's good news for Tesla, but is BMW also ready to capitalize on the inevitable market shift we are witnessing?
The short answer is yes, they absolutely can. In fact they are probably positioned better than any other OEM to do so because of the tremendous investment that they have made in BMW i. They've poured billions into the i division, and it wasn't just for the i3 and i8. Lessons learned working with CFRP, aluminum and a variety of sustainable materials and manufacturing processes will be carried into future plug-ins. In fact, it's doubtful any auto manufacturer has spent more restructuring the company in preparation for the shift to electrics than BMW has over the past seven years. However, the remarkable Model 3 reservation list probably indicates that they need to accelerate their EV programs and bring some vehicles to market a little sooner than they might have planned if they want to minimize defection from the brand. The good news for BMW is that Tesla can have a million reservations, but that won't mean they can actually make the cars fast enough to satisfy demand. In fact, every car Tesla has released so far has been delayed, and even when they initially "launch" a vehicle, it takes them 4 to 6 months before they are making it in serious volume, and the first few months of production are usually plagued with quality issues.
The Tesla's Model 3 is predicted to launch in late 2017
So even if Tesla does manage to have a few ceremonial Model 3 deliveries in late 2017 as promised, they probably won't be making them in volume much before the summer of 2018, and I highly doubt they will deliver more than 30,000 to 40,000 Model 3s before the end of 2018. By the time 2019 rolls around, Tesla will likely have any initial quality issues worked out and will be able to begin really producing the vehicle in high volume. So BMW has about three years to produce a vehicle to compete in this segment which will curb mass defection from the loyal 3-Series following, as well as keep the BMW name synonymous with innovation, performance and sustainability.
Does BMW have a vehicle in development that can compete in this class and has already been green-lighted for production? Yes they do: the 2020 i5. We've all read an assortment of i5 predictions from various "BMW insiders" ranging from it being a hydrogen fuel cell vehicle to an EV with a range extender. If BMW is serious about competing in this space then it shouldn't be either. The remainder of this post is purely my thoughts and predictions on what BMW should and could do to remain a leader in the industry. I have nothing concrete to base these opinions on, and everything you read below is purely speculative.
The cornerstone of the BMW i will be the 2020 i5 which will launch in mid 2019 with the following specs:
So why doesn't BMW bring the i5 to market sooner and beat Tesla to the punch? Is it because they don't think the market is ready, or they just don't believe in long range electric cars just yet? The answer to both of those questions is no. It's all about the batteries. Tesla knows this, and refused to wait for the market to bring cutting edge battery cells to them. Instead they are building what will be the largest battery factory in the world, to supply their cars with the best batteries as soon as they are available. BMW, along with the rest of the OEMs, will rely on third party suppliers for their battery cells. It's too early to tell which strategy is best, but once the Gigafactory is operational, it should provide Tesla with the advantage of having the best cells available and at a lower cost, but that has not yet been proven.
Why 2019? That's because Samsung SDI, BMW's battery partner is scheduled to bring to market their next generation lithium ion battery cell sometime in 2019. These new cells have been described by Samsung as the "Low Height Pack" cell generation because they aren't nearly as tall as the batteries currently used in the i3 which will allow for a lower seating position. However, the real progress is in the specific energy of the cells and the cost. The current i3 uses 60Ah cells that are believed to have a specific energy of 130 Wh/kg. The 2017 i3 is rumored to be using the latest Samsung SDI cells that are the same physical size as the 60Ah cells, but are 94Ah with a specific energy of about 190 Wh/kg. These new cells are going to increase the i3's range from 81 miles per charge to about 120 MPC. However that still isn't good enough for the long range Model 3 competitor that the i5 needs to be. The 2020 i5 will use Samsung's Low Height Pack cells that are estimated to be about 125Ah with a specific energy of about 250Wh/kg, nearly double the energy density of what the current i3 batteries have and cost less than the current 60Ah cells do. These cells will allow BMW to stuff a 78.75kWh battery pack in the i5 and still keep the weight under 4,000lbs.
A Samsung SDI rep holding their new "Low Height Pack" cells which won't be available until 2019. Notice the energy rating is not listed on the cell as it is on the other batteries on display. Also note the low height as compared to the 94Ah cell on the left. That 94Ah cell is rumored to be in the 2017 BMW i3, and is the same physical size as the 60Ah cell used in current i3s.
The i5's battery pack I'm designing would consist of 14 modules, each containing 12 battery cells for a total of 168 cells. If BMW allows 90% of the pack to be available, that means 70kWh of usable energy and an EPA range of about 245 miles per charge. It will also accept up to 150kW of DC power and utilize the emerging network of 150kW DC fast chargers that, by then, will begin being funded by members of the CharIn EV association. The network will be minuscule compared to Tesla's Supercharger network, and Tesla still has a huge advantage there, but at least customers will see a path to what someday could rival the Supercharger network, which currently doesn't exist. I'm not even ruling out a partnership with Tesla, where the other OEMs pay Tesla to install 150kW CCS stations at every Supercharger location. After all, at Audi's 2014 LA Auto Show press conference, the automaker promised they would have a network of 150kW DC Fast charge stations installed and operational before they launch the 2019 e-tron Quattro. How else could they accomplish that?
The i3's battery tray
Granted, even if BMW hits the mark with the i5, the Model 3 is going to be a widely popular vehicle as long as Tesla can manage to deliver what they have promised. However, a strong competitor from BMW, which the i5 has the potential to be, can limit the number of sales the Model 3 takes from BMW in this segment. The i5 will cost more than the Model 3, starting at $49,990. However, the standard i5 will be better optioned than the standard Model 3, and I believe a loaded Model 3 will end up costing around $60K anyway. Therefore the average purchase price of the two cars may only be $6,000 to $8,000 apart.
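The pack arithmetic in the preceding paragraphs can be checked directly. A nominal cell voltage of 3.75 V is my assumption (the post gives only Ah and kWh figures), and the implied Wh/mile number is derived from the 245-mile estimate, not quoted anywhere:

```python
def pack_energy_kwh(n_cells, capacity_ah, nominal_v=3.75):
    """Gross pack energy from cell count, cell capacity, and nominal cell voltage."""
    return n_cells * capacity_ah * nominal_v / 1000.0

n_cells = 14 * 12                       # 14 modules of 12 cells = 168 cells
gross = pack_energy_kwh(n_cells, 125)   # 125 Ah Samsung SDI cells (assumed 3.75 V nominal)
usable = 0.90 * gross                   # 90% of the pack made available
wh_per_mile = usable * 1000.0 / 245.0   # efficiency implied by a 245-mile EPA range
print(gross, round(usable, 1), round(wh_per_mile))
```

The 168 × 125 Ah × 3.75 V product lands exactly on the 78.75 kWh figure quoted above, which suggests that same nominal voltage sits behind the post's numbers.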
That said, the i5 isn't the only plug they'll have in 2020. By then BMW's entire array of models will offer PHEV options. They already sell the X5 40e plus the 330e, and by the end of the year will have the 740e in showrooms. Sometime in 2017 the 540e will be added to the iPerformance PHEV line. These are all very competent PHEVs, and the reviews have been very positive with regards to the driving experience they offer. The only problem I have with these cars is the AER. None of these vehicles boast an EPA range of even fifteen miles per charge, and I just don't find that acceptable in 2016. If BMW wants customers to see the value in paying more for the plug in version of any car in their line, it has to deliver an electric range that can save them a reasonable amount in fuel to offset the couple thousand dollars extra the vehicle costs, and 13 miles of electric range just doesn't do it.
BMW now calls the PHEV line that comes from their conventionally powered vehicles "iPerformance"
BMW needs to upgrade the batteries in their PHEVs to the higher density cells coming to market now, and then again in 2019. If BMW were to use the higher energy cells available later this year, the AER of their iPerformance PHEVs would jump up to about 20 miles per charge without increasing the battery's physical size or weight. Then, in 2019 when the 125Ah cells are available, they can bring the 2nd generation PHEVs to market with a boost to 30 - 40 miles of electric range. This won't satisfy the hardcore EV aficionado, but there will be plenty of people looking to buy their first plug in. These people aren't ready for a 100% electric car, and a PHEV with a respectable AER will bring them (or keep them loyal) to the brand.
The final piece of the puzzle is the 2nd generation i3. Using Samsung's Low Height pack 125Ah cells means BMW can offer a 48kWh i3 which would most likely have about a 180 mile electric range. I expect BMW to stick with the range extender option when the 2nd generation i3 is released so the choices will be the 180 mile BEV and a REx that has about 325 miles of combined range, and both versions will charge at 150kW like the i5. I also expect it to have the functionality to turn the REx on manually when the operator wishes, because BMW will have worked out the issues with CARB and the BEVx designation which is why the current i3's range extender is restricted from using the built in Hold SOC Mode that European i3 owners get to use. Expect the gen 2 i3 to be slightly larger than the current model, and I'm betting BMW will replace the rear coach doors with conventionally opening ones. They will also figure out how to add a third seat in the back. BMW will improve the drivetrain efficiency as well as add about 20 hp and 25 lb-ft of torque. 0 to 60 times for the BEV will be in the mid 6 second range.
BMW will bring the MINI Rocketman BEV to market in 2018
One last prediction. In 2018 BMW will introduce the MINI Rocketman and it will be available in pure BEV and use many of the i3's components. It will have about a 100 mile range and at launch be available only as a hardtop. However, the following model year it will also be offered in convertible trim, finally giving the EV faithful an attractive and sporty electric ragtop offering.
While BMW's i5 will be the Model 3's direct competitor, I believe it's going to take an entire portfolio of plug-ins for BMW to remain competitive in the ever expanding plug-in market. While BMW absolutely needs a flagship long distance pure EV, there is no one size fits all in the automobile industry, and the plug-in market is no exception. This is one area where BMW has a clear advantage over Tesla. By 2020, BMW will have no less than seven models with plugs in their showrooms, and most likely that number may actually be closer to ten models. If the incredible amount of reservations the Model 3 has amassed has proven anything, it's that the public is absolutely ready for compelling electric vehicle options. Tesla has captured the imagination of the world. They've proven that it can indeed be done and people want to support them for doing so. Your move BMW.
35 comments:
I would like to see a REx option on the future i5 though, because it still gives you so much flexibility. If any deals or agreements with Tesla's SC network can be made, then the necessity might not exist to the same extent. I'm not holding my breath on this one though.
As you are aware, the charging network situation here in Europe is different. Ecotricity is a provider of 106 free, reliable CCS charging stations in the UK right now and I am hoping this will double over the next 2-3 years.
One major drawback I see is the change of mindset that is still necessary at BMW dealerships. In conversations I have with long-serving BMW employees at dealerships, I find that many still don't take the EV move by their manufacturer seriously. They're still in love with the classic models and would rather see BMW produce 100 refined grilles for an ICE 7 Series than spend any time on a plug-in car. This shift will take longer than we all think.
A second big aspect of the Tesla allure is the software focus. This obviously starts with OTA upgrades and moves all the way through to the overall experience of their cars. As an i3 owner for just over a year, I am stunned how difficult it is to report to BMW i even the simplest bug with their iRemote app or the ConnectedDrive site. User experience, testing, design... there are so many opportunities for other manufacturers with software heritage to trump here. Just as Tesla has to step up to the mark and deliver a lot of cars by 2018, BMW has an equally gigantic task of moving the needle not only on their BMW i offerings, but also in terms of software end user mindedness.
Another entertaining and thoughtful article, Tom! I hope your predictions are well-founded, but I'm not optimistic. Had Herbert Diess been named CEO instead of Krüger, BMW i would be coming into its own. Instead, BMW's petrolheads have taken over, driving out the EV-enthusiasts in the company. BMW is marching into the future with its eyes firmly focused on the past.
Your render of the i5 looks to be based on the Active Tourer PHEV, a car I like a lot (well, except for the gasoline engine :-) I would have deleted the black hood, and the i3's "running stream" rear side window treatment, especially now that the Bolt has copied it.
If BMW was really serious about electric cars, it would have brought the Active Tourer PHEV to the U.S.; it could have exposed people to daily electric driving, who are not quite ready to abandon their ICE. The Active Tourer is lower, lighter, sleeker, more efficient, and sportier than the X1, while having the same utility as the X1. The AT's electric 4WD system is also far superior to the X1's mechanical 4WD. But BMW is still too dogmatic to offer anything in the U.S. that isn't macho-ICE.
I haven't read anything in the last year about BMW's "plans" for electrification that gives any indication they are actually moving ahead. They are steadfastly resisting, offering only vague statements about the next decade. As you know, I have offered hopeful suggestions about the path that BMW i could and should take to advance its groundbreaking early progress, but to no avail. I've now given up on BMW, and am waiting patiently for my dual-motor Model 3.
Great article, Tom. As the comment above notes, BMW has to also adopt to the way things are done in the group sourced environment to consolidate its current advantages. While you may have insight due to your access, and you've previously noted that input from the Active E group was well listened to, BMW has to reach back to its endusers and engage more. The iconceirge has to step up its game, the dealerships have to be a part of this shift or all the best intentions won't bear fruit in the failure to execute.
Transitioning to that will prove as much a challenge to BMW as becoming a mass market manufacturer (that's reaching out to a lower end market segment) that Tesla is attempting to do. Its an opportunity that they have to actively remediate, more than in the i3 rollouts and ongoing support.
Keep the thoughtful articles coming, it is always a treat to read informed information, and not just fan boy blathering so common these days.
Great insights and speculation into BMW's near-term approach. I wondered about one thing though:
"These cells will allow BMW to stuff a 78.75kWh battery pack in the i5 and still keep the weight at about 4,000lbs."
Are they really going to field the i5 at 4000 lbs? Seems pretty astonishing considering that they've spent so much energy bringing CFRP to the i3--which weighs in at 2/3rds of that.
I'm not really in love with the styling of the i3--though I drive a peculiar-looking Leaf so I imagine I'm not one to talk. I think the process of converting electrics to mass-market appeal involves muting a lot of what I see as nonsensical design flairs at this point. (If only the Model X hadn't added gullwing doors we might see more of them on the road...) To that end, "the rest" have much to learn from Tesla. BMW would do itself a favor by not forcing its loyalists to choose a Jetson's mobile simply because they're used to BMW and they're ready to drive an electric. The saying "different strokes for different folks" cuts both ways: half the population might love the styling, but if half hates it, as a company it's going to require a lot more effort to eke out your market share in the new world of electrics when there aren't just "standard" choices elsewhere, but beautiful ones.
The only other comment I have is I think your projections for Model 3 manufacturing are overly pessimistic (or optimistic, as a BMW enthusiast.) This is one area I've constantly lamented about EV enthusiasts and manufacturers alike--something akin to infighting within the small-but-growing EV crowd. It isn't exactly that--but to me it's the antithesis of the concept "a rising tide lifts all boats."
BMW is not "doomed" by Tesla in any respect. As you've pointed out, BMW is probably more ready than any other OEM for the world-changing revolution of electrics. As ready as Tesla? I think that's doubtful regardless of cash reserves or manufacturing capacity. If Tesla only produces 30-40,000 Model 3's in 2018, it would be a disaster, and it could spell doom for the company. But it's terribly unlikely, in my view, that it turns out to be so few.
Based on their current manufacturing trend, Tesla is likely to produce at least 75,000 cars in 2017. This figure is hampered a good deal by the lengthier and more complicated production sequence of the Model X. A figure of 100-125k cars built in 2018 seems a fairly reasonable estimate, and that would likely include a 50-60% share of Model 3's. On the low side, 50,000, with the possibility of almost 70k produced in 2018.
This sets aside entirely the possibility (likelihood, in my view) of Tesla partnering with an OEM manufacturer to build components of the Model 3 chassis. Such an arrangement wouldn't likely impact production as early as 2018, but it's hard to believe Tesla isn't considering an option like this given that they have an idea of what their most ambitious projections could yield and they also know the wolves are starting to come for their heels.
My estimate for 2018 production of the Model 3 is thus roughly double what yours is. 60-75,000 is to me a more reasonable estimate.
Elon Musk has recently reaffirmed Tesla's target to produce 80,000 to 90,000 Model S and X cars during 2016 (after a slow start due to the Model X introduction), so saying Tesla is likely to produce at least 75,000 cars during 2017 seems quite a bit on the low side.
If you could post where Elon is quoted to have said this--better yet where he reaffirmed it--it would be helpful. As of this moment, their trajectory is to 80,452 cars in 2016 based on their last five quarterly production figures, and Q1 of '16 has been less than stellar at 14,820 cars, so they have some catching up to do in order to average 25% more cars per quarter for the rest of the year than they have ever made at peak production. They have the demand, certainly, but demand hasn't been the question.
My comment was made to establish a lowest bound; to put reasonable figures on the table. I have no doubt that they’ll exceed 75,000 cars next year, but to make guesstimates significantly higher than that for a lower bound is foolish. There’s no doubt the figure will drop somewhere in the 75-100k range, and if anything begins to restrict that number it will be a modest transition of demand from purchasing an S or an X to purchasing a ‘3, as might well be expected. No one anticipates a glut of ‘3s to become available until at least Q1 or Q2 of 2018, and essentially that means we’re saying the same thing.
This is a press release from April 4: http://ir.teslamotors.com/releasedetail.cfm?ReleaseID=963460 Tesla is having a Q&A on 1st quarter results and outlook on May 4 at 2:30 p.m. PT which will be live webcast on Tesla's investor page.
Deliveries don't equal production. And Tom's numbers are not pessimistic. I think 30K-40K Model 3s in the first year would be very surprising. It's a new platform, new tooling, new components, 100% new. If Tesla produces 20K-25K it will already be an accomplishment, so when Tom makes his prediction, he actually gives Tesla more credit for overcoming first-year production issues and ramping up. And it's not like Tesla made its name on its capacity to meet delivery dates.
While I don't drive a Leaf, and I'm not a BMW fan per se, I came to BMW because I liked the design of the i3... just saying some of us actually like it. That said, I'm more in love with electric than with BMW, and my dealer is firmly cementing my feelings, so I'm unlikely to buy a second BMW unless it's a worthwhile upgrade.
When I first saw the i3, I said, what the heck is that? But now owning one, I have totally fallen in love with it, both in looks and the typical BMW driving feel. No matter where I go, people stop to look at and talk about the car. When the 2017s come out, I will be a family of two i3s. Note: I also own a BMW M6, which is now only driven once a week, as EVs are the future, which is now here. And what I am saving in gasoline is making the payments on the i3.
Great post Tom. As some have mentioned, I believe the i5 will also have a REx option. It may only be offered in limited areas where the charging infrastructure isn't as developed, but it will be available.
If BMW can bring the i5 in 2019 (not in pre-sale, but actual sales) they will be in a good position. If you look at other premium brands (Audi, Mercedes, Cadillac, Lexus, Infiniti...), they are the only one with an active programme producing EV cars. All the others are still in planning mode (Porsche is the odd one, as they do have something, but with a very limited AER).
Another great incisive post Tom. I really hope BMW's strategy follows your thinking and they don't have any wobbles about really committing to the full EV design,technology and engineering vision. I remember reading Johanna Quandt's obituary in The Times which said she was a passionate advocate for electric vehicles and championed the i programme. I hope her children are exercising the same influence on the BMW Board.
Hi Tom - thanks for the great article! Hope BMWBlog will also cover this one (but they might be a bit afraid of reactions from Munich!) Two questions/comments I would like to share:

1. Do you think 2019 is going to be early enough? I have strong doubts, especially after the recent announcement in Germany of the EV purchasing incentive of up to € 4K. This will be good for about 300-500 K (PH)EVs but excludes expensive models (above € 60K). So BMW would only have its PHEVs and the i3, basically. I guess Nissan/Renault, Kia and VW will take the lion's share of this. The incentive will end somewhere in 2019 (€ 1.2 billion cap).

2. I think BMW also has to make its future EVs more energy efficient. I say that as an i3 BEV owner: the car consumes little energy in absolute terms, but not impressively little given its size and weight. With the Tesla Model 3 coming in at less than 60 kWh gross battery size, it will likely have 50 kWh net available. My i3 usually gets 19.5 kWh available. Using the EPA rating (estimated 215 miles for the Model 3), the Model 3 would get more miles per kWh than the current i3. And that for a bigger, heavier, sportier car with a larger battery - while the i3 uses only CFRP and aluminium. BMW could probably gain more by making the car/drivetrain more efficient than by using expensive CFRP/aluminium.
But I have to agree with Chris Llana that the outlook for BMW is not very good - so many BMW i folks leaving seems to indicate they are disappointed in the EV strategy of the company. Confirming what we have seen from the outside as well. Not showing the next BMW i model/concept in the anniversary year is a bad signal - it could mean BMW will not last another 100 years!
Good comment as always Tom. Your suppositions have the benefit of conversations with the movers and shakers at BMW. Bearing in mind that we now know the new i3 battery is 6 months early I would hope that both i3 v2 and i5 are available at least 6 months before your forecast.
First, as an experienced corporate marketing person, I think we should all consider Tom's post to be a calculated leak by BMW to try to blunt some of the Model 3 hype. I suspect there is less speculation and more informed knowledge than Tom is able to admit. But moving on...
Second, no matter how much you discount Tesla, BMW is playing catch-up in a big way and they are very late to the party.
The Model S consumes .06 Wh/lb/mi, the i3 consumes .09 Wh/lb/mi. The Model S is dramatically more efficient than the i3, which allows Tesla to avoid spending money on carbon fiber. BMW has very limited expertise in producing EVs; they are building up staff and expertise in electric propulsion, but the numbers tell the story. They are not making a very efficient car today, nor are they making it in any volume. And the i3 has had its share of recalls and engineering problems (I just had my engine mounts replaced).
More importantly, BMW (et al) are still acting like this is an engineering problem to be solved. It's not, it's a system problem, an element of which is vehicle engineering. BMW is way behind, if not entirely blind to the software engineering, dealer network and charging network issues. An i5 BEV will find very few CCS chargers once it leaves either coast, and 150kW CCS chargers to feed that i5 are a pure fantasy. BMW dealers, as other people have mentioned, have not embraced the i3, it's a CARB compliance car, sort of a nuisance, not a "real" BMW. My dealer only has an i3 mechanic in the shop 3 days a week, less than 5 miles from a fully staffed Tesla service center.
Tom's scenario is entirely dependent on Samsung delivering in volume, on time. Tesla is already producing batteries in its Nevada factory, low volume PowerWall products to be sure, but is way ahead of anyone else. And Tesla's factory is close to a proven supply of lithium in Nevada, that's not the case in South Korea. Ship lithium to South Korea, then batteries to Germany, then cars to the US. Tesla ships lithium across Nevada, and batteries from Nevada to California. Which supply chain sounds better?
VW had to cheat to make its diesels compliant, and diesels have been part of its business for many decades. Mitsubishi just did its mea culpa on fuel economy. Don't be surprised if there are others.
I hope BMW is at least moderately successful, competition is good, but history is not on the side of major corporations making big transitions.
No way Jose. Even assuming that your .06 Wh/lb/mi vs. .09 Wh/lb/mi is correct (which is a weird metric - I've never seen it mentioned before), that would be due to the MS aerodynamic advantage, which is a function of the type of car, not any intrinsic technological superiority. The i3 is the most efficient car on the market today, and that is a fact.
Weight is generally considered bad and something to avoid, so why should more of it - everything else being equal - improve the efficiency measure (as in your equation), unless you consider it a proxy for something else of value? Size maybe, but then why not use interior volume instead?
The i3 specs say 36.9 cf of cargo space with the seat down, and let's add 1 cf for the frunk to make it 37.9, though the i3 frunk is pretty much worthless.
The Model S reports 63.4 cf with the seat down, including 5.3 in a very useful frunk.
If we call the front seats equal in volume, that's a gift to the i3 because the Model S has much more volume in the front seats.
My i3 reports an average of 4.3 mi/kWh, or 232 wh/mi. Today I drove just under 100 miles in the Model S, with outside temp in the mid-80's so the AC was running the entire time, a mix of freeway (70-75mph) and country roads and reported 282 wh/mi. Another gift to the i3 because with the AC running the i3 range drops about 8%.
So based on volume:
i3 = 232 / 37.9 = 6.12 Wh/cf
Model S = 282 / 63.4 = 4.45 Wh/cf

i3 / Model S = 137% of the power per cubic foot of volume.
I like my i3 and currently plan to lease a 2018 when my 2014 lease is up. But BMW needs to use at least 30% less energy to be equivalent to Tesla, and they’ve already played the carbon card. The BMW drive train is just significantly less efficient than a Tesla.
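If anyone wants to check the back-of-envelope math above, here it is as a tiny script (all figures are the ones reported in this thread, not official specs):

```python
# Energy used per mile, normalized by cargo volume, using the thread's
# reported numbers (not official specs): i3 at 232 Wh/mi with ~37.9 cf
# of cargo space, Model S at 282 Wh/mi with ~63.4 cf.

def wh_per_mile_per_cf(wh_per_mile, cargo_cf):
    """Consumption per cubic foot of cargo volume (Wh/mi/cf)."""
    return wh_per_mile / cargo_cf

i3 = wh_per_mile_per_cf(232, 37.9)       # ~6.12 Wh/mi/cf
model_s = wh_per_mile_per_cf(282, 63.4)  # ~4.45 Wh/mi/cf

# By this metric the i3 uses roughly 1.37x the energy per cubic foot.
ratio = i3 / model_s
print(f"i3: {i3:.2f}, Model S: {model_s:.2f}, ratio: {ratio:.2f}")
```

Change the inputs (e.g. the AC-free i3 figure) and the ratio moves, but not by enough to close a ~37% gap.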
Russ, these are interesting numbers and I do not question them. However, back to my original point, how does that prove Tesla's superior *drivetrain* efficiency, and not simply better aerodynamics? That's all I am saying.
How fast will 40 kWh get you on the motorway -- 70/80 mph?
Use the REx for cruising and the battery for acceleration?
Set the REx cut-in to 50% charge on long journeys.
Set the REx cut-in to 30% charge day to day.
Real-world 70-mile range on batteries only.
Real world as in battery life.
More important than the car is the charging network. Range, looks and cost are all secondary for widespread adoption. I can drive my Model S almost anywhere in the USA today, and it will only get much better in the next year. I have a Model 3 on order. Without a supercharging network, only very few BMW EVs will be sold. Black swan events are quite dramatic for the incumbents. Motorola (phones), Nokia and Blackberry are just a few of the large players in gadgets on history's list of destruction. Nikon and Canon make great digital cameras, but they had to fully dedicate themselves to the transition. Can BMW? I say the odds are less than 50%. I hope they do - they make great cars. I had a BMW M3 prior to the Tesla...
"The i5 will cost more than the Model 3, starting at $49,990. However the standard i5 will be better optioned than the standard Model 3, and I believe a loaded Model 3 will end up costing around $60K anyway. Therefore the average purchase price of the two cars may only be $6,000 to $8,000 apart."
How will it be better optioned than the Tesla Model 3? That is very hard to imagine given that Tesla's innovation is currently unrivaled. Also, a completely maxed-out Model 3 MAY cost close to your $60K number, but the majority of them will sell for much less. The Model 3 is better looking (IMO), has better range, will be out much sooner and, most importantly, has an existing charging network. BMW has a long, long way to go...if they can even catch up at this point.
BMW (and any other manufacturer for that matter) is better off getting in line to purchase the batteries from Tesla and going from there.
Lastly, Tesla owners don't want anything with a gas engine. That said, while BMW may have close to 10 cars with a plug - only a few of those would be considered a competitor of any Tesla vehicle.
One might find it hard to single out a name as the best, since BMW and Tesla are close competitors. Tesla's Model S undoubtedly hit the market hard, and now there is speculation about whether Tesla could break its own record in the entry-level premium segment when the Model 3 hits the market in 2017 or 2018. Let time decide! Tesla also claims it has received bulk reservations, which definitely worries its rivals. Yes, I am quite sure BMW will bring something new to take on the model; however, the market shift takes time. The German player might launch the 2020 i5 in 2019.
Christmasville
Once upon a time an Abominable Snowman came to your office for help. He informed you that Santa Claus was missing and Christmas was no more, and asked you to investigate the case. Although you were the Great Detective Arthur Knight, you did not believe the Abominable Snowman, which is why he put you to sleep with his magic powder. Waking up at the North Pole, you understood that it was the truth. Now you are challenged to save Santa and return Christmas to the children. You are leaving for Christmasville - the land of Santa Claus - to start your investigation. There you will meet your new friends - the Abominable Snowman, Reindeer, Elf and even Death and the mysterious Ear. They will help you along the way to discover the truth about Santa's disappearance in this eye-popping seek-and-find Christmas adventure for the entire family.
Features:
* 80 great levels to complete
* Funny and intriguing story
* Gorgeous graphics
* Wonderful music
* A lot of mini-games
INTRODUCTION
============
Sudden infant death syndrome (SIDS) is death of an infant that is neither attributable to medical history nor explained after autopsy or by death scene investigation. SIDS is the leading cause of death in the first year of life after the neonatal period and is currently responsible for 0.53 deaths per 1000 infants. High incidence, catastrophic impact on affected families and absence of mechanistic insight mean that SIDS represents a major medical challenge. Since the 'back to sleep' campaign in 1994, there have been no further reductions in SIDS incidence. A number of causative mechanisms have been proposed to lead to SIDS, but without any unifying theory or correlation with pathological findings ([@b7-0060503]).
A study of 33,034 infants found that 50% of infants who died of SIDS had a prolonged QTc interval in the first week of life ([@b13-0060503]). Approximately 10% of SIDS cases carry functionally significant genetic variants in sodium and potassium channels causing long QT ([@b2-0060503]), or variants in the gap junction protein Connexin43 (Cx43) ([@b16-0060503]). This circumstantial evidence suggests a role for abnormal electrical conduction in SIDS, but the underlying cause(s) in the vast majority of cases remains unexplained.
Most risk factors for SIDS, including prone sleeping position, respiratory disorders and high altitude, are associated with a reduced oxygen environment. Furthermore, hypoxia is associated with a prolonged QT interval in the adult ([@b11-0060503]; [@b15-0060503]). We therefore hypothesised that neonatal hypoxia leading to abnormal electrical conduction is a potential cause of sudden death.
We used non-invasive electrocardiography to characterize the postnatal maturation of the cardiac electrical conduction system in neonatal mice ([@b6-0060503]). We investigated whether reduced ambient oxygen environment or genetically manipulated hypoxic signalling affected maturation of the cardiac electrical conduction system and the subsequent risk of sudden death.
RESULTS
=======
Maturation of ECG morphology in wild-type mice
----------------------------------------------
To assess electrocardiographic changes immediately following birth, unborn pups were removed from pregnant females at embryonic day (E)18.5 and placed with a foster mother. We performed electrocardiography in the same pups sequentially at 0, 1, 3, 6, 12 and 24 hours after birth. ECG morphology changes were detectable at 1 hour after birth, becoming significant at 3 hours. Heart rate increased whereas QRS, QTc and QTc dispersion (where 'c' denotes correction for heart rate) declined rapidly and then plateaued over the 24 hours ([Fig. 1](#f1-0060503){ref-type="fig"}). We recorded resting ECGs from postnatal day (P) 0.5 to P10 ([Fig. 1](#f1-0060503){ref-type="fig"}). The trends of increased heart rate and declining QRS, QTc and QTc dispersion continued over this timescale ([Fig. 1](#f1-0060503){ref-type="fig"}).
Hypoxia prevents maturation of the ECG
--------------------------------------
Neonates reared in 10% oxygen for 24 hours showed reduced heart rate and increased QTc and QTc dispersion compared with normoxic controls ([Fig. 2](#f2-0060503){ref-type="fig"}). These parameters were similar to those in newborn neonates.
*αMHC-Cre::VHL^fl/fl^* mice exhibit immature ECG morphology and sudden death
----------------------------------------------------------------------------
*αMHC-Cre::VHL^fl/fl^* mice have cardiac-specific deletion of von Hippel-Lindau protein (VHL), causing constitutive upregulation of cardiac hypoxia inducible factor (HIF) signalling. Their neonates showed decreased heart rate with increased QRS, QTc and QTc dispersion compared with control *αMHC-Cre::VHL^+/+^* or *αMHC-Cre::VHL^fl/+^* littermates at 10 days after birth ([Fig. 2](#f2-0060503){ref-type="fig"}). *αMHC-Cre::VHL^fl/fl^* mice died between P16 and P18. Before death, there were no observable differences between mutant and control littermates in behaviour or weight. *αMHC-Cre::VHL^fl/fl^* mice exhibited frequent cardiac arrhythmia, consistent with sudden cardiac death, as did hypoxic wild-type mice ([Fig. 3](#f3-0060503){ref-type="fig"}).
###### TRANSLATIONAL IMPACT
**Clinical issue**
Sudden infant death syndrome (SIDS) remains one of the major enigmas in modern medicine. The 'back to sleep' campaign in 1994, which encouraged parents to place infants on their backs to sleep, promoted a reduction in SIDS incidence from 2 to 0.53 infants per 1000 births. Since then, there has been no reduction in this figure. Thus, advice for parents remains limited, and tragedies that might be preventable continue to occur. It has been previously documented that ∼50% of infants that die from SIDS display a prolonged QTc interval in the first few weeks of life, but the mechanisms underlying this observation have been elusive (except in cases where rare channelopathies and other genetic abnormalities are present).
**Results**
In this study, the authors addressed this issue using a recent innovation in electrocardiography that allows non-invasive recording of the electrocardiogram (ECG) in mice. This enabled the first reported catalogue of ECG changes from birth to 10 days postnatally, measuring changes in heart rate, QTc interval and QRS duration. By altering ambient oxygen concentration or genetically manipulating cellular hypoxic signalling in neonatal mice, the authors show that an increase in ambient oxygen concentration after birth is important for driving maturation of cardiac electrical conduction. Reduced oxygen predisposed mice to arrhythmia and sudden death, which was associated with ECG abnormalities. At the cellular level, reduced oxygen caused aberrant gap junction phosphorylation and distribution, and misexpression of ion channels, in the heart. These findings are consistent with known risk factors of SIDS -- such as head covering, high altitude, respiratory infections, central nervous system abnormalities and the prone sleeping position -- all of which are directly or indirectly associated with a hypoxic environment.
**Implications and future directions**
This study provides a link between neonatal hypoxia, ECG abnormalities and sudden death, which might provide an explanation for many SIDS cases. The results support the use of regular ECG screening of infants, and subsequent close monitoring of infants displaying long QTc interval, as well as ensuring a well-ventilated environment in cots and the use of other hypoxia-prevention strategies. The mouse models used in this study will facilitate further investigation into SIDS, and the non-invasive ECG approach used here can be applied by other researchers investigating cardiac conduction defects in mice.
Risk of sudden death on exposure to hypoxia decreases with age in neonates
--------------------------------------------------------------------------
We hypothesised that postnatal electrocardiac maturation is oxygen dependent and that exposure of neonatal mice to hypoxia at later points after birth would result in lower rates of sudden death. When neonates were raised from birth in a hypoxic environment for 24 hours, mortality was 58%. When neonates were raised from birth in normoxia for 1, 6 and 12 hours, then placed into 10% hypoxia for 24 hours, mortality was reduced to 33, 25 and 17%, respectively ([Fig. 3](#f3-0060503){ref-type="fig"}).
Connexin43 distribution and quantification
------------------------------------------
Cx43 is essential for normal electrical conduction in the heart; cardiac-restricted inactivation of Cx43 leads to slower ventricular conduction and lethal arrhythmias in mice ([@b8-0060503]). We therefore performed immunohistochemistry to investigate left ventricular distribution of Cx43 in neonatal mice reared in 10% oxygen, in *αMHC-Cre::VHL^fl/fl^* mice and in normoxic controls. We found no difference between hypoxic mice and controls in Cx43 distribution and quantification (data not shown). In *αMHC-Cre::VHL^fl/fl^* mice, Cx43 was observed in intracellular aggregates rather than at the cell membrane ([Fig. 4](#f4-0060503){ref-type="fig"}). Western blots using antibodies to phosphorylated and non-phosphorylated Cx43 revealed that total Cx43 was unaltered, whereas the presence of phosphorylated Cx43, thought to be targeted to the plasma membrane ([@b14-0060503]), was nearly undetectable ([Fig. 4](#f4-0060503){ref-type="fig"}).
Ion channel expression
----------------------
Microarray gene expression analysis showed reduced expression of several cardiac ion channels (potassium channels, potassium inwardly rectifying channels and sodium channels) in neonates reared in 10% oxygen for 24 hours compared with normoxic controls ([Fig. 4](#f4-0060503){ref-type="fig"}).
DISCUSSION
==========
We describe, for the first time, oxygen-dependent maturation of cardiac conduction in the mouse over the hours and days following birth. Increased postnatal heart rate and decreased QRS duration, QTc interval and QT dispersion during the first postnatal week are dependent on downregulation of hypoxia signalling in the heart. Elevation of neonatal cardiac hypoxia signalling leads to arrhythmia and sudden death. This in turn suggests a previously unknown mechanism for SIDS pathogenesis. Our results link hypoxia, a major risk factor for SIDS, with several genetic mutations found in SIDS victims ([@b2-0060503]; [@b16-0060503]).
The link between prolonged QT interval and risk of SIDS has been firmly established ([@b13-0060503]); however, with the exception of genetic variations in ion channels ([@b2-0060503]), the root of prolonged QT remains unknown in most SIDS cases. In some studies of human SIDS, no correlation has been made with QT prolongation. This might reflect the relatively poor sensitivity of surface ECGs to detect changes in QT interval, the age of testing or the use of small cohorts. Our mouse models suggest that hypoxia could be an important precipitant of prolonged QT, and thus sudden death, by hypoxia-induced downregulation of ion channels and Cx43 dephosphorylation. Indeed, hypoxia, prolonged QT interval and risk of lethal cardiac arrhythmias are causally linked in adults ([@b11-0060503]; [@b15-0060503]). It is unclear whether hypoxia alone can be causal in human SIDS cases, as in our models, or whether it exacerbates underlying genetic variations and is additive with other SIDS risk factors.
We found overall levels of Cx43 to be unaltered in mice with constitutively elevated cardiac HIF signalling, but a significant reduction in membrane targeting consistent with Cx43 dephosphorylation ([Fig. 4](#f4-0060503){ref-type="fig"}), which has been reported in adult hypoxic myocardium ([@b3-0060503]). We also found significant downregulation of potassium, sodium and calcium channels when neonates were raised in hypoxia ([Fig. 4](#f4-0060503){ref-type="fig"}). In our current study, dead pups were rapidly eaten by the dam, so the quality of tissue available for autopsy was poor. It will be important to analyse the cause of death by ECG telemetry and immediate autopsy, to compare with human pathological findings in SIDS ([@b7-0060503]).
Sensitivity to myocardial hypoxia decreases with time after birth, with risk of death declining with age of exposure to hypoxia ([Fig. 3](#f3-0060503){ref-type="fig"}). It is not definitively known how the timescale of postnatal development in mice relates to that of humans, but it is well documented that sensitivity to SIDS in humans decreases 4 months after birth. Interestingly, this is when the QTc interval is known to peak in humans ([@b12-0060503]), whereas mice display unidirectional change ([Fig. 1](#f1-0060503){ref-type="fig"}). We propose a 'ratchet' effect whereby oxygen causes maturation of the electrical conduction system, with declining ability to revert to immature phenotype with increasing age. These pathological cardiac changes could represent a predisposition to cardiac death and might themselves be serious enough to lead to death (as in our model), or be lethal in combination with other risk factors such as brain-stem malfunction.
The discrepancies between hypoxia-reared neonates and *αMHC-Cre::VHL^fl/fl^* mice might be due to a dosage effect of HIF signalling. It could be that, at 10% FiO~2~, neonatal cardiac HIF signalling is not maximally upregulated, whereas it is after VHL deletion. Systemic effects of generalised hypoxia, such as increased sympathetic activation, might also contribute to the slightly differing phenotypes.
In summary, we propose a model that links neonatal hypoxia with sudden death by cardiac arrhythmia through misregulation of cardiac Cx43 and ion channels. Our model is consistent with existing theories of SIDS pathogenesis and links hypoxia, the major known risk factor for SIDS, with many of the candidate genes for pathogenesis. Our electrocardiographic characterisation in the developing neonatal mouse serves as a benchmark for future studies, and we believe that the neonatal hypoxic model and the *αMHC-Cre::VHL^fl/fl^* mouse will facilitate further investigation into SIDS. The lack of validated animal models of SIDS is puzzling given that this is a large clinical problem with little mechanistic insight at the moment. We feel that our study adds further evidence to prompt the use of regular ECG screening of infants and subsequent close monitoring of those infants displaying a long QTc interval, as well as ensuring a well-ventilated environment in the infant's cot and the use of other strategies to prevent hypoxia.
MATERIALS AND METHODS
=====================
Animal husbandry
----------------
All studies were performed in accordance with the Home Office Animal Procedures Act (1986) and guidelines established by the European Convention for the Protection of Laboratory Animals. In studies characterising ECG maturation in the hours after birth and in hypoxic studies, embryonic F1(CBA/Ca × C57BL/10) mice were removed and fostered at E18.5 onto a Parkes mouse who had littered the previous day. We were able to distinguish the fostered mice by the black colouration of the eyes in the F1(CBA/Ca × C57BL/10) pups compared with unpigmented eyes of the Parkes strain pups. The transgenic mouse strain, *αMHC Cre^+^::VHL^fl/fl^,*was created by crossing transgenic mice with a floxed VHL allele ([@b9-0060503]) with mice containing Cre driven by the α-myosin heavy chain promoter (αMHC Cre), resulting in cardiac specificity ([@b1-0060503]). PCR amplification was performed on tail-derived genomic DNA to determine genotype.
Electrocardiography
-------------------
ECGs were recorded non-invasively in conscious mice using the ECGenie system (Mouse Specifics). Data acquisition was carried out using the program LabChart 6 (ADInstruments). Analysis of individual ECG signals was then performed using e-MOUSE physiologic waveform analysis software (Mouse Specifics) as described ([@b6-0060503]). In this system, ECG recordings are assessed by the user before being analysed by automated algorithms; signals which contain too much noise or incorrectly called waveforms are removed. All data were obtained during daylight hours, when the mouse heart rate is more stable than during the more active nocturnal hours. In evaluating waveforms and intervals, the end of the T wave was determined as the return of the signal to the isoelectric line as previously described ([@b6-0060503]). QTc was calculated according to Bazett\'s formula modified specifically for mice ([@b10-0060503]), i.e. QTc=QT~0~/\[(RR~0~/100)^1/2^\].
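For illustration, the mouse-modified Bazett correction as stated above can be sketched in a few lines (a hypothetical helper for clarity, not the e-MOUSE software actually used for analysis):

```python
import math

def qtc_mouse(qt_ms, rr_ms):
    """Mouse-modified Bazett correction, QTc = QT0 / sqrt(RR0 / 100),
    with QT and RR intervals given in milliseconds."""
    return qt_ms / math.sqrt(rr_ms / 100.0)

# At RR = 100 ms the correction is neutral; a shorter RR (faster heart
# rate) scales the measured QT up toward the 100 ms reference.
print(qtc_mouse(50.0, 100.0))  # 50.0
print(qtc_mouse(50.0, 81.0))   # ~55.6
```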
Connexin43 studies
------------------
Hearts from embryos and neonatal mice were dissected in cold PBS and immediately snap-frozen in liquid nitrogen. Immunohistochemistry and western blotting for Cx43 was performed as described ([@b5-0060503]). Anti-Connexin43 (Zymed) and anti-phosphorylated-Connexin43 (Invitrogen) were diluted 1:200 for histology. An immunoblot for actin protein (anti-actin antibody; Sigma-Aldrich) was used as a control for equal protein loading in western blotting. All antibodies including secondary antibodies (Sigma-Aldrich) were diluted 1:5000. Full details of immunochemistry and western blotting procedures are available on request.
Ion channel gene expression
---------------------------
RNA extraction and microarray analysis was performed on hearts as previously described ([@b4-0060503]). Briefly, the specimen was placed in 1 ml of TRIzol reagent and homogenised using glass homogenisers and plungers (Uniform, Jencoms, England). Chloroform (200 μl) was added, samples were mixed by vortex and left at room temperature for 5 minutes. The tubes were then centrifuged at 13,000 r.p.m. for 15 minutes at 4°C. The aqueous phase was then transferred to a new centrifuge tube and 500 μl of chilled isopropanol added and mixed by vortex. After incubation at room temperature for 20 minutes, tubes were centrifuged at 13,000 r.p.m. for 30 minutes at 4°C. The supernatant was discarded and 500 μl of ice-cold 70% ethanol (v/v) added to the residual pellet. Tubes were vortexed before centrifuging at 8000 r.p.m. for 5 minutes at 4°C. The supernatant was discarded and the pellets air-dried. The resulting RNA was resuspended in 50 μl of nuclease-free water (Ambion, Huntingdon, UK) and frozen at −80°C until use. RNA cleanup was carried out on samples prior to microarray analysis, followed by assessment of RNA yield and purity (full details available on request).
Labelled RNA was hybridised to the mouse 430Plus 2.0 chip (Affymetrix) (full details available on request) and the raw data analysed using GeneSpring software version 11.0 (Silicon Genetics/Agilent Technologies). Gene lists were quality filtered to remove genes with expression levels below background and limited to report genes that changed by 1.5-fold or greater with a significance of *P*\<0.05 according to an unpaired *t*-test.
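The quality filter described above can be sketched as follows (assumed logic only, not the actual GeneSpring pipeline; the hard-coded critical t value corresponds to a hypothetical three-versus-three array design):

```python
import math

def t_stat(a, b):
    """Unpaired two-sample Student t statistic (pooled variance)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(pooled * (1 / na + 1 / nb))

# Two-tailed critical t for alpha = 0.05 with df = 4 (n = 3 vs n = 3).
T_CRIT = 2.776

def passes_filter(control, hypoxic, fold=1.5):
    """Keep a gene if its mean expression changes by >= 1.5-fold in
    either direction and the unpaired t-test is significant (P < 0.05)."""
    fc = (sum(hypoxic) / len(hypoxic)) / (sum(control) / len(control))
    changed = fc >= fold or fc <= 1 / fold
    return changed and abs(t_stat(control, hypoxic)) > T_CRIT

# A clearly downregulated channel passes; an unchanged gene does not.
print(passes_filter([100, 105, 95], [60, 62, 58]))    # True
print(passes_filter([100, 102, 98], [101, 99, 100]))  # False
```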
**FUNDING**: This work was funded by the Medical Research Council to T.M. \[grant number U117562103\].
**COMPETING INTERESTS:**The authors declare that they do not have any competing or financial interests.
**AUTHOR CONTRIBUTIONS:**M.T.N., R.A.B. and T.J.M. conceived and designed the experiments. M.T.N. performed the experiments. M.T.N. analysed the data. M.T.N. and R.A.B. wrote the paper. T.J.M. edited the paper.
Q:
Has an entire generation of young children in a civilization ever been orphaned and raised as loyal soldiers?
In a movie I watched, a ruler kills all of the adults in a kingdom in order to raise the young children (who were too young to remember, or at least understand, the event) as loyal soldiers.
Has anything like this ever happened in history? If so, which was the largest occurrence?
A:
This is quite reminiscent of the Ottoman Empire's original Janissaries.
At first these were young boys forcibly taken from Christian families as slaves and raised to be the Sultan's personal guard. Not being from Muslim families they could legally be enslaved, and they had no social position in the Empire apart from their relationship to the Sultan. So not only were they indoctrinated to be loyal soldiers, but their position was entirely dependent on their Sultan. Thus, unlike Muslim volunteer troops, they had nothing to gain and everything to lose if something were to happen to the Sultan.
Even with all of the other things that have been going on with the Minnesota Vikings recently, I can't believe we missed something as major as this. I'd like to thank Michael Rand for bringing it to our attention.
Yesterday, 5 November, marked the 25th anniversary of the Minnesota Vikings' 23-21 victory over the (then) Los Angeles Rams at the Metrodome. The score doesn't sound like a terribly out of the ordinary NFL score, until you take a look at the boxscore and see how the Vikings scored their points.
The Vikings got 21 points courtesy of 7. . .yes, seven. . .Rich Karlis field goals, with the majority coming from relatively short distances. To be exact, Karlis' field goals were from 20, 24, 22, 25, 29, 36, and 40 yards, as the Vikings' offense fell apart inside the red zone on numerous occasions. The game went into overtime, and the Vikings won when Mike Merriweather blocked a Rams' punt that went out of the end zone for a safety.
Because of the Vikings' offense sputtering in the red zone on so many occasions, the wrath of the fans came down largely on offensive coordinator Bob Schnelker, including Schnelker getting a sizeable quantity of beer dumped on his head as the team retreated to the locker room after the game.
And from that came the real reason that this is an important day in Minnesota Vikings' history. . .because the combination of those things gave us the greatest post-game rant in the history of the league. Better than Jim Mora's "PLAYOFFS!?" rant. . .better than Denny Green imploring us to "crown their asses". . .Minnesota Vikings' head coach Jerry Burns set the bar by which all other post-game tirades shall be measured.
And his team won the game.
I mean, any excuse we have to post Burnsie's rant is a good one, but it's actually a special occasion for it this time. This is the "uncensored" version, so there's a whole lot of bad language involved and you might want to avoid watching it at work.
Other than that, sit back. . .once again. . .and enjoy. And we're sorry that we got this out there a little bit late. I don't know who to blame for it, but it sure isn't f***ing Schnelker.
cannabisnews.com: White House Watch: THC Madness
White House Watch: THC Madness
Posted by FoM on December 08, 2001 at 11:06:59 PT
By Ann McFeatters
Source: Pittsburgh Post-Gazette
And now, for something completely different, to borrow a phrase from Monty Python. The three earnest young men burdened with plastic bags came to the office bearing food. Pretzels with seeds. A snack bar. An energy bar. Tortilla chips.Never mind the caloric sin. We're talking serious evil here. Or so the government says.
Unless you are an avid reader of the Federal Register and perused the tiny print of almost undecipherable bureaucratese on pages 51,539 through 51,544, you might have missed it -- but the government has returned to normal.

The Drug Enforcement Administration, under the direction of Asa Hutchinson, the former GOP congressman from Arkansas, has announced rules to ban certain brands of a wide variety of foods -- "beer, cheese, coffee, corn chips, energy drink, flour, ice cream, snack bars, salad oil, soda and veggie burgers" -- if they contain trace amounts of THC.

THC, as those who came to the age of majority in the 1960s know well, is tetrahydrocannabinols. As DEA succinctly explains: "That's the hallucinogenic substance in marijuana that causes the psychoactive effect or high."

The THC found in certain brands of the above-mentioned food comes from hempseeds and hempseed oil, popular with some so-called "natural food" manufacturers because they are high in protein and serve as a fatty acid supplement -- "good fats" that doctors like. But DEA says such foods are now controlled substances illegal for everyone. Makers of foods with hempseeds or oil, with $5 million in annual sales, argue that the amount of THC is so infinitesimal that inhumanly high consumption of them would be required to get high. They liken it to getting a buzz from eating the opiate-containing poppy seeds on bagels or the alcohol in orange juice.

But the Controlled Substances Act says that any consumption of THC is forbidden. And any food that contains it is no longer to be sold, distributed or eaten.

Says the DEA: "If you wish to err on the side of caution, you may freely dispose of the product. As stated in the rules that DEA published on Oct. 9, 2001, anyone who has purchased a food or beverage product that contains THC has 120 days (until Feb. 6, 2002) to dispose of the product without penalty under federal law."

After Feb. 6, it will be illegal to sell or import any hemp-containing foods.

The DEA, in its wisdom, notes that bird seed with cannabis seeds, clothing such as hats, shirts and shoes, cosmetics, lotion, paper, rope, twine and, yes, shampoo and soap, which also can contain hemp, are not illegal. "Based on the information currently available, DEA believes that [such products] do not cause THC to enter the human body and are therefore legal."

Confronted with the thought that the government's investing time, money and energy in such a campaign during a time of war is, possibly, ridiculous, Hutchinson says, "Many Americans do not know that hemp and marijuana are both parts of the same plant and that hemp cannot be produced without producing marijuana."

Not surprisingly, supporters of food with hempseed oil have gone to court, beseeching the 9th U.S. Circuit Court of Appeals to block the DEA rule. DEA says it is permitted to issue the ban on THC-laced products without a formal rule-making procedure, although the public may comment until Dec. 10. "It's like the judge announcing the verdict before the trial," complained John Young, a lawyer for the hemp-food lawsuit, to the National Law Journal.

Groups which are applauding the DEA's action, such as the conservative Family Research Council, say food with hempseeds sends a pro-drug message to children and is camouflage for a campaign to legalize marijuana.

The other day, confronted by a man in Florida who said the government was not responding to his needs, President Bush muttered, "I can't stand bureaucracy."

Bush remembered the cameras were rolling and said that he appreciated "the hard-working people who care enough to work for the government. But what I don't like is systems that get so cumbersome that those who are trying to help you don't get the product out."

In the course of writing this, I have munched on the 120-calorie corn chips, the 220-calorie pretzels and devoured the 170-calorie snack bar.
In truth, I feel nothing but my waistband. And a curious desire to watch "Monty Python's Flying Circus."

Note: The drug war blunders on: The DEA is cracking down on hempseed oil in tortilla chips. Ann McFeatters is National Bureau chief for the Post-Gazette and The Blade of Toledo, Ohio.

Source: Pittsburgh Post-Gazette (PA)
Author: Ann McFeatters
Published: Sunday, December 9, 2001
Copyright: 2001 PG Publishing
Contact: letters post-gazette.com
Website: http://www.post-gazette.com/

Related Articles & Web Site:
FTE's Hemp Links
http://freedomtoexhale.com/hls.htm
Hemp Policy is Put To The Taste Test
http://cannabisnews.com/news/thread11516.shtml
Protesters Say Hemp is Food Not Drugs
http://cannabisnews.com/news/thread11514.shtml
Cannabis University Promotes Hemp
http://cannabisnews.com/news/thread11509.shtml
Comment #18 posted by goneposthole on December 10, 2001 at 07:52:45 PT
Government going, going, gone
"The time to speak up has passed. Now is the time for senseless bickering."
Comment #17 posted by qqqq on December 10, 2001 at 03:02:58 PT
more..personal,off topic ramblings
...I cannot believe,,that we continue to bomb Afghanistan!.......Have we met any resistance?....NO....Are we somehow still justified in pummeling the SHIT out of possible "targets"??????...what,,is our "goal",to kill bin Laden,,with thousands of tons of bombs,at a cost of over SIX BILLION DOLLARS?????.....I think this is LUNACY at its finest!...It's like trying to kill a mosquito,,with a 12 gauge shotgun,,,,it's like trying to get rid of a housefly,with a sledgehammer,,,,and the US governments military persual of of some Arab,is nothing less than sheer IDIOCY!!!......I mean,,,GIMME A FREAKIN BREAK!!!!!!.....WAKE FREEKIN' UP!!!!..How long until some people start to open their eyes,and realize what is going on!...We have spent way over 6 billion dollars,in some sort of absurd retaliatory hunt for "bin Laden",,"Al Queada",,and "Taliban" ,,in what has been labeled a "War on Terror"!!!!!!!!!!!!!!!and,,much to my chagrinned amazement,apparently the entire populace of the US citizenry,has been successfully BRAINWASHED,,into thinking this is normal,justified,and necessary!.....flocks of sheeple have been convinced to rally around our new ASSHOLE government,as they pummel Afghanistan,like some neanderthal schoolyard bully beating the shit out of some wimp!....The longer it goes on,,,the harder it is to believe!,,,and there are still millions of American Sheeple,who continue to blindly accept all this as normal,and necessary.......I hope some government security asshole snooper-pig is reading this,,because if you are,,,go ahead and track me down,and lock me up!..I am a proud and strong,and true American Patriot,,and anyone who would be a part of trying to censor me,or try to incarcerate me,,is a TRAITOR!....just because uncle sam signs your paycheck,,doesnt mean you have anything to do with being an American!,,heck,come to think of it,if uncle sam wanted to cut me some nice beefy checks,I might even join the united we stand team,(Bush/Ashcroft/Bennett in 2004!)..yikes....The 
American government has little to do with patriotism nowdays! ....In fact,,,not many people know,,but the real Uncle Sam was born in Norway,and came to the states on a whaling ship,,and he met up with Betsy Ross in Nantucket,,and they had a cheap,rum induced,one night fling,,and spawned an illegitimate love-child son,,who grew up to be that familiar American icon,who later came to be known as Colonel Sanders,of Kentucky fried chicken fame...........
Comment #16 posted by dddd on December 09, 2001 at 23:09:30 PT
Hi FoM
Wow...Corrie Ten Boom...I read her writings years ago,,and she is truly inspirational...when we read about stuff that people like her went through,,it makes having to hide to smoke weed,not seem that bad...there's always a bright side to almost everything,no matter how bad things may seem.....lol....dddd...Yes,,,I believe we are "headed for a fall",,,in fact,,I feel quite safe,in predicting that the coming year is going to be even stranger than this one..............Looking at the stars,can actually bring one down to earth,,like Corrie Ten Boom,,the main thing of life on earth,is how we treat people every day...Love is stronger than death,,,and niceness and goodness kicks evils' ass,,if not in this life,,then in the one that awaits.
Comment #15 posted by FoM on December 09, 2001 at 22:05:36 PT
Hi dddd
Glad you had a good time in the high desert. You said a lot in your comment. It is a sad state of affairs how it is going in the USA. I find peace going out and throwing the ball for the dog. At night looking at the stars and how in order everything seems up there. Peace is something that no one can take from you. Corrie Ten Boom was a Christian lady who hid the Jews and she was at peace in all she did even though her life was seriously jeopardised. I pray that our Nation might be Blessed but I don't think we have been good enough to deserve a Blessing speaking on a spiritual level. I hope no one minds. Why should we as a Nation not suffer like other people have? Why do some of our leaders think we are better than any other Nation or Country? I just don't know. Today an army helicopter flew over and what a strange feeling since we haven't seen many planes even though they practice out here doing maneuvers. They haven't since 9-11. There's an old expression. Pride cometh before a fall. I wonder if we are headed for a big fall.
PS: Good to have you back safe and sound.
Comment #14 posted by dddd on December 09, 2001 at 20:51:57 PT
back to reality
...I spent the weekend camping at Joshua Tree National Monument.It's a very special place in the high desert of Southern California.It was quite cold ,especially at night,(elevation 5000 ft),,,but it's so beautiful,that the cold doesnt matter.At night,the galactic shine of the stars is quite astoundingly awesome.You can see galaxies with the naked eye....seeing it,is enough to make even the bitterest of hardcore atheiests think twice about questioning the reality of God....and,speakin' of God,,,,,the four hour drive from LA to get out there,is mostly along the same 4 and 6 lane freeway that leads to Palm Springs.It is lined with billboards and perpetually clogged with traffic.....I am to the point of sheer disgust,in seeing the masses of brainwashed sheeple with flags in their cars,and "God bless America",,United we stand" bumper stickers,,, but what really enraged me,,was to see that uncle sam has been buying up billboard space to convey his bullshit of false unity....huge billboards with "United we stand,and "God bless America",line the freeway .....think about it.."God bless America",,,is that asking God to bless America,,or telling him to do so? ..Perhaps if God refuses to bless America,Bush,and Ashcroft will consider him a terrorist,and bomb him,,and seize his assets???....furthermore,,,why should God bless America?I cant think of too many things that the Amerika,as represented by its' government,has done that it would deserve to be "blessed" for..?In fact,,it would be a blessing,if God would topple the beastly republicrat Evil Empire,,but that wont happen.. ...The Amerikan government of today,is starting to look alot like a Charles Manson,,it has persuaded millions of innocent people rally behind its' murder and mayhem,under a created illusion that somehow god is walking around in a red white and blue robe,waving a flag. ..
...OK,,,to finally say a few things about the article....The absurdity of prohibition is really disturbing to begin with,,,but it really goes over the edge with the grotesque ban on hemp products.It's almost like they are seeing how far they can go,or what they can get away with,,,and it's becoming increasingly clear,,that they can get away with dam near anything nowdays.....after all,,the hemp ban issue appears somewhat insignificant,when compared to the brutal gang-rape of the Constitution.......and,,like I keep saying,,the only way they are able to pull all this shit off,,is because they control the press.......... And speakin' of the press,,,notice,,the latest silly ploy,is the false issue concerning;"The Liberal biased media"...What a pile of crap!...what a roose,,,what a masterful decoy,to further fake out the flocks.....dddd
Comment #13 posted by FoM on December 09, 2001 at 20:33:13 PT
News Brief from The Associated Press
Topless Women Draw Attention to Issues
December 9, 2001, 10:00 AM
By AP Staff
A group of women bared their chests in downtown Eugene to protest the effects of pesticides on their children and fight for legal hemp production.
The four women, who called themselves Mother Bares, said Saturday they were also protesting against multinational conglomerates and U.S. military action in Afghanistan.
"This is the mother energy that's coming out right now," said protester Eileen Erdelt.
Police Sgt. Terry Fitzpatrick said state law does not specifically prohibit women from going topless in public.
Copyright 2001 by The Associated Press
Comment #12 posted by Elfman_420 on December 09, 2001 at 13:33:47 PT
Asa, Asa, Asa....
"Many Americans do not know that hemp and marijuana are both parts of the same plant and that hemp cannot be produced without producing marijuana."No Asa!!!! Many Americans think that hemp and marijuana are the same exact thing and that it is possible to get high from a hemp plant.That is why all of the pro-hemp articles have to state every single time towards the beginning "The hemp plant is from the same family as the marijuana plant, but does not contain enough of the active ingredient THC to get a person high."
Comment #11 posted by boppy on December 09, 2001 at 08:22:49 PT
I should proof read more closely...
I meant to say "those who fought desegregation..." Sheesh! My apologies!
Comment #10 posted by boppy on December 09, 2001 at 08:15:46 PT
It all reminds me of a time...
The DEA's attitude and the sheep who agree with them remind me of the groups of people who fought segregation in the late 50's and early 60's. Their positions were absolutely indefensible and rooted simply in hate. That's exactly what we are encountering here. Blind, ignorant hate. In 1937, when Harry Anslinger, who was head of the Federal Bureau of Narcotics, contended that cannabis caused users to commit violence and other aggressive crimes and maintained that it provoked black men to molest white women, what were those opinions based on? Facts or hate? The FBN offered no proof of these claims. Did he sound like he was well informed or just perpetuating racism and class war? The DEA is not too far removed from the attitudes of 1937.
Comment #9 posted by The GCW on December 09, 2001 at 05:38:16 PT
Would Jefferson, Kill these people responsible?
Thomas Jefferson is quoted: "Hemp is of first necessity to the wealth and protection of the country."
Bush mutters.
Comment #8 posted by E_Johnson on December 09, 2001 at 00:00:31 PT
We are a population in captivity
That song resonates with me now. I guess the Old Testament makes a whole different kind of story when you experience it from the end of a campaign to wipe you out.

There is a campaign to wipe out a certain culture in America; this is what the hemp food ban is all about. It's direct cultural and political warfare against citizens of a particular group. I mean, this is Berlin in the thirties and we are the Jews.

Drug Free America -- that slogan doesn't worry people who are conscious of the Nazi dream of a Final Solution to the people whose culture bothered them so much?

It's no surprise that Germany is deciding that marijuana prohibition is a waste of time. They understand the true cost of waging these kinds of wars. They taught themselves that the really hard way.

But in America, we are in captivity, we are walking in bondage, and it's good to look to the history of people who survived a very long time in that beleaguered state. And we have to keep singing our songs of freedom in that strange land as we walk.

Even in modern society, one can learn from ancient people.
Comment #7 posted by goneposthole on December 08, 2001 at 19:34:31 PT
I don't know
who is red, white, yellow, brown or black and I don't care. I do know that green is the color we all support. Green green.

It's green they say on the far side of the hill
Green, green, it's green they say
Where the grass is greener, still.

Bob Seger: "Like a Rock"

"And I stood arrow straight
unencumbered by the way of all these hustlers and their schemes
I stood proud, I stood tall, high above it all
I still believed in my dreams
Like a rock"
Comment #6 posted by MysticRevelation on December 08, 2001 at 17:22:22 PT
Had a thought.
Then sang that song.
There's no turning back.
I sang out loud.
I sang that song again.

Thanks E Johnson.
Comment #5 posted by E_Johnson on December 08, 2001 at 15:50:45 PT
Music I'm playing today
By the rivers of Babylon,
Where he sat down,
And there he wept
When he remembered Zion.

Twas the wicked carried us away in captivity,
Required from us a song,
How can we sing King Alpha's song
In a strange land?

Twas the wicked carried us away in captivity,
Required from us a song,
How can we sing King Alpha's song
In a strange land?

Sing it out loud
Sing the song of freedom brothers
Sing the song of freedom sisters
We got to walk in bondage
We got to shout the song of freedom now

So, let the words of our mouth
And the meditations of our heart
Be acceptable in Thy sight.
Oh, verai!

So, let the words of our mouth
And the meditations of our heart
Be acceptable in Thy sight.
Oh, verai!

We got to sing it together
Everyone of us!
Comment #4 posted by Dark Star on December 08, 2001 at 14:41:37 PT
What if?
What if someone has a hemp hat, and breaks their Marinol capsule on it and licks their hat? Is their hat then illegal because it put THC in their body?If there is any justice in this world, or the next, there are going to be a great number of current politicians taking up space in hell.
Comment #3 posted by Lehder on December 08, 2001 at 13:44:44 PT
Sink the Mayflower
Read today's editorial at Counter Punch, High-Tech Puritanism by John Chuckman, for a very good perspective on US history and current policies. Our international and domestic persecutions spring from the same mind set that impelled the hateful intolerance for others by the original American Puritans. America has never been a free or democratic country, and only time and massive immigration will bring change.

http://www.counterpunch.org/
Comment #2 posted by E_Johnson on December 08, 2001 at 12:54:10 PT
In praise of our enemies
Between Ashcroft and Hutchinson, public respect for the DEA is going into the toilet as it never has before. Adding Walters to the mix is only going to make things get worse for them. The Republican leadership clearly has a very bad disconnect from the American public on drug policy; the Bush administration is clearly controlled by the tiny minority of Drug Free pressure group extremists left in America.

By 2004, after two more years of the leadership of this Drug Free Triumvirate, I expect that the DEA will have almost no public respect left in America, and the Democrats will be free to run whole hog with that in the election. And then Washington will be a different place for the DEA than it has been in the last several decades.

If the people in this agency cannot realize that they are on a mission of self-destruction with this hemp food ban, then God bless them and let them continue on their foolish prideful path.
Comment #1 posted by E_Johnson on December 08, 2001 at 12:43:29 PT
Agit prop theater by Mullah Ashcroft
If I had hired an agit-prop theater company to perform an absurdist drama on all of the potentially ridiculous consequences of marijuana prohibition, I could not have found any revolutionary agit-prop troupe in the WORLD that could have put together a more absurd and shocking performance than the one starring the DEA, directed by Mullah Ashcroft and his sidekick, Osama Bin Hutchinson.

Their revolutionary performance of "Why the Schedule I status of marijuana is absurd" is breaking new ground in educating ordinary citizens about the Controlled Substances Act and the power it has to intrude into human dignity and defeat the everyday instincts of basic morality and ethics in this country.
Q:
Stop highlighting Bar color in MPAndroidChart?
I am using MPAndroidChart in my application. The problem is that whenever I click any particular bar, it gets highlighted. I want to stop that and have the bar stay as it was before being clicked.
I have tried these methods, which I found after researching the issue, but none of them works:
barChart.setClickable(false);
barChart.setEnabled(false);
barChart.setDrawHighlightArrow(false);
barChart.setDrawBarShadow(false);
barChart.getData().setHighlightEnabled(false);
barChart.setHighlightPerTapEnabled(false);
A:
Try this to see if it solves the problem:
barchart.setTouchEnabled(false);
// Boost.Geometry (aka GGL, Generic Geometry Library)
// This file is manually converted from PROJ4
// Copyright (c) 2008-2012 Barend Gehrels, Amsterdam, the Netherlands.
// This file was modified by Oracle on 2017, 2018.
// Modifications copyright (c) 2017-2018, Oracle and/or its affiliates.
// Contributed and/or modified by Adam Wulkiewicz, on behalf of Oracle
// Use, modification and distribution is subject to the Boost Software License,
// Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
// This file is converted from PROJ4, http://trac.osgeo.org/proj
// PROJ4 is originally written by Gerald Evenden (then of the USGS)
// PROJ4 is maintained by Frank Warmerdam
// PROJ4 is converted to Geometry Library by Barend Gehrels (Geodan, Amsterdam)
// Original copyright notice:
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
// The above copyright notice and this permission notice shall be included
// in all copies or substantial portions of the Software.
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
/* meridional distance for ellipsoid and inverse
** 8th degree - accurate to < 1e-5 meters when used in conjunction
** with typical major axis values.
** Inverse determines phi to EPS (1e-11) radians, about 1e-6 seconds.
*/
#ifndef BOOST_GEOMETRY_PROJECTIONS_PJ_MLFN_HPP
#define BOOST_GEOMETRY_PROJECTIONS_PJ_MLFN_HPP
#include <cstdlib>
#include <boost/geometry/srs/projections/exception.hpp>
#include <boost/geometry/srs/projections/impl/pj_strerrno.hpp>
#include <boost/geometry/util/math.hpp>
namespace boost { namespace geometry { namespace projections {
namespace detail {
template <typename T>
struct en
{
static const std::size_t size = 5;
T const& operator[](size_t i) const { return data[i]; }
T & operator[](size_t i) { return data[i]; }
private:
T data[5];
};
template <typename T>
inline en<T> pj_enfn(T const& es)
{
static const T C00 = 1.;
static const T C02 = .25;
static const T C04 = .046875;
static const T C06 = .01953125;
static const T C08 = .01068115234375;
static const T C22 = .75;
static const T C44 = .46875;
static const T C46 = .01302083333333333333;
static const T C48 = .00712076822916666666;
static const T C66 = .36458333333333333333;
static const T C68 = .00569661458333333333;
static const T C88 = .3076171875;
T t;
detail::en<T> en;
{
en[0] = C00 - es * (C02 + es * (C04 + es * (C06 + es * C08)));
en[1] = es * (C22 - es * (C04 + es * (C06 + es * C08)));
en[2] = (t = es * es) * (C44 - es * (C46 + es * C48));
en[3] = (t *= es) * (C66 - es * C68);
en[4] = t * es * C88;
}
return en;
}
template <typename T>
inline T pj_mlfn(T const& phi, T sphi, T cphi, detail::en<T> const& en)
{
cphi *= sphi;
sphi *= sphi;
return(en[0] * phi - cphi * (en[1] + sphi*(en[2]
+ sphi*(en[3] + sphi*en[4]))));
}
template <typename T>
inline T pj_inv_mlfn(T const& arg, T const& es, detail::en<T> const& en)
{
static const T EPS = 1e-11;
static const int MAX_ITER = 10;
T s, t, phi, k = 1./(1.-es);
int i;
phi = arg;
for (i = MAX_ITER; i ; --i) { /* rarely goes over 2 iterations */
s = sin(phi);
t = 1. - es * s * s;
phi -= t = (pj_mlfn(phi, s, cos(phi), en) - arg) * (t * sqrt(t)) * k;
if (geometry::math::abs(t) < EPS)
return phi;
}
BOOST_THROW_EXCEPTION( projection_exception(error_non_conv_inv_meri_dist) );
return phi;
}
} // namespace detail
}}} // namespace boost::geometry::projections
#endif
Hello! Glad to be here again! Thanx for visiting my entry and good luck to you!
This is the first fast sketch, just to capture the whole idea.
My Image will show a struggle/fight between a human and an alien kid on a schoolyard. The whole scene is set on some colonized planet. My intention was(is) to depict a human alien-interaction in a rather non-standard but still trivial manner.
I hope you like it and am looking forward to your comments. :)
Vatrobot
04-04-2008, 12:14 PM
Think-Update: I've decided to make this one in 3D and not as a paintjob... I wish myself luck ;)
I started colouring the first sketch just to make it look better, but doing this I realized that there is no way I could make this one in 3D on time, so I made up my mind again and will finish it as a paintjob... And because it was too much fun using my new tablet the way it should actually be used ;)
Well I was pretty busy with some regular projects last week and there is more work coming on. This is good for my business, but bad for this image. I still believe I'll manage to finish it, but it's not going to be a good example of nice workflow and time-management for sure... sigh. :sad:
Here is a rather minor update. My biggest achievement last week was that I've finally found "my" painting-technique, so everything should go faster now. Well, it must anyway... tic-tac-tic-tac... :wip:
CGTalk Moderation
04-27-2008, 06:05 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.
You will be able to tell by the photos here that these are not actresses, these events have not been staged, those are genuine ENF reactions from the girls who clearly didn't plan to lose their bra and panties on that occasion. Non Nude Update — 19th August Today we've got a Nice Non Nude Collection for you, it'll take a while to list this out but here goes, not in order. British Girls Gone Bad. We also hate this porn site. Kari Sweets New Uncensored Pics
BARRE — With a new digital flood insurance rate map for Washington County finally scheduled to go into effect in March, folks with flood insurance questions will have several opportunities to get them answered starting next month.
A series of public meetings will be held throughout the county starting Jan. 8 in Waterbury and wrapping up in Montpelier on Feb. 5. There will be five meetings in all, and each will focus on the newly revised flood hazard maps and the insurance options available for structures that may be affected by the updated designations.
Countywide, roughly 200 structures have been identified for the first time as being at a high risk of damage by flooding, and their owners may benefit by getting flood insurance before the new map goes into effect March 19.
Residential property owners who buy flood insurance before the map change can benefit from a more gradual increase in their insurance costs. They are eligible for low-cost “preferred-risk policies” that can be renewed twice before insurance increases to the full cost.
An average flood insurance policy for property in a high-risk area currently costs around $1,400 a year for $170,000 in coverage.
In many cases obtaining insurance for properties in those areas isn’t optional. Federal law requires lenders to be sure that mortgages on structures in the flood hazard area are insured for their known flood risk.
While the flood hazard area is expanding in some areas, it is contracting in others. Countywide, roughly 500 properties are expected to drop out of the high-risk area, though that doesn’t mean it’s a good idea to drop flood insurance. Owners of property outside the flood hazard area will benefit from lower available flood insurance premiums.
The upcoming public information meetings are open to all and will be held on Tuesday and Thursday nights.
The first meeting will be at 7 p.m. at Thatcher Brook Primary School in Waterbury on Jan. 8.
The venue will shift to Barre’s Alumni Hall on Jan. 17 at 7 p.m., Brown Public Library in Northfield on Jan. 22 at 7 p.m., the Old Schoolhouse Common in Marshfield on Jan. 31 at 7 p.m., and Memorial Room at City Hall in Montpelier on Feb. 5 at 6 p.m.
The new maps for Washington County have been in final form for some time, but their implementation was delayed while officials in Barre exhausted their ability to appeal the flood hazard boundaries in their community.
Barre officials were concerned with a significant — and unwarranted, they believed — expansion of the flood hazard area on and around a 1,900-foot section of North Main Street. Dozens of properties will be affected by the new designation, which creates insurance obligations and imposes rigid restrictions on any development.
Lavelle Family & Cosmetic Dentistry was founded in Prospect, KY., in 1979 by Dr. Paul Lavelle. Dr. Abby Lavelle Staffieri recently joined his practice, bringing with her a passion and emphasis in cosmetic dentistry.
Dentist 40059 | Oral Hygiene at Work
Do you brush your teeth after lunch? If you’re one of the millions of people who work outside the home, chances are you don’t have the time or resources to brush during the day. However, not being able to brush doesn’t mean you can’t protect your teeth at work.
Grab a drink of water. When you finish eating, get a drink of water. Swish the water around in your mouth, then spit or swallow it. Water helps to remove small particles of food that can remain on your teeth after your meal or snack.
Chew sugarless gum. There are certain types of sugarless gum that are approved by the American Dental Association (ADA) as good for your oral health. The reason for this is that chewing stimulates the production of saliva in your mouth. That saliva washes away food particles and helps to neutralize acids on your teeth.
Limit time drinking coffee or soda. Coffee, soda, tea, and many other beverages contain high levels of sugars and acids. The more time you spend sipping your drink, the longer your teeth are exposed to these sources of decay. Instead of spending an hour taking small swallows, drink quickly to limit exposure, then rinse your mouth or switch to water to help counteract the effects.
Brush and floss when you can. Try to keep to a regular routine of good oral hygiene practices when you are at home. Brush at least twice daily, for two full minutes each time. Floss or use an interdental cleaner of your choice once a day. Keep your recommended appointments to have your teeth cleaned and evaluated by our team.
Taking care of your teeth doesn’t have to interrupt your workday. Keeping these simple tips in mind can help protect your mouth from tooth decay, periodontal disease, and other oral health issues.
1. Field of the Invention
The present invention relates to massagers, more particularly to power-operated massagers.
2. Background Art
Power-operated massagers are often used to treat muscle tension and fatigue. Power-operated massagers may provide a vibrating massage effect, a percussive massage effect, a kneading massage effect, a rubbing massage effect, a rolling massage effect, a Shiatsu massage effect, or the like.
Often, these massage effects are provided in power operated massagers that are adapted for handheld operation. Thus, many of these massagers are embodied in an apparatus having a handle to be grasped and manipulated by the user. Due to the massage effect of a particular massager, resultant vibrations from the massage effect are often imparted to the massager, which are translated through the handle portion to the hand and wrist of the user.
Oftentimes, the massage feature of a massager is provided spaced away from the handle portion so that the user may apply the massage to a body part. The length of the handle portion may magnify the resultant moment or torque experienced by the user.
Handheld massagers are often limited in visual therapeutic effects.
Handheld vibratory massagers are often limited in features of versatility.
A goal of the present invention is to reduce shock and vibrations imparted to the hand of the user when operating a handheld massager. Another goal of the present invention is to improve visual therapeutic effects of a handheld massager. A further goal of the present invention is to improve the effectiveness and versatility of handheld vibratory massagers.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
Q:
Unsure of meaning of assignment function (variable assignment) in semantics of predicate logic?
I'm currently in a mathematical linguistics course, and I'm having trouble understanding the meaning of
'g[d/v]: the variable assignment g′ that is exactly like g except (maybe) for
g(v), which equals the individual d'
in the semantics of predicate logic. If given a variable assignment in an example model, does this mean that (v) refers to all variables (d) that are in the universe, thus that all elements of the universe replace the variable (v)?
g1 =
x1 → John
x2 → Mary
x3 → Pete
xn → Pete (where n≥4)
g1[John/x3] =
x1 → John
x2 → Mary
x3 → John
xn → Pete (where n≥4)
g1[[John/x3]Pete/x1] =
x1 → Pete
x2 → Mary
x3 → John
xn → Pete (where n≥4)
Also, I have an exercise based on variable assignment equivalence, but I do not know how to approach answering these questions since I do not entirely understand the meaning of variable assignment, and modified variable assignment.
QUESTION: Complete the equivalences assuming: g(x) = Mary, and g(y) = Susan.
1.
g[Paul/x](x) =
2.
g[Paul/x](y) =
3.
g[[Paul/x]Susan/x](x) =
4.
g[[Paul/x]Susan/x](y) =
5.
g[[Paul/x]Susan/y](x) =
6.
g[[Paul/x]Susan/y](y) =
If anyone could explain this concept to me, I would be very grateful!
EDIT: sorry, i'm quite new to this site! the questions were cut off by the closed bracket. I've tried attempting the questions below.
1.g[Paul/x](x) = x: Paul
2.g[Paul/x](y) = y: Susan
3.g[[Paul/x]Susan/x](x) = x: Susan?
4.g[[Paul/x]Susan/x](y) = y: Susan?
5.g[[Paul/x]Susan/y](x) = x: Paul?
6.g[[Paul/x]Susan/y](y) = y: Susan?
I'm a bit unsure about some of these, if x is originally mapped to Mary, then to Paul & Susan in (3&4)
A:
If given a variable assignment in an example model, does this mean that (v) refers to all variables (d) that is in the universe, thus that all elements of the universe replaces the variable (v)?
No. A variable assignment maps every variable to a specific individual. You can see that with your first example:
g1 =
x1 → John
x2 → Mary
x3 → Pete
xn → Pete (where n≥4)
However, we can change those assignments when we do something like:
g1[John/x3]
This means that everything gets assigned the same individual as above, except that we now map x3 to John, so we get:
g1[John/x3] =
x1 → John
x2 → Mary
x3 → John
xn → Pete (where n≥4)
So, for the last exercise at the end, your initial g is:
g =
x → Mary
y → Susan
So that means that g[Paul/x] is:
g[Paul/x] =
x → Paul
y → Susan
Can you do the others?
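The mechanics of modified assignments can be sketched in a few lines of Python, modeling an assignment as a dictionary from variables to individuals (the function name `extend` and the dict encoding are illustrative conveniences, not standard notation):

```python
def extend(g, d, v):
    """Return the modified assignment g[d/v]: exactly like g,
    except that the variable v is now mapped to the individual d."""
    g2 = dict(g)  # copy, so the original assignment g is left unchanged
    g2[v] = d
    return g2

# The assignment from the exercise: g(x) = Mary, g(y) = Susan.
g = {"x": "Mary", "y": "Susan"}

g1 = extend(g, "Paul", "x")    # g[Paul/x]
g2 = extend(g1, "Susan", "x")  # g[[Paul/x]Susan/x] - the second update to x wins
g3 = extend(g1, "Susan", "y")  # g[[Paul/x]Susan/y] - x keeps Paul, y becomes Susan
```

Reading the values off these dicts reproduces the pattern above: each bracketed update overrides exactly one variable, and a later update to the same variable simply wins.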
|
{
"pile_set_name": "StackExchange"
}
|
Non-suicidal self-injury and suicidality in trans people: A systematic review of the literature.
Literature has described high levels of mental health problems among trans people, such as depression, resulting in increased levels of non-suicidal self-injury (NSSI) behaviour and suicidality (suicidal thoughts, suicide attempts and suicide rates). With the aim of systematically reviewing the available literature in this field, this study identifies 31 papers that explore the rates of NSSI and suicidality in trans people. From reviewing the literature, it was revealed that trans people have a higher prevalence of NSSI and suicidality compared to the cisgender (non-trans) population. There appear to be some gender differences within these rates, with trans men at a greater risk for NSSI behaviour. Prevalence rates differ depending on the different stages of transition, but they are still overall greater than the cisgender population. The study concludes that trans individuals are at a greater risk of NSSI behaviour and suicidality than the cisgender population, and discusses risk factors and the need to develop effective preventative interventions.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Jenks Football Coach Allan Trimble was honored Saturday at the Mabee Center. "A Night of Legacy" celebrated the 13-time State Champion's incredible career and served as a fundraiser for his battle with ALS. Click Here for Video
|
{
"pile_set_name": "Pile-CC"
}
|
from fnmatch import fnmatch
import re
__all__ = ("make_active_helper", )
def make_active_helper(request):
    def active(*url_patterns, partial=False, class_name="active"):
        # Normalize the current path: drop any trailing "index.html" and slashes.
        curr_path = re.sub(r"index\.html$", "", request.path).strip("/")
        for urlp in url_patterns:
            urlp = re.sub(r"index\.html$", "", urlp.strip("/")).strip("/")
            # Glob match, or prefix match when `partial` is requested.
            if fnmatch(curr_path, urlp) or (partial and curr_path.startswith(urlp)):
                return class_name
        return ""
    return active
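A quick usage sketch of the helper above (the helper is restated, with the regex dots escaped, so the snippet runs on its own; `FakeRequest` is an illustrative stand-in for a framework request object, not part of this module):

```python
import re
from fnmatch import fnmatch

def make_active_helper(request):
    def active(*url_patterns, partial=False, class_name="active"):
        curr_path = re.sub(r"index\.html$", "", request.path).strip("/")
        for urlp in url_patterns:
            urlp = re.sub(r"index\.html$", "", urlp.strip("/")).strip("/")
            if fnmatch(curr_path, urlp) or (partial and curr_path.startswith(urlp)):
                return class_name
        return ""
    return active

class FakeRequest:
    """Stand-in for a framework request object; only `path` is needed."""
    def __init__(self, path):
        self.path = path

active = make_active_helper(FakeRequest("/blog/2020/index.html"))
glob_hit = active("/blog/*")             # glob match against "blog/2020"
miss = active("/about/")                 # no match
prefix_hit = active("/blog", partial=True)  # prefix match
```

With the request path normalized to `blog/2020`, the glob and prefix calls return the CSS class name while a miss returns an empty string, which is convenient for dropping straight into a template's `class` attribute.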
|
{
"pile_set_name": "Github"
}
|
/****************************************************************************
**
** Copyright (C) 2016 The Qt Company Ltd.
** Contact: https://www.qt.io/licensing/
**
** This file is part of the QtWidgets module of the Qt Toolkit.
**
** $QT_BEGIN_LICENSE:LGPL$
** Commercial License Usage
** Licensees holding valid commercial Qt licenses may use this file in
** accordance with the commercial license agreement provided with the
** Software or, alternatively, in accordance with the terms contained in
** a written agreement between you and The Qt Company. For licensing terms
** and conditions see https://www.qt.io/terms-conditions. For further
** information use the contact form at https://www.qt.io/contact-us.
**
** GNU Lesser General Public License Usage
** Alternatively, this file may be used under the terms of the GNU Lesser
** General Public License version 3 as published by the Free Software
** Foundation and appearing in the file LICENSE.LGPL3 included in the
** packaging of this file. Please review the following information to
** ensure the GNU Lesser General Public License version 3 requirements
** will be met: https://www.gnu.org/licenses/lgpl-3.0.html.
**
** GNU General Public License Usage
** Alternatively, this file may be used under the terms of the GNU
** General Public License version 2.0 or (at your option) the GNU General
** Public license version 3 or any later version approved by the KDE Free
** Qt Foundation. The licenses are as published by the Free Software
** Foundation and appearing in the file LICENSE.GPL2 and LICENSE.GPL3
** included in the packaging of this file. Please review the following
** information to ensure the GNU General Public License requirements will
** be met: https://www.gnu.org/licenses/gpl-2.0.html and
** https://www.gnu.org/licenses/gpl-3.0.html.
**
** $QT_END_LICENSE$
**
****************************************************************************/
#ifndef QSTACKEDLAYOUT_H
#define QSTACKEDLAYOUT_H
#include <QtWidgets/qtwidgetsglobal.h>
#include <QtWidgets/qlayout.h>
QT_BEGIN_NAMESPACE
class QStackedLayoutPrivate;
class Q_WIDGETS_EXPORT QStackedLayout : public QLayout
{
Q_OBJECT
Q_DECLARE_PRIVATE(QStackedLayout)
Q_PROPERTY(int currentIndex READ currentIndex WRITE setCurrentIndex NOTIFY currentChanged)
Q_PROPERTY(StackingMode stackingMode READ stackingMode WRITE setStackingMode)
QDOC_PROPERTY(int count READ count)
public:
enum StackingMode {
StackOne,
StackAll
};
Q_ENUM(StackingMode)
QStackedLayout();
explicit QStackedLayout(QWidget *parent);
explicit QStackedLayout(QLayout *parentLayout);
~QStackedLayout();
int addWidget(QWidget *w);
int insertWidget(int index, QWidget *w);
QWidget *currentWidget() const;
int currentIndex() const;
using QLayout::widget;
QWidget *widget(int) const;
int count() const override;
StackingMode stackingMode() const;
void setStackingMode(StackingMode stackingMode);
// abstract virtual functions:
void addItem(QLayoutItem *item) override;
QSize sizeHint() const override;
QSize minimumSize() const override;
QLayoutItem *itemAt(int) const override;
QLayoutItem *takeAt(int) override;
void setGeometry(const QRect &rect) override;
bool hasHeightForWidth() const override;
int heightForWidth(int width) const override;
Q_SIGNALS:
void widgetRemoved(int index);
void currentChanged(int index);
public Q_SLOTS:
void setCurrentIndex(int index);
void setCurrentWidget(QWidget *w);
private:
Q_DISABLE_COPY(QStackedLayout)
};
QT_END_NAMESPACE
#endif // QSTACKEDLAYOUT_H
|
{
"pile_set_name": "Github"
}
|
GST rates decided on 1211 items, except gold
Addressing a news conference at Sher-e-Kashmir International Convention Centre (SKICC) in Srinagar after attending the 14th GST council meeting, Jaitley said, "It is for J&K and its assembly to decide what they want to do and I am sure they will take an appropriate decision".
As widely reported, essential items have either been kept out of the purview of the tax or assigned the lowest rate.
The cost of energy generation is likely to come down as the tax incidence on coal will come down to 5% from about 11% now.
The complete details will be available in the public domain after the said meeting concludes on Friday. He said J-K, which is the only state enjoying special taxation powers, will make necessary changes to pass the GST bill, which is billed as the biggest tax reform in the history of independent India.
Rates for household goods like soaps at 18 percent and durables such as cars at 28 percent.
While frozen meat will attract a GST of 12 per cent, Ayurvedic or homeopathy medicines, agarbatti, umbrella, electric vehicles and mobile phone manufacturing will be taxed at 12 per cent.
Some food items like coffee and edible oil will be taxed at a rate of 5 percent, Revenue Secretary Hasmukh Adhia said.
Commenting on the GST Council's deliberations, a senior tax analyst said the rates announced were along expected lines. Coal is now taxed at 11.69 per cent. "Another aspect encountered and accepted by most of the GST countries lies in the statistic that GST will be inflationary, especially if the effective tax rate is higher than what prevailed before".
The council finalised tax rates on 80-90 percent of items under the four-slab structure.
"Cereals will be in exempt list".
The council also approved most of the draft rules relating to the new tax that would push businesses to upgrade their infrastructure for filing tax returns under the new system. But what is to be done with packaged and branded food that has to be separately decided.
The Finance Minister said the GST is perhaps India's largest and most significant tax reform, the report added. "We are now part of the national policy making which will be recorded in the history", he said.
"Of several commodities, we have consciously brought down the tax".
"We are banking on hope that under GST, evasion will be checked and buoyancy will go up", the minister said.
|
{
"pile_set_name": "Pile-CC"
}
|
/****************************************************************************
**
** https://www.qxorm.com/
** Copyright (C) 2013 Lionel Marty (contact@qxorm.com)
**
** This file is part of the QxOrm library
**
** This software is provided 'as-is', without any express or implied
** warranty. In no event will the authors be held liable for any
** damages arising from the use of this software
**
** Commercial Usage
** Licensees holding valid commercial QxOrm licenses may use this file in
** accordance with the commercial license agreement provided with the
** Software or, alternatively, in accordance with the terms contained in
** a written agreement between you and Lionel Marty
**
** GNU General Public License Usage
** Alternatively, this file may be used under the terms of the GNU
** General Public License version 3.0 as published by the Free Software
** Foundation and appearing in the file 'license.gpl3.txt' included in the
** packaging of this file. Please review the following information to
** ensure the GNU General Public License version 3.0 requirements will be
** met : http://www.gnu.org/copyleft/gpl.html
**
** If you are unsure which license is appropriate for your use, or
** if you have questions regarding the use of this file, please contact :
** contact@qxorm.com
**
****************************************************************************/
#ifndef _QX_IS_QX_POD_H_
#define _QX_IS_QX_POD_H_
#ifdef _MSC_VER
#pragma once
#endif
#include <type_traits> // std::is_pod, std::is_pointer, std::is_member_pointer, std::conditional
/*!
* \file is_qx_pod.h
* \author Lionel Marty
* \ingroup QxTraits
* \brief qx::trait::is_qx_pod<T>::value : return true if T is a POD type and not a pointer
*/
namespace qx {
namespace trait {
/*!
* \ingroup QxTraits
* \brief qx::trait::is_qx_pod<T>::value : return true if T is a POD type and not a pointer
*/
template <typename T>
struct is_qx_pod
{
enum { value = (std::is_pod<T>::value && ! std::is_pointer<T>::value && ! std::is_member_pointer<T>::value) };
typedef typename std::conditional<qx::trait::is_qx_pod<T>::value, std::true_type, std::false_type>::type type;
};
} // namespace trait
} // namespace qx
#endif // _QX_IS_QX_POD_H_
|
{
"pile_set_name": "Github"
}
|
<?php
/**
* @package plugins.tvComDistribution
* @subpackage lib
*/
class TVComDistributionProvider implements IDistributionProvider
{
/**
* @var TVComDistributionProvider
*/
protected static $instance;
protected function __construct()
{
}
/**
* @return TVComDistributionProvider
*/
public static function get()
{
if(!self::$instance)
self::$instance = new TVComDistributionProvider();
return self::$instance;
}
/* (non-PHPdoc)
* @see IDistributionProvider::getType()
*/
public function getType()
{
return TVComDistributionPlugin::getDistributionProviderTypeCoreValue(TVComDistributionProviderType::TVCOM);
}
/**
* @return string
*/
public function getName()
{
return 'TV.com';
}
/* (non-PHPdoc)
* @see IDistributionProvider::isDeleteEnabled()
*/
public function isDeleteEnabled()
{
return true;
}
/* (non-PHPdoc)
* @see IDistributionProvider::isUpdateEnabled()
*/
public function isUpdateEnabled()
{
return true;
}
/* (non-PHPdoc)
* @see IDistributionProvider::isMediaUpdateEnabled()
*/
public function isMediaUpdateEnabled()
{
return true;
}
/* (non-PHPdoc)
* @see IDistributionProvider::isReportsEnabled()
*/
public function isReportsEnabled()
{
return false;
}
/* (non-PHPdoc)
* @see IDistributionProvider::isScheduleUpdateEnabled()
*/
public function isScheduleUpdateEnabled()
{
return true;
}
/* (non-PHPdoc)
* @see IDistributionProvider::isAvailabilityUpdateEnabled()
*/
public function isAvailabilityUpdateEnabled()
{
return false;
}
/* (non-PHPdoc)
* @see IDistributionProvider::isLocalFileRequired()
*/
public function isLocalFileRequired($jobType)
{
return false;
}
/* (non-PHPdoc)
* @see IDistributionProvider::useDeleteInsteadOfUpdate()
*/
public function useDeleteInsteadOfUpdate()
{
return false;
}
/* (non-PHPdoc)
* @see IDistributionProvider::getJobIntervalBeforeSunrise()
*/
public function getJobIntervalBeforeSunrise()
{
return 0;
}
/* (non-PHPdoc)
* @see IDistributionProvider::getJobIntervalBeforeSunset()
*/
public function getJobIntervalBeforeSunset()
{
return 0;
}
/* (non-PHPdoc)
* @see IDistributionProvider::getUpdateRequiredEntryFields()
*/
public function getUpdateRequiredEntryFields($distributionProfileId = null)
{
return array();
}
/* (non-PHPdoc)
* @see IDistributionProvider::getUpdateRequiredMetadataXPaths()
*/
public function getUpdateRequiredMetadataXPaths($distributionProfileId = null)
{
return array();
}
}
|
{
"pile_set_name": "Github"
}
|
[Gallbladder cancer in chronic calculous cholecystitis].
127 cases of gallbladder cancer are analysed. Among them there were 99 (78%) women and 28 (22%) men. The majority of the patients were at the age from 61 to 70 years; 19 (15%) patients were younger than 50 years. Mean age of the patients was 61.7 years. Chronic calculous cholecystitis was detected in 116 (91.3%) patients. In 75 (59%) patients the duration of the disease exceeded 5 years. Women suffer more frequently than men because of the higher rate of calculous cholecystitis in women. Radical surgery was possible only in 51 of 97 (52.7%) patients as a result of late diagnosis of the cancer. Cholecystectomy was performed in 32 patients, cholecystectomy in combination with liver resection in 9 patients, and tumor removal within the borders of the healthy tissues in 10 patients. Palliative operations were performed in 46 (36.2%) patients. Long-term results of the surgery were followed up in 93 patients. 5-year survival after the radical surgery was 40%, 10-year survival 20%. Mean survival after the palliative surgery was 7 months. Gallbladder cancer is difficult to diagnose and the results of its treatment are unsatisfactory. Timely surgery for chronic calculous cholecystitis is important in the prevention of gallbladder cancer.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
A co-production by ORF, Cosmos Factory, National Geographic Channel, National Geographic Channels International and ZDF
Available worldwide except for Germany and USA
Languages:
German (ORIGINAL)
, English (DUBBED)
, French (DUBBED)
Format: 16:9
Lightning Reloaded
Blitzgewitter - Himmel unter Strom
Even today we still don't know exactly where lightning strikes come from and how they are created. This documentary focuses on the latest lightning and thunderstorm research, using cutting-edge digital video technology. It solves the mystery of ball lightning and explores in detail the anatomy of a lightning strike, as well as investigating the phenomena of sprites and lightning bolts that explode in the depths of space.
|
{
"pile_set_name": "Pile-CC"
}
|
Q:
Undo: export GIT_ASKPASS=""
I'm relatively new to git/gitlab. For my school gitlab account, I was trying to setup git push to not continuously ask for my rsa passphrase by using:
export GIT_ASKPASS="<password goes here>"
It did not work, and now I'm stuck trying to push to gitlab with a refused connection. Is there an easy way out? Or do I have to setup my rsa keys all over again? Thanks in advance for helping a noob in distress.
A:
It is best at first to generate ssh keys without a passphrase.
Or you would have to deal with ssh-agent, as described in "Adding your SSH key to the ssh-agent"
ssh-keygen -t rsa -C "key for xxx access" -q -P ""
Publish your public key to your GitLab account, and it should not ask for a passphrase (provided you are using a git@gitlab.com:<username>/<reponame> ssh url, not an https one)
|
{
"pile_set_name": "StackExchange"
}
|
8 things to do for fall break
Here are some fun ways for students to spend their time off. With so many things within
a couple hours’ drive from Columbia, there is always something to do during fall break.
State Fair
The South Carolina State Fair begins Oct. 12 and wraps up Oct. 23. There will be rides and games but most importantly,
over 90 food stands. Make sure you try their deep-fried mini-cinnamon roll pops, take
a picture and tag us. Buy tickets online until Oct. 12 to save $3.
The Liberty Bridge in Falls Park is a pedestrian bridge in downtown Greenville - perfect
for pictures.
Greenville
Relive your childhood and spend a day geocaching at the Swamp Rabbit trail in Greenville.
Head in the direction of Travelers Rest, S.C., and you’ll come across a great place
to stop and eat called Swamp Rabbit Cafe. You can also spend your afternoon in Falls Park or hop on a trolley to check out the town. Great places to eat and fun things to do can be found by searching their hashtag
#Yeahthatgreenville.
Asheville
Asheville is the perfect weekend trip — here's our guide to do it successfully. Venture 3 1/2 hours to Grandfather Mountain
in Linville, N.C., to your final destination — the Mile High Swing Bridge. The Mile High Swing Bridge is America’s highest suspension bridge and allows visitors
360 degree views from Grandfather Mountain. Afterwards, head towards the Bon Paul and Sharky’s Hostel in Asheville, N.C., about an hour and a half drive. Either snag some food at a restaurant in walking distance or make your way towards downtown Asheville to explore more options. On Sunday, whether you’re carbing up for another hike or just looking for a good place for breakfast —Biscuit Head is a must.
Coward, S.C.
The Canopy Walk, a walkway suspended from trees at Lynches River County Park in Coward, S.C., is less than two hours away, in the middle of nowhere and totally awesome. Warning — if you're scared of heights this may not be for you. Visitors also have the option
of renting kayaks and canoes or trying out the free archery range.
Riley Moore Falls - between Westminster, S.C., and Long Creek, S.C.
The Riley Moore Falls is a 12-foot cascade into a 100-foot-wide pool of water with breathtaking colors
at the base. After a quick dip or a few pictures, take a short drive to Chattooga Belle Farm where you can go apple picking or stop by have a farm fresh lunch.
The sea lions are back! Take some time this fall break to stop by the Sea Lion Landing
at the Riverbanks Zoo.
SCarowinds
Formerly known as Carowinds, SCarowinds takes on a Halloween twist during the month of October to help celebrate our spookiest
holiday. SCarowinds is right on the North Carolina-South Carolina state line — about an hour drive north and totally worth the trip. Buy your ticket in advance
online to save money and time.
Riverbanks Zoo
If you plan on staying in Columbia for fall break, it's a perfect time to check out
the Riverbanks Zoo. Try out the Zipline Canopy Tour, which includes River and Zip. If you’d rather save
the heights for another day, make sure you order your ticket online to save $2.20.
|
{
"pile_set_name": "Pile-CC"
}
|
As the dimensions of MOSFET devices have been pushed deeper and deeper into the sub-micron regime, increasing demands have been placed on controlling and characterizing the fabrication methods that are used to electrically isolate those devices from one another. The more common methods used for isolating MOSFET devices usually employ narrow strips of oxide layers and underlying doped silicon layers, that are placed in between the closely spaced devices. The purpose of the oxide is to provide electrical isolation while also providing lower lateral device to substrate capacitance, relative to older junction isolation methods. The purpose of the underlying doping layers is to avoid the formation of conducting channels that could, otherwise, result in unwanted buried current paths between adjacent devices. As device dimensions have continued to shrink, the associated isolation structures between the devices have continually been improved, in order to conserve space while also allowing for improved speed performance (lower capacitance).
FIGS. 1a and 1b illustrate cross-sectional and top views, respectively, of an early popular isolation method, based on the Local Oxidation of Silicon, LOCOS, which was first invented in 1970. Referring more particularly to the cross-sectional view of FIG. 1a, there is shown a P type semiconductor substrate 2, an overlying relatively thin gate oxide region 4 and a further overlying polysilicon gate region 6, representing one of a plurality of isolated N channel MOSFET devices. FIGS. 1a and 1b also show the surrounding LOCOS isolation structure, for the MOSFET device, which is comprised of a P doped channel stop region 8 and an overlying relatively thick LOCOS oxide region 10. It is noted that the LOCOS oxide region 10 is usually thermally grown in the presence of a silicon nitride feature (previously removed and not shown), which is used to prevent a thick oxide layer from also being simultaneously grown in gate oxide region 4. As a result, evidence can still be seen of the partial extension of the thick oxide growth, under the previously removed nitride feature, in the form of a "Bird's Beak" 12. It is also noted that the same thermal processing for thick oxide region 10 tends to undesirably make the P type channel stop region 8 encroach into gate region 4. It is further noted that the highly graded Bird's Beak shape 12, along with the encroachment of the channel stop region 8 has historically posed a number of problems for the LOCOS based fabrication of sub-micron devices. Consequently, subsequent improvements in isolation technology have led to more recent advances such as Shallow Trench Isolation, STI, methods.
FIGS. 2a and 2b illustrate cross-sectional and top views, respectively, of a more recent STI method for isolating sub-micron devices. Referring now more particularly to the cross-sectional view of FIG. 2a. there is shown a P type semiconductor substrate 20, an overlying relatively thin gate oxide region 22 and a further overlying polysilicon gate region 24, representing one of a plurality of isolated N channel MOSFET devices. FIGS. 2a and 2b also show the surrounding STI isolation structure, for the MOSFET device, which is comprised of a P doped channel stop region 26 and an overlying relatively thick STI oxide region 28. It is noted that STI oxide region 28 is usually fabricated by first etching a shallow trench into the silicon substrate and then filling the trench with a CVD oxide layer which is subsequently planarized.
From an examination of FIG. 2a, the aforementioned "Bird's Beak" problem of FIG. 1a has been avoided. Also, the relatively lower thermal cycle needed for the CVD STI oxide tends to, beneficially, reduce the aforementioned channel stop encroachment problem of FIG. 1a. These are some of the reasons why the STI method of FIGS. 2a and 2b has increasingly become a desirable replacement for the LOCOS method of FIGS. 1a and 1b.
Although STI is now widely used in deep sub-micron process technologies, as a LOCOS replacement, it does have its own problems. For example, along the edge of the device channel (part A, illustrated in FIGS. 2a and 2b), there is a tendency for the gate oxide to be thinner at the boundary between the gate oxide and the STI oxide. This "thinning effect" is caused by larger stresses in the corner regions as well as by orientation dependence. Furthermore, along the same edge of the device channel (part A, illustrated in FIGS. 2a and 2b), there is also a tendency for the doping concentration of the substrate to be lower. This is due to the "segregation effect", where boron tends to thermally segregate from silicon to oxide regions.
Continuing to refer to FIGS. 2a and 2b, the above STI problems, associated with the "thinning effect" and the doping "segregation effect", will tend to cause the device threshold voltage to be lower in Part A of the device channel, relative to Part B of the device channel. Consequently, as illustrated in FIG. 3, the device will tend to behave as two devices in parallel, where the effective device along the edge of the channel region (part A) has a very different threshold voltage characteristic (defined as a drain current versus gate voltage characteristic) than the more centrally located region of the channel (part B). As shown in FIG. 3, the overall threshold voltage characteristic for the device is a superposition of the threshold voltage characteristics for parts A and B, whereby the resultant overall characteristic exhibits a characteristic hump or "kink" as an indicator of the above problems. This "Kink Effect" can, in turn, lead to very undesirable sub-threshold current problems that can usually be controlled by proper considerations in device design and in process design. However, there can be situations where a relevant process, in this regard, has gone out of control and needs to be corrected as soon as possible. For detecting such process control problems, one might consider monitoring for an unacceptable kink in a threshold voltage characteristic. However, that, in turn, poses the problem of how to detect an unacceptable kink by some means that is less subjective than merely observing a device threshold characteristic, similar to that of FIG. 3. As will now be described, the present invention solves this subjectivity problem by providing an innovative quantitative electrical characterization method that is inherently sensitive to any oxide thickness or doping differences between parts A and B of FIGS. 2a and 2b.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
Q:
Why isn't every vertex connected to every other vertex in TextRank?
I've been reading up on the automatic text summarization approach TextRank, particularly for generating summaries of a text using sentence extraction as opposed to keyword or phrase extraction.
In the published paper, an example body of text and the resulting ranked graph of all sentences/vertices and their edges is given. I'm unclear as to why every vertex doesn't have an edge connecting it to every other vertex in the graph - shouldn't all sentences be compared with each other?
This doesn't seem to be addressed in the paper. One possible explanation I've produced is that there is no edge if the similarity between two sentences is 0. Does anyone know for sure?
Link to paper: https://web.eecs.umich.edu/~mihalcea/papers/mihalcea.emnlp04.pdf
A:
After getting in touch with the authors of the paper, the reason why every vertex isn't connected to every other vertex in the graph is because only edges with a non-zero weight are added to the graph.
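That behaviour can be sketched as follows. This is an illustrative reimplementation of the paper's sentence-overlap similarity, not the authors' code, and the whitespace tokenization is deliberately naive:

```python
import math
from itertools import combinations

def sentence_similarity(s1, s2):
    # TextRank's overlap measure: |common words| / (log|S1| + log|S2|).
    # Assumes each sentence has more than one distinct word, so the
    # log-based denominator is nonzero.
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    overlap = len(w1 & w2)
    if overlap == 0:
        return 0.0
    return overlap / (math.log(len(w1)) + math.log(len(w2)))

def build_graph(sentences):
    # Only pairs with non-zero similarity get an edge, which is why the
    # resulting graph is generally not complete.
    edges = {}
    for i, j in combinations(range(len(sentences)), 2):
        w = sentence_similarity(sentences[i], sentences[j])
        if w > 0:
            edges[(i, j)] = w
    return edges
```

Running this on a toy corpus shows that a sentence sharing no words with the others simply ends up with no incident edges.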
|
{
"pile_set_name": "StackExchange"
}
|
Lithium niobate nanoparticle-coated Y-coupler optical fiber for enhanced electro-optic sensitivity.
Single crystals of lithium niobate (LiNbO3), possessing high birefringence and anisotropic properties, have long been explored to harness their excellent electro-optic properties. However, their nanoforms are comparatively less explored. In this context, the dielectric constant and polarization (P) versus electric-field (E) characteristics of LiNbO3 nanomaterials have been studied. A nonideal P-E loop and a dielectric constant of 20 at the onset of 1 kHz were seen. The electro-optic sensitivity was found to be 4 times that of bulk LiNbO3 crystals. The results are attributed to oxygen vacancies, antisite defects, and grain boundary effects in an already congruent structural matrix of LiNbO3.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Speak (Bachelor Girl song)
"Speak" is a song by Australian pop group Bachelor Girl. The song is due for release on 18 June 2018, twenty years since the release of the group's debut single "Buses and Trains".
It is the group's first single in 16 years and tackles the theme of the correlation between technology and depression in the young.
Tania Doko said: "I feel strongly that "Speak" is the first offering back for the band – personally, it's a wake-up call and hopefully a 3 minute 30 reminder out there that we're humans, not just hashtags." James Roche added: "Imagine our world if we develop a group agreement that speaking the truth and acting with integrity is normal."
Discussing the song, Doko said: "I wanted to reflect on a worrying trend: since the onset of the smart phone, there's been a global, dramatic spike in depression, especially among our young people. Not discounting that technology provides a voice for many, we face a conundrum. In this digital 'virtual' age, social media, data, algorithms and fake news often determine behaviour and self-esteem, leaving real conversation, human connection and truth itself frequently sacrificed. Whether it's sharing day to day goings on, or serious issues like abuse and bullying, no device can replace the value of real-life communication."
The music video was released on 18 June 2018.
Background
Bachelor Girl formed in 1992 and released their debut studio album Waiting for the Day in 1998. The album won an ARIA Award in 1999 and was certified platinum. The album spawned the platinum selling and APRA Award-winning single "Buses and Trains". In 2002, the group released their second studio album Dysfunctional before commencing a hiatus in 2004 when Roche moved to London. In 2011, the group briefly reformed and released a greatest hits album as well as a third studio album of songs recorded prior to the hiatus. Since 2012, Doko has been living in Stockholm while Roche has worked with a number of Australian artists, including Anthony Callea on his 2016 number-one album, Backbone. In 2016, the duo performed together in Australia for an Australia Day concert. In 2017, Roche travelled to Sweden and wrote songs with Doko, which inspired the duo to record new songs together. The duo said: "Things really sparked, it reminded us why we started working together in the first place way back then – the joy of writing and making music together. We're just so excited about the new songs, they're better than ever before."
Track listing
Digital download
"Speak" – 3:35
Release history
References
Category:Bachelor Girl songs
Category:2018 singles
Category:2018 songs
|
{
"pile_set_name": "Wikipedia (en)"
}
|
The Senate has voted 64-12 with one abstention to send the federal government's assisted-dying bill back to the House of Commons for a vote.
MPs passed a motion Thursday morning to allow the amended bill to be debated in the House right away, without the usual waiting period that would have delayed debate until Monday.
While in the Red Chamber, senators made seven amendments to the legislation.
The Senate rarely alters a government bill to this extent, as the honourable senators often defer to the Commons to craft legislation. But many senators have voiced serious concerns about Bill C-14's constitutionality, particularly the government's move to restrict physician-assisted dying to people whose natural death is "reasonably foreseeable."
If the Liberal government rejects the Senate's move to expand eligibility for assisted dying beyond those who are terminally ill, it could result in a showdown between the two chambers. The Conservative leader in the Senate, Claude Carignan, has already said there is a risk of the bill being "completely rejected" if it comes back in its original form.
Justice Minister Jody Wilson-Raybould, for her part, has said one of the Senate's key amendments goes too far.
"It will broaden the regime of medical assistance in dying in this country and we have sought to ensure that we, at every step, find the right balance that is required for such a turn in direction," she said of Liberal Senator Serge Joyal's amendment.
Senate communications have been active on social media since the senators got their hands on the controversial bill. They've tweeted out the following quote cards, detailing the seven amendments that passed the Red Chamber.
'Reasonably foreseeable'
(Senate of Canada)
Joyal's amendment is arguably the most significant. It proposes to drop the "reasonably foreseeable" condition and replace it with eligibility criteria that are closer to that drafted by the Supreme Court of Canada in its Carter decision. All Canadians with "a grievous and irremediable medical condition" causing "enduring suffering" would be able to access an assisted death — a much broader definition than initially intended.
Palliative care
(Senate of Canada)
Conservative Senator Nicole Eaton's amendment would require all patients considering physician-assisted dying to get a full briefing on available palliative care options.
Materially benefit from death
(Senate of Canada)
Another important change to the legislation is Conservative Senator Don Plett's amendment that would restrict who can help a person in their assisted death, tightening the rules around what role a person who would materially benefit from the death could do.
Death certificates
(Senate of Canada)
Conservative Senator Elizabeth Marshall's amendment would compel the health minister to draft regulations around death certificates and provide greater clarity on what information is collected by medical practitioners.
Parliamentary reports
(Senate of Canada)
Liberal Senator Art Eggleton's amendment calls for a report to be issued to Parliament, within two years, on issues that have arisen from the provision of physician-assisted dying.
Minor language amendments
(Senate of Canada)
|
{
"pile_set_name": "OpenWebText2"
}
|
#defaults=no
#cm=OSC
Wave=2
<?
float Wave[ 128 ];
Wave[0] = 0.24238586;
Wave[1] = 0.41617107;
Wave[2] = 0.46353912;
Wave[3] = 0.48039913;
Wave[4] = 0.4921379;
Wave[5] = 0.49693775;
Wave[6] = 0.49484444;
Wave[7] = 0.49568653;
Wave[8] = 0.4960327;
Wave[9] = 0.49604797;
Wave[10] = 0.49585915;
Wave[11] = 0.49537563;
Wave[12] = 0.49440002;
Wave[13] = 0.49623775;
Wave[14] = 0.48931885;
Wave[15] = 0.48100853;
Wave[16] = 0.45385742;
Wave[17] = 0.35899734;
Wave[18] = -0.36828995;
Wave[19] = -0.44202328;
Wave[20] = -0.46016312;
Wave[21] = -0.46931553;
Wave[22] = -0.46839714;
Wave[23] = -0.4694624;
Wave[24] = -0.47022247;
Wave[25] = -0.46396542;
Wave[26] = -0.4549904;
Wave[27] = -0.43542957;
Wave[28] = -0.39808655;
Wave[29] = -0.31289768;
Wave[30] = 0.2768135;
Wave[31] = 0.36634064;
Wave[32] = 0.38983154;
Wave[33] = 0.39865685;
Wave[34] = 0.38128662;
Wave[35] = 0.34444332;
Wave[36] = 0.14352417;
Wave[37] = -0.3370676;
Wave[38] = -0.4019947;
Wave[39] = -0.43534946;
Wave[40] = -0.4539795;
Wave[41] = -0.4656229;
Wave[42] = -0.47224998;
Wave[43] = -0.47598076;
Wave[44] = -0.48213196;
Wave[45] = -0.47880745;
Wave[46] = -0.48221016;
Wave[47] = -0.4791441;
Wave[48] = -0.47694397;
Wave[49] = -0.46932793;
Wave[50] = -0.46334076;
Wave[51] = -0.4541874;
Wave[52] = -0.43317795;
Wave[53] = -0.40482998;
Wave[54] = -0.3462391;
Wave[55] = 0.12877083;
Wave[56] = 0.35277557;
Wave[57] = 0.40117455;
Wave[58] = 0.42041397;
Wave[59] = 0.43679905;
Wave[60] = 0.44541168;
Wave[61] = 0.45211506;
Wave[62] = 0.45650864;
Wave[63] = 0.46191692;
Wave[64] = 0.45610046;
Wave[65] = 0.45303917;
Wave[66] = 0.4391346;
Wave[67] = 0.42031956;
Wave[68] = 0.39373016;
Wave[69] = 0.3386736;
Wave[70] = 0.2028389;
Wave[71] = -0.2721529;
Wave[72] = -0.3352661;
Wave[73] = -0.36938477;
Wave[74] = -0.38748932;
Wave[75] = -0.39518642;
Wave[76] = -0.39886856;
Wave[77] = -0.39587688;
Wave[78] = -0.38757706;
Wave[79] = -0.36933517;
Wave[80] = -0.33569336;
Wave[81] = -0.2830286;
Wave[82] = 0.023773193;
Wave[83] = 0.26341438;
Wave[84] = 0.3042717;
Wave[85] = 0.3101349;
Wave[86] = 0.31197166;
Wave[87] = 0.3047905;
Wave[88] = 0.28457642;
Wave[89] = 0.24487782;
Wave[90] = 0.020082474;
Wave[91] = -0.24949074;
Wave[92] = -0.2975769;
Wave[93] = -0.33198357;
Wave[94] = -0.3554325;
Wave[95] = -0.37625694;
Wave[96] = -0.3957672;
Wave[97] = -0.40959644;
Wave[98] = -0.4214363;
Wave[99] = -0.4309044;
Wave[100] = -0.4325409;
Wave[101] = -0.43899727;
Wave[102] = -0.44270706;
Wave[103] = -0.44280243;
Wave[104] = -0.43627167;
Wave[105] = -0.43244553;
Wave[106] = -0.4311161;
Wave[107] = -0.42015362;
Wave[108] = -0.41301727;
Wave[109] = -0.40781212;
Wave[110] = -0.39965248;
Wave[111] = -0.3901863;
Wave[112] = -0.38417053;
Wave[113] = -0.37187862;
Wave[114] = -0.3564911;
Wave[115] = -0.34715748;
Wave[116] = -0.35260773;
Wave[117] = -0.3540678;
Wave[118] = -0.36891747;
Wave[119] = -0.38233852;
Wave[120] = -0.3957367;
Wave[121] = -0.41210938;
Wave[122] = -0.42630386;
Wave[123] = -0.43379116;
Wave[124] = -0.4417343;
Wave[125] = -0.4360962;
Wave[126] = -0.4243431;
Wave[127] = -0.37822437;
Selected.WaveTable.set( 1 , Wave );
Wave[0] = 0.4960785;
Wave[1] = 0.49593544;
Wave[2] = 0.49580765;
Wave[3] = 0.49564934;
Wave[4] = 0.49549103;
Wave[5] = 0.49531555;
Wave[6] = 0.49513817;
Wave[7] = 0.49491215;
Wave[8] = 0.4947052;
Wave[9] = 0.4945011;
Wave[10] = 0.4942417;
Wave[11] = 0.49790573;
Wave[12] = 0.49759674;
Wave[13] = 0.49730396;
Wave[14] = 0.4969616;
Wave[15] = 0.49660873;
Wave[16] = 0.49623108;
Wave[17] = 0.49194717;
Wave[18] = 0.49148178;
Wave[19] = 0.49103546;
Wave[20] = 0.49050522;
Wave[21] = 0.4938717;
Wave[22] = 0.49325943;
Wave[23] = 0.49262524;
Wave[24] = 0.48806;
Wave[25] = 0.487319;
Wave[26] = 0.4865017;
Wave[27] = 0.48954296;
Wave[28] = 0.48862076;
Wave[29] = 0.4836731;
Wave[30] = 0.48257446;
Wave[31] = 0.4852705;
Wave[32] = 0.48007202;
Wave[33] = 0.4786358;
Wave[34] = 0.48098755;
Wave[35] = 0.47540855;
Wave[36] = 0.4774704;
Wave[37] = 0.47152328;
Wave[38] = 0.47319984;
Wave[39] = 0.46998215;
Wave[40] = 0.46408844;
Wave[41] = 0.46424484;
Wave[42] = 0.46167183;
Wave[43] = 0.45793724;
Wave[44] = 0.45280838;
Wave[45] = 0.44543839;
Wave[46] = 0.43989372;
Wave[47] = 0.43654633;
Wave[48] = 0.43086243;
Wave[49] = 0.42164898;
Wave[50] = 0.41345787;
Wave[51] = 0.40236378;
Wave[52] = 0.3889885;
Wave[53] = 0.36850548;
Wave[54] = 0.35069466;
Wave[55] = 0.32259083;
Wave[56] = 0.2799225;
Wave[57] = 0.13707638;
Wave[58] = 0.0;
Wave[59] = 0.0;
Wave[60] = 0.0;
Wave[61] = 0.0;
Wave[62] = 0.0;
Wave[63] = 0.0;
Wave[64] = 0.0;
Wave[65] = 0.0;
Wave[66] = 0.0;
Wave[67] = 0.0;
Wave[68] = 0.0;
Wave[69] = 0.0;
Wave[70] = -0.010377884;
Wave[71] = -0.2576542;
Wave[72] = -0.31333923;
Wave[73] = -0.34516525;
Wave[74] = -0.36541367;
Wave[75] = -0.3834076;
Wave[76] = -0.39888;
Wave[77] = -0.40995026;
Wave[78] = -0.42110825;
Wave[79] = -0.43081284;
Wave[80] = -0.43930054;
Wave[81] = -0.44439793;
Wave[82] = -0.44970703;
Wave[83] = -0.45214844;
Wave[84] = -0.45649338;
Wave[85] = -0.4603777;
Wave[86] = -0.4638443;
Wave[87] = -0.46767426;
Wave[88] = -0.4736786;
Wave[89] = -0.47305584;
Wave[90] = -0.47511864;
Wave[91] = -0.47893238;
Wave[92] = -0.47864532;
Wave[93] = -0.480402;
Wave[94] = -0.48591232;
Wave[95] = -0.4834633;
Wave[96] = -0.48872375;
Wave[97] = -0.4899807;
Wave[98] = -0.4872036;
Wave[99] = -0.48849297;
Wave[100] = -0.49310684;
Wave[101] = -0.4940138;
Wave[102] = -0.490942;
Wave[103] = -0.491704;
Wave[104] = -0.496315;
Wave[105] = -0.496974;
Wave[106] = -0.49757195;
Wave[107] = -0.49422455;
Wave[108] = -0.49474716;
Wave[109] = -0.4952488;
Wave[110] = -0.49568748;
Wave[111] = -0.4973173;
Wave[112] = -0.500412;
Wave[113] = -0.50075436;
Wave[114] = -0.50107765;
Wave[115] = -0.5014229;
Wave[116] = -0.5016937;
Wave[117] = -0.5002575;
Wave[118] = -0.4982891;
Wave[119] = -0.4985447;
Wave[120] = -0.49876404;
Wave[121] = -0.49895287;
Wave[122] = -0.4991417;
Wave[123] = -0.4993391;
Wave[124] = -0.49949265;
Wave[125] = -0.49966145;
Wave[126] = -0.49977493;
Wave[127] = -0.49993324;
Selected.WaveTable.set( 2 , Wave );
Wave[0] = 0.49601746;
Wave[1] = 0.4951172;
Wave[2] = 0.4977646;
Wave[3] = 0.49198627;
Wave[4] = 0.49308014;
Wave[5] = 0.48712254;
Wave[6] = 0.47885323;
Wave[7] = 0.46846485;
Wave[8] = 0.45150757;
Wave[9] = 0.4143896;
Wave[10] = 0.32100487;
Wave[11] = 0.0;
Wave[12] = 0.0;
Wave[13] = 0.0;
Wave[14] = 0.0;
Wave[15] = 0.0;
Wave[16] = 0.0;
Wave[17] = 0.0;
Wave[18] = 0.0;
Wave[19] = 0.0;
Wave[20] = 0.0;
Wave[21] = 0.0;
Wave[22] = 0.0;
Wave[23] = 0.0;
Wave[24] = 0.0;
Wave[25] = 0.0;
Wave[26] = 0.0;
Wave[27] = 0.0;
Wave[28] = 0.0;
Wave[29] = 0.0;
Wave[30] = 0.0;
Wave[31] = 0.0;
Wave[32] = 0.0;
Wave[33] = 0.0;
Wave[34] = 0.0;
Wave[35] = 0.0;
Wave[36] = 0.0;
Wave[37] = 0.0;
Wave[38] = 0.0;
Wave[39] = 0.0;
Wave[40] = 0.0;
Wave[41] = 0.0;
Wave[42] = 0.0;
Wave[43] = 0.0;
Wave[44] = 0.0;
Wave[45] = 0.0;
Wave[46] = 0.0;
Wave[47] = 0.0;
Wave[48] = 0.0;
Wave[49] = 0.0;
Wave[50] = 0.0;
Wave[51] = 0.0;
Wave[52] = 0.0;
Wave[53] = 0.0;
Wave[54] = 0.0;
Wave[55] = 0.0;
Wave[56] = 0.0;
Wave[57] = 0.0;
Wave[58] = 0.0;
Wave[59] = 0.0;
Wave[60] = 0.0;
Wave[61] = 0.0;
Wave[62] = 0.0;
Wave[63] = 0.0;
Wave[64] = 0.0;
Wave[65] = 0.0;
Wave[66] = 0.0;
Wave[67] = 0.0;
Wave[68] = 0.0;
Wave[69] = 0.0;
Wave[70] = 0.0;
Wave[71] = 0.0;
Wave[72] = 0.0;
Wave[73] = 0.0;
Wave[74] = 0.0;
Wave[75] = 0.0;
Wave[76] = 0.0;
Wave[77] = 0.0;
Wave[78] = 0.0;
Wave[79] = 0.0;
Wave[80] = 0.0;
Wave[81] = 0.0;
Wave[82] = 0.0;
Wave[83] = 0.0;
Wave[84] = 0.0;
Wave[85] = 0.0;
Wave[86] = 0.0;
Wave[87] = 0.0;
Wave[88] = 0.0;
Wave[89] = 0.0;
Wave[90] = 0.0;
Wave[91] = 0.0;
Wave[92] = 0.0;
Wave[93] = 0.0;
Wave[94] = 0.0;
Wave[95] = 0.0;
Wave[96] = 0.0;
Wave[97] = 0.0;
Wave[98] = 0.0;
Wave[99] = 0.0;
Wave[100] = 0.0;
Wave[101] = 0.0;
Wave[102] = 0.0;
Wave[103] = 0.0;
Wave[104] = 0.0;
Wave[105] = 0.0;
Wave[106] = 0.0;
Wave[107] = -0.31201744;
Wave[108] = -0.38224792;
Wave[109] = -0.41823387;
Wave[110] = -0.44052696;
Wave[111] = -0.45562744;
Wave[112] = -0.4664917;
Wave[113] = -0.47186375;
Wave[114] = -0.47831917;
Wave[115] = -0.4853449;
Wave[116] = -0.48934555;
Wave[117] = -0.49016762;
Wave[118] = -0.49088287;
Wave[119] = -0.49673653;
Wave[120] = -0.49443054;
Wave[121] = -0.49573994;
Wave[122] = -0.5006771;
Wave[123] = -0.50155735;
Wave[124] = -0.49838257;
Wave[125] = -0.49894142;
Wave[126] = -0.49944687;
Wave[127] = -0.49986076;
Selected.WaveTable.set( 3 , Wave );
Wave[0] = 0.4960785;
Wave[1] = 0.49568462;
Wave[2] = 0.49524498;
Wave[3] = 0.494668;
Wave[4] = 0.49694443;
Wave[5] = 0.49712086;
Wave[6] = 0.49565125;
Wave[7] = 0.49103355;
Wave[8] = 0.49343872;
Wave[9] = 0.4877348;
Wave[10] = 0.4893818;
Wave[11] = 0.48270226;
Wave[12] = 0.47922134;
Wave[13] = 0.4747963;
Wave[14] = 0.4706459;
Wave[15] = 0.4645357;
Wave[16] = 0.45213318;
Wave[17] = 0.44146442;
Wave[18] = 0.41967773;
Wave[19] = 0.39403725;
Wave[20] = 0.33927536;
Wave[21] = 0.11698723;
Wave[22] = 0.0;
Wave[23] = 0.0;
Wave[24] = 0.0;
Wave[25] = 0.0;
Wave[26] = 0.0;
Wave[27] = 0.0;
Wave[28] = 0.0;
Wave[29] = 0.0;
Wave[30] = 0.0;
Wave[31] = 0.0;
Wave[32] = 0.0;
Wave[33] = 0.0;
Wave[34] = 0.0;
Wave[35] = 0.0;
Wave[36] = 0.0;
Wave[37] = 0.0;
Wave[38] = 0.0;
Wave[39] = 0.0;
Wave[40] = 0.0;
Wave[41] = 0.0;
Wave[42] = 0.0;
Wave[43] = 0.0;
Wave[44] = 0.0;
Wave[45] = 0.0;
Wave[46] = 0.0;
Wave[47] = 0.0;
Wave[48] = 0.0;
Wave[49] = 0.0;
Wave[50] = 0.0;
Wave[51] = 0.0;
Wave[52] = 0.0;
Wave[53] = 0.0;
Wave[54] = 0.0;
Wave[55] = 0.0;
Wave[56] = 0.0;
Wave[57] = 0.0;
Wave[58] = 0.0;
Wave[59] = 0.0;
Wave[60] = 0.0;
Wave[61] = 0.0;
Wave[62] = 0.0;
Wave[63] = 0.0;
Wave[64] = 0.0;
Wave[65] = 0.0;
Wave[66] = 0.0;
Wave[67] = 0.0;
Wave[68] = 0.0;
Wave[69] = 0.0;
Wave[70] = 0.0;
Wave[71] = 0.0;
Wave[72] = 0.0;
Wave[73] = 0.0;
Wave[74] = 0.0;
Wave[75] = 0.0;
Wave[76] = 0.0;
Wave[77] = 0.0;
Wave[78] = 0.0;
Wave[79] = 0.0;
Wave[80] = 0.0;
Wave[81] = 0.0;
Wave[82] = 0.0;
Wave[83] = 0.0;
Wave[84] = 0.0;
Wave[85] = 0.0;
Wave[86] = 0.0;
Wave[87] = 0.0;
Wave[88] = 0.0;
Wave[89] = 0.0;
Wave[90] = 0.0;
Wave[91] = 0.0;
Wave[92] = 0.0;
Wave[93] = 0.0;
Wave[94] = 0.0;
Wave[95] = 0.0;
Wave[96] = 0.0;
Wave[97] = 0.0;
Wave[98] = 0.0;
Wave[99] = 0.0;
Wave[100] = 0.0;
Wave[101] = 0.0;
Wave[102] = 0.0;
Wave[103] = 0.0;
Wave[104] = 0.0;
Wave[105] = 0.0;
Wave[106] = 0.0;
Wave[107] = 0.0;
Wave[108] = 0.0;
Wave[109] = 0.0;
Wave[110] = 0.0;
Wave[111] = 0.0;
Wave[112] = 0.0;
Wave[113] = 0.0;
Wave[114] = 0.0;
Wave[115] = 0.0;
Wave[116] = 0.0;
Wave[117] = 0.0;
Wave[118] = 0.0;
Wave[119] = 0.0;
Wave[120] = 0.0;
Wave[121] = 0.0;
Wave[122] = 0.0;
Wave[123] = 0.0;
Wave[124] = 0.0;
Wave[125] = 0.0;
Wave[126] = 0.0;
Wave[127] = -0.4884081;
Selected.WaveTable.set( 4 , Wave );
Wave[0] = 0.49604797;
Wave[1] = 0.495368;
Wave[2] = 0.49444008;
Wave[3] = 0.49704933;
Wave[4] = 0.4913292;
Wave[5] = 0.4927311;
Wave[6] = 0.48913002;
Wave[7] = 0.48081875;
Wave[8] = 0.47442627;
Wave[9] = 0.46474934;
Wave[10] = 0.44244385;
Wave[11] = 0.409832;
Wave[12] = 0.32769394;
Wave[13] = 0.0;
Wave[14] = 0.0;
Wave[15] = 0.0;
Wave[16] = 0.0;
Wave[17] = 0.0;
Wave[18] = 0.0;
Wave[19] = 0.0;
Wave[20] = 0.0;
Wave[21] = 0.0;
Wave[22] = 0.0;
Wave[23] = 0.0;
Wave[24] = 0.0;
Wave[25] = 0.0;
Wave[26] = 0.0;
Wave[27] = 0.0;
Wave[28] = 0.0;
Wave[29] = 0.0;
Wave[30] = 0.0;
Wave[31] = 0.0;
Wave[32] = 0.0;
Wave[33] = 0.0;
Wave[34] = 0.0;
Wave[35] = 0.0;
Wave[36] = 0.0;
Wave[37] = 0.0;
Wave[38] = 0.0;
Wave[39] = 0.0;
Wave[40] = 0.0;
Wave[41] = 0.0;
Wave[42] = 0.0;
Wave[43] = 0.0;
Wave[44] = 0.0;
Wave[45] = 0.0;
Wave[46] = 0.0;
Wave[47] = 0.0;
Wave[48] = 0.0;
Wave[49] = 0.0;
Wave[50] = 0.0;
Wave[51] = 0.0;
Wave[52] = 0.0;
Wave[53] = 0.0;
Wave[54] = 0.0;
Wave[55] = 0.0;
Wave[56] = 0.0;
Wave[57] = 0.0;
Wave[58] = 0.0;
Wave[59] = 0.0;
Wave[60] = 0.0;
Wave[61] = 0.0;
Wave[62] = 0.0;
Wave[63] = 0.0;
Wave[64] = 0.0;
Wave[65] = 0.0;
Wave[66] = 0.0;
Wave[67] = 0.0;
Wave[68] = 0.0;
Wave[69] = 0.0;
Wave[70] = 0.0;
Wave[71] = 0.0;
Wave[72] = 0.0;
Wave[73] = 0.0;
Wave[74] = 0.0;
Wave[75] = 0.0;
Wave[76] = 0.0;
Wave[77] = 0.0;
Wave[78] = 0.0;
Wave[79] = 0.0;
Wave[80] = 0.0;
Wave[81] = 0.0;
Wave[82] = 0.0;
Wave[83] = 0.0;
Wave[84] = 0.0;
Wave[85] = 0.0;
Wave[86] = 0.0;
Wave[87] = 0.0;
Wave[88] = 0.0;
Wave[89] = 0.0;
Wave[90] = 0.0;
Wave[91] = 0.0;
Wave[92] = 0.0;
Wave[93] = 0.0;
Wave[94] = 0.0;
Wave[95] = 0.0;
Wave[96] = 0.0;
Wave[97] = 0.0;
Wave[98] = 0.0;
Wave[99] = 0.0;
Wave[100] = 0.0;
Wave[101] = 0.0;
Wave[102] = 0.0;
Wave[103] = 0.0;
Wave[104] = 0.0;
Wave[105] = 0.0;
Wave[106] = 0.0;
Wave[107] = 0.0;
Wave[108] = 0.0;
Wave[109] = 0.0;
Wave[110] = 0.0;
Wave[111] = 0.0;
Wave[112] = 0.0;
Wave[113] = -0.28205395;
Wave[114] = -0.39137077;
Wave[115] = -0.42966843;
Wave[116] = -0.45636368;
Wave[117] = -0.4690466;
Wave[118] = -0.4769516;
Wave[119] = -0.48387432;
Wave[120] = -0.48986053;
Wave[121] = -0.49137688;
Wave[122] = -0.49783516;
Wave[123] = -0.4958353;
Wave[124] = -0.50117874;
Wave[125] = -0.4983673;
Wave[126] = -0.4991703;
Wave[127] = -0.49978924;
Selected.WaveTable.set( 5 , Wave );
Wave[0] = 0.4960785;
Wave[1] = 0.49587345;
Wave[2] = 0.49567032;
Wave[3] = 0.4954481;
Wave[4] = 0.49519348;
Wave[5] = 0.49491596;
Wave[6] = 0.4946041;
Wave[7] = 0.49424648;
Wave[8] = 0.49777222;
Wave[9] = 0.49736118;
Wave[10] = 0.4968891;
Wave[11] = 0.49637794;
Wave[12] = 0.49189377;
Wave[13] = 0.4912672;
Wave[14] = 0.49056053;
Wave[15] = 0.4936819;
Wave[16] = 0.4928131;
Wave[17] = 0.48789978;
Wave[18] = 0.48682022;
Wave[19] = 0.48950005;
Wave[20] = 0.48521423;
Wave[21] = 0.4827175;
Wave[22] = 0.48489952;
Wave[23] = 0.47903538;
Wave[24] = 0.48073578;
Wave[25] = 0.47822666;
Wave[26] = 0.47148514;
Wave[27] = 0.4682598;
Wave[28] = 0.46451187;
Wave[29] = 0.46046448;
Wave[30] = 0.4576397;
Wave[31] = 0.4520092;
Wave[32] = 0.4462738;
Wave[33] = 0.43526745;
Wave[34] = 0.42538834;
Wave[35] = 0.41509247;
Wave[36] = 0.39842987;
Wave[37] = 0.37841702;
Wave[38] = 0.34524345;
Wave[39] = 0.30338764;
Wave[40] = 0.16029358;
Wave[41] = 0.0;
Wave[42] = 0.0;
Wave[43] = 0.0;
Wave[44] = 0.0;
Wave[45] = 0.0;
Wave[46] = 0.0;
Wave[47] = 0.0;
Wave[48] = 0.0;
Wave[49] = 0.0;
Wave[50] = 0.0;
Wave[51] = 0.0;
Wave[52] = 0.0;
Wave[53] = 0.0;
Wave[54] = 0.0;
Wave[55] = 0.0;
Wave[56] = 0.0;
Wave[57] = 0.0;
Wave[58] = 0.0;
Wave[59] = 0.0;
Wave[60] = 0.0;
Wave[61] = 0.0;
Wave[62] = 0.0;
Wave[63] = 0.0;
Wave[64] = 0.0;
Wave[65] = 0.0;
Wave[66] = 0.0;
Wave[67] = 0.0;
Wave[68] = 0.0;
Wave[69] = 0.0;
Wave[70] = 0.0;
Wave[71] = 0.0;
Wave[72] = 0.0;
Wave[73] = 0.0;
Wave[74] = 0.0;
Wave[75] = 0.0;
Wave[76] = 0.0;
Wave[77] = 0.0;
Wave[78] = 0.0;
Wave[79] = 0.0;
Wave[80] = 0.0;
Wave[81] = 0.0;
Wave[82] = 0.0;
Wave[83] = 0.0;
Wave[84] = 0.0;
Wave[85] = 0.0;
Wave[86] = 0.0;
Wave[87] = 0.0;
Wave[88] = -0.28250885;
Wave[89] = -0.3384695;
Wave[90] = -0.36959076;
Wave[91] = -0.39253998;
Wave[92] = -0.41004944;
Wave[93] = -0.4236784;
Wave[94] = -0.43699074;
Wave[95] = -0.44615078;
Wave[96] = -0.4547882;
Wave[97] = -0.45978832;
Wave[98] = -0.46485138;
Wave[99] = -0.4669447;
Wave[100] = -0.4718361;
Wave[101] = -0.47648525;
Wave[102] = -0.4811802;
Wave[103] = -0.47988033;
Wave[104] = -0.4860916;
Wave[105] = -0.4849472;
Wave[106] = -0.48991394;
Wave[107] = -0.48763084;
Wave[108] = -0.49293137;
Wave[109] = -0.4905548;
Wave[110] = -0.49144554;
Wave[111] = -0.49636936;
Wave[112] = -0.49728394;
Wave[113] = -0.4954157;
Wave[114] = -0.4949398;
Wave[115] = -0.49556828;
Wave[116] = -0.49910355;
Wave[117] = -0.50063324;
Wave[118] = -0.50109863;
Wave[119] = -0.5015297;
Wave[120] = -0.5;
Wave[121] = -0.498394;
Wave[122] = -0.49870872;
Wave[123] = -0.49899006;
Wave[124] = -0.49925995;
Wave[125] = -0.4994793;
Wave[126] = -0.4997139;
Wave[127] = -0.49990273;
Selected.WaveTable.set( 6 , Wave );
Wave[0] = 0.48851013;
Wave[1] = 0.48708057;
Wave[2] = 0.49298096;
Wave[3] = 0.49063206;
Wave[4] = 0.49186707;
Wave[5] = 0.49676037;
Wave[6] = 0.49752808;
Wave[7] = 0.494215;
Wave[8] = 0.4947281;
Wave[9] = 0.49509144;
Wave[10] = 0.49539948;
Wave[11] = 0.4956379;
Wave[12] = 0.4957962;
Wave[13] = 0.4959259;
Wave[14] = 0.49601746;
Wave[15] = 0.4960575;
Wave[16] = 0.4960785;
Wave[17] = 0.49604797;
Wave[18] = 0.49598694;
Wave[19] = 0.49587822;
Wave[20] = 0.49572372;
Wave[21] = 0.49554825;
Wave[22] = 0.49530602;
Wave[23] = 0.49496078;
Wave[24] = 0.4945221;
Wave[25] = 0.4979143;
Wave[26] = 0.49722862;
Wave[27] = 0.49634838;
Wave[28] = 0.49136353;
Wave[29] = 0.49368668;
Wave[30] = 0.48974228;
Wave[31] = 0.49000072;
Wave[32] = 0.48336792;
Wave[33] = 0.48114395;
Wave[34] = 0.47560692;
Wave[35] = 0.47412872;
Wave[36] = 0.46343994;
Wave[37] = 0.4567299;
Wave[38] = 0.4450283;
Wave[39] = 0.43540192;
Wave[40] = 0.41534424;
Wave[41] = 0.39927006;
Wave[42] = 0.37123108;
Wave[43] = 0.3442564;
Wave[44] = 0.3058586;
Wave[45] = 0.25712585;
Wave[46] = 0.16037369;
Wave[47] = 0.029519081;
Wave[48] = -0.0019378662;
Wave[49] = 0.07436085;
Wave[50] = 0.22535324;
Wave[51] = 0.27807426;
Wave[52] = 0.32381058;
Wave[53] = 0.35407066;
Wave[54] = 0.3818512;
Wave[55] = 0.40444946;
Wave[56] = 0.42264557;
Wave[57] = 0.4374199;
Wave[58] = 0.452713;
Wave[59] = 0.46053982;
Wave[60] = 0.46945572;
Wave[61] = 0.47630596;
Wave[62] = 0.48147774;
Wave[63] = 0.4853964;
Wave[64] = -0.48727417;
Wave[65] = -0.4838295;
Wave[66] = -0.47951317;
Wave[67] = -0.47803497;
Wave[68] = -0.4673462;
Wave[69] = -0.46063614;
Wave[70] = -0.44893456;
Wave[71] = -0.43930817;
Wave[72] = -0.4192505;
Wave[73] = -0.4031763;
Wave[74] = -0.37513733;
Wave[75] = -0.34816265;
Wave[76] = -0.30976486;
Wave[77] = -0.2610321;
Wave[78] = -0.16427994;
Wave[79] = -0.03342533;
Wave[80] = -0.0019683838;
Wave[81] = -0.0782671;
Wave[82] = -0.22925949;
Wave[83] = -0.2819805;
Wave[84] = -0.32771683;
Wave[85] = -0.3579769;
Wave[86] = -0.38575745;
Wave[87] = -0.4083557;
Wave[88] = -0.42655182;
Wave[89] = -0.44132614;
Wave[90] = -0.45710754;
Wave[91] = -0.46444607;
Wave[92] = -0.47336197;
Wave[93] = -0.4802122;
Wave[94] = -0.485384;
Wave[95] = -0.48930264;
Wave[96] = -0.49241638;
Wave[97] = -0.49098682;
Wave[98] = -0.4968872;
Wave[99] = -0.4945383;
Wave[100] = -0.49577332;
Wave[101] = -0.5006666;
Wave[102] = -0.5014343;
Wave[103] = -0.49885368;
Wave[104] = -0.49863434;
Wave[105] = -0.4989977;
Wave[106] = -0.49930573;
Wave[107] = -0.49954414;
Wave[108] = -0.49970245;
Wave[109] = -0.49983215;
Wave[110] = -0.4999237;
Wave[111] = -0.49996376;
Wave[112] = -0.49998474;
Wave[113] = -0.49995422;
Wave[114] = -0.4998932;
Wave[115] = -0.49978447;
Wave[116] = -0.49962997;
Wave[117] = -0.4994545;
Wave[118] = -0.49921227;
Wave[119] = -0.49886703;
Wave[120] = -0.49842834;
Wave[121] = -0.50182056;
Wave[122] = -0.5011349;
Wave[123] = -0.50025463;
Wave[124] = -0.49526978;
Wave[125] = -0.49759293;
Wave[126] = -0.49364853;
Wave[127] = -0.49390697;
Selected.WaveTable.set( 7 , Wave );
Wave[0] = 0.0072631836;
Wave[1] = 0.09886837;
Wave[2] = 0.19013405;
Wave[3] = 0.23463917;
Wave[4] = 0.257576;
Wave[5] = 0.2711916;
Wave[6] = 0.2867775;
Wave[7] = 0.2946043;
Wave[8] = 0.3068924;
Wave[9] = 0.3365879;
Wave[10] = 0.3991089;
Wave[11] = 0.42689228;
Wave[12] = 0.4493904;
Wave[13] = 0.45982456;
Wave[14] = 0.47120285;
Wave[15] = 0.47547913;
Wave[16] = 0.48442078;
Wave[17] = 0.48696327;
Wave[18] = 0.48718452;
Wave[19] = 0.49334335;
Wave[20] = 0.4912033;
Wave[21] = 0.4964819;
Wave[22] = 0.49756813;
Wave[23] = 0.49450588;
Wave[24] = 0.49521637;
Wave[25] = 0.49565125;
Wave[26] = 0.49570847;
Wave[27] = 0.49571228;
Wave[28] = 0.49578857;
Wave[29] = 0.49578857;
Wave[30] = 0.4958496;
Wave[31] = 0.4958706;
Wave[32] = 0.4958496;
Wave[33] = 0.49320793;
Wave[34] = 0.49050522;
Wave[35] = 0.4928732;
Wave[36] = 0.4868889;
Wave[37] = 0.486598;
Wave[38] = 0.48519325;
Wave[39] = 0.48119545;
Wave[40] = 0.47415924;
Wave[41] = 0.4671545;
Wave[42] = 0.46777153;
Wave[43] = 0.46831322;
Wave[44] = 0.47275925;
Wave[45] = 0.47326088;
Wave[46] = 0.47379494;
Wave[47] = 0.47430706;
Wave[48] = 0.4709015;
Wave[49] = 0.47137642;
Wave[50] = 0.47185135;
Wave[51] = 0.47232628;
Wave[52] = 0.47670364;
Wave[53] = 0.47712326;
Wave[54] = 0.47758102;
Wave[55] = 0.47802258;
Wave[56] = 0.4784088;
Wave[57] = 0.47773457;
Wave[58] = 0.4681568;
Wave[59] = 0.46180058;
Wave[60] = 0.45270157;
Wave[61] = 0.43364334;
Wave[62] = 0.4127121;
Wave[63] = 0.37638664;
Wave[64] = 0.30361938;
Wave[65] = -0.2223587;
Wave[66] = -0.35762596;
Wave[67] = -0.4024868;
Wave[68] = -0.43321228;
Wave[69] = -0.44918537;
Wave[70] = -0.4599018;
Wave[71] = -0.4716921;
Wave[72] = -0.47533417;
Wave[73] = -0.47959423;
Wave[74] = -0.4791546;
Wave[75] = -0.47878456;
Wave[76] = -0.48226547;
Wave[77] = -0.48184872;
Wave[78] = -0.48140907;
Wave[79] = -0.48098373;
Wave[80] = -0.4805298;
Wave[81] = -0.47615337;
Wave[82] = -0.4757023;
Wave[83] = -0.47521305;
Wave[84] = -0.47473907;
Wave[85] = -0.47813988;
Wave[86] = -0.47763252;
Wave[87] = -0.4771042;
Wave[88] = -0.47463226;
Wave[89] = -0.4767065;
Wave[90] = -0.47973442;
Wave[91] = -0.48362446;
Wave[92] = -0.48710632;
Wave[93] = -0.493783;
Wave[94] = -0.49204636;
Wave[95] = -0.49772263;
Wave[96] = -0.49539185;
Wave[97] = -0.49969482;
Wave[98] = -0.49973297;
Wave[99] = -0.49969482;
Wave[100] = -0.4996872;
Wave[101] = -0.49961853;
Wave[102] = -0.499588;
Wave[103] = -0.4995575;
Wave[104] = -0.49952698;
Wave[105] = -0.49939442;
Wave[106] = -0.49876595;
Wave[107] = -0.50018024;
Wave[108] = -0.50091934;
Wave[109] = -0.49577713;
Wave[110] = -0.49663925;
Wave[111] = -0.4948988;
Wave[112] = -0.49353027;
Wave[113] = -0.48894024;
Wave[114] = -0.48576927;
Wave[115] = -0.47626972;
Wave[116] = -0.4690323;
Wave[117] = -0.45915318;
Wave[118] = -0.44086075;
Wave[119] = -0.4189329;
Wave[120] = -0.3735733;
Wave[121] = -0.2956171;
Wave[122] = -0.2873993;
Wave[123] = -0.27113914;
Wave[124] = -0.25697327;
Wave[125] = -0.23630428;
Wave[126] = -0.16700935;
Wave[127] = -0.078635216;
Selected.WaveTable.set( 8 , Wave );
Wave[0] = 0.0033416748;
Wave[1] = 0.2840538;
Wave[2] = 0.33778572;
Wave[3] = 0.37115002;
Wave[4] = 0.39666367;
Wave[5] = 0.4108343;
Wave[6] = 0.42795372;
Wave[7] = 0.43780327;
Wave[8] = 0.44517517;
Wave[9] = 0.4535532;
Wave[10] = 0.4563961;
Wave[11] = 0.46362495;
Wave[12] = 0.4696808;
Wave[13] = 0.4734192;
Wave[14] = 0.4752121;
Wave[15] = 0.47558403;
Wave[16] = 0.4819641;
Wave[17] = 0.4802084;
Wave[18] = 0.4860096;
Wave[19] = 0.48378563;
Wave[20] = 0.48917007;
Wave[21] = 0.48658752;
Wave[22] = 0.48776054;
Wave[23] = 0.49269962;
Wave[24] = 0.4936447;
Wave[25] = 0.4905405;
Wave[26] = 0.49130058;
Wave[27] = 0.49195957;
Wave[28] = 0.49646378;
Wave[29] = 0.4970026;
Wave[30] = 0.4974842;
Wave[31] = 0.49792194;
Wave[32] = 0.4944458;
Wave[33] = 0.49470806;
Wave[34] = 0.4950199;
Wave[35] = 0.44475937;
Wave[36] = -0.47537613;
Wave[37] = -0.47069263;
Wave[38] = -0.46899414;
Wave[39] = -0.46017742;
Wave[40] = -0.4575653;
Wave[41] = -0.4496107;
Wave[42] = -0.4381218;
Wave[43] = -0.42686367;
Wave[44] = -0.41723633;
Wave[45] = -0.3997507;
Wave[46] = -0.37364388;
Wave[47] = -0.34047413;
Wave[48] = -0.2796173;
Wave[49] = 0.040934563;
Wave[50] = 0.29159737;
Wave[51] = 0.34437847;
Wave[52] = 0.37352753;
Wave[53] = 0.3950081;
Wave[54] = -0.4945011;
Wave[55] = -0.49759007;
Wave[56] = -0.4966507;
Wave[57] = -0.49168682;
Wave[58] = -0.4905262;
Wave[59] = -0.49314022;
Wave[60] = -0.48775482;
Wave[61] = -0.48973942;
Wave[62] = -0.48563957;
Wave[63] = -0.4859209;
Wave[64] = -0.47958374;
Wave[65] = -0.48069954;
Wave[66] = -0.4774723;
Wave[67] = -0.47376156;
Wave[68] = -0.46942902;
Wave[69] = -0.46267986;
Wave[70] = -0.45840073;
Wave[71] = -0.45062065;
Wave[72] = -0.44082642;
Wave[73] = -0.42911434;
Wave[74] = -0.41921043;
Wave[75] = -0.39838028;
Wave[76] = -0.37670517;
Wave[77] = -0.34321404;
Wave[78] = -0.2894535;
Wave[79] = -0.03168392;
Wave[80] = 0.28196716;
Wave[81] = 0.3363304;
Wave[82] = 0.37052536;
Wave[83] = 0.39643574;
Wave[84] = 0.41374588;
Wave[85] = 0.4254942;
Wave[86] = 0.4374466;
Wave[87] = 0.44592285;
Wave[88] = 0.45386505;
Wave[89] = 0.4560156;
Wave[90] = 0.46456528;
Wave[91] = 0.46942234;
Wave[92] = 0.47322083;
Wave[93] = 0.47258377;
Wave[94] = 0.47537422;
Wave[95] = 0.4817648;
Wave[96] = 0.48034668;
Wave[97] = 0.48567963;
Wave[98] = 0.4849472;
Wave[99] = 0.43192005;
Wave[100] = -0.4971695;
Wave[101] = -0.49581337;
Wave[102] = -0.49528313;
Wave[103] = -0.49441147;
Wave[104] = -0.49752045;
Wave[105] = -0.49660015;
Wave[106] = -0.49162292;
Wave[107] = -0.49045944;
Wave[108] = -0.493042;
Wave[109] = -0.48763084;
Wave[110] = -0.48986816;
Wave[111] = -0.48404312;
Wave[112] = -0.48580933;
Wave[113] = -0.47942448;
Wave[114] = -0.4790287;
Wave[115] = -0.4769888;
Wave[116] = -0.47349167;
Wave[117] = -0.46740913;
Wave[118] = -0.46012878;
Wave[119] = -0.45725822;
Wave[120] = -0.4488678;
Wave[121] = -0.44144154;
Wave[122] = -0.4315548;
Wave[123] = -0.41429806;
Wave[124] = -0.39710617;
Wave[125] = -0.37425137;
Wave[126] = -0.3429718;
Wave[127] = -0.28722477;
Selected.WaveTable.set( 9 , Wave );
Wave[0] = 0.4571228;
Wave[1] = 0.49116135;
Wave[2] = 0.49635315;
Wave[3] = 0.49735546;
Wave[4] = 0.49521255;
Wave[5] = 0.4948721;
Wave[6] = 0.49538612;
Wave[7] = 0.49577427;
Wave[8] = 0.48521423;
Wave[9] = 0.4775896;
Wave[10] = 0.48174667;
Wave[11] = 0.4826727;
Wave[12] = 0.4854126;
Wave[13] = 0.48402023;
Wave[14] = 0.4890766;
Wave[15] = 0.48992634;
Wave[16] = 0.48661804;
Wave[17] = 0.4869957;
Wave[18] = 0.48718262;
Wave[19] = 0.48718262;
Wave[20] = 0.48706818;
Wave[21] = 0.48669147;
Wave[22] = 0.49006653;
Wave[23] = 0.48935032;
Wave[24] = 0.48838043;
Wave[25] = 0.4832344;
Wave[26] = 0.4855976;
Wave[27] = 0.4798231;
Wave[28] = 0.4813881;
Wave[29] = 0.474679;
Wave[30] = 0.4712639;
Wave[31] = 0.468338;
Wave[32] = 0.46601868;
Wave[33] = 0.45731258;
Wave[34] = 0.45128822;
Wave[35] = 0.4404087;
Wave[36] = 0.43078613;
Wave[37] = 0.42140293;
Wave[38] = 0.4068699;
Wave[39] = 0.38594723;
Wave[40] = 0.3702469;
Wave[41] = 0.34821892;
Wave[42] = 0.32092857;
Wave[43] = 0.29863453;
Wave[44] = 0.27565765;
Wave[45] = 0.266963;
Wave[46] = 0.26568413;
Wave[47] = 0.27825928;
Wave[48] = 0.30062866;
Wave[49] = 0.32607746;
Wave[50] = 0.35377312;
Wave[51] = 0.3748188;
Wave[52] = 0.39912796;
Wave[53] = 0.41693592;
Wave[54] = 0.42865372;
Wave[55] = 0.44165134;
Wave[56] = 0.4534378;
Wave[57] = 0.46100235;
Wave[58] = 0.46793175;
Wave[59] = 0.47723675;
Wave[60] = 0.48103714;
Wave[61] = 0.48588276;
Wave[62] = 0.4889698;
Wave[63] = 0.48757458;
Wave[64] = 0.4571228;
Wave[65] = -0.375844;
Wave[66] = -0.28404236;
Wave[67] = 0.281456;
Wave[68] = 0.36732483;
Wave[69] = 0.40849972;
Wave[70] = 0.429348;
Wave[71] = 0.4451084;
Wave[72] = -0.01335144;
Wave[73] = -0.48023605;
Wave[74] = -0.47317505;
Wave[75] = -0.46792698;
Wave[76] = -0.4638939;
Wave[77] = -0.46169186;
Wave[78] = -0.4542961;
Wave[79] = -0.4483118;
Wave[80] = -0.4445343;
Wave[81] = -0.44575596;
Wave[82] = -0.44036674;
Wave[83] = -0.44027424;
Wave[84] = -0.44447708;
Wave[85] = -0.44400597;
Wave[86] = -0.4474163;
Wave[87] = -0.4522333;
Wave[88] = -0.45594788;
Wave[89] = -0.46125793;
Wave[90] = -0.46840477;
Wave[91] = -0.47316074;
Wave[92] = -0.4771347;
Wave[93] = -0.48048306;
Wave[94] = -0.47998428;
Wave[95] = -0.4828024;
Wave[96] = -0.48912048;
Wave[97] = -0.48731327;
Wave[98] = -0.49299622;
Wave[99] = -0.4906025;
Wave[100] = -0.49186325;
Wave[101] = -0.49683666;
Wave[102] = -0.49770737;
Wave[103] = -0.49452686;
Wave[104] = -0.49510956;
Wave[105] = -0.49557114;
Wave[106] = -0.49591827;
Wave[107] = -0.50009346;
Wave[108] = -0.50027466;
Wave[109] = -0.5003357;
Wave[110] = -0.5003357;
Wave[111] = -0.5002508;
Wave[112] = -0.5000305;
Wave[113] = -0.49586296;
Wave[114] = -0.4954586;
Wave[115] = -0.49487972;
Wave[116] = -0.49708176;
Wave[117] = -0.4971466;
Wave[118] = -0.49202728;
Wave[119] = -0.490489;
Wave[120] = -0.49046326;
Wave[121] = -0.48983383;
Wave[122] = -0.48256874;
Wave[123] = -0.48037148;
Wave[124] = -0.47523117;
Wave[125] = -0.4645691;
Wave[126] = -0.454731;
Wave[127] = -0.44120884;
Selected.WaveTable.set( 10 , Wave );
Wave[0] = 0.4944458;
Wave[1] = 0.49562073;
Wave[2] = 0.49524498;
Wave[3] = 0.49480343;
Wave[4] = 0.49425888;
Wave[5] = 0.4975443;
Wave[6] = 0.49685097;
Wave[7] = 0.49282265;
Wave[8] = 0.4910965;
Wave[9] = 0.49385452;
Wave[10] = 0.49244308;
Wave[11] = 0.48689842;
Wave[12] = 0.488842;
Wave[13] = 0.4825945;
Wave[14] = 0.4799919;
Wave[15] = 0.4818468;
Wave[16] = 0.47564697;
Wave[17] = 0.4769516;
Wave[18] = 0.473917;
Wave[19] = 0.46454144;
Wave[20] = 0.46137238;
Wave[21] = 0.45235157;
Wave[22] = 0.4372406;
Wave[23] = 0.42971897;
Wave[24] = 0.41844177;
Wave[25] = 0.40428448;
Wave[26] = 0.39300728;
Wave[27] = 0.373497;
Wave[28] = 0.3462143;
Wave[29] = 0.30491447;
Wave[30] = 0.23389053;
Wave[31] = -0.16799545;
Wave[32] = 0.46362305;
Wave[33] = 0.4711218;
Wave[34] = 0.46829414;
Wave[35] = 0.46872044;
Wave[36] = 0.46538925;
Wave[37] = 0.4595995;
Wave[38] = 0.45277214;
Wave[39] = 0.45059872;
Wave[40] = 0.44314575;
Wave[41] = 0.43714523;
Wave[42] = 0.42046547;
Wave[43] = 0.40006065;
Wave[44] = 0.36410522;
Wave[45] = 0.3035288;
Wave[46] = -0.073394775;
Wave[47] = -0.29298592;
Wave[48] = -0.3449707;
Wave[49] = -0.373641;
Wave[50] = -0.39501762;
Wave[51] = -0.42325306;
Wave[52] = -0.43766403;
Wave[53] = -0.4511013;
Wave[54] = -0.46247673;
Wave[55] = -0.46734333;
Wave[56] = -0.47453308;
Wave[57] = -0.48130894;
Wave[58] = -0.48435783;
Wave[59] = -0.4837141;
Wave[60] = -0.48898697;
Wave[61] = -0.48808765;
Wave[62] = -0.49374008;
Wave[63] = -0.49134064;
Wave[64] = 0.48043823;
Wave[65] = 0.48711205;
Wave[66] = 0.48941803;
Wave[67] = 0.48367977;
Wave[68] = 0.48550034;
Wave[69] = 0.47915363;
Wave[70] = 0.4762745;
Wave[71] = 0.47281837;
Wave[72] = 0.4688797;
Wave[73] = 0.4605055;
Wave[74] = 0.4528618;
Wave[75] = 0.4435501;
Wave[76] = 0.4278679;
Wave[77] = 0.4085598;
Wave[78] = 0.38490868;
Wave[79] = 0.35829067;
Wave[80] = 0.31973267;
Wave[81] = 0.256855;
Wave[82] = -0.14711761;
Wave[83] = -0.33280182;
Wave[84] = -0.37997818;
Wave[85] = -0.41208267;
Wave[86] = -0.4299698;
Wave[87] = -0.44100475;
Wave[88] = -0.4491577;
Wave[89] = -0.45380306;
Wave[90] = -0.4621086;
Wave[91] = -0.4643736;
Wave[92] = -0.46950912;
Wave[93] = -0.4739399;
Wave[94] = -0.47699738;
Wave[95] = -0.47580433;
Wave[96] = -0.33874512;
Wave[97] = 0.03026104;
Wave[98] = -0.26984596;
Wave[99] = -0.3228445;
Wave[100] = -0.35744858;
Wave[101] = -0.38022232;
Wave[102] = -0.39818192;
Wave[103] = -0.41229916;
Wave[104] = -0.42569733;
Wave[105] = -0.43394947;
Wave[106] = -0.44519043;
Wave[107] = -0.4571724;
Wave[108] = -0.46473312;
Wave[109] = -0.47423935;
Wave[110] = -0.47528458;
Wave[111] = -0.48195744;
Wave[112] = -0.48446655;
Wave[113] = -0.48273087;
Wave[114] = -0.4886074;
Wave[115] = -0.48730564;
Wave[116] = -0.49341965;
Wave[117] = -0.4913559;
Wave[118] = -0.49681854;
Wave[119] = -0.49496174;
Wave[120] = -0.49533844;
Wave[121] = -0.5001993;
Wave[122] = -0.5009861;
Wave[123] = -0.5016556;
Wave[124] = -0.4983406;
Wave[125] = -0.49884892;
Wave[126] = -0.49925995;
Wave[127] = -0.4996214;
Selected.WaveTable.set( 11 , Wave );
Wave[0] = 0.48213196;
Wave[1] = 0.4874258;
Wave[2] = 0.4879303;
Wave[3] = 0.4923153;
Wave[4] = 0.49274445;
Wave[5] = 0.4931364;
Wave[6] = 0.49352264;
Wave[7] = 0.4938116;
Wave[8] = 0.49409485;
Wave[9] = 0.49045372;
Wave[10] = 0.49063873;
Wave[11] = 0.49080086;
Wave[12] = 0.4908867;
Wave[13] = 0.49098206;
Wave[14] = 0.49109077;
Wave[15] = 0.49144363;
Wave[16] = 0.49171448;
Wave[17] = 0.49197006;
Wave[18] = 0.49215698;
Wave[19] = 0.49201584;
Wave[20] = 0.4917984;
Wave[21] = 0.49151993;
Wave[22] = 0.4911995;
Wave[23] = 0.49111938;
Wave[24] = 0.4911499;
Wave[25] = 0.49111938;
Wave[26] = 0.4910431;
Wave[27] = 0.49090862;
Wave[28] = 0.49074173;
Wave[29] = 0.49051094;
Wave[30] = 0.49264336;
Wave[31] = 0.49373436;
Wave[32] = 0.4954071;
Wave[33] = 0.49588013;
Wave[34] = 0.4957409;
Wave[35] = 0.49557304;
Wave[36] = 0.49539566;
Wave[37] = 0.49516487;
Wave[38] = 0.49489975;
Wave[39] = 0.4946251;
Wave[40] = 0.49422455;
Wave[41] = 0.49769115;
Wave[42] = 0.49698448;
Wave[43] = 0.49375057;
Wave[44] = 0.49073792;
Wave[45] = 0.49300575;
Wave[46] = 0.48703003;
Wave[47] = 0.48885632;
Wave[48] = 0.4862671;
Wave[49] = 0.479084;
Wave[50] = 0.4762478;
Wave[51] = 0.4676342;
Wave[52] = 0.45749283;
Wave[53] = 0.44391727;
Wave[54] = 0.4126854;
Wave[55] = 0.3632326;
Wave[56] = 0.029556274;
Wave[57] = -0.36066628;
Wave[58] = -0.42056465;
Wave[59] = -0.44750118;
Wave[60] = -0.46695328;
Wave[61] = -0.47797394;
Wave[62] = -0.48578072;
Wave[63] = -0.4872532;
Wave[64] = 0.48132324;
Wave[65] = 0.48500443;
Wave[66] = 0.47894096;
Wave[67] = 0.4683714;
Wave[68] = 0.4597168;
Wave[69] = 0.43862534;
Wave[70] = 0.40320206;
Wave[71] = 0.33554554;
Wave[72] = -0.26024628;
Wave[73] = -0.38546848;
Wave[74] = -0.42947388;
Wave[75] = -0.45314693;
Wave[76] = -0.46494293;
Wave[77] = -0.47739697;
Wave[78] = -0.48272705;
Wave[79] = -0.48542786;
Wave[80] = -0.48727417;
Wave[81] = -0.49359035;
Wave[82] = -0.4916191;
Wave[83] = -0.49747467;
Wave[84] = -0.4951172;
Wave[85] = -0.50023746;
Wave[86] = -0.5011959;
Wave[87] = -0.50176525;
Wave[88] = -0.4982605;
Wave[89] = -0.49859238;
Wave[90] = -0.498909;
Wave[91] = -0.4991579;
Wave[92] = -0.49936676;
Wave[93] = -0.49955273;
Wave[94] = -0.49968338;
Wave[95] = -0.49983215;
Wave[96] = -0.49539185;
Wave[97] = -0.497674;
Wave[98] = -0.49653053;
Wave[99] = -0.49438667;
Wave[100] = -0.49458313;
Wave[101] = -0.4947338;
Wave[102] = -0.4948616;
Wave[103] = -0.4949131;
Wave[104] = -0.49491882;
Wave[105] = -0.4948883;
Wave[106] = -0.49507904;
Wave[107] = -0.4953785;
Wave[108] = -0.4956398;
Wave[109] = -0.49583435;
Wave[110] = -0.49593735;
Wave[111] = -0.49573898;
Wave[112] = -0.49549866;
Wave[113] = -0.49519157;
Wave[114] = -0.49482918;
Wave[115] = -0.4947815;
Wave[116] = -0.49472046;
Wave[117] = -0.4946003;
Wave[118] = -0.494442;
Wave[119] = -0.49425316;
Wave[120] = -0.4979248;
Wave[121] = -0.4976387;
Wave[122] = -0.497324;
Wave[123] = -0.49697495;
Wave[124] = -0.49656296;
Wave[125] = -0.49611568;
Wave[126] = -0.49173164;
Wave[127] = -0.49121666;
Selected.WaveTable.set( 12 , Wave );
Wave[0] = 0.4577942;
Wave[1] = 0.4960785;
Wave[2] = 0.4960785;
Wave[3] = 0.4960785;
Wave[4] = 0.4960785;
Wave[5] = 0.4960785;
Wave[6] = 0.4960785;
Wave[7] = 0.4960785;
Wave[8] = 0.4960785;
Wave[9] = 0.4960785;
Wave[10] = 0.4960785;
Wave[11] = 0.4960785;
Wave[12] = 0.4960785;
Wave[13] = 0.4960785;
Wave[14] = 0.4960785;
Wave[15] = 0.4960785;
Wave[16] = 0.4960785;
Wave[17] = 0.4960785;
Wave[18] = 0.4960785;
Wave[19] = 0.4960785;
Wave[20] = 0.4960785;
Wave[21] = 0.4960785;
Wave[22] = 0.4960785;
Wave[23] = 0.4960785;
Wave[24] = 0.4960785;
Wave[25] = 0.4960785;
Wave[26] = 0.4960785;
Wave[27] = 0.4960785;
Wave[28] = 0.4960785;
Wave[29] = 0.4960785;
Wave[30] = 0.4960785;
Wave[31] = 0.4960785;
Wave[32] = 0.4960785;
Wave[33] = 0.4960785;
Wave[34] = 0.4960785;
Wave[35] = 0.4960785;
Wave[36] = 0.4960785;
Wave[37] = 0.4960785;
Wave[38] = 0.4960785;
Wave[39] = 0.4960785;
Wave[40] = 0.4960785;
Wave[41] = 0.4960785;
Wave[42] = 0.4960785;
Wave[43] = 0.4960785;
Wave[44] = 0.4960785;
Wave[45] = 0.4960785;
Wave[46] = 0.4960785;
Wave[47] = 0.4960785;
Wave[48] = 0.4960785;
Wave[49] = 0.4960785;
Wave[50] = 0.4960785;
Wave[51] = 0.4960785;
Wave[52] = 0.4960785;
Wave[53] = 0.4960785;
Wave[54] = 0.4960785;
Wave[55] = 0.4960785;
Wave[56] = 0.4960785;
Wave[57] = 0.4960785;
Wave[58] = 0.4960785;
Wave[59] = 0.4960785;
Wave[60] = 0.4960785;
Wave[61] = 0.4960785;
Wave[62] = 0.4960785;
Wave[63] = 0.4960785;
Wave[64] = 0.4960785;
Wave[65] = 0.4960785;
Wave[66] = 0.4960785;
Wave[67] = 0.4960785;
Wave[68] = 0.4960785;
Wave[69] = 0.4960785;
Wave[70] = 0.4960785;
Wave[71] = 0.4960785;
Wave[72] = 0.4960785;
Wave[73] = 0.4960785;
Wave[74] = 0.4960785;
Wave[75] = 0.4960785;
Wave[76] = 0.4960785;
Wave[77] = 0.4960785;
Wave[78] = 0.4960785;
Wave[79] = 0.4960785;
Wave[80] = 0.4960785;
Wave[81] = 0.4960785;
Wave[82] = 0.4960785;
Wave[83] = 0.4960785;
Wave[84] = 0.4960785;
Wave[85] = 0.4960785;
Wave[86] = 0.4960785;
Wave[87] = 0.4960785;
Wave[88] = 0.4960785;
Wave[89] = 0.4960785;
Wave[90] = 0.4960785;
Wave[91] = 0.4960785;
Wave[92] = 0.4960785;
Wave[93] = 0.4960785;
Wave[94] = 0.4960785;
Wave[95] = 0.4960785;
Wave[96] = 0.4960785;
Wave[97] = 0.4960785;
Wave[98] = 0.4960785;
Wave[99] = 0.4960785;
Wave[100] = 0.4960785;
Wave[101] = 0.4960785;
Wave[102] = 0.4960785;
Wave[103] = 0.4960785;
Wave[104] = 0.4960785;
Wave[105] = 0.4960785;
Wave[106] = 0.4960785;
Wave[107] = 0.4960785;
Wave[108] = 0.4960785;
Wave[109] = 0.4960785;
Wave[110] = 0.4960785;
Wave[111] = 0.4960785;
Wave[112] = 0.4960785;
Wave[113] = 0.4960785;
Wave[114] = 0.4960785;
Wave[115] = 0.4960785;
Wave[116] = 0.4960785;
Wave[117] = 0.4960785;
Wave[118] = 0.4960785;
Wave[119] = 0.4960785;
Wave[120] = 0.4960785;
Wave[121] = -0.49998474;
Wave[122] = -0.49998474;
Wave[123] = -0.49998474;
Wave[124] = -0.49998474;
Wave[125] = -0.49998474;
Wave[126] = -0.49998474;
Wave[127] = -0.49998474;
Selected.WaveTable.set( 16 , Wave );
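The literal assignments above fill each 128-sample table by hand. As a sketch only, a table like these could also be generated programmatically; the `Selected.WaveTable.set` call is the API assumed from the surrounding code, the slot number is hypothetical, and the sine shape is merely an example (the real tables hold sampled waveforms in roughly the ±0.5 range):

```javascript
// Sketch: generate a 128-sample wavetable in the +/-0.5 range instead of
// hard-coding every sample. The sine shape is illustrative only.
function makeSineTable(samples, amplitude) {
  const wave = new Array(samples);
  for (let i = 0; i < samples; i++) {
    wave[i] = amplitude * Math.sin((2 * Math.PI * i) / samples);
  }
  return wave;
}

const Wave = makeSineTable(128, 0.5);
// Selected.WaveTable.set(17, Wave); // assumed API; slot number hypothetical
```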
?>
|
{
"pile_set_name": "Github"
}
|
EPA Watchdog Finds Ex-Chief Scott Pruitt Spent $124,000 On 'Excessive' Airfare
Pete Marovich/Getty Images
Scott Pruitt, the former head of the Environmental Protection Agency, and his staff spent roughly $124,000 in excessive travel costs during a ten-month period, according to a new report from EPA's internal watchdog.
Pruitt, who resigned from EPA almost a year ago amidst a litany of ethics allegations, and his personal security detail flew in first or business class "without sufficient justification and, initially without appropriate approval authority," the report says.
The EPA's Office of the Inspector General recommended that the agency consider recovering the estimated $123,942 in excessive costs.
In a statement, the EPA says it believes that it complied with federal travel regulations, "making cost recovery inappropriate."
The audit reviewed 40 trips, including six canceled ones, that Pruitt planned or took in 2017. Nearly half of those trips stopped in, or were to, Tulsa, Okla., where Pruitt maintained a personal residence while he was the nation's top environmental steward. The report found "missing detailed support" for some of those trips.
The travel during that ten-month period cost taxpayers almost $1 million.
Pruitt's 24-hour security detail ate up most of that money. In a previous audit, EPA's inspector general found that the agency spent twice as much protecting Pruitt as it did his predecessor.
The EPA and Pruitt, who now works as a consultant for coal companies, said those protections were necessary because of personal threats against the former administrator.
During his 16-month stint as head of the EPA, Pruitt's security costs, travel and other actions drew criticism from his predecessors, Democrats, environmental groups and even some Republican lawmakers.
The EPA's inspector general says the office received numerous congressional requests and hotline complaints about Pruitt's travel. The new report came as a result of those.
In addition to first-class travel, the audit found that lodging expenses exceeded per diem limits during some of Pruitt's travel as well. There were also inaccurate or incomplete reports for international travel.
Pruitt is not the first member of the Trump administration to be questioned about his travel on the taxpayer dime.
Tom Price, the former head of the U.S. Department of Health and Human Services, broke federal rules on using chartered and military planes for government travel. Price resigned amidst controversy over that, as President Trump complained that he "didn't like the optics" of using taxpayer-funded charter flights.
A report by the Interior Department's inspector general found that former Interior Secretary Ryan Zinke violated the agency's travel policy by having his wife travel with him in government vehicles.
The EPA's watchdog says that actions are needed to strengthen controls over travel at the agency to "prevent fraud, waste and abuse."
|
{
"pile_set_name": "OpenWebText2"
}
|
Maintaining motivation and health among recreational runners: Panel study of factors associated with self-rated performance outcomes at competitions.
To investigate health-related factors associated with self-rated race performance outcomes among recreational long-distance runners. Panel study. Data were collected from runners one month before and after a community-level race event including distances from 8 to 42.2 km. The primary outcome measure was self-rated race performance outcome. The explanatory variables represented health complaints suffered during the build-up year, the pre-race month, and the race, and, among full marathon runners, the predicted objective performance outcome (mean pace equal to training pace or faster). Multiple logistic regression was used to determine factors associated with the self-rated performance outcome. Two hundred forty-five runners (29%) provided complete data sets. Seventy-four percent of the runners reached their desired race performance outcome. Achievement of the performance outcome was more likely when having avoided illness during the build-up and pre-race periods (OR = 3.8; 95% CI: 1.8-8.0, p < 0.001), having avoided per-race injury (OR = 3.0; 95% CI: 1.2-7.4, p = 0.02) and having avoided per-race illness (OR = 4.1; 95% CI: 1.3-15, p = 0.02). Having obtained the self-rated performance outcome was also associated with running a shorter distance (OR = 3.6; 95% CI: 1.7-8.0, p = 0.001) and being younger than 50 years of age (OR = 2.4; 95% CI: 1.1-5.3-8.3, p = 0.03). Having met the predicted objective performance outcome predisposed marathon runners to also obtain the self-rated performance outcome (OR = 4.7, 95% CI: 1.5-16, p < 0.01). Having avoided illness during build-up and pre-race was positively associated with self-rated race performance outcome among recreational runners. Adjusting the desired performance outcomes with regard to recent illness and age may help recreational runners to more often achieve their goals and thereby prevent them from leaving the sport.
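The odds ratios reported above come from a 2x2 comparison of exposure and outcome. As a quick illustration of the arithmetic only, using invented counts (these are NOT the study's data):

```python
# Odds ratio from a 2x2 table: OR = (a * d) / (b * c).
# All counts below are made up for illustration.
achieved_no_illness = 150  # hypothetical: reached goal, avoided illness
missed_no_illness = 40     # hypothetical: missed goal, avoided illness
achieved_illness = 30      # hypothetical: reached goal, had illness
missed_illness = 30        # hypothetical: missed goal, had illness

odds_ratio = (achieved_no_illness * missed_illness) / (
    missed_no_illness * achieved_illness)
print(round(odds_ratio, 2))  # 3.75 with these made-up counts
```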
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Q:
Regex to restrict a decimal value to 16 digits
I want a regex to restrict an input decimal value to at most 16 digits, or 15 digits and one character (including the decimal point).
I found the regex below. It works fine in C# code, but when I use it in a TextEdit XAML mask (DevExpress), it throws a syntax error:
Mask:
^(?:(?=.{0,16}$)\d*\.\d+|\d{0,16})[kKmMbBtT]?$
TextEdit Xaml:
<dxe:TextEdit HorizontalAlignment="Left" MaskType="RegEx"
Mask="(?:(?=.{0,16}$)[0-9]*([.]?[0-9]+)|[0-9]{0,16})[kKmMbBtT]?"
VerticalAlignment="Top" Width="150"
EditValue="{Binding DecValue, UpdateSourceTrigger=PropertyChanged, Mode=TwoWay}"
Margin="10,33,0,0"/>
Purpose I want to achieve from it:
The user can enter at most 16 digits for a decimal value (including the decimal point), or
15 digits and one character (including the decimal point)
The decimal point can be entered only once
The total length of the input string must not be more than 16 characters.
A:
According to documentation:
Extended Regular Expressions provide almost unlimited flexibility to
create input masks. The syntax used by masks in this mode is similar
to the syntax defined by the POSIX ERE specification. Back referencing
is not supported.
So, you cannot use grouping constructs such (?: subexpression) or (?= subexpression) etc. You can use some weird mask like this:
\d{0,16}|\d{14}\R.\d{1}|\d{13}\R.\d{1,2}|\d{12}\R.\d{1,3}|\d{11}\R.\d{1,4}|\d{10}\R.\d{1,5}|\d{9}\R.\d{1,6}|\d{8}\R.\d{1,7}|\d{7}\R.\d{1,8}|\d{6}\R.\d{1,9}|\d{5}\R.\d{1,10}|\d{4}\R.\d{1,11}|\d{3}\R.\d{1,12}|\d{2}\R.\d{1,13}|\d{1}\R.\d{1,14}|\R.\d{1,15}
And in your XAML:
<dxe:TextEdit HorizontalAlignment="Left" MaskType="RegEx"
Mask="\d{0,16}|\d{14}\R.\d{1}|\d{13}\R.\d{1,2}|\d{12}\R.\d{1,3}|\d{11}\R.\d{1,4}|\d{10}\R.\d{1,5}|\d{9}\R.\d{1,6}|\d{8}\R.\d{1,7}|\d{7}\R.\d{1,8}|\d{6}\R.\d{1,9}|\d{5}\R.\d{1,10}|\d{4}\R.\d{1,11}|\d{3}\R.\d{1,12}|\d{2}\R.\d{1,13}|\d{1}\R.\d{1,14}|\R.\d{1,15}"
VerticalAlignment="Top" Width="150"
EditValue="{Binding DecValue, UpdateSourceTrigger=PropertyChanged, Mode=TwoWay}"
Margin="10,33,0,0"/>
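As a side note (not part of the original answer), the original pattern's alternation logic can be sanity-checked outside of DevExpress with Python's re module, whose lookahead syntax matches .NET's:

```python
import re

# The original .NET pattern from the question; Python's re engine supports
# the same lookahead, so it can be used to verify which inputs pass.
pattern = re.compile(r'^(?:(?=.{0,16}$)\d*\.\d+|\d{0,16})[kKmMbBtT]?$')

assert pattern.match('1234567890123456')       # 16 digits: allowed
assert pattern.match('123.456k')               # decimal + suffix: allowed
assert not pattern.match('12345678901234567')  # 17 digits: rejected
assert not pattern.match('1.2.3')              # two decimal points: rejected
```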
|
{
"pile_set_name": "StackExchange"
}
|
Comparison of weights for left and right breast muscle of broilers.
Two studies were conducted to investigate the differences in the yield of broiler breast meat (pectoralis muscles) between the left and right sides of the bird. In the first study, 42-day-old male broilers had 3.3% more meat on the left side of the breast than on the right side. This difference was observed for both the Pectoralis superficialis and Pectoralis profundus muscles. In the second study, male broilers were sampled at 50, 57, and 64 days of age. There was 2.7% more meat on the left side of the breast than on the right side, and the percentage difference was not affected by age. The results indicated that both sides of the breast should be sampled in studies concerning the yield of breast meat.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Q:
R function to inspect number of arguments in another function?
Is there a built-in R function, or a way to write an R function, that can check how many inputs another function takes, and also list the names of its optional arguments?
Let's call this desired function, f, then the following command:
f(dnorm)
should output
4
and
mean, sd, log
Since there are 4 arguments associated with 'dnorm' and 3 optional arguments: mean, sd, log.
Or maybe this is not possible? Any insight is appreciated!
A:
You can try:
length(formals(dnorm))
# [1] 4
names(Filter(function(x) !is.symbol(x) || nchar(as.character(x)), formals(dnorm)))
# [1] "mean" "sd" "log"
Two functions technically, but gets the job done. For the second one, you may need to play around a bit if the default arguments are complex.
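For comparison only (not part of the original answer), Python offers the same kind of introspection through its inspect module; the dnorm below is a stand-in function mirroring R's signature:

```python
import inspect

# Stand-in for R's dnorm: one required argument plus three
# defaulted (optional) arguments.
def dnorm(x, mean=0.0, sd=1.0, log=False):
    pass

sig = inspect.signature(dnorm)
print(len(sig.parameters))  # 4

# Optional arguments are those with a default value.
optional = [name for name, p in sig.parameters.items()
            if p.default is not inspect.Parameter.empty]
print(optional)  # ['mean', 'sd', 'log']
```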
|
{
"pile_set_name": "StackExchange"
}
|
Perhaps you don't understand the subtle difference between "wish" and "wishes". You, like all us other conservatives are nuance-deficient, but please don't take anything for the condition...it's actually a GOOD thing! :p
I love that sidewalk salutation one. 3 more for my C & H vault! My favorite of the bunch is still the shark one: "that guy's a goner"!
|
{
"pile_set_name": "Pile-CC"
}
|