Factors Affecting Enrolment in the Community Based Health Insurance Scheme of Chandranigahapur Hospital of Rautahat District. Low-income countries face considerable challenges in financing health care for their populations. As a consequence, poor people lack access to needed health services, drugs, and medicines. To address the financial barriers to health services, the Government of Nepal introduced a Community Based Health Insurance (CBHI) scheme at selected health facilities. However, enrolment in the scheme is very low. This study aims to identify the factors associated with enrolment in the insurance scheme. A community-based case-control study was conducted within the coverage area of the CBHI scheme of Chandranigahapur Hospital, which was selected purposively. Altogether 416 households were interviewed using a structured questionnaire. The required number of enrolled households (cases) and an equal number of non-enrolled households (controls) were selected randomly in a 1:1 ratio. The odds of enrolment in the CBHI scheme among male-headed households were lower than among female-headed households (AOR 0.251, 95% CI 0.097 to 0.652). Similarly, households whose head belonged to upper caste/ethnic groups (AOR 3.981, 95% CI 2.027 to 7.816) or was educated (AOR 6.184, 95% CI 3.137 to 12.188) were more likely to enrol in the CBHI scheme. Having an elderly member aged over 60 years was also significantly associated with enrolment (AOR 3.996, 95% CI 2.130 to 7.497), as were time to reach the health facility and affordability of the scheme's premium. Enrolment in the CBHI scheme is determined by a combination of household-head, household, and health-service-related factors. These determinants should be addressed to enhance enrolment in the insurance scheme.
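The adjusted odds ratios above come from a multivariable model fitted to the case-control data; as a minimal illustration of how a crude (unadjusted) odds ratio and its 95% Wald confidence interval are computed from a 2×2 enrolment table, here is a sketch in Python. The cell counts are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio (ad/bc) for a 2x2 case-control table with a
    95% Wald confidence interval computed on the log scale."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: a/c = exposed cases/controls, b/d = unexposed
or_, lo, hi = odds_ratio_ci(a=120, b=88, c=60, d=148)
```

An adjusted OR additionally conditions on covariates via logistic regression; the crude calculation above only conveys the basic arithmetic behind the reported intervals.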
1. Introduction =============== Many attempts have been made to use breeding or molecular biological methods to modify the ability to produce secondary metabolites in medicinal plants. Among the challenges being addressed, manipulation of morphine biosynthesis in the opium poppy (*Papaver somniferum* L.), particularly the conversion of narcotic morphine to codeine, which is of high importance as an antitussive and a synthetic source of dihydrocodeine, or to thebaine, which is an important starting material for the semi-synthesis of the analgesic oxycodone, will contribute to the control of narcotics and to the supply of useful alkaloids for the production of pharmaceuticals. The gradual elucidation of the enzymology of alkaloid biosynthesis in *P. somniferum* has enabled genetic engineering of the alkaloid biosynthetic pathway using native genes. The first report was on the introduction of a gene encoding berberine bridge enzyme (BBE) into *P. somniferum* in antisense orientation \[[@B1-pharmaceuticals-05-00133]\]. To date, several reports on metabolic engineering of *P. somniferum* have appeared, such as RNAi-mediated gene silencing of codeinone reductase (COR) \[[@B2-pharmaceuticals-05-00133]\], overexpression of COR \[[@B3-pharmaceuticals-05-00133]\], overexpression and antisense co-suppression of (*S*)-*N*-methylcoclaurine-3\'-hydroxylase (CYP80B3) \[[@B4-pharmaceuticals-05-00133]\], overexpression and RNAi-mediated gene silencing of salutaridinol-7-*O*-acetyltransferase (SalAT) \[[@B5-pharmaceuticals-05-00133]\], and RNAi-mediated gene silencing of SalAT \[[@B6-pharmaceuticals-05-00133]\]. The mutant poppy *top1* \[[@B7-pharmaceuticals-05-00133]\], which accumulates thebaine and oripavine as major alkaloids instead of morphine, was also established, by treatment with a mutagen (ethyl methanesulphonate) and screening of progeny plants. The T-DNA insertional mutant clone of *P. 
somniferum* PsM1-2, which we developed by infection with the *Agrobacterium rhizogenes* strain MAFF03-01724, regenerated shoots from embryogenic callus that lacked the ability to produce morphine. Codeine was detected as a major alkaloid in this *in vitro* shoot culture \[[@B8-pharmaceuticals-05-00133]\]. Following improvements in the alkaloid analysis and further studies on this mutant, thebaine (*ca.* 55 μg/g dry weight) and codeine (*ca.* 20 μg/g dry weight) were found to be the major opium alkaloids in the *in vitro* regenerated shoots \[[@B9-pharmaceuticals-05-00133]\]. The information provided by this mutant, which shows an altered alkaloid composition, might make an important contribution to the further modification of alkaloid production in *P. somniferum*, and we therefore carried out genetic and phenotypic analyses on it. Recently, the long-unidentified enzymes catalyzing the two demethylation steps in the conversion of thebaine to morphine were identified as non-heme dioxygenases \[[@B10-pharmaceuticals-05-00133]\]. These two enzymes, namely thebaine 6-*O*-demethylase (T6ODM) and codeine *O*-demethylase (CODM), represent the first known 2-oxoglutarate/Fe(II)-dependent dioxygenases that catalyze *O*-demethylation. The altered alkaloid composition in the PsM1-2 mutant may be due to a genetic mutation affecting the conversion steps from thebaine to morphine. In the present study, an expression analysis of these two enzymes, together with selected genes involved in morphine biosynthesis, was carried out to reveal the molecular mechanism of the mutation. 2. Results and Discussion ========================= 2.1. 
Morphological Characteristics of the PsM1-2 Mutants -------------------------------------------------------- The days to flowering, number of petals, appearance of a split on the boundary of the petal, and height of the aerial part at the seed-filling stage of the soil-cultivated T~0~ mutant and selfed progenies are summarized in [Table 1](#pharmaceuticals-05-00133-t001){ref-type="table"}. The T~0~ primary mutant showed delayed flowering and dwarfness. In addition, a deep split was observed on the boundary of the petal ([Figure 1](#pharmaceuticals-05-00133-f001){ref-type="fig"}). The delay of flowering was consistently observed in the progenies. The number of petals, which was not altered in the T~0~, varied in the T~1~, T~2~ and T~3~ progenies. A deep split at the boundary of the petal was observed in 45% of T~1~ plants, 33% to 83% of T~2~ plants, and 8.3% and 10% of T~3~ plants. pharmaceuticals-05-00133-t001_Table 1 ###### Summary of the morphological characteristics of PsM1-2 T~0~ mutant, selfed progenies, and WT plant. 
| Progenies | Lines | Number of Plants | Days to Flowering (Mean ± SD) (days) | Number of Petals: Percentage (%) | Split on Petal Boundary (%) | Plant Height (Aerial Part) (Mean ± SD) (cm) |
|---|---|---|---|---|---|---|
| T~0~ | WT | 1 | 47 \*^1^ | 4: 100 | 0 | 60.0 |
| T~0~ | | 1 | 71 \*^1^ | 4: 100 | 100 | 38.0 |
| T~1~ | WT | 6 | 53.5 ± 4.8 | 4: 100 | 0 | 42.4 ± 5.8 |
| T~1~ | | 60 | 100.6 ± 14.6 ^\#\#\#\#^ | 3: 1.7, 4: 41.7, 5: 35.0, 6: 16.7, 7: 3.3, 8: 1.7 | 45.0 | 52.1 ± 8.5 ^\#\#^ |
| T~2~ | WT | 12 | 53.3 ± 4.0 | 3: 25.0, 4: 75.0 | 8.3 | 36.0 ± 7.6 |
| T~2~ | \#1-27(HT) | 15 | 90.8 ± 12.6 ^\#\#\#\#^ | 5: 60.0, 6: 33.3, 10: 6.7 | 60.0 | 44.7 ± 5.4 ^\#\#^ |
| T~2~ | \#2-17(HT) | 6 | 79.8 ± 2.5 ^\#\#\#\#^ | 5: 50.0, 6: 33.3, 8: 16.7 | 83.3 | 45.1 ± 3.4 ^\#^ |
| T~2~ | \#2-1(LT) | 12 | 83.3 ± 6.8 ^\#\#\#\#^ | 5: 66.7, 6: 16.7, 7: 8.3, 8: 8.3 | 33.3 | 35.6 ± 7.8 |
| T~2~ | \#2-6(LT) | 10 | 76.4 ± 3.6 ^\#\#\#\#^ | 5: 10.0, 6: 40.0, 7: 30.0, 8: 10.0, 12: 10.0 | 80.0 | 39.4 ± 3.1 |
| T~3~ | WT | 6 | 109.4 ± 0.9 \*^2^ | 4: 100 | 0 | 80.3 ± 5.8 |
| T~3~ | \#1-27(HT)L\#2 | 10 | 129.2 ± 11.9 \*^3,\ \#\#\#^ | 4: 40.0, 5: 50.0, 6: 10.0 | 10.0 | 45.8 ± 7.9 ^\#\#\#\#^ |
| T~3~ | \#2-17(HT)\#2-1 | 12 | 131.1 ± 7.3 \*^4,\ \#\#\#\#^ | 3: 8.3, 4: 75.0, 5: 16.7 | 8.3 | 47.0 ± 13.4 ^\#\#\#\#^ |

\*^1^: Days after transplanting; \*^2^: n = 5; \*^3^: n = 9; \*^4^: n = 11; ^\#^ *p* \< 0.05; ^\#\#^ *p* \< 0.01; ^\#\#\#^ *p* \< 0.005; and ^\#\#\#\#^ *p* \< 0.001 *vs.* WT.

2.2. Alkaloid Composition in the PsM1-2 Mutants ----------------------------------------------- The soil-cultivated PsM1-2 T~0~ primary mutant accumulated 16.3% (% dry weight) thebaine as the major opium alkaloid in the latex; thebaine was not detected in the WT ([Figure 2](#pharmaceuticals-05-00133-f002){ref-type="fig"}; [Table 2](#pharmaceuticals-05-00133-t002){ref-type="table"}). The morphine content in the mutant was 1.3%, *ca.* one tenth of that in the WT, and the codeine content was 4.2% in the mutant *vs.* 1.3% in the WT. 
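Alkaloid contents throughout are expressed as weight percent of the dried latex (opium). As a minimal sketch of that conversion, assuming a hypothetical HPLC-quantified concentration and the extraction scale given in the Experimental Section (*ca.* 5 mg dried latex extracted in 5 mL of methanol):

```python
def percent_dry_weight(conc_ug_per_ml, extract_ml, latex_mg):
    """Alkaloid content as weight percent of the dried latex (opium):
    total alkaloid mass in the extract divided by the latex mass."""
    alkaloid_ug = conc_ug_per_ml * extract_ml
    latex_ug = latex_mg * 1000.0
    return 100.0 * alkaloid_ug / latex_ug

# Hypothetical: 163 ug/mL thebaine in a 5 mL methanol extract of 5 mg latex
pct = percent_dry_weight(conc_ug_per_ml=163.0, extract_ml=5.0, latex_mg=5.0)
```

With these illustrative inputs the function returns 16.3% dry weight, the order of magnitude reported for thebaine in the T~0~ mutant latex.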
![Appearances of the PsM1-2 T~0~ primary mutant and WT *P. somniferum* soil-cultivated in the phytotron. (**A**) WT, (**B**) PsM1-2 T~0~. Upper left: flower; right: grown plant; bottom left: petals with deep splits (PsM1-2 T~0~ only).](pharmaceuticals-05-00133-g001){#pharmaceuticals-05-00133-f001} ![Alkaloid content in the latex from the soil-cultivated WT and PsM1-2 T~0~ mutant. nd: Not detected.](pharmaceuticals-05-00133-g002){#pharmaceuticals-05-00133-f002} The alkaloid compositions in the dried opium of selected progenies are summarized in [Table 2](#pharmaceuticals-05-00133-t002){ref-type="table"}, and the morphine and thebaine contents of the T~1~, T~2~ and T~3~ plants are plotted on a scatter diagram ([Figure 3](#pharmaceuticals-05-00133-f003){ref-type="fig"}). The HPLC chromatograms of the representative lines of the T~1~ plants, WT plant, and authentic standards are shown in the [Supplementary Figure 1](#pharmaceuticals-05-00133-s001){ref-type="supplementary-material"}. pharmaceuticals-05-00133-t002_Table 2 ###### Opium alkaloid contents in PsM1-2 T~0~ mutant and selfed progenies. 
| Progenies | Lines | Number of plants | Morphine | Codeine | Thebaine | Papaverine | Noscapine |
|---|---|---|---|---|---|---|---|
| T~0~ | WT | 1 | 10.9 | 1.3 | nd \* | 2.0 | 9.2 |
| T~0~ | | 1 | 1.3 | 4.2 | 16.3 | 2.3 | 10.2 |
| T~1~ | WT | 6 | 11.2 ± 4.0 | 2.6 ± 2.1 | 0.3 ± 0.2 | 2.4 ± 0.7 | 11.6 ± 4.3 |
| T~1~ | | 60 | 6.3 ± 4.6 ^\#^ | 3.8 ± 1.5 | 11.1 ± 6.1 ^\#\#\#^ | 1.6 ± 0.5 | 7.9 ± 2.1 ^\#\#\#^ |
| Selected lines (T~1~) | \#1-27(HT) | \- | 4.3 | 5.1 | 23.1 | 2.3 | 7.2 |
| Selected lines (T~1~) | \#2-17(HT) | \- | 5.5 | 3.7 | 24.4 | 1.9 | 6.8 |
| Selected lines (T~1~) | \#2-1(LT) | \- | 23.0 | 1.3 | 0.3 | 1.8 | 8.4 |
| Selected lines (T~1~) | \#2-6(LT) | \- | 13.6 | 2.1 | 1.0 | 1.5 | 6.9 |
| T~2~ | WT | 11 | 18.4 ± 3.3 | 1.5 ± 0.9 | 0.4 ± 0.2 | 2.7 ± 1.0 | 18.4 ± 4.3 |
| T~2~ | \#1-27(HT) | 15 | 7.0 ± 4.1 ^\#\#\#^ | 5.8 ± 1.6 ^\#\#\#^ | 19.1 ± 7.3 ^\#\#\#^ | 2.2 ± 0.4 | 9.6 ± 2.4 ^\#\#\#^ |
| T~2~ | \#2-17(HT) | 6 | 9.8 ± 8.1 ^\#^ | 6.0 ± 0.5 ^\#\#\#^ | 14.5 ± 6.1 ^\#\#\#^ | 3.0 ± 0.4 | 9.2 ± 1.7 ^\#\#\#^ |
| T~2~ | \#2-1(LT) | 12 | 7.6 ± 3.3 ^\#\#\#^ | 5.8 ± 1.1 ^\#\#\#^ | 15.9 ± 7.2 ^\#\#\#^ | 2.8 ± 0.8 | 8.0 ± 2.0 ^\#\#\#^ |
| T~2~ | \#2-6(LT) | 10 | 7.6 ± 6.8 ^\#\#\#^ | 4.6 ± 1.3 ^\#\#\#^ | 13.4 ± 6.7 ^\#\#\#^ | 2.7 ± 0.5 | 11.7 ± 2.7 ^\#\#\#^ |
| Selected lines (T~2~) | \#1-27(HT)L\#2 | \- | 4.9 | 6.1 | 29.6 | 2.4 | 9.4 |
| Selected lines (T~2~) | \#2-17(HT)\#2-1 | \- | 3.7 | 6.5 | 20.0 | 3.1 | 10.4 |
| Selected lines (T~2~) | \#2-1(LT)\#2-4 | \- | 5.3 | 4.3 | 29.4 | 3.1 | 9.1 |
| Selected lines (T~2~) | \#2-6(LT)\#2-2 | \- | 3.1 | 4.9 | 21.1 | 2.7 | 13.2 |
| T~3~ | WT | 6 | 11.1 ± 4.1 | 1.5 ± 0.5 | 3.3 ± 2.1 | 1.9 ± 0.5 | 4.2 ± 1.6 |
| T~3~ | \#1-27(HT)L\#2 | 10 | 2.5 ± 0.6 ^\#\#\#^ | 4.3 ± 0.4 ^\#\#\#^ | 7.7 ± 1.9 ^\#\#^ | 1.2 ± 0.1 ^\#\#\#^ | 5.1 ± 0.7 |
| T~3~ | \#2-17(HT)\#2-1 | 12 | 1.8 ± 0.5 ^\#\#\#^ | 2.9 ± 0.5 ^\#\#\#^ | 8.1 ± 2.3 ^\#\#\#^ | 1.1 ± 0.2 ^\#\#\#^ | 4.1 ± 0.8 |

Mean value of the alkaloid content (% dry weight) with standard deviation (mean ± SD) for each line and the alkaloid content of selected lines are summarized. nd \*: Not detected; ^\#^ *p* \< 0.05; ^\#\#^ *p* \< 0.005; and ^\#\#\#^ *p* \< 0.001 *vs.* WT.

The thebaine content in T~1~ plants varied widely, from 0.3% to 26.5%. 
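The wide variation just noted (0.3% to 26.5% thebaine in T~1~) lends itself to a simple threshold screen for high- and low-thebaine lines. A minimal sketch, where the cutoff values and the extra line "#3-05" are illustrative assumptions (the four named lines and their contents are those in Table 2):

```python
def split_by_thebaine(lines, high_cut=20.0, low_cut=1.5):
    """Partition lines into high- (HT) and low-thebaine (LT) groups
    by their thebaine content in % dry weight."""
    high = {name: v for name, v in lines.items() if v >= high_cut}
    low = {name: v for name, v in lines.items() if v <= low_cut}
    return high, low

# Thebaine contents (% dry weight); "#3-05" and both cutoffs are hypothetical
t1 = {"#1-27": 23.1, "#2-17": 24.4, "#2-1": 0.3, "#2-6": 1.0, "#3-05": 11.1}
high, low = split_by_thebaine(t1)
```

Lines falling between the two cutoffs are left unselected, matching the idea of picking only the extremes of the distribution for progeny analysis.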
From these plants, two high thebaine lines, \#1-27(HT) (thebaine content: 23.1%) and \#2-17(HT) (24.4%), and two low thebaine lines, \#2-1(LT) (0.3%) and \#2-6(LT) (1.0%), were selected and subjected to analysis of the T~2~ progeny. Interestingly, most of the progeny plants from both the HT and LT lines showed the high thebaine phenotype. From the T~2~ lines, two lines, \#1-27(HT)L\#2 (thebaine content: 29.6%) and \#2-17(HT)\#2-1 (20.0%), were selected for the analysis of the T~3~ progeny. The thebaine content in T~3~ plants ranged from 4.2% to 10.0% in \#1-27(HT)L\#2 and from 3.7% to 10.9% in \#2-17(HT)\#2-1. The average thebaine content in T~3~ plants (two lines combined) was 2.4-fold that in the WT; in contrast, the average morphine content decreased to *ca.* one fifth of that in the WT ([Figure 4](#pharmaceuticals-05-00133-f004){ref-type="fig"}). ![Scatter diagram of the morphine (x-axis) and thebaine (y-axis) contents in (**A**) PsM1-2 T~1~ plants (n = 60) and WT plants (n = 6), (**B**) four lines of PsM1-2 T~2~ plants and WT plants, and (**C**) two lines of PsM1-2 T~3~ plants and WT plants.](pharmaceuticals-05-00133-g003){#pharmaceuticals-05-00133-f003} ![Morphine, codeine, and thebaine contents in T~3~ progeny. Mean value of six (WT), 10 \[\#1-27(HT)L\#2\], and 12 \[\#2-17(HT)\#2-1\] plants. Bars indicate standard deviation. \* *p* \< 0.005 and \*\* *p* \< 0.001 *vs.* WT.](pharmaceuticals-05-00133-g004){#pharmaceuticals-05-00133-f004} 2.3. T-DNA Insertion Loci Analysis by IPCR and AL-PCR ----------------------------------------------------- The genomic DNA regions adjacent to the inserted T-DNA borders were analyzed by the IPCR and AL-PCR methods. 
The obtained DNA fragments are summarized in [Supplementary Table 1 and Figure 5](#pharmaceuticals-05-00133-s001){ref-type="supplementary-material"}, along with the PCR methods, the combination of template circular or adaptor-ligated genomic DNA libraries, and the primer sets. Sequence analysis of the amplified products revealed that the fragments fell into three types: (A) T-DNAs connected with *P. somniferum* genomic DNA, (B) T-DNAs connected in tandem, and (C) T-DNAs connected with T-DNA internal fragments, as shown in [Figure 5](#pharmaceuticals-05-00133-f005){ref-type="fig"}. Type (A) includes four types of genomic DNA fragments adjacent to the T-DNA LB and six types adjacent to the T-DNA RB. Of these fragments, the pairs LB1g/RB2g and LB3g/RB6g were each confirmed, by PCR over the LB and RB genomic regions, to be the two ends of a single genomic locus. Although no paired borders were found for the other fragments, at least eight independent T-DNA integration sites, namely RB1, RB2 (LB1-RB2), RB3, RB4, RB5, LB2, LB3 (LB3--RB6), and LB4, were estimated to exist in the T~0~ plant. A DNA fragment homologous (59% identity at the amino acid level) to the WRKY4 transcription factor (DDBJ/EMBL/GenBank accession no. AF425835) of *A. thaliana* was found in the LB1g region, 695--952 bp 5\' upstream of the junction. The DNA sequence of LB1g, which included this *WRKY*-like gene, was deposited in DDBJ/EMBL/GenBank under accession no. AB574419. No other gene with significant homology was found by BLAST searches in the genomic DNA regions adjacent to the inserted T-DNA. ![Schematic diagram of amplified fragments obtained in the analyses of T-DNA insertion loci in T~0~. 
(**A**) T-DNAs connected with *P. somniferum* genome (10 types, eight sites), (**B**) T-DNAs connected in tandem (six types), (**C**) T-DNAs connected with orf13 fragments (four types).](pharmaceuticals-05-00133-g005){#pharmaceuticals-05-00133-f005} Type (B) includes six types of DNA fragments: RB and LB were connected in a tail-to-head manner at different junctions, or short DNA fragments were sandwiched between them. The fragment "Tandem 3" was found only by direct amplification of T-DNA borders from the genome. Type (C) consists of four DNA fragments: two were made up of short partial fragments of the T-DNA riorf13 attached to an LB, and the other two of an RB attached to a fragment of the T-DNA riorf13. In summary, the T-DNA border fragments found comprised eight independent T-DNA integration sites, six borders of T-DNAs connected in tandem, and four borders connected with T-DNA internal fragments. As for copy numbers, types (A) and (B) corresponded to eight and six copies of T-DNA, respectively. In the case of type (C), the LBs and RBs connected with riorf13 were possibly borders of independent T-DNAs or of the same T-DNA, so the copy number can be estimated as two at minimum to four at maximum. Altogether, the T-DNA copy number in the PsM1-2 T~0~ primary mutant was estimated as 16 to 18. 2.4. Inheritance Analysis of the T-DNA Insertion Loci ----------------------------------------------------------- PCR analysis over the T-DNA borders and the adjacent genomic DNA identified in the IPCR and AL-PCR analyses revealed that several T-DNA insertion loci were eliminated by selfing ([Figure 6](#pharmaceuticals-05-00133-f006){ref-type="fig"}). ![Inheritance of the eight independent T-DNA insertion loci (RB1, RB2, RB3, RB4, RB5, LB2, LB3, and LB4) in the representative four lines of selfed progenies. 
(+: Insertion locus detected; -: insertion locus not detected.)](pharmaceuticals-05-00133-g006){#pharmaceuticals-05-00133-f006} In the high thebaine line \#1-27(HT), of the eight loci suggested to be independent T-DNA integration sites, RB3 and RB4 were eliminated in the T~1~ progeny; in the high thebaine line \#2-17(HT), RB2 and RB4 were eliminated in the T~1~ progeny, and the additional elimination of LB4 was observed in the T~2~ progeny. On the other hand, in the LT lines that showed low thebaine content in the T~1~ progeny, sites RB2, RB4, and LB4 were eliminated in \#2-1(LT), and sites RB3, RB4, RB5, and LB3 were eliminated in \#2-6(LT). Notably, the thebaine content in these LT lines increased again in the T~2~ progeny, to 29.4% in \#2-1(LT)\#2-4 and to 21.1% in \#2-6(LT)\#2-2 ([Table 2](#pharmaceuticals-05-00133-t002){ref-type="table"}), without a change in the T-DNA insertion pattern ([Figure 6](#pharmaceuticals-05-00133-f006){ref-type="fig"}). These results imply that none of the eight T-DNA integration loci were indispensable for the high thebaine phenotype. 2.5. T-DNA Copy Number Analysis by Real-Time PCR ------------------------------------------------ Standard curves for the quantification of the T-DNA copy number in T~0~ and selected progenies were prepared for each target region: LB1g, LB1j, and orf2. The formulae and correlation coefficients were as follows: LB1g: y = −1.39ln(x) + 23.82 (r^2^ = 0.991); LB1j: y = −1.44ln(x) + 23.38 (r^2^ = 0.994); and orf2: y = −1.43ln(x) + 23.75 (r^2^ = 0.997). The relative abundances of each region in the samples were calculated by these formulae from the value of Delta Rn. The relative abundances, expressed as whole numbers with the abundance of LB1g set to 2, were LB1g:LB1j:orf2 = 2:1:15 in T~0~. For T~1~\[\#1-27(HT)\] and its progenies, the abundances were as follows (in the order LB1g:LB1j:orf2): T~1~\[\#1-27(HT)\], 2:2:6; T~2~\[\#1-27(HT)L\#2\], 2:2:7; and T~3~\[\#1-27(HT)L\#2\#1\], 2:2:7. 
For T~1~\[\#2-17(HT)\] and its progenies, the values were as follows: T~1~\[\#2-17(HT)\], 2:nd:10; T~2~\[\#2-17(HT)\#2-1\], 2:nt:10; and T~3~\[\#2-17(HT)\#2-1\#1\], 2:nt:7 (nd: not detected; nt: not tested). These results are summarized in [Figure 7](#pharmaceuticals-05-00133-f007){ref-type="fig"}. ![Shift of the relative abundance of target regions LB1g, LB1j, and orf2 of T-DNA insertion locus LB1-RB2 analyzed for two selfed lines, \#1-27(HT) and \#2-17(HT), by quantitative real-time PCR.](pharmaceuticals-05-00133-g007){#pharmaceuticals-05-00133-f007} Based on the abundances of LB1j and orf2 in the \#1-27(HT) series, the LB1-RB2 T-DNA insertion locus was estimated to have become homozygous in the T~1~ progeny, as indicated by the doubled abundance of LB1j in T~1~. The T-DNA copy number, estimated from the abundance of the orf2 region, decreased drastically from 15 to six in T~1~, then increased to seven in T~2~ and remained at seven in T~3~. These data imply that more than half of the total T-DNA copies were eliminated in the first selfing. In the \#2-17 series, the LB1j region was not detected in T~1~, consistent with the elimination of the LB1-RB2 T-DNA insertion locus in T~1~ revealed by the insertion loci analysis ([Figure 6](#pharmaceuticals-05-00133-f006){ref-type="fig"}). The abundance of orf2 decreased from 15 to 10 in T~1~, and then from 10 to seven in T~3~, implying that more than half of the T-DNA copies in the \#2-17(HT) series were also eliminated by repeated selfing. 2.6. Expression Analyses of Morphine Biosynthetic Genes by RT-PCR ----------------------------------------------------------------- We first attempted to use real-time PCR for the expression analysis of morphine biosynthetic genes, including *T6ODM* and *CODM*, with the primer sequences reported by Hagel and Facchini \[[@B10-pharmaceuticals-05-00133]\]. 
However, before running the real-time PCR, we found that PCR with these primers using our cDNA as a template gave multiple products. Although we designed several alternative primers, none yielded a single-band PCR product, presumably owing to the relatively high sequence homology among the coding regions of *T6ODM*, *CODM*, and *DIOX2*. We therefore adopted a semi-quantitative RT-PCR method for the expression analysis. To distinguish the RT-PCR products of *T6ODM* and *CODM*, primers were designed to give different product sizes, *i.e.*, 549 bp for *T6ODM* and 411 bp for *CODM*. Expression analysis of selected morphine biosynthetic genes downstream of (*S*)-*N*-methylcoclaurine revealed that the expression of *CODM* was completely abolished in PsM1-2 ([Figure 8](#pharmaceuticals-05-00133-f008){ref-type="fig"}). On the other hand, the expression of *T6ODM* appeared to be slightly up-regulated in PsM1-2 compared with the WT plant. Specific amplification of these two genes was confirmed by comparing the sizes of the bands with their calculated amplicon sizes. In PsM1-2, the expression levels of *CYP80B3* and *SalAT* appeared slightly higher than in the WT, whereas *4\'OMT* appeared down-regulated. No significant difference in expression level between PsM1-2 and the WT was observed for *Cor1-1* or *Cor2-1*. ![The morphine biosynthetic pathway downstream of (*S*)-*N*-methylcoclaurine with the results of expression analysis of selected morphine biosynthetic genes, *CYP80B3*, *4\'OMT*, *SalAT*, *T6ODM*, *COR* (alleles *Cor1-1* and *Cor2-1*), and *CODM* by RT-PCR. Actin was used as an experimental control. Presumably, the pathway via oripavine (dotted pathway) does not exist in the *P. somniferum* Japanese cultivar "Ikkanshu" used in this study \[[@B11-pharmaceuticals-05-00133],[@B12-pharmaceuticals-05-00133]\].](pharmaceuticals-05-00133-g008){#pharmaceuticals-05-00133-f008} 2.7. 
Discussion --------------- Morphological abnormalities, such as varied numbers of petals and splits on the boundary of petals, were frequently observed in the selfed progenies in the present study. However, no clear correlation was found between these morphological abnormalities and the altered alkaloid compositions; these findings are therefore thought to be independent of the mutation in secondary metabolism. At the T~2~ generation, the difference between the high thebaine and low thebaine lines that was obvious at the T~1~ generation had disappeared. If the high thebaine phenotype were caused by a single T-DNA insertion at one locus, the low thebaine phenotype should have remained dominant in the progeny plants. However, as observed in [Figure 3](#pharmaceuticals-05-00133-f003){ref-type="fig"}, most of the progeny plants of the low thebaine T~1~ lines regained the high thebaine phenotype, which indicates that multiple loci are responsible for the high thebaine phenotype. It is also possible that the multiple T-DNA insertion events caused unstable methylation or suppression of alkaloid biosynthesis-related genes. The content of thebaine, the major alkaloid in the latex of mature mutant plants, varied widely in the T~1~ progeny. After repeated selfing, however, although the maximum thebaine content in the T~3~ progeny (10.9%) was not particularly high, the range of thebaine content was much narrower than in the T~1~ and T~2~ progenies. When the CV (coefficient of variation: standard deviation/average value) of the thebaine content was compared among the T~1~, T~2~, and T~3~ progenies, it was 0.54 in T~1~, 0.40 in T~2~ (two HT lines combined), and 0.26 in T~3~ (two lines combined). The CV of the morphine content was 0.74 in T~1~, 0.70 in T~2~ (two HT lines combined), and 0.28 in T~3~ (two lines combined). 
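The coefficient of variation used above (sample standard deviation divided by the mean) can be reproduced in a few lines; the per-plant thebaine values below are hypothetical, chosen only to illustrate the calculation:

```python
import math

def coefficient_of_variation(values):
    """CV = sample standard deviation (n - 1 denominator) / mean."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return sd / mean

# Hypothetical per-plant thebaine contents (% dry weight) for a T3 line
cv = coefficient_of_variation([7.7, 8.1, 6.0, 9.5, 7.2])
```

A shrinking CV across generations, as reported for both thebaine and morphine, indicates that the trait's spread narrows relative to its mean, i.e., the phenotype is stabilizing.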
These lines of evidence indicate that the high thebaine (2.4-fold and 2.5-fold of WT in T~3~ \#1-27(HT)L\#2 and \#2-17(HT)\#2-1, respectively) and low morphine (0.2-fold of WT in both T~3~ lines) phenotypes were stabilized by repeated selfing. Analyses of the T-DNA integration sites and copy number in the primary T~0~ mutant revealed that at least eight integration sites exist and that as many as 18 copies of T-DNA were estimated to be integrated into the genomic DNA in a highly complicated manner. Considering this complexity, the IPCR, AL-PCR, and real-time PCR methods employed in this study can be considered more suitable for T-DNA insertional analysis than Southern blotting, whose signals may be uninterpretable in this context. The number of T-DNA copies in PsM1-2 was far larger than is typical for transgenes integrated by genetic transformation, and the presence of high numbers of transgene insertions can lead to poor transgene expression through silencing. In this study, we tried to simplify the T-DNA integration structure and stabilize the high thebaine phenotype, and then to gain insight into the genetic factors underlying the altered alkaloid composition, by obtaining selfed progenies. The T-DNA integration sites in PsM1-2 became homozygous pairs or dropped out during selfing, and finally fell to half the T~0~ number in the selected T~3~ progenies. Although it is possible that other T-DNA copies went undetected, no correlation was found between any of the T-DNA integration sites and the altered alkaloid composition. Taking these data together, a reduction in the T-DNA copy number seems to have resulted in the stabilization of the high thebaine phenotype. Although it is hard to confirm, there is also a possibility that genome reorganization independent of T-DNA insertion occurred during shoot regeneration or long-term maintenance of the *in vitro* culture. 
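The copy-number bookkeeping above (eight type (A) sites, six type (B) tandem borders, and two to four type (C) copies) and the inversion of the real-time PCR standard curve from Section 2.5 can be sketched together. The curve coefficients are the reported orf2 values; the function name and structure are ours, for illustration only:

```python
import math

# Border-class tallies from the IPCR/AL-PCR analysis (Section 2.3)
TYPE_A = 8                      # independent genomic integration sites
TYPE_B = 6                      # borders of tandemly connected T-DNAs
TYPE_C_MIN, TYPE_C_MAX = 2, 4   # borders joined to T-DNA internal fragments

copies_min = TYPE_A + TYPE_B + TYPE_C_MIN  # lower bound: 16
copies_max = TYPE_A + TYPE_B + TYPE_C_MAX  # upper bound: 18

def relative_abundance(y, a=-1.43, b=23.75):
    """Invert the orf2 standard curve y = a*ln(x) + b to recover the
    relative abundance x from a measured, Delta Rn-derived value y."""
    return math.exp((y - b) / a)
```

Because the slope is negative, a smaller measured y maps to a larger relative abundance, which is why the orf2 readout of 15 in T~0~ falling to 6 or 7 in later generations corresponds to a loss of more than half of the T-DNA copies.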
Having stabilized the high thebaine phenotype by selfing up to the T~3~ generation, we now have a backcross experiment utilizing these selfed progeny plants in progress. In this study, the only sequence at the T-DNA integration loci homologous to a known gene was the *AtWRKY4* homologue found in the 5\' upstream region of LB1g. As some WRKY transcription factors may function as transcriptional regulators of benzylisoquinoline alkaloid biosynthesis in *Coptis japonica* Makino \[[@B13-pharmaceuticals-05-00133]\], a contribution of this locus to the altered alkaloid composition in the mutant was suspected. However, the inheritance analysis indicated that the T-DNA insertion in the LB1-RB2 region was not essential for the high thebaine phenotype. Expression analyses of selected morphine biosynthetic genes, including the two novel demethylases *T6ODM* and *CODM*, comparing the *in vitro* shoot culture of the PsM1-2 mutant with seedlings of the WT plant, revealed that the expression of *CODM* was fully suppressed in the mutant. Although the correlation between the transcript levels of biosynthetic genes in young organs, such as seedlings or *in vitro* shoot cultures, and the alkaloid composition in the latex of mature plants needs to be clarified, the observed differences between the wild type and the mutant can be correlated with the difference in their alkaloid compositions (morphine was detected in the WT but almost none in the mutant \[[@B8-pharmaceuticals-05-00133]\]). Kinetic studies on recombinant T6ODM and CODM from *P. somniferum* \[[@B10-pharmaceuticals-05-00133]\] have revealed that oripavine is the most preferred substrate of T6ODM, followed by thebaine, while codeine is not accepted as a substrate. On the other hand, CODM showed a higher preference for codeine than for thebaine. 
Considering the substrate preferences of these two demethylases, thebaine alone should accumulate only when the expression of both *T6ODM* and *CODM* is suppressed, whereas suppression of *CODM* alone should result in accumulation of codeine. In actuality, however, a large amount of thebaine with a smaller amount of codeine accumulates in the latex of mature PsM1-2 mutant plants. This pattern of compounds is similar to that of the *T6ODM*-silenced transformant obtained by virus-induced gene silencing \[[@B10-pharmaceuticals-05-00133]\]. In contrast, the *CODM*-silenced transformant accumulates mainly codeine, together with smaller amounts of thebaine and morphine \[[@B10-pharmaceuticals-05-00133]\]. Although the alkaloid productivities of those transformants cannot simply be compared with PsM1-2, as the alkaloid composition varies widely even among cultivars \[[@B14-pharmaceuticals-05-00133]\], it appears that suppression of CODM did not simply lead to thebaine accumulation in PsM1-2. It is also possible that in a Japanese cultivar lacking the pathway from thebaine to morphine via oripavine \[[@B11-pharmaceuticals-05-00133],[@B12-pharmaceuticals-05-00133]\], the substrate preferences of T6ODM and CODM differ from those of oripavine-producing cultivars. As the regulation of opium alkaloid production in *P. somniferum* is highly complicated and varies among cultivars, and even among developmental stages \[[@B15-pharmaceuticals-05-00133],[@B16-pharmaceuticals-05-00133]\] or individual parts of a single plant \[[@B17-pharmaceuticals-05-00133]\], further detailed studies on the molecular regulation of alkaloid production, such as expression analyses of *T6ODM* and *CODM* in the latex-producing capsule of PsM1-2, are required. 3. Experimental Section ======================= 3.1. Plant Materials -------------------- The wild type (WT) plant of *P. somniferum* L. 
used was the Japanese cultivar "Ikkanshu", and the *A. rhizogenes* strain MAFF03-01724 T-DNA insertion mutant line was PsM1-2 \[[@B8-pharmaceuticals-05-00133]\]. The *in vitro* culture of PsM1-2 used in this experiment had previously been subjected to a single round of cryopreservation and was regenerated into plantlets on Murashige-Skoog (MS) solid media \[[@B18-pharmaceuticals-05-00133]\] by the method described previously \[[@B19-pharmaceuticals-05-00133],[@B20-pharmaceuticals-05-00133]\] with slight modifications. 3.2. Maintenance and Cultivation of Plant Materials --------------------------------------------------- The WT plant seeds were obtained from field-grown plants at the Research Center for Medicinal Plant Resources, Division of Tsukuba. The PsM1-2 T~0~ *in vitro* shoot culture was maintained on MS solid media at 20 °C under a 14 h light/10 h dark condition, then transplanted into soil in a 9 cm diameter pot and acclimatized in a phytotron at 60% relative humidity under a cycle of 16 h light at 20 °C and 8 h dark at 17 °C. Seeds of T~1~ plants obtained from the soil-cultivated PsM1-2 T~0~ primary mutant were sown on soil in a 15 cm diameter pot and cultivated in a greenhouse under a 16 h light/8 h dark cycle at 20 °C and 60% relative humidity. Plants were fertilized with 500-fold diluted Hyponex^®^ (Hyponex Japan, Osaka, Japan) once a week. T~2~ seeds from the two lines of T~1~ plants that showed high thebaine content and abundant mature seeds were selected for cultivation of the T~2~ progeny. The cultivation conditions were the same as for the T~1~ plants. T~3~ seeds from two lines of T~2~ plants with high thebaine content were germinated on rock wool, with fertilization with 2,000-fold diluted Hyponex^®^, in a greenhouse under a 16 h light/8 h dark cycle at 20 °C and 60% relative humidity. 
After one month, seedlings were transplanted into soil in a 9 cm diameter pot and grown in the growth chamber under a 12 h light/12 h dark condition (short day condition) at 20 °C and 60% relative humidity. *Ca.* 80 days after sowing, the lighting was changed to a long day condition of 16 h light/8 h dark at 20 °C and 60% humidity to induce flowering. After transplanting, plants were fertilized with 500-fold diluted Hyponex^®^ once a week. For each experiment, WT plants were grown together as an experimental control. All self-pollination events were performed manually.

3.3. Phenotypic Observation of the PsM1-2 Mutants
-------------------------------------------------

Phenotypic parameters such as days to flowering, number of petals, appearance of splitting on the boundary of the petal, and the height of the aerial part at the seed-filling stage were observed on each plant.

3.4. HPLC Analysis of Alkaloid Content in the Latex
---------------------------------------------------

The opium alkaloid content in the latex was analyzed by HPLC. Latex was collected from the capsules of either WT or mutant *P. somniferum* plants *ca.* two weeks after flowering, by incising the capsule surface. Collected latex was dried at 50 °C. Approximately 5 mg of dried latex was weighed accurately and subjected to alkaloid extraction by adding 5 mL of methanol, followed by 30 min of sonication and thorough mixing using a tube mixer. After centrifugation at 20,000× *g* for 1 min, the supernatant was applied to an Ultrafree-MC spin column (Millipore, Bedford, MA, USA) and centrifuged at 20,000× *g* for 1 min, and then 5 μL of the flow-through was injected into an HPLC column. The HPLC conditions were as follows. HPLC instruments: Waters Alliance PDA System (separation module: 2795; photodiode array detector: 2996) (Waters, Milford, MA, USA). Column: TSK-GEL ODS100V (pore size 5 μm, φ4.6×250 mm) (Tosoh, Tokyo, Japan). Solvent system: CH~3~CN (A), 10 mM sodium 1-heptanesulphonate (pH 3.5) (B).
Solvent gradient (A%): 0 min 28%, 15 min 34%, 25 to 39 min 40%, 40 min 28%. Detection: UV 200 to 400 nm (spectrometric identification of compounds), UV 284 nm (quantitative analysis). Column temperature: 30 °C. Flow rate: 0.7 mL/min. HPLC data were collected and analyzed by an Empower system (Waters). Alkaloid components were identified by comparison of retention times and UV spectra with authentic standards. Morphine hydrochloride and codeine phosphate were purchased from Takeda Pharmaceutical Company Limited (Osaka, Japan). Oripavine was a gift from Einar Brochmann-Hanssen (University of California, San Francisco, CA, USA). Magnoflorine iodide and jateorrhizine were gifts from Akira Ikuta (Science University of Tokyo, Japan). Reticuline and columbamine were gifts from Fumihiko Sato (Kyoto University, Japan). Isothebaine was isolated from *Papaver pseudo-orientale* (Fedde) Medw. by our group. Thebaine was a gift from Ruri Kikura-Hanajiri (National Institute of Health Sciences, Japan). Papaverine hydrochloride, noscapine hydrochloride, coptisine chloride, sanguinarine chloride, and berberine chloride were purchased from Wako Pure Chemical Industries (Osaka, Japan). Alkaloid contents were calculated as a weight percent of the dried latex (opium).

3.5. Genomic DNA Preparation from P. somniferum
-----------------------------------------------

Genomic DNA was prepared from *ca.* 100 μg of fresh leaves of selfed plants grown in the growth chamber, or from *ca.* 100 μg of whole *in vitro* plantlets of the PsM1-2 T~0~ mutant, which mainly consisted of leaves and stems, by using a DNeasy Plant Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions.

3.6. Analysis of T-DNA Insertion Loci by IPCR and AL-PCR
--------------------------------------------------------

The inverse-PCR (IPCR) method \[[@B21-pharmaceuticals-05-00133],[@B22-pharmaceuticals-05-00133]\] and the adaptor ligation PCR (AL-PCR) method were used to analyze the unknown genomic DNA sequences flanking the inserted T-DNA. In this study, the Vectorette PCR method \[[@B23-pharmaceuticals-05-00133],[@B24-pharmaceuticals-05-00133],[@B25-pharmaceuticals-05-00133]\], an improved AL-PCR method, was employed to reduce non-specific amplicons. The genomic DNA library for each PCR method was constructed by digesting genomic DNA with the appropriate restriction enzymes, followed either by self-ligation to form a circular DNA library or by attachment of an adaptor linker to form an adaptor-ligated genome DNA library.

3.7. Genomic Library Construction for IPCR
------------------------------------------

Genomic DNA was digested with the restriction enzymes *Bam*HI, *Eco*RV, *Hae*III, *Kpn*I, *Pvu*II, *Ssp*I, or *Stu*I. Completely digested DNA was ligated by using a Fastlink^®^ DNA Ligation Kit (AR Brown, Tokyo, Japan) to form a circular genome DNA library.

3.8. Genomic Library Construction for AL-PCR
--------------------------------------------

The sequences of the adaptor oligo DNA and adaptor-specific primers used in this study are listed in [Supplementary Table 2](#pharmaceuticals-05-00133-s001){ref-type="supplementary-material"}. Two complementary oligo DNAs, AP-LS and AP-SS, were annealed to form an adaptor unit. Genomic DNA was digested with the restriction enzymes *Eco*RV, *Hae*III, *Pvu*II, *Ssp*I, or *Stu*I, which produce blunt ends. The completely digested DNA was ligated with adaptor units by using a Fastlink^®^ DNA Ligation Kit to form an adaptor-ligated genome DNA library.

3.9. IPCR and AL-PCR
--------------------

Amplification of the target region was performed by the nested PCR method using TaKaRa Ex Taq^TM^ DNA polymerase (Takara Bio, Shiga, Japan) under the following conditions.
The combinations of PCR methods, template genome DNA libraries, and primer sets are listed in [Supplementary Table 1](#pharmaceuticals-05-00133-s001){ref-type="supplementary-material"}. T-DNA-specific primers were designed based on the DNA sequence of the T-DNA region of the *A. rhizogenes* plasmid pRi1724 (DDBJ/EMBL/GenBank accession no. AP002086). The first PCR conditions were as follows: primary denaturation at 94 °C for 5 min; followed by 30 cycles of 94 °C for 1 min, 42 °C for 2 min, and 72 °C for 3 min; with a final extension at 72 °C for 10 min. After PCR, the solution was held at 4 °C. The first PCR reaction solution was applied to a SUPREC^TM^-02 filter (Takara Bio) to eliminate the primers and then used as a template for the second PCR. The second PCR conditions were as follows: primary denaturation at 94 °C for 5 min; followed by 30 cycles of 94 °C for 1 min, 48 °C for 2 min, and 72 °C for 3 min; with a final extension at 72 °C for 10 min. After PCR, the solution was held at 4 °C. The product of the second PCR was gel purified and cloned into the sequencing vector pT7-Blue^®^ (Novagen, Madison, WI). Propagated plasmid DNA was subjected to DNA sequencing using a BigDye^®^ Terminator v3.1 Cycle Sequencing Kit and an ABI PRISM^®^ 3100-Avant Genetic Analyzer (Applied Biosystems Japan, Tokyo, Japan). A homology search was performed on the T-DNA-flanking genome DNA sequences with the BLAST tool at NCBI.

3.10. Direct Amplification of T-DNA Borders Connected in Tandem
---------------------------------------------------------------

PCR was performed on uncut genome DNA of T~0~ to amplify the border region of T-DNAs connected in tandem. The primers used are listed in [Supplementary Table 1](#pharmaceuticals-05-00133-s001){ref-type="supplementary-material"}. The PCR conditions were the same as for the IPCR.

3.11. Analyses of T-DNA Insertion Loci and Inheritance Pattern by PCR
---------------------------------------------------------------------

The PCR method was employed to confirm the T-DNA integration loci on the *P. somniferum* genome DNA and to analyze the inheritance pattern in the selfed progenies. To match the genomic regions found adjacent to the T-DNA left borders (LBs) with those adjacent to the right borders (RBs), as revealed by the IPCR and AL-PCR analyses, PCR amplification was performed with pairs of genomic region-specific LB and RB primers (e.g*.*, LB1g *vs*. RB2g) listed in [Supplementary Table 3](#pharmaceuticals-05-00133-s001){ref-type="supplementary-material"}, under the following PCR conditions: primary denaturation at 94 °C for 5 min; followed by 30 cycles of 94 °C for 30 s, 58 °C for 30 s, and 72 °C for 1 min; with a final extension at 72 °C for 10 min. After PCR, the solution was held at 4 °C. TaKaRa Ex Taq^TM^ was used as the PCR polymerase. The PCR product was separated on agarose gel. A pair of genomic regions that gave a PCR product was judged to represent a single T-DNA-integrated locus. To determine whether the T-DNA integration loci were present in the selfed progenies, PCR amplification was performed between the genome region-specific primers listed in [Supplementary Table 3](#pharmaceuticals-05-00133-s001){ref-type="supplementary-material"} and T-DNA LB- or RB-specific primers (MAFF-226A or MAFF-14963S). PCR was performed under the following conditions: primary denaturation at 94 °C for 5 min; followed by 30 cycles of 94 °C for 30 s, 58 °C for 30 s, and 72 °C for 1 min; with a final extension at 72 °C for 10 min. After PCR, the solution was held at 4 °C. GoTaq^®^ Green Master Mix (Promega, Madison, WI, USA) was used as the PCR polymerase. The PCR product was separated on agarose gel.

3.12. T-DNA Copy Number Analysis by Real-time PCR
-------------------------------------------------

The T-DNA copy number was analyzed by the quantitative real-time PCR method \[[@B26-pharmaceuticals-05-00133],[@B27-pharmaceuticals-05-00133]\]. The strategy used for estimating the T-DNA copy number is as follows. When one of the integrated T-DNA sites had been identified by the T-DNA integration loci analysis, and when that site was a single copy, the copy number of the integrated T-DNA could be calculated as a multiple of the relative abundance of standard DNA fragments, as shown in [Figure 9](#pharmaceuticals-05-00133-f009){ref-type="fig"}. This estimation method can be applied under the assumptions that (1) the genome of PsM1-2 is diploid (2n = 22) \[[@B28-pharmaceuticals-05-00133]\], (2) all of the T-DNA is integrated into the host genome DNA in a heterozygous manner, and (3) one of the integrated T-DNAs for which both the LB and RB borders are known (e.g*.*, the LB1-RB2 locus) is a single copy. Under these assumptions, by comparing the relative abundances of the T-DNA internal region (in this case, orf2), the junction region between the T-DNA and the *P. somniferum* genome (LB1j), and the *P. somniferum* genome region (LB1g), we can calculate the inserted T-DNA copy number by fixing the abundance of LB1g at two.

![Schematic diagram of the strategy of T-DNA copy number analysis by real-time PCR.](pharmaceuticals-05-00133-g009){#pharmaceuticals-05-00133-f009}

In our experiment, one of the T-DNA-integrated sites, LB1-RB2, which is described in the Results section, was set as a standard. For the quantification standard plasmid DNA, we constructed pLB1, which included the LB1g, LB1j, and orf2 regions of the T-DNA.
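As a purely illustrative aside (not part of the original protocol; the function name and the abundance values below are invented), the copy-number arithmetic described above can be sketched as:

```python
def t_dna_copy_number(abund_orf2, abund_lb1j, abund_lb1g):
    """Estimate the integrated T-DNA copy number from real-time PCR
    relative abundances, under the stated assumptions: a diploid host,
    heterozygous integration, and a single-copy LB1-RB2 locus.

    The genomic LB1g region is carried by both homologous chromosomes,
    so its abundance is fixed at 2 and the other regions are scaled
    accordingly. The T-DNA-internal orf2 region then counts every
    integrated copy; the LB1j junction should scale to about 1.
    """
    scale = 2.0 / abund_lb1g        # normalize so that LB1g == 2
    junction = abund_lb1j * scale   # sanity check: ~1 for a single-copy locus
    copies = abund_orf2 * scale     # total integrated T-DNA copies
    return round(copies), round(junction)

# Invented example abundances: 2500 * (2/1000) = 5 copies,
# and the junction scales to 500 * (2/1000) = 1, as expected.
print(t_dna_copy_number(2500.0, 500.0, 1000.0))  # (5, 1)
```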
A DNA fragment containing these three regions was amplified by PCR with the primers LB1-orf2-S (5\'-CTC ATA AGC AGT GGT ATT GCT C-3\') and LB1-orf2-A (5\'-CGC ATT CAT GCG GTT ATG GAG-3\') and KOD-Plus-DNA polymerase (Toyobo, Osaka, Japan) under the following PCR conditions: primary denaturation at 94 °C for 2 min; followed by 35 cycles of 94 °C for 15 s, 62 °C for 30 s, and 68 °C for 90 s. After PCR, the solution was held at 4 °C. The amplified product was cloned into the pT7-Blue^®^ vector (Novagen) and then propagated in *E. coli*. The quantitative standard plasmid DNA pLB1 and genome DNA prepared from the primary T~0~ mutant and selected T~1~, T~2~, and T~3~ progenies of the PsM1-2 mutant were diluted serially with the dilution buffer supplied with the real-time PCR reagent SYBR^®^ Premix Ex Taq^TM^ II (Perfect Real Time; Takara Bio). Real-time PCR was run using the target region-specific primers listed in [Supplementary Table 4](#pharmaceuticals-05-00133-s001){ref-type="supplementary-material"} with the real-time PCR reagent on an ABI PRISM 7000 Sequence Detection System (Applied Biosystems Japan). The data obtained were analyzed using the supplied software (Applied Biosystems Japan), and the relative abundance of each target region was deduced from each Delta Rn value using standard curves. Standard curves for each target region were plotted with the plasmid concentration (fg/μL) on the x-axis and the Delta Rn on the y-axis. The curves showed good correlations. The relative abundances of each target region were calculated so that the abundance of the LB1g region was 2, and then rounded off to a whole number.

3.13. Actin Gene Amplification from P. somniferum
-------------------------------------------------

A fragment of actin cDNA was amplified by degenerate PCR using the forward primer 5\'-AAR GCN AAY MGN GAR AAR ATG AC and the reverse primer 5\'-CCR TAN ARR TCY TTN CKD ATR TC, which were designed from completely conserved regions of the amino acid sequences of other actins, such as those of *Arabidopsis thaliana* (*actin-1*: DDBJ/EMBL/GenBank accession No. M20016), *Nicotiana tabacum* (*actin*: X63603), and *Zea mays* (*Maz56*: U60514). cDNA synthesized from the total RNA of young seedlings of *P. somniferum* was used as a template for PCR. The manual hot-start procedure was used for the amplification. TaKaRa Ex Taq^TM^ DNA polymerase was added after primary denaturation at 94 °C for 5 min, and then the following protocol was carried out in a GeneAmp2400 thermal cycler (Applied Biosystems Japan): 30 cycles of 94 °C for 1 min, 48 °C for 2 min, and 72 °C for 3 min; with a final extension at 72 °C for 10 min. After PCR, the solution was held at 4 °C. The amplified fragment was cloned into the pT7-Blue^®^ vector, followed by DNA sequencing. Two representative actin cDNA sequences, whose deduced amino acid sequences showed 92% and 95% identity to *Arabidopsis actin-1*, were named *PsACT1* (AB574417) and *PsACT2* (AB574418), respectively.

3.14. Expression Analysis of the Morphine Biosynthetic Genes
------------------------------------------------------------

The expression levels of selected morphine biosynthetic genes downstream of (*S*)-*N*-methylcoclaurine, *CYP80B3* (DDBJ/EMBL/GenBank accession no.
AF134590 \[[@B29-pharmaceuticals-05-00133]\]), (*R*,*S*)-3\'-hydroxy-*N*-methylcoclaurine 4\'-*O*-methyltransferase (*4\'OMT*; AY217333 \[[@B15-pharmaceuticals-05-00133]\]), *SalAT* (AF339913 \[[@B30-pharmaceuticals-05-00133]\]), *T6ODM* (GQ500139 \[[@B10-pharmaceuticals-05-00133]\]), *COR* (allele *Cor1-1*: AF108432; allele *Cor2-1*: AF108438 \[[@B31-pharmaceuticals-05-00133]\]), and *CODM* (GQ500141 \[[@B10-pharmaceuticals-05-00133]\]) in the WT plant and the PsM1-2 mutant were analyzed and compared by the reverse transcription PCR (RT-PCR) method. Total RNA was prepared from whole plants of two-week-old seedlings of field-grown WT *P. somniferum*, or from whole *in vitro* plantlets of the PsM1-2 T~0~ mutant, which mainly consisted of leaves and stems, by using an RNeasy Plant Mini Kit (Qiagen) according to the manufacturer's instructions. One microgram of each total RNA sample was subjected to single-stranded cDNA synthesis by reverse transcription with an oligo-(dT) primer (RACE32: 5\'-GAC TCG AGT CGA CAT CGA TTT TTT TTT TTT TT-3\') \[[@B32-pharmaceuticals-05-00133]\] using Superscript^®^ II Reverse Transcriptase (Life Technologies, Carlsbad, CA, USA) according to the manufacturer's instructions. The synthesized ss-cDNA was used as a template for PCR with the gene-specific primers listed in [Supplementary Table 5](#pharmaceuticals-05-00133-s001){ref-type="supplementary-material"}. The PCR conditions were as follows: primary denaturation at 94 °C for 5 min; followed by 30 cycles of 94 °C for 30 s, 58 °C for 30 s, and 72 °C for 1 min; with a final extension at 72 °C for 10 min. After PCR, the solution was held at 4 °C. PCR products were separated on 1.0% agarose gel and signal intensities were observed. The actin gene *PsACT1* from *P. somniferum* was used as an experimental control.

3.15. Statistical Analysis
--------------------------

Values were expressed as the mean ± standard deviation (SD) and were analyzed by the Tukey-Kramer multiple comparison test using the statistical analysis system "R" software package \[[@B33-pharmaceuticals-05-00133]\]; a *p* value of less than 0.05 was considered significant.

4. Conclusions
==============

By combining genetic and phenotypic analyses of the T-DNA insertional mutant PsM1-2 with selfing, we have succeeded in stabilizing the high-thebaine phenotype in coordination with a reduction in the number of inserted T-DNA copies. Although the genetic basis of the CODM suppression in the *in vitro* plantlets and of the thebaine accumulation remains unknown, studies on this mutant and its progenies may provide new insights into the molecular basis of morphine biosynthesis, and could ultimately allow us to manipulate the biosynthesis of this compound at will.

We thank Naoko Tanaka and Naoko Onodera for their technical assistance. This study was supported in part by a grant from the Ministry of Health, Labor and Welfare of Japan. The authors declare no conflict of interest.
Changes in the gonads, reproduction and generations of white rats under the effect of some pesticides. The problems of the gonadotropic effects of pesticides, together with changes in reproduction and in the offspring, are of great interest and have drawn the attention of scientists in recent years. Assessment of these aspects should be made on the basis of integrated toxicological tests (weight, behaviour, mortality, fertility, etc.). They should be complemented by biochemical studies on homogenates of the testes (determination of RNA and DNA, as well as of enzyme systems that might relate to the metabolic processes in the gonadal tissue, according to the chemical nature of the noxious agent examined). These studies should be carried out in the parent generation (F0), directly exposed to the chemical agents. The following generations (F1-F3), not exposed to the agent, are submitted to the same analyses. With regard to the full assessment of the offspring, some evaluation of the functional state of given organs (liver, brain, testes) might be performed.
Vanessa Virgen

Vanessa Virgen Zepeda (born July 11, 1984 in Manzanillo, Colima) is a female beach volleyball player from Mexico who competed in the Swatch FIVB World Tour in 2005, playing with Alejandra Acosta, and in 2006, playing with Diana Estrada. She also represented her home country at the 2006 Central American and Caribbean Games, partnering Martha Revuelta and winning the silver medal. On the NORCECA Beach Volleyball Circuit 2008 she finished in 4th place three times. She claimed her first podium, a 2nd-place finish, at the 2009 NORCECA Cayman Islands Tournament playing with Paulette Cruz, with whom she took 3rd place some weeks later at the Boca Chica Tournament in the Dominican Republic.

Indoor

She also played indoor volleyball during 2007, appearing in the Pan American Cup and in the tournament at the Pan American Games.

References

External links

Category:1984 births
Category:Living people
Category:Sportspeople from Manzanillo, Colima
Category:Mexican beach volleyball players
Category:Volleyball players at the 2007 Pan American Games
Category:Women's beach volleyball players
Category:Central American and Caribbean Games silver medalists for Mexico
Category:Competitors at the 2006 Central American and Caribbean Games
Sooners Would Rather Not Wait For Metoyer

What hasn't changed for Trey Metoyer is that he is again the talk of spring football practice at Oklahoma. What has to change for the Sooners and their potential-packed receiver is everything that follows during the regular season. For now, though, he is who and what he was a year ago, the one with all the attention and expectations, the next one in the line of Sooners receivers behind Mark Clayton and Mark Bradley, Brandon Jones and Travis Wilson, Malcolm Kelly and Juaquin Iglesias, Ryan Broyles and – well, it was supposed to be Metoyer, wasn't it?

It nearly happened in 2012, but it never came together for Metoyer. He started the first four games, but none after that, and actually sat out three times. He caught 17 passes for 148 yards and just one touchdown. We forget, though, that he was just a freshman and one who hadn't played much football the year before.

Not everyone is willing to grant that exception. Whatever the motivation, college football seeks quick fixes, and a player who is billed as the future but didn't do it yesterday is sometimes deprived of tomorrow. The next big thing is always one recruiting class away from replacing the one who did too little.

Metoyer might be the type worth waiting for, and the Sooners are willing to practice patience while hoping for Metoyer's best in 2013. The Sooners have been anticipating that for a while, first celebrating his signed letter of intent in 2011 and then holding on after Metoyer ended up at Virginia's Hargrave Military Academy so that he could get his grades in order and play a little prep school football on the side.

In addition to the reality that he was stuck in a prep school, Metoyer also battled an ankle injury that limited him to four games. You could excuse the Sooners for allowing a delayed delivery. This was their first five-star receiver prospect, who learned his trade in Whitehouse, Texas, and caught at least 15 touchdown passes his final three seasons.
He exploded as a senior with 105 catches for 1,540 yards and 23 scores, and the Sooners would pump their fists knowing that he picked them and not someone else on a long list of the country's top college programs. They gladly accepted Metoyer and his tidy transcript in January 2012, and he seemed to be making up for lost time last spring.

The position was filled with players who had done it before, like Kenny Stills, and players who had been waiting for their time, like Jaz Reynolds. There was no denying Metoyer, though, and he started the spring game and caught six passes for 72 yards. Suddenly, the graduation loss of Ryan Broyles, the NCAA's all-time leader with 349 receptions, wasn't as dire as it once seemed. And when the Sooners suspended three receivers in May, including Reynolds, who'd clicked with quarterback Landry Jones after Broyles's season-ending injury in 2011, Metoyer's promise softened the blow.

The fall was not the spring for Metoyer, and many things changed around him. The Sooners would be very good, with or without the suspended players, and they were labeled the Big 12's preseason favorite by the conference coaches while Metoyer was voted the preseason newcomer of the year.

Coach Bob Stoops nevertheless sought insurance for the suspensions and for the fact he would be relying on three true freshmen, a big number no matter their talent. He brought in Justin Brown from scandalized Penn State after Jalen Saunders came over from Fresno State. Brown was allowed immediate eligibility, and the Sooners hoped for a waiver for Saunders. Brown would cut into Metoyer's playing time, and Saunders was granted his eligibility before the fifth game – or the first game Metoyer didn't start.

There was urgency for Oklahoma, which hadn't impressed in the opener at UTEP and lost the third game at home to Kansas State. In the spring a quarterback might spread passes to develop depth and help a new player perform like a veteran.
In the season, when a senior like Jones is taking his last snaps, when the window for a national title or a BCS bowl closes slowly, quarterbacks aren't always as judicious. Jones aimed often at Stills, Brown and Saunders, and he was rewarded. Sterling Shepard, another true freshman, picked things up quicker than Metoyer and caught 45 passes for 631 yards and three scores.

The season transpired without much of a contribution from Metoyer. The Sooners played fast, as fast as anyone else when they pushed the tempo on offense, and that's possibly the hardest thing for a first-year player to adjust to. It's a test that stresses a player's focus and mentality as much as his speed and conditioning. When the body tires, the mind can stagger, too. When the mind staggers, others create distance. New experiences are no longer excuses.

Stills and Brown are gone now, and the new Sooners quarterback will look to Shepard, Saunders and Metoyer, as well as Reynolds, who was suspended all of last season. For now, Metoyer has done enough to make his coaches say the sort of things that project an improvement, though they want more consistency so that he proves worthy of starting. Fool them twice, shame on them.

Metoyer has been silent, focused on the side and not available for interviews throughout the spring. Though he's not talking, people are still talking about him until he gives them something new to talk about in 2013.
### Introduction

Most mainstream programming languages provide several complex data structures, but before ES6, JavaScript had only two: arrays and objects.

To make up for this shortcoming, ES6 introduced four new data structures:

They are: Map (`Map`), Set (`Set`), WeakSet (`WeakSet`), and WeakMap (`WeakMap`).

### Main Content

A Set is similar to an array, except that every member's value is unique; there are no duplicate values.

```javascript
let set = new Set([1, 2, 3, 3])

console.log(set) // Set(3) {1, 2, 3}

[...set] // [1, 2, 3]
```

We can create a set by passing an array to the Set constructor; duplicate values in the array are removed automatically.

Set has only a few commonly used methods; the most common are:

- `add(value)`: adds a value and returns the Set itself
- `delete(value)`: deletes a value and returns a boolean indicating whether the deletion succeeded
- `has(value)`: returns a boolean indicating whether the value is a member of the `Set`
- `clear()`: removes all members; has no return value

In addition, a `set` reports the number of its members through the `size` property, rather than an array's `length`.

```javascript
let set = new Set()

set.add(1).add(2).add(2)

set.size // 2

set.delete(2)
set.has(2) // false

set.clear()
set.size // 0
```

(Note that `size` is a property, not a method, so it is read without parentheses.)

An array's forEach method can also be used to iterate over a set; the usage is the same, so it is not repeated here.

A Map resembles a two-dimensional array: it is a collection of key-value pairs, though it is written slightly differently.

```javascript
let map = new Map([
  ['a', '1'],
  ['b', '2']
])

console.log(map) // Map(2) {"a" => "1", "b" => "2"}
```

As with Set, a Map uses the size property to indicate how many key-value pairs it contains.

However, values are added to and retrieved from a Map with the set and get methods; the other methods, such as has and delete, work the same as on a Set.

```javascript
let m = new Map()
let o = {p: 'Hello World'}

m.set(o, 'content')
m.get(o) // "content"

m.has(o) // true
m.delete(o) // true
m.has(o) // false
```

The advantage over plain JavaScript objects is that a Map can use any value as a key, including objects (such as `o` in the code above).

`WeakSet` and `WeakMap` are rarely used. As their names suggest, they can be thought of as weaker versions of Set and Map: they offer fewer features and are more easily garbage-collected (lower memory overhead).

### Exercises

**I hope you will type out everything in this part by hand and think it through on your own.**

Write an array-deduplication function using Set.

It should accept an array argument and return the original array with all duplicate values removed.

---

```javascript
let set = new Set()
let a = NaN
let b = NaN
let c = {}
let d = {}

set.add(a).add(b).add(c).add(d)
```

What should `set.size` output at this point? Try to explain why you get this result.

---

- [Previous chapter: Regular Expressions](regexp.md)
- [Next chapter: Symbol](symbol.md)
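For reference after attempting the first exercise above yourself, here is one possible sketch (the function name `unique` is an arbitrary choice, not part of the exercise statement):

```javascript
// One possible answer to the deduplication exercise above.
// The function name `unique` is arbitrary.
function unique(arr) {
  // A Set silently drops duplicate values; spreading it back
  // into an array literal restores the array shape.
  return [...new Set(arr)]
}

console.log(unique([1, 2, 2, 3, 3, 3])) // [1, 2, 3]
```

A design note relevant to the second exercise as well: Set membership uses SameValueZero equality, so `NaN` is treated as equal to itself, while two separately created empty objects are distinct members.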
In The Court of Appeals Sixth Appellate District of Texas at Texarkana ______________________________ No. 06-04-00054-CV ______________________________     IN THE INTEREST OF C. G. B., a/k/a M. G. K., AND J. R. B., a/k/a R. R. K., CHILDREN                                                    On Appeal from the 76th Judicial District Court Titus County, Texas Trial Court No. 30,610                                                   Before Morriss, C.J., Ross and Carter, JJ. Opinion by Chief Justice Morriss O P I N I O N             For almost two years Wade and Debbie Kludt served as foster parents for C.G.B. and J.R.B., placed in their home by the Texas Department of Family and Protective Services. But the Kludts' hopes to adopt the children were first delayed, long beyond the required and customary time periods, and then were interrupted when the Department removed the children from the Kludt home. That removal was based on Department administrative findings that Debbie Kludt had inflicted a blunt force trauma on C.G.B. and then neglected the child's medical needs—findings contained in a Department letter dated May 21, 2003, but contradicted by the Kludts, who maintained C.G.B. was injured in a bicycle accident. The Kludts ultimately asked the trial court to name them, rather than the Department, managing conservators of the two children. Mediation proved futile. Over a year after the children had been removed from the Kludts' home, the trial court was finally able to bring the Kludts' request to a hearing. The court denied the Kludts possession of the children, because by contract the Department could remove the children from the Kludts' home without cause and because the children, by the time of the hearing, had been out of the Kludts home for over a year. 
The trial court, however, directed at the Department some strongly-worded negative findings, which are not relevant to this appeal; ruled the Department's administrative findings against the Kludts were unfounded; and entered a number of orders, only two of which are challenged in this appeal.             The Department appeals, asking only that we strike from the trial court's order of April 8, 2004, the parts of the order exonerating the Kludts "from any wrongdoing toward" C.G.B. and J.R.B. and ordering the Department "to expunge from its records and its computer system the administrative findings contained in the letter dated May 21, 2003 and signed by Angela L. Nowell." The Department asserts that, once the trial court found the Kludts lacked standing, the trial court had no subject-matter jurisdiction to exonerate the Kludts or to order the Department's records expunged. We disagree and conclude the trial court had authority to enter its order because (1) the Kludts had standing, and (2) even if the Kludts lacked standing, the trial court had inherent power to enter the order. The Kludts Had Standing             Ordinarily, standing must be established before a court will have the subject-matter jurisdiction essential to its power to decide a case. Bland Indep. Sch. Dist. v. Blue, 34 S.W.3d 547, 553–54 (Tex. 2000). Here, the Kludts had standing under Section 102.003(a)(12) of the Texas Family Code, which provides that a suit seeking modification of the parent-child relationship may be filed by "a person who is a foster parent of a child placed by the Department . . . in the person's home for at least 12 months ending not more than 90 days preceding the date of the filing of the petition." Tex. Fam. Code Ann. § 102.003(a)(12) (Vernon Supp. 2004–2005).             The children were foster children in the Kludts' home from the time the Department placed them there, July 17, 2001, until the Department removed them April 2, 2003—a period of over twenty-one months. 
On June 19, 2003, seventy-eight days after the Department removed the children from their home, the Kludts filed their petition to modify. Therefore, pursuant to Section 102.003(a)(12), the Kludts had standing. The trial court erred in concluding they had none. Because the Kludts had standing, the trial court's order can be sustained on that basis. The Court Had Inherent Power To Enter the Order             But even if the Kludts had no standing to seek conservatorship of the children, the trial court clearly has a continuing statutory duty to oversee the children's case, including the Department's supervision of them.             For example, the supervising court must review the conservatorship appointment of the Department or another agency and the substitute care thereunder. Tex. Fam. Code Ann. § 263.002 (Vernon 2002). The Department must create a "service plan" for each child in its custody within forty-five days after the court renders a temporary order appointing it temporary managing conservator. Tex. Fam. Code Ann. § 263.101 (Vernon 2002). The plan must be filed with the court. Tex. Fam. Code Ann. § 263.105(a) (Vernon 2002). In the plan, the Department must set out its goals for the child, specifying how it intends to seek a "permanent safe placement" for the child, whether by termination and placement for adoption, by return to their family, or by other means. Tex. Fam. Code Ann. § 263.102 (Vernon 2002). The plan is explicitly subject to review by the court of continuing jurisdiction over the child. Tex. Fam. Code Ann. § 263.105 (Vernon 2002).             The Department must prepare a "permanency plan" for each child. Tex. Fam. Code Ann. § 263.3025 (Vernon 2002). The trial court must review the Department's permanency progress reports  in  connection  with  the  "permanency  plan"  created  for  each  child.  Tex.  Fam.  Code Ann. § 263.303 (Vernon 2002). The trial court's hearings must be held no less frequently than as set out by statute. See Tex. Fam. 
Code Ann. §§ 263.304, 263.305 (Vernon 2002). The statutory scheme sets out a number of things that must be done by the trial court, including reviewing the appropriateness of the current placement; determining the plans, services, and orders needed to ensure that a final order is timely rendered; deciding whether the Department has made reasonable efforts to finalize the permanency plan; and projecting a likely date for the child to be placed for adoption. See Tex. Fam. Code Ann. § 263.306 (Vernon 2002). At permanency hearings, the court is required to review the service plan, permanency report, and other information from the hearing, including the child's safety, the ongoing viability of the current placement, and the compliance and progress made, including whether the Department has made reasonable efforts to finalize the permanency plan. Tex. Fam. Code Ann. § 263.306(b). The court should always be guided by "the best interest of the child." See Tex. Fam. Code Ann. § 263.307 (Vernon 2002).

The Department appears to be taking the position that the trial court's review may focus only on the actions of parents or foster parents toward the child, not on the Department's actions that affect the child. The statute makes no such distinction, and there is no more reason to permit an agency to act outside the best interest of a child than to allow a parent to do so. Accordingly, the actions taken by the trial court fall well within the ambit of its explicit and implicit authority to review this type of proceeding, and it is clear from the court's findings that, in doing so, it was properly performing its duty to C.G.B. and J.R.B.

We affirm the judgment.

Josh R. Morriss, III
Chief Justice

Date Submitted: April 27, 2005
Date Decided: May 3, 2005

Date Decided: September 14, 2007

1. Sims also sued Donald Ray Holt in this case. Holt was served, but did not appear at trial and has not appealed the trial court's judgment.

2. The trial court conducted a final hearing in this case December 7, 2006. Sims appeared with her counsel; Jenkins appeared pro se.

3. Jenkins also paid Sims $600.00 to have the mobile home moved onto the disputed property. Part of Jenkins' mortgage payment was meant to reimburse Sims for that additional cost.

4. At the conclusion of the hearing, the trial court also made several oral pronouncements that were consistent with its later-entered written findings of fact and conclusions of law. The trial court stated that all the money Jenkins had paid to Sims should be considered as rental payments, which meant Jenkins was not entitled to a refund of any money.

5. Jenkins' appellate brief raises two issues that are, at best, far from a model of clarity. For example, the first issue raised appears to challenge the legal sufficiency of the evidence to support the trial court's judgment. Yet the brief does not attempt to set forth the proper standard of review for such an issue. And in providing analysis on the issue raised, Jenkins appears to provide a factual sufficiency analysis. Then, to further complicate matters, she asks for this case to be remanded to the trial court for a determination of whether the mobile home is real or personal property. Thus, it would appear she has raised three separate and distinct issues within a single point of error, even though none of these issues has been clearly briefed or analyzed. A similar deficiency appears in the briefing portion for Jenkins' second point of error.
A party's brief "must contain a clear and concise argument for the contentions made, with appropriate citations to authorities and to the record." Tex. R. App. P. 38.1(h). Given these deficiencies, we could conclude Jenkins has inadequately briefed these issues. See, e.g., El Paso Natural Gas Co. v. Strayhorn, 208 S.W.3d 676, 681 n.7 (Tex. App.--Texarkana 2006, no pet.). However, in this case, insofar as we can fairly do so, we will address what we have discerned to be the two main issues raised in Jenkins' brief. To the extent that any additional issues have been raised, we overrule those issues as both multifarious, see In re Guardianship of Moon, 216 S.W.3d 506, 508 (Tex. App.--Texarkana 2007, no pet.), and inadequately briefed, see Strayhorn, 208 S.W.3d at 681 n.7. Our leniency in substantively addressing any issues in this case should not be interpreted as future permission to submit briefs that raise multifarious points of error, because our law grants us discretion to summarily overrule the entirety of any multifarious or inadequately briefed points of error. See, e.g., Foster v. State, 101 S.W.3d 490, 499 (Tex. App.--Houston [1st Dist.] 2002, no pet.) (three separate issues combined into single point of error ruled inadequately briefed and multifarious); H.B. Zachry Co. v. Ceco Steel Prods. Corp., 404 S.W.2d 113, 133 (Tex. Civ. App.--Eastland 1966, writ ref'd n.r.e.) (overruling four separate issues as duplicitous and multifarious).

6. Neither party raised at trial or on appeal whether a writing was required even if the mobile home was classified as personal property. See Tex. Bus. & Com. Code Ann. § 2.201 (Vernon 1994) (writing is required for a sale of goods for the price of $500.00 or more).

7. Additionally, we note the record is not clear whether the trial court characterized the mobile home as "personal" or as "real" property.
But in finding Sims was the owner, the court labeled the properties at issue as both "real and personal property" under a single grouping, a characterization which appears in the trial court's findings.

8. We note that the possible application of Section 5.072(a) of the Texas Property Code (which prohibits oral contracts for sale of land) to the doctrine of promissory estoppel was not at issue in this case. See Tex. Prop. Code Ann. § 5.072(a) (Vernon 2004).
---
abstract: 'Starting from a Skyrme interaction with tensor terms, the $\beta$-decay rates of $^{52}$Ca have been studied within a microscopic model including the $2p-2h$ configuration effects. We observe a redistribution of the strength of Gamow-Teller transitions due to the $2p-2h$ fragmentation. Taking this effect into account results in a satisfactory description of the neutron emission probability of the $\beta$-decay of $^{52}$Ca.'
author:
- ' $^{1),2)}$'
title: 'Strength fragmentation of Gamow-Teller transitions and delayed neutron emission of atomic nuclei'
---

Multi-neutron emission is basically a multistep process consisting of (a) the $\beta$-decay of the parent nucleus (N, Z), which results in feeding the excited states of the daughter nucleus (N - 1, Z + 1), followed by (b) $\gamma$-deexcitation to the ground state or (c) multi-neutron emission to the ground state of the final nucleus (N - 1 - X, Z + 1); see, e.g., Ref. [@b05]. Predictions of multi-neutron emission are needed for the analysis of radioactive beam experiments and for modeling of the astrophysical r-process. Recent experiments gave evidence for strong shell effects in exotic calcium isotopes [@w13; @s13]. For this reason, the $\beta$-decay properties of the neutron-rich isotope $^{52}$Ca provide valuable information [@h85], with important tests of theoretical calculations.

Table 1. Energies (in MeV) and $\log ft$ values of the low-energy $1^+$ states $\lambda_i^{\pi}=1_i^+$ of $^{52}$Sc.

| $\lambda_i^{\pi}=1_i^+$ | $E$, Expt. | $E$, QRPA | $E$, 2PH | $\log ft$, Expt. | $\log ft$, QRPA | $\log ft$, 2PH |
|---|---|---|---|---|---|---|
| $1_1^+$ | 1.64 | 1.5 | 1.3 | 4.2$\pm$0.1 | 4.3 | 4.3 |
| $1_2^+$ | 2.75 |  | 3.9 | 4.5$\pm$0.2 |  | 6.4 |
| $1_3^+$ | 3.46 |  | 4.2 | 5.3$\pm$0.5 |  | 9.2 |
| $1_4^+$ | 4.27 | 5.0 | 4.9 | 4.0$\pm$0.5 | 3.2 | 3.3 |

One of the successful tools for studying charge-exchange nuclear modes is the quasiparticle random phase approximation (QRPA) with the self-consistent mean field derived from a Skyrme energy-density functional (EDF), since such QRPA calculations enable one to describe the properties of the parent ground state and the Gamow-Teller (GT) transitions using the same EDF. Making use of the finite rank separable approximation (FRSA) [@gsv98] for the residual interaction, the approach has been generalized to include the coupling between one- and two-phonon components of the wave functions [@svg04]. The FRSA in the cases of charge-exchange excitations and the $\beta$-decay was introduced in Refs. [@svg12; @ss13] and Refs. [@svbag14; @e15], respectively. In the case of the $\beta$-decay of $^{52}$Ca, we use the EDF T45, which takes into account the tensor force together with a refit of the parameters of the central interaction [@TIJ]. The pairing correlations are generated by a zero-range volume force with a strength of -315 MeV fm$^{3}$ and a smooth cut-off at 10 MeV above the Fermi energies [@svbag14]. This value of the pairing strength has been fitted to reproduce the experimental neutron pairing energy of $^{52}$Ca obtained from the binding energies of neighbouring Ca isotopes.
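The neutron pairing energy extracted from binding energies of neighbouring isotopes is commonly defined through the three-point odd-even mass staggering; one standard form (shown here for orientation — the text does not state which variant is used) is

$$\Delta_n^{(3)}(N,Z)=\frac{(-1)^{N}}{2}\,\bigl[\,2B(N,Z)-B(N-1,Z)-B(N+1,Z)\,\bigr],$$

where $B(N,Z)$ is the (positive) binding energy; the pairing strength is then tuned so that the calculated neutron pairing gap of $^{52}$Ca matches this empirical value.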
Taking into account the basic ideas of the quasiparticle-phonon model (QPM) [@solo; @ks84], the Hamiltonian is then diagonalized in a space spanned by states composed of one and two QRPA phonons [@svbag14],

$$\begin{aligned} \Psi _\nu (J M) = \left(\sum_iR_i(J \nu )Q_{J M i}^{+}+ \sum_{\lambda _1i_1\lambda _2i_2}P_{\lambda _2i_2}^{\lambda _1i_1}( J \nu )\left[ Q_{\lambda _1\mu _1i_1}^{+}\bar{Q}_{\lambda _2\mu _2i_2}^{+}\right] _{J M }\right)|0\rangle, \label{wf}\end{aligned}$$

where $Q_{\lambda \mu i}^{+}\mid0\rangle$ are the wave functions of the one-phonon states of the daughter nucleus (N - 1, Z + 1), and $\bar{Q}_{\lambda\mu i}^{+} |0\rangle$ is the one-phonon excitation of the parent nucleus (N, Z). We use only the two-phonon configurations $[1^{+}_{i}\otimes 2^{+}_{i'}]_{QRPA}$.

In the allowed GT approximation, the $\beta^{-}$-decay rate is expressed by summing the probabilities (in units of $G_{A}^{2}/4\pi$) of the energetically allowed transitions ($E_{k}^{\mathrm{GT}}\leq Q_{\beta}$) weighted with the integrated Fermi function,

$$\begin{aligned} T_{1/2}^{-1}=D^{-1}\left(\frac{G_{A}}{G_{V}}\right)^{2} \sum\limits_{k}f_{0}(Z+1,A,E_{k}^{\mathrm{GT}})B(GT)_{k},\end{aligned}$$

$$E_{k}^{\mathrm{GT}}=Q_{\beta}-E_{1^+_k},$$

where $G_A/G_V$=1.25 and $D$=6147 s. $E_{1_k^+}$ denotes the excitation energy of the daughter nucleus. As proposed in Ref. [@ebnds99], this energy can be estimated by the following expression:

$$E_{1^{+}_{k}}\approx E_{k}-E_{\textrm{2QP},\textrm{lowest}},$$

where $E_{k}$ are the eigenvalues of the wave functions (\[wf\]) and $E_{\textrm{2QP},\textrm{lowest}}$ corresponds to the lowest two-quasiparticle energy. The difference in the characteristic time scales of the $\beta$-decay and the subsequent particle emission processes justifies an assumption of their statistical independence (see Ref. [@b05] for more details).
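The half-life formula above can be turned into a short numerical sketch. All inputs below (excitation energies, $B(GT)$ values, integrated Fermi functions, $Q_\beta$, and the neutron separation energy $S_n$) are illustrative placeholders, not the values obtained in the present calculation:

```python
# Sketch of T_{1/2}^{-1} = D^{-1} (G_A/G_V)^2 sum_k f0(E_k^GT) B(GT)_k
# and of P_n as the strength fraction feeding states above S_n.
D = 6147.0          # s
GA_OVER_GV = 1.25
Q_BETA = 8.0        # MeV, placeholder
S_N = 4.0           # MeV, neutron separation energy of the daughter (placeholder)

# (excitation energy E_x of a daughter 1+ state in MeV, B(GT)_k, f0)
states = [
    (1.5, 0.20, 9.0e3),   # all numbers are illustrative
    (3.9, 0.05, 2.5e3),
    (4.2, 0.03, 1.0e3),
    (4.9, 0.10, 4.0e2),
]

# only transitions inside the Q_beta window contribute
allowed = [(ex, bgt, f0) for (ex, bgt, f0) in states if ex <= Q_BETA]

strength = sum(f0 * bgt for (_, bgt, f0) in allowed)
half_life = D / (GA_OVER_GV**2 * strength)   # seconds

# P_n: fraction of the f0-weighted strength feeding states above S_n
above = sum(f0 * bgt for (ex, bgt, f0) in allowed if ex > S_N)
p_n = above / strength

print(f"T1/2 = {half_life:.3f} s, P_n = {100 * p_n:.1f} %")
# prints: T1/2 = 1.972 s, P_n = 3.5 %
```

Because the Fermi function $f_0$ grows rapidly with transition energy, low-lying daughter states dominate the half-life even when most of the $B(GT)$ strength sits higher; this is why the $2p-2h$ redistribution of strength changes $P_n$ noticeably.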
The probability $P_{n}$ of delayed neutron emission is defined as the ratio of the integrated $\beta$-strength feeding excited states above the neutron separation energy of the daughter nucleus to the total $\beta$-strength in the $Q_\beta$ window.

The spectrum of the four low-energy $1^+$ states of $^{52}$Sc is shown in Table 1. The structure peculiarities are reflected in the $\log ft$ values. We find that the dominant contribution to the wave function of the first (fourth) $1^+$ state comes from the configuration $\{\pi1f_{7/2}\nu1f_{5/2}\}$ ($\{\pi1f_{7/2}\nu1f_{7/2}\}$). The inclusion of the four-quasiparticle configurations $\{\pi1f_{7/2}\nu1f_{5/2} \nu2p_{3/2}\nu2p_{1/2}\}$ and $\{\pi1f_{7/2}\nu1f_{5/2} \nu2p_{3/2}\nu2p_{3/2}\}$ plays the key role in our calculations of the states $1_{2}^+$ and $1_{3}^+$, respectively. The inclusion of the two-phonon configurations results in a $P_{n}$ value of 5%, and the quantitative agreement with the experimental data [@h85] is satisfactory. Note that this value is almost three times smaller than that obtained within the one-phonon approximation.

In summary, starting from Skyrme mean-field calculations, the GT strength in the $Q_{\beta}$ window has been studied within a model including the $2p-2h$ fragmentation. We analyze this effect on the $\beta$-transition rates in the case of $^{52}$Ca. Including the $2p-2h$ configurations leads to qualitative agreement with the existence of four low-energy $1^+$ states of $^{52}$Sc. As a result, the probability of delayed neutron emission is decreased.

I would like to thank I.N. Borzov, Yu.E. Penionzhkevich, and D. Verney for fruitful collaboration, and N.N. Arsenyev and E.O. Sushenok for help. This work is partly supported by CNRS-RFBR Agreement No. 16-52-150003, the IN2P3-JINR agreement, and RFBR Grant No. 16-02-00228.

[99]{}

$\beta$-delayed neutron emission in the $^{78}$Ni region // Phys. Rev. C. 2005. V. 71. P. 065801.

Masses of exotic calcium isotopes pin down nuclear forces // Nature. 2013. V. 498. P. 346–349.
Evidence for a new nuclear ‘magic number’ from the level structure of $^{54}$Ca // Nature. 2013. V. 502. P. 207–210.

Beta decay of the new isotopes $^{52}$K, $^{52}$Ca, and $^{52}$Sc; a test of the shell model far from stability // Phys. Rev. C. 1985. V. 31. P. 2226–2237.

Finite rank approximation for random phase approximation calculations with Skyrme interactions: an application to Ar isotopes // Phys. Rev. C. 1998. V. 57. P. 1204–1209.

Effects of phonon-phonon coupling on low-lying states in neutron-rich Sn isotopes // Eur. Phys. J. A. 2004. V. 22. P. 397–403.

Charge-exchange excitations with Skyrme interactions in a separable approximation // Prog. Theor. Phys. 2012. V. 128. P. 489–506.

Tensor correlation effects on Gamow-Teller resonances in $^{120}$Sn and $N=80,82$ isotones // Prog. Theor. Exp. Phys. 2013. V. 2013. P. 103D03.

Influence of 2p-2h configurations on $\beta$-decay rates // Phys. Rev. C. 2014. V. 90. P. 044320.

Low-lying intruder and tensor-driven structures in $^{82}$As revealed by $\beta$-decay at a new movable-tape-based experimental setup // Phys. Rev. C. 2015. V. 91. P. 064317.

Tensor part of the Skyrme energy density functional: Spherical nuclei // Phys. Rev. C. 2007. V. 76. P. 014312.

Theory of atomic nuclei: quasiparticles and phonons. Bristol and Philadelphia, Institute of Physics, 1992.

Fragmentation of the Gamow-Teller resonance in spherical nuclei // J. Phys. G. 1984. V. 10. P. 1507–1522.

$\beta$-decay rates of r-process waiting-point nuclei in a self-consistent approach // Phys. Rev. C. 1999. V. 60. P. 014302.
Elvin C. Stakman

Elvin Charles Stakman (May 17, 1885 – January 22, 1979) was an American plant pathologist who was a pioneer of methods of identifying and combatting disease in wheat. Stakman was the advisor for Margaret Newton, who completed her Doctor of Philosophy (Ph.D.) studies in 1922 and became an internationally renowned phytopathologist in the study of stem rust. Stakman married the plant pathologist Estelle Louise Jensen in 1917. He also had a major hand in influencing Norman Borlaug to pursue a career in phytopathology.

In 1938, in a speech entitled "These Shifty Little Enemies that Destroy our Food Crops", Stakman discussed the manifestation of the plant disease rust, a parasitic fungus that feeds on phytonutrients, in wheat, oat and barley crops across the US. He had discovered that special plant breeding methods created plants resistant to rust. His research greatly interested Borlaug, and when Borlaug's job at the Forest Service was eliminated due to budget cuts, he asked Stakman if he should go into forest pathology. Stakman advised him to focus on plant pathology instead, and Borlaug subsequently re-enrolled at the University of Minnesota to study plant pathology under Stakman. Borlaug went on to develop varieties of dwarf wheat that helped reduce famine in India, Pakistan, and other countries, and received the Nobel Peace Prize for his work in 1970.

Stakman died in 1979 of a stroke. In Stakman's honor, Stakman Hall on the University of Minnesota's St. Paul campus was named for him, providing space for Plant Pathology and related fields.

References

Notes

External links
Elvin C. Stakman papers, University Archives, University of Minnesota - Twin Cities: http://archives.lib.umn.edu/repositories/14/resources/1744

Category:1885 births
Category:1979 deaths
Category:American botanists
Category:American mycologists
Category:University of Minnesota alumni
Category:People from Saint Paul, Minnesota
// Created by cgo -godefs - DO NOT EDIT
// cgo -godefs defs_linux.go

package ipv6

const (
	sysIPV6_ADDRFORM       = 0x1
	sysIPV6_2292PKTINFO    = 0x2
	sysIPV6_2292HOPOPTS    = 0x3
	sysIPV6_2292DSTOPTS    = 0x4
	sysIPV6_2292RTHDR      = 0x5
	sysIPV6_2292PKTOPTIONS = 0x6
	sysIPV6_CHECKSUM       = 0x7
	sysIPV6_2292HOPLIMIT   = 0x8
	sysIPV6_NEXTHOP        = 0x9
	sysIPV6_FLOWINFO       = 0xb

	sysIPV6_UNICAST_HOPS    = 0x10
	sysIPV6_MULTICAST_IF    = 0x11
	sysIPV6_MULTICAST_HOPS  = 0x12
	sysIPV6_MULTICAST_LOOP  = 0x13
	sysIPV6_ADD_MEMBERSHIP  = 0x14
	sysIPV6_DROP_MEMBERSHIP = 0x15

	sysMCAST_JOIN_GROUP         = 0x2a
	sysMCAST_LEAVE_GROUP        = 0x2d
	sysMCAST_JOIN_SOURCE_GROUP  = 0x2e
	sysMCAST_LEAVE_SOURCE_GROUP = 0x2f
	sysMCAST_BLOCK_SOURCE       = 0x2b
	sysMCAST_UNBLOCK_SOURCE     = 0x2c
	sysMCAST_MSFILTER           = 0x30

	sysIPV6_ROUTER_ALERT  = 0x16
	sysIPV6_MTU_DISCOVER  = 0x17
	sysIPV6_MTU           = 0x18
	sysIPV6_RECVERR       = 0x19
	sysIPV6_V6ONLY        = 0x1a
	sysIPV6_JOIN_ANYCAST  = 0x1b
	sysIPV6_LEAVE_ANYCAST = 0x1c

	sysIPV6_FLOWLABEL_MGR = 0x20
	sysIPV6_FLOWINFO_SEND = 0x21

	sysIPV6_IPSEC_POLICY = 0x22
	sysIPV6_XFRM_POLICY  = 0x23

	sysIPV6_RECVPKTINFO  = 0x31
	sysIPV6_PKTINFO      = 0x32
	sysIPV6_RECVHOPLIMIT = 0x33
	sysIPV6_HOPLIMIT     = 0x34
	sysIPV6_RECVHOPOPTS  = 0x35
	sysIPV6_HOPOPTS      = 0x36
	sysIPV6_RTHDRDSTOPTS = 0x37
	sysIPV6_RECVRTHDR    = 0x38
	sysIPV6_RTHDR        = 0x39
	sysIPV6_RECVDSTOPTS  = 0x3a
	sysIPV6_DSTOPTS      = 0x3b
	sysIPV6_RECVPATHMTU  = 0x3c
	sysIPV6_PATHMTU      = 0x3d
	sysIPV6_DONTFRAG     = 0x3e

	sysIPV6_RECVTCLASS = 0x42
	sysIPV6_TCLASS     = 0x43

	sysIPV6_ADDR_PREFERENCES = 0x48

	sysIPV6_PREFER_SRC_TMP            = 0x1
	sysIPV6_PREFER_SRC_PUBLIC         = 0x2
	sysIPV6_PREFER_SRC_PUBTMP_DEFAULT = 0x100
	sysIPV6_PREFER_SRC_COA            = 0x4
	sysIPV6_PREFER_SRC_HOME           = 0x400
	sysIPV6_PREFER_SRC_CGA            = 0x8
	sysIPV6_PREFER_SRC_NONCGA         = 0x800

	sysIPV6_MINHOPCOUNT = 0x49

	sysIPV6_ORIGDSTADDR     = 0x4a
	sysIPV6_RECVORIGDSTADDR = 0x4a
	sysIPV6_TRANSPARENT     = 0x4b
	sysIPV6_UNICAST_IF      = 0x4c

	sysICMPV6_FILTER = 0x1

	sysICMPV6_FILTER_BLOCK       = 0x1
	sysICMPV6_FILTER_PASS        = 0x2
	sysICMPV6_FILTER_BLOCKOTHERS = 0x3
	sysICMPV6_FILTER_PASSONLY    = 0x4

	sysSOL_SOCKET       = 0x1
	sysSO_ATTACH_FILTER = 0x1a

	sizeofKernelSockaddrStorage = 0x80
	sizeofSockaddrInet6         = 0x1c
	sizeofInet6Pktinfo          = 0x14
	sizeofIPv6Mtuinfo           = 0x20

	sizeofIPv6FlowlabelReq = 0x20

	sizeofIPv6Mreq       = 0x14
	sizeofGroupReq       = 0x88
	sizeofGroupSourceReq = 0x108

	sizeofICMPv6Filter = 0x20
)

type kernelSockaddrStorage struct {
	Family  uint16
	X__data [126]int8
}

type sockaddrInet6 struct {
	Family   uint16
	Port     uint16
	Flowinfo uint32
	Addr     [16]byte /* in6_addr */
	Scope_id uint32
}

type inet6Pktinfo struct {
	Addr    [16]byte /* in6_addr */
	Ifindex int32
}

type ipv6Mtuinfo struct {
	Addr sockaddrInet6
	Mtu  uint32
}

type ipv6FlowlabelReq struct {
	Dst        [16]byte /* in6_addr */
	Label      uint32
	Action     uint8
	Share      uint8
	Flags      uint16
	Expires    uint16
	Linger     uint16
	X__flr_pad uint32
}

type ipv6Mreq struct {
	Multiaddr [16]byte /* in6_addr */
	Ifindex   int32
}

type groupReq struct {
	Interface uint32
	Pad_cgo_0 [4]byte
	Group     kernelSockaddrStorage
}

type groupSourceReq struct {
	Interface uint32
	Pad_cgo_0 [4]byte
	Group     kernelSockaddrStorage
	Source    kernelSockaddrStorage
}

type icmpv6Filter struct {
	Data [8]uint32
}

type sockFProg struct {
	Len       uint16
	Pad_cgo_0 [6]byte
	Filter    *sockFilter
}

type sockFilter struct {
	Code uint16
	Jt   uint8
	Jf   uint8
	K    uint32
}
In March and April 2019, online reports asserted that Texas state legislators had voted for a bill that would allow medical professionals to refuse to treat LGBT patients for religious reasons. On 26 March, for example, the website LGBTQ Nation published an article under the headline “Texas Republicans advance a bill that would allow doctors to refuse LGBTQ patients,” which reported that: A bill that would allow state-licensed professionals to refuse to serve LGBTQ people if they cite their religion has advanced out of committee in the Texas senate. Senate Bill 17 would prevent state licensing agencies from denying or revoking licenses from professionals – including doctors, lawyers, pharmacists, and even barbers – if they claim to be following a ‘sincerely held religious belief.’ That article was shared widely on social media and prompted multiple inquiries from Snopes readers about its veracity. What SB 17 says Senate Bill 17 (SB 17) would impose restrictions on professional licensing bodies in the state of Texas, limiting their ability to hold individuals’ “sincerely held religious belief” (or actions or statements they made based on such beliefs) against them in rendering decisions about whether to issue, renew, or revoke professional licenses. Republican State Senator Charles Perry introduced SB 17 on 7 March, and four weeks later the Texas Senate voted 19-12 to pass the bill. The next day it was put before the House of Representatives committee for State Affairs, and as of 9 April it was still under consideration by the House of Representatives. As of 3 April 2019, the text of the legislation stated the following: Sec. 57.003. 
A state agency that issues a license or otherwise regulates a business, occupation or profession may not adopt any rule, regulation, or policy or impose a penalty that: (1) limits an applicant’s ability to obtain, maintain, or renew a license based on a sincerely held religious belief of the applicant; or (2) burdens an applicant’s or a license holder’s (A) free exercise of religion, regardless of whether the burden is the result of a rule generally applicable to all applicants or license holders; (B) freedom of speech regarding a sincerely held religious belief; or (C) membership in any religious organization. The bill would also allow licensed professionals (such as lawyers, doctors, nurses, etc.) to offer their religious beliefs as a defense in an administrative proceedings against them: “Sec. 57.004. A person may assert that a state agency rule, regulation, or policy, or a penalty imposed by the agency, violates Section 57.003 as a defense in an administrative hearing or as a claim or defense in a judicial proceeding under Chapter 37, Civil Practice and Remedies Code …” It appears that in practice, this section in the legislation would have the effect seen in the following example: A physician is qualified to perform abortions, but declines to perform an abortion for a patient who approaches them due to the doctor’s sincere religious objection to abortion. If that would-be patient were to file a complaint against the doctor, and a review or disciplinary proceeding ensued in which there was, theoretically, the potential that the physician’s license to practice medicine could be revoked, that physician would, under the legislation, be able to offer the fact that their actions were based upon a sincerely-held religious belief as a defense against that outcome. 
The text of the bill also contains several conditions and stipulations, including: The religious belief defense cannot be used where an individual has committed a criminal offense or has been accused of sexual misconduct. The law would not allow a medical professional to withhold treatment which they are qualified to provide if that treatment is needed to prevent death or imminent serious injury. The law would not apply to first responders. The bill was not exclusive to medical professionals, nor did it make any mention of the LGBT community, specify the kinds of actions that might be regarded as being grounded in a person's "sincerely-held religious belief," or explicate how a licensing body might determine the sincerity of a claimed religious defense. The text of the bill also alluded to the complexity of the broader legal and philosophical conflict between an individual's right to live in accordance with their religious faith and an individual's right not to be subjected to discrimination, stipulating that the bill would not "limit any right, privilege, or protection granted to any person under the constitution and laws of this state and the United States." An important point is that the legislation would not guarantee that the "religious belief" defense would be successful in an administrative proceeding. During a 2 April Senate debate on SB 17, the bill's author, Republican State Senator Charles Perry, explained that the bill would only provide an additional "enumerated defense" against the revocation of a license, but that a licensing body could still proceed to take away an individual's license, notwithstanding their "sincerely-held religious belief" defense, if their actions constituted a violation of state or federal law or the requirements of their professional license. Democratic State Senator José Rodriguez teased out that clarification in an exchange with Perry during the Senate debate on 2 April. (That exchange can be viewed here, starting at 1:33.00. 
The entire debate starts at 0:58.00): Perry: … This bill doesn’t do anything to undermine existing state or federal law. So if it’s against the law to discriminate — and “discriminate” can include a lot of different things, based on religion or based on race or based on a whole host of enumerated items already — then I’m in violation of federal and state law. Senate Bill 17 doesn’t apply. Rodriguez: I guess I’m having trouble understanding, then, the purpose of this defense. Perry: It’s an administrative defense. Rodriguez: But it’s intended to prohibit the agency from taking away your license because of your refusal to provide a service, isn’t it? Perry: No Sir. It’s intended to provide a defense, and then the agency can decide, and if you want to take [the license] away [despite an] application of sincerely-held religious belief, then that’s the agency’s decision, and it would go to the next level [an appeal through the courts]. So according to the text of the bill, as well as clarification offered by its author, SB 17 would not permit acts of discrimination which were already illegal under federal or Texas law, or were a violation of a particular licensing agency’s requirements. Would SB 17 “allow doctors to refuse LGBTQ patients”? The text of the bill itself does not actually explicitly confer upon doctors or other licensed professionals an affirmative right to refuse to serve anyone. Formally speaking, SB 17 does not do that. For that reason, LGBTQ Nation’s headline claim that the bill would “allow doctors to refuse patients” is something of an over-simplification of the reality. The next issue is whether the religious belief defense provides enough protection to licensed professionals that it effectively amounts to permission to refuse to serve or treat certain people — a “license to discriminate,” as the ACLU of Texas has described the provisions of SB 17. 
Notwithstanding the fact that the law would not explicitly give doctors, for example, the right to refuse LGBT patients, a scenario in which they were assured that discriminating in this way would not lead to their licenses being revoked could quite reasonably be described as one which effectively permitted or allowed them to act in that way. However, it's not clear that SB 17 would in fact grant such de facto permission. One section of the text states that the bill "may not be construed to … limit any right, privilege, or protection granted to any person under the constitution and laws of this state and the United States." This indicates that a licensed professional would not be protected from having their license revoked if their act of discrimination (refusal to serve) violated federal or state law, but they would be able to specifically cite their religious belief as the reason for their refusal to serve. As SB 17's author Charles Perry said, "This bill doesn't do anything to undermine existing state or federal law." However, to complicate matters further, it appears that federal and Texas state laws do not definitively prohibit the kind of discrimination described by LGBTQ Nation in the first place — that is, a doctor's refusing to serve a patient due to religious objections to the patient's sexual orientation or gender identity. Christy Mallory, Director of State Policy and Education Initiatives at UCLA's Williams Institute, told us by email that "Federal and Texas state laws do not explicitly protect people from being refused service based on their actual or perceived sexual orientation and gender identity. Texas state law does prohibit discrimination based on sex. 
A number of courts have interpreted the term ‘sex’ in non-discrimination laws to also prohibit discrimination based on sexual orientation and gender identity, so a court could interpret Texas’s public accommodations non-discrimination law to prohibit service refusals based on an individual’s sexual orientation or gender identity.” Title 3 of the Texas Occupations Code sets out the rights and obligations of licensed health professionals in the state, as well as the penalties and procedures related to misconduct. Nothing in those regulations prohibits a health professional from refusing to treat or serve an individual on the basis of the patient’s perceived sexual orientation or gender identity. The code does not mention “sexual orientation,” “gender identity,” “gender” or any of the descriptors which make up the LGBTQ initialism. We contacted a spokesperson for the Texas Medical Board seeking clarification on whether any rule that regulates the conduct of health professionals in the state could prevent them from refusing to treat an LGBT patient, but we did not receive a response in time for publication. So if, as appears the case, the regulations and requirements specific to medical professionals in Texas do not prohibit them from refusing to treat someone based on an objection to the patient’s sexual orientation or gender identity, and neither does federal law, then SB 17 would not “allow doctors to refuse LGBTQ patients,” because doctors are already allowed to refuse LGBTQ patients. In an effort to remove any ambiguity from the situation and definitively prohibit refusals to serve LGBT patients, Democratic State Senator José Menéndez proposed an amendment to SB 17 which would have stated that licensed professionals could not refuse to provide a service “based on the sexual orientation or gender identity of the person requesting the service.” That amendment was voted down, 18-13. Would SB 17 encourage doctors to refuse LGBTQ patients? 
It’s difficult to argue that the legislation itself would “allow” or “permit” a doctor to refuse to treat an LGBT patient, especially since it appears federal and Texas law already effectively allow such discrimination. However, some activists have expressed concerns that introducing SB 17 could indirectly encourage and embolden religiously-motivated service refusals that target LGBT persons, as well as make it more complicated and difficult for licensing agencies to hold such behavior to account. Logan Casey, a policy researcher at the Movement Advancement Project think tank, outlined those concerns in an email, writing: SB 17 would prevent state agencies and licensing boards from setting and enforcing the standards of how all Texans ought to be treated fairly and equally. In other words, it would tie the hands of those whose job it is to prevent discrimination whenever possible. Because the bill would allow individuals to cite their religious beliefs as a reason to refuse service, no matter what their licensing board says is required for their job, SB 17 would effectively embolden discrimination in any state-regulated or -licensed profession, from barbershops and cab drivers to medical and mental health providers. For example, a doctor could refuse to serve a patient based on the doctor’s religiously-based beliefs about marriage — which could mean they could refuse to serve unmarried couples, same-sex couples, interfaith couples, and more. State licensed agencies in multiple states have also used similar laws to discriminate against people of different faiths who wish to serve as foster parents. Furthermore, while the legislation would, according to its author Charles Perry, still give licensing agencies the ability to revoke professional licenses in the event of illegal acts of discrimination, it is reasonable to expect that SB 17 might discourage them from doing so. 
SB 17 would allow a licensed professional such as a doctor or lawyer to put on record the fact that their actions were an expression of their sincerely-held religious beliefs. In the event that their license was revoked anyway, this could very plausibly strengthen that doctor or lawyer’s case, if they decided to appeal the decision through the courts. Under SB 17, they could argue to a judge that because they put on record the reasons for their behavior, the licensing agency had knowingly and explicitly violated their 1st Amendment right to freely express their religious beliefs, by revoking their license.

This could set in motion a protracted or high-profile legal saga which invoked profound constitutional principles and could place the licensing agency under increased public scrutiny and even civil liability. Rather than risk being exposed to those negative outcomes, the licensing agency might decide against revoking a license, or opt for a less severe punishment, thereby potentially creating a set of expectations which encourage license-holders to engage in religiously-motivated service refusals.

Finally, the introduction of SB 17 could also quite reasonably be expected to embolden religiously-motivated doctors, nurses, lawyers and others to engage in service refusals in the first place, where previously they might not. Even if the bill would not necessarily inoculate professionals from the risk of having their licenses revoked, it could quite plausibly create a sense that their actions had been given an additional layer of protection. As Senator Perry said during the 2 April debate, SB 17 would add “one more tool in the tool chest.”

That sense of emboldenment might be especially strong in the case of medical professionals’ refusing treatment to LGBT patients, something which federal and Texas law does not appear to definitively prohibit in the first place.
{ "pile_set_name": "OpenWebText2" }
I love the Unity dash and lenses, but it only finds files that are under /home/mylogin. I would like it to find files that are under /media/MyWindowsDrive/Users/MyWinLogin. I tried creating links to the Music, Pictures and Video folders on my Windows drive and putting the links in the corresponding folders in /home/mylogin, but this doesn't cause Ubuntu (the dash) to index and find these files.
{ "pile_set_name": "Pile-CC" }
Room-temperature synthesis of MnO2·3H2O ultrathin nanostructures and their morphological transformation to well-dispersed nanorods. MnO2·3H2O ultrathin nanostructures with sizes of approximately 2-3 nm were synthesized at room temperature and transformation to well-dispersed nanorods was achieved after hydrothermal treatments.
{ "pile_set_name": "PubMed Abstracts" }
When are withdrawals processed? Withdrawals are processed once per week. All withdrawals requested before 9pm UTC on Wednesday will be consolidated and received by 9pm UTC on Friday. Please allow at least 12 hours for time zone differences. You can also check your withdrawal status by tapping on the requested payment option in the "history" tab. Note: Normally, withdrawals will be processed within a few minutes, but there are cases due to blockchain congestion where it could take several days for it to arrive.
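The cutoff-and-payout schedule described above amounts to a small date calculation. The sketch below is illustrative only and is not code from the service itself; it assumes the rule exactly as stated: requests made before Wednesday 21:00 UTC are consolidated and received by Friday 21:00 UTC of the same week, and later requests roll over to the next weekly batch.

```python
from datetime import datetime, timedelta, timezone

def payout_deadline(request: datetime) -> datetime:
    """Friday 21:00 UTC payout deadline for a withdrawal requested at `request`.

    Requests made at or after the Wednesday 21:00 UTC cutoff roll over
    to the following week's batch.
    """
    # Next Wednesday (weekday 2; Monday is 0) at 21:00 UTC.
    days_ahead = (2 - request.weekday()) % 7
    cutoff = (request + timedelta(days=days_ahead)).replace(
        hour=21, minute=0, second=0, microsecond=0)
    if request >= cutoff:  # missed this week's cutoff
        cutoff += timedelta(days=7)
    return cutoff + timedelta(days=2)  # the Friday after the cutoff

# A request on Tuesday 2 Jan 2024 10:00 UTC is paid by Friday 5 Jan 21:00 UTC.
print(payout_deadline(datetime(2024, 1, 2, 10, tzinfo=timezone.utc)))
```

For example, a request made on Tuesday is paid out that same Friday, while a request made on Wednesday at 22:00 UTC waits for the following week's batch.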
{ "pile_set_name": "Pile-CC" }
print("Hello World")
print("Hello World")
{ "pile_set_name": "Github" }
Playscapes

Playscapes is a playground designed by artist and sculptor Isamu Noguchi and completed in 1976. The playground is located in Piedmont Park, Atlanta, Georgia.

References

Category:Playgrounds
Category:Play (activity)
Category:Outdoor recreation
Category:Parks
{ "pile_set_name": "Wikipedia (en)" }
In current wireless networks, a terminal needs to perform network discovery and physical connection establishment in a number of operation modes/states, such as: in the initial network access state for getting initial network access; in power-saving mode (e.g., idle mode) for continuously monitoring the tracking area for location update; or in connection mode (e.g., active mode) for handover. The search includes carrier/channel search, time synchronization, frame boundary search, etc. Current network discovery/search generally is based on a blind physical layer (PHY) search and measurement by the user equipment (UE). The complexity of this operation depends on the size of the search space. The blind search and measurement approach is both time and battery power consuming for the UE or mobile terminal.

In evolving wireless networks such as fifth generation (5G) dense heterogeneous network (HetNet) deployments, UE discovery of a small cell may be difficult using strictly PHY measurement by the UE, for example due to a strong macro signal. Similarly, in a 5G multi-interface/multi-carrier band co-existing network, network discovery/search is difficult using only PHY measurements due to the much larger search space. This approach can be problematic for mobile terminal handover (HO) in active state and for tracking area (TA) tracking in power saving mode in 5G implementation, for example. The problems above become more severe in dense wireless network implementation. There is a need for an improved wireless network discovery/search and physical connection establishment method that overcomes such issues.
{ "pile_set_name": "USPTO Backgrounds" }
1. Field of the Invention

The present invention relates to an inkjet print head and a method of manufacturing the same, and more particularly, to an inkjet print head and a method of manufacturing the same that can prevent the ingress of foreign bodies, generated when nozzles are opened and inkjet print heads are cut into chip units, into the nozzles when side shooting type inkjet print heads are manufactured.

2. Description of the Related Art

In general, an inkjet print head is a structure that converts an electrical signal into a physical force so that ink is ejected in droplets through small nozzles. Inkjet print heads are divided into side shooting type inkjet print heads and roof shooting type inkjet print heads according to the direction in which pressure is exerted upon ink and the direction in which ink droplets are ejected. As for a side shooting type inkjet print head, the direction in which pressure is exerted upon ink is perpendicular to the direction in which ink droplets are ejected. As for a roof shooting type inkjet print head, a direction in which pressure is exerted on ink is the same as a direction in which ink droplets are ejected.
{ "pile_set_name": "USPTO Backgrounds" }
Respective and interactive effects of doubled CO2 and O3 concentrations on membrane lipid peroxidation and antioxidative ability of soybean. Effects of doubled CO2 and O3 concentrations on soybean were studied in open-top chambers (OTC). Under doubled CO2 concentration, grain yield and biomass increased, and the SOD activity, vitamin C (Vc) and carotenoid (Car) contents also increased; the superoxide (O2-*) generating rate decreased, and relative conductivity and malondialdehyde (MDA) content significantly declined. But under doubled O3 concentration, the SOD activity, Vc and Car contents declined, resulting in an imbalance of activated-oxygen production, an enhanced O2-* generating rate and an accelerated process of lipid peroxidation, with an increase in MDA content and ion leakage of leaves. The final result was decreased grain yield and plant biomass. Interactive effects of doubled CO2 and O3 concentrations on soybean were mostly counteractive. However, the beneficial effects of concentration-doubled CO2 more than compensated for the negative effects imposed by doubled O3, and the latter in turn partly counteracted the positive effects of the former.
{ "pile_set_name": "PubMed Abstracts" }
Almost a year ago, city officials united in outrage over damage to Balboa Park’s iconic lily pond and promised those responsible would be brought to justice. But that’s not going to happen: The case is now closed and no one will be prosecuted.

The city attorney’s office ended its investigation May 31 after determining there was “no evidence as to the identity of individuals engaged in vandalism that resulted in property damage,” said Michael Giorgino, a spokesman for the city attorney’s office. “Although there was some evidence of individuals involved in promoting the event, our prosecutors found that evidence to be insufficient to prove a crime or obtain an order of restitution for vandalism committed by others.” In the big picture, he said, prosecutors “determined there was not sufficient evidence to prove a criminal case beyond a reasonable doubt.”

The damage to the lily pond on the night of Aug. 11, 2012, was one of the biggest local news stories of the year and even became fodder in the mayoral campaign. Here’s what happened: Thanks to word spread through Facebook, hundreds of locals gathered around midnight for what was to be a harmless mass water-gun fight at Balboa Park. The crowd met at the park’s grand fountain, but it wasn’t working, so they went to the lily pond in search of water. And then a raucous water-gun fight began, much of it captured in video and photos by thrilled participants.

Some participants trampled plants around the pond and even jumped into it. They left trash at the scene, and the pond’s koi and turtles were supposedly “greatly stressed.” (The fish weren’t killed, however.) No cops seem to have been present, although U-T San Diego reported that police knew about the event, which had been held a year earlier with no problems. Stoked by extensive media coverage, elected officials were furious.
“We will hold those who did this accountable for their actions — which may be criminal — and for every penny it costs to return this area to its original beauty,” then-Mayor Jerry Sanders declared. Councilman Todd Gloria tweeted that “those responsible for this destruction will be held responsible.”

The lily pond damage also crept into the mayoral campaign. Then-Rep. Bob Filner’s campaign accused the partner of his rival, then-Councilman Carl DeMaio, of orchestrating the event. He didn’t, prompting one of DeMaio’s operatives to call Filner “a lying sack of marbles.”

An investigation began after the damage was discovered. Well, sort of. According to U-T San Diego, the Police Department assigned two detectives to investigate the case. But as of January, they hadn’t talked to the local writer who’d written extensively about the case, taken video and photographs and posted a diatribe about the event and reckless media sensationalism on YouTube. An investigator did talk to the writer after the VOSD report.

Donations helped fund repair and renovation at the lily pond, which was estimated to have suffered $10,000 worth of damage. The work was finished by February. The investigation then ended a few weeks ago in May, a few months shy of the one-year deadline for misdemeanor charges to be filed.

Voice of San Diego is a nonprofit that depends on you, our readers. Please donate to keep the service strong. Click here to find out more about our supporters and how we operate independently.
{ "pile_set_name": "OpenWebText2" }
We develop Tutanota to fight for our right to privacy and for freedom of speech, so donating Secure Connect to news sites and journalists, who defend free speech every day with their work, comes as a matter of course to us. Check out our Demo of Secure Connect.

Secure Connect is an open source encrypted communication tool which lets whistleblowers get in touch with the representative of a news site securely and anonymously. It can be added to any website like a standard contact form. The unique benefit is that all data entered into the contact form is automatically encrypted end-to-end before it is sent to the mailbox of the website owner. Even files can be dropped into the form, which are then encrypted automatically.

As websites usually track IP addresses of visitors, anonymous usage of Secure Connect can be achieved by accessing the website in question via the Tor browser. When a news site adds Secure Connect, the site should clearly state that whistleblowers must access the encrypted contact form via Tor to protect their identity.

Day of Press Freedom: Free communication tool for whistleblowers

To support the crucial work of journalists and whistleblowers, our encrypted contact form Secure Connect will be free for journalists to place on their websites. We believe in the Human Rights to Privacy and Freedom of Speech – and a secure and private form to communicate online is critical to achieve free speech. With Secure Connect we want to support journalists, activists and whistleblowers for the important work they are doing for all of us. Read these testimonials from journalists and bloggers.

This new tool powered by the encrypted email service Tutanota enables literally every blog to offer an encrypted communication channel to potential whistleblowers and activists, without having to set up or maintain their own server for this.
As all data is automatically encrypted locally on the device (end-to-end encryption), neither the service provider Tutanota nor any other third party can access this information.

Secure Connect – how it works

When a news site has added Secure Connect to their website, potential whistleblowers can simply go to that page – best via Tor to protect their identity – and type in the information and drop files they would like to submit to the news site. Secure Connect gives them a random, anonymous email address and a password, which lets the whistleblower re-access his sent message at a later stage and check for replies from the news site. By using Secure Connect, the whistleblower can open a secure and anonymous communication channel without using a personal email address or phone number.

Secure Connect – technical instructions

When someone starts to communicate with you via the encrypted contact form Secure Connect, the entire communication will be encrypted end-to-end. Encryption takes place locally in the browser so that no third party – not even we as the provider of Secure Connect – can access this information.

How to start an encrypted communication channel via Secure Connect

1. Click on Create Request.
2. Enter a subject line.
3. Choose a password and repeat the password. In case you want to check for replies later, write down the password somewhere safe.
4. Enter your message. Drag and drop files into the message field or click on the symbol in the top right corner to attach files. They are automatically attached to the message.
5. In case you want to be notified about replies, enter an email address at the end. This is optional. If you want to stay anonymous we recommend not entering an email address here.
6. Click on Send in the top right corner.

A random email address for your encrypted communication channel has been created. Write down this email address (and the previously chosen password) to re-access your encrypted communication channel later.
While sending the encrypted message via Secure Connect, Tutanota automatically creates a mailbox for the sender with an automatically generated email address of your custom email domain. The sender can log in with the selected password to read your reply and also reply again. With Secure Connect an encrypted communication channel has been established that is both easy to use and secure.

How to configure Secure Connect as a website owner

You can configure most of your version of Secure Connect (text, style, links etc.) yourself to adapt it to your Corporate Identity. You can even enter texts in different languages to cover different nationalities of your website visitors.

Preconditions to set up a Secure Connect encrypted contact form: You need to set up a whitelabel domain with Tutanota. Then you can book and add Secure Connect to your website.

When you order the whitelabel feature, you have two options: The whitelabel feature is already included in the Pro subscription. Alternatively, you can order it separately in your Premium account.

Journalists get Secure Connect for free by contacting press@tutao.de and supplying a link to their website. NPOs get the business version of Tutanota at half price, which includes Premium, whitelabel and Secure Connect.

Easy to add for any website

For the first time, also smaller news agencies and blogs of Human Rights activists can offer a secure communication channel for potential whistleblowers, because Secure Connect is so easy to add to any blog.

Fight for Press Freedom

We hope that Secure Connect will help journalists and activists across the world to fight for Press Freedom, Freedom of Speech and our Right to Privacy. To fight for these fundamental human rights has been our mission since we started building the encrypted email service Tutanota, and it is a value shared throughout our community. We are happy that we can now support journalists and whistleblowers around the world with our software donation.
Together we will stop illegal mass surveillance!
{ "pile_set_name": "OpenWebText2" }
Normally, I would applaud the building of walls; however, recent news from Paris only confirms the dystopian world that we’re all living in. From Politico:

The Council of Paris unanimously agreed Monday to a proposal to erect a bulletproof wall around the Eiffel Tower in response to the terror threat in France, Le Monde reported. Parisian authorities will put up bulletproof glass on two sides of the tower area, while the two other sides, which serve as entrance and exit points, will be enclosed by metal grids “reproducing the profile of the Eiffel Tower.”

Can you imagine? Instead of confronting the rising crime in our once great cities, Paris has decided to double down on a bulletproof wall, which in practice will only act as another source of anarcho-tyranny to natives. This is part of a greater trend throughout the West which aims to provide preventative actions (think the TSA) and “enhanced” security instead of attacking problems at their root. This is because to ask the question of why the Eiffel Tower needs a bulletproof wall is to reveal the contradictions at the heart of the decaying hegemony of liberal discourse in our societies. Instead, we are told that terror attacks are just part of living in a major city, our “way of life” so to speak. We bind ourselves up in a hell and call it heaven because we refuse to imagine any other way of living. Even when it was not so long ago that the idea of a 20 million Euro “bulletproof” wall around the Eiffel Tower would be absurd.

There is another way. We on the Alt-Right imagine a different world from that of the deputy mayor of Paris. We imagine a world where you can roam in our once great cities, stroll along its boulevards, and sit in its cafes without worry — a world where France is France, and there will always be an England. In sum, we imagine a world where we have homes.

With this latest move, the city of light is embracing the cloud of darkness that threatens to engulf all of our countries.
As Donald Trump said, “Paris isn’t Paris anymore.” With this latest news, the truth of this statement couldn’t be any clearer.
{ "pile_set_name": "OpenWebText2" }
Q: Specify App Delegate for Storyboard in iOS

I'm trying to change my App's main interface, which currently is set to a .xib file in the project's "General Configuration Pane", to a new storyboard. I have created the storyboard, and selected it as the main interface. But when I launch the application in the simulator, I get a black screen and the following message printed in the console: "There is no app delegate set. An app delegate class must be specified to use a main storyboard file." How should I do that?

A: The app delegate class is generally specified in main.m:

int main(int argc, char *argv[])
{
    @autoreleasepool {
        return UIApplicationMain(argc, argv, nil, NSStringFromClass([TCAppDelegate class]));
    }
}

A: Did you set the main window? If you have not, please set your window in your project.
{ "pile_set_name": "StackExchange" }
MY DOG THE CHAMPION - 5 Lucky FIDO Fans Will Win a Copy of This DVD

My Dog the Champion, a story of friendship, love, teamwork, and challenges, is a must-have release for families nationwide. The film is executive produced by Arthur E. Benjamin, the founder of the American Dog Rescue, an all-volunteer, non-profit organization that works to rescue and promote the health and well-being of dogs and other animals all across the world. In addition, Scout, the movie’s starring dog, is herself a rescue!

The inspiring story of an unlikely friendship between a born-and-bred teenage city girl and a cattle dog, the new feature film made its national retail debut on February 4, 2014, AND 5 Lucky FIDO Fans will each win a copy of the DVD - 1 Winner will be chosen EVERY DAY this week – Feb 17 – 21, 2014 - check out the details and how to enter below!

A moving and meaningful story for the whole family, My Dog the Champion (SRP $24.98) begins when Madison (Burge), a spoiled city teen, learns that her mother is being deployed to Afghanistan. She is suddenly forced to uproot to the country and live on her grandfather Billy’s (Henriksen) cattle ranch – in the middle of nowhere. Facing the hardships of farm life and her feelings of loneliness, she befriends Scout, the farm’s old and “useless” cattle dog. The new companions quickly form a special bond. And when the family farm becomes threatened, Madison, with some help from Eli (Linley), a teenage dog trainer, trains Scout to be an agility champion and together they compete with the hope of saving their home. Side by side, the determined duo proves the heart of a champion never dies.

We want to know why your FIDO is a champion to you. Send us your WHY in a short paragraph along with a high-resolution image of your FIDO. Send your entry to contest@fidofriendly.com (please put CHAMPION in the subject line). Contest starts NOW and ends February 21, 2014 at midnight EST.
One winner will be drawn daily from all qualified entries and the winners will be announced on our Facebook Page. Entries without photos of their FIDO will not be considered.

My Dog The Champion (trailer) - Accent Films

Madison, a modern young woman absorbed by the toys of modern technology and slave to materialism, is sent out to a cattle ranch and put to work by her grandfather, Billy. She soon befriends a cattle dog who seems to hate the country life as much as sh
{ "pile_set_name": "Pile-CC" }
We see patterns where none exist. It’s what humans do; in fact, it’s what animals do. Mark Twain noticed this, and had a pithy summary.

We should be careful to get out of an experience only the wisdom that is in it — and stop there; lest we be like the cat that sits down on a hot stove-lid. She will never sit down on a hot stove-lid again — and that is well; but also she will never sit down on a cold one any more.

The wary cat has a theory of the world: Stove burns you. Stay away from stove. Of course, only hot stoves cause burns, so this is not a good theory. But consider two cats: Cat A believes (correctly) that the theory about stoves causing burns is incomplete, and is not sure what does cause burns. Cat B stays far away from all stoves, hot or cold, because they magically cause pain and injury in a way the cat doesn’t understand. It’s pretty clear that Cat B is more likely to survive.

The Conspiracy-Theory Mindset

An interesting paper, forthcoming in the European Journal of Social Psychology, titled “Connecting the Dots: Illusory Pattern Perception Predicts Belief in Conspiracies and the Supernatural,” investigates this kind of thinking. The central finding is that many people who believe in complex conspiracies and supernatural phenomena also see “patterns” in random data. The authors conclude that “illusory pattern perception is a central cognitive mechanism accounting for conspiracy theories and supernatural beliefs.”

The interesting part is that humans also add a moral element. We want someone to blame. In the case of the human analogue of stoves — perhaps droughts, floods, or natural disasters — primitive peoples think that angry gods or malevolent forces can be appeased by appropriate human actions. They make sacrifices, and if something bad happens to someone, they imagine that evil forces are at work, or else the person somehow deserved the punishment for having themselves acted badly.
I’m not saying this is a conscious, rational response. Quite the contrary. It’s the irrational part of this that makes it adaptive. The belief that I can understand, and take actions to prevent, terrible events is a key part of being happy, and even healthy. As I pointed out in my Analyzing Policy (2000) book, the belief that pork contained evil spirits, or that God had commanded that no one eat pork, proved adaptive in desert regions where deadly invisible parasites were likely to infest the meat. You didn’t have to understand the mechanism, as long as you happened upon a good rule and then imbued that rule with magical significance.

A Disaster without an Explanation

One of the most famous catastrophes in European history was the Lisbon earthquake of 1755, in Portugal. The earthquake itself caused fires immediately, as well as a tsunami that arrived in about 40 minutes. All told, between 10,000 and 100,000 people died. The question asked throughout Enlightenment-era Europe was, “What did Lisbon do to deserve such punishment?” There were some theories, including religion, sexual mores, and some more fanciful claims, but it seemed hard to believe that these could explain why Lisbon would be almost completely destroyed and Berlin, Paris, Prague, Vienna, and other capitals of sin would be untouched.

Unless … what if there was no explanation? No one had done anything, good or bad, to cause it. The earthquake just happened. Now, humans usually don’t like answers like that; we can’t plan, and we can’t identify patterns. On the other hand, accepting that there may be no explanation, much less a blame-worthy action as cause, for events is a step toward thinking scientifically. In a way, the Lisbon earthquake was a great benefit to the cause of enlightenment.
As John Hamer said,

One of modernity’s greatest achievements is the realization that natural disasters like earthquakes have nothing to do with us, that we need not see the wrath of Zeus in every thunderclap, the displeasure of Poseidon in every menacing wave. The origins of that realization are to be found in the smoldering ashes of Lisbon.

Backsliding on Scientific Modernity

It’s easy to slide back though. And maybe it’s just my imagination, but we seem to be sliding backward pretty fast. After Hurricane Katrina, a few religious leaders went all “wrath of God” on us, claiming that New Orleans was a modern Gomorrah. After Hurricane Harvey this summer, and again after Hurricane Maria, we were told that Houston, and Puerto Rico, must have done something to deserve this terrible punishment. Sometimes the “sin” is promoting “the homosexual agenda,” of course. Folks on the left tend to think that this kind of religious view is ridiculous, but they are happy to trot out their own “you deserved it!” line of pattern recognition. For the Left, the sin is often the use of fossil fuels.

In fact, after Harvey happened to stop in place for 48 hours, immobilized by an utterly random high pressure area to the north, several clear “causes” emerged. One was global warming. Another (I love this one) was “lack of zoning.” Houston, thou hast sinned in the eyes of the planners of urban sacredness, and thou wilt now suffer the consequences!

In fact, as a number of people have pointed out, the scientific basis for blaming the specific path or effects of one hurricane on any one factor is extremely tenuous. It’s unlikely that global warming or lack of zoning caused that specific event and its effects. The rise in the temperature of the Gulf of Mexico at this point is less than 1 degree centigrade; the amount of rain dropped by Harvey was actually within normal bounds for a hurricane of that size.
And as for zoning, well, the amount of impermeable paved area in the city doubtless contributed to the severity of the flooding, but 50 inches of rain in two days would not have been “soaked up” by swamps or sloughs; once the ground is saturated, it’s covered with water. And water is impermeable, too.

The difficult thing, and I’d say this to all sides, is to discipline yourself to avoid two atavistic moralistic traps.

Blame the victim: If you see something bad, you find something about the victim you don’t like. And then you say the bad thing happened because the victim did something bad. For example, Puerto Rico allowed promiscuity, or Houston did not have zoning rules to limit growth.

Vengeful gods: If you see something bad, you find something that society as a whole is doing you don’t like. For example, we use fossil fuels, so hurricanes become stuck by high pressure areas, or people have stopped going to church, and so an example had to be made.

As Ron Bailey wrote in Reason, there is no overall trend toward worsening intensity, duration, or frequency of hurricanes hitting the United States. There are plausible theories that predict that this may happen, and there are sensible reasons to evaluate these claims seriously. But it’s a violation of the basic scientific principles of the Enlightenment to blame the victims, or to invoke vengeful gods, as explanations. Sometimes things happen.

Noticing patterns where none exist is part of being human. Overcoming that impulse is the most important part of science.
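The essay's core claim, that streaks and clusters arise readily in pure noise, is easy to check with a short simulation. This is an illustrative sketch, not part of the original essay: it estimates how often a sequence of 20 fair coin flips contains a "streak" of five or more identical outcomes in a row, and the answer turns out to be nearly half the time.

```python
import random

def has_run(flips, length=5):
    """True if `flips` contains `length` identical outcomes in a row."""
    run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        if run >= length:
            return True
    return False

random.seed(0)  # fixed seed so the estimate is reproducible
trials = 100_000
hits = sum(has_run([random.randint(0, 1) for _ in range(20)])
           for _ in range(trials))
print(f"{hits / trials:.1%} of random sequences contain a 5-long streak")
```

Anyone who saw five heads in a row and inferred a rigged coin would be "connecting dots" that plain randomness produces all the time.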
{ "pile_set_name": "OpenWebText2" }
package io.cucumber.junit;

import io.cucumber.plugin.Plugin;
import org.apiguardian.api.API;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Configure Cucumber's options.
 */
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.TYPE })
@API(status = API.Status.STABLE)
public @interface CucumberOptions {

    /**
     * @return true if glue code execution should be skipped.
     */
    boolean dryRun() default false;

    /**
     * @return true if undefined and pending steps should be treated as
     *         errors.
     * @deprecated will be removed and cucumber will default to strict
     */
    @Deprecated
    boolean strict() default true;

    /**
     * Either a URI or path to a directory of features or a URI or path to a
     * single feature optionally followed by a colon and line numbers.
     * <p>
     * When no feature path is provided, Cucumber will use the package of the
     * annotated class. For example, if the annotated class is
     * {@code com.example.RunCucumber} then features are assumed to be located
     * in {@code classpath:com/example}.
     *
     * @return list of files or directories
     * @see io.cucumber.core.feature.FeatureWithLines
     */
    String[] features() default {};

    /**
     * Package to load glue code (step definitions, hooks and plugins) from.
     * E.g: {@code com.example.app}
     * <p>
     * When no glue is provided, Cucumber will use the package of the annotated
     * class. For example, if the annotated class is
     * {@code com.example.RunCucumber} then glue is assumed to be located in
     * {@code com.example}.
     *
     * @return list of package names
     * @see io.cucumber.core.feature.GluePath
     */
    String[] glue() default {};

    /**
     * Package to load additional glue code (step definitions, hooks and
     * plugins) from. E.g: {@code com.example.app}
     * <p>
     * These packages are used in addition to the default described in
     * {@code #glue}.
     *
     * @return list of package names
     */
    String[] extraGlue() default {};

    /**
     * Only run scenarios tagged with tags matching {@code TAG_EXPRESSION}.
     * <p>
     * For example {@code "@smoke and not @fast"}.
     *
     * @return a tag expression
     */
    String tags() default "";

    /**
     * Register plugins. Built-in plugin types: {@code junit}, {@code html},
     * {@code pretty}, {@code progress}, {@code json}, {@code usage},
     * {@code unused}, {@code rerun}, {@code testng}.
     * <p>
     * Can also be a fully qualified class name, allowing registration of 3rd
     * party plugins.
     * <p>
     * Plugins can be provided with an argument. For example
     * {@code json:target/cucumber-report.json}
     *
     * @return list of plugins
     * @see Plugin
     */
    String[] plugin() default {};

    /**
     * Publish report to https://reports.cucumber.io.
     *
     * @return true if reports should be published on the web.
     */
    boolean publish() default false;

    /**
     * @return true if terminal output should be without colours.
     */
    boolean monochrome() default false;

    /**
     * Only run scenarios whose names match one of the provided regular
     * expressions.
     *
     * @return a list of regular expressions
     */
    String[] name() default {};

    /**
     * @return the format of the generated snippets.
     */
    SnippetType snippets() default SnippetType.UNDERSCORE;

    /**
     * Use filename compatible names.
     * <p>
     * Make sure that the names of the test cases only consist of
     * [A-Za-z0-9_] so that they can safely be used as file names.
     * <p>
     * Gradle for instance will use these names in the file names of the JUnit
     * xml report files.
     *
     * @return true to enforce the use of well-formed file names
     */
    boolean useFileNameCompatibleName() default false;

    /**
     * Provide step notifications.
     * <p>
     * By default steps are not included in notifications and descriptions.
     * This aligns test cases in the Cucumber-JVM domain (Scenarios) with test
     * cases in the JUnit domain (the leaves in the description tree), and
     * works better with the report files of notification listeners like Maven
     * Surefire or Gradle.
     *
     * @return true if steps should be included in notifications
     */
    boolean stepNotifications() default false;

    /**
     * Specify a custom ObjectFactory.
     * <p>
     * In case a custom ObjectFactory is needed, the class can be specified
     * here. A custom ObjectFactory might be needed when more granular control
     * is needed over the dependency injection mechanism.
     *
     * @return an {@link io.cucumber.core.backend.ObjectFactory} implementation
     */
    Class<? extends io.cucumber.core.backend.ObjectFactory> objectFactory() default NoObjectFactory.class;

    enum SnippetType {
        UNDERSCORE, CAMELCASE
    }

}
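The annotation above is configuration only; it does nothing until attached to a JUnit 4 runner class. A minimal sketch of such a runner follows — the package and class names are illustrative (not part of this file), and it assumes the `cucumber-junit` and `junit:junit` dependencies are on the classpath:

```java
package com.example;

import org.junit.runner.RunWith;

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

// With no features/glue specified, Cucumber falls back to this class's
// package: features are read from classpath:com/example and step
// definitions are loaded from com.example, per the javadoc above.
@RunWith(Cucumber.class)
@CucumberOptions(
        plugin = { "pretty", "json:target/cucumber-report.json" },
        tags = "@smoke and not @fast",
        snippets = CucumberOptions.SnippetType.CAMELCASE)
public class RunCucumberTest {
    // Intentionally empty: the Cucumber runner does all the work.
}
```

Running this class with JUnit (e.g. via Maven Surefire or Gradle) would then execute only scenarios matching the tag expression, printing pretty output and writing a JSON report.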
Chelsea have confirmed their latest signing of the summer, with Brazilian attacker Willian signing for the Stamford Bridge club, but questions can be asked over whether the player has made the right move this summer.

BBC Sport's David Ornstein has confirmed that Willian has joined from Anzhi Makhachkala for £30 million on a five-year deal, being allocated the No. 22 shirt. He is Chelsea's fourth senior signing of the summer following Mark Schwarzer, Marco van Ginkel and Andre Schurrle, and he joins a huge list of attacking-midfield talent at the club.

Willian had the option of joining Spurs, where he had a medical before joining Chelsea, while Liverpool were also originally in the frame for his signature.

The new Blues recruit established himself as a threat in European football while playing for Shakhtar Donetsk in the Ukrainian top flight and the Champions League before he moved to Anzhi in January. His stint in Russia was cut short, however, by the Dagestan club's need to bring in funds after owner Suleyman Kerimov opted to stop lavishly funding the side.

The Premier League quickly came calling, and while it is probable that Willian has joined the best-equipped of the three teams to try to win the league title, it is also inarguable that he has opted for the side where he could get the least amount of game time.

Where Liverpool are known to be looking for a left-sided attacker to add to their first XI on a weekly basis, and Tottenham are seeking to replace Gareth Bale with a quality, pacy attacker, Chelsea are stockpiling a final-third player in an already overcrowded squad.

The "Matazar" trio that shone so brightly last season—consisting of Juan Mata, Eden Hazard and Oscar—has now been joined by Van Ginkel and Schurrle, while Kevin de Bruyne has been recalled from his long-term loans in Germany.
Frank Lampard and Ramires, while certainly more "central" midfielders than out-and-out attacking ones, are also capable of filling those roles, and Victor Moses also remains at Stamford Bridge.

Add in Willian, and that gives Chelsea at least eight players battling for the three roles behind and beside a central striker—and Ben Rumsby of the Telegraph reports that Porto's Christian Atsu, who also plays in wide attacking areas, is now linked with a move to Stamford Bridge too.

While it is unthinkable that Chelsea will head past the transfer window closure without offloading at least one or two of those names, it still leaves Willian in the unenviable position of facing a direct battle with Hazard for a place on the left, his regular role with Shakhtar and Anzhi. He can play centrally, of course, but again, in that position he faces huge competition for time on the pitch.

Willian has signed a long-term contract, so Chelsea clearly see him as an important player for now and the future, but if he takes a while to settle into his new team, he could quickly find himself marginalised if others hit the ground running under Jose Mourinho.

A £30 million investment is still a lot of money, even for Chelsea, and some might feel the player himself would have been better served heading to White Hart Lane or Anfield for further first-team football—though, if Chelsea are competing for the league title at the end of the season and he is involved, he will consider his decision fully justified.

All eyes now will be on just how much the Brazilian contributes to the coming campaign.

Follow @karlmatchett
/ C operator tables

.globl	_getwrd
.globl	getw
.globl	fopen
.globl	_tmpfil
.data
_getwrd:
	1f
.text
1:
	tst	buf
	bne	1f
	mov	_tmpfil,r0
	jsr	r5,fopen; buf
	bes	botchp
1:
	jsr	r5,getw; buf
	bes	botchp
	rts	pc
botchp:
	mov	$1,r0
	sys	write; botch; ebotch-botch
	sys	exit
botch:
	<Temp file botch.\n>; ebotch:
	.even
.bss
buf:	.=.+518.
.text
.globl	_opdope
.globl	_instab
_instab:.+2
	40.;	1f;	1f;	.data;	1:<add\0>;	.text
	70.;	1b;	1b
	41.;	2f;	2f;	.data;	2:<sub\0>;	.text
	71.;	2b;	2b
	30.;	3f;	1b;	.data;	3:<inc\0>;	.text
	31.;	4f;	2b;	.data;	4:<dec\0>;	.text
	32.;	3b;	1b
	33.;	4b;	2b
	45.;	2b;	5f;	.data;	5:<ac\0>;	.text
	46.;	6f;	7f;	.data;	6:<mov\0>;	7:<(r4)\0>;	.text
	75.;	2b;	5b
	76.;	6b;	7b
	43.;	7b;	1f;	.data;	1:<divf\0>;	.text
	44.;	5b;	0
	73.;	7b;	1b
	74.;	5b;	0
	60.;	0f;	1f;	.data;	0:<beq\0>;	1:<bne\0>;	.text
	61.;	1b;	0b
	62.;	2f;	5f;	.data;	2:<ble\0>;	5:<bgt\0>;	.text
	63.;	3f;	4f;	.data;	3:<blt\0>;	4:<bge\0>;	.text
	64.;	4b;	3b
	65.;	5b;	2b
	66.;	6f;	9f;	.data;	6:<blos\0>;	9:<bhi\0>;	.text
	67.;	7f;	8f;	.data;	7:<blo\0>;	8:<bhis\0>;	.text
	68.;	8b;	7b
	69.;	9b;	6b
	0
.data
.even
.text
_opdope:.+2
	00000	/ EOF
	00000	/ ;
	00000	/ {
	00000	/ }
	36000	/ [
	02000	/ ]
	36000	/ (
	02000	/ )
	02000	/ :
	07001	/ ,
	00000	/ 10
	00000	/ 11
	00000	/ 12
	00000	/ 13
	00000	/ 14
	00000	/ 15
	00000	/ 16
	00000	/ 17
	00000	/ 18
	00000	/ 19
	00000	/ name
	00000	/ short constant
	00000	/ string
	00000	/ float
	00000	/ double
	00000	/ 25
	00000	/ 26
	00000	/ 27
	00000	/ 28
	00000	/ 29
	34002	/ ++pre
	34002	/ --pre
	34002	/ ++post
	34002	/ --post
	34020	/ !un
	34002	/ &un
	34020	/ *un
	34000	/ -un
	34020	/ ~un
	00000	/ 39
	30101	/ +
	30001	/ -
	32101	/ *
	32001	/ /
	32001	/ %
	26061	/ >>
	26061	/ <<
	20161	/ &
	16161	/ |
	16161	/ ^
	00000	/ 50
	00000	/ 51
	00000	/ 52
	00000	/ 53
	00000	/ 54
	00000	/ 55
	00000	/ 56
	00000	/ 57
	00000	/ 58
	00000	/ 59
	22105	/ ==
	22105	/ !=
	24105	/ <=
	24105	/ <
	24105	/ >=
	24105	/ >
	24105	/ <p
	24105	/ <=p
	24105	/ >p
	24105	/ >=p
	12013	/ =+
	12013	/ =-
	12013	/ =*
	12013	/ =/
	12013	/ =%
	12053	/ =>>
	12053	/ =<<
	12053	/ =&
	12053	/ =|
	12053	/ =^
	12013	/ =
	00000	/ 81
	00000	/ 82
	00000	/ 83
	00000	/ int -> float
	00000	/ int -> double
	00000	/ float -> int
	00000	/ float -> double
	00000	/ double -> int
	00000	/ double -> float
	14001	/ ?
	00000	/ 91
	00000	/ 92
	00000	/ 93
	00000	/ int -> float
	00000	/ int -> double
	00000	/ float -> double
	00000	/ int -> int[]
	00000	/ int -> float[]
	00000	/ int -> double[]
	36001	/ call
	36001	/ mcall
Jean-Pierre Paquin

Jean-Pierre Paquin (born August 23, 1948) is a Canadian importer and politician from Quebec. He served as a Member of the National Assembly of Quebec, representing Saint-Jean as a member of the Quebec Liberal Party from 2003 to 2007.

Life and career

Paquin was born in Montreal, Quebec. He earned a business degree from Cégep de Saint-Hyacinthe in 1965. He founded Propriétaire des Importations J. P. P. in 1972. Paquin later trained in professional marketing and management at Collège Jean-Guy Leboeuf of the Collège de l'immobilier du Québec in Verdun, Quebec in 1976.

He served in several leadership positions in Saint-Jean-sur-Richelieu: as a hospital trustee from 1999 to 2000, on the Chamber of Commerce from 2000 to 2003, and on the Board of Directors of the city's Canada Day celebration in 2001 and 2002.

As a candidate for the Union Nationale, he was defeated in the 1976 Quebec general election. In the 2003 Quebec general election Paquin, having changed his party affiliation to the Quebec Liberal Party, won the seat held since the 1994 Quebec general election by Roger Paquin (no relation) of the Parti Québécois.

Paquin was defeated in the 2007 Quebec general election by Lucille Méthé, who won 42% of the vote. Paquin finished third with 25% of the vote.

External links

Jean-Pierre Paquin biography via National Assembly of Quebec

Category:1948 births
Category:Living people
Category:Businesspeople from Montreal
Category:Union Nationale (Quebec) politicians
Category:Politicians from Montreal
Category:Quebec Liberal Party MNAs
Category:21st-century Canadian politicians
Two new alcohol glycosides from the roots of Paeonia intermedia C. A. Meyer. Two new alcohol glycosides, 1-O-β-d-glucopyranosyl-deoxypaeonisuffrone (1) and 9-O-β-d-apiofuranosyl-(1→6)-β-d-glucopyranosyl-xanthoarnol (2), together with eight known compounds (3-10), have been isolated from the dried roots of Paeonia intermedia C. A. Meyer. Their structures were elucidated mainly on the basis of ESIMS and one- and two-dimensional NMR techniques. The antibacterial activities of compounds 1-10 were evaluated; compounds 9 and 10 showed antibacterial activities against Staphylococcus argenteus CMCC26003 and Escherichia coli CMCC44103.
Anaerobic adhesives are well known and widely available. Although typically described as being composed of acrylic ester monomers and peroxy polymerization initiators, anaerobic adhesives and sealants are actually extremely complex curable systems: systems reliant upon a delicate balance of a number of critical constituents, namely the peroxy initiator; certain cure accelerators, with or without co-accelerators; and stabilizers, as well as access to transition metal ions at the time of use.

As known to those skilled in the art, this delicate balance affects not only storage stability but also cure speed. The former relates to how long the composition may be stored in a bottle or other vessel before viscosity build-up resulting from unintended or "background" polymerization increases to a point where it is no longer useful. The latter refers to the time needed to effectuate a bond or bring about the cure or solidification of the composition once oxygen is removed or no longer accessible to the liquid curable composition.

Though critical for ensuring a commercially viable cure speed, accelerators and co-accelerators have little, if any, effect on the initiation of polymerization. Instead, initiation of polymerization is contingent upon the generation or build-up of a sufficient level of free radicals in the curable composition—said free radicals generally resulting from the decomposition of the peroxy initiator, a process that is vastly increased by the presence of transition metal ions—and the subsequent activation of the polymerizable monomer, i.e., the reaction of the peroxy free radical with the monomer to form the radical species of the monomer. Once initiated, free radical polymerization proceeds quickly and is further accelerated by the presence of various accelerator and co-accelerator species.
Storage stability, on the other hand, is contingent upon the avoidance or minimization of free radical generation combined with the presence of sufficient levels of oxygen, through absorption, aeration and/or diffusion, to inhibit polymerization of the activated monomer. Most peroxy species are inherently unstable and will slowly decompose over time; however, this decomposition is markedly increased by the presence of transition metal ions. Though not intentionally added, trace levels of transition metal ions are essentially inherent, if not natural, contaminants of anaerobic compositions owing to the fact that such compositions and their constituents are produced in, flowed through, and/or stored in metal vessels, the surfaces of which are subject to oxidation resulting in the generation of metal salts and/or ions which are then picked up by and/or dissolved in the anaerobic composition or its constituents. Despite the presence of such free radicals and free radical monomers, so long as sufficient levels of oxygen are present and accessible, polymerization is inhibited due to the preference of the latter for oxygen, with which it forms a stable liquid similar to the original monomer.

The undesirable consequence of this is that the peroxy initiator, which is critical to effective polymerization, is depleted over time, thus necessitating higher loadings to account for anticipated storage life. Such higher loadings, however, increase the amount of free radical generation, thus straining oxygen inhibition, especially if oxygen diffusion through the liquid composition is slow. Consequently, the further addition of stabilizers has been employed to scavenge such free radicals before cure initiation can arise. While such efforts control and limit the extent of free radical generation and build-up, it is best to avoid their unintended generation to begin with.
To that end, efforts have been employed to remove or bind the transition metals through the use of chelators and the like. Thus, historically, it has been vital to the commercial success of anaerobic adhesives to prevent and/or remove transition metals from these systems in the storage phase.

Another factor that has greatly limited and impeded the commercial use and broad application of anaerobic adhesives and sealants is their sensitivity to the substrates upon which they are to be employed. Specifically, as noted above, transition metals are critical to effective cure speeds: thus, transition metal substrates, or those containing transition metals, such as those manufactured from steel, brass, bronze, copper and iron, have long enjoyed success with anaerobic adhesives and sealants. That is why anaerobic adhesives and sealants have found such success in threadlocking and retaining applications, especially in machine and equipment assembly, pipe fittings and the like.

However, even with such active substrates, a wide variability in performance, especially cure speed, arises due to the differing levels of such transition metal species and/or their evolution prior to or during bonding. Furthermore, certain surface treatments and conditions, such as rust inhibitors or oily surfaces, greatly affect the activation of the peroxy initiator by inherent transition metal species. Thus, even on transition metal substrates, there is still a need to provide more uniformity and predictability in anaerobic adhesives and sealants.

While the aforementioned materials and substrates have benefited from anaerobic adhesives and sealants, they represent only a small percentage of the myriad of materials and substrates for which anaerobic adhesives and sealants could prove useful if sufficient cure and cure speed could be effected.
Unfortunately, passive materials, such as aluminum, nickel, zinc, tin, oxide films, anodic coatings, stainless steel, ceramics, plastics, and the like, are free or essentially free of transition metal ions and, thus, are incapable of generating sufficient free radicals to effectuate cure of anaerobic adhesives and sealants, at least at a commercially viable rate. Whatever cure is found is too slow for most any application, industrial or consumer.

Thus, efforts were subsequently directed toward the use of primers and other surface pretreatments to treat one or both surfaces with an activator that, upon interaction with the peroxy initiator, readily brought about the generation of free radicals. For example, Malofsky (U.S. Pat. No. 3,855,040) describes various ferrocene moiety containing activators for anaerobic polymerization and, in use, employs them together with a strong acid in a two-part system. Toback et al. (U.S. Pat. No. 3,591,438) describe reducing activators selected from sulfur-containing free radical accelerators, such as thioureas, and compounds containing an oxidizable transition metal, which are used in combination with the condensation product of an aldehyde and a primary or secondary amine as pretreatments and primers for anaerobic adhesives. Other two-part systems include those described in, e.g., Bich et al.—U.S. Pat. No. 4,442,138; Lees—U.S. Pat. No. 3,658,624; Toback—U.S. Pat. No. 3,625,930; and Hauser et al.—U.S. Pat. No. 3,970,505.

Such use of primers and pretreatments has proven successful, but has added another layer of costs and expense to the use of these systems, not only in materials costs but also in time, equipment, processing and application costs. Since many primers and pretreatments employ solvent carriers, the selection and use of such solvents adds yet additional concerns, environmentally as well as with respect to their impact on the substrate itself.
Furthermore, not all applications, from a processing or from a substrate standpoint, are all that amenable to the use and/or application of primers and/or pretreatments. For example, it may be impossible or difficult to limit the pretreatment to the intended bond interface. Furthermore, certain carriers or solvents may adversely affect the substrate and, hence, the ultimate bond strength or appearance thereof. Similarly, the failure to ensure complete coverage of the intended bond interface with the pretreatment may result in areas where no cure takes place and/or in the production of weak bonds which may fail altogether under use conditions.

Thus, despite decades of development and the lure of millions of dollars of new potential applications, there is still a need for a single package, storage stable, surface insensitive, anaerobic adhesive and sealant composition. In particular, there is a need and desire for such anaerobic curable adhesives and sealants that may be used on most any substrate without the need for primers or pretreatments. Similarly, there is a need and desire for such anaerobic adhesives and sealants that are capable of cure within a commercially reasonable period of time, preferably within twenty-four hours, and, more preferably, whose cure speed is substantially unchanged, irrespective of the substrate upon which they are used.

Additionally, with the growing concern from an environmental and toxicological standpoint over many amines, especially aromatic and tertiary amines, and imides, there is a growing need and desire for anaerobic adhesives and sealants that do not require the use of amine and/or imide accelerators and co-accelerators, especially aromatic or tertiary amines or sulfimides, such as saccharin, for effecting a commercially viable cure speed.
Well-known Jewish dissident Professor Shlomo Sand has admitted that Israel is the “most racist state in the world”—and that Jews in the rest of the world all work to “dominate” and “control” their home nations’ policies to support the racist Zionist state. The astonishing outburst by Professor Sand—who first gained fame for propagating the now-discredited “Khazar”-origin theory of Ashkenazi Jews—is contained in his new book How I stopped being a Jew. In his new book, Sand, who was raised in Israel, discusses what he called the “negative effects of the Israeli exploitation of the chosen people myth” and what he calls its “holocaust industry.” He also rejects the Jewish religion for its “genocidal Yahwestic tradition,”—a reference to that religion’s “holy books” which contain direct instructions to Jews from God which justify, quite literally, the murder and dispossession of Gentiles. In an article in the UK’s Guardian newspaper discussing his new book, Sand went on: I am aware of living in one of the most racist societies in the western world. Racism is present to some degree everywhere, but in Israel it exists deep within the spirit of the laws. It is taught in schools and colleges, spread in the media, and above all and most dreadful, in Israel the racists do not know what they are doing and, because of this, feel in no way obliged to apologise. I am often even ashamed of Israel, particularly when I witness evidence of its cruel military colonisation, with its weak and defenceless victims who are not part of the “chosen people”. Deeper inside his book itself, Sand makes the following observation on how Jews in the Diaspora now use their position to “dominate” and “control” their host nations: Since the fall of the Soviet Union, there is no longer a country in the world where the descendants of the chosen people are prevented from emigrating to the state of the Jews. 
Zionism has shifted the objective that originally constituted its raison d’être and acquired a second youth through a reinvigorating initiative. Now more than ever, those who aspire to identify themselves with the seed of Abraham are asked to gather funds in support of a land of the Jews that is in full territorial expansion and, above all, to activate all their networks of influence on their country’s foreign policy and public opinion. The results of the latter objective have been remarkable. At a time when communitarianism enjoys growing legitimacy — particularly in an age of reverence for ‘Judeo-Christian’ civilization, underpinning the ‘clash of civilizations’ — it is more possible than ever to harbour pride at being a Jew and finding oneself on the side of the powerful who dominate history. While Sand is approaching the entire topic from what would traditionally be regarded as a “far leftist” position—and his demonstrably false belief that there is no genetic basis to Judaism—his observations about the true nature of Jewish racist tribalism are accurate. Where Sand has made his essential mistake is understanding that all human behavior has in fact a biological origin. All of us—of whatever ethnic origin—are merely products of our ancestors. Our abilities, our limitations, and our innate characters—are all inherited from those who have come before. Sand’s realization that the vast majority of his fellow Jews are ultra-racists—and hypocrites as well—must be difficult enough for him to comprehend. It might then be too much to expect him to admit that there is an inherited aspect to Jewish behavior which perpetuates this open racism towards, and hatred of, Gentiles from generation to generation. Sand’s new book has already attracted the ire of some of his erstwhile fellow Jews. 
The Jewish journalist Gordon Haber, in an article in The Jewish Daily Forward, for example, dismissed Sands as a “crackpot”—yet was forced to admit that Sands had correctly identified one aspect of Jewish behavior which even Haber found puzzling: namely the staggering hypocrisy of Jews in America supporting Israeli racism while propagating exactly the opposite policy in the US. Haber wrote: And yet, if I am to be honest, Sand does raise a question of grave importance. For Sand, there is a “close link” between an essentialist Jewish identity and how Israel treats its non-Jews. Many, of course, will argue that Israel is a haven for Muslims and everything’s hunky-dory in Gaza. But those of us who strive for intellectual honesty must acknowledge the contradiction between Western ideals and an ethno-religious government that humiliates and brutalizes people under its jurisdiction. American Jews, in particular, need to ask themselves why they support a situation in Israel that they would never countenance in their own country. Of course, neither Sand nor Haber will dare tackle the real reason for this obvious hypocrisy—which is that Jewish racial tribalism engages in a divide-and-conquer strategy to ensure that non-Jews remain divided and deflected away from the real source of power in the US and elsewhere: the Jewish Lobby.
A split has emerged in NSW Labor over policies to tackle inequality just a week after federal opposition leader Bill Shorten declared that doing so would be "a defining mission" for a government led by him. NSW Labor's economic policy committee, controlled by the right faction, has rejected 13 of 17 inequality-related motions put to this weekend's annual state conference at Sydney Town Hall by the left, prompting accusations it is "gutting" attempts to fight the problem. The rejected motions include inserting a declaration of the party's determination to fight inequality in the state Labor platform and implementing an annual report on the state of inequality in NSW. The committee has rejected a motion to add to the state platform support for the Buffett rule – the principle that a minimum level of tax should be paid by all, as advocated by billionaire businessman Warren Buffett – despite it appearing in the national Labor platform.
A Sad Day for U.S. Americans

The following article from Reuters saddened me. Here we have three busloads of women and children (some alone) trying desperately to escape lives of grinding poverty and God knows what else, who are turned away from a temporary processing station by Americans waving our flag and yelling to them that they are unwanted.

If you want to know some of the reasons why life is so hard in places like Honduras and Guatemala, check out the book "Rogue State" by William Blum. He will tell you that the United States has intervened in these countries in ways that were violent and ways that kept them poor.

As of 2013, Honduras had the highest murder rate in the world, rampant corruption, a bankrupt government that cannot pay salaries to teachers or doctors, and a collapsing infrastructure, according to Don Godo on his blog "Honduras Living".

If there is any question about the U.S.'s involvement in the poverty and misery in Guatemala, just look up what happened there with regard to the democratically elected president Jacobo Árbenz and the U.S. American company United Fruit. Because of the U.S.'s help in the suppression of a budding social democracy in Guatemala, no fewer than 200,000 of her citizens were tortured and/or killed. If Guatemala's government, or anyone else in Guatemala, attempts to initiate systemic changes to aid the poor, they are dealt with by U.S.-trained and -armed military. Again, you can find this and other such information in William Blum's book "Rogue State".

Here is the article about how some of us don't want Guatemalans and Hondurans trying to escape into our sanctuary.

(Reuters) – Protesters shouting anti-immigration slogans blocked the arrival of three buses carrying undocumented Central American families to a U.S. Border Patrol station on Tuesday after they were flown to San Diego from Texas.
The migrants, a group of around 140 adults and children, were sent to California to be assigned case numbers and undergo background checks before most were likely to be released under limited supervision to await deportation proceedings, U.S. immigration officials said.

But plans to bring the immigrants to a Border Patrol outpost in Murrieta, 60 miles (100 km) north of San Diego, sparked an outcry from town mayor Alan Long, who said the migrants posed a public safety threat to his community.

The group is part of a growing wave of families and unaccompanied minors fleeing Guatemala, El Salvador and Honduras and streaming by the thousands into the United States by way of human trafficking networks through Mexico. Most have shown up in Texas, overwhelming detention and processing facilities there.

The surge has left U.S. immigration officials scrambling to handle mass numbers of Central American migrants who, by law, the government cannot immediately deport, as they normally could illegal border crossers of Mexican or Canadian origin.

More than 52,000 unaccompanied children from Central America have been caught trying to sneak over the U.S.-Mexico border since October, double the number from the same period the year before, according to U.S. Customs and Border Protection figures. Thousands more were apprehended with their parents.

The group caught up in Tuesday's confrontation arrived by plane at midday in San Diego from Texas, where they had been apprehended while trying to cross the border, and were put on three unmarked buses for the ride to Murrieta.

As the buses neared their destination, some 150 protesters waving American flags and shouting "Go home – we don't want you here" filled a street leading to the access road for the Border Patrol station, blocking the buses from reaching the facility. The demonstrators disregarded orders from police to disperse, but officers did not attempt to intervene physically to break up the demonstration.
After about 25 minutes, the buses backed up, turned around and left. A board member of the union representing border patrol agents, Chris Harris, said the buses would likely be rerouted to one of six other Border Patrol stations in the San Diego sector.

Lois Haley, a spokesman for the Immigration and Customs Enforcement agency, declined to say where the buses were headed. Local television station San Diego 6 said the buses went to the Chula Vista Station where about 140 migrants, mainly women and children, could be seen entering, though it was unclear if they were processed inside. It also said several of the children were taken to hospital for unspecified treatment. A supervisor at Chula Vista declined to comment.

A separate group of undocumented families with children was being sent on Tuesday to a similar processing facility in El Centro, California, a desert community about 100 miles east of San Diego, U.S. immigration officials said. But there was no word on any disruptions of their arrival.

One thought on "A Sad Day for U.S. Americans"

We call ourselves good citizens, lovers of freedom, filled with the American spirit of hospitality… Shame on us for pushing away those who need help. We are not living up to our word!
Tuesday's Speech from the Throne will usher in the spring sitting at the B.C. Legislature. It lays out the province's plans for the coming year, with specifics to be detailed in next week's budget. So what can British Columbians expect?

"We're going to be dealing with the issues around Indigenous rights, making sure our economy continues to grow in rural B.C., and focusing on Clean B.C.," Premier John Horgan told CBC News.

B.C. Premier John Horgan speaks to the media on Jan. 24, 2019. (Michael McArthur/CBC)

He said affordability will be a priority — and for housing, that means adding to the current supply. "There's going to be lots of talk about the jobs that we're going to create, but also making sure we're kick-starting the private sector to get the homes built in communities that need to keep pace with the fast growth."

Horgan also acknowledged the financial woes plaguing ICBC and the debt that's been piling up at BC Hydro. "The challenges at ICBC and BC Hydro are enormous — in the many, many billions of dollars — and these things can't be fixed with a magic wand," he said. "We're facing big challenges on both of those files."

Turning point

B.C. Liberal Leader Andrew Wilkinson expects the upcoming session to serve as a milestone marker. "I think we're at a turning point in the lifetime of this NDP government in that they came in with a fairly aggressive tax agenda last year and now British Columbians are looking for it to bear some fruit."

He said the B.C. NDP has a "tall task" in delivering on affordability promises. "They've tried to do it by raising taxes, supervising increases in ICBC rates, and watching BC Hydro go up," said Wilkinson. "I think their affordability agenda is at grave risk now and we'll be watching that closely."
"I think we're at a turning point in the lifetime of this NDP government in that they came in with a fairly aggressive tax agenda last year and now British Columbians are looking for it to bear some fruit," said Liberal Leader Andrew Wilkinson. (Nic Amaya / CBC) So what specific legislation would the Opposition like to see? "If we were able to advance legislation — and the NDP have [previously] blocked it — we would put in a condo pre-sale flipping tax to stop people from flipping paper contracts on condos and to drive speculation out of the market." Stable minority government? B.C. Green Party Leader Andrew Weaver is confident he'll see further action on his crowning achievement: the province's climate action plan, which was announced last December. "I would hope to see some legislation with respect to the Clean B.C. strategy in terms of legislating vehicular standards, certain low-carbon fuel standards," he said. "I also expect to see something with respect to the poverty reduction plan ... We'll be bringing in a bunch of private members bills and the rest will be up to the government." "We see no reason why we would, at this stage, find reasons to pull our support for the government — we said we wouldn't do that." (Mike McArthur / CBC) The current minority government is teetering on support from the Greens. Weaver doesn't see that changing in the coming months, pointing to the Confidence and Supply Agreement his party signed with the NDP after the 2017 provincial election. "We see no reason why we would, at this stage, find reasons to pull our support for the government — we said we wouldn't do that." Weaver went on to reassure British Columbians this province has a stable minority government. "This is what we've said all along: we as Greens will move forward with our agenda, and at the same time, hold government to account and ensure the people of B.C. are put front and centre in decision making."
{ "pile_set_name": "OpenWebText2" }
In brief: A man who switched to a new iPhone found unfamiliar contacts added to the device. The data from his old handset was intact, and the email domains of the added addresses suggested they belonged to Apple employees. Apple Japan denied any malfunction and reportedly told him to delete the data. This article has been removed at the provider's request; only the summary is shown.
{ "pile_set_name": "OpenWebText2" }
Hundreds of protesters took to the streets of Pretoria on Saturday, angered by a rise in violence against women and children in South Africa, including killings and sex attacks. Answering the call by a group calling itself "#Not In My Name", the protesters, most of them men, marched through the streets of the South African capital behind a woman symbolically dressed head to toe in white. "The time to take collective responsibility for our shameful action is now," said Kholofelo Masha, one of the protest organisers, who described himself as "a loving dad, brother and uncle". South African men have remained quiet on the issue for too long, he added: "You hear a lady screaming next door, you decide to sleep when you know there is a problem next door ... No man should beat a woman or rape a woman while you're watching." Reports of the rape and murder of women and girls have been front-page news recently in South Africa, which has some of the worst crime rates in the world. According to official figures, a woman is killed by someone she knows every eight hours somewhere in the country and one woman in five has been subjected to at least one act of violent aggression in her life. The killing of Reeva Steenkamp by her boyfriend, Paralympic athlete Oscar Pistorius, drew global attention to the issue of domestic violence in South Africa. South African President Jacob Zuma on Thursday visited the home of the parents of a three-year-old girl who was raped and killed. "We as the citizens of this country must say enough is enough," Zuma said then. "This is one of the saddest incidents I've come across. It's a crisis in the country, the manner in which women and children are being killed." The ruling African National Congress has called the wave of violent acts "senseless and barbaric" while the main opposition Democratic Alliance party has denounced the "failure to make South Africa safe for all", and has called for a national debate on the problem.
{ "pile_set_name": "OpenWebText2" }
Saint Benedict standing cross in glazed red metal, Gothic style. The body of Christ is pinned to the cross (glued to the cross in the smaller size) and the medals are silver-plated. The production of these metal crosses is entirely artisanal, using the best materials and manufacturing techniques, so this Saint Benedict cross is guaranteed forever. Choose the desired size below the picture.
{ "pile_set_name": "Pile-CC" }
TFM: Right now we plan to work on a new case with TFM: Hacienda Dos Hermanos in Manapla. We visited the area last November, together with Terry from TFM. Half of the hacienda has already been redistributed and some of the farmers are CLOA holders, but the owner of the hacienda shut off the only road to the plantation and requires all motorized carts to pay a fee of 1,500 PHP; this also applies to tricycles and emergency vehicles. We need to figure out now whether the road is private property or a public area, which would make the blockade illegal. The DAR had agreed to conduct this survey in early December; until now nothing has happened. On January 30th we will go to the board meeting of TFM and ask for updates on the current situation.

BANFFO: On January 14th we visited La Carlota to attend the hearing. It did not take place. When we inquired about the reason, we were told by the lawyer of BANFFO that they decided on a settlement with the landowner. The terms of an agreement might include a certain amount of money, a percentage of the land of the hacienda, and dropping all court cases against BANFFO. This decision is not final yet. If they decide for this agreement, BANFFO will stop their CARP request. On the 27th there will be a meeting with the USEC from the DAR, the lawyer of BANFFO and the administration of the hacienda, where they will discuss this future agreement.

PM: We will meet PM this week and discuss our future strategy on the murder case of Panggo. The prosecutor just dismissed the case due to a lack of evidence. Furthermore we want to inquire about the current status.
{ "pile_set_name": "Pile-CC" }
I do agree with all the ideas you’ve presented in your post. They’re very convincing and will certainly work. Still, the posts are too short for novices. Could you please extend them a little from next time? Thanks for the post.
{ "pile_set_name": "Pile-CC" }
Q: Swift 3.0 draw circle with same start and end angle results in line

I've written the following code to draw a rect with a hole in it:

    fileprivate let strokeWidth: CGFloat = 5

    let shapeLayer = CAShapeLayer()
    shapeLayer.strokeColor = UIColor.white.cgColor
    shapeLayer.lineWidth = strokeWidth
    shapeLayer.fillColor = UIColor(white: 0.5, alpha: 0.9).cgColor

    let p = UIBezierPath(rect: bounds.insetBy(dx: -strokeWidth, dy: -strokeWidth))
    let radius = bounds.size.width / 3
    p.move(to: CGPoint(x: bounds.midX + radius, y: bounds.midY))
    p.addArc(withCenter: CGPoint(x: bounds.midX, y: bounds.midY), radius: radius, startAngle: 0, endAngle: CGFloat(2 * Double.pi), clockwise: false)
    p.close()

    shapeLayer.path = p.cgPath
    layer.addSublayer(shapeLayer)

The problem here is this line:

    p.addArc(withCenter: CGPoint(x: bounds.midX, y: bounds.midY), radius: radius, startAngle: 0, endAngle: CGFloat(2 * Double.pi), clockwise: false)

Printing out the description of the path:

    UIBezierPath: 0x6000000b35c0; MoveTo {-5, -5}, LineTo {380, -5}, LineTo {380, 672}, LineTo {-5, 672}, Close, MoveTo {312.5, 333.5}, LineTo {312.5, 333.49999999999994}, Close

As you can see, the last two entries are LineTo and Close, which is not the expected result (a full circle); I get nothing because the line between 333.5 and 333.4999999 is too short. This problem has occurred since switching to Swift 3; in Objective-C this wasn't a problem. Changing the end angle to 1.9 * Double.pi will also work, no idea why. But the full circle should have 2 * Double.pi. Any idea, or is it a Swift 3 bug?
A: Try like this:

    let bounds = UIScreen.main.bounds
    let strokeWidth: CGFloat = 5

    let shapeLayer = CAShapeLayer()
    shapeLayer.strokeColor = UIColor.white.cgColor
    shapeLayer.lineWidth = strokeWidth
    shapeLayer.fillColor = UIColor(white: 0.5, alpha: 0.9).cgColor

    let p = UIBezierPath(rect: bounds.insetBy(dx: -strokeWidth, dy: -strokeWidth))
    let radius = bounds.size.width / 3
    p.move(to: CGPoint(x: bounds.midX + radius, y: bounds.midY))
    p.addArc(withCenter: CGPoint(x: bounds.midX, y: bounds.midY), radius: radius, startAngle: 2 * .pi, endAngle: 0, clockwise: false)
    p.close()

    shapeLayer.fillColor = UIColor.red.cgColor
    shapeLayer.strokeColor = UIColor.black.cgColor
    shapeLayer.lineWidth = 5
    shapeLayer.path = p.cgPath

    let view = UIView(frame: bounds)
    view.backgroundColor = .yellow
    view.layer.addSublayer(shapeLayer)
    view
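For what it's worth, the degenerate LineTo in the question is plain floating-point rounding, not anything Swift-specific. A quick illustrative sketch in JavaScript (any IEEE-754 language behaves identically) reproduces the arc-endpoint arithmetic, using the center (187.5, 333.5) and radius 125 implied by the printed path:

```javascript
// Reproduce the arc-endpoint arithmetic from the question's path dump.
// Center and radius are inferred from MoveTo {312.5, 333.5}, i.e.
// (midX + radius, midY) with radius 125.
const midX = 187.5, midY = 333.5, radius = 125;

// 2 * Math.PI is not exactly 2π, so sin(2 * Math.PI) is not exactly 0.
const endAngle = 2 * Math.PI;
const endX = midX + radius * Math.cos(endAngle); // 312.5 exactly
const endY = midY + radius * Math.sin(endAngle); // 333.49999999999994

console.log(endX, endY);
// The arc's end lands one representable double below its start point
// (312.5, 333.5), so instead of a full circle the path records a
// sub-pixel LineTo, exactly as in the question's dump.
```

Because the computed end point only nearly coincides with the start point, the "circle" collapses to a degenerate segment; sweeping from 2π down to 0 (as in the answer above) or splitting the circle into two half-arcs sidesteps the coincidence.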
{ "pile_set_name": "StackExchange" }
Q: FragmentStatePagerAdapter Doesn't Pause Fragments

I'm using a FragmentStatePagerAdapter (android.support.v4) and I have setOffscreenPageLimit set to 2, so it creates and stores Fragments 2 ahead and 2 behind of the currently displayed Fragment. Problem: When the off-screen Fragments are created, they are also immediately started and resumed even though they haven't been painted to the screen yet. (!) When the current page is changed and the corresponding Fragment is swiped off screen, it isn't paused or stopped. (!) I've tried logging the behavior of all the callbacks in FSPA and its super class - setPrimaryItem comes the closest to being usable but appears to be called for all sorts of reasons, not just when the fragment is displayed. How can you detect that one of your Fragments is no longer displayed, or returning to the display?

A: You could use a listener.

    mPager.setOnPageChangeListener(new OnPageChangeListener() {
        @Override
        public void onPageScrollStateChanged(int arg0) { }

        @Override
        public void onPageScrolled(int arg0, float arg1, int arg2) { }

        @Override
        public void onPageSelected(int position) {
            if (mPageSelectedListener != null) {
                mPageSelectedListener.pageSelected(position);
            }
        }
    });

Where PageSelectedListener is defined by you like so

    public interface PageSelectedListener {
        public void pageSelected(int position);
    }

    public void setPageSelectedListener(PageSelectedListener l) {
        mPageSelectedListener = l;
    }

And use it like this in your fragment

    if (getActivity() instanceof MyActivity) {
        ((MyActivity) getActivity()).setPageSelectedListener(new PageSelectedListener() {
            @Override
            public void pageSelected(int position) {
                if (position == MyAdapter.MY_PAGE) {
                    // do something with currently viewed page...like resume it
                } else {
                    // do something with any other page..like pause it
                }
            }
        });
    }
{ "pile_set_name": "StackExchange" }
After a year of using Node.js in production - 0xmohit http://geekforbrains.com/post/after-a-year-of-nodejs-in-production ====== placebo I usually don't respond to anything which I feel is just another "language war" provocation, but whenever I see these type of reviews I'm mystified. I've developed in many languages and frameworks, both well known and lesser known - decades of client and server side of C/C++, Javascript, Lua, Java, Python, PHP, Perl, Lisp. Pascal (just to name a few) in projects of all sizes, and not once did I have the thought "this language sucks". I think there are a few reasons for this: 1) Even after all these years, I'm still passionate and excited at the ability to sculpt logic, regardless of the "material" I need to use. 2) A "keep it simple" approach - no way to overemphasise this. Know the advantages and limitations of the language and stick to what works. Keeping things simple should be like a fractal - existing at all levels of abstraction. 3) I'm very wary of hype. New, shiny and trendy does not necessarily mean better, especially when the hype is in conflict with keeping things simple. I find that when you understand the playing field, mark the areas to avoid and keep things as simple as possible, the elegance of the design and implementation usually makes the advantage of language X over Y insignificant, and I feel that blaming failure on the language used is like blaming a bad novel on the word processor used to write it. ~~~ astrobe_ > I feel that blaming failure on the language used is like blaming a bad novel > on the word processor used to write it. Sure. If you're a good novelist you can write something great even on a keyboard with a broken 'e' key (btw, your 'I' key is about to break; you should replace it asap). But novelists don't have deadlines and when the novel is done, they usually don't take reader requests to change this or that part of the story. 
"The advantages and limitations", what works and what doesn't, "areas to avoid" are precisely the point of that kind of review. ~~~ placebo > (btw, your 'I' key is about to break; you should replace it asap). haha - touché, point taken :-) Deadlines seem like a great excuse for compromising quality. Sure, life is complicated, the boss is demanding, the mortgage has to be paid, the children need to be supported etc. etc. but compromising quality and enthusiasm (they usually are correlated) because of the "terror" of a deadline will just leave you at the mercy of the next "terror", only this time you'll even have even less enthusiasm to fix the spaghetti. Doesn't sound like an enjoyable existence. Of course, very few people have the privilege of never having to compromise, but it's never black or white and there are many more degrees of freedom to choose the path with more quality than are implied. >"The advantages and limitations", what works and what doesn't, "areas to avoid" are precisely the point of that kind of review. The "area to avoid" in the review is Node.js and considering that large and impressive projects have been written in it, it seems that this is another case of throwing the baby out with the bathwater. ------ beders Now all of a sudden, having types and some standards to gather around doesn't sound like a bad idea anymore ;) I agree with one of the commenters: Lessons already learned by older engineers (who went through similar woes with other languages/tools) are being re- learned again and again. The software industry is in a sorry state. Unless you are a very disciplined team with a very strong sense of writing modular code, don't use Node.js for any larger project. And even then, the single-most useful function in an IDE 'Show Call Hierarchy' will never be available when using a dynamically typed language. That is not an issue for smaller projects. 
However, long before you even get close to the million lines of code project size, your tools will fail you. Your debugging/refactoring times will explode and adding a new feature will seem unsurmountable. Instead, let's just re-write everything from scratch because the cool hipster that wrote your backend a year ago has left for greener pastures... I won't even try to guess the amount of technical debt produced with Node.js and the likes each day in the bay area. And, yes, I just used Node.js to write a Slack-bot. It was fun, took me two hours and got me up and running quickly. That's the beauty of it. Just be aware of the dangers. ~~~ encoderer I've worked in three million+ loc codebases, in PHP, Python and Java. I don't share your opinion that you need static types in these circumstances. You need discipline, modularity, and most importantly you need to have been blessed with gardeners and maintainers throughout the life of a project and not just after a mess has already taken hold. ~~~ beders Did you read what I wrote? I already said that you need a disciplined team. Good luck keeping that team together for years to come. Not sure if you are disputing the fact that keeping code around is a challenge, or not. ------ tylerlh The Netflix.com site and webapp runs on Node (and talks to a number of services written in mostly JVM based languages). While we encounter challenges just as we would with any other language -- it works for us and I would argue that it's a pretty big application. There's always a multitude of ways to get something done, and it's up to you to decide what tool will do it best. Don't treat any one language as an end-all-be-all and you might find yourself much happier and more productive. Of course, YMMV. ~~~ ChrisAntaki > There's always a multitude of ways to get something done, and it's up to you > to decide what tool will do it best Well said. By the way, what is Netflix's take on Promises vs RxJS?
~~~ ZoeZoeBee I'm pretty sure Netflix is in the Observables camp, as Ben Lesh over at Netflix is RxJs ~~~ Akkuma I watched one of the recent Netflix engineering videos where they moved to a customized version of React and I could have sworn in that talk they actually moved away from Observables[1]. What was confusing is that the other talked released at the same time they talk about enhancing RxJS. [1]: [https://youtu.be/5sETJs2_jwo?t=5m32s](https://youtu.be/5sETJs2_jwo?t=5m32s) ~~~ mikeryan A friend of mine runs some UI engineering at Netflix from talking to him reactive JavaScript is still heavily used. ~~~ Akkuma I'm sure it is still heavily used, but it looks like there are two very opposing views with one going so far as completely removing it. ------ joshmanders These types of articles make me laugh. Typically a dev with many many years experience with one language, learned all it's quirks, standards, etc decides to try Node.js because it's the "new hot fun toy", and expect it to work like their old language, and realize that is not how it works, doesn't know where to find what and fails real hard to realize that JavaScript in general is in a huge influx of updating at this moment, which by nature propagates to Node.js. The end result is they get frustrated and go back to their old language. ~~~ eva1984 Yeah, so some js devs might need to stop overselling javascript to everyone, pretending it is the one language that people are waiting for years...Just saying. ~~~ joshmanders You should try to surround yourself with developers who don't have such attitudes. I am a fan of JavaScript, I love Node.js and I will often times suggest it to newbies. But I don't pretend it's the be all/end all of languages. Just like any other language it has its strengths and weaknesses. It's up to you to decide if it's the right choice for you. ~~~ uptownJimmy It is one of my most cherished professional goals to avoid ever working with people who think like this. 
The message this sends to junior devs is so backwards and wrong-headed that it defies discourse. ~~~ jackweirdy Why? It seems like a reasonable statement - surround yourself with people who think critically, not evangelically? ~~~ uptownJimmy But I see your point as being precisely backwards: it's the JS crew who constantly evangelize for their way of doing things, and that way of doing things is completely inappropriate for anyone who isn't already expert. "Choose your own tools" is advice for experts, and literally nobody else. It smacks of the cowboy attitude, and that attitude is wildly unhelpful to almost anybody doing professional work in software. Frameworks and coding standards exist for a reason: not to be unreasonable strictures, but to provide guidance and sanity in a staggeringly complex field of endeavor. Many new devs and junior devs are flocking to the JavaScript ecosystem, and it is one of the most troubled and chaotic ecosystems in all of software right now. Anyone who is not a complete JS badass is simply not going to find Node to be even a decent choice for learning best practices pertaining to the larger world of application development. And I feel I'm stating that politely. So: yet another JavaScript Pro tossing out the "choose your tools like a Pro" advice-morsel is part of the problem, as I see it. I don't want to work with people who toss the kids into the deep end and hope a few can learn to swim real quick. I think people deserve a helping hand and a reasonable set of expectations. The JavaScript ecosystem has become self-parodying. Anyone who is oblivious to that fact is inherently NOT a trustworthy witness. That is not to say that the whole thing is rotten and worthless, but it IS messy as heck, and refusing to acknowledge that is a sign that one is in denial about some pretty blatant facts. 
------ spion > Coming from other languages such as Python, Ruby or PHP you'd expect > throwing and catching errors, or even returning an error from a function > would be a straightforward way of handling errors. Not so with Node. > Instead, you get to pass your errors around in your callbacks (or promises) > - that's right, no throwing of exceptions. Promises let you throw errors normally. They will propagate up the call stack in a similar manner. With bluebird, you will also get full stack traces in development mode and the performance penalty for that isn't too bad. > The last thing that I found frustrating was the lack of standards. Everyone > seems to have their own idea of how the above points should be handled. > Callbacks? Promises? Error handling? Build scripts? Promises are in ES6 (I don't think it gets more standard than that) and have well defined semantics, including error handling, shared between libraries: [https://promisesaplus.com/](https://promisesaplus.com/) I know that Bluebird's promisifyAll might seem like a bit of a hack, but just try it out. It works surprisingly well, and it's really painstakingly tuned for near-zero performance loss. It will probably be both less painful to do and more performant than any manual attempt to wrap a callback based API into a promise one. ~~~ pjungwir I gave a talk about handling errors in Node a few years ago: [https://github.com/pjungwir/node-errors-talk](https://github.com/pjungwir/node-errors-talk) At the time the solution was "use domains", but I think domains are deprecated now. It was painful enough that I have stuck with Rails since then. I'm glad to hear that Promises are an improvement! ~~~ koolba Domains have been deprecated since at least 0.10. As of yet there's no replacement for them and all node apps should be using them. There's no other way to catch "_But ... but ... that can't happen!_" type errors. ~~~ spion There is no replacement because they're a fundamentally broken idea.
They require the following to happen, in that order:

* V8 needs to optimize try-finally
* Node core needs to add try-finally at every single place where callbacks are invoked and make sure all state and resource cleanup is properly done to support domains
* Popular libraries need to also add try-finally handlers for the above.

As to why this is a problem in node and not so much in other languages, it's because with node callbacks, the call stack goes both ways. In other languages, libraries mostly call their dependencies' code. In node's CPS style, you call the library but the library also calls your closure code. The semantics for the 2nd part aren't well defined in node - the loose law basically says: I won't call you twice, I'll try not to call you synchronously, and you won't throw (and if you do the behavior is undefined). With promises there is a contract and it's enforced by the promise implementation. Since Promises actually have error semantics, you can build resource management strategies on top of them. [http://promise-nuggets.github.io/articles/21-context-manager...](http://promise-nuggets.github.io/articles/21-context-managers-transactions.html) - and consequently there is no reason to crash your server on errors. ~~~ lobster_johnson Domains are used for another reason: To emulate thread-local variables. I hope that support is not going away, because it's really handy. ~~~ spion It's interesting that the same problem (TLS) can also be solved with something similar to promises :) ------ Corrado I agree with this article and find the Node.js community, and to a lesser extent Javascript itself, exhausting. It seems like every 2 minutes there is a "more" proper way to do something, which tells me that the architecture is not yet mature, even though it's pretty old by now. On a related note, it seems like every time you find something that doesn't quite work correctly or conveniently in Node.js there is a "fix". Don't like callbacks?
Force Node.js to look more like other "normal" code and use Promises. Having trouble getting Node.js to concentrate on one thing at a time? Force Node.js to look more like other "normal" languages and use the "async" library. And it just goes on and on and on. If I have to use all of these other pieces and parts to be productive in Node.js I may as well just use some other language. I really wanted to like and use Node.js, but Javascript and the community are holding it back. ~~~ eagsalazar2 Yes, the Javascript world is quickly evolving both on the front end and backend, and of course that can be exhausting but I have to disagree with both conclusions that (a) this means the tech is immature, and (b) the community's readiness to make changes is "holding it back". The js world is very unique in its ability to evolve quickly and things have improved _massively_ over the last few years. Now, apart from the churn itself and precisely because of that rapid evolution, front end and backend development in javascript is amazing compared to most other options (especially on the front end). So while you do have to be realistic about the cost of the evolving ecosystem, it is just a tradeoff for rapid progress, not a flaw. If you crave stability, agreed, this rollercoaster probably isn't your ride. But the evolution of node, the emergence of react/redux, es6, etc is amazing and beautiful IMO and I'm totally enjoying every bit of it compared to the staid mediocrity of my Rails, C/C++, and Java history. ~~~ rimantas The problem I see that in this case "evolving" looks suspiciously like running in circles without going anywhere. ~~~ royjacobs A good example mentioned in the article is the "npm scripts -> grunt -> gulp -> npm scripts" evolution in best practices for building. ------ programmarchy Never had the problems with Node described in the article. Node forces thinking about modularity and composition though, which I'd wager is what the author is actually struggling with. 
Write functions that do one thing and they're pretty easy to compose, even if they're asynchronous. It's really not hard to debug what went wrong in the stack trace when you name your functions. And I think exception handling is much worse than passing errors up through callbacks. It forces you to think about edge cases. Not sure how python handles this, but I can't tell you how many times I've seen Java or C# code swallow exceptions which is much harder to debug IMO. People really seem to have a problem with there not being "the one true path" in Javascript, but it's not something that gives me much anxiety. Javascript is incredibly moldable, which is part of what makes it so powerful. ------ eknkc TLDR; Author actually wants to use Python. Used Node.JS regardless, for whatever reason.. It did not work the way Python works. Author is frustrated. Complains that JavaScript is not Python. ~~~ fredrb This. You can't apply some other language paradigms in every programming language just because it's the only thing you know how to do. ------ Wintamute > You use Grunt!? Everyone uses Gulp!? Wait no, use native NPM scripts! Although couched as a criticism this is actually the community fixing itself. The evolution from Grunt > Gulp > npm scripts is movement away from needless complexity towards simplicity. Npm scripts are effectively just Bash commands that build and manage your project, which sometimes employ small, unixy tools written in Node. This self correction was pretty quick, it happened within a few years. > Unfortunately, there isn’t any one “standard” (like everything else in > Javascript) for implementing or using Promises. Yes there is. It's called the Promises/A+ spec, and its built into ES6. ~~~ Touche NPM scripts are just a different problem. 
See: [https://twitter.com/sindresorhus/status/724259780676575232?l...](https://twitter.com/sindresorhus/status/724259780676575232?lang=en) [https://github.com/ReactiveX/rxjs/blob/a3ec89605a24a6f54e577...](https://github.com/ReactiveX/rxjs/blob/a3ec89605a24a6f54e577d21773dad11f22fdb14/package.json#L14-L96) Already people are coming up with new "solutions" to this problem that looks more like Grunt. It's a repetitive circle. Personally I just use Make. ~~~ Wintamute So somebody found a project somewhere on the internet with an exceptionally complicated build process, and you use it to say npm scripts are broken? Sorry, that's absurd. Looking at that particular build process, I don't think a Makefile could have been crafted to make it much simpler or smaller. In that example, the problem lies with the complexity of what they're having to do, not the tool. Npm scripts are really just shell scripting, which means all the real progress happens in the unixy Node tools that do the heavy lifting, where it should be. It's a future proof and scalable approach for the vast majority of projects imo. ~~~ Touche It's an inflated example of what all npm script projects become, imo. First you just have "test", then you add "build", then you separate your "test" into one for the browser, one for Node, one for CI, then you need scripts that combine those together; then you create different "start" versions depending on environment... It blows up quickly. > Npm scripts are really just shell scripting, which means all the real > progress happens in the unixy Node tools that do the heavy lifting, where it > should be. That's fine, but you're missing critical features that Make provides; make won't even rebuild a target if no files have changed. ~~~ Wintamute Admittedly, my first 2 or 3 npm scripts based build processes did start to get a bit ugly. But I'm much better at writing them now, so they stay pretty sane. 
> That's fine, but you're missing critical features that Make provides; make > won't even rebuild a target if no files have changed. WebPack does this for me too. Also my ava tests don't rerun for files that haven't changed while watching. Genuinely curious, what scenarios precisely do you find this feature of Make useful? Also, Make isn't truly cross platform ... and since I sometimes work with Windows devs this would be a problem. ------ kcorbitt I've spent a lot of time writing Javascript on the front-end in the last year using both React and React Native. I've found the React ecosystem to be a sane, productive and enjoyable development environment. Interested in sharing more model logic between our front- and backends, I also investigated writing some new backend features using Node (we're currently developing with Rails). But after days of research and playing around with the available options, I came to a conclusion similar to Gavin's -- any reasonably complex backend requires you to either roll your own everything, or try to cobble together literally hundreds of tiny dependencies that weren't built to go together, and then somehow keep track of all of their regressions and breaking changes. Node's fantastic performance and unique ability to share logic between client and server are enticing, but I just don't trust the community and "best practices" around it enough to bet the farm on it for now. ~~~ wrong_variable The same problems you cite for node.js are the reasons why a lot of devs love node.js. It's much easier to do your own research and find the best module to solve a particular problem you are having than to shoehorn into some larger monolithic framework. Also it's a lot more fundamental than that - Node.js has prolly the fastest iteration cycle for any platform out there since it's so easy to create your own module - it leads to some sort of Cambrian explosion of innovation and experimentation.
EDIT: Also OP seems to think callback hell and async programming is bad. The important thing is those things are problems for python/.... too! It's just that python doesn't have a good programming model to even begin to address those concerns. JavaScript at least tries to say - "hey this is a problem we need to deal with - concurrency is an issue we all face" So when devs complain about callback hell - it's just that they have never tried to use python to do async in a neat way. ~~~ empthought > python doesn't have a good programming model to even begin to address those > concerns This isn't even close to true. Anything JavaScript has to express logic in the face of asynchrony, Python has too. There are a half-dozen asynchronous web servers written in Python. There's not as much of a culture of writing APIs that way in Python because it's generally a terrible way to program, and threads/OS processes are good enough for basically everything except HTTP servers with absurd numbers of concurrent connections. ~~~ wrong_variable > There's not as much of a culture of writing APIs that way in Python because > it's generally a terrible way to program I would love to see evidence for you making that statement. Almost every programmer would put out their fav programming language as the 'right' way to program. > everything except HTTP servers with absurd numbers of concurrent connection Once you introduce async operations in your code - you need to follow the execution path through. An http request can be async - but then what if the http request results in you doing a db lookup or some form of file handling? You need to make the whole thing event driven. ~~~ empthought It's not like we didn't have cooperative multitasking for 50 years. Having threads/processes and a scheduler is easier and safer, full stop. Potentially long-running portions of the program don't need to be arbitrarily chopped up to yield control back to the server, because they are pre-empted.
Your system is no longer at the mercy of the worst code within it. Both nginx and Apache's event MPM handle HTTP connections with events while the app backends are still using preemptive threads for running the HTTP handler code, so it's clearly not the case that "you need to make the whole thing event driven." You just need programmers who don't think, "well since the browser doesn't expose threads to JavaScript programmers, clearly they are useless." ------ narrator I'm using NodeJS on a pretty big project. Things I like: * Async libraries make it easy to make things high performance. * I like Sequelize as an ORM, once I figured out how the async everything works. * The testing support is pretty good, both mocha and e2e testing using selenium * The angular-fullstack generator was really helpful for getting started and setting up the deploy to Heroku. * everything is open source. If I get confused with what a library is doing while I'm debugging I can just stick a print in the lib temporarily. Things I don't like: * "undefined is not a function" when something goes wrong. This is the error I get 80% of the time. * async stuff silently swallows exceptions unless I put try{} catch(err) { console.trace(err) } everywhere * There's a bit of a learning curve with promises. * Needs a lot more automated testing than a really strongly typed language like Scala. * Single Threaded. I know how to program using threads, so I view this as a disadvantage. * No types. I have a lot of type checking asserts at the beginning of dao functions. If I did it all again and my teammates would oblige, I would have probably chosen Play/Scala. I actually reimplemented things from a Play/Scala project I did a while back (login/signup/forgot password/confirm account) and it took less time in Play/Scala, even without passportJS and friends. I have about a year's worth of experience learning Scala before I started that project, so it would probably take longer for a new Scala developer. 
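A minimal sketch (illustrative code, not from any commenter's project) of the error-swallowing behaviour narrator describes: a synchronous try/catch cannot see a failure that surfaces later inside a callback, which is why Node's error-first callback convention passes the error as an argument instead of throwing it.

```javascript
// A try/catch around the *start* of an async operation cannot catch an
// error delivered later: by the time the callback runs, the try block's
// stack frame is long gone. The error-first convention hands the error to
// the callback as its first argument instead of throwing it.
function readConfig(path, callback) {
  // Stand-in for a real async call (fs.readFile, a db query, etc.).
  setImmediate(() => callback(new Error('cannot read ' + path), null));
}

let sawSyncCatch = false;

try {
  readConfig('/no/such/file', (err, data) => {
    if (err) {
      console.log('handled in callback:', err.message);
      return;
    }
    console.log('config:', data);
  });
} catch (e) {
  sawSyncCatch = true; // never reached: the failure happens on a later tick
}

console.log('synchronous catch fired?', sawSyncCatch); // false
```

The catch block only ever sees errors thrown before `readConfig` returns; everything after that has to travel through the `(err, result)` arguments.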
~~~ rhinoceraptor > async stuff silently swallows exceptions unless I put try{} catch(err) { > console.trace(err) } everywhere Huh? That's not how async errors work in Node. Try/Catch is not async, the catch block will not magically transfer to your callback function. You check for the error as the first parameter in your callback, that's the standard way of error handling. Throwing errors in Node is considered by most to be an anti-pattern. ~~~ narrator That's nice in theory, but third party libraries may throw exceptions if unexpected things happen at runtime. You'd still have to put the try catch block there and call the callback in the catch block everywhere. ~~~ rhinoceraptor That's only for non-async stuff, which is very rare. If you're using some wacky 3rd party code which throws an Error instead of the accepted convention of (err, response) callback arguments (or returning a promise), I suggest not using it. ------ mrmondo We couldn't even stand 6 months let alone a year. Broken package deps make it hard to CI, poor performance, memory leaks, huge docker base images due to deps, lots of single-cpu-only tasks that were too hard to scale out, an ugly language compared to ruby or Python and on top of all that how poor the package management with NPM has been. We've just ditched it and gone back to Python/Django/flask and ruby for the ops tooling. ------ silviogutierrez Same experience as the author. Tons of reinventing the wheel, and hundreds of dependencies. The majority of established companies that say they use Node in production are doing so as a fancy proxy. All the "serious" stuff is done on backend services written in other languages. Moreover, asynchrony is a concept far more advanced than most people think. We will likely continue to leverage Node as a fancy proxy. Adding Typescript will only help. But it's likely that as Node grows into a mature platform, other platforms will continue to fill in the gaps that Node filled. 
See for example Node constantly adding [foo]Sync versions of methods, while Python adds first-class async support. ------ firasd I've been on a similar learning curve with Node over the last year, and it has certainly been a rougher incline than other languages I've used. The whole async situation needs to settle down; it's completely unacceptable to write code with callbacks, promises, etc. This is because they are not just challenging to deal with, but intrinsically wrong in concept. I have to wait for a database query to complete, then pass the next thing to do to the callback of the query? That can't be right. Interesting article I found: "Async/Await: The Hero JavaScript Deserved" [https://www.twilio.com/blog/2015/10/asyncawait-the-hero-java...](https://www.twilio.com/blog/2015/10/asyncawait-the-hero-javascript-deserved.html) ~~~ Silhouette I find that JS often seems to tie programmers in the most extraordinary knots just to implement even quite simple logic, because of the single-threaded nature of the language. In the programming model used by most other mainstream languages today, if you've got some work to do that interacts with some external system and might take a while, you'd probably start another thread for that task. You'd write the required logic in the usual linear fashion, and just let the thread block if and when it needs to. Modelling this using fork/join semantics and techniques to co-ordinate access to shared resources from different threads are reasonably well understood ideas. Because there is no general support for concurrency and parallelism in JS, you only get one thread, and so in most cases you can't afford to ever block it. Consequently, you get this highly asynchronous style that feels like writing everything manually in continuation passing style, just so you can carry on with something else instead of waiting. 
That in turn leads to callback hell, where you start to lose cohesion and locality in your code, even though usually you're still just trying to represent a simple, linear sequence of operations. Async/await help to bring that cohesion and locality back by writing code in a style that is closer to the natural linear behaviour it is modelling. However, even those feel a bit like papering over the cracks in some cases. Async/await kinda sorta give us some simple fork/join semantics, but as the blog post linked from the parent shows, we have a lot of promise-based details remaining underneath. Fundamentally, the problem seems to be that JS is increasingly being used to deal with concurrent behaviours, but it lacks an execution model and language tools to describe that behaviour in a natural, systematic way as most other widely used languages can. Being strictly single-threaded avoided all the synchronisation problems in the early days, when the most you had to worry about was a couple of different browser events firing close together and it was helpful to know the handler for one would complete before anything else started happening. I'm not sure it's still a plus point now that we're trying to use JS for much more demanding concurrent systems, though. ~~~ Touche > Modelling this using fork/join semantics and techniques to co-ordinate > access to shared resources from different threads are reasonably well > understood ideas. Writing thread-safe code is anything but easy in languages that support threads. ~~~ Silhouette I respectfully disagree. Dealing with _shared state_ is not always easy when you're working with multiple threads. If you can't reasonably avoid that sharing because of the nature of your problem, and if your choice of language and tools only provide tools on the level of manual locking, then I agree that writing correct, thread-safe code has its challenges. 
However, there are plenty of scenarios where you don't need much if any state to be shared between threads. That includes almost every example of JS promises or async/await that I've seen this evening while reading this discussion and the examples people are linking to. There are also plenty of more sophisticated models for co-ordinating threads that do need to interact, from message passing to software transactional memory. These are hardly obscure ideas today, and I don't think anyone could reasonably argue that for example message passing makes things complicated but async/await/promises make things simple. ------ deedubaya For all the comments on here about how unfair the author was, there sure is minimal feedback on the problems they highlighted. ~~~ snappy173 the feedback is: stop expecting javascript to act like python ~~~ deedubaya How productive! ~~~ snappy173 sorry if that came off harsh, but that is actually the feedback, and it's valid. whether or not javascript/node is better or worse than python, it's pretty clear that bringing a python style approach to nodejs is going to cause problems, especially with error handling and async stuff. ------ mrgalaxy I have used Node.js in production for about 5 years now and I must agree with the sentiment that JavaScript is "Easy to learn, impossible to master". Yes, error handling is a little confusing at first but it stems from JavaScript's asynchronous nature which is naturally complex for the linear mind. My personal sentiment is to just use Promises, like everywhere. ES7 async/await will really help with this too. ------ clessg For web applications, I've been very happy with the up-and-coming Phoenix[0], a framework for the Elixir language. Very well-designed and thought-out, fast, productive. Leans functional and immutable rather than object-oriented and mutable. It's kind of like Rails but without most of the problems. 
[0] [http://www.phoenixframework.org/](http://www.phoenixframework.org/) ~~~ deedubaya I've found Elixir to be a delightful language with very palatable syntax compared to ruby. It has been a really enjoyable transition, with some vague reminders of the parts I really like about JavaScript. ------ emilong I've been using Node in production for a few months now, having come from Ruby (Rails & Sinatra) immediately before, but having used JavaEE and PHP before that. I find it... fine. For error handling, bluebird's typed error catching ([http://bluebirdjs.com/docs/api/catch.html](http://bluebirdjs.com/docs/api/catch.html)) is working well for me and I'm finding it analogous to my experiences with Java and Ruby. I'm rather used to using ORMs as well and I use Bookshelf ([http://bookshelfjs.org](http://bookshelfjs.org)) on top of Postgres for this as well. It definitely has room to grow, but it's also fine. I've also gained a dependency injection container (Bottle.js, see my write up here: [https://blog.boldlisting.com/declarative-dependencies-for-un...](https://blog.boldlisting.com/declarative-dependencies-for-unit-testing-node-js-services-45542ceb5703#.gaf9om7zj)), which I sorely missed in my Rails days and which gives a lot of structure to the application. I think the biggest concerns on which I'd agree with the author are the pace of the community and lack of agreement on things which are well-decided in other, more well-established development environments. That being said, there's huge potential with Node because of that. There are more coders in the world than ever (I'm assuming) and Javascript is a great low barrier to entry language that encourages people to explore various runtimes. In 20+ years of coding, I've not seen this level of excitement and engagement in a development environment. While it may be rocky for another few years yet, I suspect we'll end up with a very productive platform, simply because of the amount of involvement. 
Of course, it's totally understandable to want to wait for that before jumping in. :) As a relatively new Node developer, I'm much more concerned about the single-threaded nature than the development environment, but so far even that hasn't been a problem. ------ Touche A lot of this is residual effects of the (slow) evolution of JavaScript. It was thrust into the spotlight missing a lot of features and these features have only recently been fixed by the language itself. But Node has been around since 2009. 2009 JavaScript was missing _a lot_ of features. It was basically a runtime only advanced users should use. The Node maintainers had to make a lot of decisions that now conflict (to some degree) with fixes that have come later to the language. They chose their callback style; now we have Promises. They chose to throw in methods rather than return an error in the callback (this makes it awkward to use fs with Promises without a wrapper library). They chose to implement their version of CommonJS; now we have the .mjs issue arising. ------ nevi-me Been on Node and Mongo for 4 years now. Both have worked well for me. JS has evolved a lot in the past few years, and with it came all the new shiny tools that left us confused. I think a year was too short for the author. Sounds like they were chasing after every cool thing to make life easy. Error handling is a pain, yeah; I've seen amateur folk try catch this and that, and I think that lends itself to being terrible. One of the things I appreciate the most about JS is JSON. Crafting tens of classes in Java irritates me. I find Python sometimes tricky also when dealing with structs vs lists. I've always stuck to the basics when I was learning how to JavaScript with Node. I used only callbacks for 2 years until I understood what my code was doing. Granted, I'd have spaghetti at the end of complex async queries, but I understood what was going on. I moved to caolan::async and have been using async whenever necessary. 
I barely use promises as I got confused by the early adoption craze. I learnt how to use Backbonejs, and a bit of Angular+React+Ember, but I found myself comfortable using vanilla JavaScript. Only thing I use is Underscore templates. I know I could benefit from shadow DOMs etc, but I'm content where I am. I think a good way to learn is to take things at bite-sized chunks. I've started using RxJS recently, and I'm loving it! I'll keep using JS as my primary tool, but I'm slowly moving to Kotlin. ------ kartickv If this article is true, it paints a concerning picture. I don't want to research libraries, understand the pros and cons of them, run into problems, then switch to another library, and so on. I want there to be a default that works out of the box for the majority of use cases. There can be alternatives, as long as there's a default that works out of the box for most people and most use cases. ------ eldude I train enterprise node.js for a living. My recommendation is to just use songbird (which exposes the forthcoming promise API from core, built on bluebird), async/await and the async `trycatch` library so you don't have to worry about a 3rd party package's choice of asynchrony. It also comes with optional long stack traces. ~~~ snappy173 >async/await in my experience, async/await is a great way to layer indirection and obfuscation over what is still callback hell. it may look better in the editor, but it's hell to debug. ~~~ raarts When I initially encountered event-based processing (libevent in C), those callbacks were indeed difficult to wrap my mind around. But I learned to structure the code in the editor, keeping everything together which made it manageable. I've worked with node for a couple of months now, and I find promises to be more confusing, because it obfuscates the callback in my mind, and makes it look like ordinary function calls. 
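For concreteness, here is the same hypothetical lookup written both ways (the function names and data are illustrative only). The promise version is what raarts means by "makes it look like ordinary function calls": the call site looks like a plain call that returns a value, but the value is a Promise, and the result only exists once the `.then()` handler runs.

```javascript
// Callback style: the continuation is passed in as an argument.
function getUserCb(id, callback) {
  setImmediate(() => callback(null, { id: id, name: 'user' + id }));
}

// Promise style: the function returns a value (a Promise) that the caller
// attaches continuations to after the fact.
function getUserP(id) {
  return new Promise((resolve) => {
    setImmediate(() => resolve({ id: id, name: 'user' + id }));
  });
}

getUserCb(1, (err, user) => {
  if (err) throw err;
  console.log('callback style:', user.name); // callback style: user1
});

getUserP(2).then((user) => {
  console.log('promise style:', user.name); // promise style: user2
});
```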
~~~ spriggan3 The advantage of promises is that you can return them as a type, you can't do that with a callback. If you fetch something from a database, your data access object can return a promise and let the client code deal with the result. Promises are composable by nature, callbacks are not. ------ SiVal Really, just learn to use callbacks properly. Well, wait, actually, you should skip that and start getting used to using promises instead--a big improvement. Well, I mean, until next version is ready, and we can start using async/await...until wasm makes better thought-out languages available. Despite the sounds of this, I do like the idea of having an experimental platform with which to gain experience using wildly different approaches to the wildly changing world of web apps. I don't take it for granted that old language concepts will turn out to be the most useful for the web platform. So, I'm okay building a website for the PTA or ceramics club with this, and I'm very interested in the experiences of others using a wide variety of technologies and approaches, but I'm not sure Node.js would be a sensible foundation to build a business on. ------ tonyjstark This article definitely matches with my experience. I did two not-all-too-complicated projects on the side with Node.js and the first steps were so easy that it completely convinced me to go with it. After a while I tried to dig deeper and went to meetups to see whether I do things right as in a community-accepted way and if I use the right tools and so on. Since then I refer to the Nodejs community as the most hipster programmer community I've ever seen. As soon as a framework was getting near a 1.0 version nobody wanted to use it anymore, experimental features were used in production code, it was horrifying. For me that ended this endeavour, I just could not keep up with the pace. I always wondered if it would be different if I would have worked full-time on Nodejs projects. 
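To make the callbacks-to-promises-to-async/await progression concrete, here is one hypothetical I/O step (a stand-in promise, not a real driver call) run ten times in sequence with async/await. The flow reads top to bottom, where raw callbacks would nest one level per step. Note that async/await needed Babel at the time of this thread; it is native in later Node versions.

```javascript
// Stand-in for one async I/O step (db query, file read, HTTP call).
function step(n) {
  return new Promise((resolve) => setImmediate(() => resolve(n + 1)));
}

// With async/await, ten dependent steps read as a plain loop; with raw
// callbacks each step would nest inside the previous one's callback.
async function runSequence(start, count) {
  let value = start;
  for (let i = 0; i < count; i++) {
    value = await step(value); // waits for the previous step to finish
  }
  return value;
}

runSequence(0, 10).then((result) => {
  console.log('after 10 sequential steps:', result); // 10
});
```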
------ ldehaan From reading the comments there are still a lot of misconceptions about JS. javascript is not new, it's been around since before most of the web devs out there started working in computers. nodejs supports multiple CPUs. nodejs is stable. JavaScript is retarded fast, and it's not c or c++ or any compiled language, and shouldn't be compared to them because that's unhelpful as a measure. it is the only language for the web which enables you to work in the same language on both fronts. frameworks aren't JavaScript. nodejs isn't JavaScript. JavaScript is so flexible that it can be changed to suit the needs of those writing it, so much so that you get whole new DSLs like typescript. there are more conversations on the internet about JavaScript than any other language being used today. oh and nothing scales if you don't know how to write scalable software, that's on you, not the language. ------ bschwindHN Interesting analysis. I just finished a year of using Node for implementing an HTTP API and a chat server, and found it to be actually pretty pleasant. I'm not chasing the latest and greatest things, there's no ES6, no ORM, and I'm on an older version of Node. But it works and has actually been quite stable! The things I've missed are static type checking at compile time, and execution speed (which is less of an issue when you're talking with databases all the time). I'd be happy to write in more detail if anyone has any questions, but I found I had the opposite experience of this author. The situation makes all the difference though. ------ Chris911 Most of the problems described in the article can be solved by simply using promises. Error handling is centralized in your chain and any function can throw and just like Python or Ruby you can catch anywhere you want. As for consistency between callbacks, promises and generators, just pick one. We switched from callbacks to promises and it was a great move. 
Error handling greatly improved and the code looks a lot nicer. No more callback hell. Ever. We switched our backend from Python to Node almost 2 years ago now and it was a great decision for us. If you handle a code base that deals with a lot of async requests Node is definitely a top contender. ------ cyberpanther I think by reading the comments here and the article, it's apparent that Node.js just isn't as mature yet. If you know what you're doing, it can be great, but for the noobs the right way to do things is not easily apparent. Couple this with the huge choice in libraries and frameworks, and Node.js is harder for now. I think the base is good though and given more time it will become easier to wield. This is typical of any new tool. And yes Node.js has been around the block for a while, but it is still newish compared to Python, Ruby, PHP, etc. So you're going to pay a new-adopter tax still. ------ jrapdx3 Can't fully agree with the article re: using nodejs in production. A couple of years ago I decided to use nodejs for a rewrite of a web/database app that had gotten to be complex and hard to maintain. As so often said, it was true for me that the abundance of modules and choices in node was at first very confusing. I eventually figured out that keeping things as simple as possible was my best approach. What I came up with was a server relying on very few module dependencies and written using consistent if not so elegant components. Sure it's kind of verbose and far from totally DRY but fairly easy to understand, modify and extend. Key issue was node's "callback hell" style of async programming. Of course, it's not just node, other languages (Scheme, FP, etc.) can be mind-bending in a similar way. The callback "inside-out" locality inversion was initially hard to grasp, but once I caught on it was possible to get the server working the way I needed it to. 
The more recent development of promises, etc., certainly provides reasonable ways to reduce the high barriers involved in using the nodejs style of async programming. ~~~ zyxley Callbacks definitely become much, much less of a problem once you can turn everything into promises. ------ EdSharkey In my opinion, JavaScript is a toy language. Looking at it, and the Node.js ecosystem by extension, that way has really helped me be effective with it. Tooling has never been more important to my productivity with a language as it has been with JS. I constantly search for tools to paper over the warts and potholes. ------ gedrap The only clear, objective advantage of using nodejs is that if your application is I/O heavy (e.g. tons of sql queries which can be executed in parallel), nodejs's event system is helpful and everything you need is pretty much out of the box. Other thing... it largely depends on personal taste and a matter of convention. Reading his "Why I'm switching from Python to Node.js", it doesn't seem like he was having an issue with that. And I don't really buy into the "same language everywhere" argument because come on, how hard is it to learn python, ruby, etc enough so that you can be productive? Not hard at all, unless you have hundreds of cubicles filled with drones. Anyway, it's good to see that he made some reasonable conclusions after the experiment. That's a good sign :) ~~~ MichaelGG Same language, in theory, is great. Reuse structure definitions. Reuse rendering logic or even validation logic (perform checks on both, but make it easy to get into client-side). In general, keeping things "in sync". Also can make it easier to write in SPA style but offload rendering to the server when you need it (particularly first-page or reloads). Whether or not tooling is good enough to allow this (either with JS or compilers) is another issue. ------ Clobbersmith We use NodeJS pretty extensively at Yahoo for both front end and back end services and it works well. 
While some of the complaints are valid, it's not worth flipping tables over. Promises are standard in ES6 and it is "the way" to handle errors. At least if you want to stay sane. ------ btomar Though I agree with what you all explained with the big issues with NodeJS, you have to understand it's not all of JavaScript. It's only server-side JS. The creators have clarified: for heavy CPU-intensive services, go away from NodeJS. As far as frontend is concerned, AngularJS and React are just sugar candy for user interaction and structure. Node filled the gap with an async network application in pure JS without the heaviness of Python or traditional languages. It is a hack as in every day people are finding new ways to use NodeJS but I agree, there have to be best practices and less boilerplate (plus fewer silly npm packages for trivial JS tasks). ------ nodesocket I prefer to just use async ([https://github.com/caolan/async](https://github.com/caolan/async)) for all control and error flow. Namely async.auto() can do almost any crazy flow. ------ dustinlakin I think the switch to an async back-end can be more initial work than many expect. It may take some time to feel as productive, but promises are powerful and became a game changer for me over my previous work with callbacks. Error handling also becomes manageable. What I really enjoy is jumping into a new community and getting to work with the tools that have been built. Choosing the right ones can make or break an experience. I personally enjoyed working with Express and Bookshelf.js/Knex. I appreciate the author's perspective, but I also don't think this should deter anyone from trying out Node. I personally have no overwhelming preference to using a Python or Javascript stack. ~~~ xentronium > Bookshelf.js/Knex After ruby orms, Bookshelf felt very, very underdeveloped. Anything I tried to do beyond "hello world" only brought me pain - dealing with associations especially, but honestly just about everything. 
I guess I've been spoiled, but getting anything done in the express/bookshelf combo seemed like a chore. ------ billmalarky Just FYI, the Koa framework makes node sane again. Callback hell and error handling are no longer issues in node if you just embrace generators or async/await. We use Koa in production and serve billions of requests just fine. If I had to go back to Express I'd say no. ------ velox_io There's a very fine line between removing too much essential complexity and leaving programmers with too many limitations. Or the other end, where the language overhead overshadows the original task at hand. For instance multithreading should be handled by languages and frameworks 99.9% of the time. Take reading a file (something that should be handled by the language/framework). Read it asynchronously, structure any dependent code clearly (which JavaScript does pretty well), and if any problems are detected, display/log the appropriate error. You shouldn't be neck-deep in callbacks. ------ spriggan3 The biggest problem is dealing with callbacks (and yes, even promises use callbacks, and generators need to be wrapped in a co-routine framework in order to work as co-routines). I want to write a quick script doing some busy work; I now have to think about synchronicity even though the script does not need to be non-blocking. Of course in these circumstances, I want to move back to Ruby or Python, which actually let me code the thing I want to code without forcing callbacks on me. So when you have to do 20 i/o operations in sequence, using nodejs becomes really tedious. ------ jtchang Node is one of those platforms that makes me shudder every time I go looking for solid best practices. It seems to change every 3 months. It's scary (and cool) how fast things are changing. ------ peterashford I think the basic problem is that JS is not the right solution for every task but for the web, it's often the only tool available. I had similar experiences to the OP. 
I had lots of code written in JS that I was happy with to some extent, but all of it would have been easier / cleaner / more maintainable in a less crap language. ------ arisAlexis Using babel without async/await is a mistake IMO; async/await should solve almost all your problems and errors. Also the main argument for using js in the back end is that you have the same team working in the front-end too and in any case in the same language. ------ throwanem tl;dr: "I really miss Python's bondage and discipline, so I'm going back to it, and I'm going to hate on Node on my way out because I genuinely can't wrap my head around the idea of a paradigm other than the one with which I'm familiar and comfortable." It's not even about Node. It's about anything that isn't Python, and doesn't have the Python community's strong "there is exactly one way to do it" tradition. I'm glad OP has realized that's what works for him. It's a shame he lacks the perspective to understand that it's _about_ him. ------ igl Async functions are the answer to all his problems. I generally agree on his conclusion though: Do small things, don't build big systems. I hate that JS tries to be this OO-FP hybrid. Jack of all trades, master of none. ------ swivelmaster This articulates my fears around switching to node very clearly. I've played with it in the past and felt like it would become too difficult to manage with a large project and lots of contributors. ------ paulftw A year ago one of the reasons to leave Python was poor MongoDB & JSON support; 12 months later the same author complains about the lack of a decent SQL ORM library in js <scratching my head/> While JS is far from perfect, his problem was he was a little bit ahead of time - babel, standard promises & sequelize solve most of the problems. 
I think Python is still superior for serverside, but JS isn't as bad as portrayed in this post, if you slow down for a couple of weeks to learn how to use it properly (just like any other popular language or framework). ------ sb8244 I mused on similar things after getting a _very_ small project shipped. It is based on my experience with Node, Ruby, and Node as of 4 years ago (when I first learned it). [https://medium.com/@yoooodaaaa/reflections-on-node-698abecce...](https://medium.com/@yoooodaaaa/reflections-on-node-698abecce1b3) ------ z3t4 I think the problem is they chased the latest and greatest. Tips: Always throw errors! Use named functions! ------ jv22222 Uber has done pretty well using node as their core dispatch architecture. ------ tempodox I find this a valid assessment of Node and JS. ------ co_dh welcome back :) ------ vacri Small company ops guy here - a bit over a year ago, I used to joke with my devs, asking them "So, what's this month's recommended way of installing node?". More recently we've been experiencing dependency hell, which is exacerbated by our small team not having enough time to upgrade to the latest LTS release. Node is definitely a language that you have to _manage_. It doesn't sit in the background and let you get on with writing stuff. Is it the right tool for X or Y? I can't say, I'm not a dev. But it does require a lot of hand-holding and keeping current with the zeitgeist. ------ hasenj Everyone and their aunt wants to publish an npm module. That's why you get a ton of badly written, poorly thought-out packages. ------ wizard_class doesn't node support modules? that's a way to avoid callback hell ~~~ fredrb How so? ------ JabavuAdams Thanks for this. The Node / Javascript ecosystem just seems like one I don't want to be a part of. I feel so lucky to have avoided the whole disgusting web stack. ~~~ bdcravens Like the author said, Node isn't a great fit for certain apps. It sounds like yours is one of them. 
------ bricss Author plz, uninstall Node.js, use your so lovely Python and don't make people concerned. ~~~ tobltobs Because being concerned is a bad thing? ------ santoshalper Node.js, meteor, et al. are a great example of what happens when you let inexperienced developers design a platform and run an ecosystem. Almost every ounce of focus in this community goes to increasing developer productivity. Operational concerns like scalability, security, and monitoring are given the bare minimum of focus. In many years of programming in many languages on many platforms I have never seen a worse platform and ecosystem than Node.js in a very popular language (obviously some obscure languages hardly have anything built around them at all). I really think it is setting the programming world back a lot. Not to mention what a thoroughly shitty language JavaScript is. Being able to code your web app in one language is a neat trick, and the ability to talk back and forth between client and server so seamlessly does kind of feel like magic, but otherwise, this is a meaningfully worse language/platform than Visual Basic. ------ jondubois - The dependency instability can be avoided by specifying specific versions of your dependencies inside your package.json. - The fact that the ecosystem is evolving quickly is a good thing. Node.js is still one of the fastest growing software development platforms according to Google trends so you should expect it to change faster than other ecosystems. - ORMs suck (in every language) - they always sucked; ORMs are a massive hack intended to fix the impedance mismatch between relational DBs and RESTful APIs. If you used a NoSQL DB with Node.js (such as RethinkDB), your life would be much easier. Nobody in the Node.js community except your grandma cares about ORMs because they're considered legacy technology. If you don't like it, then you can stick to your COBOL and Oracle database. - Node.js lets you choose how you want to handle errors. 
JavaScript offers an Error class which exposes a name property (which you can overwrite for each error type) and you can also attach custom properties to Error objects (to carry back more detailed info specific to each Error type). JavaScript is really easy to serialize/deserialize so you can even design your error handling system to be isomorphic (same error handling on the client/browser and server). I really enjoy error handling in Node.js/JavaScript - You just have to put some thought into it. \- The way of writing async logic is changing. The fact that Node.js is keeping up is a good thing. There is no single right way to handle async logic. Most well-maintained libraries will keep slowly evolving to use the newer features of the language but it's mostly backwards compatible (many libraries support both callbacks and promises). \- The standards are not bad; they're evolving and you can choose your own styleguides for your projects. Most JS developers will adapt to new styleguides as they move between companies/projects. ~~~ flamethroeaway Is package.json webscale?
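The error-handling pattern jondubois describes — subclassing `Error`, overwriting `name` per error type, attaching custom properties, and keeping the whole thing serializable for client/server use — can be sketched like this (a minimal illustration; `ValidationError` and its `field` property are invented for the example, not from any comment above):

```javascript
// A custom error type: distinct name, an extra property carrying detail,
// and a toJSON() so it serializes cleanly across the client/server boundary.
class ValidationError extends Error {
  constructor(message, field) {
    super(message);
    this.name = 'ValidationError'; // overwrite the default 'Error' name
    this.field = field;            // custom property with error-specific info
  }
  toJSON() {
    return { name: this.name, message: this.message, field: this.field };
  }
}

const err = new ValidationError('email is required', 'email');
// JSON round-trips without losing the custom detail:
console.log(JSON.stringify(err.toJSON()));
```

Because the serialized form is plain JSON, the same error-handling code can run in the browser and on the server — the "isomorphic" handling the comment mentions.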
Pouring rain does not permit the start of race 3

Pouring rain kept the cars behind the safety car until the race was definitively stopped; it could not be restarted due to the terrible weather conditions. No points were assigned for this race 3, which saw Blomqvist (KIC Motorsport) start on pole in front of Vesti (Prema Powerteam) and Sophia Floersch (Van Amersfoort Racing AV). The ACI Racing Weekend, certainly hit hard by the bad weather, sees Frederik Vesti (Prema Powerteam) maintaining his first position in the championship with 103 points, in front of Enzo Fittipaldi with 97 points and David Schumacher with 58. Same standings for the Rookie Trophy. Among the teams, Prema Powerteam leads with 200 points in front of DR Formula by RP Motorsport with 76 and US Racing with 73. Next racing weekend at the Hungaroring, 6 and 7 July.
Ramsey Orta took plea deal on unrelated charges but says police harassed him after filming officers killing his friend. On July 17, 2014, Ramsey Orta took out his mobile phone and filmed a police officer in New York killing his friend, Eric Garner. But as soon as he stopped recording, Orta says his own life also took a dramatic turn for the worse. Viewed millions of times, Orta’s clip shows Daniel Pantaleo, a white officer, gripping his arms around Garner’s neck in a chokehold. Garner, a black American, was 43 years old at the time, and an asthmatic. “I can’t breathe. I can’t breathe. I can’t breathe. I can’t breathe. I can’t breathe. I can’t breathe. I can’t breathe. I can’t breathe,” Garner said, as he was being pinned to the ground and asphyxiated. They were his last words. Garner, a father of six, was selling loose cigarettes in Staten Island, New York, when officers tackled him. His case was ruled as a homicide, meaning that his death was caused by human beings, but Pantaleo was not indicted. In 2015, Garner’s family reached a $5.9m settlement with the city of New York. Orta’s recording of the killing has been praised by many for bringing to light police brutality, and setting off what has been described as a citizen journalism trend exposing injustices. But ever since releasing the footage of Garner’s killing, Orta, 25, says he has become the target of police retaliation. ‘Behind enemy lines’ On Monday, Orta will begin a four-year prison sentence, after taking a plea deal in July for a weapons and drug case. It is the result, he and his lawyers argue, of a police campaign to harm his life. After filming Garner’s death, they claim, he was increasingly harassed and targeted by police and was arrested at least eight times in fewer than two years. Of several criminal cases against him, only two charges stuck. Two weeks after filming Garner’s death, Orta was arrested on charges of possessing a handgun and was later caught selling heroin to an undercover policeman. 
“[Hours after] Eric died, at 4am in the morning, there was a spotlight shining through my window. I looked out the window and there was a cop [police] car outside,” Orta told Al Jazeera on Friday. “They parked outside my house and stopped people coming in and out of my house. That was going on until the day they ruled it [Garner’s case] a homicide. I’ve been arrested and let out many times. And now I am convicted of only two of seven cases.” According to reports, Orta is suing New York City for $10m for unwarranted arrests by the NYPD that he says were attempts to discredit his video of Garner’s final moments. Al Jazeera contacted New York City police for comment, but did not receive a response at time of publication. In August 2014, Pat Lynch, president of New York’s biggest police union, said it “is criminals like Mr. Orta who carry illegal firearms who stand to benefit the most by demonising the good work of police officers”. Orta has been diagnosed with post-traumatic stress disorder, and suffers from depression, anxiety and paranoia. “My biggest fear about prison would be not coming out alive. I fear for myself being behind enemy lines,” he said. “I’m going in there with a level head. I’m praying that I can come right out and continue my life as an activist.” Since Garner’s death, Orta joined the police watchdog organisation Copwatch, has given talks at universities, and become a symbol of the Black Lives Matter movement. At a recent event in Brooklyn, New York, Jewel Miller, the mother of Garner’s youngest child, told Orta: “You took the video … you really filmed up to the last seven and something minutes that he was here on Earth. And even though those words of ‘I can’t breathe’ are in our heads … it is the only voice for my daughter she’ll ever know. And because of you I’ll forever be grateful. 
Thank you, thank you, thank you.” Orta, a husband and father to two daughters, said he watches the video often. “I watched it the day before yesterday,” he said. “It just stays in my head. I try not to watch certain parts.” While he does not regret filming the killing, he wishes he had posted the clip anonymously. “The only regret I have is not making my identity safe,” he said. Still grieving the loss of Garner, he said: “I miss his sense of humour the most.”

‘Shattering the myth of racial equality’

Orta is among several citizen journalists who say they have been hounded by police, including those who filmed the recent deaths of Alton Sterling, Philando Castile and Freddie Gray, which sparked a wave of protests across the US. In August, filmmaker David Sutcliffe wrote an open letter in favour of the “right to record”, which was signed by more than 100 documentarians, including Asif Kapadia, Laura Poitras and Nick Broomfield. “Armed only with camera phones, citizen journalists have shattered America’s myth of racial equality,” the letter said. “Instead of garnering Pulitzers and Peabodys, they have been targeted, harassed and arrested by members of the very institution whose abuses they seek to expose.” Shaun King, a New York-based journalist focusing on justice, told Al Jazeera that harassment was not uncommon. “I have seen many cases where people who film police are unlawfully targeted and harassed by them in response – sometimes for months or even years as a result,” he said. “My question is always this: what are you afraid of? Why does being filmed bother you so much? It’s our right to film the police. In fact, if you ever see police in action and you have the time to film them, do so.” A petition by the American Civil Liberties Union calling on US Attorney General Loretta Lynch to investigate harassment cases has gathered almost 21,000 signatures. 
‘Vicious intimidation’

Stanley Cohen, a New York-based lawyer and former social worker who in the 1980s held community cohesion sessions with the city’s police departments, said that Orta’s case was an example of “vicious, retaliatory and vindictive” intimidation. “They want to create an environment where people are terrified to speak up and out and be good citizens,” he told Al Jazeera. “It’s [harassment] not so much to undo the events of the murder of Garner as it is to deter the next [filming of a police killing].” He added that after Garner’s death, he felt a glimmer of hope. “I had hoped, naively, that the Garner situation would change the relationship between police and community. It did for a short run, but more out of police concern of an explosion. Recently, it seems to be business as usual. There are more stories of the arrogant, abusive attitudes of cops in communities they control … When you combine the militarisation of police with citizen journalists, you get a toxic confrontation.”

Ramsey Orta, 25, pictured at a shrine to commemorate Eric Garner, at the Staten Island location where he was killed by police [Ramsey Orta/Al Jazeera]

According to Mapping Violence, police have killed at least 217 black people so far this year. Last year, they killed at least 346 black people. As he prepared for jail, Orta said he has little hope for the near future. “I expected this [police killings] to end up where it is now; it’s only gotten worse since it started. I knew from our past history that that video wasn’t going to change anything,” he said. “I don’t want my situation to be a deterrent to people who continue to film, though. I encourage others to take a stand.”

Follow Anealla Safdar on Twitter: @anealla
The Magic Formula Fallacy

Just because it works inside your head, doesn’t mean it actually works

It seems to be a staple of geek (or non-geek come to that) politics. The Magic Formula.

– “If we lower taxes on the wealthy, the country will become richer, because the wealthy create wealth”
– “If we execute murderers, it will reduce murder, because that’s one less murderer off the streets”
– “If we legalise pot, drug use will go up, because pot is a gateway drug”
– “It’s ok to torture people because if there’s a ticking bomb like on TV, it might save lives”
– “Anyone who works hard can become rich”

Some are obviously less based-on-lies than others. Some are true – but only within a limited perspective, outside of which they become a lie. eg: “Anyone who works hard can become rich” – yup, but that’s not the point. The point is that the system is set up so most people don’t become rich regardless of how hard they work. Rags to Riches stories are not the basis of sound social or economic policy.

Anyway – Magic Formulas. For geeks it seems to be Libertarianism… particularly American Geeks. Geeks love Game-Theory. Geeks don’t like having to deal with what actual people actually do. Libertarianism is a web 2.0-era computer-programmer attempt to “write” a utopianism program. It’s like distributed Das Kapital for people whose only foray outside the sphere of cable TV is internet chat rooms.

Libertarianism is a classic Magic Formula. All theory; no evidence. In fact it isn’t even a theory, because it isn’t falsifiable. In fact it isn’t even an hypothesis, because it’s not based on observation. It’s a magic formula. The Austrian School specifically ignores empirical evidence (look it up), so no pesky reality need get in the way… which is handy, because I’ve yet to see a single case of libertarianism actually working anywhere. You can see a gradient in fact: the better the social spending, the better off and happier the people.

Tax is not fucking theft. 
Tax is us, thinking in terms of “Us” rather than “Me”. If you look at evidence, what you see is that the opposite of libertarianism seems to work best. The fact that the government has become corrupted by corporate money, doesn’t mean that liberal, secular democracies, with an easily measurable, transparent balance of taxation, don’t produce the best results that we’ve ever seen in the whole of recorded human history. They do. Libertarianism is a magic formula whose basic fallacy comes down to the idea that “people are individuals”, when actually, on the whole, they’re not. Clay Shirky gave us an example of this in one of his talks… parents picking their kids up from school, some would turn up late – so somebody thought it would be a good idea to issue a small fine. That’s the magic formula. “If you financially penalise someone for a behaviour, the behaviour will decline”. Unbeatable… in the tiny mind of the person whose idea it was. What actually happened was that “the fine” became the price of being late, which was easier for people to pay, than breaking the unspoken social contract they had before, so lateness increased. Unfortunately, when they tried to put things back the way they were before, it made no difference. The social contract once broken, stayed broken. So what have we got? We’ve got a million different laws based on magic formulas (you have to be a lawyer to understand them all)… we’ve got a whole ecosystem of discourse that is only allowed (by the broadcast media) to take place within the frameworks of magic formulas… and we’ve got well-meaning people, including geeks (who are scientists, so should really know better) providing “solutions” that are all magic formulas. All conservative political policy is based on Magic Formulas. — Just because it works inside your head, doesn’t mean it actually works. Just because it works inside your head, doesn’t mean it actually works. Just because it works inside your head, doesn’t mean it actually works. 
Here’s an idea: If a theory isn’t falsifiable, then it shouldn’t be part of any policy conversation. Policy without evidence needs to be recognised as being speculative – and experiment is great, we should definitely experiment… but policy without evidence needs to be stamped “SPECULATIVE” with a specific expiration-date built into it, so it is rolled back if it fails to meet specific measurable goals. And to minimise the damage that speculative policy can do, it should be conducted on a small scale, with specific focus given to The Amsterdam Effect*. — Never talk about “belief” – just show us the data. Show us your methodology for collecting it. Leave the interpretation to us. Thanks. *The Amsterdam Effect is what happens when an experiment is not carried out in isolation – when your neighbours swamp your results. A similar thing happens in Estonia – whose relative prosperity is not because of their flat-tax rate, but because of their flat-tax rate AND their proximity to Scandinavia. Stats are stats – it’s an imperfect science… but at least the simple act of trying to measure something shifts the emphasis from “what should happen” to “what does happen”. Mind you, if you’re talking about social-policy, then the simple act of measuring can have a fairly profound effect on what actually happens. New Labour’s fascination with measuring the fuck out of everything that happens in education was possibly not such a great idea. It basically meant that parents could see which schools produced the best results, which pushed house-prices up in those areas, which effectively created a class-system based on wealth, which was the opposite of what Labour should have been about. Education is subtle and tricky. Crime, economics etc etc, not so much. Measuring means learning from measuring, but economics and politics are intertwined and economics is no science (hence you have to go to Norway to receive a Nobel Prize in Economics while the real sciences go to Sweden). 
It is more of a religion. They do measure the crap out of every “economic parameter” known to man and perform some really nifty algorithms in order to evaluate risk, for instance using Black-Scholes (which is not that much more than a complex Monte Carlo analysis), but they then forget that Mr. Scholes’ hedge fund, which was based on these principles, tanked in 1998 (only a 4.8 billion write-off, what are we talking about). So in 2007 I had to test software based on these functions: and in order to test the software I needed something to test against, so I created an Excel sheet in order to do that (integrating the Monte Carlo tables into the sheet). Now I also knew the limitations of the model full well. It uses market data and these do not take into account the full spectrum of all the possible market moves. Outside the spectrum, risks are never taken into account. So if something out of the ordinary happens (like when the East Asian markets crash), option holders are exposed to much more risk than they would like, resulting in even more catastrophic failures, because of rash moves (of men and machine these days). And that is only one of the problems. What these people basically do is treat people as if they behaved like Brownian bodies, which have no notion of their surroundings and bump into each other in a completely stochastic manner. But people do not behave like Brownian bodies. People behave like birds, insects or fish in a swarm. No one controls the swarm but it creates a pattern of its own, due to the fact that individual particles of the swarm react to each other in predictable ways (keeping closeness and distance as constant as possible and all striving to be at the center of the swarm) and transfer this reaction to other parts of the swarm. Unpredictability is introduced due to the fact that these reactions are never perfect (if only due to the fact that not all particles can be at the middle of the flock). Swarm mentality can be very volatile. 
A small input (like a hawk appearing on the horizon) can cause a small reaction in a few birds and the swarm can change its general course in an instant, thus taking birds out of the danger zone that have themselves never seen the hawk. So a small input gets amplified due to the herd behaviour. In a Brownian system this would never happen. A Brownian system would absorb the input and distribute it like a ripple in a pond. Economists should take time off and study swarm and herd behaviour and learn to include these in their models. Ah, and the trickle-down economics part of Mr. Milton Friedman is explained to the general voter via this little (stupid) comparison: “If water rises in a harbor, the big boats and the little boats rise alike.” Now any skipper who has put his boat in a natural tidal harbor knows that is only true from the time that there is enough water to float all the boats to begin with. In a tidal harbor, boats get lifted out of the water one by one, and guess what, Mr. Friedman, the small boats float first and the big ones later (needing more water to float). So in an economy where there isn’t an unlimited supply of money (water), small can be better than big. Only when you make a channel for the big boats (and thus steal the small boats’ water) do big boats float earlier, but then the small boats rest on the mudflats until a spring tide (or an economic bubble) comes along……but spring tides usually disappear as fast as they came………now that is exactly what happens in the neo-con economic model. All its parameters are skewed in order to steal the water from the small boats. Tax cuts for bigger companies result in big companies being able to evade taxes (legally). Outsourcing benefits big companies more than small ones. More advertising potential, and so on, and so on. The list is endless. 
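The cross-check the commenter describes — validating closed-form Black-Scholes option prices against an independent Monte Carlo calculation — can be sketched in a few lines of JavaScript. This is a hedged illustration, not the commenter's actual spreadsheet; all market parameters (spot 100, strike 100, rate 5%, volatility 20%, one year) are invented for the example:

```javascript
// Standard normal CDF via the Abramowitz–Stegun polynomial approximation.
function normCdf(x) {
  const t = 1 / (1 + 0.2316419 * Math.abs(x));
  const d = 0.3989423 * Math.exp(-x * x / 2);
  const p = d * t * (0.319381530 + t * (-0.356563782 + t * (1.781477937 +
            t * (-1.821255978 + t * 1.330274429))));
  return x > 0 ? 1 - p : p;
}

// Closed-form Black-Scholes price of a European call.
function blackScholesCall(S0, K, r, sigma, T) {
  const d1 = (Math.log(S0 / K) + (r + sigma * sigma / 2) * T) / (sigma * Math.sqrt(T));
  const d2 = d1 - sigma * Math.sqrt(T);
  return S0 * normCdf(d1) - K * Math.exp(-r * T) * normCdf(d2);
}

// Monte Carlo price: simulate terminal prices under the same lognormal
// model, average the discounted payoffs.
function monteCarloCall(S0, K, r, sigma, T, n) {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    // Box–Muller draw of a standard normal (1 - random() avoids log(0)).
    const z = Math.sqrt(-2 * Math.log(1 - Math.random())) *
              Math.cos(2 * Math.PI * Math.random());
    const ST = S0 * Math.exp((r - sigma * sigma / 2) * T + sigma * Math.sqrt(T) * z);
    sum += Math.max(ST - K, 0);
  }
  return Math.exp(-r * T) * (sum / n);
}

const exact = blackScholesCall(100, 100, 0.05, 0.2, 1);
const mc = monteCarloCall(100, 100, 0.05, 0.2, 1, 200000);
console.log(exact.toFixed(2), mc.toFixed(2)); // the two should agree to within a few cents
```

The point of the exercise is exactly the commenter's: both numbers come from the same lognormal model, so agreement only validates the implementation, not the model's assumptions about how markets (or swarms of people) actually behave.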
But… if we were to look at gun control from an evidence-based POV, What measurable results do you want out of gun law? What other countries achieve those results? What are their laws? Has the US achieved similar results in the past? How? Is there a way that these laws can be trialled on a small scale? Firstly define your desired results, then experiment with how to achieve them… but at least define your desired results… because at the moment, you’re splitting the country with carefully crafted lies, like the idea that there is “A Liberal Agenda”. So for example: “We want US gun deaths below the average of other OECD countries” (rather than being worse than the 3rd world, which is what we have now) Or: “We want to reduce the power of the state to intrude into our lives – in a way that is clear, codified and transparent” Or: “We want to create a resiliant citizen-based militia” all of these are reasonable, rational goals that don’t split the country into liberals/conservatives. There is absolutely no reason on earth why these shouldn’t be achieved – but you’re not going to achieve them if you don’t define them. As it is, you’re not – your’e prattling on about “Liberal Agendas” and the rest of the world, quite rightly is looking on in a kind of shocked bemusement as the country that has ALL the advantages, rapidly stupids itself to death. For the record the “American Gun Control Debate” is one of the prime reasons the rest of the world has managed to get this idea that Americans are stupid. I promise you, you’re not getting this one right.
Learning points
===============

-   Management decisions appropriate for a non-athlete might be inappropriate in an athlete, as they may result in disqualification or financial loss, or put the athlete at additional risk of complications.
-   Cardiologists with expertise in sports cardiology should be involved in the management of athletes, as it is often complex and requires a holistic approach.
-   Left atrial appendage occlusion devices can play an important role in reducing stroke risk in selected cases.

Introduction
============

Caring for athletes with cardiac disease requires an approach that caters to the specific needs of the individual. Often athletes require their care to fit around training and competition requirements, and this can come into conflict with the best care their clinicians feel they can offer. Medications and interventions with proven symptomatic and prognostic benefit may affect athletes' performance and lead to poor adherence. Moreover, they may result in disqualification from competitive sports, which is likely to carry both personal and financial consequences. In some individuals, engaging in demanding physical activity and competitive sports against medical advice may carry significant health risks. Therefore, shared decision-making is vital and alternative management strategies are often warranted to ensure appropriate adherence to prescribed treatment. This case illustrates this conflict in a professional rugby player with a cardiomyopathy and atrial fibrillation (AF) on anticoagulation who wished to continue to play, and discusses how an alternative approach was able to optimize his care. 
Timeline
========

  Date            Events
  --------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  November 2017   Admission with decompensated heart failure and new diagnosis of atrial fibrillation (AF). Transthoracic echocardiogram (TTE) showed evidence of left ventricular (LV) dilatation and severe LV systolic dysfunction. Discharged on heart failure medications and anticoagulation for stroke prevention.
  January 2018    Unobstructed coronary arteries on angiography. Early recurrence of AF following direct current cardioversion.
  February 2018   Left ventricular systolic function remained severely impaired on outpatient TTE.
  April 2018      Cardiac magnetic resonance showed dilated cardiomyopathy with ejection fraction (EF) 37%. No scarring or fibrosis seen on late gadolinium enhancement.
  July 2018       Right arm weakness and paraesthesia in keeping with transient ischaemic attack. Screening for connective tissue disorders, HIV, syphilis, and Fabry disease was negative.
  October 2018    Left ventricular systolic function returned to 'near normal' (EF 50%). Euvolaemic. Advised not to play rugby due to high bleeding risk on anticoagulation.
  November 2018   Playing rugby on anticoagulation. Referred for consideration of left atrial appendage occlusion (LAAO) device.
  July 2019       Successful LAAO device implantation. Anticoagulants stopped.

Case presentation
=================

A 27-year-old male professional rugby player was admitted to his local district general hospital with a 2-day history of chest tightness and breathlessness. He had no other significant past medical history and was not taking any regular medications. He admitted to regularly taking cocaine and performance-enhancing steroids. He was haemodynamically stable with normal saturation. 
The main findings on physical examination were an irregularly irregular pulse and bibasal crackles. His admission electrocardiogram (ECG) showed AF and his chest X-ray findings were in keeping with pulmonary oedema. Initial blood tests were within normal range. Transthoracic echocardiogram (TTE) revealed bi-atrial dilatation and a moderately dilated left ventricle (left ventricular end diastolic diameter 7 cm) with mild concentric left ventricular (LV) hypertrophy and an ejection fraction (EF) of 35--40%. He was acutely managed with intravenous diuretics and initiated on evidence-based heart failure medications including beta-blockers and angiotensin-converting enzyme inhibitors. In view of his drug history, a working diagnosis of drug-induced dilated cardiomyopathy (DCM) was made. Once stabilized, he was discharged on bisoprolol 2.5 mg and ramipril 2.5 mg. In anticipation of a direct current cardioversion (DCCV), he was started on rivaroxaban 20 mg. He was counselled not to participate in any competitive sports. To further investigate his cardiomyopathy, he underwent an outpatient coronary angiogram which revealed unobstructed coronary arteries. In addition, a cardiac magnetic resonance (CMR) confirmed a dilated left ventricle with globally impaired systolic function and a calculated EF of 37%. There was no evidence of scarring or fibrosis on delayed enhancement images. He was unable to maintain sinus rhythm following DCCV and relapsed back into persistent AF. His CHA~2~DS~2~-VASc score was 1 (LV dysfunction), which does not strictly mandate anticoagulation; however, he made an informed decision to continue rivaroxaban. On a follow-up TTE performed 3 months later, LV systolic function remained unchanged. His ramipril was increased and he was initiated on eplerenone. Left ventricular function gradually improved on optimal medical therapy and, at 8 months of follow-up, had returned to near-normal (EF 50--55%). 
He remained in AF and experienced a brief episode of left arm weakness and paraesthesia suggestive of a transient ischaemic attack (TIA) despite being compliant with anticoagulation. As his CHA~2~DS~2~-VASc score increased to 3 (TIA, LV dysfunction) he now had a clear indication for anticoagulation. Connective tissue disorders, syphilis, and Fabry disease screening were negative. At 1-year follow-up, he was asymptomatic (New York Heart Association 1) but remained in AF and continued to participate in competitive rugby, whilst on oral anticoagulation despite counselling regarding the high bleeding risk. He was reluctant to terminate his professional rugby career prematurely and sought alternative stroke prevention strategies that obviated the need for continuous anticoagulation. He was referred to a Tertiary Cardiology Centre for further management of his AF and consideration of a left atrial appendage occlusion (LAAO) device. A 27 mm Watchman Flx (Boston Scientific, MA, USA) device was successfully deployed under general anaesthetic in the left atrial appendage with a good seal and no leaks ([*Figures 1*--*4*](#ytz242-F1){ref-type="fig"}, [Supplementary material](#sup1){ref-type="supplementary-material"}). He was discharged home on a 6-week course of aspirin and clopidogrel therapy with a follow-up transoesophageal echocardiogram to assess LAAO device position and guide cessation of antiplatelet strategy.

![Watchman implant procedure under transoesophageal echocardiogram and fluoroscopy guidance. (*A*) Contrast injection delineating the left atrial appendage (black arrow). (*B*) Watchman device (white interrupted arrow) is deployed but still connected to the delivery system. 
(*C*) Watchman (white interrupted arrow) successfully deployed with no contrast entering the left atrial appendage (black arrow).](ytz242f1){#ytz242-F1}

![Transoesophageal echocardiogram at 75° (mid-oesophageal) showing the left atrial appendage (black interrupted arrow).](ytz242f2){#ytz242-F2}

![Transoesophageal echocardiogram at 95° (mid-oesophageal). (*A*) Watchman device (yellow arrows) implanted in the left atrial appendage (black interrupted arrow). (*B*) Colour flow Doppler shows a successful deployment of the Watchman device (yellow arrows) with a good seal of the left atrial appendage (black interrupted arrow) and no residual leaks.](ytz242f3){#ytz242-F3}

![Transoesophageal echocardiogram showing a 3D reconstruction of the Watchman device.](ytz242f4){#ytz242-F4}

Discussion
==========

Recommendations regarding participation in competitive sports should be given following comprehensive evaluation of the athlete's disease characteristics and a thorough risk assessment. The work-up includes a 12-lead ECG, echocardiography, CMR, a 24-h Holter monitor, and cardiopulmonary exercise testing. Disqualification from competitive sports is likely to carry both personal and financial consequences for athletes.[@ytz242-B1] Ensuring that the athlete is involved in the decision-making process is therefore paramount. This athlete was strongly advised not to engage in any competitive sports, in line with the European Association of Preventive Cardiology (EAPC) recommendations. The EAPC position paper states that athletes with DCM should not participate in competitive sports if any of the following are present[@ytz242-B1]:

-   symptoms, or
-   ejection fraction \<40%, or
-   extensive late gadolinium enhancement (i.e. \>20%) on CMR, and/or
-   frequent/complex ventricular tachyarrhythmias on ambulatory ECG monitoring and exercise testing, or
-   a history of unexplained syncope.

Once his LV systolic function had improved, there was no further restriction on exercise from a cardiomyopathy perspective. 
On admission, his CHA~2~DS~2~-VASc score was 1 (LV dysfunction), which is not a strict indication to initiate oral anticoagulation \[Class IIA, level of evidence (LOE B)\] as the evidence supporting a net clinical benefit of oral anticoagulation in patients with a single stroke risk factor (excluding gender) is limited.[@ytz242-B2] Oral anticoagulation was started in anticipation of a DCCV and, after appropriate counselling, he made an informed decision to continue on rivaroxaban. However, his CHA~2~DS~2~-VASc increased to 3 (TIA, LV dysfunction) and he had a definitive indication to continue long-term anticoagulation (Class I, LOE A).[@ytz242-B2] As a professional rugby player, he was susceptible to repeated trauma. Avoidance of playing rugby competitively was warranted due to the high risk of bleeding whilst on anticoagulation. If, despite counselling, athletes continue to participate in full-contact sports, advice should be offered in order to mitigate the bleeding risks. In this case, the most viable management strategy that would simultaneously enable adequate stroke prevention and a return to his professional career was an LAAO device. An LAAO device may be considered for stroke prevention in patients with AF with contra-indications to long-term anticoagulation (Class IIb, LOE B).[@ytz242-B2] The most widely used catheter-based devices are the Watchman (Boston Scientific, MA, USA) and AMULET (St. Jude Medical, MN, USA). Both are self-expanding devices deployed via a percutaneous, endocardial approach.[@ytz242-B5] Alternatively, if the left atrial appendage anatomy is deemed unsuitable, the Lariat (SentreHeart, CA, USA) combines an epicardial and endocardial technique to ligate the left atrial appendage.[@ytz242-B6] In the last decade, a large body of evidence of their efficacy and safety has emerged mainly through large multicentre global registries. 
Only the Watchman has been compared to vitamin K antagonists (VKAs) in two non-inferiority, randomized controlled trials, PROTECT AF, and PREVAIL.[@ytz242-B7]^,^[@ytz242-B8] In both trials, the Watchman was non-inferior to VKA for the composite primary endpoint of stroke, systemic embolism, and cardiovascular or unexplained death. Further supporting these findings, a meta-analysis combining the data of PROTECT AF and PREVAIL with two registries showed an 80% reduction in the risk of haemorrhagic stroke and a 50% reduction in the risk of cardiovascular/unexplained death when compared with VKA.[@ytz242-B9] The risk of AF in athletes appears to have a U-shaped dose--response curve to exercise, from being protective in low-intensity training to a marked increase in high-intensity endurance athletes.[@ytz242-B2]^,^[@ytz242-B10]^,^[@ytz242-B11] The pathophysiology is unclear but likely results from a complex interaction of atrial remodelling, inflammation, and increased vagal tone.[@ytz242-B12] The recommended initial approach is to assess response to a period of deconditioning for 2 months; an approach not always acceptable to athletes.[@ytz242-B13] A rhythm control strategy, including catheter ablation, may be pursued to mitigate significant AF related symptoms or to preclude the use of antiarrhythmic drugs which may impair performance or be prohibited.[@ytz242-B2] This athlete had an early recurrence of AF following DCCV and deconditioning that did not reduce his AF burden. A catheter ablation was considered but not indicated as he was asymptomatic, beta-blockers were not affecting his performance and there was no evidence of tachycardia-associated cardiomyopathy. 
Catheter ablation has not been shown to lower the risk of stroke (CABANA trial) and current international guidelines highlight that it is a treatment for symptoms and cannot be used as a means to stop OAC in patients with a high-risk profile.[@ytz242-B2]^,^[@ytz242-B14]

Conclusion
==========

Management of athletes presenting with cardiomyopathy and AF is often challenging. Expert opinion should be sought, and guidance should be individualized. Management options aim to minimize the risks to the athletes if they choose to return to competitive sports. It is reasonable to consider an LAAO device in athletes with AF and a risk profile that would normally warrant oral anticoagulation who are competing in contact sports.

Lead author biography
=====================

![](ytz242f5){#ytz242-F5}

Dr Andre Briosa e Gala graduated from Charles University of Prague in 2011. He completed his Foundation and Core Medical Training in the Oxford Deanery, attaining membership of the Royal College of Physicians (UK) in 2015. In 2016, he started his Cardiology training at the University Hospital of Southampton. He is currently an electrophysiology clinical research fellow in the Oxford University Hospitals with a research interest in atrial fibrillation.

Supplementary material
======================

[Supplementary material](#sup1){ref-type="supplementary-material"} is available at *European Heart Journal - Case Reports* online.

**Slide sets:** A fully edited slide set detailing this case and suitable for local presentation is available online as [Supplementary data](#sup1){ref-type="supplementary-material"}.

**Consent:** The author/s confirm that written consent for submission and publication of this case report including image(s) and associated text has been obtained from the patient in line with COPE guidance.

**Conflict of interest:** none declared.

Supplementary Material
======================

###### Click here for additional data file.
{ "pile_set_name": "PubMed Central" }
THE secret location where Network Ten will film its version of the hit American reality series Survivor this year is a remote island off the coast of Malaysia, sources say. Contestants will endure a couple of months of rough living and endurance challenges on Pulau Tiga, west of Sabah in the Kimanis Bay, it’s understood. The location is the same one used for the very first US season in 2000. While it will be presented to viewers as a deserted and isolated spot, the island is actually home to a resort, a campsite, restaurant and national park. Early work is underway on the series — the first Australian instalment since Seven’s celebrity spin-off in 2006, which followed Nine’s adaptation of the format some 13 years ago. Key production hires have been made, sources reveal, including Trent Pattison who headed up the challenge team on I’m A Celebrity … Get Me Out Of Here! last year. Pattison has also worked on several seasons of the US version of Survivor. It’s understood filming will begin sometime in March and run for almost three months. The show will air towards the back end of the year. Beverley McGarvey, the network’s chief programming officer, said picking a spot used for a previous series was deliberate. “We’ll end up collecting an island location — most likely a location that has had a series of Survivor shot there before,” she confirmed in a recent interview. “Just because bringing that expertise to the table is really critical.” Expect it to look and feel like the American show — its producers have been enlisted to help make the local version. “They’re really involved,” McGarvey said. “We think the purity of the concept is something that will really resonate with Australian audiences.”
{ "pile_set_name": "OpenWebText2" }
Donald Trump is afraid of a rebellion by Electoral College members who will break ranks and choose someone else for president, denying him the 270 votes needed when that body meets next week in the final stage of the 2016 election. Trump’s fears are laid out in a lawsuit filed Monday in Colorado, where two Democrats on that party’s Electoral College slate have sued over a state law that binds its Electoral College members to the popular vote outcome. They are part of the Hamilton Electors group, who are seeking to stop Trump from becoming president. “We are a group founded by several members of the Electoral College dedicated to support [Alexander] Hamilton’s vision that members of the Electoral College should be free to vote their conscience for the good of America,” their website says. “We believe that Hamilton had somebody very much like Donald Trump in mind when he charged Electors in Federalist 68 with safeguarding the office of the presidency.” “In 2016 we’re dedicated to putting political parties aside and putting America first,” they said. “Electors have already come forward calling upon other Electors from both red and blue states to unite behind a Responsible Republican candidate for the good of the nation.” The Colorado suit brought by Democratic electors Polly Baca and Robert Nemanich terrifies Trump, because it could set a federal court precedent that would free electors across the U.S. from being bound by state laws to vote for their state’s popular vote winner. The Constitution doesn’t say that Electoral College members will be subject to such rules.

“Despite their prior commitment to honor the outcome of Colorado’s presidential election, Plaintiffs now claim they might consider voting for people other than Secretary Clinton and Senator Kaine,” Trump’s motion to intervene in the lawsuit said. “Of course, President-elect Donald Trump and Vice President-elect Mike Pence have more than enough electoral votes to secure their respective offices.” That last assertion is not quite right and is refuted by their motion’s continuing argument. “Plaintiffs’ lawsuit, however, threatens to undermine the many laws in other states that sensibly bind their electors’ votes to represent the will of the citizens, undermining the Electoral College in the process. That is why the President-elect and his Campaign seek to intervene in this case,” they said. “Should this Court conclude (despite decades of legal and historical precedent to the contrary) that it is unconstitutional for Colorado to bind its presidential electors, similar statutes in other states where the President-elect won may also be in jeopardy. The President-elect and his Campaign therefore have a direct, substantial, and legally protectable interest in preventing the invalidation of Colorado’s law requiring presidential electors to honor both their prior commitment and the voters’ will." Trump’s filing is yet more evidence that he is paying close attention to the legal efforts that challenge his fitness for office and could stop him from assuming the presidency. He and Republicans stopped a recount in Michigan, and were successful in limiting ballot scrutiny in recounts in Wisconsin and Pennsylvania. Trump knows his ascent to the presidency hangs on the Electoral College’s antiquated system that overrides the popular vote. Hillary Clinton won 2.85 million more votes than Trump nationally. The question, of course, is how many potential electors would break ranks and how might that play out. 
So far, nine Democratic electors have endorsed the effort and one Republican – Chris Suprun of Texas – has signaled support as well. These are some of the same electors that called for the CIA to brief them on its until recently secret report that showed how Russia helped Trump. Right now Clinton has 232 Electoral College votes and Trump has 306. To win, 270 votes are needed. It’s not as simple as saying that Clinton needs 38 more votes, because some of the Hamilton electors might not support her even if they don't want Trump. It is anybody’s guess how many Republican electors are willing to break ranks. One source said as many as a dozen GOP electors are leaning that way, but that’s far short of the 40-to-50 needed to prevent Trump from becoming president. “The Founding Fathers intended the Electoral College to stop an unfit man from becoming President,” the Hamilton Electors' homepage said. “The Constitution they crafted gives us this tool. Conscience demands that we use it.”
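The vote counts in this article reduce to simple arithmetic; a sketch of the bare minimums (the article's 40-to-50 figure presumably builds in a cushion, and says nothing about where defectors' votes would go):

```python
# Pledged 2016 totals as reported in the article; variable names are ours.
TOTAL_ELECTORS = 538
NEEDED_TO_WIN = 270            # simple majority of 538

trump_pledged = 306
clinton_pledged = 232
assert trump_pledged + clinton_pledged == TOTAL_ELECTORS

# Votes Clinton would still need -- the article's "38 more votes".
clinton_shortfall = NEEDED_TO_WIN - clinton_pledged

# Minimum Republican defections that would leave Trump below 270.
min_defections_to_deny = trump_pledged - (NEEDED_TO_WIN - 1)
```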
{ "pile_set_name": "OpenWebText2" }
US: FBI warned years in advance of Mumbai attacker’s terror ties

NBC news reports that US officials were warned not once but twice about a US businessman who was planning to launch terrorist attacks against targets in Mumbai. But unlike the first warning, the second was never passed on to the FBI. The second warning, which came from David Coleman Headley’s second wife, came less than a year before the Mumbai attacks of November 2008. Those attacks involved a series of coordinated bombings of at least 10 locations over three days and resulted in 166 deaths and hundreds of injuries in India’s largest city. The first warning came two years earlier, from Headley’s ex-wife. The FBI arrested Headley in Chicago last year and accused him of running reconnaissance missions for the Mumbai attacks. He pleaded guilty to terrorism charges. …In three interviews with federal agents, Headley’s wife said that he was an active militant in the terrorist group Lashkar-i-Taiba, had trained extensively in its Pakistani camps, and had shopped for night vision goggles and other equipment, according to officials and sources close to the case. The wife, whom ProPublica is not identifying to protect her safety, also told agents that Headley had bragged of working as a paid U.S. informant while he trained with the terrorists in Pakistan, according to a person close to the case. Federal officials say the FBI “looked into” the tip, but they declined to say what, if any, action was taken. Headley was jailed briefly in New York on charges of domestic assault, but was not prosecuted. He wasn’t captured until 11 months after the Mumbai attack, when British intelligence alerted U.S. authorities that he was in contact with al Qaeda operatives in Europe. In the four years between the wife’s warning and Headley’s capture, Lashkar-i-Taiba sent Headley on reconnaissance missions around the world. During five trips to Mumbai he scouted targets for the attack, using his U.S.
passport and cover as a businessman to circulate freely in areas frequented by Westerners. He met in Pakistan with terrorist handlers, including a Pakistani major accused of helping direct and fund his missions, according to court documents and anti-terror officials. “The United States regularly provided threat information to Indian officials in 2008 before the attacks in Mumbai,” said Michael Hammer, spokesman for the National Security Council. “Had we known about the timing and other specifics related to the Mumbai attacks, we would have immediately shared those details with the government of India.”
{ "pile_set_name": "Pile-CC" }
The singlet oxygen oxidation of chlorpromazine and some phenothiazine derivatives. Products and reaction mechanisms. A kinetic and product study of the reactions of chlorpromazine 1, N-methylphenothiazine 2, and N-ethylphenothiazine 3 with singlet oxygen was carried out in MeOH and MeCN. 1 undergoes exclusive side-chain cleavage, whereas the reactions of 2 and 3, in MeOH, afforded only the corresponding sulfoxides. A mechanism for the reaction of 1 is proposed where the first step involves an interaction between singlet oxygen and the side-chain dimethylamino nitrogen. This explains why no side-chain cleavage is observed for 2 and 3.
{ "pile_set_name": "PubMed Abstracts" }
Full text
=========

A new term, 'selective estrogen receptor modulator' (SERM), has infiltrated the estrogen receptor (ER) literature lately \[[@B1]\]. It is nothing more than the reaffirmation of an old fact, namely that different estrogens have different effects in different tissues. The major natural estrogens - estradiol, estriol and estrone - bind ERs with differing affinities, hence variations in their tissue distribution and concentrations influence the extent of their estrogenic effects. Studies with synthetic estrogens have focused on antiestrogenic ligands, which bind ERs and interfere with the actions of the natural estrogens. Tamoxifen is the prototypical antiestrogen, and newer second-generation antagonists, such as raloxifene, are in various stages of clinical trials \[[@B2]\]. Both tamoxifen and raloxifene are SERMs, because their antiestrogenic effects are restricted to only certain tissues. Tamoxifen has been used for more than 20 years to treat ER-positive breast cancers \[[@B3]\]. It was first demonstrated to be effective in advanced disease, later in adjuvant settings, and most recently as a breast cancer preventive agent in women at high risk. Thus, in various settings tamoxifen is an inhibitory ER ligand in the breast, and this property explains both its efficacy and its widespread use. The picture is not all rosy, however. True to its SERM nature, tamoxifen is not antiestrogenic in all tissues. For example, in the uterus tamoxifen is a potent estrogen, where, like estradiol (when unopposed by progestins), it induces epithelial hyperplasia and endometrial cancers \[[@B4]\]. The excitement surrounding raloxifene stems from the fact that, like tamoxifen, it is an antagonist in the breast, but, unlike tamoxifen, it lacks estrogenic activity in the uterus \[[@B2],[@B5]\]. In summary, tamoxifen can be either an agonist or an antagonist in normal tissues.
Unfortunately, the same duality of function operates in malignant tissues, including breast cancers. Almost without exception, breast cancers that initially respond well to tamoxifen by growth cessation or regression eventually resume growing despite the continued presence of the antagonist. How can this 'acquired resistance' be explained? Most tamoxifen-resistant tumors continue to express ER \[[@B6]\], suggesting that resistance is not simply due to outgrowth of a nonresponsive, ER-negative sub-population. Indeed, tamoxifen-resistant tumors remain responsive to growth inhibition by pure antiestrogens (but clinical data are sparse) and other hormonal therapies \[[@B3],[@B5]\]. Paradoxic reports of tumor stasis and even regression after tamoxifen withdrawal in resistant patients \[[@B7]\] suggest that in at least some resistant tumors the antagonist has switched to an agonist. Thus, for several years the notion has been advanced that the term 'resistance' inappropriately describes such tumors, and that tamoxifen is not simply inactive (as implied by the term 'resistance'), but, instead, that it has switched to an agonist, and actively stimulates tumor growth \[[@B8],[@B9]\]. That the same ligand can have opposing transcriptional and biologic effects has long been puzzling, but recent advances in our understanding of the molecular biology of steroid receptors have shed light on this paradox. We now know that transcriptional regulation by liganded, DNA-bound receptors is influenced by their association with multiprotein activator or repressor complexes. Detailed analyses of the identity and function of the constituent 'coregulatory' proteins in these complexes are being carried out in many laboratories.
They break down into two classes - coactivators and corepressors - and involve proteins with a variety of functions, including the following: enzymes such as acetylases, deacetylases, methyltransferases, ubiquitin ligases, proteases, ATPases and kinases; proteins with activator or repressor domains that stabilize or destabilize protein-protein interactions; scaffolding proteins involved in the assembly of multiprotein complexes; and even nonpeptide factors such as the steroid receptor RNA activator \[[@B10],[@B11]\]. What does this have to do with tamoxifen? It turns out that the activity of the tamoxifen-ER complex can be exquisitely modulated by the nature of the associated coregulatory proteins. Binding of corepressors, such as the silencing mediator for retinoid and thyroid receptors (SMRT) or nuclear receptor corepressor (N-CoR), suppresses the partial agonist activity of tamoxifen. At least one antagonist-specific coactivator, the L7 switch protein for antagonist, enhances the partial agonist activity of tamoxifen \[[@B9]\]. As a result of these basic molecular studies, there is now intense interest in correlating tamoxifen resistance in breast cancer with the underexpression of corepressors or the overexpression of coactivators. These proteins could clearly represent the next targets for therapeutic interventions. Additionally, although we have learned a great deal about steroid receptor coregulatory proteins in recent years, most investigators believe that only a minor subset have been identified to date. This is because the many subtle structural variations in the conformation of receptors that result from the binding of different ligands yield multiple subtly different targets on the receptor's surface for the binding of a variety of coregulators. It is this variability that can, in part, explain the tissue specificity and paradoxic agonist activity of ligands like tamoxifen.
The hunt is therefore also on to identify the large number of endogenous coregulatory proteins that are probably lurking in tissues, and, additionally, to synthesize their pharmacologic equivalents with a view to manipulating the functional direction of ligand-receptor complexes. In a recent paper, Norris *et al* \[[@B12]\] described a novel method to define an array of synthetic peptides that interact specifically with estradiol- or tamoxifen-occupied ER, and regulate their transcriptional activity. Several methods have recently been developed to select members of random peptide libraries based on their binding affinity to known protein targets \[[@B13]\]. In the method of phage-display, a library of phage, each displaying a different cloned peptide sequence on its surface, is exposed to a plastic plate coated with the target protein. Specifically bound phage are eluted, the phage are amplified, and the process is repeated for several rounds, after which the selected clones of interest are isolated from the phage, the DNAs are sequenced, and the peptides they encode are deduced. Norris *et al* \[[@B12]\] used tamoxifen- or estradiol-occupied ER as the target protein bound to the plate, and they ensured that the receptors would be in the appropriate DNA-bound structural conformation by precoating the plastic with DNA containing estrogen response elements. The screen led to the isolation of several 15-amino-acid peptides, representing three major classes: α/β I, which interacts with estradiol-occupied ER; α/β III or V, which interact with tamoxifen-occupied ER; and α II, which interacts with ER in the presence of either ligand, in the presence of a pure antiestrogen, and even in the absence of ligand. The α/β I peptide SSNHQSSRLIELLSR interacts with ER only in the presence of estradiol, and not in the presence of SERMs like tamoxifen, raloxifene, GW7604, idoxifene, nafoxidene or the pure antiestrogen ICI182,780.
In the presence of agonists, it also interacts with the progesterone receptor B-isoform and glucocorticoid receptors. When overexpressed, α/β I and α II peptides reduce the transcriptional activity of estradiol, whereas α/β III or V have no effect, which is consistent with their inability to bind ER in the presence of the agonist. On the other hand, peptides α/β III or V are quite tamoxifen-specific for ER, but also bind antagonist-occupied progesterone receptors. Six peptides of the α/β V class were isolated, which had the consensus sequence (S/M)X(D/E)(W/F)(W/F)XXXL. α/β III or V, as well as α II, inhibit the partial agonist effect of tamoxifen, but do not alter transcription by estradiol-occupied ER. The inhibitory activity of these synthetic peptides thus resembles that of the natural corepressors SMRT and N-CoR \[[@B9]\]. It would be of interest to determine whether the complementary DNAs encoding these synthetic peptides could be used as probes to isolate additional endogenous corepressors from complementary DNA libraries. At present the list of known corepressors is much smaller than that of known coactivators \[[@B11]\], and it is unclear whether this discrepancy represents a true cellular condition, or whether it is an artifact due to the technical complexity of screening for corepressors. Norris *et al* \[[@B12]\] speculated that each class of peptides recognizes different protein contact sites on the ER protein; contact sites that are generated specifically by the class of ligand bound to the receptors. They postulated that these contact sites could be targets for drug discovery. Analogous suggestions have previously been made for the use of corepressor or coactivator-occupied receptors to screen for new ligands \[[@B9]\].
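The reported α/β V consensus sequence (S/M)X(D/E)(W/F)(W/F)XXXL maps directly onto a regular expression; a small sketch for screening peptide strings (the positive example is a made-up sequence that fits the consensus, while SSNHQSSRLIELLSR is the α/β I peptide quoted in the text):

```python
import re

# (S/M) X (D/E) (W/F) (W/F) X X X L, where X is any residue
ALPHA_BETA_V = re.compile(r"[SM].[DE][WF][WF].{3}L")

def matches_alpha_beta_v(peptide: str) -> bool:
    """True if the peptide contains the alpha/beta V consensus motif."""
    return ALPHA_BETA_V.search(peptide.upper()) is not None
```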
The studies of Norris *et al* \[[@B12]\], along with those of others cited herein, indicate that we are at the brink of important insights into the molecular mechanisms by which ER and their ligands regulate hormone dependence and resistance in breast cancers. These insights will bring completely new approaches to treating these tumors, and if their promise is confirmed they will allow us to predict, and perhaps even prevent or reverse, the development of resistance. It is an exciting time to be studying the roles of steroid hormones in breast cancer!
{ "pile_set_name": "PubMed Central" }
2006 was the year of THE discovery, and Metal Storm was probably the first magazine to review Diablo Swing Orchestra and predict great future success for the band. I remember the day when I first listened to the band and said "what the hell is that?" For sure The Butcher's Ballroom was a great surprise and one of the most unexpected albums of the last 10 years… I was eagerly awaiting their new release, Sing-Along Songs For The Damned & Delirious, with a little fear to be honest, the level of the previous release being extremely high, but now that I have listened to this release many times, I can tell you that Diablo Swing Orchestra didn't miss the mark this time either! I'll quote myself this time: "I like it even more than The Butcher's Ballroom. They did everything I wanted them to do: become more aggressive, reduce the number of songs, more male singing, and more experimentation. Also they are more experienced musicians than before. They did it!" TBB is a great album, really, but sometimes I find it too long, so this one is well balanced. And c'mon, yes, the album is crazier, but they aren't at the Unexpect craziness level, which is a good thing for this band; they still have their very own sound. BTW, these guys are very nice, I met them last year and they were really kind to me and my friends.

Huge disappointment after The Butcher's Ballroom. Most of the songs are an unfocused mess, and Annlouice's vocals are really annoying, almost unbearable. I don't know why she changed her singing style, it was much smoother on the previous album. I don't like the male opera singer either. The only good song is Stratosphere Serenade, because it's mostly instrumental.

This group is a new discovery for me. I love what I heard from this album, although the vocals aren't particularly my favorite. Regardless, this is some damn fine music and very refreshing. Just what I needed to hear.
{ "pile_set_name": "Pile-CC" }
Angela Fraser Dr. Fraser is an Associate Professor/Food Safety Education Specialist in the Department of Food Science at Clemson University. She has a 100% Extension appointment. Her Extension programming focuses on food safety for retail food establishments, consumer food safety, and home food preservation. Dr. Fraser has worked in the area of food safety education since 1987, having held government, teaching, and Extension positions. She received her B.S. in Dietetics in 1984, an M.S. in Institutional Administration in 1987, and a Ph.D. in Food Science in 1995. All of her degrees were earned at Michigan State University. Before coming to NC State University in 1995, she worked as an Extension Specialist at Michigan State University and was an Adjunct Instructor in the School of Public Health, University of Michigan. Before earning her Ph.D., Dr. Fraser worked as an Environmental Health Specialist for six years in the State of Michigan.
{ "pile_set_name": "Pile-CC" }
Othman El Kabir

Othman El Kabir (; born 17 July 1991) is a Dutch footballer of Moroccan descent currently playing for the Russian club Ural Yekaterinburg as a left midfielder.

Club career

Djurgårdens IF

El Kabir signed a 3 and a half year deal with Swedish top tier Djurgårdens IF on 14 July 2016. On 24 August El Kabir scored his first goals for Djurgården, scoring two goals in the Swedish Cup qualifier 5-1 win against Smedby AIS. On 19 February 2018 El Kabir signed with Ural Yekaterinburg.

Personal life

Othman El Kabir is the younger brother of Moestafa El Kabir.

Career statistics

References

External links

Djurgården profile

Category:Living people
Category:Footballers from Amsterdam
Category:1991 births
Category:Dutch footballers
Category:Dutch people of Moroccan descent
Category:Dutch expatriate footballers
Category:Association football midfielders
Category:Djurgårdens IF Fotboll players
Category:Allsvenskan players
Category:Superettan players
Category:Expatriate footballers in Sweden
Category:FC Ural Yekaterinburg players
Category:Expatriate footballers in Russia
Category:Russian Premier League players
{ "pile_set_name": "Wikipedia (en)" }
StartChar: brokenbar
Encoding: 166 166 314
GlifName: brokenbar
Width: 1024
VWidth: 0
Flags: W
HStem: 1024 21G<512 640>
VStem: 512 128<-128 384 1024 1536>
LayerCount: 5
Back
Fore
SplineSet
640 384 m 1
 640 -128 l 1
 512 -128 l 1
 512 384 l 1
 640 384 l 1
640 1536 m 1
 640 1024 l 1
 512 1024 l 1
 512 1536 l 1
 640 1536 l 1
EndSplineSet
Validated: 1
Layer: 2
Layer: 3
Layer: 4
EndChar
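This is a FontForge-style glyph record for U+00A6 BROKEN BAR. Its `SplineSet` encodes the contours as `x y op flag` groups, where `m` starts a contour and `l` draws a straight segment. A rough sketch of extracting the points (handling only this simple m/l case, not full FontForge curve syntax):

```python
def splineset_points(splineset: str):
    """Extract (x, y) points from a SplineSet that uses only
    'x y op flag' groups with op in {'m', 'l'}."""
    t = splineset.split()
    assert len(t) % 4 == 0, "expected groups of four tokens"
    return [(int(t[i]), int(t[i + 1]))
            for i in range(0, len(t), 4)
            if t[i + 2] in ("m", "l")]

# The broken-bar glyph's SplineSet, as a flat token stream.
BROKENBAR = ("640 384 m 1 640 -128 l 1 512 -128 l 1 512 384 l 1 640 384 l 1 "
             "640 1536 m 1 640 1024 l 1 512 1024 l 1 512 1536 l 1 640 1536 l 1")

points = splineset_points(BROKENBAR)
xs = [x for x, _ in points]
ys = [y for _, y in points]
# The x-extent [512, 640] agrees with the 'VStem 512 128' hint:
# a vertical stem starting at x=512, 128 units wide.
```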
{ "pile_set_name": "Github" }
High prevalence of deep vein thrombosis in elderly hip fracture patients with delayed hospital admission. Deep vein thrombosis (DVT) is a common complication in hip fracture patients, associated with significant morbidity and mortality. Research has focused on postoperative DVT, with scant reports on preoperative prevalence. The aim of this study was to describe the prevalence of preoperative DVT in patients accessing medical care ≥ 48 h after a hip fracture. We included elderly patients admitted ≥ 48 h after sustaining a hip fracture, between September 2015 and October 2017. Patients with a previous episode of DVT, undergoing anticoagulation therapy, with pathologic fractures or undergoing cancer treatment were excluded. Of 273 patients, 59 were admitted at least 48 h after the fracture. DVT screening by Doppler ultrasound of both lower extremities was carried upon hospital admission. We recorded age, sex, Charlson comorbidity index and ASA score, fracture type, time since injury, time from admission to surgery and total length of hospital stay. We studied 41 patients, 79 (± 10.34) years old. The delay from injury to admission was 120 h (48-696 h). Seven patients (17.1%) had a DVT upon admission. There were no significant differences between patients with and without DVT, regarding time from admission to surgery or the total length of the hospital stay. The prevalence of DVT in patients admitted ≥ 48 h after a hip fracture was 17.1%. The diagnosis and management of DVT did not increase time to surgery or hospital stay. Our results suggest routine screening for DVT in patients consulting emergency services ≥ 48 h after injury.
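The headline prevalence is 7 of 41 patients, a small sample, so the point estimate of 17.1% carries wide uncertainty. A quick sketch of the implied 95% Wilson score interval (our computation, not a figure reported in the abstract):

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

p_hat = 7 / 41                 # the abstract's 17.1%
low, high = wilson_ci(7, 41)   # roughly 8.5% to 31%
```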
{ "pile_set_name": "PubMed Abstracts" }
Monthly Archives: June 2013

From peeves to plotlines in only seven days!! Our mission, should we undertake it (and I’m taking it as under as I can), is to tell the world about our favorite and unfavorite books. (I made up a word! Hooray … Continue reading →

I’m dedicating this story to my friend Gayle, who often asks me, “Can’t you write something that’s not scary?” Yes, dear, I can. Regardez: Genius Has Its Own Time The studio is dark and quiet. The occasional fretful moonbeam shows … Continue reading →

This week I am privileged to have Pamela Foreman as my guest. This week we are getting our author pet peeves off our chests. And there’s a note about chocolate at the end. Every writer has them. Peeves–whether they be … Continue reading →

When I leaf through the pages upon pages of my writing, I marvel at the ability we have to craft a tale from just our minds. Jasper Fforde’s “Thursday Next” series, especially the book entitled “The Well of Lost Plots”, … Continue reading →

I wrote this awhile back, and have posted it before, but for lack of anything else to throw in here today I am going to re-post it. It was inspired by watching a cat saunter down the beach promenade one … Continue reading →

It is an absolutely gorgeous day here in the Pacific Northwest. I think I shall go find meself a margarita and catch some rays. In the meantime, Angela St. Clair will take over the helm of the good ship Bloggipop. … Continue reading →

Well, here I am, embarking on a new blog. This is one that will deal with me as an author, not as my other self. Heaven only knows what I will write on here. This is the blogsite that I … Continue reading →
{ "pile_set_name": "Pile-CC" }
209 Cal.App.2d 647 (1962) SHELLEY R. COTHRAN, Individually and as Executor, etc., Plaintiff and Respondent, v. THE TOWN COUNCIL OF LOS GATOS, Defendant and Appellant. Civ. No. 19919. California Court of Appeals. First Dist., Div. One. Nov. 19, 1962. J. Rainey Hancock for Defendant and Appellant. Everett P. Rowe for Plaintiff and Respondent. SULLIVAN, J. This is an appeal by The Town Council of Los Gatos from a judgment granting respondent's petition for a writ of mandate and commanding appellant to terminate certain annexation proceedings initiated pursuant to the Annexation of Uninhabited Territory Act of 1939. (Gov. Code, 35300-35326.) [1] The basis of the above judgment was a finding that on June 6, 1960, the day such proceedings were initiated by appellant (see Gov. Code, 35310) there were 12 registered voters residing within the territory proposed to be annexed. Since under the applicable statute as it then read, "territory shall be deemed uninhabited if less than twelve registered voters reside within it at the time of ... the institution of proceedings on motion of the city legislative body" (Gov. Code, 35303; emphasis added), the court concluded that the territory in question was not uninhabited, that appellant in initiating the proceedings exceeded its jurisdiction and that the writ should issue to terminate them. [fn. 1] At the trial the parties stipulated and the court found that *651 eight specified persons were registered voters residing within the territory on June 6, 1960. In addition, the court found and concluded that Oliver F. Hitchcock and Marie Hitchcock, husband and wife, and Joseph A. Rogers and Linda L. Rogers, husband and wife, were also registered voters residing there on said date, thereby bringing the total number of such persons to 12. It is around the last four persons that the present controversy revolves and it is against the findings and conclusions pertaining to them, that appellant directs its attack. 
We therefore proceed to determine, separately as to each of the above married couples, whether the court's determination that they were registered voters residing within the territory on the crucial date is sustained by the evidence and the law.

Oliver F. and Marie Hitchcock

The Hitchcocks owned two contiguous parcels of land. One consisted of approximately 14 acres, abutting the boundary line of the territory proposed for annexation, but lying wholly outside of it. The other consisted of approximately 29 acres, also abutting the above boundary line, but lying wholly inside of it. The 14-acre parcel was in section 23 (T.8 S. R.1 W-M.D.B. & M.) and the 29-acre parcel in section 24. The section line, therefore, and the proposed annexation boundary line which followed it in this area, ran between the two parcels. The two parcels were acquired by the Hitchcocks at different times and by different deeds. They were assessed by the County of Santa Clara according to different code areas and the taxes levied thereon billed to the Hitchcocks by separate tax statements. The Hitchcocks reside at 15060 Kennedy Road. While both parcels abut this road on the north, the Hitchcock home is located on the smaller parcel and about 400 feet from and outside of the proposed boundary line. At the time they built this home, however, in 1955, the Hitchcocks had already acquired the full acreage of both parcels. Mr. Hitchcock testified that aside from the fact that "[t]here's an old broken-down fence between the properties" no longer in repair, there was no other "physical mark" indicating the line between the parcels and no "physical barrier such as a mountain ridge or a stream" dividing them. He further stated that he obtained his water supply from the larger parcel, that both parcels were tied together with power lines and pipe lines, that he lived on the entire 43 acres and that he considered the "whole unit" of both parcels as his *652 residence.
Hitchcock, finding that the larger parcel had gone "back to nature pretty much" had started the work of clearing it and had installed an expensive culvert. He estimated expenditures on the 29-acre parcel in the year before the trial to be $1,000. There is evidence that he operated both parcels as a unit, that he considered the larger parcel "more like a front yard" and had in fact purchased it because of the attractiveness of its "wild and uncultivated state." He used the larger parcel "for recreational purposes," to "shoot pistols with the neighbors" and "to exercise my dog." The trial court found, so far as is pertinent here, that all of the property comprising the 43 acres of both parcels "is one unit and constitutes one 'home place' " and that the Hitchcocks "use the property as one entire place and live on it as one entire piece" and concluded that Mr. and Mrs. Hitchcock (along with the 10 other specified persons) were registered voters residing within the territory proposed for annexation. The learned trial judge filed an extensive memorandum opinion which has been included in the present record (Cal. Rules of Court, rule 5 (a)) [fn. *] and which we may consider for the purpose of understanding the foregoing findings and conclusions (Trans-Oceanic Oil Corp. v. City of Santa Barbara (1948) 85 Cal.App.2d 776, 790 [194 P.2d 148]) and the process by which the judgment was reached. (Union Sugar Co. v. Hollister Estate Co. (1935) 3 Cal.2d 740, 750-751 [47 P.2d 273].) Such opinion discloses that the trial court's determination that Mr. and Mrs. Hitchcock were residing within the area was reached by applying to the above facts the legal principles announced by Mr. Justice Wood for this court in People v. City of Richmond (1956) 141 Cal.App.2d 107 [296 P.2d 351]. 
After quoting from the above case, the trial judge stated: "In this instant case it will be recalled that there is no natural boundary following the annexation line, or vice versa, and the Court cannot help but conclude that the use of the property by the Hitchcocks, together with their own intention as expressed by the testimony of Mr. Hitchcock that they have always considered and used it as one single parcel of land and intend to so use it results in the inescapable conclusion that the whole parcel of property including the twenty-nine acres lying within the area to be annexed and the fourteen acres lying outside of the artificial line created by the Resolution and upon which the home is situated are inhabited *653 as a single unit by the Hitchcocks who are registered voters in the home and therefore are registered voters residing within the territory to be annexed as indicated in Section 35303 of the Government Code." (Original emphasis.) We conclude that the court's findings are supported by substantial evidence, that the rule of People v. City of Richmond was properly applicable and that the court's determination based thereon was correct. In People v. City of Richmond, supra, the boundary line transected a lot 100 feet wide by 500 feet deep so as to include the rear fourth portion thereof within the territory proposed for annexation. The dwelling house was located on the front of the lot and over 300 feet from the above boundary. This court held that the trial court's finding that the premises consisted of a single undivided parcel of land, all of which was used for residential purposes, was supported by substantial evidence and that as a consequence the registered voters resident thereon were to be counted as residing in the territory proposed for annexation. Mr. Justice Wood, speaking for the court, said: "To annex land is one thing; to strip it of its quality of being inhabited is quite another. 
[2] The state of being inhabited is an attribute, a characteristic, a quality every bit as real as the state of being owned, possessed or farmed. Bisecting a lot by means of an annexation boundary line does not extinguish any of these qualities of the land. Each of the resultant portions continues an integral part of the whole in respect to ownership, possession, occupancy, use and residency. The power to fix the course of the boundary line does not include the power to strip the land of any of these qualities nor the power to interfere with or to cut down any of the rights or privileges of the owner or of the occupier of the land." (141 Cal.App.2d at p. 111.) Observing, anent the trial court's finding, that "[t]his concept of 'inhabited' finds support in the case law which developed prior to the legislative definition of 'uninhabited' " in section 35303 and predecessor statutes, [fn. 2] this court quoted the following language, inter alia, from People v. City of *654 Lemoore (1918) 37 Cal.App. 79, 81 [174 P. 93] as "the judicially developed concept of 'inhabited' ": " 'The fact of occupancy is not limited, manifestly, to the space occupied by the building or buildings, but extends to every portion of the single tract of which that space is an undivided part. It may be difficult to formulate a description that can be applied with accuracy to every situation, [footnote omitted] but to say that any portion of a single and separate tract of land is uninhabited when people actually reside within the boundaries of that tract of land involves a contradiction in terms.' " (141 Cal.App.2d at pp. 112-113.) Finally in answer to the objection that the concept of residence applied by the trial court would have no limit, [fn. 3] Richmond declares: "The nature of the use of the various parts, if there be parts, of a ranch, an estate, or a town lot, would be a significant factor to consider. 
Is it residential in character or is it nonresidential, such as industrial, commercial or agricultural? If it is any of the latter, does it pervade the part so used in such a manner and to such an extent as to make it unreasonable to view it also as residential? Conceivably, a person might have a home orchard in his backyard or might conduct on the premises a small business of such a character and in such a manner that it would still be within reason to say that he nevertheless resides upon the entire lot. Then, too, in some cases the existence of a permanent barrier, natural or artificial, would enter into the picture for determination; e.g., a stream, a mountain ridge, a street, or a railroad. There well may be other factors, upon occasion, factors which presently do not occur to us." (141 Cal.App.2d at p. 115.) In City of Port Hueneme v. City of Oxnard (1959) 52 Cal.2d 385 [341 P.2d 318], the proposed annexation of uninhabited territory having been found to include 14 registered voters, the first boundaries were withdrawn and revised boundaries substituted "so as to exclude [footnote omitted] three houses inhabited by eight of the 14 registered voters. These excluded houses were located on land which was in each case part of a larger parcel belonging to the same owner, the remainder of which larger parcel was included within the so-called 'second proposed' " territory for annexation. Mr. *655 Justice Schauer held the attempted annexation void as an attempt "to exclude from such annexation the habitations of eight registered voters and thus to sever such habitations from the parcels of which they were an integral part" (52 Cal.2d at p. 
391) stating: "Whether the territory included within the proposed annexation was inhabited is a question of fact which does not depend upon whether the houses of the registered voters in which they ate and slept were within the boundaries of the proposed annexation but upon whether such houses were an integral part of the whole parcel (including the portion thereof which fell within the boundaries of the proposed annexation) so as to render the whole parcel inhabited. (People v. City of Richmond (1956) 141 Cal.App.2d 107, 111-114 [296 P.2d 351].)" [3] The essence of the foregoing cases is that where all of the land in question can be reasonably said to be used in connection with the home located thereon as an integrated whole, all of it is inhabited. The determination of whether and to what extent the land possesses such an integrated and unitary character rests upon an overall consideration of the facts of the particular case bearing upon location, physical appearance, and use, and not on any precise formula, composed solely of units of land measurement expressed lineally or in quantity. An additional city lot, so designated on official records, lying adjacent to a home, may be an integral part of it as a garden; several acres in a rural area may be a part of a country estate for the additional recreational facilities or merely for the natural beauty which they add. These conditions are matters of common knowledge. The resultant place of residence is nonetheless indivisible although consisting of more than one lot, acre or other portion of land. Whether it is an indivisible residence is a question of fact. 
In the instant case, the trial court found upon substantial evidence that all of the Hitchcock property on both sides of the proposed boundary line was "one unit" or to state it another way in terms of Hueneme, supra, that the Hitchcock house was an integral part of the whole 43 acres including the portion thereof which fell within the boundaries of the proposed annexation. In the light of the supporting evidence and measured by our views expressed in Richmond, we cannot say that such finding represents an unreasonable extension of the concept of residence. [4] Appellant argues that the instant case is distinguishable from Richmond and Hueneme on two bases. First it *656 claims that in neither of the last two cases "was the court faced with the problem of a separate parcel" (emphasis added) of 29 acres within the territory proposed to be annexed and 14 acres outside of it. Such claim of "separateness" is gratuitous and against the evidence. In making it, appellant chooses to disregard the court's finding that the above parcels were, so far as the Hitchcock occupancy was concerned, not two separate parcels but one unit. Secondly, it claims that Richmond and Hueneme involved what appeared to be an intentional cutting of the land undertaken for the purpose of excluding registered voters (Hueneme, supra, 52 Cal.2d at pp. 390-391) or of circumventing the Annexation Act of 1913 (Gov. Code, 35100-35158; Richmond, supra, 141 Cal.App.2d at p. 119). There is nothing in either case which restricts the rule announced therein to situations involving an intentional exclusion of registered voters. The rule rests upon the character of the land involved and not the intention of the annexing municipality. 
We should also mention at this point appellant's argument that where several large parcels of land are accumulated within a certain area, it is not correct to say all are occupied as a residence, for it would thus be possible to extend a person's residence to another county, another state, or theoretically across the United States. As we have already pointed out, this "extension of residence" argument was answered by us in Richmond. (See Richmond, 141 Cal.App.2d 114-115, quoted supra.) We are not called upon here to pass upon, and hence express no views concerning a situation where a residence overlaps a county line. Finally appellant urges that in view of the addition of section 35008 to the Government Code in 1957 (Stats. 1957, ch. 1665, p. 3046, § 1) there is no longer any necessity for the extension of the principle of law developed in the Richmond case. Section 35008, added to general provisions affecting annexation of territory (§§ 35000-35012) but made applicable to the Act of 1939 here under consideration (§ 35301), provides in substance that boundaries shall not be fixed without the owner's consent so as to exclude the site of his residence dwelling, and where so fixed in violation of the section, the owner may within one year of the completion of the proceedings file a statement of violation with the annexing municipality and have the property excluded. [fn. 4] On this statute appellant *657 constructs a bifurcated argument. The gist of the first part is that the Hitchcocks may eventually consent to the annexation of the 29 acres. To this the simple answer is that, on the record before us, they have not. The gist of the second part is that if they do not consent "and the 29-acre parcel is thereby excluded," they would have none of their property within the proposed territory. To this the equally simple answer is that the 29 acres have not been excluded but are attempted to be annexed.
[5] While section 35008 may provide an additional remedy to such residents as the Hitchcocks, we find nothing in the statute which abrogates or restricts the right of other property owners in the territory proposed for annexation to compel termination of the proceedings on the grounds that there were not "less than twelve registered voters" within it. Appellant relies on City of Morgan Hill v. City of San Jose (1961) 192 Cal.App.2d 383 [13 Cal.Rptr. 441] dealing with the annexation of inhabited territory under the Annexation Act of 1913 (to which § 35008 is also made applicable) and involving the splitting of certain properties by the proposed annexation. It was held that the City of Morgan Hill could not urge invalidity of the annexation based on noncompliance with section 35008 since the remedy afforded by the statute was resident in the affected property owners who had consented afterwards anyhow. The method of annexation was entirely different (see comparison made by Mr. Justice Tobriner in City of Campbell v. Mosk, supra, 197 Cal.App.2d 640, 643), no jurisdictional question was presented based on required number of registered voters, and no holding was made that section 35008 established an exclusive remedy. [6a] We are unimpressed with appellant's argument that *658 there was no finding that the Hitchcocks were registered voters "either within or without the territory proposed to be annexed." The court concluded that "there were twelve registered voters residing within" the territory proposed to be annexed and thereupon listed the Hitchcocks among "the names of said twelve registered voters." Although made as a conclusion of law, the above language insofar as it refers to "registered voters" would appear to be a finding of ultimate fact. [7] A finding may be considered as a valid and effectual finding of fact, even though it is included among stated conclusions of law. (Linberg v. Stanto (1931) 211 Cal. 771, 776 [297 P. 9, 75 A.L.R. 555]; Safeway Stores, Inc.
v. Massachusetts Bonding & Ins. Co. (1962) 202 Cal.App.2d 99, 106 [20 Cal.Rptr. 820]; Petersen v. Cloverdale Egg Farms (1958) 161 Cal.App.2d 792, 797 [327 P.2d 127].) [6b] Mr. Hitchcock testified that both he and his wife were registered voters. An assistant registrar of voters of Santa Clara County testified that they were both registered in precinct 5783 of Santa Clara County. We therefore hold that the trial court applying the legal principles we have set out above properly counted Mr. and Mrs. Hitchcock as registered voters residing within the territory proposed for annexation.

Joseph A. and Linda L. Rogers

Joseph A. Rogers, Jr., and his wife, Linda L. Rogers, were the son and daughter-in-law of Joseph and Belle A. Rogers. The senior Rogers were stipulated by the parties, and found by the court, to be registered voters residing within the territory proposed for annexation. For convenience we will refer to the father as Rogers Sr. and to the son as Rogers Jr. [8a] Rogers Jr. was an officer in the United States Air Force, having entered the service in 1954, at the age of 19. At that time he was unmarried and living with his parents at their residence in San Jose. He was assigned to duty with the Strategic Air Command at Lincoln, Nebraska. Joseph Jr. and Linda were married in Omaha in August 1958. Shortly after the wedding they came to California and had a reception at the home of Rogers Sr. in San Jose. Although Rogers Sr. had just purchased the property located at 15201 Deer Park Road in the territory under annexation, he and his wife were still residing in San Jose. After a short visit, Rogers Jr. and his wife returned to Lincoln where they lived in a house owned by the grandfather of Linda Rogers. *659 Rogers Sr. moved to his new residence on Deer Park Road, Los Gatos on October 20, 1958. In August or September 1959, Rogers Jr. visited his parents and stayed with them at their new home for about two weeks.
The record is unclear as to whether his wife accompanied him. At the time of the trial in August 1960 he was assigned to duty in Spain and his wife Linda was visiting the senior Rogers in Los Gatos. A primary election was held in California on June 7, 1960. It is important to note that this was one day after the critical date here involved when the appellant council instituted annexation proceedings by appropriate resolution. Joseph Rogers, Jr., and his wife Linda both voted in the above election by absentee ballot. Each of the above persons signed a separate "Post Card Application for Absentee Ballot" at Lincoln, Nebraska, on May 11, 1960. These applications were mailed on May 14, 1960, to the County Clerk of Santa Clara County at San Jose and were received by him on May 17, 1960. Joseph Jr.'s application stated inter alia that he was a member of the Armed Forces of the United States and that for the preceding 13 years his residence in California had been at 15201 Deer Park Road, Los Gatos. [fn. 5] Linda's application, made as a spouse of a member of the Armed Forces, showed the same place of residence for 2 years previous. In return they each received a separate "Affidavit of Registration" for Santa Clara County Precinct 5783, a War Voters Ballot and a War Voters Identification Envelope. The parties agree that the above precinct is the one in which the Deer Park Road home of Rogers Sr. was located. Joseph's affidavit was subscribed and sworn to on May 19, and Linda's affidavit on May 20. Each of the affiants stated therein that his or her residence was 15201 Deer Park Road. There is no dispute that Rogers Jr. and his wife returned the above affidavits of registration together with the above envelopes containing their ballots and that their votes were counted. The present controversy centers about when they mailed back the above documents and when they were received. An assistant registrar of voters of Santa Clara County testified that Joseph A. and Linda L. 
Rogers were registered voters in said county in June 1960, that their affidavits of registration were on file, and that their precinct was 5783. He *660 also testified that it was the practice of the registrar's office, in complying with pertinent provisions of the Elections Code [fn. 6] to accept registration affidavits and ballots from a war voter if the envelope transmitting them bore a postmark of a date on or before the day of election and the envelope was received within six days thereafter. If the above conditions of date were met, the envelope was not kept and the vote was counted. As a result, the envelope used by Joseph and Linda Rogers was not available to establish either the date they mailed it or the date it was received by the registrar of voters. It was therefore the testimony of the assistant registrar that the affidavits and ballots of Rogers Jr. and his wife were received at some time prior to six days after June 7, 1960, but that it could not be ascertained on what specific date they were actually received. The evidence showed that the names of Joseph A. Rogers, Jr., and Linda Rogers, his wife, did not appear on the printed precinct list for precinct 5783 which was distributed by the registrar on May 27, 1960, but that both names were added in longhand to the list maintained in the registrar's office at some time after the election. The trial court found that when Rogers Jr. entered the military service "he did so with the intention of retaining the residence of his father and mother ... as his place of residence"; that when he attained the age of 21 he "did form no change of intention"; that his residence after marriage in the home of his wife's grandfather in Lincoln was one "of a purely temporary nature"; that the residence of Rogers Jr. was that of his wife; that both Rogers Jr. and his wife "had registered as voters and were registered voters within the area to be annexed, to wit: At 15201 Deer Park Road, on June 6, 1960; ..." 
The opinion of the trial judge, already referred to, discloses that at the basis of the above findings was the court's determination that the affidavits of registration executed by both *661 of the above parties had been received by the registrar of voters before the critical date of June 6, 1960, when annexation proceedings were instituted. In reaching this conclusion, the court noted that it had taken three days for the post card applications for absentee ballots to reach San Jose from Lincoln, from which the court inferred that "the normal traveling of mail time" between such places was three days, stating: "From this, the Court infers and finds that the Affidavits of Registrations signed on May 19th and 20th, 1960 were mailed on or about May 20th, 1960 to the registrar of voters at San Jose from Lincoln, Nebraska and that they were received by that office on or about May 23rd, or possibly May 24th, 1960." Appellant attacks the above findings by claiming in effect that the foregoing evidence establishes as a matter of law that Rogers Jr. and his wife (1) did not reside within the territory, (2) were not electors in the precinct and (3) were not registered on or before June 6, 1960. [9] Section 35303 of the Government Code upon which this controversy is centered provides that the territory proposed for annexation shall be deemed uninhabited "if less than twelve registered voters reside within it" (emphasis added) on the crucial date. Appellant attempts to splinter off the concept of residence implicit in the above italicized language and to engraft on the statute a new and different one. The residence prescribed by section 35303, appellant argues, means actual residence ("actually living and dwelling") within the territory, not legal residence or domicile which governs registration and voting. The argument has no merit. It is clear to us that the word "reside" in its above context means the residence requisite for the registration of voters. 
As this court declared in Perham v. City of Los Altos (1961) 190 Cal.App.2d 808 [12 Cal.Rptr. 382], in the course of construing section 35303: "The obvious purpose of this reference to the voter registration records is to furnish a convenient and ready means of ascertaining the number of legal residents of the territory. Registration is based upon the affidavit of the voter (Elec. Code, § 120) which must show the affiant's 'place of residence and post-office address with sufficient particularity to identify it and to determine affiant's voting precinct' (§ 220, subd. (c); see also § 230, subd. 3). The rules for determining 'residence' for the purpose of registration and voting show that it means legal residence or domicile. (§§ 5650-5661, especially § 5652.)" (P. 809; emphasis added.) *662 Former section 5652 (now § 14282) of the Elections Code, [fn. 7] as in effect on June 6, 1960, provided: "That place is the residence of a person in which his habitation is fixed, and to which, whenever he is absent, he has the intention of returning." Former section 5653 (now § 14283) of the Elections Code provided in relevant part: "A person does not gain or lose residence solely by reason of his presence or absence from a place while employed in the service of the United States, ...." Former section 5654 (now § 14284) of the Elections Code provided in relevant part: "A person does not lose his residence who leaves his home to go into another State ... for temporary purposes merely, with the intention of returning." [8b] Applying the above statutes, it is clear that the evidence summarized by us above, together with the reasonable inferences therefrom, support the trial court's findings that Joseph A. Rogers, Jr.
intended to and did maintain the home of his parents as his permanent residence during the period of time here involved, that his residence was not lost by his military service or by the moving of his parents' home from one precinct to another during such service, and that his residence in the house of his wife's grandfather in Lincoln was purely temporary. Such permanent residence became also the residence of Linda Rogers upon her marriage to Rogers Jr. since "[t]he residence of the husband is the residence of the wife" (former Elec. Code, § 5660 [now § 14290]) except in certain circumstances not here applicable. Appellant concedes that Rogers Jr. did not lose his residence when he enlisted in the Air Force (former Elec. Code, § 5653), but maintains that upon his becoming 21 years of age he could have acquired a residence of his own either in Nebraska or by retaining his San Jose residence. The trial court, as we have pointed out, found on substantial evidence that Rogers chose the latter course. Appellant then argues that although Rogers Jr. might have thus retained his parents' home in San Jose as his permanent residence, such residence was not changed to Los Gatos when Rogers Sr. moved there. However we think that the above findings, supported by the evidence, are embracive of this circumstance. After Rogers Sr. moved to the Deer Park Road dwelling in October 1958, *663 Rogers Jr., still in military service, made no attempt to establish a permanent residence elsewhere. He and his wife continued their temporary residence in her grandfather's house. In the next year, 1959, he visited his father's Los Gatos home in much the same way he had visited the San Jose home in the preceding year. In 1960 he registered to vote giving the Deer Park Road address as his residence in both his application for an absentee ballot and in his affidavit of registration. [10] Voting registration is "[o]ne of the important acts to be considered" in determining residence (Ballf v.
Public Welfare Dept. (1957) 151 Cal.App.2d 784, 788 [312 P.2d 360]). [8c] From the above and other evidence which we have summarized, viewed in the light of the provisions of section 5653 of the Elections Code, supra, that a person does not lose residence by reason of his absence while employed in the service of the United States, the trial court could properly infer that Rogers Jr. intended that his permanent residence should remain at the home of his father after the latter moved to Los Gatos. This, we think, was a reasonable inference. The alternate suggested by appellant, namely, that absent any proof of domicile in Nebraska, Rogers Jr. "retained the San Jose Precinct as his residence" even though the father had moved and the son had no fixed place of residence there, would lead to an absurdity. We turn to appellant's second point of attack. It urges that "Rogers, Jr. did not qualify as an elector, because he had not been a resident 'in the election precinct (Los Gatos) fifty-four (54) days next preceding the election,' as required by California Constitution Article II, Section 1." This argument has no merit. The affidavit of registration of Rogers Jr., introduced into evidence by appellant, contained the following statement: "I will be at least twenty-one years of age at the time of the next succeeding election, a citizen of the United States ninety days prior thereto, and a resident of the State one year, of the County ninety days, and of the Precinct fifty-four days next preceding such election, and will be an elector of this County at the next succeeding election." Furthermore, as we have pointed out, the trial court found on substantial evidence that Rogers Jr. was a resident within the territory in question from the time his father moved there in October 1958. Cases cited by appellant dealing with the illegality of votes cast by nonresidents are not here pertinent. The evidence shows that the votes cast by Rogers Jr. 
*664 and his wife in the primary election held on June 7, 1960, were counted and were never declared illegal. We conclude, therefore, that the trial court properly determined upon substantial evidence that Rogers Jr. and his wife were registered voters residing within the territory in question. But the crucial question raised by appellant's third objection remains: Were they registered voters on June 6, 1960? As we have set forth in detail above, the trial court answered this question in the affirmative by inferring that the affidavits of registration were mailed in Lincoln on or about May 20, 1960, from the fact that they were signed on May 19 and 20, and by further inferring that they were received by the Santa Clara County Registrar on or about May 23, or May 24, 1960, from the fact that incidents pertaining to the mailing and receipt of the previous applications disclosed the normal mail time to be three days. On this reasoning the court concluded that the Rogers became registered voters no later than May 24, 1960, and were therefore to be counted as such on June 6, 1960. [11] A legal inference can be drawn only from the facts proved. It must be reasonably and logically drawn and it may not be based only on imagination, speculation, supposition, surmise, conjecture or guess work. (Code Civ. Proc., §§ 1958, 1960; Eramdjian v. Interstate Bakery Corp. (1957) 153 Cal.App.2d 590, 602 [315 P.2d 19]; Marshall v. Parkes (1960) 181 Cal.App.2d 650, 655 [5 Cal.Rptr. 657]; 18 Cal.Jur.2d, Evidence, § 60, pp. 479-481; Witkin, Cal. Evidence, § 121, p. 145.) We find in the record no fact established from which the trial court could have inferred that the affidavits of registration were mailed by Rogers Jr. and his wife from Lincoln "on or about" May 20, 1960. It cannot be logically concluded that these documents were mailed on the day of their date.
The only evidence in the record is that they were received by the registrar of voters in San Jose at some time prior to June 13, 1960, and that at the time of their receipt they bore a postmark showing mailing on or before June 7, 1960, the date of the election. On oral argument, counsel for respondent with commendable candor conceded that the court's determination that the affidavits were received "on or about May 23rd, or possibly May 24th, 1960" was without any support in the record. If the court's conclusion that the Rogers were registered voters in the area on the critical date is to be upheld, it must therefore be on another basis. It has been established by uncontradicted evidence that *665 Rogers Jr. and his wife registered and voted as absentee war voters. (Elec. Code, former §§ 48, 132.6, 220, 230. [fn. 8]) The provisions quoted below make it clear that under the applicable sections of the Elections Code then in effect, an absentee war voter received his ballot and affidavit of registration at the same time, and "on or before the day of election" executed the affidavit and returned it to the county clerk with his ballot enclosed in an identification envelope. Thus, in such cases, the clerk and the registrar of voters received the affidavit of registration and the ballot at the same time.
Former section 132.6 then continues in its third paragraph: "Upon receipt thereof within the time required by law for the return of the absent voter's ballots, the clerk shall examine the affidavit of registration and if it appears therefrom that the affidavit of registration is properly executed and that the facts stated therein are such as would have entitled the applicant to register and vote at the election, if the affidavit had been executed in this State and within the time required by law, then the affiant shall be deemed a duly registered elector as of the date of the affidavit to the same extent and with the same effect as though he had registered in proper time prior to the election before the clerk." (Emphasis added.) Former section 5932 of the Elections Code which we have already set forth (see footnote 6) prescribed that absent voter's ballots had to be received by the clerk "within six days after the date of the election. ..." [12] It has been established by uncontradicted evidence *666 in the instant case that the affidavits of registration executed by Rogers Jr. and his wife were received by the county clerk of Santa Clara County within such permissible period, although the precise date of receipt is not ascertainable and that the ballots of these parties were counted. Such being the case, by virtue of the third paragraph of section 132.6, Rogers Jr. was to be deemed a "duly registered elector" as of May 19 and his wife such as of May 20, 1960. Appellant urges upon us a number of reasons why section 132.6 should not be so applied. Before we take these up, we make one preliminary observation. Former section 132.6 of the Elections Code states that upon the proper and timely return of the absent war voter's affidavit of registration, he shall be deemed a duly registered elector, while section 35303 of the Government Code prescribes the test of uninhabited territory on less than 12 registered voters. As we said in Perham v. 
City of Los Altos, supra, 190 Cal.App.2d 808, 810, there is no substantial difference in the use of such terms when considered in connection with ascertaining residence of registered voters. As defined by former and present section 21 of the Elections Code " '[v]oter' means any elector who is registered under the provisions of this code." The employment of the two terms "elector" and "voter" is not a factor in the problem. Appellant takes the position that (1) "registered voters" as used in section 35303 of the Government Code does not include "registered war voters" and (2) even if it does, the term "registered voters" should only include those persons whose affidavits of registration are actually in the possession of the registrar of voters and on the official precinct list available for inspection in connection with annexation proceedings. We think that appellant confuses the issue before us which is a question of residence and registration (Perham, supra, 190 Cal.App.2d at p. 809) by coupling with it the question of voting. Admittedly, the latter question easily obtrudes upon the former in the instant case, since the primary election for which the Rogers registered, occurred one day after the critical date for determining whether there were less than 12 registered voters in the area. Be that as it may, we are not concerned with the question of voting. We refer to Perham, supra. In that case, similar proceedings under the Act of 1939 here involved were declared void because on the day when the petition for annexation was filed, there were 14 registered voters within the territory proposed *667 for annexation. It was claimed on appeal that 8 of the 14 had registered less than 54 days before the crucial date, leaving only 6 who could be considered "registered voters." 
This court affirmed the judgment, holding that the 8 registered residents should be counted, even though they might have been ineligible to vote if an election had been held on the crucial day for counting under Government Code section 35303. We said: "It is a question of residence and registration, not a question of voting, on that day." (P. 809.) Perham becomes a guidepost in the instant case where an election was held, not on the crucial day, but the day after. Despite its proximity in time, we are still concerned only with registration and not with other circumstances pertaining to voting. [fn. 9] As we have pointed out, the resident registered voters contemplated by section 35303 of the Government Code are those determined to be such according to the provisions of the Elections Code. (Perham v. City of Los Altos, supra, 190 Cal.App.2d 808.) We fail to find, nor has appellant referred us to, any provisions of the Elections Code creating different categories of registered voters. All voters must be registered according to the Elections Code (former 21, 70) by affidavit of registration (former 120) according to a specified content and form (former 220, 230). The manner of effectuating the act of registration may vary, as for example in the case of absentee registration (former 132) or absentee war voter registration (former 132.6), but the result is uniform. All, whether making the affidavit in person before the county clerk or returning it by mail, become registered voters. We can therefore find no reason for the conclusion that an absentee war voter who registers according to the applicable statute (former 132.6) becomes a member of any different or separate class of registered voters. It is significant that the statute *668 which provides for this procedure of registration did not so state. 
On the contrary, it specifically stated that such a person became a duly registered elector "to the same extent and with the same effect as though he had registered in proper time prior to the election before the clerk" (former 132.6; emphasis added). [13] We hold therefore that a war voter (former 48) who registers according to law is a "registered voter" within the purview of section 35303 of the Government Code. [14] Appellant urges the sequential point that such a registered voter should not be counted unless his affidavit of registration has been processed as provided by former section 331 (now 422) of the Elections Code. [fn. 10] It is appellant's position that it is the bound and indexed book of affidavits of registration, compiled pursuant to such section, which constitutes the voter registration records and that only the names therein contained were the registered voters residing in the proposed territory on the crucial day. We disagree. Former section 120 (now 200) of the Elections Code provides: "No person shall be registered as a voter except by affidavit of registration. The affidavit shall be made before the county clerk and shall set forth all of the facts required to be shown by this chapter." Former section 121 (now 202) provides that the county clerk "may take the affidavit of registration in any adjoining county." Former section 123 (now 204) states that the county election board may provide "for the registration of electors" in precincts and "at specified times and places" other than the office of the county clerk. Former section 125 (now 205) provides that "[a]ny registration which may be made at the main office for registration in any city and county may be made and taken in any place in the city and county in the manner provided by rules and regulations made by the election board." 
We are persuaded that, in the light of the foregoing sections, registration is effected when the requisite affidavit is "made" by the voter and "taken" by the county clerk or other authorized person. The elector then becomes a "registered voter." [15] Appellant's insistence that the bound precinct registration *669 books constitute the sole list of registered voters is grounded on the erroneous premise that in the instant problem we are concerned with the question of voting. Former section 331 is coordinated with former section 122 to control the closing of registration and listing of voters for a particular election. Former section 122 (now 203) provides that "[r]egistration of electors shall be in progress at all times except during the 53 days immediately preceding any election, when registration shall cease for that election ..." (emphasis added). It in effect closes, not all registration, but registration for a particular election. As we said in Perham v. City of Los Altos, supra, 190 Cal.App.2d 808, 810, "[t]hat, obviously, is a limitation imposed to facilitate the orderly and accurate preparation of voting lists for use at any election" (emphasis added). Former section 331 then comes into play prescribing that within 15 days of such closing of registration, precinct registration books shall be prepared. All of the foregoing has to do with the question of voting. The act of registration was completed when the affidavit of the voter was properly made and taken. It did not depend upon its being incorporated in the precinct register, so far as registration is concerned. Thus, persons may be registered voters although ineligible to vote at the next election (cf. Perham, supra) or because under other registration procedures it was not contemplated that their affidavits of registration be subject to the compilation requirements of section 331. 
Former section 132.6 provides in substance for the filing of an absentee war voter's affidavit of registration by transmission to the clerk on or before the date of election with receipt thereof permissible within six days thereafter (former Elec. Code, 132.6, 5932). Quite obviously, such affidavits might be legally "made" and "taken" long after the time prescribed for the compilation of precinct registration books. [16] We must recognize that, if the "registered voters" residing in the territory proposed for annexation are not confined to those names contained in the precinct registration book, some difficulty will ensue in ascertaining the number of registered voters. Nevertheless we feel the proper question is who are the registered voters, not who are the registered voters qualified to vote at any election. Appellant maintains that the recognition of absentee war voter's registration and the retroactive effect of it pursuant to section 132.6 should not be permitted to void the instant annexation proceedings. This may be an unfortunate result but nevertheless, the law, as we view *670 it, permits it. We revert to Perham. It is a question after all of residence and registration. Section 35303 of the Government Code ordains that we count all registered voters residing in the territory on the crucial date. Former section 132.6 permits absentee war voter registration and ordains its retroactive effect "as of the date of the affidavit" of registration. This section, too, controls the counting of registered voters. We hold therefore that for the foregoing reasons Mr. and Mrs. Joseph A. Rogers, Jr., were also properly counted as registered voters residing within the territory proposed to be annexed. With Mr. and Mrs. Hitchcock, this brought the total number to 12 and justified the trial court's conclusion that appellant had exceeded its jurisdiction. The judgment is affirmed. Bray, P. J., and Molinari, J., concurred. 
Former section 5932, also dealing with absent voting, provided as follows: "All ballots cast under the provisions of this chapter shall, in order that they may be counted, be received by the clerk from whom they were received within six days after the date of the election in which they are to be counted." (New 14667 changes the above time to "not less than three days before the date of election. ...") "(a) Member of the armed forces of the United States or any auxiliary branch thereof." "* * *" "(f) Spouses and dependents of the persons enumerated herein. ..." Former section 132.6 provided, in its first two paragraphs, as follows: "Whenever any person not a registered elector, or any person who has changed his residence since last registering, who qualifies under the provisions of Section 48 shall apply in writing or in person to the clerk for an absent voter's ballot and the application shows that he is a war voter, and that his place of residence is in the county, the clerk shall mail to the applicant with the absent voter's ballot, or deliver to him, blank forms of registration affidavit as prescribed in Article 3 of this chapter to be executed in duplicate by the applicant." "If the applicant desires to vote at the election he shall, on or before the day of the election and before marking the absent voter's ballot, execute the affidavit of registration under the provisions of Section 120, 132, or 132.5 ... and return the same, in the return envelope but not in the identification envelope, together with the absent voter's ballot enclosed in the identification envelope, to the clerk from whom the same were received." (Emphasis added.) NOTES [fn. 1] 1. Mandamus is a proper remedy to compel termination of proceedings under the Annexation of Uninhabited Territory Act of 1939 prior to the time when quo warranto becomes available. (County of San Mateo v. City Council, Palo Alto (1959) 168 Cal.App.2d 220, 221 [335 P.2d 1013]; American Distilling Co. v. 
City Council, Sausalito (1950) 34 Cal.2d 660, 666-667 [212 P.2d 704, 18 A.L.R.2d 1247]; City of Campbell v. Mosk (1961) 197 Cal.App.2d 640, 645 [17 Cal.Rptr. 584].) [fn. *] *. Formerly Rules on Appeal, rule 5(a). [fn. 2] 2. Richmond thereupon notes that the Supreme Court in People v. Town of Ontario (1906) 148 Cal. 625, 641 [84 P. 205], sustained a trial court finding that certain territory " 'taken as a whole, may fairly be said to be inhabited' " notwithstanding " 'the presence of several uninhabited tracts or parcels, each exceeding five acres in area,' " Ontario being thereafter followed and applied in Rogers v. Board of Directors of Pasadena (1933) 218 Cal. 221, 223 [22 P.2d 509] and in People v. City of Whittier (1933) 133 Cal.App. 316, 320-321 [24 P.2d 219]. [fn. 3] 3. The argument proceeded as follows: "... that, logically extended, it would apply to a holding of 1,000 acres or more if the occupant is a registered voter and has a dwelling house in one corner of it." (141 Cal.App.2d at p. 115.) Appellant herein makes a similar contention. [fn. 4] 4. Section 35008 provides: "The boundaries of territory proposed to be annexed shall not be fixed without the consent of the owner of the property so as to exclude the site of the residence dwelling of the owner of the property and to include the remainder of the property of such owner, where the site of the residence dwelling is contiguous or adjacent to the remainder of the property. If in any annexation proceedings boundary lines are fixed in violation of this section, the affected property owner may at any time before one year after the completion of the proceedings file a statement of the violation of this section with the clerk of the legislative body of the city annexing, or proposing to annex, such property and at its next meeting the legislative body shall by resolution exclude such property from the territory annexed. 
If the annexation proceedings have been completed, the legislative body shall transmit a certified copy of such resolution, describing the boundaries of the annexed territory, as changed, with the Secretary of State, who shall file it and transmit a certificate of the filing to the clerk of the legislative body and to the board of supervisors of the county in which the city is situated. [fn. 5] 5. The exhibit brought before us shows that Rogers, after stating the length of residence, first wrote and then deleted the street address of his former San Jose home. Linda did the same thing. [fn. 6] 6. Former section 5931 of the Elections Code provided in part as follows: "At any time on or before the date of an election, an absent voter, regardless of whether he is within or without the territorial limits of the United States, may mark his ballot and transmit it, on or before the day of election, to the clerk by mail. ..." [fn. 7] 7. Since the present Elections Code of 1961, in effect September 15, 1961, represents extensive amendments and changes in the former Elections Code, we will refer to the pertinent sections as in effect on June 6, 1960, by the numbers used in the former (1939) Elections Code. [fn. 8] 8. Former section 48 provided: " 'War voter' refers to an elector who comes within one of the following categories: [fn. 9] 9. Perham v. City of Los Altos, supra, was decided on April 5, 1961. Shortly thereafter 35303 was amended by Stats. 1961, ch. 1988, p. 4183, 15 approved by the Governor July 19, 1961, and effective September 15, 1961, to give a new definition of uninhabited territory. Section 35303, as thus amended, now provides: "For purposes of this article territory shall be deemed uninhabited if less than 12 persons who have been registered to vote within the territory for at least 54 days reside within the territory at the time of the filing of the petition for annexation or the institution of proceedings on motion of the city legislative body." 
(Emphasis added.) We are not called hereupon to determine the effect of such amendment on the registration of absentee war voters covered by 250-254 of the 1961 Elections Code, which represent a reenactment of former 132.6 of the 1939 Code without substantial change. It is to be noted that the retroactive effect of such registration is still provided for in new 252 in language identical to that used in former 132.6. [fn. 10] 10. Former 331 provided: "Within fifteen days after the last day of registration for any election the county clerk shall arrange the original affidavits of registration for each precinct in which the election is to be held, alphabetically by surnames in each precinct, and bind them into books with an alphabetical index. Each book shall be marked on the outside with the name or number of a precinct, and shall contain all, and only, the original affidavits of registration of the voters residing within the precinct."
Introduction {#s1}
============

The assessment of human daily physical activity in population studies requires accurate, cheap, and feasible measurement technology [@pone.0061691-Corder1], [@pone.0061691-Wareham1], [@pone.0061691-Wong1]. Accelerometers are increasingly being used for physical activity assessment, and most of the accelerometers that have been used in population studies express their output in proprietary units usually referred to as "counts" [@pone.0061691-Hagstromer1], [@pone.0061691-Colley1]. Accelerometer devices based on acceleration sensors that allow for raw data storage, expressed in g-units or SI units at a relatively high sampling frequency, have been used in gait analysis [@pone.0061691-Brandes1], [@pone.0061691-MoeNilssen1] and ambulant activity classification [@pone.0061691-Aminian1], [@pone.0061691-Veltink1] for a number of years. The output of raw accelerometers is not summarized by the monitor, allowing for increased control over data processing by the end-user, in contrast to traditional accelerometers. Technological developments in recent years have made raw accelerometry feasible for population research, allowing weeklong data collection. A measured acceleration signal consists of a gravitational component, a movement component, and noise [@pone.0061691-Veltink1]. During static conditions or conditions of steady state non-rotational movement, the gravitational component is visible as the offset of one or more sensor axes and can then be used for detection of the sensor orientation relative to the vertical plane [@pone.0061691-Veltink1]. The separation of the gravitational component from the acceleration signal is complicated by the fact that in the presence of rotational movements the frequency domains of the movement-related component and the gravitational component can overlap, thus making simple frequency-based filtering inappropriate for perfect separation. 
The first two studies that identified the challenge of separating the components of acceleration lacked a comparison against a reference method [@pone.0061691-Redmond1], [@pone.0061691-VanSomeren1]. Studies by Bouten et al. and Bourke et al. used a reference method, but were limited to laboratory experiments that may not generalise to accelerometer data collected under real life conditions [@pone.0061691-Bourke1], [@pone.0061691-Bouten1]. None of the studies as mentioned above systematically evaluated how metric accuracy varies across magnitudes and frequencies of acceleration. Characterisation of the latter may be important to gain insight into metric performance under real-life conditions. The use of gyroscopes in addition to acceleration sensors could be regarded as the solution for separating the gravitational component from the acceleration signal [@pone.0061691-Roetenberg1], [@pone.0061691-Sabatini1], [@pone.0061691-Yun1]. However, these devices do not yet meet feasibility requirements for use in large scale observational research. Raw accelerometry has been applied in various epidemiological studies since it became sufficiently feasible in the period 2008--2010. Most of these studies are not published yet, but already amount to over ten thousand participants. None of these datasets include gyroscopic data and therefore require an accelerometer-specific solution. The main objective of the present study was therefore to evaluate the ability of different methods (metrics) of processing acceleration signals to remove the gravitational component of acceleration by comparison against a reference method under a range of standardised kinematic conditions. A second objective was to assess the shared variance between these metrics in human physical activity data collected during daily life and the impact of metric selection on the accuracy with which daily energy expenditure can be estimated. 
Methods {#s2}
=======

Ethics Statement {#s2a}
----------------

Ethical approvals were obtained from the Cambridgeshire research ethics committee, Cambridge (UK) and from the Regional Ethical Review Board in Umeå (Sweden).

Study Design {#s2b}
------------

The main experiment in this study was done with a robot and did not involve testing of human participants. Two additional sets of experiments were performed, the first to test the degree to which metrics convey similar information when applied to wrist and hip signals, and the second to assess the implication of such differences for estimation of daily physical activity-related energy expenditure.

Robot Experiment {#s2c}
----------------

An industrial robot (TX90, Stäubli Tec-Systems GmbH, Bayreuth, Germany; see [**Figure 1**](#pone-0061691-g001){ref-type="fig"}) was used to rotate accelerometers (GENEA, Unilever Discover, Sharnbrook Bedfordshire, UK) in the vertical plane following a general minimum-jerk oscillatory motion (single plane). The motion was applied to establish a standardized alternating contribution of gravity to the accelerometer output. The robot consists of an articulated arm with six joints, of which the fifth joint counted from the base of the robot was used in this study. The oscillating motion was continuous (non-damping) around a single horizontal axis. The trajectory was programmed using a 7th order polynomial function with kinematic constraints (**[Supporting Information S1](#pone.0061691.s001){ref-type="supplementary-material"}**). A high order function was needed to reduce the natural vibrations transmitted between the robot and its own base [@pone.0061691-Piazzi1], [@pone.0061691-Kyriakopoulos1]. An example of the angular position over time for one experimental condition is given in [**Figure 2**](#pone-0061691-g002){ref-type="fig"}. 
![Experimental setup.\
A bar (B) holds five accelerometers and rotates around robot joint (A).](pone.0061691.g001){#pone-0061691-g001}

![Robot joint angle and horizontal acceleration for condition: 1 Hz, amplitude 45°, radius = 0.5 m.](pone.0061691.g002){#pone-0061691-g002}

The frequency of oscillation, the radius of rotational movement (shortest distance to centre of rotation), and the angular range of motion were systematically varied. The range of frequency conditions was limited by the maximal mass moment of inertia and torque that could be absorbed by the robot and supporting frame. For all frequencies ranging from 0.05 Hz to 1.2 Hz, eighteen tri-axial accelerometers were positioned along the length of a 70 cm bar mounted to the flange of the robot at 10 cm from the centre of rotation. The application of eighteen accelerometers in parallel allowed for assessment of the relationship between metric output and the radius of movement. To reduce the mass moment of inertia at the higher frequencies of oscillation (\>1.1 Hz) a shorter bar (20 cm) was used, see [**Figure 1**](#pone-0061691-g001){ref-type="fig"}. The shorter bar provided space for the attachment of only five accelerometers. The torque can be further reduced by reducing the range of angular rotation; some experimental conditions were defined by this constraint. For reference purposes, all eighteen accelerometers were also tested under static conditions (no robot movement) at angles 0° and 22.5°. Each experimental condition lasted three minutes. An overview of all experimental conditions is shown in [**Table 1**](#pone-0061691-t001){ref-type="table"}. For monitoring potential vibrations, a source of experimental error, one additional accelerometer was attached to the base of joint 5 for all experimental conditions. The base of joint 5, i.e. the robot with its joints 1 up to 4, should in theory not move during these experiments. 
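The 7th order polynomial trajectory mentioned above can be illustrated with a short sketch (a simplified illustration, not the authors' actual robot program): imposing eight kinematic constraints — target angle at the end of each half-oscillation and zero velocity, acceleration, and jerk at both endpoints — fixes the eight coefficients of a septic polynomial, which can be obtained by solving a linear system.

```python
import numpy as np

def septic_coefficients(amplitude, T):
    """Coefficients c[0..7] of theta(t) = sum_n c[n] * t**n that moves from
    0 to `amplitude` in time T, with zero velocity, acceleration and jerk
    at both endpoints (eight constraints -> unique 7th-order polynomial)."""
    def row(t, k):
        # k-th derivative of the basis [1, t, ..., t**7] evaluated at t
        r = np.zeros(8)
        for n in range(k, 8):
            c = 1.0
            for j in range(k):
                c *= n - j  # falling factorial n*(n-1)*...*(n-k+1)
            r[n] = c * t ** (n - k)
        return r

    # Four constraints at t=0 and four at t=T (derivatives 0..3)
    M = np.array([row(0.0, k) for k in range(4)] + [row(T, k) for k in range(4)])
    b = np.array([0.0, 0.0, 0.0, 0.0, amplitude, 0.0, 0.0, 0.0])
    return np.linalg.solve(M, b)
```

For a unit move over unit time this recovers the well-known minimum-jerk-style profile θ(τ) = 35τ⁴ − 84τ⁵ + 70τ⁶ − 20τ⁷; the angular velocity and acceleration needed for the reference calculation then follow analytically by differentiating the polynomial.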
10.1371/journal.pone.0061691.t001

### Experimental conditions of the robot setup.

| Frequencies | Angle range\* | Number of accelerometers (range in position relative to axis of rotation) |
| --- | --- | --- |
| 0 Hz | 0° and 22.5° | 18 (0.13--0.78 m) |
| 0.05 to 0.55 Hz (steps of 0.05) | 0--90° | 18 (0.13--0.78 m) |
| 0.60, 0.70, and 0.80 Hz | 0--45° | 18 (0.13--0.78 m) |
| 0.90, 1.00, and 1.10 Hz | 0--20° | 18 (0.13--0.78 m) |
| 1.20 and 1.30 Hz | 0--45° | 5 (0.13--0.29 m) |
| 1.4 to 2.6 (steps of 0.1), 2.8, 3.0, 3.2, 3.6 and 4 Hz | 0--20° | 5 (0.13--0.29 m) |

\[\*for 0° the bar is in horizontal position and for 90° the bar is pointing upwards relative to the axis of rotation\].

Human Experiments {#s2d}
-----------------

In order to facilitate the interpretation of the robot experiment in the context of human daily (free-living) physical activity, we asked 47 men and 50 women (healthy, aged 22--65 yrs) to wear accelerometers on their wrist and on their hip for seven days during free-living as previously described [@pone.0061691-vanHees1]. We also re-analysed wrist acceleration signals obtained during free-living conditions from 65 healthy women (aged 20--35 yrs) as previously described [@pone.0061691-vanHees1]. In this latter sample, physical activity-related energy expenditure (PAEE) was assessed using the doubly labelled water method in combination with resting energy expenditure measured by indirect calorimetry [@pone.0061691-vanHees1]. For both human studies, objectives and procedures were explained in detail to the participants, after which they provided written and verbal informed consent.

Accelerometer {#s2e}
-------------

The accelerometer comprised a tri-axial STMicroelectronics accelerometer (LIS3LV02DL) with a dynamic range of ±6 g (1 g = 9.81 m·s^−2^), as described elsewhere [@pone.0061691-van1]. 
The acceleration was sampled at 80 Hz and data were stored in g units for offline analyses. In the robot experiment, the accelerometer was aligned by two aluminium strips on each side of the bar (insert, [**Figure 1**](#pone-0061691-g001){ref-type="fig"}) and covered by duct tape on top, see [**Figure 1**](#pone-0061691-g001){ref-type="fig"}. The radius length, i.e. the distance from the axis of rotation to the accelerometer chip, was assessed by measurement tape to the closest mm. The position of the accelerometer chip inside the accelerometer packaging was obtained from the manufacturer. In the human experiment, the accelerometers were attached to the wrist with a nylon weave strap and to the hip with an elastic belt. Participants were instructed to wear the accelerometer on the wrist continuously for 24 hours per day throughout the whole observation period and to remove the hip accelerometer during sleeping hours. The manufacturer calibration of all acceleration sensors was tested under static conditions (no movement, vector magnitude = 1 g) and adjusted if necessary.

Metrics {#s2f}
-------

For the robot analyses three metrics for the estimation of acceleration related to movement were evaluated: (i) the Euclidean norm (vector magnitude) of the three raw signals minus 1, referred to as ENMO; (ii) the application of a high-pass frequency filter (4^th^ order Butterworth filter with ω~0~ = 0.2 Hz) to each raw signal, after which the Euclidean norm was taken of the three resulting signals, referred to as HFEN; and (iii) metric HFEN plus the Euclidean norm of the three low-pass filtered raw signals (4^th^ order Butterworth with ω~0~ = 0.2 Hz) minus 1 g, referred to as HFEN~+~. The third metric has not been described previously. The motivation for metric HFEN~+~ is as follows: in the absence of rotational movement the Euclidean norm of the three low-pass filtered raw signals (LFEN) is equal to 1 g. 
In the presence of rotation, however, LFEN may be different from 1 g due to imperfect separation; therefore we add this difference (positive or negative) to HFEN. A low frequency component above 1 g may result from low-frequency accelerations perpendicular to the direction of rotation, e.g. the centripetal force when sitting on a swing. A low frequency component below 1 g could indicate that part of the gravitational component is still contained in the high-frequency content, e.g. rotations in the vertical plane as a result of which gravity is an alternating component in the signal. A further elaboration on the motivation for metric HFEN~+~ can be found in **[Supporting Information S1](#pone.0061691.s001){ref-type="supplementary-material"}**. For some of the metrics described above the output could in theory be negative. To gain insight into when this happens, negative values were not corrected for the robot experiment. However, for the accelerometer data collected in daily human movement, negative metric output was rounded off to zero before further analysis. The filter cut-off frequency of 0.2 Hz for metrics HFEN and HFEN~+~ was chosen on the presumption that most daily acceleration related to movement for most human body parts occurs at frequencies higher than 0.2 Hz. In the robot experiment, the exact absolute value of this filter cut-off frequency (0.2 Hz) was considered of minor relevance as this experiment intends to investigate frequency of rotation and frequency of filtering on a relative scale. For the human part of our study, both a cut-off frequency of 0.2 Hz and 0.5 Hz were evaluated to assess the effect of threshold selection in relation to human movement. Additionally, the human part of our study was extended with the application of a band-pass frequency filter version of HFEN (4^th^ order Butterworth filter with ω~0~ = 0.2--15 Hz), referred to as BFEN, to assess the effect of high-frequency noise removal. 
Finally, the Euclidean norm of the three raw acceleration signals (EN) without subtraction of gravity was added to the evaluations in human data to assess the relevance of attempting to remove the gravitational component from an applied perspective. To sum up, metrics evaluated in this investigation include Euclidean Norm (EN), Euclidean Norm Minus One (ENMO), Bandpass-Filtered followed by Euclidean Norm (BFEN), Highpass-Filtered followed by Euclidean Norm (HFEN), and Highpass-Filtered followed by Euclidean Norm Plus difference between 1 g and low-pass-filtered component (HFEN~+~).

Analysis {#s2g}
--------

Reference values for robot acceleration were calculated based on forward kinematics of the robot arm using the radius length (*r*) of each accelerometer relative to the axis of rotation and the robot arm's angle (*θ*), angular velocity (*ω*), and angular acceleration (*α*) over time. Although the robot recorded the joint angle at 250 Hz, this information was not used due to known issues of numerical noise in the derivation of angular velocity and angular acceleration. Instead, the angular velocity and angular acceleration were derived analytically by taking the first and second derivative of the input command equations describing the angular motion as used for controlling the robot. Next, equation I was used to calculate reference acceleration *a*~ref~ = √((*r*·*α*)^2^ + (*r*·*ω*^2^)^2^). Here, *r*·*α* represents the tangential acceleration and *r*·*ω*^2^ represents the centripetal acceleration, which when taken together as the vector magnitude add up to the overall acceleration of the accelerometer. The average metric output and reference values were calculated over an integer number of oscillating periods in the middle two minutes of each experimental condition (3 minutes), after which absolute and relative measurement errors were expressed. Relative errors were calculated as (Estimated − Reference)/Reference. 
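The metrics enumerated above can be sketched as follows (a hypothetical Python/SciPy illustration — the study's actual processing was implemented in R, and details such as zero-phase versus causal filtering are assumptions here):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 80.0  # sampling frequency of the monitor (Hz)

def _filtered(xyz, cutoff, btype, fs=FS):
    # 4th order Butterworth; filtfilt gives zero-phase filtering (assumption)
    b, a = butter(4, np.asarray(cutoff) / (fs / 2.0), btype=btype)
    return filtfilt(b, a, xyz, axis=0)

def en(xyz):
    """Euclidean norm of the three raw signals (g)."""
    return np.linalg.norm(xyz, axis=1)

def enmo(xyz):
    """Euclidean norm minus one (g)."""
    return en(xyz) - 1.0

def hfen(xyz, fs=FS, cutoff=0.2):
    """High-pass filter each axis, then take the Euclidean norm."""
    return np.linalg.norm(_filtered(xyz, cutoff, "high", fs), axis=1)

def hfen_plus(xyz, fs=FS, cutoff=0.2):
    """HFEN plus the difference between the low-pass-filtered norm and 1 g."""
    lfen = np.linalg.norm(_filtered(xyz, cutoff, "low", fs), axis=1)
    return hfen(xyz, fs, cutoff) + (lfen - 1.0)

def bfen(xyz, fs=FS, band=(0.2, 15.0)):
    """Band-pass filter each axis, then take the Euclidean norm."""
    return np.linalg.norm(_filtered(xyz, band, "band", fs), axis=1)
```

For a stationary monitor (constant [0, 0, 1] g input) EN is 1 g while ENMO, HFEN and HFEN~+~ are near zero; for daily-life data, negative ENMO and HFEN~+~ values would be rounded to zero before averaging, as described above.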
For reference purposes, all analyses were repeated based on simulated acceleration signals using equations II and III. Here, the acceleration signal perpendicular to the length of the bar captures the tangential acceleration combined with the effect of the gravitational component, and the acceleration signal parallel to the length of the bar captures the centripetal acceleration combined with the gravitational component. The centre of rotation is assumed not to change position. Metrics ENMO, HFEN, HFEN~+~, BFEN and EN were applied to the raw data collected on the wrist and hip (7 days), after which metric output was averaged over consecutive non-overlapping 1-minute time windows. Further, metrics ENMO, HFEN, HFEN~+~, BFEN and EN were applied to the raw data collected in the human participants for whom PAEE reference data was available. Here, metric output was averaged per person. A detailed description of the detection of monitor non-wear periods and signal clipping is provided in **[Supporting Information S1](#pone.0061691.s001){ref-type="supplementary-material"}**. Fifteen-minute blocks that were classified as non-wear or clipping were replaced by the average of blocks at the same time periods of the day (from the other days in each individual record). If no data was collected for a certain part of the day, then it was imputed by 1 g for metric EN and by 0 g for all other metrics. All signal processing and statistics were performed in R (<http://cran.r-project.org>).

Statistics {#s2h}
----------

Means and (relative) differences were computed for the data resulting from the robot experiment. In order to evaluate whether differences between metrics resulted in different measures of free-living human movement, repeated-measures ANOVA was used to assess the within- and between-individual explained variance between metrics, stratified by wrist and hip placement.
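A simplified stand-in for this variance decomposition is sketched below; it is not the exact repeated-measures ANOVA model, but it conveys the idea: between-individual agreement is computed on per-person means, within-individual agreement on person-centred values.

```python
import numpy as np

def within_between_r2(x, y, ids):
    """Shared variance (r^2) of two metric series within and between
    individuals. `x` and `y` are per-window metric values; `ids` holds
    the participant identifier of each window.
    """
    x, y, ids = np.asarray(x, float), np.asarray(y, float), np.asarray(ids)
    persons = np.unique(ids)
    # Between individuals: correlate the per-person means.
    mx = np.array([x[ids == p].mean() for p in persons])
    my = np.array([y[ids == p].mean() for p in persons])
    between = np.corrcoef(mx, my)[0, 1] ** 2
    # Within individuals: correlate the person-centred values.
    xc, yc = x.copy(), y.copy()
    for p in persons:
        xc[ids == p] -= x[ids == p].mean()
        yc[ids == p] -= y[ids == p].mean()
    within = np.corrcoef(xc, yc)[0, 1] ** 2
    return within, between
```

Two metrics can track each other perfectly within a person yet still rank people differently, which is exactly the distinction drawn in the Results.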
Analyses were performed for all data points excluding non-wear time segments and were repeated including imputed data for non-wear time segments. The most important difference is that this would either include or exclude hip accelerometer values for sleeping hours. Results were very similar, and we only report results excluding non-wear time for these analyses. Average and standard deviation of metric output are reported based on imputed data to facilitate the comparison between this study population and future study populations. For the PAEE analyses, participant inclusion criteria were identical to our previous work [@pone.0061691-vanHees1]: more than 50% detected monitor wear time and at least one day of valid data. Linear regression analysis was used to assess how much of the variation in daily PAEE, expressed in MJ/day, can be explained by each metric in combination with body weight. Additionally, we tested the additive value of metrics by adding combinations of metrics to the regression model.

Results {#s3}
=======

Robot conditions and the corresponding reference acceleration are presented in [**Figure 3**](#pone-0061691-g003){ref-type="fig"}. The accelerometer attached to the base of joint 5, which in theory should not move, recorded a magnitude of acceleration (vibration) beyond the sensor's noise level (SD: 2.6 mg = 0.0026 g) for most experimental conditions. On average, the acceleration of the robot joint was 4% to 5% of the average acceleration of the accelerometers on the bar attached to the flange, see [**Table 2**](#pone-0061691-t002){ref-type="table"}. The highest value of 76% for ENMO was the result of the computed acceleration being close to zero (−5.13 mg).

![Robot conditions and corresponding reference acceleration (mg), where A = amplitude of angle.](pone.0061691.g003){#pone-0061691-g003}

10.1371/journal.pone.0061691.t002

###### Average (mg) and relative (%) acceleration of the base of joint 5 (should ideally be zero) by experimental condition and metric.
![](pone.0061691.t002){#pone-0061691-t002-2}

| Freq. (Hz) | Angle (°) | ENMO (mg) | HFEN (mg) | HFEN~+~ (mg) | ENMO (%) | HFEN (%) | HFEN~+~ (%) |
|------------|-----------|-----------|-----------|--------------|----------|----------|-------------|
| 0.05--0.2 | 0--90 | −3.9 | 13.4 | 9.4 | 76.0% | 7.1% | 6.3% |
| 0.25--0.55 | 0--90 | −4.9 | 14.2 | 9.2 | −12.3% | 2.2% | 2.4% |
| 0.6--0.8 | 0--45 | −2.9 | 18.9 | 15.7 | −8.1% | 3.7% | 3.7% |
| 0.9--1.1 | 0--20 | 0.9 | 21.5 | 22.0 | 6.7% | 5.8% | 6.4% |
| 1.2--1.3 | 0--45 | 1.5 | 9.3 | 10.8 | 2.0% | 1.4% | 1.9% |
| 1.4--2.0 | 0--20 | 0.1 | 35.9 | 35.2 | 0.4% | 7.8% | 7.9% |
| 2.1--3.0 | 0--20 | 1.4 | 17.1 | 18.3 | 0.7% | 1.9% | 2.1% |
| 3.2--4.0 | 0--20 | 2.3 | 74.8 | 73.5 | 0.2% | 4.3% | 4.3% |
| **Average** | | 0.7 | 25.6 | 24.3 | 8.2% | 4.3% | 4.4% |

Relative values are expressed as a percentage of the average metric output for all accelerometers attached to the bar as fixed to the flange.

The metric output for each accelerometer attached to the bar was compared against the reference acceleration. Metric HFEN~+~ was more accurate than metric HFEN, with average absolute measurement errors of 90 mg and 109 mg, respectively. Measurement error was lowest for metric HFEN~+~ in all but one of the experimental conditions based on oscillation frequencies higher than 0.2 Hz. On the contrary, metric ENMO outperformed the other metrics for frequencies of oscillation below 0.2 Hz, see [**Table 3**](#pone-0061691-t003){ref-type="table"}. For all metrics except ENMO, relative and absolute measurement error was lower for higher radius settings, see [**Table 3**](#pone-0061691-t003){ref-type="table"}.

10.1371/journal.pone.0061691.t003

###### Evaluation of metrics using empirically recorded acceleration signals.

![](pone.0061691.t003){#pone-0061691-t003-3}

| Freq. (Hz) | Angle (°) | Radius (m) | Acc. (mg) | ENMO | HFEN | HFEN~+~ |
|------------|-----------|------------|-----------|------|------|---------|
| 0\* | 0 | 0.1--0.3 | 0 | −9 | 4 | −5 |
| 0\* | 0 | 0.3--0.6 | 0 | 0 | 6 | 6 |
| 0\* | 0 | 0.6--0.8 | 0 | −3 | 9 | 6 |
| 0\* | 22.5 | 0.1--0.3 | 0 | −4 | 3 | 0 |
| 0\* | 22.5 | 0.3--0.6 | 0 | −11 | 5 | −4 |
| 0\* | 22.5 | 0.6--0.8 | 0 | −11 | 7 | −4 |
| 0.05--0.2 | 0--90 | 0.1--0.3 | 14 | −16 (−173) | 167 (1427) | 132 (1184) |
| 0.05--0.2 | 0--90 | 0.3--0.6 | 31 | −38 (−162) | 155 (619) | 112 (447) |
| 0.05--0.2 | 0--90 | 0.6--0.8 | 48 | −55 (−144) | 152 (442) | 107 (343) |
| 0.25--0.55 | 0--90 | 0.1--0.3 | 129 | −122 (−98) | 435 (498) | 212 (272) |
| 0.25--0.55 | 0--90 | 0.3--0.6 | 281 | −251 (−93) | 364 (194) | 89 (76) |
| 0.25--0.55 | 0--90 | 0.6--0.8 | 434 | −354 (−86) | 308 (108) | −3 (24) |
| 0.6--0.8 | 0--45 | 0.1--0.3 | 161 | −153 (−97) | 206 (149) | 141 (102) |
| 0.6--0.8 | 0--45 | 0.3--0.6 | 351 | −328 (−95) | 152 (49) | 57 (21) |
| 0.6--0.8 | 0--45 | 0.6--0.8 | 541 | −465 (−87) | 118 (24) | 9 (3) |
| 0.9--1.1 | 0--20 | 0.1--0.3 | 134 | −128 (−99) | 93 (78) | 83 (67) |
| 0.9--1.1 | 0--20 | 0.3--0.6 | 293 | −292 (−100) | 73 (27) | 44 (17) |
| 0.9--1.1 | 0--20 | 0.6--0.8 | 451 | −419 (−93) | 68 (16) | 35 (8) |
| 1.2--1.3 | 0--45 | 0.1--0.3 | 508 | −432 (−87) | 160 (35) | 63 (14) |
| 1.4--2.0 | 0--20 | 0.1--0.3 | 390 | −364 (−95) | 72 (22) | 54 (16) |
| 2.1--3.0 | 0--20 | 0.1--0.3 | 832 | −618 (−79) | 47 (7) | 22 (3) |
| 3.2--4.0 | 0--20 | 0.1--0.3 | 1700 | −779 (−50) | 45 (3) | 14 (1) |

Values are average absolute differences in mg (average relative error % in brackets §) between each metric output and the actual acceleration related to movement for various sections of the experiment. \[Acc: average reference acceleration; \*: zero-movement condition; §: relative measurement error was calculated per experimental condition and then averaged across each section of the experiment\].

Replication of the analyses with simulated acceleration signals confirmed the empirical findings described above. A detailed overview of the results based on simulated acceleration signals is included in **[Supporting Information S1](#pone.0061691.s001){ref-type="supplementary-material"}**.
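Such a simulation can be sketched as follows, assuming a sinusoidal angle trajectory and one plausible sign convention (θ measured from the direction of gravity, outputs in g-units); the exact conventions of equations II and III are given in the paper.

```python
import numpy as np

G_MS2 = 9.81  # 1 g in m/s^2

def simulate_bar_signals(radius_m, amplitude_rad, freq_hz, t):
    """Simulate the two in-plane channels of an accelerometer fixed to a
    bar rotating about a fixed horizontal axis, theta(t) = A*sin(2*pi*f*t).

    Returns (a_perp, a_par) in g-units: the channel perpendicular to the
    bar (tangential acceleration plus a gravity component) and the channel
    parallel to the bar (centripetal acceleration plus a gravity
    component). The centre of rotation is assumed not to translate.
    """
    w = 2.0 * np.pi * freq_hz
    theta = amplitude_rad * np.sin(w * t)
    theta_dot = amplitude_rad * w * np.cos(w * t)
    theta_ddot = -amplitude_rad * w ** 2 * np.sin(w * t)
    a_perp = radius_m * theta_ddot / G_MS2 + np.sin(theta)
    a_par = radius_m * theta_dot ** 2 / G_MS2 + np.cos(theta)
    return a_perp, a_par
```

Feeding such simulated channels through the metrics reproduces, in idealised form, the gravity-leakage effects observed with the real sensors.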
Data and R-scripts related to the robot experiments are available on our website: <http://www.mrc-epid.cam.ac.uk/research/resources>. When metrics were applied to human wrist and hip acceleration signals collected during free-living conditions, repeated-measures ANOVA showed that the shared within- and between-individual variances (r-squared) varied between metric pairs and body locations, see [**Table 4**](#pone-0061691-t004){ref-type="table"} **and** [**Table 5**](#pone-0061691-t005){ref-type="table"}. The lowest shared variance was found for metric pairs involving metric EN; for example, this metric shared 54% and 11% of the within- and between-individual variance, respectively, with metric BFEN for hip acceleration, see [**Table 5**](#pone-0061691-t005){ref-type="table"}. The highest shared variances were observed between the filter-based metrics. For example, metrics HFEN and BFEN, as well as versions of HFEN with different cut-off frequencies, were all highly correlated both within and between individuals and for both hip and wrist data (r-square values \>0.96), see [**Table 4**](#pone-0061691-t004){ref-type="table"} **and** [**5**](#pone-0061691-t005){ref-type="table"}. A difference between wrist and hip worth noting was the shared variance between ENMO and the filter-based metrics HFEN, BFEN and HFEN~+~. Here, the shared variance within individuals was highest for the hip (0.92 vs. 0.87 on average), while the shared variance between individuals was highest for the wrist (0.87 vs. 0.62 on average), see [**Table 4**](#pone-0061691-t004){ref-type="table"} **and** [**Table 5**](#pone-0061691-t005){ref-type="table"}.

10.1371/journal.pone.0061691.t004

###### Explained variance (r^2^) within (above diagonal) and between (below diagonal) individual wrist accelerometer data for all combinations of data processing metrics.
![](pone.0061691.t004){#pone-0061691-t004-4}

| | EN | ENMO | BFEN (0.2--15 Hz) | HFEN (0.2 Hz) | HFEN (0.5 Hz) | HFEN~+~ (0.2 Hz) | HFEN~+~ (0.5 Hz) |
|---|---|---|---|---|---|---|---|
| EN | − | 0.91 | 0.61 | 0.62 | 0.71 | 0.75 | 0.80 |
| ENMO | 0.92 | − | 0.80 | 0.81 | 0.89 | 0.91 | 0.95 |
| BFEN (0.2--15 Hz) | 0.58 | 0.80 | − | 0.99 | 0.96 | 0.96 | 0.93 |
| HFEN (0.2 Hz) | 0.60 | 0.82 | 1.00 | − | 0.98 | 0.97 | 0.94 |
| HFEN (0.5 Hz) | 0.64 | 0.88 | 0.98 | 0.99 | − | 0.98 | 0.98 |
| HFEN~+~ (0.2 Hz) | 0.74 | 0.91 | 0.97 | 0.97 | 0.98 | − | 0.99 |
| HFEN~+~ (0.5 Hz) | 0.77 | 0.95 | 0.94 | 0.95 | 0.98 | 0.99 | − |
| *Mean (sd) acceleration \[mg\]* | 1016 (9) | 32 (10) | 114 (25) | 118 (26) | 93 (22) | 110 (25) | 94 (23) |

\[Frequencies in parentheses denote ω~0~, the cut-off for the frequency filter\].

10.1371/journal.pone.0061691.t005

###### Explained variance (r^2^) within (above diagonal) and between (below diagonal) individual hip accelerometer data for all combinations of data processing metrics.

![](pone.0061691.t005){#pone-0061691-t005-5}

| | EN | ENMO | BFEN (0.2--15 Hz) | HFEN (0.2 Hz) | HFEN (0.5 Hz) | HFEN~+~ (0.2 Hz) | HFEN~+~ (0.5 Hz) |
|---|---|---|---|---|---|---|---|
| EN | − | 0.77 | 0.54 | 0.55 | 0.58 | 0.61 | 0.63 |
| ENMO | 0.75 | − | 0.89 | 0.90 | 0.92 | 0.94 | 0.95 |
| BFEN (0.2--15 Hz) | 0.11 | 0.46 | − | 1.00 | 0.99 | 0.99 | 0.98 |
| HFEN (0.2 Hz) | 0.10 | 0.46 | 1.00 | − | 0.99 | 0.99 | 0.98 |
| HFEN (0.5 Hz) | 0.11 | 0.48 | 0.98 | 0.98 | − | 0.97 | 0.99 |
| HFEN~+~ (0.2 Hz) | 0.52 | 0.85 | 0.78 | 0.78 | 0.75 | − | 0.99 |
| HFEN~+~ (0.5 Hz) | 0.54 | 0.89 | 0.76 | 0.75 | 0.76 | 0.99 | − |
| *Mean (sd) acceleration \[mg\]* | 1007 (15) | 18 (16) | 46 (15) | 48 (15) | 42 (14) | 50 (21) | 45 (20) |

\[Frequencies in parentheses denote ω~0~, the cut-off for the frequency filter\].

For the modelling of PAEE, HFEN~+~ outperformed metrics ENMO, HFEN, BFEN and EN, explaining 36% of the variance in daily PAEE, see [**Table 6**](#pone-0061691-t006){ref-type="table"}. When pairs of metrics were added to the regression model, no significant additive value was found (p\>0.05, corresponding with increases in model r^2^ of less than 0.01).
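The regression modelling behind these PAEE results can be sketched as ordinary least squares on an intercept, body weight, and one metric. This is a simplified illustration; `predict_paee` uses the published HFEN~+~ (0.2 Hz) coefficients from Table 6.

```python
import numpy as np

def fit_paee_model(paee_mj, body_weight_kg, metric_mg):
    """Fit PAEE ~ intercept + body weight + metric by ordinary least
    squares, mirroring the linear regression described in the Statistics
    section. Returns [intercept, b_weight, b_metric].
    """
    X = np.column_stack([np.ones(len(paee_mj)), body_weight_kg, metric_mg])
    coef, *_ = np.linalg.lstsq(X, paee_mj, rcond=None)
    return coef

def predict_paee(body_weight_kg, hfen_plus_mg):
    """Daily PAEE (MJ/day) from the published HFEN+ (0.2 Hz) model:
    PAEE = -1.114 + 0.023 * BW + 0.025 * HFEN+ (Table 6)."""
    return -1.114 + 0.023 * body_weight_kg + 0.025 * hfen_plus_mg
```

Adding a second metric as a further column in `X` is the "additive value" test described above, which yielded no significant improvement.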
10.1371/journal.pone.0061691.t006

###### Overview of regression models for predicting PAEE (MJ day^−1^) based on N = 63 women.

![](pone.0061691.t006){#pone-0061691-t006-6}

| Model input | ω~0~ (Hz) | SE | R^2^ | Equation |
|-------------|-----------|------|------|----------|
| EN | − | 0.99 | 0.26[\*](#nt109){ref-type="table-fn"} | −56.146 + BW × 0.023 + EN × 57.093 |
| ENMO | − | 0.94 | 0.34[\*\*](#nt108){ref-type="table-fn"} | −0.172 + BW × 0.025 + ENMO × 0.057 |
| BFEN | 0.2--15 | 0.97 | 0.30[\*\*](#nt108){ref-type="table-fn"} | −0.913 + BW × 0.021 + BFEN × 0.023 |
| HFEN | 0.2 | 0.97 | 0.30[\*\*](#nt108){ref-type="table-fn"} | −0.905 + BW × 0.021 + HFEN × 0.023 |
| HFEN | 0.5 | 0.95 | 0.32[\*\*](#nt108){ref-type="table-fn"} | −0.769 + BW × 0.022 + HFEN × 0.027 |
| HFEN~+~ | 0.2 | 0.93 | 0.36[\*\*](#nt108){ref-type="table-fn"} | −1.114 + BW × 0.023 + HFEN~+~ × 0.025 |
| HFEN~+~ | 0.5 | 0.93 | 0.36[\*\*](#nt108){ref-type="table-fn"} | −0.805 + BW × 0.023 + HFEN~+~ × 0.026 |

\[SE: residual standard error; \*\*: p\<.001; \*: p\<.01; ω~0~: cut-off for frequency filter; BW = body weight (kg)\].

Discussion {#s4}
==========

The present study demonstrates that the choice of signal processing technique for summarising accelerometer data can have a substantial impact on the accuracy with which acceleration related to movement is measured. Consequently, the choice of signal processing technique impacts the summary measures of human acceleration data and the criterion-related validity for estimating daily PAEE. In the past, physical activity researchers did not have the opportunity to select a metric; the metric decision was made by the manufacturer of the accelerometer [@pone.0061691-Plasqui1], [@pone.0061691-Bonomi1], [@pone.0061691-Corder2], [@pone.0061691-Assah1], [@pone.0061691-Rothney1]. The first and main part of this paper evaluated metrics under a range of standardised kinematic conditions in order to gain insight into how the accuracy of metric output relates to the kinematics of movement.
No single metric outperformed all other metrics in all experimental conditions. Metric HFEN~+~ resulted in less measurement error than metric HFEN. This result may indicate that, in contrast to metric HFEN, HFEN~+~ manages to retrieve some of the non-gravitational acceleration in the lower frequency range and/or to remove gravitational acceleration from the frequency range above the filter threshold. Metric HFEN~+~ outperformed metrics ENMO and HFEN for the experimental conditions based on oscillation frequencies higher than the cut-off frequency used by its frequency filter (0.2 Hz), while metric ENMO outperformed metrics HFEN and HFEN~+~ for experimental conditions based on oscillation frequencies below this cut-off frequency. This difference between HFEN, HFEN~+~ and ENMO may partly be explained by the fact that metrics HFEN and HFEN~+~ aim to remove the gravitational component by making assumptions about its representation in the frequency content of an acceleration signal, while ENMO aims to remove the gravitational component based on assumptions about its magnitude. Metric HFEN~+~ can be seen as a hybrid of the two approaches, as it relies on both an assumption about the representation of gravity in the frequency domain and an assumption about the magnitude of gravity. The mutual assumption by metrics ENMO and HFEN~+~ that gravity is measured as 1 g would not hold if the acceleration sensors are not accurately calibrated, and would therefore result in biased metric output. Further, metric ENMO has one additional limitation: for a signal with an offset of 1 g (e.g. containing the gravitational component) and an amplitude of less than 1 g, taking the square will increase the amplitude. On the contrary, if the square is taken of a signal with no offset (e.g. no gravity) and an amplitude of less than 1 g, taking the square will decrease the amplitude.
Therefore, taking the square of three orthogonal signals, as in metric ENMO, results in a stronger contribution to the summary measure from vertical accelerations that alternate around 1 g than from horizontal accelerations that alternate around 0 g. The reference acceleration used for the evaluation of the metrics may not have been exactly equal to the true acceleration that the accelerometers were exposed to; imprecision in accelerometer positioning and system vibrations are possible sources of error. In theory, the acceleration of a rotating and non-translating object is proportional to its distance from the centre of rotation, the radius length. A discrepancy of 5 mm (plausible) in the assessment of accelerometer position would represent 0.6% of the radius for the accelerometer farthest away from the axis of rotation and 3.7% for the accelerometer closest to it. This would translate into a similar degree of error in the calculated reference acceleration (0.6--3.7%). Secondly, vibrations of the whole robot during operation may have caused the true acceleration exposure to be higher than what we calculated it to be. The accelerometer attached to the base of joint 5 did record acceleration beyond the sensor's noise level, likely resulting from movement of the robot system itself. We believe this robot movement was caused by the supporting frame, which vibrated in the more extreme experimental conditions; the robot itself has a high stiffness. The accelerometers attached to the bar mounted on the flange were exposed to these vibrations as well as to the accelerations intended by the experimental design. The replication of the robot analyses with simulated acceleration signals confirmed the empirical findings, indicating that environmental vibrations had no significant impact.
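The asymmetry in how ENMO weights vertical and horizontal accelerations, described above, is easy to demonstrate numerically: the same 0.1 g sinusoidal movement contributes far more when it lies on the gravity-bearing axis than when it lies on a horizontal axis (the 0.1 g amplitude and 2 Hz frequency are arbitrary illustration values).

```python
import numpy as np

def mean_enmo(acc):
    # Euclidean Norm Minus One, negative values truncated to zero.
    return float(np.mean(np.maximum(np.linalg.norm(acc, axis=1) - 1.0, 0.0)))

t = np.linspace(0.0, 10.0, 8000, endpoint=False)
a = 0.1 * np.sin(2.0 * np.pi * 2.0 * t)  # 0.1 g movement component
zeros = np.zeros_like(a)

# Movement on the gravity-bearing axis: the norm alternates around 1 g.
vertical = np.column_stack([1.0 + a, zeros, zeros])
# Same movement on a horizontal axis: the norm stays close to 1 g,
# because sqrt(1 + a^2) - 1 is approximately a^2 / 2 for small a.
horizontal = np.column_stack([np.ones_like(a), a, zeros])

enmo_vertical = mean_enmo(vertical)
enmo_horizontal = mean_enmo(horizontal)
```

In this toy case the vertical orientation contributes roughly an order of magnitude more to average ENMO output, consistent with the argument above.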
As for the analyses conducted on data collected during human daily life, the shared within-individual variances were all above 80% between metrics that make some attempt at removing the gravitational component, indicating that the pattern within an individual is picked up similarly by those metrics. The between-individual shared variances, which are a measure of the metrics' ability to rank individuals similarly, showed some differences between hip and wrist positions, most notably a lower similarity between ENMO and the frequency-filtered metrics for the hip than for the wrist. Whether this reflects differences in monitoring protocols (24-hour vs. non-sleep time), differences in signal-to-error ratio, and/or differences in the frequency characteristics of the gravitational component as measured by triaxial accelerometry at these two positions is difficult to conclude from our data. However, it should be noted that shared variances only indicate to what extent metrics are similar in describing variance on a relative level, not what the shared variance represents; it will also include any correlated measurement error and should therefore be interpreted with caution. Physical activity-related energy expenditure and body acceleration are only distally related to each other. As a consequence, differences in explained variance in daily PAEE do not serve as direct evidence for a metric's ability to remove the gravitational component. HFEN~+~ outperformed HFEN when using daily PAEE as a reference, which confirms the findings from the higher-frequency conditions in the robot experiment. Further, ENMO turned out to be a good alternative to HFEN~+~.
The correspondence between the strong performance of ENMO in explaining variance in PAEE in the current analysis and its strong performance in the lower frequency range of the robot experiment might indicate that wrist acceleration in daily life is dominated by translational accelerations and/or accelerations resulting from low-frequency rotations. A second explanation for the strong performance of metric ENMO may be its higher sensitivity to vertical accelerations (vertical acceleration is amplified), as explained above. The latter would indicate that vertical wrist accelerations are a stronger determinant of daily PAEE than accelerations in the horizontal plane. A third and final explanation could be that ENMO is more accurate at measuring translational acceleration than some of the other metrics, as the signal is never deformed by frequency filtering in ENMO. The subtraction of one in ENMO shifts all metric output by a constant, which would in theory make it perfectly correlated with EN, and EN should then correlate equally well with PAEE. However, there is one additional difference between the two metrics, namely the replacement of negative values by zero in ENMO, which explains why metric ENMO outperforms metric EN for the prediction of PAEE. This truncation of negative values to zero could be hypothesised to be an effective correction mechanism for errors in the subtraction of the gravitational component. Filter settings for HFEN and HFEN~+~ were briefly evaluated, indicating that a 0.5 Hz filter cut-off frequency may perform slightly better than a 0.2 Hz cut-off for predicting PAEE. A more thorough optimisation of filter settings could lead to further improvement, but it also introduces the risk of over-fitting filter configurations to one study population, which may not generalise to others.
One previous study investigated the need for removing the gravitational component using metabolic energy expenditure as the reference method and concluded that attempting to remove the gravitational component is not worth the effort [@pone.0061691-Bouten1]. In that particular study, body segment position and orientation over time were derived from a 2D optical system and used to simulate acceleration sensor output [@pone.0061691-Bouten1]. The validity of these simulations was only assessed for the lower back position and not for the five other simulated sensor positions, complicating the interpretation of the study results. Our own results indicate that attempting to remove the gravitational component is worth the effort for estimating daily PAEE in humans based on wrist accelerometry, as ENMO, HFEN and HFEN~+~ clearly outperformed metric EN. Additional research is needed to explore the potential of combining metrics in such a fashion that the best metric is chosen depending on the kinematic conditions. It should be noted that all PAEE-related results apply to the wrist placement and cannot be generalised to other body locations. Future research is therefore also needed to explore the importance of metric selection for other body locations, in particular the commonly used positions at the lower back and the hip.

Conclusions {#s4a}
-----------

In conclusion, none of the evaluated metrics systematically outperformed all other metrics across the wide range of standardised kinematic conditions. However, the choice of metric explains different degrees of variance in daily physical activity.

Supporting Information {#s5}
======================

###### **Additional information on signal processing and replication of robot findings with simulated data.**

(DOC)

###### Click here for additional data file.

We thank those who participated in the study. We acknowledge Unilever Ltd for the loan of GENEA monitors.
Finally, we would like to thank Antony Wright and Les Bluck (MRC Human Nutrition Unit) for their involvement in the analysis of isotopic enrichment of urine samples. [^1]: **Competing Interests:**Vincent van Hees, who led on this manuscript, was funded by a BBSRC industry-CASE studentship. This studentship came with funding from both the BBSRC and an industry partner, Unilever Discover Ltd in this case (<http://www.bbsrc.ac.uk/web/FILES/Guidelines/studentship_handbook.pdf>). Unilever Discover Ltd had no involvement in the study as presented and was only informed about progress and final results. This does not alter the authors\' adherence to all the PLOS ONE policies on sharing data and materials. [^2]: Conceived and designed the experiments: VVH LG ECDL ME MP ST UE FR PWF AH SB. Performed the experiments: VVH LG ME ECDL FR. Analyzed the data: VVH. Contributed reagents/materials/analysis tools: VVH ECDL ME. Wrote the paper: VVH SB.
{ "pile_set_name": "PubMed Central" }
{ "$schema" : "http://json-schema.org/draft-03/hyper-schema#", "id" : "http://json-schema.org/draft-03/json-ref#", "additionalItems" : {"$ref" : "#"}, "additionalProperties" : {"$ref" : "#"}, "links" : [ { "href" : "{id}", "rel" : "self" }, { "href" : "{$ref}", "rel" : "full" }, { "href" : "{$schema}", "rel" : "describedby" } ], "fragmentResolution" : "dot-delimited" }
{ "pile_set_name": "Github" }
Quantitative real-time PCR based on single copy gene sequence for detection of Actinobacillus actinomycetemcomitans and Porphyromonas gingivalis. To establish a method for quantification of Actinobacillus actinomycetemcomitans and Porphyromonas gingivalis from subgingival plaque by real-time polymerase chain reaction (PCR) technique. Bacterial cells from both species were obtained from type culture and counted microscopically. Cellular suspension in sterile distilled water was used for DNA extraction by boiling for 20 min, with a mineral oil cover. Primers for PCR were selected from sequences of LktC gene (A. actinomycetemcomitans) and Arg-gingipain (P. gingivalis) to yield amplicons below 100 bp. SYBR Green I based real-time PCR was adjusted to quantify separately both species. A good sensitivity and specificity were obtained for both species, although the yield was better for A. actinomycetemcomitans. A good repeatability of cycle threshold (CT) was encountered, so coefficient of variation was below 6% at every initial copy number. A new method of quantification of A. actinomycetemcomitans and P. gingivalis based on SYBR Green real-time PCR is presented. Its good sensibility and repeatability will allow its application to analysis of subgingival plaque samples.
{ "pile_set_name": "PubMed Abstracts" }
Cuenca, Ecuador is the most amazing place to study Spanish! It was such a welcoming city, I found I could learn, study, make friends, and practice my Spanish, all with ease. It’s a unique part of Ecuador with history so special that it has been chosen as a Unesco World Heritage Trust Site! There are so many things to see and do in Cuenca, including admiring the amazing views of the Andean Mountains which surround the city! Cuenca is also a bustling university city, so I was always sure to find both international and local students amongst its streets mingling among the locals. The city had an awesome atmosphere – a real mix of old and traditional and youthful and modern everywhere I went. I stayed in Ecuador for two and a half weeks, and it was the most interesting and exciting holiday I ever had! Although the country is quite small, when compared to other South American countries, it offers you the full range of South American landscapes: the Pacific coast, the Andes, and the jungle. And all of them are just a bus ride away from each other! If you are considering a Spanish course in Ecuador and want to stay in a really vibrant city, then go to Quito! For the first couple of days you might feel a bit lost in this busy city, but don’t worry, you will love it! The colonial architecture in the historic part of Quito is just beautiful, and there are incredible views of the volcanoes which stand all around the city. There are also so many places around Quito that are well worth seeing. For example, not far from Quito is Otavalo with its famous indigenous art and handcraft market. There you can buy beautiful traditional Indian clothes, jewelry and much much more. If you love strolling around markets as much as I do, you absolutely must go there. Not far from Otavalo is Cotacachi, known as the “leather town”, where you can buy all kind of leather handcrafts.
{ "pile_set_name": "Pile-CC" }
Previously in another post, I had created a uploader using simple HTML and PHP to upload files directly to Amazon AWS S3 server. In this tutorial, we will just transform the form into Ajax based file uploader using jQuery. Ajax makes it really easy for the user as the page doesn’t need to be reloaded and we can also show a progress bar as the user waits for the upload to finish. Whether you’re developing a custom contact form, comment or testimonial section in your WordPress site, you will need a strong anti-spam solution to protect yourself from bombardment of Spam content. Who else can give you better protection than inbuilt Akismet plugin in WordPress. New Comments About Sanwebe Welcome to sanwebe.com, a blog 100% inspired by our ever changing web development world, it's a small effort to provide useful related resources, tips and tutorials to web developers and newbies. Blog was launched back in 2011, and recently been moved from saaraan to sanwebe.com, blog needs some catching up to do, but your valuable feedbacks will always help.
{ "pile_set_name": "Pile-CC" }
Q: External Style Sheet does not work with `h1` in XAMPP I am trying to make a navigation bar in a separate file, so that I may include it with php later. The file, nav_menu.php, contains a h1 tag, a p tag, and an ul containing a tags.I made the CSS in an External Style Sheet. I styled all the elements. The h1 tag didn't work. Why? nav_menu.php: <html> <head> <link rel="stylesheet" type="text/css" href="nav_menu.css"> </head> <body> <h1 title="School Helping Program">------ S.H.P. ------</h1> <p title="S.H.P.">School Helping Program</p> <ul> <li><a href="home.php">home</a></li> <li><a href="marks.php">marks</a></li> <li><a onclick="logout()">log out</a></li> </ul> </body> </html> nav_menu.css: <style> h1{text-align:center;color:#CC0000;}/*Here's the prolbem*/ p{font-style:italic;text-align:center;} li{float:left;} ul{list-style-type:none;margin:0;padding:0;} a{ display:block; width:180px; text-align:center; background-color:#5CB8E6; text-transform:uppercase; color:#CC0000; text-decoration:none; padding:10px 135px; cursor: pointer; } A: This is because you got <style> in first line and browser threats it as <style>\n\nh1 selector.
{ "pile_set_name": "StackExchange" }
Q: RxJava - Can't create handler inside thread that has not called Looper.prepare() - API 16 Full disclosure, I'm still learning RxJava, it's a bit hard to grasp the idea if many of the tutorials available are not newbie friendly. This error happens on API 16, works fine on API 23 & above (have tested below). As you will see, I'm trying to replace Async Task with RxJava. This is my code: private void getGps() { TrackGPS gps = new TrackGPS(this); Single.fromCallable(() -> { if (gps.canGetLocation()) { mMainVariables.setLongitude(gps.getLongitude()); mMainVariables.setLatitude(gps.getLatitude()); if (mMainVariables.getLongitude() != 0.0) { Geocoder geocoder; List<Address> addresses = null; geocoder = new Geocoder(this, Locale.getDefault()); addresses = geocoder.getFromLocation(mMainVariables.getLatitude(), mMainVariables.getLongitude(), 1); // Here 1 represent max location result to returned, by documents it recommended 1 to 5 Log.e("Address:", addresses.get(0).getAddressLine(0)); mMainVariables.setAddress(addresses.get(0).getAddressLine(0)); mMainVariables.setCity(addresses.get(0).getLocality()); mMainVariables.setState(addresses.get(0).getAdminArea()); mMainVariables.setCountry(addresses.get(0).getCountryName()); mMainVariables.setPostalCode(addresses.get(0).getPostalCode()); mMainVariables.setKnownName(addresses.get(0).getFeatureName()); Log.d("Lat Long:", "Lat: " + Double.toString(mMainVariables.getLatitude()) + " Long: " + Double.toString(mMainVariables.getLongitude())); return addresses.get(0).getAddressLine(0); } else { gps.showSettingsAlert(); } } else { Toasty.error(this, "Can't locate GPS", Toast.LENGTH_SHORT, true).show(); } return ""; }).subscribeOn(Schedulers.io()) .observeOn(AndroidSchedulers.mainThread()) .subscribe((result) -> { mTxtResult.setText(result); }); } EDIT: Stack below: 05-23 20:26:58.936 3420-3420/com.example.ga.realm3 E/AndroidRuntime: FATAL EXCEPTION: main io.reactivex.exceptions.OnErrorNotImplementedException: Can't create handler 
inside thread that has not called Looper.prepare() at io.reactivex.internal.functions.Functions$OnErrorMissingConsumer.accept(Functions.java:704) at io.reactivex.internal.functions.Functions$OnErrorMissingConsumer.accept(Functions.java:701) at io.reactivex.internal.observers.ConsumerSingleObserver.onError(ConsumerSingleObserver.java:45) at io.reactivex.internal.operators.single.SingleObserveOn$ObserveOnSingleObserver.run(SingleObserveOn.java:79) at io.reactivex.android.schedulers.HandlerScheduler$ScheduledRunnable.run(HandlerScheduler.java:109) at android.os.Handler.handleCallback(Handler.java:730) A: You cannot show UI related stuff in your Scheduler thread. You are trying to show a Toast and also I presume your showSettingsAlert() is also trying show a dialog. This is against the threading policy. Very similar to Can't create handler inside thread that has not called Looper.prepare() inside AsyncTask for ProgressDialog
770 F.2d 1072
Lamey v. Heckler
No. 84-3706
United States Court of Appeals, Third Circuit.
5/6/85
W.D. Pa.
AFFIRMED
Canas snaps Federer's streak

Guillermo Canas snapped Roger Federer's 41-match winning streak on Sunday, beating him 7-5, 6-2 in the third round of the Pacific Life Open.

Written by Indo-Asian News Service

Indian Wells, California: Guillermo Canas snapped Roger Federer's 41-match winning streak on Sunday, beating the world's top player 7-5, 6-2 in the third round of the Pacific Life Open. Federer had arrived at the Indian Wells Tennis Garden having won seven consecutive tournaments and was considered an odds-on favorite to break the record of 47 straight matches won by Guillermo Vilas of Argentina 30 years ago. Canas, an Argentine who got into the tournament as a "lucky loser" from qualifying when Xavier Malisse withdrew, played more like a man who once was ranked eighth by the ATP tour. Canas went up 6-5 in the first set with a service break and held to close out the set. During the break, Federer removed his shoes and summoned the ATP Tour trainer for the first of two times to attend to an undisclosed problem with his feet. Federer stayed in the match for a while thereafter. But after Federer had a second visit from the trainer, Canas won the final three games of the set and threw open the championship of this $3.3 million tournament. Surprisingly, Federer then went out to join Swiss countryman Yves Allegro for a doubles match against David Ferrer and Tommy Robredo of Spain. Canas returned to the tour in September after serving a 15-month drug suspension. Ranked No. 8 in June 2005, he won his seventh title earlier this year and was ranked 60th when this event got under way. He lost to Alexander Waske in the final round of qualifying, though, and was on the way home until Malisse dropped out.

Sharapova beats Dechy

In a women's third-round match, top-ranked and defending champion Maria Sharapova of Russia stayed on course to repeat with a 7-5, 6-2 win over Nathalie Dechy of France.
Sharapova is playing in her first tournament since early February and is struggling to regain her form. She had four double faults and just one ace, and made 47 unforced errors to go with 24 winners. She has made 97 errors in her two wins, overcoming her miscues by playing well on key points. "I think that's to be expected," she said. "When you don't compete for that amount of time, it's normal. I definitely feel like these matches are getting the rough stuff (out of the way). "It's just good to be back on court and being in some tight situations. I miss that after a while." Sharapova had been idle since retiring from her semifinal at Tokyo with a left hamstring strain, and she said some of her service inconsistency stems from the injury. "In Tokyo I struggled with (the serve) a little bit," she said. "Part of it was probably the leg because that's how I injured it. That limited me from serving a lot in practice." Sharapova's next opponent will be No. 15 seed Vera Zvonareva, who advanced with a 6-3, 6-3 win over Victoria Azarenko on a day that the top women advanced. The men's draw, however, was full of upsets. American Mardy Fish, the 21st seed, had a 5-2 lead in the third set tiebreaker but lost the next five points and the match to Paul-Henri Mathieu of France, 7-6 (4), 4-6, 7-6 (5). American Michael Russell had better luck and toppled eleventh-seeded Tomas Berdych of the Czech Republic, 7-6 (2), 6-4. Dmitry Tursunov (20), Marat Safin (23), Dominic Hrbaty (24) and Radek Stepanek (25) also were upset victims.
{{#emitJSDoc}} /** * Allowed values for the <code>{{baseName}}</code> property. * @enum {{=<% %>=}}{<%datatype%>}<%={{ }}=%> * @readonly */ {{/emitJSDoc}} exports.{{datatypeWithEnum}} = { {{#allowableValues}} {{#enumVars}} {{#emitJSDoc}} /** * value: {{{value}}} * @const */ {{/emitJSDoc}} "{{name}}": {{{value}}}{{^-last}}, {{/-last}} {{/enumVars}} {{/allowableValues}} };
Q: NSMutableArray parsing csv not working?

I have this code where I use NSMutableArray to parse a CSV file. There are no errors that stop me from running the app; however, the map doesn't display anything.

NSString *csvFilePath = [[NSBundle mainBundle] pathForResource:@"Data2" ofType:@"csv"];
NSString *dataStr = [NSString stringWithContentsOfFile:csvFilePath encoding:NSUTF8StringEncoding error:nil];
NSMutableArray *allLinedStrings = [[NSMutableArray alloc] initWithArray:[dataStr componentsSeparatedByString:@"\r"]];
NSMutableArray *latitude = [[NSMutableArray alloc] init];
NSMutableArray *longitude = [[NSMutableArray alloc] init];
NSMutableArray *description = [[NSMutableArray alloc] init];
NSMutableArray *address = [[NSMutableArray alloc] init];
NSMutableArray *temperature = [[NSMutableArray alloc] init];
NSMutableArray *time = [[NSMutableArray alloc] init];
NSMutableArray *ambient = [[NSMutableArray alloc] init];
NSMutableArray *filteredLocations = [NSMutableArray array];
MKMapPoint *pointArr = malloc(sizeof(MKMapPoint) * filteredLocations.count);

for (int idx = 0; idx < [allLinedStrings count]; idx++) {
    NSMutableArray *infos = [[NSMutableArray alloc] initWithArray:[[allLinedStrings objectAtIndex:idx] componentsSeparatedByString:@","]];
    if ([infos count] > 1) {
        [latitude addObject:[infos objectAtIndex:4]];
        [longitude addObject:[infos objectAtIndex:5]];
        [description addObject:[infos objectAtIndex:0]];
        [address addObject:[infos objectAtIndex:10]];
        [temperature addObject:[infos objectAtIndex:6]];
        [time addObject:[infos objectAtIndex:15]];
        [ambient addObject:[infos objectAtIndex:8]];

        if ([[latitude objectAtIndex:4] isEqualToString:@"NULL"] ||
            [[longitude objectAtIndex:5] isEqualToString:@"NULL"] ||
            [[description objectAtIndex:0] isEqualToString:@"NULL"] ||
            [[address objectAtIndex:10] isEqualToString:@"NULL"] ||
            [[temperature objectAtIndex:6] isEqualToString:@"NULL"] ||
            [[time objectAtIndex:15] isEqualToString:@"NULL"] ||
            [[ambient objectAtIndex:8] isEqualToString:@"NULL"]) {
            continue;
        }

        CLLocationCoordinate2D coordinate;
        coordinate.latitude = [[latitude objectAtIndex:4] doubleValue];
        coordinate.longitude = [[longitude objectAtIndex:5] doubleValue];

        Location *annotation = [[Location alloc] initWithName:[description objectAtIndex:0]
                                                      address:[address objectAtIndex:10]
                                                  temperature:[temperature objectAtIndex:6]
                                                         time:[time objectAtIndex:15]
                                                      ambient:[ambient objectAtIndex:8]
                                                   coordinate:coordinate];
        [mapview addAnnotation:annotation];
        [filteredLocations addObject:annotation];

        MKMapPoint point = MKMapPointForCoordinate(coordinate);
        pointArr[idx] = point;
    }
}

self.routeLine = [MKPolyline polylineWithPoints:pointArr count:filteredLocations.count];
[self.mapview addOverlay:self.routeLine];
free(pointArr);

MKMapRect zoomRect = MKMapRectNull;
for (id <MKAnnotation> annotation in mapview.annotations) {
    MKMapPoint annotationPoint = MKMapPointForCoordinate(annotation.coordinate);
    MKMapRect pointRect = MKMapRectMake(annotationPoint.x, annotationPoint.y, 0.1, 0.1);
    zoomRect = MKMapRectUnion(zoomRect, pointRect);
}
[mapview setVisibleMapRect:zoomRect animated:YES];
self.mapview.delegate = self;
}

I guess there must be something wrong with how I'm calling the objects, or maybe with the MKMapPoint, but I can't find what's blocking the app from displaying the data. I've tried using initWithObjects and also removing the "if ([infos count] > 1) {" check, but the app crashed with a breakpoint at "NSMutableArray *latitude = [[NSMutableArray alloc] init];".

A: Based on your previous questions about this project, you want to do the following at a high level:

1. Parse a CSV file where each line has coordinate data. Ignore lines that have "null" data. (For the purpose of this answer, let's ignore that one could use a pre-built CSV parser, or use a different format altogether.)
2. Show annotations for lines with "good" data.
3. Connect all the annotations with a line.
For requirement 1 (R1), you already know how to load the CSV file, loop through the lines, and identify the lines with "null" data.

For requirement 2 (R2), after some research, you know that you can create and add annotations to the map one at a time, and the map doesn't need to know ahead of time how many you will add. That means the first two requirements could be done in the same loop.

For requirement 3 (R3), after some research, you know that to create and add a polyline to the map, you need to know ahead of time how many points will be in the line.

For R1 and R2, you will be looping through the lines of the CSV and identifying the non-null lines. That means you will know how many points will be in the polyline after the loop that handles R1 and R2, so the polyline must be created after that loop. But to create the polyline, you need not just the point count but the coordinates for each point as well. That means while looping through the lines in the CSV, you need to save the coordinate data somewhere (in the same order it appeared in the CSV).

In Objective-C, a convenient structure that allows you to add data to it without knowing in advance how many objects will be added is an NSMutableArray. So now we have this very high-level plan:

1. Loop through the CSV file, ignore lines with null data, create and add annotations, and add the line data to an NSMutableArray (NSMA).
2. Create a polyline using the point data in NSMA, and add the polyline to the map.

With this plan, we see we need one NSMutableArray. Notice that in the existing code, you have a Location class that holds (or could hold) all the data from each line of the CSV. That means we could simply add these Location objects to the NSMA. NSMutableArrays can hold any type of object (they don't have to be just NSStrings). So here's a slightly more detailed plan:

1. Initialize an NSMutableArray called filteredLocations (e.g. NSMutableArray *filteredLocations = [NSMutableArray array];).
2. Loop through the CSV file, ignore lines with null data, create a Location object and add it as an annotation, and add the Location object to filteredLocations (e.g. [filteredLocations addObject:annotation];).
3. Initialize (malloc) a C array to hold the points of the polyline, with the point count being the count of filteredLocations.
4. Loop through filteredLocations, adding each point from filteredLocations to the C array.
5. Create and add a polyline to the map.

In this plan, note we have two separate loops: the first one is for R1 and R2, and the second one is for R3. If required, I will post sample code that implements this plan.

First, just to explain your latest NSRangeException error, it is happening on this line:

if([[latitude objectAtIndex:4] isEqualToString:@"NULL"] || ...

because you've declared latitude as an array, and the first time the if executes in the loop, latitude only has one object (a few lines above this if you do [latitude addObject:...]). The index of an array starts at zero, so the bounds of an array with one object are zero to zero, hence the error message saying index 4 beyond bounds [0 .. 0].

There are many other issues with the rest of the code. There is not enough room in this answer to explain them in detail. I urge you, if possible, to stop, step back, and re-start with a much simpler project or tutorials and, most importantly, learn the absolute basics of programming in general.
Here is an example of code that should work based on your sample data:

-(void)viewDidLoad {
    [super viewDidLoad];
    self.mapview.delegate = self;

    NSString *csvFilePath = [[NSBundle mainBundle] pathForResource:@"Data2" ofType:@"csv"];
    NSString *dataStr = [NSString stringWithContentsOfFile:csvFilePath encoding:NSUTF8StringEncoding error:nil];
    NSArray *allLinedStrings = [dataStr componentsSeparatedByCharactersInSet:[NSCharacterSet newlineCharacterSet]];

    NSMutableArray *filteredLocations = [NSMutableArray array];

    for (int idx = 0; idx < [allLinedStrings count]; idx++) {
        NSArray *infos = [[allLinedStrings objectAtIndex:idx] componentsSeparatedByString:@","];
        if ([infos count] > 15) {
            NSString *latitude = [infos objectAtIndex:4];
            NSString *longitude = [infos objectAtIndex:5];
            NSString *description = [infos objectAtIndex:0];
            NSString *address = [infos objectAtIndex:10];
            NSString *temperature = [infos objectAtIndex:6];
            NSString *time = [infos objectAtIndex:15];
            NSString *ambient = [infos objectAtIndex:8];

            if ([latitude isEqualToString:@"NULL"] ||
                [longitude isEqualToString:@"NULL"] ||
                [description isEqualToString:@"NULL"] ||
                [address isEqualToString:@"NULL"] ||
                [temperature isEqualToString:@"NULL"] ||
                [time isEqualToString:@"NULL"] ||
                [ambient isEqualToString:@"NULL"]) {
                continue;
            }

            CLLocationCoordinate2D coordinate;
            coordinate.latitude = [latitude doubleValue];
            coordinate.longitude = [longitude doubleValue];

            Location *annotation = [[Location alloc] initWithName:description
                                                          address:address
                                                      temperature:temperature
                                                             time:time
                                                          ambient:ambient
                                                       coordinate:coordinate];
            [mapview addAnnotation:annotation];
            [filteredLocations addObject:annotation];
        }
    }

    MKMapPoint *pointArr = malloc(sizeof(MKMapPoint) * filteredLocations.count);
    for (int flIndex = 0; flIndex < filteredLocations.count; flIndex++) {
        Location *location = [filteredLocations objectAtIndex:flIndex];
        MKMapPoint point = MKMapPointForCoordinate(location.coordinate);
        pointArr[flIndex] = point;
    }

    self.routeLine = [MKPolyline polylineWithPoints:pointArr count:filteredLocations.count];
    [self.mapview addOverlay:self.routeLine];
    free(pointArr);

    [self.mapview showAnnotations:self.mapview.annotations animated:YES];
}
Q: How to insert xml text & value in html option text & value

I want to insert an option tag whose text and value are read from XML tags. Here I want to insert <option value>Text</option> like this:

<option value=[from each xml col1]>Text [from each xml col2]</option>

Here is the code:

var xml = '<row id="1_2"><col1>46.0</col1><col2>Acting Allowance</col2></row><row><col1>A1</col1><col2>Allowance for 65 years plus</col2></row>',
    xmlDoc = $.parseXML(xml),
    $xml = $(xmlDoc);

$xml.find('col2').each(function () {
    var option = $(this).text(),       // I want to take text from each col2 as the option text
        value = $(this).attr('value'); // I want to take text from each col1 as the option value
    $("#selectID").append("<option value='" + value + "'>" + option + "</option>");
});

<select id="selectID"></select>

Please let me know if you need further information. Thanks.

A: Your XML was invalid: it needs a single root node (here I simply wrapped the XML in a <root> node).

var xml = '<root><row id="1_2"><col1>46.0</col1><col2>Acting Allowance</col2></row><row><col1>A1</col1><col2>Allowance for 65 years plus</col2></row></root>',
    xmlDoc = $.parseXML(xml),
    $xml = $(xmlDoc);

$xml.find('row').each(function () {
    var row = $(this),
        option = row.find('col2').text(), // text from each col2 as the option text
        value = row.find('col1').text();  // text from each col1 as the option value
    $("#selectID").append("<option value='" + value + "'>" + option + "</option>");
});

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<select id="selectID"></select>
Since retiring from the shoe business in 2007, Vaccaro has pivoted into being a full-time, all-out advocate for change in the basketball industry he helped create. The man who outfitted coaches in swooshes and funneled millions into universities' athletic departments is now one of the NCAA's most outspoken critics, working behind the scenes to organize efforts for the antitrust suit led by former UCLA basketball star Ed O'Bannon.

Chapter 1: "How Sonny Became Sonny"
Chapter 2: "The Jordan Effect"
Chapter 3: "Kobe and the Gunslinger"
Chapter 4: "There Are Victims Here"
Chapter 5: "The LeBron Affair"

Sole Man will make its prime-time television debut on Thursday, April 16, at 9 p.m. ET on ESPN.
The main page has all kinds of strip games, including strip poker, strip blackjack and strip puzzles. The rest of the site is split into a bunch of sections: the adult section is for virtual porn, the hentai section for the oriental touch, and so on. Among the game teasers:

Two sexy girls play hard lesbian games on the screen.
Start this lesbian animation together with Rose.
Laura is the most splendid girl I have ever met.
Canis is the sexiest slutty teacher I have ever known! Quiet, please!
As a master you will dedicate this sexy boobie girl.
Have you ever been in such a place?
Megan is 21 and she is gorgeous. Take her for a date.
The Heist - best porn game for adults - A burglar gets past the security system and finds something interesting. This one will not be easy.
Fantasy - Great erotic game - A hot elf girl enters the ancient forest and is attacked by a swamp goblin immediately.
Date with Naomi and take her to your apartment.
Private Sex - Feel yourself a casanova - stick your dick in that sexy lady's pussy and ass, fuck her to death!
Tastes differ - Choose the partner you would like and have sex!
Venus Hostage - The game where the craziest sex actions take place.
Working for Evil - game for adults with awesome fucking action - Inviting wet pussies and huge boobs will make you want it more and more.
Choose one of these sexy girls and follow the instructions.
One sexy young girl tells her friend she had sex with a
You need to help a hot girl lose weight in this hentai card game.
School Girl, Claire.
The exchange student - hot lesbian adult game - Kendra's school joined the exchange program and so she has decided to let one of the other

The social network or third party may also automatically collect information like your IP address, information about your browser and device, and the address of the webpages you are visiting on the Website. The Company may use the information that it collects about you or that you provide to the Company, including any personal information:

The Company may also use your information to contact you about the Company's own and third parties' goods and services that may be of interest to you. If you do not want the Company to use your information in this way, please contact the Company at https: For more information, see What choices do you have about how the Company uses and discloses your information.

The Company may use the information it has collected from you to allow the Company to display advertisements to its advertisers' target audiences. Even though the Company does not disclose your personal information for these purposes without your consent, if you click on or otherwise interact with an advertisement, the advertiser may assume that you meet its target criteria.

Contact Us

The Company may do this by way of new products and applications that the Company introduces on one or more occasions, including new products powered by the Company. The Company may disclose aggregated information about its users, and information that does not identify any individual, without restriction. The Company may disclose personal information that it collects or you provide as described in this policy:

What choices do you have about how the Company uses and discloses your information?
The Company provides you the ability to exercise certain controls and choices regarding its collection, use, and sharing of your information. In accordance with local law, your controls and choices may include:

For personal information that the Company holds, the Company will provide you with access for any purpose, including to request that the Company correct the data if it is inaccurate, or delete the data if the Company is not required to retain it by law or for legitimate business purposes. Please note that even when you remove information, the Company will retain in its files certain data, including information used to resolve disputes, troubleshoot problems, enforce security, reduce fraud, comply with applicable law, or to enforce any agreements, policies, and rules governing your use of the Website. Removed information also may remain in backup copies or other users' caches.

The Company has implemented measures designed to secure your personal information from accidental loss and from unauthorized access, use, change, and disclosure. All information you provide to the Company is stored on its secure servers behind firewalls. The safety and security of your information also depends on you. Where the Company has given you (or where you have chosen) a password for access to certain parts of the Website, you are responsible for keeping this password confidential. The Company asks you not to share your password with anyone. In addition, the Company urges you to be careful about giving out information in public areas of the Website.
The information you share in public areas may be viewed by any user of the Website. The transmission of information via the Internet is not completely secure. Although the Company does its best to protect your personal information, the Company does not guarantee the security of your personal information transmitted to the Website or guarantee against all unauthorized disclosure, alteration, or destruction of personal information. Any transmission of personal information is at your own risk. The Company is not responsible for circumvention of any privacy settings or security measures contained on the Website.

This policy is intended to cover collection of information on or through the Website from residents of the United States. The data protection and other laws of the United States and other countries may not be as comprehensive as those in your country. Please be assured that the Company seeks to take reasonable steps to make sure that your privacy is protected. When you provide personal information to the Company through the Website, you consent to the processing of your data in, and the transfer of your data to, the United States or any other country in which the Company or its affiliates, subsidiaries, or service providers host these services.

DNT is a way for you to inform websites and services that you do not want certain information about your webpage visits collected over time and across websites or online services. The Company is committed to providing you with meaningful choices about the information it collects, and that is why the Company provides you the ability to opt out. For more information, visit www.

If you are a California resident, you may have certain additional rights under California Civil Code Section. Further, if you are a California resident and would like to opt out from the disclosure of your personal information to any third party for direct marketing purposes, please contact the Company here.
Please be advised that if you opt out from permitting your personal information to be shared, you may still receive selected offers directly from the Company in accordance with California law. The Company will only use your personal information for the purposes intended and as detailed in this policy unless the Company has obtained your consent to use it for other purposes.

Residents of Canada are notified that the personal information they provide to the Company is stored in its databases outside of Canada, including in the United States, and may be subject to disclosure to authorized law enforcement or government agencies in response to lawful demand under the laws of that country. If you need to contact the Company about your personal information or believe that the Company has violated your privacy rights, please contact the Company here.

The Website contains links to other websites. Please be aware that the Company is not responsible for the content or privacy practices of those other websites. The Company encourages its customers to be careful when they leave the Website and to read the privacy statements of any other website that collects personally identifiable information. This policy does not create rights enforceable by third parties or require disclosure of any personal information relating to users of the Website.

If you reasonably believe that your copyrighted work has been used or posted by a third party without your consent, you may follow the instructions here on how to report it.
By submitting a copyright infringement notice or other communication (including communications regarding content stored on or transmitted through the Website), you consent to have these communications forwarded to the person or entity who stored, transmitted, or linked to the content addressed by your communication, to facilitate a prompt resolution. The Company forwards DMCA infringement notices (including any personally identifying information contained in the notices) as submitted to the Company, without any deletions.

Although most changes are likely to be minor, the Company may change this policy on one or more occasions, and in its sole discretion. The Company encourages visitors to frequently check this page for any changes to this policy. Your continued use of the Website after any change in this policy will constitute your acceptance of the changes.

Have questions about RabbitsReviews? Need help with recommendations, customer support, billing, or pricing for any of the paysites we review? We are here to help! Contact our support team. Are you a webmaster and want to work with Rabbits?

Content Produced by Third Parties

The operators of this website are not the "producers" of any depictions of actual or simulated sexually explicit conduct which may appear on this website.

Content Produced by Website Operators

To the extent that any images appear on the website for which the operators of this website may be considered the "producer," those images are exempt from the requirements of 18 U.

Records Custodian

Without limiting in any way the applicability of the above-stated exemptions, the operators of this website have designated a custodian, whose address appears below, to be the keeper of original records described in 18 U.
The aforementioned records and their custodian can be found at the following location:

You acknowledge that the Company cannot and does not state that files available for downloading from the Internet or the Website will be free from loss, corruption, attack, viruses or other destructive code, interference, hacking, or other security intrusions. You are responsible for implementing sufficient procedures and checkpoints to satisfy your particular requirements for antivirus protection and accuracy of data input and output, and for keeping a means external to the Website for any reconstruction of any lost data. The Company will not be liable for any loss or damage caused by a distributed denial-of-service (DDoS) attack, viruses, or other technologically harmful material that might infect your computer equipment, computer programs, data, or other proprietary material due to your use of the Website or any services or items obtained through the Website, or due to your downloading of any material posted on the Website or on any website linked to the Website.

You acknowledge that you may be exposed to content that is inaccurate, offensive, indecent, or objectionable, and you hereby waive any legal or equitable rights or remedies you have or may have against the Company with respect to this content. The Company makes the information presented on or through the Website available for general information purposes only. The Company is not making any warranty about the accuracy or usefulness of this information. Any reliance you place on this information is strictly at your own risk. All users are encouraged to think for themselves. The Company will not be liable for any reliance placed on these materials by you or any other visitor to the Website, or by anyone who may be informed of any of its contents.
All statements or opinions expressed in these materials, and all website reviews and responses to questions and other content, other than the content provided by the Company, are solely the opinions and the responsibility of the person or entity providing the third-party materials. Third-party materials do not reflect the opinion of the Company. The Company will not be liable to you or any other person for the content or accuracy of any third-party materials.

The Company will use reasonable efforts to protect information submitted by you in connection with the Website, but you acknowledge that your submission of this information is at your sole risk, and the Company will not be liable to you for any loss relating to that information. Your use of the Website, its content, and any services or items obtained through the Website is at your own risk. The Company is not making any warranty (1) that the Website, its content, or any services or items obtained through the Website will be accurate, reliable, error-free, or uninterrupted; (2) that defects will be corrected; (3) that the Website or the server that makes it available are free of viruses or other harmful components; or (4) that the Website or any services or items obtained through the Website will otherwise meet your needs or expectations. The Company is not making any warranty, whether express, implied, statutory, or otherwise, including warranty of merchantability, title, noninfringement, privacy, security, and fitness for a particular purpose. No advice or information, whether oral or written, obtained from the Company, the Website, or elsewhere will create any warranty not expressly stated in this agreement.
Unless caused by gross negligence or intentional misconduct, the Company, its directors, officers, employees, agents, subsidiaries, affiliates, content providers, and service providers will not be liable to you for any direct, indirect, special (including so-called consequential), statutory, punitive, or exemplary damages arising out of or relating to your access or your inability to access the Website or the content. This exclusion applies regardless of theory of liability, and even if you told the Company about the possibility of these damages or the Company knew or should have known about the possibility of these damages.
Wanna be a successful musician? Stay humble and keep a few important things in mind... This last spring, I watched the band Culture Abuse play the Boise Knitting Factory opening for The Story So Far. Sometime during the set, vocalist David Kelling addressed the crowd, made up mostly of starstruck teenagers waiting to watch TSSF, with a speech unlike any I would expect to hear at a show that big. “Hey, nobody in a band is better than you!” Kelling said with a wide grin on his face, and you could tell he meant it. “Anything you see us doing on this stage, you can do, too.” His message was important and one I’d love to hear from artists at any level of success. Memories come to mind of local support bands leaving once they play (and taking all of their friends with them), or stories of touring artists demanding that fans refill their drinks, as if fans should feel lucky to be in their presence (a good friend of mine actually was on the fan end of this).

Promoters are doing you a favor by booking you

Whether you’re a headliner or local support, promoters put a lot of effort, time, and money into getting people out to your performance. Often, this is at the risk of losing money or not breaking even. I’m not saying you shouldn’t be upset with promoters who do bad jobs, but respect that they’ve gone out of their way to book you. Most promoters I know, “big time” or not, don’t make their living off of booking, especially for bands like yours and mine. If you’re a local artist, thank everyone who allows you onto their events and try to maintain a good relationship with them. Do your best to get as many people out as you can and promote yourself to the best of your ability. If you’re from out of town, be grateful that someone went out of his or her way to get you in front of people. Whether you have a draw or not, there’s still work that goes into setting a show up. A promoter often books artists on the chance that their show will be profitable, not a guarantee of it.
Everyone you play with is your equal

It’s natural to talk about other bands behind closed doors and to dislike their music or their personalities sometimes. But as a musician, a performer, and an entertainer, they're your equals. If you’re a local band, every other local you play with is part of your local scene. They're your peers whether you like it or not. It's my belief that you should try to catch at least part of each artist’s set. You, in all honesty, are a part of their draw and their audience. If you can’t do that or are leaving early for some reason, at least try to talk to them a little bit or grab some merch.

If you’re on a touring circuit, I’d argue the same actions are good, even though they're not always essential. It can be draining to watch a local band every night for two weeks in a row, especially when they often aren’t very good. And, of course, you could have been driving for eight hours and haven’t eaten since last night, so you’ll want some food before you play. But at least thank the locals for playing and for bringing people to see your sorry self. You’re not any better than them because you’re from out of town, and they showed up to help you have an audience.

Fans and show-goers are the fuel for your career

People who watch you and buy your music are the ones who keep your career going. You could be the most talented musician ever, but without people to watch you or buy your music (i.e., no demand), you’d just be playing for your stuffed animals forever. How many venues have shut down due to people not coming out to shows? How many tours have been unsuccessful because people weren’t buying merch or sticking around to watch the bands? Fans and show-goers are just as important as the artists they go to watch. So when they’re watching you, thank them for coming out and say it like you mean it. Show-goers being there isn’t something you deserve – it’s something you’re very lucky to have.
To the self-entitled musicians: do you see yet that there's no room for your ego in this game? Every player contributes and is essential for a win. Everyone offers the scene and industry something valuable and necessary to keep it going.

Rob Lanterman is a writer and musician currently living in Boise, ID. He also runs Hidden Home Records, which is the love of his life but also a gigantic money sucker.
{ "pile_set_name": "Pile-CC" }
In vitro attachment of bovine hatched blastocysts on fibronectin is mediated by integrin in a RGD dependent manner. We investigated the effect of extracellular matrix protein on in vitro attachment and outgrowth of bovine hatched blastocysts. In vitro produced bovine hatched blastocysts were cultured on fibronectin- or laminin-coated Petri dishes. Hatched blastocysts adhered and outgrew on the fibronectin-coated dish whereas no attachment was observed on the laminin-coated dish. The attachment and outgrowth on fibronectin were significantly inhibited in the presence of synthetic peptides containing the Arg-Gly-Asp (RGD) sequence, which interacts with the fibronectin receptor (integrin alpha5beta1), but were not inhibited by the control peptides containing the Arg-Gly-Glu (RGE) sequence. Addition of anti-fibronectin receptor (integrin alpha5beta1) antibody to the culture medium also inhibited the attachment and outgrowth on fibronectin-coated Petri dishes. Subsequently we examined mRNA and protein expression of the alpha5 and beta1 integrin subunits in the hatched blastocyst by reverse transcription-polymerase chain reaction (RT-PCR) and immunostaining, respectively. Expression of both mRNA and protein was detected in blastocysts. These results indicate that trophectoderm cells of bovine hatched blastocysts have already acquired the ability to adhere and outgrow on fibronectin in vitro in an integrin-mediated manner.
{ "pile_set_name": "PubMed Abstracts" }
Ischnosiphon

Ischnosiphon is a genus of plants native to Central America, South America, Trinidad and the Lesser Antilles. It was first described as a genus in 1859.

Species

References

Category:Marantaceae
Category:Zingiberales genera
{ "pile_set_name": "Wikipedia (en)" }
Not every hit Cam Newton takes warrants a flag, and the sack he took from Rams outside linebacker Mark Barron in the third quarter of Sunday’s game was brutal. The referees seemed to think it was also totally legal, and it looked that way from the broadcast. But another angle makes it look like the refs missed the call.

Here’s the hit as it was shown live on the game broadcast. According to the angle shown on television, Barron led with the shoulder and didn’t make contact with Newton’s head or neck. Viewed from another angle, it looks like Barron did hit Cam’s head.

@CarPanthersNews he hit him in the head then pushed him down?? pic.twitter.com/L9xf5vU9pb — Chris Wilson (@chris6615) November 6, 2016

It looks like Barron’s head made contact in this one too, but it’s harder to tell.

That's gonna leave a mark... pic.twitter.com/NIPwrnis9h — CAR Panthers News (@CarPanthersNews) November 6, 2016

From behind, it doesn’t look like an illegal hit at all. That’s also where the referee would have been standing. Many hits on Newton this season have been questionable, though, and the league has acknowledged some missed calls by officials. Initially, it didn’t look like this one fit into that category.

Following the Panthers’ Week 8 matchup against Arizona, Newton said he didn’t feel protected by the referees, and specifically pointed to a low hit by Cardinals defensive lineman Calais Campbell as his “breaking point.” "It's taking the fun out of the game for me," Newton said after the game. Newton later discussed the matter with NFL commissioner Roger Goodell, and Panthers head coach Ron Rivera and GM Dave Gettleman both addressed the matter with the league office. Campbell was fined $18,000 for the hit.

After the Rams game, Rivera said there was one unflagged hit on Newton that bothered him: "There's one of concern," but otherwise Rivera believes officials did a good job in Rams game.
— Bryan Strickland (@PanthersBryan) November 7, 2016 Rivera didn’t say which one it was, though. If it was the Barron hit, we could be hearing a lot more about it this week when the NFL has a chance to weigh in.
{ "pile_set_name": "OpenWebText2" }
Nearly 10,000 people have died in just over two years after being denied full sickness benefits and told to get a job, the Government admitted.

After an 18-month Freedom of Information battle with the Mirror, the Department for Work and Pensions revealed that 2,380 people died between December 2011 and February 2014 after failing controversial government tests and being found “fit to work”.

A further 7,200 people claiming Employment and Support Allowance died after being put in the “work-related activity group”, which means they get reduced benefits and are told to get a job.

TUC General Secretary Frances O’Grady said: “We urgently need an inquiry into the government’s back-to-work regime. These disturbing findings cannot be swept under the carpet.

“The fact that more than 80 people are dying each week shortly after being declared fit for work should concern us all. We need a welfare system that supports people to find decent jobs not one that causes stress and ill health.”

Find out what you can do if you're wrongly declared 'fit to work' here.

The Mirror first used Freedom of Information laws in 2012 to expose how every week 32 people were dying after being put into the work-related activity group following “work capability assessments”. The DWP insisted that the death rate for people on benefits had fallen over 10 years to 2013 “in line with the general working age population”. But the death rate of those in the “work related activity group”, who now get the same rate as the jobseekers allowance, rose in 2013 to 532 per 100,000 – more than double the general working age population.
Rob Holland, of Mencap, said: “These tragic figures warrant further investigation. We know the fit for work test is failing disabled people, with devastating consequences.”

A DWP spokesman said: “The Government continues to support millions of people on benefits with an £80billion working age welfare safety net in place.”

Earlier this week, Work Secretary Iain Duncan Smith announced that he planned changes to the work capability assessments.
{ "pile_set_name": "OpenWebText2" }
103 F.2d 765 (1939) NEIRBO CO. et al. v. BETHLEHEM SHIPBUILDING CORPORATION, Limited, et al. No. 309. Circuit Court of Appeals, Second Circuit. April 10, 1939. Robert P. Weil, of New York City (Laurence A. Tanzer, of New York City, of counsel), for appellants. William Dwight Whitney, of New York City (Cravath, deGersdorff, Swaine & Wood, and Robert D. Blasier, all of New York City, of counsel), for appellee. Before L. HAND and CLARK, Circuit Judges. CLARK, Circuit Judge. This appeal assigns error in the action of the District Court in granting the motion of Bethlehem Shipbuilding Corporation, *766 Ltd., to quash service of process upon it and in dismissing the action as to it on the ground that it was not a resident of the Southern District of New York within the requirements of the federal venue statute, Jud.Code § 51, 28 U.S.C.A. § 112. Appellants, plaintiffs below, ground their appeal on two claims: first, that appellee, the Bethlehem corporation, is a resident of the District, notwithstanding its incorporation in the State of Delaware, because of the location of its chief business and executive offices within the District and its designation of an agent to accept process there, in compliance with the conditions under which a foreign corporation is legally permitted to do business within the State of New York, and second, that such designation of an agent to accept process in connection with appellee's qualification to do business in New York is a waiver of the venue defense. In the light of the statutory language and of the well settled rule that lack of venue is a personal privilege which a defendant can waive, a reversal of the order of dismissal would become necessary if either the claim of residence in the district or that of waiver could be sustained. But whatever objections of policy may be urged against it, we feel the law to the contrary is too well established to be now overturned. 
The action was originally brought by the appellants, who are citizens and residents of New Jersey, against United Shipyards, Inc., a New York corporation of which they are stockholders, to restrain the carrying out by the latter of a contract for the sale of drydocks in the waters of New York Harbor and other property to Bethlehem Shipbuilding Corporation, Ltd. The court refused to stay the sale, but added certain other persons as parties on the plaintiffs' motion. Then the plaintiffs filed an amended and supplemental bill alleging the consummation of the sale and praying relief in respect thereof. In this bill they asked that the Bethlehem corporation be added, and they described it as "a corporation organized and existing under the laws of the State of Delaware, and * * * a citizen and resident of the State of Delaware." The court ordered that Bethlehem be added as a defendant. Upon being served with process, Bethlehem appeared specially and moved to quash the service and the Marshal's return thereof. The appeal is taken from the order granting Bethlehem's motion and dismissing the action as to it. The material provisions of Jud.Code § 51, 28 U.S.C.A. § 112, applicable to this action are as follows: "* * * no civil suit shall be brought in any district court against any person by any original process or proceeding in any other district than that whereof he is an inhabitant; but where jurisdiction is founded only on the fact that the action is between citizens of different States, suit shall be brought only in the district of the residence of either the plaintiff or the defendant." Since jurisdiction of the present action is founded on the diversity of citizenship of the parties, the latter part of this statute applies. 
It is settled, however, that except for the limitation of suit to a single district — that whereof the defendant is an inhabitant — in suits other than those based on diversity of citizenship, the requirements of the two parts of the statute are identical, and precedents as to one part are equally authoritative as to the other. In re Keasbey & Mattison Co., 160 U.S. 221, 16 S.Ct. 273, 40 L.Ed. 402. The defense of lack of venue was open to this defendant, notwithstanding the presence in the action of other defendants properly sued in the district. Camp v. Gress, 250 U. S. 308, 39 S.Ct. 478, 63 L.Ed. 997; McLean v. State of Mississippi, 5 Cir., 96 F. 2d 741, 119 A.L.R. 670, certiorari denied 59 S.Ct. 84, 83 L.Ed. ___. We shall consider successively the two claims of error urged by appellants. First. Suits by and between corporations as citizens of different states have always presented troublesome problems of jurisdiction to the federal courts. For half a century after the passage of the first judiciary act, a corporation was allowed to sue or be sued in the circuit courts only when all its members were citizens of the state which created it. Bank of United States v. Deveaux, 5 Cranch 61, 3 L.Ed. 38. But in 1844, it was held in Louisville, C. & C. R. Co. v. Letson, 2 How. 497, 11 L.Ed. 353, that for the purposes of determining federal jurisdiction a corporation was to be deemed a person or an inhabitant, and thus a citizen, of the state in which it was incorporated. Although this conclusion has been assailed as unreal, it has been consistently followed ever since, and attempts at legislative *767 change, even when made under distinguished sponsorship, have proven unsuccessful.[1] Hence on all questions of jurisdiction involving diversity of citizenship, this appellee is conclusively determined to be a citizen of the State of Delaware by reason of its incorporation there. 
It was perhaps not logically necessary that a like conclusion should be reached as to the residence of a corporation under the requirements as to venue; but such a conclusion was a natural one, in the light of the language of the Letson case and the policy involved. And it was the meaning ascribed to the residence requirement in Ex parte Schollenberger, 96 U.S. 369, 377, 24 L.Ed. 853, decided in 1877. Yet the question was not then important, for the venue statute, from the time of the original judiciary act, had provided that a defendant might be sued in a district in which he should be "found" at the time of serving the writ. Act of Sept. 24, 1789, c. 20, § 11, 1 Stat. 79; Act of Mar. 3, 1875, c. 137, 18 Stat. 470. Hence the court held that a corporation doing business within the state was to be found within it for the purposes of venue. Ex parte Schollenberger, supra. This part of the statute was, however, eliminated in 1887. Act of Mar. 3, 1887, c. 373, § 1, 24 Stat. 552, as corrected by the Act of Aug. 13, 1888, c. 866, § 1, 25 Stat. 433. From that time the statute has required residence in (or being an inhabitant of) the district to support the action. Jud.Code § 51, 28 U.S.C.A. § 112, supra. After the change in the statute it has been held uniformly by the Supreme Court and generally by the lower federal courts that residence is limited to the state of incorporation of the corporation and is not satisfied by the doing of business within the state. Shaw v. Quincy Mining Co., 145 U.S. 444, 12 S.Ct. 935, 36 L.Ed. 768; Southern Pacific Co. v. Denton, 146 U.S. 202, 13 S.Ct. 44, 36 L.Ed. 942; In re Keasbey & Mattison Co., 160 U.S. 221, 16 S.Ct. 273, 40 L.Ed. 402; Macon Grocery Co. v. Atlantic Coast Line R. Co., 215 U.S. 501, 30 S.Ct. 184, 54 L.Ed. 300; Seaboard Rice Milling Co. v. Chicago, R. I. & P. Ry. Co., 270 U.S. 363, 46 S.Ct. 247, 70 L.Ed. 633; Yanuszauckas v. Mallory S. S. Co., 2 Cir., 232 F. 132; McLean v. State of Mississippi, 5 Cir., 96 F. 
2d 741, 119 A.L.R. 670, certiorari denied, 59 S.Ct. 84, 83 L.Ed. ___; Central West Public Service Co. v. Craig, 8 Cir., 70 F. 2d 427; De Dood v. Pullman Co., 2 Cir., 57 F.2d 171, affirming D.C.E.D.N.Y., 53 F.2d 95. Among the several decisions of district courts to the same effect may be cited that of A. N. Hand, D.J., in Beech-Nut Packing Co. v. P. Lorillard Co., D.C.S.D.N.Y., 287 F. 271, in 1921, relied on by the court below in the present case. The only exception in recent years to this uniform current of decision seems to be Dodge Mfg. Co. v. Patten, 7 Cir., 60 F.2d 676, affirming D.C.Ind., 23 F.2d 852, which was based upon the decision of Mr. Justice Harlan on circuit in U. S. v. Southern Pacific R. Co., C.C.N.D.Cal., 49 F. 297. The court, however, failed to note the later contrary decisions of the Supreme Court, in several of which Mr. Justice Harlan dissented. Cf. Shaw v. Quincy Mining Co. and Macon Grocery Co. v. Atlantic Coast Line R. Co., supra.[2] Appellants, however, criticize the policy followed in these cases as applied to the modern private corporation doing business in many different places and suggest ingenious distinctions to lessen their force as precedents. But so far as the policy is concerned, Congress has shown itself as yet distinctly uninterested in a change in *768 the direction urged by appellants. Indeed, so long as the citizenship of a corporation for jurisdictional purposes is determined by the state of its incorporation, there would seem no good reason for a different view of the venue requirements. The limitation on suits against a corporation implicit in the rule as to jurisdiction would be of comparatively little effect if all plaintiffs who lived outside the state of incorporation could sue the corporation at will wherever it carried on business. The suggested grounds for distinguishing the cases find no support in the precedents themselves or in the reasons behind them. 
Thus it is asserted that in no one of the decisions of the Supreme Court was there involved a corporation which not only does business and has its headquarters within the state, but also maintains there a designated office and agent to accept process. The cases discussed below in connection with appellants' second claim show that this combination of circumstances is not enough to show even waiver of the venue defense, and therefore certainly not residence in the district. Moreover, the reasons for, and historical development of, the rule show that the claimed distinction would be simply a direct repudiation of the rule itself. See especially Shaw v. Quincy Mining Co. and In re Keasbey & Mattison Co., supra. The latter case demonstrates, too, that no sound distinction is possible based on the fact that some of the cases, such as those for patent or copyright infringement, can be brought only in the federal courts, while actions such as the present one can be brought in either the state or the federal courts. Indeed, the stricter rule might be justified, if at all, in cases such as the present where a more extensive choice of forum is possible. But no such distinction appears in the statute or is suggested by the cases. Some support is claimed from decisions of the New York courts that a foreign corporation having qualified to do business within the state is present within the state, so as not to prevent the running of the statute of limitations in its favor. Cf. Comey v. United Surety Co., 217 N.Y. 268, 273, 274, 111 N.E. 832, Ann.Cas.1917E, 424. Such a result is reasonable, since the corporation is continuously subject to suit in the state court. It is not useful in determining the entirely different problems of federal jurisdiction. Erie Railroad Co. v. Tompkins, 304 U.S. 64, 58 S.Ct. 817, 82 L.Ed. 1188, 114 A.L.R. 
1487, does not make such cases binding authorities on the present issue; whatever else that case may do, it certainly does not throw the determination of federal jurisdiction into the state courts. Second. Appellants' other claim is based upon the facts, admitted on the record, that appellee qualified to do business within the State of New York in 1918 and, as required by the law then in force, designated an agent to accept process.[3] Such agent, who is located in the Southern District of New York, is still acting under such designation. But the question whether such designation constitutes a waiver of the venue defense seems also well settled, contrary to the appellants' contention. In the case of Southern Pacific Co. v. Denton, 146 U.S. 202, 13 S.Ct. 44, 36 L.Ed. 942, decided in 1892, there is a clear statement of the court that the supposed agreement of the corporation went no further than a stipulation that process might be served upon its officers or agents, and that, while this might subject the corporation after proper service to the jurisdiction of a federal court, so long as the federal statutes allowed it to be sued in the district in which it was found, such an agreement could not, since Congress had made citizenship of the state, with residence in the district, the sole test of jurisdiction in this class of cases, "estop the corporation to set up non-compliance with that test" when sued in a federal court. The appellants attempt to distinguish this case on several grounds going to show that this statement was dictum only, unnecessary to the decision. It was, however, a direct statement on the point in question and has been accepted as authoritative. It is true that in the Denton case the Texas statute authorizing qualification of a foreign corporation to do business in the state was held unconstitutional, since it required as a condition of such qualification that the corporation relinquish its right of removal of cases to the federal courts. 
But the holding that there was no *769 waiver was made an alternative ground of decision. It is also claimed that the act had already been repealed, and that the Texas records do not disclose any actual designation of the defendant corporation. But the facts stated by the court show that the repeal came after the action was brought, and the decision was rendered on the basis, admitted on the record, that the corporation had complied with the statute. Cases following and applying the Denton case include McLean v. State of Mississippi, 5 Cir., 96 F.2d 741, 119 A.L. R. 670, certiorari denied 59 S.Ct. 84, 83 L. Ed. ___, Heine Chimney Co. v. Rust Engineering Co., 7 Cir., 12 F.2d 596, and numerous district court decisions. Beech-Nut Packing Co. v. P. Lorillard Co., supra; Gray v. Reliance Life Ins. Co., D.C. W.D.La., 24 F.Supp. 144; Standard Stoker Co. v. Lower, D.C.Md., 46 F.2d 678; Thomas Kerfoot & Co. v. United Drug Co., D.C.Del., 38 F.2d 671; Jones v. Consolidated Wagon & Machine Co., D.C.S.D. Idaho, 31 F.2d 383, appeal dismissed 280 U.S. 519, 50 S.Ct. 65, 74 L.Ed. 589; Hagstoz v. Mutual Life Ins. Co. of New York, C.C.E.D.Pa., 179 F. 569; Platt v. Massachusetts Real-Estate Co., C.C.Mass., 103 F. 705. The decision by the Fifth Circuit Court of Appeals in McLean v. State of Mississippi, supra, contains a careful analysis of the authorities and considers and disposes of the various objections offered against the ruling in the Denton case. Application for writ of certiorari in the McLean case was denied by the Supreme Court on October 10, 1938, after the decision below in the pending action. 59 S.Ct. 84, 83 L.Ed ___. Decisions and opinions to the contrary seem to be limited to the ruling of the district court, found in D.C.Ind., 23 F.2d 852 the case of Patten v. Dodge Mfg. Corp., supra — since the circuit court of appeals affirmed solely on the point of residence to which its opinion is cited above (7 Cir., 60 F.2d 676) — and some early rulings of lower courts. 
Shainwald v. Davids, D.C.N.D. Cal., 69 F. 704; and Consolidated Store-Service Co. v. Lamson Consolidated Store-Service Co., C.C.Mass., 41 F. 833, the force of which is weakened by the decision of the same judge contra in Platt v. Massachusetts Real-Estate Co., supra.[4] It is asserted, however, that the Denton case and the cases relying on it are not controlling here because they involved only a consent by the corporation through its designated agent to accept process, while the present consent is one to be sued within the state. And reliance is placed upon a recent case, decided by the Circuit Court of Appeals for the Tenth Circuit, Oklahoma Packing Co. v. Oklahoma Gas & Elec. Co., 100 F.2d 770.[5] In this case the court, while recognizing the general rule referred to above, did hold that an Oklahoma statute required a consent not merely to accept process, but to be sued in the state courts (as indeed the trial court found, 100 F.2d at pages 773, 774), and that this consent likewise operated in the federal courts to constitute a waiver of the venue defense. Notwithstanding the authority of this court, we are not convinced of the soundness of this distinction. If state statutes of the form considered in the cases discussed above do not force a waiver of the federal venue privilege, it is difficult to see what is added by this further provision. It would mean, of course, that the addition of a few words to a state statute would demolish a privilege given by federal law. Moreover, as a practical matter, state legislation requiring consent to be sued as a condition of qualification is usually an unnecessary gesture, since state courts will have jurisdiction of such suits against foreign corporations as concern the state or its citizens, if service of process can be had on the corporation. The statute therefore need only provide for service of process. 
Indeed, the added provisions of the Oklahoma statute have another reasonable explanation, i.e., the necessity that suits against corporations should satisfy local requirements of venue, since the designated agent was to be found only at the State Capitol. Hence there was coupled with the requirement of the designation of an agent upon whom service of process might be made, the further statement that action might be brought in any county. Okl.Stat.1931, § 130, 18 Okl. St.Ann. § 452; cf. Okl.Const. Art. 9, § 43, Okl.St.Ann. *770 Whatever may be the force of the distinction, however, the court did recognize the general rule that a mere designation of an agent to accept process is not a waiver of the venue privilege. Appellants' attempted construction of the New York corporation laws as including a consent to be sued comparable to that found in the Oklahoma statute cannot be sustained. The applicable statutes in New York are clear in requiring merely the designation of an agent "upon whom all process in any action or proceedings against it may be served within this state."[6] It is true that another statute authorizes actions against foreign corporations under certain circumstances, including that "where a foreign corporation is doing business within this state." N.Y.Gen.Corp.Law, § 225. But this is but a general grant of jurisdiction applying to all actions within the defined classes, whether the corporation has properly qualified to do business or not and whether it is acting legally or illegally. Since, therefore, this general grant of jurisdiction operates independently of any designation of an agent to accept process, it cannot be construed to extend the consent which the corporation has made by affirmatively complying with the requirements of law as to doing business within the state. Appellants also claim that the New York statute has been construed as one requiring the corporation's consent to be sued in the local courts, citing Smolik v. 
Philadelphia & Reading C. & I. Co., D.C. S.D.N.Y., 222 F. 148, and Bagdon v. Philadelphia & Reading C. & I. Co., 217 N.Y. 432, 111 N.E. 1075, L.R.A.1916F, 407, Ann. Cas.1918A, 389. These cases are relied on as showing the corporation's consent, by its designation of an agent to accept process, to being sued on all transitory actions, whereas without such designation, suit would lie only in the jurisdiction where it had done the business out of which the cause of action arose. But this concerns the jurisdiction of the court over the case, not the personal privilege as to the place of the suit accorded the defendant by Jud. Code § 51, 28 U.S.C.A. § 112. Jurisdiction over the causes was obtained, since personal service could be had on the defendants through their agents designated to accept process. Beech-Nut Packing Co. v. P. Lorillard Co., supra. So here, had plaintiffs been residents of the Southern District of New York, so that the venue was properly laid, service of process upon the defendant would have been had by service upon its agent. The cases do not go beyond this or affect the requirement of venue. We conclude, therefore, that under existing law appellee was entitled to claim its privilege not to be sued in the Southern District of New York. When it did so,[7] there was no other course of action open to the court below but to dismiss the action as to it. There seems little hardship involved to the plaintiffs, for appellee can be sued in the appropriate federal districts, while the entire action as now conceived by the plaintiffs can be brought in the state courts. And if a change in policy is desirable, it must be sought from Congress. It follows that the order of dismissal as to appellee must be affirmed. Affirmed. 
NOTES [1] Among proposals to limit the jurisdiction of the federal courts before the Congress in 1932 was one drafted by the then Attorney General Mitchell and sponsored by President Hoover which would not have eliminated any of the language of Jud.Code § 24 (1), 28 U.S. C.A. § 41 (1), defining the original jurisdiction of the federal courts over suits between citizens of different states, but would have added to it a provision that a foreign corporation carrying on business in a state other than the one wherein in it was organized should be treated as a citizen of the former state for suits by residents thereof arising out of the business carried on in such state. S.B. 937, 72d Cong., 1st Sess. (1932). It failed of passage. See Comment by members of the faculty of the University of Chicago Law School, Limiting Jurisdiction of Federal Courts, 31 Mich.L.Rev. 59 (1932), and Clark, Diversity of Citizenship Jurisdiction of the Federal Courts, 19 A.B.A.J. 499 (1933), for discussion of the proposal and for other relevant citations. [2] The Patten case has recently been questioned in its own circuit for this reason. Hamilton Watch Co. v. George W. Borg Co., N.D.Ill.E.D., Mar. 6, 1939, 27 F.Supp. 215, Wilkerson, D.J. [3] N.Y.Gen.Corp.Law, §§ 15, 16, Consol. Laws of 1909, c. 23. These statutes were later changed to provide for the designation of the secretary of state as the agent of the corporation upon whom process might be served, but all prior designations of particular agents were continued. N.Y.Gen.Corp.Law, §§ 210, 213. [4] Cf. also statements in Bogue v. Chicago, B. & Q. R. Co., D.C.S.D.Iowa, 193 F. 728, U. S. v. Sheridan, C.C.W.D.Ky., 119 F. 236, and O'Donnell v. Slade, D.C. M.D.Pa., 5 F.Supp. 265. [5] Certiorari was granted April 17, 1939, 59 S.Ct. 789, 83 L.Ed. ___. [6] This is the wording of the present statute, N.Y.Gen.Corp.Law, § 210, under which the secretary of state is the agent to be designated; the language of the earlier statute was similar. 
See note 3, supra. [7] The cumbersome method of claiming the privilege resorted to here — special appearance and motion to set aside service of the subpœna and the Marshal's return of service of the subpœna — may now be superseded by a simple motion to dismiss, under Rule 12(b), Federal Rules of Civil Procedure, 28 U.S.C.A. following section 723c.
{ "pile_set_name": "FreeLaw" }
Charles Frink

Charles Frink may refer to:

Charles N. Frink (1860–?), American travelling salesman, insurance executive and member of the Wisconsin State Legislature
{ "pile_set_name": "Wikipedia (en)" }
Lord Creator

Lord Creator (born Kentrick Patrick, circa 1940, San Fernando, Trinidad and Tobago) is a calypso, R&B, ska and rocksteady artist. Alongside Cuban-born Roland Alphonso, Barbadian Jackie Opel and fellow Trinidadians Lynn Taitt and Lord Brynner, Lord Creator was an important and positive "outside" influence during the early development of the Jamaican music scene.

Career

He started as a calypso singer in Trinidad under the stage name Lord Creator and recorded his first hits, "The Cockhead" and "Evening News", in Trinidad in 1958 and 1959 respectively with Fitz Vaughan Bryan's big band. Due to the success of his hit "Evening News", which was released in Trinidad on the Cook label and also in the UK on the Melodisc label, he moved to Jamaica in late 1959 to perform and record and decided to settle there. In 1962, he recorded "Independent Jamaica" with producer Vincent "Randy" Chin, which became the official song marking Jamaica's independence from the British Empire on 6 August 1962. That song was also the first record on Chris Blackwell's newly founded Island Records label in the United Kingdom (Island 001). In 1963, "Don't Stay Out Late", produced by Chin, became a hit in Jamaica. In 1964, he had a further hit with "Big Bamboo", produced by Coxsone Dodd with Tommy McCook on saxophone. After "Little Princess" in 1964, he recorded a calypso album, Jamaica Time, at Studio One. It included calypso classics like "Jamaica Farewell" and "Yellowbird", as well as a cover of Bob Dylan's "Blowin' in the Wind". His next album, Big Bamboo, was recorded at Dynamic Studios sometime after 1969, when the studio was established by Byron Lee. Carlton Lee is listed as the producer. Creator had another big hit with "Kingston Town", a tune he recorded for producer Clancy Eccles in 1970.
After that, Lord Creator virtually disappeared from the music industry, although in 1976 he still recorded "Big Pussy Sally", a no-holding-back, free-spirited song which was done on the same tape as Fay Bennett's equally lewd and light-hearted "Big Cocky Wally" for Lee 'Scratch' Perry in the Black Ark studio. Both songs were released on two separate Island Records singles in the UK, each B-side carrying a different Upsetters dub. In 1978 Creator returned to the Black Ark to re-record "Such is Life", a song he had originally recorded in 1968 at Randy's studio with producer Vincent Chin. He returned to Trinidad and Tobago after suffering two strokes. In 1989, the British band UB40 recorded a cover version of "Kingston Town", which helped to revive Lord Creator's career. He appeared in oldies shows in Jamaica, and toured Japan. He now lives in Montego Bay.

References

External links

In the Battle for Emergent Independence: Calypsos of Decolonization, by Ray Funk

Category:Trinidad and Tobago musicians
Category:Calypsonians
Category:Jamaican ska musicians
Category:Island Records artists
Category:Year of birth uncertain
Category:Living people
Category:Stroke survivors
Category:Year of birth missing (living people)
Category:People from San Fernando, Trinidad and Tobago
{ "pile_set_name": "Wikipedia (en)" }
Strategic Staffing: Phone Interviews for Candidate Screening

August 15th, 2014

Realistically speaking, you probably receive a plethora of resumes for each open position you post. Many candidates may appear promising on paper, but you can only choose one for the job. Inviting dozens of people into your office for a job interview simply isn't realistic, as it would take a huge amount of time. Many companies combat this issue by conducting phone interviews for initial candidate screenings. This is a quick and easy way to eliminate those who aren't right for the job, so you can focus all of your attention on the top contenders.

5 Tips to Conduct an Effective Phone Interview

Feeling a bit unsure about how to conduct a phone interview? Use the following five tips to get started:

Be Prepared: Take the time to thoroughly review the candidate's resume prior to the interview, so you're able to ask specific questions about their skills and experience. You'll need as much detail as you can get to make an informed decision.

Cover the Basics: A phone interview is used as an initial candidate screening tool, so be sure to ask basic questions such as salary requirements and availability to rule out those who don't meet your criteria.

Keep It Short: Aim to make the interview around 30 minutes. This gives you enough time to explain the position, ask questions, and answer any inquiries the candidate may have.

Take Notes: While you may think you'll remember all of the person's responses, there's a good chance you won't, especially if you're talking with a number of candidates. Taking notes allows you to retain the key points of the interview, so you're not left confused later on when trying to determine what each person said.

Find a Quiet Space: If you don't have your own office to conduct the call, reserve a conference room or another quiet area for the interview. Constantly having to interrupt the candidate or ask them to repeat themselves is distracting and can make you appear unprofessional.

Looking for outstanding professionals to fill open positions at your organization? Contact the Michigan staffing and recruiting experts at Malone today. We've been connecting premier employers with top talent for more than 40 years.
{ "pile_set_name": "Pile-CC" }
There was a time when school inspectors were austere characters in dark suits, sober ties, short haircuts and squeaky, highly polished shoes. They were known as HMI (Her Majesty's Inspectors of Schools) and many were former headmasters of a particularly formidable kind. After a lifetime of insisting on the highest standards in traditional disciplines, such as English grammar, mathematics and the correct spelling and pronunciation of French irregular verbs, they sought to bring academic rigour to every classroom in the land.

So it comes as something of a shock to find that in their latest incarnation they have been transmuted into the educational thought police, the shock troops of the Harman-inspired crusade to make progressive, right-on attitudes the guiding light of British education. It is like waking up to find that Captain Mainwaring has been reborn as the Wolf of Wall Street. The quest for something called British values has exposed the inspectorate's previously hidden role as cultural commissars – a kind of Stasi officially known as the innocuous-sounding Ofsted – operating in our schools.

This crusade, a modern equivalent of the medieval search for the Holy Grail, was triggered by the so-called Trojan Horse scandal in Birmingham where it was found that state schools (not faith schools) had been hijacked by Islamic extremists intent on imposing their fundamentalist agenda, such as the banning of un-Islamic subjects such as music, segregating boys and girls, promoting a medieval view of the role of women and gays, and forcing non-Muslim heads and teachers out of their jobs. Even worse for the inspectorate's credibility, some of these Trojan Horse schools had previously been rated "outstanding" by them. Ministers responded with the highly un-British notion that schools should be under a duty to promote "British values", which apparently include such uncontentious notions as tolerance, liberty and democracy.
But that is not how Ofsted see it, no doubt egged on by the witless Education Secretary Nicky Morgan, apparently appointed to cave in to the NUT-dominated Blob. It has focused its attention on mainly Church schools and has embarked on a purge of traditional attitudes. The inspectors (sorry, Stasi) appear to see it as their duty to enforce Harriet Harman's 2010 Equality Act (described by one Labour Cabinet minister as "socialism in one clause"), which outlaws discrimination on the grounds of race, gender or sexual orientation. So, in their eyes, it is not enough for a school to tolerate (a British 'value') homosexuality, it now must actively promote it. Hence the swamp into which the inspectors have blundered.

Saturday's Daily Mail, following reports in other media, detailed over two pages six schools across the country that have fallen foul of Ofsted. Several have been downgraded and placed in "special measures" – one is being forced to close its doors. Even The Guardian has got in on the act, reporting that nine-year-olds at Jewish schools have been quizzed by Ofsted inspectors about gay marriage.

As part of this Draconian (and surely very un-British) witch-hunt, a girl of 11 at a free school in Durham was asked by an inspector if she knew what it meant to be gay and whether she had any gay friends. She was also asked if she ever felt she was in the wrong body. The Durham inspection carried out last autumn also wanted to know what the school was doing about female genital mutilation. One pupil, asked about Muslims, had the temerity to link them to terrorists. Another school in the North East has been downgraded for failing to teach about the diversity within modern British society. A 10-year-old girl pupil was also asked the transgender question and others were asked if they knew what lesbians "did". The list is endless. It is not enough for heads and teachers to teach toleration of sexual and ethnic minorities.
They are now expected, on pain of disciplinary action or the closure of an offending school, to promote such causes. Imams must be invited into schools, including Christian ones, festivals from other religions must be "celebrated", not just taught about, and children not yet in their teens must be instructed in all the manifold permutations of human sexuality. Talk about the loss of innocence. No doubt there will soon be a GCSE in the horrors of FGM. Childhood is being destroyed on the orders of the State and in the pursuit of a most un-British agit-prop agenda.

No wonder that the Church of England has complained that the British values test is "dangerous and divisive" and accused Ofsted of turning into a "schoolroom security service". Some 25 years ago Margaret Thatcher's government made it illegal for local councils to promote homosexuality in schools. Well, the wheel has turned full circle. It is now pretty much illegal not to promote the LGBT agenda in schools.

As The Conservative Woman has often argued, our schools and universities are no longer places of disinterested learning and civilised, informed debate and discussion. They are now becoming the vanguard of a cultural revolution in which traditional Christian morality is being put to the sword. Harriet Harman and her legions of angry sisters should be delighted because although the Left has lost the battle over the market economy, it has scored a staggering victory in terms of cultural values. Even worse, as the Left's long march through the institutions turns into a gallop, all this is happening under a Conservative-led government. Michael Gove, who gave some impression of understanding what was at stake, has been replaced with the spineless Mrs Morgan, while our Prime Minister confines himself to banging on about his long-term economic plan and amusing himself with selfies and photo-calls with prominent members of the Davos-style global elite.
For all that, the Tories are stuck at around 33 per cent in the polls – well short of the number Dave needs to form a majority government. They should reflect on how long the "Tories" will last as a credible political force if they continue to hand the Left an unfettered hand in the classroom and the lecture theatre. Without the survival of social conservatism in Britain, the Conservative Party is doomed.
{ "pile_set_name": "Pile-CC" }
House Intelligence Chairman Adam Schiff (D-Calif.) sent a letter to House Judiciary Chairman Jerry Nadler (D-N.Y.) on Tuesday notifying him of two flash drives containing additional evidence related to the impeachment inquiry, which was obtained from indicted Giuliani associate Lev Parnas. Why it matters: As Axios' Alayna Treene reported earlier today, a public release of some or all of these materials could give Democrats new ammunition to argue that the White House must turn over more information and allow new testimony from witnesses. The big picture: The Soviet-born Parnas helped connect Rudy Giuliani to Ukrainian officials while the pair were engaged in a campaign to pressure Ukraine to investigate President Trump's political opponents. The records transmitted by Schiff include communications between Parnas and Giuliani, as well as text messages in Russian that show Parnas was in contact with key Ukrainian figures caught up in the Trump-Ukraine scandal, such as former prosecutors Viktor Shokin and Yuriy Lutsenko. Lutsenko and Shokin helped spread the unsubstantiated allegations that former Vice President Joe Biden attempted to interfere in a Ukrainian investigation into the gas company Burisma, where his son Hunter sat on the board of directors. Read the annotated evidence. Read the attachments. Read the letter from Schiff.
{ "pile_set_name": "OpenWebText2" }
Q: Color each point individually in column chart highcharts

I have a highchart as shown in the fiddle

series: [{
    data: [
        ['Actual Wishlist Requests', 6000],
        ['Actual Approved Wishlists', 3000],
        ['Actual Research Completed', 2000],
        ['Actual Interviews Scheduled', 1000],
        ['Actual Successful Interviews', 500],
        ['Actual Contracts Signed', 50]]
}, {
    data: [
        ['Target Wishlist Requests', 9305],
        ['Target Approved Wishlists', 6557],
        ['Target Research Completed', 5069],
        ['Target Interviews Scheduled', 2290],
        ['Target Successful Interviews', 686],
        ['Target Contracts Signed', 37]]
}]

http://jsfiddle.net/Sq8fq/1/

I want to color each point with an individual color. How can I do this with the existing chart?

Thanks
Abhishek

A: You can specify lots of attributes at the point level, including color. Try something like this:

series: [{
    data: [
        {y: 6000, color: 'red'},

http://jsfiddle.net/G575t/
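Since each point can be an object instead of a [name, value] pair, you can also build the colored data array programmatically rather than writing each point by hand. Below is a plain-JavaScript sketch (runnable without Highcharts, since it only constructs the configuration object) that converts the question's first series into {name, y, color} point objects; the palette values here are arbitrary placeholders, not part of the question.

```javascript
// Arbitrary palette; substitute whatever colors you want per point.
var palette = ['#7cb5ec', '#434348', '#90ed7d', '#f7a35c', '#8085e9', '#f15c80'];

// The [name, value] pairs from the question's first series.
var raw = [
  ['Actual Wishlist Requests', 6000],
  ['Actual Approved Wishlists', 3000],
  ['Actual Research Completed', 2000],
  ['Actual Interviews Scheduled', 1000],
  ['Actual Successful Interviews', 500],
  ['Actual Contracts Signed', 50]
];

// Each entry becomes {name, y, color} so the chart colors it individually.
var data = raw.map(function (pair, i) {
  return { name: pair[0], y: pair[1], color: palette[i % palette.length] };
});

// This series array would then go into the Highcharts config as usual.
var series = [{ data: data }];
```

Note that Highcharts also has a colorByPoint series option that assigns colors from the default palette automatically; building point objects as above is only needed when you want to control exactly which color each point gets.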
{ "pile_set_name": "StackExchange" }
1. Field of the Invention The present invention relates to a clothes dryer, and in particular to a method and an apparatus for detecting a residual drying time of a clothes dryer. 2. Description of the Related Art In general, a clothes dryer rotates clothes in a drum by rotating the drum and generates heat by using a heater, and low temperature-little moisture air is converted into high temperature-little moisture air while passing the heater according to rotation of a drying fan. The clothes dryer heats the clothes by making the high temperature-little moisture air flow into the drum. Herein, the high temperature-little moisture air is converted into high temperature-much moisture air by steam generated while the clothes are heated. The high temperature-much moisture air is converted into low temperature-little moisture air by being condensed by an internal condenser, and it is converted into high temperature-little moisture air while passing the heater according to the rotation of the drying fan. In more detail, the clothes dryer dries clothes in the drum by performing the clothes heating process repeatedly. In addition, when clothes drying is finished, the clothes dryer stops the operation of the heater and cools the dried clothes by operating only a motor in order to make a user take out the clothes easily. Herein, a time required for heating clothes in the drum of the clothes dryer and cooling the clothes thereafter is called a clothes drying time. In addition, in the conventional clothes dryer, in order to display a residual drying time, a drying time and a cooling time are preset, the set drying time is reduced in drying with the passage of time, and the set cooling time is reduced in cooling with the passage of time. In the meantime, detailed description about the clothes dryer was disclosed in U.S. Pat. No. 6,449,876. 
However, in the conventional clothes dryer, because a preset residual drying time is displayed regardless of the quantity of moisture contained in the clothes in the drum of the clothes dryer, there may be an error between the actual residual drying time and the displayed residual drying time, and accordingly the reliability of the clothes dryer may be lowered due to that error.
{ "pile_set_name": "USPTO Backgrounds" }
Q: Jar dependencies compile time and runtime

If, by default, when I produce a jar application the dependencies are not included, does that mean that the user should download all the dependencies of my app to use it? Why is including dependencies in the jar not the default? How can I expect users to have/download all the dependencies at the exact version needed?

A: From https://imagej.net/Uber-JAR:

Advantages:

A single JAR file is simpler to deploy.
There is no chance of mismatched versions of multiple JAR files.
It is easier to construct a Java classpath, since only a single JAR needs to be included.

Disadvantages:

Every time you need to update the version of the software, you must redeploy the entire uber-JAR (e.g., ImageJ is ~68 MB as of May 2015). If you bundle individual JAR components, you need only update those that changed. This issue is of particular relevance to Java applications deployed via Java Web Start, since it automatically downloads the latest available version of each JAR dependency; in that case, your application startup time will suffer if you use the uber-JAR.
You cannot cherry-pick only the JARs containing the functionality you need, so your application's footprint may suffer from bloat.
If downstream code relies on any of the same dependencies which are embedded in an unshaded uber-jar, you may run into trouble (e.g., NoSuchMethodError for unshaded uber-JARs) with multiple copies of those dependencies on your classpath, especially if you need to use a different version of that dependency than is bundled with the uber-JAR.

As you can see, it is important to understand how use of the uber-JAR will affect your application. In particular, Java applications will likely be better served using the individual component JARs, ideally managed using a dependency management platform such as Maven or Ivy. But for non-Java applications, the uber-JAR may be sufficient to your needs.

It basically depends on the use case. If the jar will be used in the development of other applications and its dependencies might be updated from time to time, it makes sense to use a normal jar, but if the jar is to be run/deployed, it might be better to use an uber/fat jar.
{ "pile_set_name": "StackExchange" }
Contexts are a feature that will eventually be released in React.js - however, they exist today in an undocumented form. I spent an afternoon looking into the present implementation and was frustrated by the lack of documentation (justified, as it is a changing feature). I've pieced together a few code examples that I found helpful.

In React.js a context is a set of attributes that are implicitly passed down from an element to all of its children and grandchildren. Why would you use a context rather than explicitly passing properties down to child elements? There are a few different reasons. You may be building a widget with a large child tree where child elements have the ability to drastically affect the widget's overall state. If you're not using the Flux pattern (where the parent widget listens to Stores that are affected by Action Creators invoked by the child elements), the idiomatic way to do this is to pass callbacks that affect the overall widget through props - this can be a bit awkward when you are passing a callback down several levels. Another situation where contexts are useful is where you are doing server-side rendering - in this case data comes in that is uniquely associated with the user (e.g. session information). If your elements require session information this needs to be passed down from parent to child, which gets inelegant very quickly.

Update (2/19/2015): React.withContext is deprecated as of React 0.13-alpha. You should investigate getChildContext with a wrapper component for future-facing code. Contexts themselves are not going away - they are planned for React 1.0 (https://facebook.github.io/react/blog/2014/03/28/the-road-to-1.0.html#context) and at ReactConf 2015 the React team confirmed that the context feature was staying, with some cool examples of how contexts have been used in the past.

React.withContext

React.withContext will execute a callback with a specified context dictionary.
Any rendered React element inside this callback has access to values from the context. var A = React . createClass ({ contextTypes : { name : React . PropTypes . string . isRequired , }, render : function () { return < div > My name is : { this . context . name } < /div> ; } }); React . withContext ({ 'name' : 'Jonas' }, function () { // Outputs: "My name is: Jonas" React . render ( < A /> , document . body ); }); Any element that wants to access a variable in the context must explicitly a contextTypes property. If this is not declared, it will not be defined in the elements this.context variable (and you will likely have errors!). If you specify a context for an element and that element renders its own children, those children also have access to the context (whether or not their parents specified a contextTypes property). var A = React . createClass ({ render : function () { return < B /> ; } }); var B = React . createClass ({ contextTypes : { name : React . PropTypes . string }, render : function () { return < div > My name is : { this . context . name } < /div> ; } }); React . withContext ({ 'name' : 'Jonas' }, function () { React . render ( < A /> , document . body ); }); To reduce boilerplate, it is possible to mix in the contextTypes property to an element using the mixins property on an element. var ContextMixin = { contextTypes : { name : React . PropTypes . string . isRequired }, getName : function () { return this . context . name ; } }; var A = React . createClass ({ mixins : [ ContextMixin ], render : function () { return < div > My name is { this . getName ()} < /div> ; } }); React . withContext ({ 'name' : 'Jonas' }, function () { // Outputs: "My name is: Jonas" React . render ( < A /> , document . body ); }); If you rely on a context element it is probably best to ensure that its contextTypes property is set as required. That way if you forget to specify a context React will give a warning back: var A = React . 
createClass ({ contextTypes : { name : React . PropTypes . string . isRequired }, render : function () { return < div > My name is { this . context . name } < /div> ; } }); // Warning: Required context `name` was not specified in `A`. React . render ( < A /> , document . body ); getChildContext, childContextTypes, and context Child contexts allow an element to specify a context that applies to all of its children and grandchildren. This is done through the childContextTypes and getChildContext properties. var A = React . createClass ({ childContextTypes : { name : React . PropTypes . string . isRequired }, getChildContext : function () { return { name : "Jonas" }; }, render : function () { return < B /> ; } }); var B = React . createClass ({ contextTypes : { name : React . PropTypes . string . isRequired }, render : function () { return < div > My name is : { this . context . name } < /div> ; } }); // Outputs: "My name is: Jonas" React . render ( < A /> , document . body ); Similarly to how elements must whitelist the context elements they have access to through contextTypes , elements that specify a getChildContext property must specify the context elements that are passed down. Otherwise your code will error! // This code *does NOT work* becasue of a missing property from childContextTypes var A = React . createClass ({ childContextTypes : { // fruit is not specified, and so it will not be sent to the children of A name : React . PropTypes . string . isRequired }, getChildContext : function () { return { name : "Jonas" , fruit : "Banana" }; }, render : function () { return < B /> ; } }); var B = React . createClass ({ contextTypes : { fruit : React . PropTypes . string . isRequired }, render : function () { return < div > My favorite fruit is : { this . context . fruit } < /div> ; } }); // Errors: Invariant Violation: A.getChildContext(): key "fruit" is not defined in childContextTypes. React . render ( < A /> , document . 
body ); Suppose you have multiple contexts at play in your application. Elements added to the context through withContext and getChildContext are both accessible to child elements. child elements still need to subscribe to the context elements that they want through contextTypes . var A = React . createClass ({ childContextTypes : { fruit : React . PropTypes . string . isRequired }, getChildContext : function () { return { fruit : "Banana" }; }, render : function () { return < B /> ; } }); var B = React . createClass ({ contextTypes : { name : React . PropTypes . string . isRequired , fruit : React . PropTypes . string . isRequired }, render : function () { return < div > My name is : { this . context . name } and my favorite fruit is : { this . context . fruit } < /div> ; } }); React . withContext ({ 'name' : 'Jonas' }, function () { // Outputs: "My name is: Jonas and my favorite fruit is: Banana" React . render ( < A /> , document . body ); }); Finally, the context that is applied is the closest one to the element. If you specify a key in the context through withContext and then specify an overlapping key through getChildContext , the overlapping key wins. var A = React . createClass ({ childContextTypes : { name : React . PropTypes . string . isRequired }, getChildContext : function () { return { name : "Sally" }; }, render : function () { return < B /> ; } }); var B = React . createClass ({ contextTypes : { name : React . PropTypes . string . isRequired }, render : function () { return < div > My name is : { this . context . name } < /div> ; } }); React . withContext ({ 'name' : 'Jonas' }, function () { // Outputs: "My name is: Sally" React . render ( < A /> , document . body ); }); Caveats I ran these examples through jsfiddle with React 0.12. I’ve played a bit with similar functionality in React 0.10 and it looks like this has roughly the same behavior. 
I found the React test suite really helpful in understanding the intended behavior of the context feature: specifically, the withContext tests and the getChildContext tests really helped me understand how contexts were intended to work. Finally, as contexts are an undocumented feature of React.js, caveat emptor - everything I’ve written here may change completely in an upcoming release and just because you can use them today doesn’t mean that you necessarily should. Hope you’ve found this helpful!
{ "pile_set_name": "OpenWebText2" }
Take a trip to your local supermarket and you're bound to see an entire section devoted to gluten-free products. Once the exclusive domain of people with celiac disease, the trend towards gluten-free wheat has quickly become all the rage. So, what's to account for all this? As researchers from the Mayo Clinic have recently pointed out, it may have something to do with high-tech wheat that was developed in the 1950s and the subsequent rise of "gluten sensitivity". Gluten, a protein that's found in bread and other foods, has to be avoided by people with celiac on account of their inability to properly digest it. The protein damages the lining of the small intestine, so foods like pasta, oats, and even beer have to be avoided. It's typically added to other kinds of foods to help dough rise and give baked goods their structure and texture. Concerned about the rising rates of celiac in the general public and the popularity of gluten-free food products, researchers Joseph Murray and James Everhart compiled a thorough survey to get a definitive answer. What they discovered was that about 1.8 million Americans have celiac disease, and that another 1.4 million are likely undiagnosed. And surprisingly, another 1.6 million have adopted a gluten-free diet despite having no diagnosis. In fact, their study indicated that most persons who were following a gluten-free diet did not even have a diagnosis. As CBS News points out, the burgeoning desire to avoid gluten may have something to do with the state of today's wheat and the rise of "gluten sensitivity": In the 1950s, scientists began cross-breeding wheat to make hardier, shorter and better-growing plants. It was the basis of the Green Revolution that boosted wheat harvests worldwide. Norman Borlaug, the U.S. plant scientist behind many of the innovations, won the Nobel Peace Prize for his work. But the gluten in wheat may have somehow become even more troublesome for many people, Murray said.
That also may have contributed to what is now called "gluten sensitivity." Doctors recently developed an ambiguous definition for gluten sensitivity. It's a label for people who suffer bloating and other celiac symptoms and seem to be helped by avoiding gluten, but don't actually have celiac disease. Celiac disease is diagnosed with blood testing, genetic testing, or biopsies of the small intestine. The case for gluten sensitivity was bolstered last year by a very small but often-cited Australian study. Volunteers who had symptoms were put on a gluten-free diet or a regular diet for six weeks, and they weren't told which one. Those who didn't eat gluten had fewer problems with bloating, tiredness and irregular bowel movements. Clearly, "there are patients who are gluten-sensitive," said Dr. Sheila Crowe, a San Diego-based physician on the board of the American Gastroenterological Association. What is hotly debated is how many people have the problem, she added. It's impossible to know "because the definition is nebulous," she said. Gluten-free diets are also being used as a way to lose weight, or as part of low carb and paleo diets. There has also been increasing concern that all people have some kind of gluten sensitivity. And as the CBS article indicates, this is quickly turning into big business, with an estimated $7 billion being spent on gluten-free foods this year. You can hit the entire study at The American Journal of Gastroenterology. Image Dejan Stanisavljevic/Shutterstock.com.
{ "pile_set_name": "OpenWebText2" }
Q: Compile error: Argument not optional vba excel

I'm trying to write code that inserts an image after checking the info in each sheet of my workbook. Since I added For Each, the code stopped working and started giving me this compile error message. The code works without the For Each, but I want it to be automatic. Can you help?

Sub ForEachWs()
    Dim ws As Worksheet
    For Each ws In ActiveWorkbook.Worksheets
        Call Worksheet_SelectionChange
    Next ws
End Sub

Sub Worksheet_SelectionChange(ByVal Target As Range)
    On Error Resume Next
    If Target.Column = 2 And Target.Row = 1 Then 'where to click to fetch the image
        BuscarImagemTavares (Target.Value)
    End If
End Sub

Sub BuscarImagemTavares(Produto As String)
    On Error Resume Next
    'Autor: Tavares
    If Range("B2") = "ok" Then 'Checks whether cell B2 says ok; if so, don't insert the image again
        Exit Sub
    End If
    Dim Imagem, CaminhoImagem As String
    If Len(Produto) = 3 Then 'prepend 00 to the product code
        Produto = "00" & Produto
    End If
    If Len(Produto) = 4 Then 'prepend 0 to the product code
        Produto = "0" & Produto
    End If
    Imagem = Dir("\\Clfssrvfar\ENGENHARIA\GESTAO_DE_PROJETOS\04. FOLLOWUP\09. ARQUIVOS PARA FERRAMENTAS\09.1 IMAGENS\09.1.2 IMAGENS PRODUTOS\" & Produto & "*", vbDirectory)
    CaminhoImagem = "\\Clfssrvfar\ENGENHARIA\GESTAO_DE_PROJETOS\04. FOLLOWUP\09. ARQUIVOS PARA FERRAMENTAS\09.1 IMAGENS\09.1.2 IMAGENS PRODUTOS\" & Imagem
    With ActiveSheet.Pictures.Insert(CaminhoImagem) 'Show the image
        'Set the image size and position
        With .ShapeRange
            .Width = 75
            .Height = 115
            .Top = 7
            .Left = 715
            '*above it's me trying to make white background transparent*
            'With .PictureFormat
            '    .TransparentBackground = True
            '    .TransparencyColor = RGB(255, 0, 0)
            'End With
            '.Fill.Visible = True
            'End With
            'ActiveSheet.Shapes.Range(Array("Picture 2")).Select
            'Application.CommandBars("Format Object").Visible = False
        End With
    End With
    If CaminhoImagem <> "" Then 'After inserting the image, write "OK" in B2 so it isn't inserted again
        Range("B2").Select
        ActiveCell.FormulaR1C1 = "OK"
    End If
End Sub

A: Since you want to run the sub BuscarImagemTavares for every worksheet you have, you have to alter both subs, ForEachWs and BuscarImagemTavares. The compile error itself comes from Call Worksheet_SelectionChange: that sub declares a required parameter (ByVal Target As Range), and calling it without an argument raises "Argument not optional". You can sidestep the problem by calling BuscarImagemTavares directly.

ForEachWs:

Sub ForEachWs()
    Dim ws As Worksheet
    For Each ws In ActiveWorkbook.Worksheets
        'Here you can call the sub directly, without the sub Worksheet_SelectionChange.
        'In BuscarImagemTavares you'll need the ws reference to actually work on the
        'right worksheet (otherwise you'll always work on the selected one).
        Call BuscarImagemTavares(ws, ws.Cells(1, 2).Value)
    Next ws
End Sub

BuscarImagemTavares:

Sub BuscarImagemTavares(ByVal ws As Worksheet, Produto As String) 'Mind the additional parameter 'ws'
    On Error Resume Next
    'Autor: Tavares
    'If Range("B2") = "ok" Then
    If ws.Range("B2") = "ok" Then 'Here you have to qualify the range with the worksheet you want to use; otherwise the active sheet is always used
        Exit Sub
    End If
    ...
    'You need the reference here as well, so you won't use the same worksheet over and over again
    With ws.Pictures.Insert(CaminhoImagem) 'Show the image
    ...
    If CaminhoImagem <> "" Then 'After inserting the image, write "OK" in B2 so it isn't inserted again
        'Range("B2").Select
        'ActiveCell.FormulaR1C1 = "OK"
        'If you don't actually need the cell to be selected after the program finishes,
        'you shouldn't use '.Select' and 'Selection'; instead use this:
        ws.Range("B2").Value = "OK" 'Since you aren't adding a formula, address the '.Value' property
    End If
    ...
End Sub

Hope I could help you a bit.
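For reference, the error can be reproduced in isolation. This is a minimal sketch; Greet, Broken, Fixed, and nome are purely illustrative names, not part of the original code:

```vba
'A Sub with a required parameter:
Sub Greet(ByVal nome As String)   'nome is required (no Optional keyword)
    MsgBox "Hello, " & nome
End Sub

Sub Broken()
    Call Greet                    '=> Compile error: Argument not optional
End Sub

Sub Fixed()
    Call Greet("Tavares")         'pass the required argument and it compiles
End Sub

'Alternatively, declare the parameter Optional with a default value,
'after which a bare Call Greet also compiles:
'Sub Greet(Optional ByVal nome As String = "world")
```

The same rule is what bites the question's code: Worksheet_SelectionChange requires a Target, so it cannot be called bare from ForEachWs.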
Viva celebrates Bahraini culture with Lewan Ramadan

Manama, July 15, 2013

Viva Bahrain, through its corporate social responsibility arm Jusoor, has brought to life Bahraini culture and traditions during the holy month with its sponsorship of Lewan Ramadan at Bahrain City Centre. The stand, located in the main galleria, will host various family oriented activities that will connect everyone to the essence of Ramadan. In partnership with the Ministry of Social Development and Bahrain City Centre, the miniature cultural village, Lewan Ramadan, will keep family values alive through a festive schedule of themed interactive activities and cultural entertainment for families and individuals to enjoy. "We are proud of our partnership with the Ministry of Social Development and Bahrain City Center to bring the festivities of the holy month to all the people in Bahrain," said Ulaiyan Al Wetaid, Viva Bahrain CEO. "Through our CSR arm, Jusoor, we aim to reiterate our commitment to giving back to the community through an array of cultural activities and initiatives that will benefit everyone. Last year's event was a huge success, and we look forward to repeating the same this year." – TradeArabia News Service
"Previously, on Top Chef Masters..." "God help me." "Seven of the most acclaimed chefs in America put their reputations on the line in one culinary clash of the titans." "Hello, chefs." "Maroon 5 asked the chefs to rock out a family-style meal on a tour bus." "Why are you going to the spice thing, too?" "I'm going to the spice thing." "Ugh!" "Traci scored her second win of the day with her Japanese-inspired steak." "Traci's steak was really well executed." "But Alex overextended himself, taking on too many dishes." "The enchilada just had a bizarre texture." "Please return to the tour bus and pack your knives." "Six chefs remain." "But only one can emerge victorious and win the grand prize of $100,000 for their charity, furnished by Kitchenaid, and be crowned as the winner of Top Chef Masters." "♪ Top Chef Masters 3x07 ♪ Date Night Original Air Date on May 18, 2011" "== sync, corrected by elderman ==" "Hello, everyone." "Hi." " Oh, what are we doing?" " Yeah." "Oh, gosh." "A nose thing?" "We walk into the Top Chef Masters kitchen, and there's a bunch of stuff on our stations-- headphones, nose plugs, and..." "Lovely, a blindfold." "As chefs, we use our senses all the time." "Well, today we're gonna see just how well tuned your senses are." "Here we go." "On the station in front of you, is a nose plug, blindfold, and headphones." "You'll use these to identify five ingredients by using just one sense." "The chef that identifies the least amount of ingredients in each round is out of the game." "The last chef standing will receive $5,000 for their charity." "But unfortunately, chefs, immunity is no longer on the table." "I think we're all terrified." "There's no more immunity." "It's more competitive." "People are gonna fall off, and we're gonna get to be a very small group." "It's getting down to the wire." "The first round is taste-- a test of your palate." "Please put on your nose plug and blindfold, and get ready to put on your headphones." 
"I feel so sexy right now." "I am claustrophobic." "I have a balance issue." "And with my ears closed, my nose closed, and my eyes closed, I am not gonna be able to do this." "Time will start when the waiters put your trays down." "You'll have one minute to taste each of the five ingredients in front of you." "You can now put your headphones on." "My concern is that I'm gonna put something in my mouth that's just gonna provoke a gag reflex, 'cause I'm not looking at it." "I'm not really tasting these things," "I'm trying to make a guess from how they feel, because the taste, without my sense of smell, is just not happening." "Time's up, chefs." "Ugh!" "I'm wearing it." "My ears kept jamming up every time I chewed." "Please write down what you think you just tasted-- five ingredients." "Not only having to identify these in a very disoriented state, but to have to remember them and write them down, that's the part I'm stumbling over." "Are you ready to find out what you just tasted?" " Yes." " Ugh!" "So first up, we had water chestnuts." "Who got that right?" "Floyd, you missed it." "What did you think it was?" " I thought it was jicama." " Very similar texture." "Worcestershire sauce." "Traci, you took a bath in it, obviously." "I did, unhappily." "Cashews." " Cashews, not walnuts." " Papaya." "I wrote down papaya, and I changed it to tomato." "I don't know why." "Our last ingredient, mustard green." "Oh, no!" "Nobody." "What did you all think it was?" " Basil." " Parsley." "I get one out of five." "And I'm really disappointed." "Who got none?" "Floyd!" "Unfortunately, you're out of this competition." "You'll have to come over and sit this out in the wine room." "I'm so embarrassed." "I think that it was the disorientation that really threw me for a loop." "Next, we're gonna be testing your sense of smell." "I'm gonna give you five ingredients, and you're gonna have 60 seconds to smell those ingredients." "Put on your headphones, and your time starts now." 
"I can tell a lot using just my sense of smell." "I can tell when the wine and the alcohol is burned off of a reduction," "I just use it more than any other of my other senses." "Time's up, chefs." " Done." " Okay." "This is what you just smelled." "The first ingredient was epoisses cheese." "Did anybody get that correct?" "Hot sauce." "Ooh, I put capsicum." " Root beer." " Ugh!" "Root beer?" "Rice vinegar." "Hugh?" "I just smelled things I want." "I wrote coffee." " Mayonnaise." " Oh." "I didn't get it." "I have had many articles written about my love for mayonnaise." "So I'm disappointed in myself that I didn't recognize the smell." " I got nothing right." " Really?" "Traci, unfortunately, you're the only one that didn't get one right, so I'm gonna have to ask you to join Floyd in the wine room." " Floyd." " Welcome to my world." "Our next round, we're gonna test your sense of touch." "And we will eliminate two chefs that get the least amount of ingredients right." "Put your headphones on now." "As a proud father of a six- and an eight-year-old," "I know the features of your standard gummy bear." "Time's up, chefs!" "Please write down what you think you just felt." "Let's take a look." " Oh, ." " Our first ingredient, is chayote." "Also known as mirliton, right?" "That's right, so we'll accept that." "Arborio rice." " All right." " Blackberries-- all four of you." "Gummy bears." "Do you eat those down under?" "Okra." "Who got them all right?" "Okay, Celina and Naomi, you've tied for the least correct answers." "So I'm sorry, but you're both eliminated." "Come hither." "Hugh, Mary Sue, congratulations." "You're in the final round." "And one of you will win" "$5,000 for your charity." "And it all comes down to your final sense..." "Sound." "I grew up in a household where my mom was very hard of hearing." "I just never developed a really keen sense of hearing because of loud people talking so that my mom could hear." 
"So what do you think your sense of sound's like?" "I hear compliments, I don't hear complaints." "The first person to identify three ingredients correctly wins." "You can just go ahead and shout out the answer." "If you get that correct, then that's one point to you." "If you get it incorrect, then the other person gets a free guess." "We clear?" "Yep." "Ingredient number one." "Any guesses?" "Vinegar and soda?" "Okay, Hugh, you now get a free guess." "Tapioca?" "I'm shocked that they didn't get it." "I felt like if I would've been listening, rice krispies would've been a quick and easy one." "Neither of you got it correct, unfortunately." "It's rice krispies and milk." "Ah." "Ingredient number two." "Are you ready?" " Celery." " Celery!" "Hugh, I think you got in there just a hair in front of Mary Sue." "I wanted to see the photo finish on that one." " Celery." " Celery." "That's one point to Hugh." "Are you ready for your next ingredient?" "Yes." " Carrot." " Mary Sue?" " Celery." " You're both incorrect." "That was the crunch of a potato chip." " Really?" " It was a big crunch." "Evidently, large Australian males eat potato chips in a different way than Americans do." "Ice." "We haven't started yet." "Oh!" "Sorry." "Okay, the question is, here what am I doing?" "Shucking an oyster." " Oh." " Good work, Hugh." "Nice." "So that's two points to Hugh, which means if you get the next one right, you win the challenge and the $5,000 for your charity." "So the question is, what am I doing?" "Buttering toast." "Hugh, congratulations." "Well done." "Very good." "A quickfire like this is definitely challenging." "And it's fun to really look at how interconnected all of our senses really are-- mostly when it comes to food." "Well done, Hugh, that's $5,000 for your charity," "Wholesome Wave, furnished by Lexus." "That's awesome." "I feel beaten senseless." "It was love at first sight." "This is poignant and makes me throw up in my mouth." 
"For your next elimination challenge, we're gonna examine the relationship between food and another one of life's essentials-- love." "Many milestones in relationships revolve around food, from the first date to the wedding night." "To help us get romantic," "Top Chef Masters is gonna have a date night." "And to find out more, please welcome Chris." "How's it going, Chris?" "Good, how are you?" "Good to see you, buddy." " Hi, Chris." " Hi, Chris." "Chris, welcome to Top Chef Masters." "Tell us all about your loved one." "I've been dating my girlfriend, Victoria, for almost four years now." "And I can truly say it's been the best four years of my life." "She doesn't see this coming." "She doesn't think I'm ever going to do this, but..." "I am going to propose to my girlfriend." "Congratulations." "That's awesome." "You'll all be creating a six-course meal, each of you responsible for one course." "Your dish will be inspired by a seminal moment in Chris and Victoria's history." "The meal will be served tomorrow at date night for 21 couples, including the critics." "But Victoria has no idea that the biggest surprise of her life is coming at the end of the meal." "It's really sweet to see he's obviously so in love." "And it certainly is a huge public declaration." "Sweet." "I don't know what I would do if someone was paying that kind of attention to me." "To help you menu-plan, Chris is here to tell you more about his relationship with Victoria." "Take a seat in the wine room, and I'll see you for dinner tomorrow night with the critics-- good luck." " Hey, Chris." " How are you?" " How are you?" " Good." "Very well." "Well, thank you all so much, first of all." "This means a lot to me, and it's gonna mean a lot to Victoria." "Just to give you kind of, like, a visual idea of Victoria and I," "I'm gonna pass around these photos." "This is a big P.D.A. moment." "I'm not really big on P.D.A." "But that's okay." "I'm happy to cook." 
"We were friends until finally there was that moment of the first kiss." "Time stopped." "We knew at that moment, like, this is gonna develop into something much bigger." "My first kiss with my wife was amazing." "We had been friends for close to eight or ten years, but never dated." "We had gone on a trip with some friends, and we happened to be alone, and that's how the kiss happened." "That's when I realized I wanted to marry her." "As far as a favorite moment goes, one day we were walking down the street and saw a marquee that said, "Paris, je t'aime."" "She said "je t'aime" means "I love you."" "My first gift I ever got for her was a bracelet that said, "je t'aime."" "This is poignant..." "And makes me throw up in my mouth." "We go to sporting events every now and then, and there's kind of, like, tradition that we have to get a beer and pretzel." "It's small, but it's something that we really look forward to." "And she really tries to get me to be adventurous." "She was the first person to introduce me to Sushi." "And she actually made salmon, and she told me it was chicken." " Okay." " Once I realized it was salmon," "I was like, "ooh, your trickery."" "I don't know how she could've fooled him into thinking that salmon was chicken." "What about shellfish?" "We never had it." "Never?" "Really?" "And you think you're ready to get married, and you've never had clams or mussels?" "A lot of times, the ring is presented with dessert." "Do you guys have a favorite?" "For my birthday, she got me a red velvecake." "I got her an apple pie before, which we really enjoy." "She loves to plan these surprises, which is great, because now I'm about to plan the biggest surprise of her life." "This is fun." "Chefs and restaurants in general are always huge parts of people's lives, and they're milestone occasions." "Thank you." "Thank you so much." "Thanks, everybody." "Thank you." "It's an honor to be a part of it." "Okay, we have to do something French." "Can I take chicken?" 
"And I'm gonna do something French with it." "Yeah, that's fine." " I'll take dessert." " Do "je t'aime" on the plates." "All right." "I'm gonna take watermelon and make it look like tuna." "It's a surprise." "That's kind of cool." "She surprised him that one time." " Yeah." " Salmon--that's fun." "The last challenge was really eye-opening for me." "That was the first time I was in the bottom of any challenge, so I knew right then that I needed to go back to my roots." "And I'm gonna make things exciting as hell." "It's got to have a lot of flavor, with all the textures I can bring in." "You could do onion rings that look like bracelets." "And his first gift to her was a bracelet." "That's nice." "Don't say I never did anything for you, buddy." "Yeah, I know." " All right." " Time to shop." "I rush inside and go directly up to the meat department and see what they've got." "Give me 11 of those, about that size." "Chris and Victoria are not the most culinary people I've ever heard." "So my dish is gonna appeal to them because they like beef." "They like broccoli." "And then the onion ring is the real keepsake of this." "I think it really brings it back to the bracelet." "Now, the bracelet that he gave her was not edible." "But this one is." "50 chicken thighs." " All right." " Okay." "So I'll be back." "Thanks." "All right." "I want to do this dish that's braised chicken thigh." "But what if leaving chicken on the bone is a mistake?" "What if people are grossed out by that?" "There's things that can go wrong, and a lot of it has to do with the guests' expectation about what makes up a romantic meal." "Will you shout back at him and see if my thighs are ready yet?" "15 minutes!" "Do you guys have dried porcinis?" "Some black mussels?" "As fresh as you've got." "I am doing an apple galette for dessert." "And I'm also gonna do a little red velvet cupcake." "I'm definitely gonna make beer and soft pretzels." "I'm a little nervous." "Dessert's not my forte." 
"Everybody else has done dessert already, and I don't think it's anyone's great comfort zone." "And I sort of feel like it's-- you know, it's my turn." "Let's just hope that that's not the kiss of death for the chef." " We're in." " You got all your stuff?" " That's everything." " Press the magic button." "Look at that." "We get into our Lexus RX and drive to Top Chef Masters kitchen to start cooking." "We have two hours to prep tonight, and I have a lot to get done." "I want to get my mussels and clams scrubbed down." "I have to take the beards off of all the mussels." "You don't want to eat it." "It doesn't taste good." "And for Chris, who's never had mussels, it could ruin his idea of how delicious a mussel could be." "We have romantic dinners all the time." "I've got a list of things that definitely need to get done today." "I primarily need to get stock started." "I'm gonna be braising my chicken thigh." "I just want it to be, like, a powerful chicken flavor." "It's not always the prettiest, but a lot of heart goes into the food that I make." "Did somebody take all the carrots?" "The dish I'm making is kama sutra black pepper shrimp with watermelon, lime, and mint." "And I call it "kama sutra shrimp,"" "because I have two shrimp hugging each other." "It used to be a very popular dish at my restaurant." "And on date night, I think everybody's hoping that's where it's gonna go." "I think I'm a big romantic." "I love candlelight dinners." "When I proposed to my wife, we were out having dinner, and it wasn't even planned." "I think if you don't have romance in your life, what's the point?" "Our first wedding anniversary, we went to a steak house in New York." "And we paid through our noses for that meal." "And that was a point in our life where we couldn't afford very much." "So every time we eat steak at home, we always remember that one moment." "I got engaged on Valentine's day in a French restaurant." "I met my wife when I was 11." 
"I don't think cooking for her at that point in time was in the cards." ""Here's a peanut butter and jelly sandwich."" "Not tres romantic, unless you use the heart-shaped cutter." " I didn't get proposed to." " Neither did I!" "It was a conversation." "How long were you living together before you got married?" "Got married on our sixth anniversary of him moving in." "I think we were together about 15 years." "See, at that point, really, is he gonna surprise you by proposing?" "I mean..." "Yeah, right." "I met my husband when we were designing city restaurant." "I'd heard about him for years, because my business partner, Susan, had been married to him." "Every time I had boyfriend trouble, she would say, "oh, I wish you could meet my ex-husband, Josh." "He'd be so perfect for you."" "And I always thought, "yeah, sure, right."" "When he came out to talk to us about designing our new restaurant, it was love at first sight." "And that was 27 years ago." " 25 minutes." " Oy." "We're kind of all a little bit unsure of how adventurous his palate is." "So I think this is a very moderate interpretation of beef and broccoli." "There's definite pressure to make Chris and Victoria happy." "But I think that I need to be cooking food that I'm comfortable with." "You can take a risk and really come out on top, but you can take a risk and fall pretty far." "That one doesn't look very pretzel-like." "Celina's making pretzels." "Pretzel is something you eat on the street of New York." "It's hard to pull into a fine-dining experience." "Ten minutes, everybody!" "Uh..." "Fyi, everyone..." "This scale is inaccurate." "Traci has trained in French kitchens." "And she's elected to do pastry, which is a risky proposition for any chef that's not a pastry chef." "The scale was off by, like, three ounces." "That's a lot." "And then realizing that the scale is broken and that she has to start again, she could be making a pastry that would get her sent home." "Ugh." 
"I had trouble with one of the scales." "I'm not having a great cook." "With pastry, it's all about having to measure out all the ingredients." "And I'm having trouble with the equipment." "I've spent a lot of time doing something that I'm probably gonna throw in the garbage can tomorrow." "And so I'm just kind of freaked out." "35 seconds." "I'm not sure I'm gonna be able to pull it off." " Time's up." "Time's up." " Okay, let's go." "I had a bad day." " What are you doing?" " I'm going!" "I just snapped." "There's magic happening right now." "This is gonna be the most awkward moment in television history." "Our elimination challenge is to create a six-course menu-- each of us responsible for one dish-- for one special couple in particular," "Chris and Victoria." "And Chris is gonna ask for Victoria's hand in marriage at this special dinner." "Love is in the air for them." "And I'm happy to be a part of it." "We're down to six." "The competition is heating up dramatically." "I think everyone is feeling this sense of pressure." "I've made the watermelon look like tuna." "Served with a black pepper shrimp." "So you get sweet, sour, spicy." "That's kind of the food I like to do." "Chris seems very in love with Victoria, and he's been waiting for this opportunity for a long, long time, and I hope that she says yes." "It's my kama sutra shrimp." "Head to tail, tail to head, where would you see that?" "You making a pie?" " Uh, yeah, galettes." " Cool." "I need to make 43 apple galettes." "Yesterday the scale wasn't working." "And working with dough is not, you know, one of my strongest suits." "So it's taking me a lot longer than I would have thought." "I'm not doing the velvet cakes." "Okay." "Oh, darn it." "Darn it." "Did you cut yourself?" "Badly?" "Just..." " Oh!" " Right off the bat, the first thing I do is cut off the tip of my thumb." "I'm really just irritated." "Mary Sue, are you okay?" " I'll be fine." 
" I look over, and I see" "Mary Sue is throwing the top of her thumb into the garbage can." "Wait for a minute, okay?" "I haven't cut myself like that in decades, so it's really annoying." "I have to stop and tape it up..." "It's gonna hurt." "So that slows me down a little bit." "Thank you." "Chefs..." "I've got something to tell you." "You're gonna love it." " Great." " Are you sure?" "And I immediately say, "uh-oh."" ""What is it?"" "In the interest of all this romance that's going on tonight, what we thought we'd do is bring Chris and Victoria's mums in." "And they're gonna secretly be watching what's going on out at the table in the wine room on the big screen." " Oh, that's lovely." " That's pretty cool." "And then after the big moment, we're gonna send them out." "Cool." " All right?" " That's great." "Thanks." " Good luck." " There's always a twist." "But I'm pretty comfortable with this one." " Hi." " Hello." "Hi." "How are you?" " Hello." " Hi." "Everybody's very excited that they get to kind of have this bird's-eye view into this momentous occasion." "15 minutes left!" "Isn't this wonderful?" "Look at this." "Yum." "Oh, it's beautiful." "Thank you." "I'm so hungry." "You know what I find is the most romantic thing in a relationship?" "Is making each other laugh." "And your husband's got a wicked sense of humor." "It's the only reason I feel comfortable being on a date with you tonight." "Do you ever eat any particular foods to get in a romantic mood?" "No, I just try not to drink too much so that I'll stay in the romantic mood." "Like this?" "Just like that?" "Yeah." " Okay, all the shrimp is down." " Beautiful." "I'm excited for the surprise." "We've actually got the mums of both." "They're in the wine room right now watching the table, so..." "Look at how beautiful she is." "We only met Chris." "So we haven't actually met Victoria yet." "Forget everything else." "Forget the world that goes on around us." 
"Like, let's just you and I enjoy a nice dinner." "I absolutely agree--that's why all first dates always start with something food-related." "Thank you very much." " Oh, wow." "Already." " So this is Floyd's dish." "He's called it a kama sutra black pepper shrimp with watermelon, lime, and mint." "Look how the shrimp are entwined." "It's like spooning." "I almost hated to tear them apart." "This has really got some heat to it." "I'm already starting to feel more romantic toward you, Gael." "I'm on guard." " It's aggressively spicy." " It is." "I can see a lot of people going for glasses of water and glasses of wine." "Yeah, that's true." "Well, the wine can't hurt." "That's right." " That's a good thing." " It's date night." "Loosen everybody up a little bit." "It's so good." "There's magic happening right now." "After my first course goes out," "I got to help Celina, because she's next." "I said, "Celina, you tell me what you need, and I'll do it for you,"" "because we don't need too many chiefs." "We need some Indians." "Oh, lunch." "Celina's plate looks interesting." "You know, it may be a little disjointed, but the pretzels look awesome." "I hope she gets it, but doesn't realize it." "The next course we've got is cooked by Celina-- a salad with roasted cauliflower, and she's serving it with a soft pretzel." "I love that Celina made her own pretzels." "That was one of the things that our couple really likes, right?" "That's right." "I can't stand the anticipation--I'm like..." "Just knowing that he knows." "Oh, my gosh." "A salad and a pretzel?" "I'm in heaven." "It's, like, my two favorite things in the world." "It seems like there's only one thing missing here, though." "Beer and a hockey game." "I concur." "He's doing a job like a professional." "He ought to be in acting." "Celina's dish is more like junior-high romance." "Floyd's dish was like full-on college romance." " Mary Sue?" " What are you doing?" " You better start plating." 
" Yeah, we got six minutes." "We got to bust it here." "I'm going!" "Everybody's rushing me, almost with the tone of voice like--that I didn't know what I was doing." "And I just snapped." "I don't need everybody yelling at me, though." "If your plates aren't ready, they are going out empty." "I don't want it sitting-- that's all." "I just hate all that rushing and stressing." "As long as you don't shout at me again." "I know, I know!" "I wonder if our special friend is very nervous right now." "Yeah, maybe." "Aw, they're..." "Clinking, toasting." "I'm so nervous for him." "Is he gonna do it now?" "No, I think he's gonna wait till dessert." " Oh." " I think." "Who knows?" "Oh, it's beautiful." "So this is Mary Sue's seafood stew." "So you think that Mary Sue's dish is a good date-night dish?" "I do, 'cause there's a bit of fun around it." "They put their fork in..." "And they use it as a spoon." "Oh, my God!" "I love that!" "Isn't that cute?" "Did you have a bite of the crouton?" "I wonder if that's too crunchy for a romantic dinner." "Hey, guys, I need help now." " What do you need?" " Plating." "You want me to start another line here or no?" "Yes, please." "Floyd, not in the center." "Whenever Naomi's focused on a dish, nothing else matters." "You got it, towards the back, and then sauce." "Traci, can you help Floyd?" "I'm a little nervous about how rustic my plate is, compared to Hugh's plate, that comes next." "Hugh is, like, master at making beautiful, elegant food when it comes to, like, a romantic evening." "But it's just not my style, so I can't worry too much about it." "Do you think Christopher's anticipating now?" "Naomi is next with porcini-braised chicken thigh." "See how it tastes." "I don't find Naomi's dish very romantic." "I think that my romantic feelings will survive Naomi's dish..." "'Cause I love the chicken so much." "It's a huge portion." "And it's a rich portion." "I think Naomi's really tried to just steal the show." 
"I thought this was really, really good." "What did you think?" "I think chef Naomi did an incredible job." "Yeah." "You guys must be so nervous." "Oh, I can hardly wait." " Six minutes." " Oh, we're gonna be good." "Can I give you these?" "The next one's Hugh's." "He has a very complicated plate, so it requires all hands on deck to get that up." "Hugh, you're missing sauce here, sauce here..." "There's definite pressure." "This is such a seminal moment in their lives." "I want to make sure they're really ecstatic with my dish." "Can you grab those?" "I will trip you on purpose." "There you are." "Stretch back." "Got to stretch the stomach out." "Oh, that's a good size." "Thank you." "Considering Hugh was giving us the biggest course of the night, you know, the steak and potatoes course, it feels really succinct and focused as a dish." " Yeah." " It is good." "I'm so full, though." "Ugh, I can't do it." "Suck it up, princess." " Oh." " How many of these couples do you think are getting some action tonight?" "I don't know." "I would like to know, though." "What about James?" "Do you think he's gonna get some action?" "I'm kind of feeling the chemistry over there." "James, you've been chewing on that same piece of meat for a minute and a half." "Look at how you're chewing." "This is not attractive." "This is not seductive." "Now, I have heard for many years about your Elvis affair." "I didn't have an affair with Elvis." "I had an hour with Elvis." "I was the only woman in the hotel room when he came back from doing a show." "So he took my hand and led me into the bedroom." "Wow." "As I was leaving, he said..." ""Oh, ma'am, would you call room service and order me a fried-egg sandwich?"" "That's why I'm a food writer." "My apple tarts have just come out of the oven, and they're looking fabulous." "Hugh and Celina have helped me with writing the je t'aime on the plates for the big moment." "Good job, people." "Give me your good hands." "Give me your good hands." 
"Nice, the good hand." "Maybe they'll just keep us all." "Do you see what it says?" "What?" "And she still doesn't get it." "She still doesn't get..." "I'm, like, nervous for him." "I know." "I know." "I got butterflies." "I love that." "If she doesn't get it, I don't know." "That's awesome." "Oh, it's beautiful." "What is it?" " Je t'aime-- "I love you."" " Je t'aime." "Je t'aime, gael Greene." "Je t'aime." "You couldn't make up your own line?" "You had to use hers?" " My problem is..." "It's dry." " Mm-hmm." "The dessert holds so much significance in a meal like this one." "You want that last hurrah to really sweep you away." "Right." "They do say that the key to a man's heart is through his stomach." "What do they say is the key to the woman's heart?" "Diamonds." "Oh." " You think she knows?" " I can't stand it." "I think it's time for me to get up and make a little announcement." " Okay." " This is gonna be fun." "We should probably grab a tissue." "I think we're gonna need it." "Or, you know, I have the-- we have a towel." " Here you go." " I was like..." "Thank you." "Ladies and gentlemen, thank you for attending date night." "Our chefs." "Now, every couple's relationship's very special." "But there's one couple that's here for a very special reason." "I can't breathe." "Chris." "Victoria, I love you with all of my heart." "I want to spend the rest of my life with you." "I want you to want to spend the rest of your life with me." "Will you marry me?" "If she says no, this is gonna be the most awkward moment in television history." "Yes, of course." "Are you kidding me?" "This is--feels like a dream." "I don't feel like this is actually happening." "Chris, Victoria, congratulations." "Chris said the first thing that you'd want to do is speak to your mum." "Yes." "The mom is here!" "Ah!" "I have a partner in James Oseland, 'cause he's definitely choked up, too." 
"As tough as I come off in the kitchen, and probably in my regular life, too," "I'm a softy on the inside." "Oh, my gosh." "I had no idea." "So to help celebrate your engagement, please accept this three-liter bottle of Chimney Rock Cabernet Sauvignon and a three-day, two-night trip to the Terlato family vineyard." "Thank you." "To Chris and Victoria, may all your dreams come true." "Thank you." "It's obviously a great joy for me to be a part of cooking a meal that I know that she'll remember forever." "It was sweet." "It's never a good idea to cook down to your guests." "I think you missed a chance to do something to raise it to another level." "It is funny that now that we're only six, it's just so much easier to help each other, except for Mary Sue's alter ego." "I'm going!" "Margaret did get out today." "And you would just get in there." "My bad self-- that's Tiffany." "Come on, George, you're us up." "Mine has been dubbed Hank." "I was really only Hank for one day." "Are you directing, or am I tending them?" "Maybe you don't do that right in front of where I'm doing red meat." "And then Floyd, I think is just Floyd." "I'm the only chef who doesn't have an alter-ego in the kitchen, because I can't cook when I'm not happy." "Well, that was sweet." " That was sweet." " Very sweet." " Yeah." " And he thought of everything." "But more importantly, I think our food was very solid." "I think so, too." "Cheers, guys." "Cheers." "Chefs, you did it." "She said yes." "The critics would like to see Naomi..." "Mary Sue, and Floyd." "Thank you." "Good luck." "You think we're in the bottom?" "No, I actually don't think so." "The good reason I think that we're in the top group is that if we're interpreting the challenge..." "You guys both hit that on the head." "Right." "Floyd, Mary Sue, and Naomi..." "Tasting tonight were our critics," "Gail Simmons, host of Top Chef:" "Just Desserts," "James Oseland, editor in chief of Saveur magazine." 
"And please welcome back to our critics' table," "Gael Greene, who's legendary restaurant reviews" "Your challenge was to create a six-course meal inspired by Chris and Victoria's relationship." "Well, the critics obviously had some favorites and some least favorites, and they decided that tonight your dishes... were their favorite dishes." "Congratulations." "Wow." "You know, it could've gone either way." " Thank you." " Yes, thank you." "Floyd, when I first saw your shrimp," "I thought to myself, "oh, no, he's put too much pepper."" "It was really aggressively seasoned." "And I commend you for it." "You took a chance with it, and it really paid off." "Thank you." "Floyd, I especially liked the fact that the shrimp were hugging each other." "I thought they were doing something worse." "Floyd, I thought your dish was a really startling combination-- fruity and very spicy and very wonderful." "Thanks." "Mary Sue, I was amazed that you could get such perfection of cooking, in the mussels, especially." "And also, the spiciness of the sausage mixed with the vegetables was wonderful." "Thank you." "Naomi, the crispy chicken skin had a lot of kind of rustic savoriness to it." "You know, I'm assuming you braised that dish, and then crisped up the skin afterwards-- a detail that I think so many people don't do, and it made such a difference." " Thank you." " Now, all that said..." "The critics only had one favorite." "And the chef who made that winning dish will receive $10,000 for their charity, furnished by Lexus." "And the winning chef is..." "Naomi." "Congratulations." " Congratulations." " Wow." "Awesome." "Thanks, guys." "Good for you." "It's a huge honor." "I guess the love got felt." "As a chef, there's really no higher compliment." "Congratulations, Naomi." "That's $10,000 to your charity, Seed Savers Exchange, which brings your total to $25,000." "Yeah." "Thank you." "I get the second-best dish or the third-best dish..." "But I don't get the best dish." 
"And, you know, I'm tired of coming in second." "Will you now please return to the wine room and ask your colleagues to join us?" " Thanks, guys." " Thanks, guys." "Well..." "Well, what?" " You won." " Thank you." "Thanks." "Congratulations." "Well, one of us is going home." "I assume they want to see all of us." " They do." " They do." " Good luck, you guys." " Thanks." "Three times, and I haven't won." "Sorry." "Am I a bad-luck charm?" "No." "Celina, Traci, Hugh..." "Tonight you had the critics' least favorite dishes." "Celina, what was the story behind your dish?" "One of their biggest moments together is going to sporting events." "And their ritual whenever they do so is to have a beer and a pretzel." "I thought your pretzel was pretty great." "I wanted there to be an integration somehow between the salad and the pretzel, 'cause they felt a little bit disjointed." "I kind of create playful food." "That's kind of what my restaurant's all about." "And that's what that was." "Traci, were you happy with your apple galette?" "I think the pastry was nice." "It was fluffy." "The apples were delicious." "I w very happy with the way it came out." "It was missing something, because there wasn't enough sauce or something that added moisture to it." "To me, they don't need anything but the tart." "The pink lady apples that you used verge on being a drier apple." "Maybe not so moist as something like, say, a gala." "I do, alas, agree with Gael about the dryness factor." "Traci, this is actually the first time that you've landed yourself in the bottom three." "Are you surprised to be here?" "I think that everyone had incredibly strong dishes today." "And I think it's--you're splitting hairs at this point." "Hugh, were you happy with the dish?" "Overall, yeah." "I mean, the meat was...fine." "My particular piece of meat-- it was very chewy, very chewy." "Okay." 
"I always think you should not serve anything that takes a lot of chewing while you're trying to seduce the guy across the table." "I think the three people who really hit the nail on the head on what the challenge was about, which was hitting the six events in their life that they listed" "I think these are the three that hit those things." "I felt like it was appealing to a relatively pedestrian crowd overall, and I was gonna do that." "So, Hugh, do you cook down to people?" "You gonna pay the bill?" "Yeah, I'll cook down to you anytime." "Chefs, please return to the wine room while the critics make their final decision." " Thank you." " Thank you." " Hello." " So what happened?" "My dish--they said they didn't get the pretzel with the salad." "I was like, "they love pretzels." "They love salad." "So I tied the two together."" "In mine, they wanted more sauce on the plate." "So..." "Then you guys have such nice responses to them, and then I'm like..." " Really?" " You." "Nice." "Whatever." "At this point, it's always gonna be about the tiny, little details that make all the difference." "Let's talk about Celina's salad for a minute, because she literally heard the fact that they liked a pretzel and beer, and she did a pretzel and a beer and cheese sauce on the same plate as the salad." "She could've done a million things with a pretzel, why did she need to keep it in a traditional pretzel shape?" "Why not make a lobster pot pie with the topping being pretzel-- Ooh, that would've been great." " Little pretzel puffs?" " We demand that the chefs give us more elevated food every time, and salad with a pretzel on the side is not something that's gonna win our hearts." "I think Hugh's dish also didn't seem to be particularly ambitious." "It was very banal-- a little broccoli and some celery-root puree." "Hugh's great error was in choosing to cook beef like that." 
"And I worry about Hugh's comment, saying he was kind of cooking down to them, and they're not gastronomes." "Give us your love." "We're in an evening of love." "The issue is how you choose to cook it." "And I actually don't think that Hugh did a perfect job." "Traci's tart wasn't anything that spectacular, that interesting." "At this point, I want to be wowed." "And if she's gonna make a dessert, show us something exciting and new." "It was just a dry tart." "It was missing something." "It could've had applesauce under the apple." "That's true." "At this stage of the competition," "I want to know more about your capacity as a chef." "Yes." "Well, it's a tough decision, but it seems like you all agree on your least favorite dish." "We do." "Let's get them out." "Celina, Traci, Hugh..." "Unfortunately, one of you served the critics' least favorite dish and will be eliminated tonight." "Celina, I think you made a lovely salad and pretzel." "I just don't think that the two came together and made a cohesive dish." "Hugh, it's never a good idea to cook down to your guests." "And sadly, I felt that's what you did tonight." "And the dish did not benefit from it." "Traci, I think you missed a chance to do something more with your dish that would've raised it to another level and possibly even given it the moisture that it lacked." "The chef that will be leaving us tonight... is Celina." "You've cooked some beautiful food through this competition, and you're a great chef." "Thanks for the opportunity." "We will be making a donation to your charity," "Harvesters." "Thanks." "Please return to the kitchen and pack your knives." "Traci and Hugh, you may return to your fellow chefs." " Thank you." " Thank you." " Thanks, you guys, all of you." " Thank you." "Celina's a fabulous chef." "She's cooked some beautiful dishes, but the pretzel just didn't work tonight with the salad." "Mm-hmm." "Aw!" "Lady." "You did a beautiful job." " Aw, I'm gonna miss you." " You too." 
"I was there to serve a purpose for that moment in Chris and Victoria's life." "And if that doesn't please the critics, then it doesn't please the critics." "I can't tell you how super bummed I am to leave the competition, but I just wish I could've made more money for harvesters." "== sync, corrected by elderman ==" "Next time on Top Chef Masters..." "You're cooking for our edible science fair." "I think I was skipping class when we learned about that." "Oh, fire." "Explode, explode, explode." "I don't want to come second anymore." "This is amazing!" "Do this." "Honestly, Hugh, I would say this is barely a mayonnaise." "Oh, you're looking at me like I'm wrong." "Here, you explain that." "Augustine thinks I'm an idiot at this point." "It says it's getting hot." "The induction burner is just not cooking." "For more information on Top Chef Masters,"
{ "pile_set_name": "OpenSubtitles" }
After five shark attacks in the past year, and five deaths in the country in a little over a decade, South Africa's Western Cape is deploying shark nets to protect swimmers and surfers from Great White Sharks. Millions of tourists visit the Cape Town area every year. Tens of thousands of US dollars are being committed to trial the world's first environmentally friendly barrier shark net. Karen Allen reports.
{ "pile_set_name": "OpenWebText2" }
The actor Willy Toledo has been released on provisional liberty, without bail and without any precautionary measures, after appearing before the judge at Madrid's investigating court number 11 this Thursday, 13 September, while the investigation into an alleged offence against religious feelings continues. Willy Toledo said that "the holding cells are nice" on leaving the Plaza Castilla courts in Madrid, where he testified for barely 10 minutes on Thursday morning over insults to God and the Virgin, after being arrested on Wednesday for failing to appear before the judge at two previous summonses and spending the night in the cells of the Moratalaz police station. "I didn't testify at all; I simply referred to a brief we filed months ago stating that I have committed no crime and that I therefore did not consider it necessary to appear before any judge or prosecutor," he revealed. Speaking to the press, the actor said he does not know what will happen next: "The case may be dismissed, or we may go to trial, because this was a preliminary hearing to take statements from the prosecution and the defence." On his night in police custody, Toledo said that "the cells are nice; it's the third time I've been there and I'm happy. I'm doing what I have to do, which is draw attention" to the matter. The actor insisted he has committed "no crime" and lamented that the situation is "third-world": "It seems incredible to me that this country still has five articles of the Criminal Code on offences against religious feelings." He nevertheless said he will abide by whatever the judge decides. On entering the court, Toledo shouted "you should disappear from the face of the earth," a remark the actor said was directed at all "fascists".
The Spanish Association of Christian Lawyers, which is acting as the private prosecution, said on Wednesday, after the actor's arrest became known, that he was "forcing" the situation.
{ "pile_set_name": "OpenWebText2" }
Customer Reviews

Forever Grateful
by russ_bond (Ruslan)
David Jerison is simply the best. Calculus has never been so easy before. Thanks MIT OCW

Nice work
by Dusto218
These are some of the clearest explanations I've seen of these concepts. Add to that the ability to pause or rewind and view clearly displayed formulas on the chalkboard and I feel like I'm grasping this subject fully for the first time.
{ "pile_set_name": "Pile-CC" }
[In vitro organogenesis in Dalbergia retusa (Papilionaceae)]. Plants were obtained via organogenesis from hypocotyl explants of in vitro germinated Dalbergia retusa seedlings. Adventitious bud induction was achieved on Murashige and Skoog medium containing five BA (benzyladenine) concentrations. The best BA concentration for bud induction and development was 8.8 microM. Shoot rooting was obtained on half-strength modified MS basal medium, supplemented with 20 g x l(-1) of sucrose and five concentrations of indole-3-butyric acid (IBA). The highest shoot rooting frequency was obtained with 19.7 microM IBA, but the highest average number of roots per plantlet was achieved with 24.6 microM IBA. Plants were transferred to greenhouse conditions.
{ "pile_set_name": "PubMed Abstracts" }
Q: What happens if I do not call pthread_mutex_destroy when using PTHREAD_PROCESS_SHARED

On Linux, a mutex can be shared between processes by setting the PTHREAD_PROCESS_SHARED attribute and saving the mutex in a mapped file that may be used by many processes. Here is an example from https://linux.die.net/man/3/pthread_mutexattr_init that does exactly that:

For example, the following code implements a simple counting semaphore in a mapped file that may be used by many processes.

    /* sem.h */
    struct semaphore {
        pthread_mutex_t lock;
        pthread_cond_t  nonzero;
        unsigned        count;
    };
    typedef struct semaphore semaphore_t;

    semaphore_t *semaphore_create(char *semaphore_name);
    semaphore_t *semaphore_open(char *semaphore_name);
    void semaphore_post(semaphore_t *semap);
    void semaphore_wait(semaphore_t *semap);
    void semaphore_close(semaphore_t *semap);

    /* sem.c */
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <pthread.h>
    #include "sem.h"

    semaphore_t *
    semaphore_create(char *semaphore_name)
    {
        int fd;
        semaphore_t *semap;
        pthread_mutexattr_t psharedm;
        pthread_condattr_t psharedc;

        fd = open(semaphore_name, O_RDWR | O_CREAT | O_EXCL, 0666);
        if (fd < 0)
            return (NULL);
        (void) ftruncate(fd, sizeof(semaphore_t));
        (void) pthread_mutexattr_init(&psharedm);
        (void) pthread_mutexattr_setpshared(&psharedm, PTHREAD_PROCESS_SHARED);
        (void) pthread_condattr_init(&psharedc);
        (void) pthread_condattr_setpshared(&psharedc, PTHREAD_PROCESS_SHARED);
        semap = (semaphore_t *) mmap(NULL, sizeof(semaphore_t),
                PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);
        (void) pthread_mutex_init(&semap->lock, &psharedm);
        (void) pthread_cond_init(&semap->nonzero, &psharedc);
        semap->count = 0;
        return (semap);
    }

    semaphore_t *
    semaphore_open(char *semaphore_name)
    {
        int fd;
        semaphore_t *semap;

        fd = open(semaphore_name, O_RDWR, 0666);
        if (fd < 0)
            return (NULL);
        semap = (semaphore_t *) mmap(NULL, sizeof(semaphore_t),
                PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);
        return (semap);
    }

    void
    semaphore_post(semaphore_t *semap)
    {
        pthread_mutex_lock(&semap->lock);
        if (semap->count == 0)
            pthread_cond_signal(&semap->nonzero);
        semap->count++;
        pthread_mutex_unlock(&semap->lock);
    }

    void
    semaphore_wait(semaphore_t *semap)
    {
        pthread_mutex_lock(&semap->lock);
        while (semap->count == 0)
            pthread_cond_wait(&semap->nonzero, &semap->lock);
        semap->count--;
        pthread_mutex_unlock(&semap->lock);
    }

    void
    semaphore_close(semaphore_t *semap)
    {
        munmap((void *) semap, sizeof(semaphore_t));
    }

The following code is for three separate processes that create, post, and wait on a semaphore in the file /tmp/semaphore. Once the file is created, the post and wait programs increment and decrement the counting semaphore (waiting and waking as required) even though they did not initialize the semaphore.

    /* create.c */
    #include <stdlib.h>
    #include <pthread.h>
    #include "sem.h"

    int main()
    {
        semaphore_t *semap;

        semap = semaphore_create("/tmp/semaphore");
        if (semap == NULL)
            exit(1);
        semaphore_close(semap);
        return (0);
    }

    /* post.c */
    #include <stdlib.h>
    #include <pthread.h>
    #include "sem.h"

    int main()
    {
        semaphore_t *semap;

        semap = semaphore_open("/tmp/semaphore");
        if (semap == NULL)
            exit(1);
        semaphore_post(semap);
        semaphore_close(semap);
        return (0);
    }

    /* wait.c */
    #include <stdlib.h>
    #include <pthread.h>
    #include "sem.h"

    int main()
    {
        semaphore_t *semap;

        semap = semaphore_open("/tmp/semaphore");
        if (semap == NULL)
            exit(1);
        semaphore_wait(semap);
        semaphore_close(semap);
        return (0);
    }

But calling pthread_mutex_destroy() on a shared mutex is tricky, because it can cause errors in other processes that are still using the mutex, and the example above does not call pthread_mutex_destroy() either. So I am thinking of not destroying it.

My question is: Is it safe to init a PTHREAD_PROCESS_SHARED mutex, save it in a mapped file and use it forever in many processes, without calling pthread_mutex_destroy() or re-initializing it?
A: My question is: Is it safe if I init a PTHREAD_PROCESS_SHARED mutex, save it to a mapped file and use it forever on many processes without calling pthread_mutex_destroy() or re-initialize it? It is allowed for a process-shared mutex to outlive the process that initialized it. If you map such a mutex to a persistent regular file, then its state will persist indefinitely, even while no process has it mapped. As long as the integrity of its state is maintained -- including, but not limited to, no process destroying it via pthread_mutex_destroy() -- new processes can map it and use it. That is to say, the semantics of what you describe are well-defined. But is it safe? Not especially. The first issue is that you need to know when to create it, and you need to avoid race conditions when you do. If you rely on the processes that regularly use the mutex to initialize it at need, then you have to make sure that exactly one creates and initializes it when the file does not already exist. Another issue is that using a long-lived shared mutex like that produces a great deal of exposure to failures. For example, if a program crashes while holding the mutex locked then it will remain locked until you take some kind of manual corrective action. Or if the mapped file is manipulated directly then the mutex state can easily be corrupted, producing undefined behavior in all programs using it -- even across reboots. If you really need a long-persisting synchronization object, then I would suggest considering a POSIX named semaphore. It is designed for the purpose, taking the above considerations and others into account. It differs somewhat, however, in that such semaphores reside in the kernel and have kernel persistence, so they do not persist across reboots (which is generally a good thing), and they are not susceptible to ordinary file manipulation. Alternatively, you could consider a System V semaphore. 
This is an older semaphore implementation that also has kernel persistence. Its interface is considerably clunkier than that of the POSIX semaphore, but it has a few useful features that the POSIX semaphore does not, such as providing for automatic unlocking when the process holding one locked terminates (even abnormally).
{ "pile_set_name": "StackExchange" }
Introduction {#Sec1} ============ Prion diseases, such as bovine spongiform encephalopathy (BSE) in cattle, scrapie in sheep, and Creutzfeldt-Jakob disease (CJD), kuru and Gerstmann-Sträussler-Scheinker (GSS) syndrome in humans, are a group of neurodegenerative disorders caused by prions, self-replicating β-sheet-rich infectious polymeric assemblies of misfolded host-encoded cellular prion protein (PrP^C^)^[@CR1]--[@CR4]^. Whilst rare, prion diseases are an area of intense research interest, as it is increasingly recognised that other degenerative brain diseases, such as Alzheimer's and Parkinson's diseases, also involve the accumulation and spread of aggregates of misfolded host proteins through an analogous process of seeded protein polymerisation^[@CR2],[@CR5]--[@CR8]^. Consequently, study of 'prion-like' mechanisms has been recognised to have a much wider relevance to the understanding of neurodegenerative disorders^[@CR9]--[@CR11]^. PrP^C^ is a cell surface, predominantly α-helical, glycosylphosphatidylinositol (GPI)-anchored glycoprotein that is sensitive to protease treatment and soluble in detergents^[@CR1]^. In contrast, prions may acquire protease-resistance and are classically designated as PrP^Sc^ (refs. ^[@CR12],[@CR13]^). PrP^Sc^ is found only in prion-infected tissue and is β-sheet-rich aggregated material, partially resistant to protease treatment, and insoluble in detergents^[@CR14]^. Transmission experiments to transgenic mice provide strong supporting evidence that alternative conformers or assembly states of PrP^Sc^ encode multiple prion strains, which differ in their pathogenic properties^[@CR2],[@CR15]^.
Transgenic mice expressing only human PrP with either valine or methionine at residue 129 have shown that this common human polymorphism constrains the propagation of distinct human prion conformers, and the occurrence of associated patterns of neuropathology consistent with the conformational selection model of prion propagation^[@CR16]--[@CR20]^. Heterozygosity at codon 129 is thought to confer resistance to prion disease by inhibiting homologous protein--protein interactions essential for efficient prion replication, with the presence of methionine or valine at residue 129 controlling the propagation of distinct human prion strains^[@CR2],[@CR21]^. Biophysical measurements suggest that this powerful effect of residue 129 on prion strain selection is likely to be mediated via its effect on the conformation of the disease-associated PrP^Sc^ form, or its precursors or on the kinetics of their formation, as it has no measurable effect on the structure, folding or stability of PrP^C^ ^[@CR22]^. The acquired prion disease kuru, which was epidemic amongst the Fore linguistic group of the Papua New Guinea highlands when first studied in the 1950s, and which was transmitted during mortuary feasts, imposed strong genetic selection on the Fore, essentially eliminating residue 129 homozygotes^[@CR23]^. A novel variant of prion protein, V127, unique to the affected population in the epicentre of the kuru epidemic, was also identified^[@CR24]^. In this variant, the glycine at residue 127, which is fully conserved amongst vertebrate PrP primary structures, is substituted by valine. The V127 polymorphism was found on one copy of the *PRNP* gene in unaffected individuals within the population, suggesting that this polymorphism conferred resistance to prion disease, having been selected for in response to the kuru epidemic^[@CR23],[@CR24]^.
The protection afforded by this polymorphism was modelled using transgenic mice expressing human PrP^[@CR25]^, and showed that heterozygous mice expressing both alleles containing glycine and valine at residue 127 (G/V127), echoing the human resistance genotype, exhibited profoundly reduced susceptibility to infection with kuru and classical CJD prions. Most importantly, however, and in complete contrast to the protective effect of the residue 129 polymorphism, homozygous mice expressing human PrP with solely valine at residue 127 (V127), showed total resistance to all inoculated human prion strains. A comparison of the incubation periods between hemizygous mice expressing wild-type G127 human PrP only, with heterozygous mice expressing both G127 and V127 PrP, indicated a dose-dependent dominant-negative inhibitory effect of V127 PrP on prion propagation, resulting in prolonged incubation periods and variable attack rates in heterozygotes^[@CR25]^. These data indicated that V127 PrP is intrinsically resistant to prion propagation and can inhibit propagation involving wild-type (WT) G127 PrP. In essence, this single amino acid substitution, at a residue completely conserved in vertebrate evolution, has as potent a protective effect on the host as a null mutation. Consequently, the structural and mechanistic basis of the protective effect of the V127 mutation is of keen interest as it may provide key insights into the mechanism of prion conformational conversion and recruitment. As a first step in characterising the effect of this protective polymorphism on PrP, we undertook a detailed investigation of the effect of the residue 127 polymorphism on the biophysical properties of the native cellular PrP^C^ conformation using a combination of X-ray crystallography, NMR and equilibrium unfolding. 
We show that this mutation imposes local changes in backbone conformation which facilitate formation of intermolecular hydrogen bonds between native-state dimers and imposes conformational restrictions on this region of the protein. In addition, it significantly alters millisecond timescale conformational rearrangements in regions of PrP proposed to be important in prion transmission^[@CR26]--[@CR28]^. These effects may modulate the conversion of native PrP^C^ to a disease-associated form or on pathway intermediates relevant to the disease process, and provide a mechanistic explanation for the protective effect of this mutant. Results {#Sec2} ======= Choice of PrP variants studied {#Sec3} ------------------------------ Persons who were exposed to kuru and survived the epidemic were predominantly heterozygotes at PrP residue 129^[@CR23]^. The V127 protective polymorphism in human PrP was always present on an M129 allele^[@CR24]^, consequently our main interest was with the V127/M129 PrP variant. However, we took the opportunity, given the known biological effect of the residue 129 polymorphism to also study the V127 variant with valine at residue 129 (V127/V129), and both forms of wild-type PrP (G127/M129 and G127/V129) with the aim of dissecting the effects of both of these protective polymorphisms. V127 PrP structures closely resemble wild-type G127 PrP {#Sec4} ------------------------------------------------------- To determine whether the overall structure of PrP^C^ was affected by the protective V127 variant we crystallised recombinant human PrP (residues 119--231), with valine at residue 127, (V127/M129 and V127/V129), complexed with the Fab fragment of the anti-PrP antibody ICSM18, as performed previously with G127/M129 PrP (Supplementary Table [1](#MOESM1){ref-type="media"} and Supplementary Fig. [1](#MOESM1){ref-type="media"})^[@CR29]^. 
The crystal structures of both V127 variants (V127/M129, 2.3 Å resolution, pdb 6SV2 and V127/V129, 2.5 Å resolution, pdb 6SUZ) closely resembled that of WT G127/M129 (pdb 2W9E, Fig. [1a](#Fig1){ref-type="fig"} and Supplementary Fig. [2](#MOESM1){ref-type="media"})^[@CR29]^. The structured C-terminal domain (residues 125--225) comprises three α-helices (α1--α3) and a short, two-stranded, anti-parallel β-sheet (Fig. [1](#Fig1){ref-type="fig"} and Supplementary Fig. [3](#MOESM1){ref-type="media"}). Residue 127 immediately precedes the first β-strand of the β-sheet whereas residue 129 lies within it. The residues surrounding 127 and 129 are well defined in both crystal structures (Figs. [2](#Fig2){ref-type="fig"} and [3](#Fig3){ref-type="fig"}) and show that the side-chains of both residues are predominantly located on the protein surface. Neither the 127 nor 129 polymorphisms substantially perturb the backbone or sidechain positions, or hydrogen bonding, of residues within the β-sheet (Fig. [1b](#Fig1){ref-type="fig"} and Supplementary Fig. [2a--c](#MOESM1){ref-type="media"}). Both circular dichroism (CD) and heteronuclear NMR spectra (Supplementary Figs. [4](#MOESM1){ref-type="media"}--[6](#MOESM1){ref-type="media"}) are consistent with the crystal structures accurately reflecting the solution structure of the proteins. The global stability and unfolding behaviours of the V127/M129 and V127/V129 variants (Supplementary Fig. [7](#MOESM1){ref-type="media"} and Supplementary Table [2](#MOESM1){ref-type="media"}) are also not significantly affected by the substitution of valine for glycine at position 127, reflecting the lack of major structural perturbation.Fig. 1Effect of the V127 polymorphism on the structure of human PrP^C^.**a** V127/M129 (PDB 6SV2 -- green) and wild-type G127/M129 human PrP (PDB 2W9E -- light blue^[@CR29]^) crystal structures, superimposed in cartoon representation. Residues 125--223 are shown. The r.m.s. 
deviations of backbone heavy atoms are less than 0.44 Å between these structures. The sidechains of V127 (red) and R164 (blue) are shown as sticks. This figure and the other structural figures were prepared using *PyMOL* (PyMOL Molecular Graphics System, Schrödinger, LLC). **b** Side chain packing in the V127/M129 (green) and WT G127/M129 (light blue) β-sheets. The PrP backbone immediately preceding residue 127 in V127/M129 PrP is displaced due to the bulkier valine sidechain at residue 127. The sidechain and backbone positions of residues in the β-sheet are very similar, with the exception of the sidechain of arginine 164 (R164), which due to its close proximity to residue 127 is displaced in the V127 variant. This perturbation (see also Fig. [8](#Fig8){ref-type="fig"}) is observed in solution by a marked chemical shift change in the Nε peak arising from the R164 sidechain group in NMR HSQC spectra (Supplementary Fig. [6](#MOESM1){ref-type="media"}). **c** Four-stranded intermolecular anti-parallel β-sheet formed between neighbouring V127/M129 PrP molecules (in green and lime green). **d** Intermolecular β-sheet contacts in V127/M129 PrP (green) and WT G127/M129 PrP (light blue). The amino acid sidechains of residues found in the intermolecular β-sheet are shown in stick representation, with the residue 127 and 129 polymorphisms in red and yellow respectively. **e**, **f** Intermolecular β-sheet hydrogen bonding in V127/M129 (**e**) and G127/M129 PrP (**f**). Hydrogen bonds stabilising the intermolecular β-sheet are shown as blue dotted lines, between the amide (blue) and carbonyl (red) groups of the denoted amino acids, with the corresponding distances in Å. The β-sheet interface in the V127/M129 PrP crystal is stabilised by an additional pair of hydrogen bonds between the carbonyls of G126 and amides of A133 (**e**). 
The additional hydrogen bond pair between G126 and A133 is not formed in WT G127/M129 PrP as the hydrogen bond distance is too long (7.7 Å) (**f**).Fig. 2The quality of the electron density maps for PrP in the V127/M129 PrP - ICSM-18 Fab complex at 2.3 Å resolution.Residues from the PrP β-sheet and the V127 polymorphism are shown; 2F~O~ --F~C~ map contoured at 1σ. V127 polymorphism restricts PrP backbone conformation {#Sec5} ----------------------------------------------------- Despite the crystal structures being mostly unperturbed by the V127 polymorphism, a number of localised differences were identifiable. The most significant area of variation is found immediately N-terminal of residue 127 (residues 125--127). This region adopts an essentially identical conformation in both the V127/M129 and V127/V129 structures, which differs significantly from WT G127/M129 PrP (Fig. [1b](#Fig1){ref-type="fig"} and Supplementary Fig. [2a--c](#MOESM1){ref-type="media"}). In particular, the reduction in the conformational plasticity of the backbone due to the valine/glycine substitution at position 127 leads to a very different conformation at this point (V127 Phi angle = −70.5°, c.f. G127 = +106.9°), as the WT backbone conformation is in a disallowed region of conformational space for valine. Consequently, the Cα of G126 in V127/M129 PrP is displaced by 2.9 Å, and the Cα of L125 by 2.2 Å (equivalent Cα atoms of most other surrounding residues are displaced by 0.2--0.3 Å). These Cα atom positions are well defined in both V127 structures (Figs. [2](#Fig2){ref-type="fig"} and [3](#Fig3){ref-type="fig"}). Furthermore, the V127 polymorphism appears to reduce conformational variability at residue 127, and concomitantly increases structural definition of the β-sheet, as implied by a comparison of relative B-factors of this region in V127/M129 PrP when compared with WT G127/M129 PrP. 
These lower B-factors extend from L125 to A133, beyond the end of the first strand of the β-sheet (Fig. [3](#Fig3){ref-type="fig"}). In V127/M129 PrP, the average Cα B-factors for both the N-terminus (residues 126--131; 30 Å^2^), and β-strand 1 (residues 128--131; 27 Å^2^) are lower than the average B-factor for the core secondary structure elements (31 Å^2^). In contrast, in wild-type G127/M129 PrP, the corresponding values for both the N-terminus (46 Å^2^) and β-strand 1 (42 Å^2^) are higher than the average B-factor (39 Å^2^). As the crystal structures are all isomorphous with the same crystal packing, we suggest that the reduction in B-factors is likely due to conformational restriction introduced by the valine sidechain, and by additional intermolecular hydrogen bonding found in the V127 crystals, described below.Fig. 3Thermal parameter (B-factor) distribution in human PrP.(**a**) V127/M129 PrP (**b**) G127/M129 PrP shown as "putty" representation, as implemented by PyMOL. The V127/M129 PrP Cα atom B-factors range from 22.7 Å^2^ to 96.9 Å^2^ with average values of 38.3 Å^2^ for the whole protein, and 30.4 Å^2^ for the core secondary structure elements (residues 128--131 (β-strand 1), 144--154 (α-helix 1), 160--164 (β-strand 2), 174--186 (α-helix 2) and 202--220 (α-helix 3)). The Cα B-factors are depicted on the structure in dark blue (lowest B-factor) through to red (highest B-factor), with the radius of the ribbon increasing from low to high B-factor. The lowest B-factor is observed in the region of α-helix 2 (α2) and α-helix 3 (α3) where the disulphide bridge links the two α-helices at residues 179 and 214 (dark blue), with the antibody-binding epitope spanning α-helix 1 also displaying lower than average B-factors, consistent with the antibody contacts stabilising this region of PrP relative to the overall structure. 
The largest B-factors are observed in the loop region linking helices α2 and α3 (red) (α2- α3 loop; residues 191--199), where the electron density clearly shows more disorder than elsewhere in the structure. In contrast, the B-factors for residues in close proximity to the V127 polymorphism are not unusually high, and all of these residues are clearly observed in the electron density (see also Fig. [2](#Fig2){ref-type="fig"}). V127 polymorphism extends PrP intermolecular β-sheet {#Sec6} ---------------------------------------------------- Notably, dimers between crystallographically-related PrP molecules are observed in the crystals (Fig. [1c](#Fig1){ref-type="fig"}). Association is mediated by a short segment of the anti-parallel β-sheet with hydrogen bonds formed between the first β-strand (residues 128--131) of each molecule^[@CR29]^. This results in the formation of a four-stranded intermolecular β-sheet between the existing anti-parallel β-sheets of each PrP molecule, involving close homotypic contacts at L130 (Fig. [1e, f](#Fig1){ref-type="fig"}). Similar intermolecular interactions are also observed in the non-isomorphous crystal structures of sheep^[@CR30]^, rabbit^[@CR31]^ and human PrP^[@CR32]^ in the absence of antibody, and in different crystallographic space groups (Fig. [4](#Fig4){ref-type="fig"}). This suggests that this interaction is not a crystal packing artefact, and may reflect a greater biological significance for prion propagation, especially as residue 129 is protective and crucial to the aetiology and neuropathology of prion disease, and residue 127, which is in close proximity to the dimer interface, can completely prevent prion propagation.Fig. 4The interaction of PrP molecules in various PrP crystal structures.**a** Superposition of human V127 (green), ovine^[@CR30]^ (pink), rabbit^[@CR31]^ (grey) and human D178N^[@CR32]^ (yellow) PrP dimers from their respective crystal structures. 
Unlike V127, the other structures were obtained from apo-crystals in the absence of antibody. The ICSM18 antibody-binding epitope consists of α-helix 1, which is remote from the PrP dimer interface (see **b** and Supplementary Fig. [1](#MOESM1){ref-type="media"}). **b** Close up view of the β-sheet dimer interface common to the crystal dimers. The relative orientation of the two interacting PrP molecules in each structure differs depending on the intermolecular hydrogen-bonding patterns. The residue 129 polymorphism is accommodated within the dimer interface without significant perturbations of surrounding amino acids (Supplementary Fig. [2c, d](#MOESM1){ref-type="media"}). In contrast, substitution by valine at residue 127 results in the formation of an additional pair of intermolecular hydrogen bonds in both V127 structures, between the backbone carbonyl and amide groups of G126 and A133 respectively (Fig. [1e](#Fig1){ref-type="fig"}, Supplementary Fig. [2d](#MOESM1){ref-type="media"}), due to the alteration in backbone conformation. This orients the G126 carbonyl group towards the dimer interface and towards its hydrogen-bond partner, the amide of A133. In the WT G127/M129 PrP dimer, the corresponding G126 CO -- A133 N^H^ distance is 7.7 Å, as the carbonyl group of G126 points away from the dimer interface (Fig. [1f](#Fig1){ref-type="fig"}). This additional hydrogen bonding with V127 extends the β-sheet dimer interface to residues 126--133, thereby encompassing V127, whereas G127 is not involved in dimer contacts in WT G127/M129 PrP. The hydrogen bond distances for these additional H-bonds in the V127 structures (2.8--2.9 Å) indicate a strong interaction. Also, the hydrogen bonds involving L130 at the centre of the intermolecular β-sheet interface are shortened (3.0 Å compared with 3.3 Å) (Fig. [1e, f](#Fig1){ref-type="fig"}).
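The donor--acceptor distances quoted here can be checked directly from atomic coordinates with a simple distance criterion. A sketch, using invented coordinates in place of the actual G126 O and A133 N positions from the crystal structures:

```python
import math

# Sketch: classify a backbone N-H...O=C contact by donor-acceptor distance.
# Coordinates are invented for illustration; in practice they would come
# from the PDB ATOM records of the G126 O and A133 N atoms.

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_hbond(donor_n, acceptor_o, max_dist=3.5):
    """Crude, distance-only criterion for a backbone hydrogen bond."""
    return distance(donor_n, acceptor_o) <= max_dist

g126_o_v127 = (0.0, 0.0, 0.0)   # hypothetical G126 carbonyl O (V127 crystal)
a133_n_v127 = (2.9, 0.0, 0.0)   # hypothetical A133 amide N, 2.9 A away

g126_o_wt = (0.0, 0.0, 0.0)     # hypothetical positions in the WT crystal
a133_n_wt = (7.7, 0.0, 0.0)     # 7.7 A apart: too long to bond

print(is_hbond(a133_n_v127, g126_o_v127))  # V127: bonded
print(is_hbond(a133_n_wt, g126_o_wt))      # WT: not bonded
```

A full hydrogen-bond assignment would also check donor--hydrogen--acceptor geometry, but the distance cut-off alone separates the 2.8--2.9 Å and 7.7 Å cases described above.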
Thus, rather than preventing PrP dimerisation through disruption of intermolecular hydrogen bonding^[@CR33],[@CR34]^, the V127 polymorphism appears to increase native-state dimer hydrogen bonding. Increased conformational variability in V127 PrP structures {#Sec7} ----------------------------------------------------------- Intriguingly, altered conformational variability is observed in key regions distant from the site of the V127 polymorphism, in particular the loop linking the second strand of the β-sheet and helix 2 (β2-α2 loop; residues 165--172; Fig. [3](#Fig3){ref-type="fig"}). This region, which has been shown to affect prion cross-species transmissibility^[@CR26]--[@CR28],[@CR35],[@CR36]^, is adjacent to the β-sheet, packing against residues N-terminal to the first β-strand (including residue 127), the β-sheet itself, the C-terminus of helix 3, and is in close proximity to the disease-associated residue D178^[@CR37]^. The Cα B-factors for residues 169--172 within this loop are higher than the average for the rest of the protein in both V127 structures (50 vs. 38 Å^2^ for V127/M129 & 56 vs. 45 Å^2^ for V127/V129). This contrasts with WT G127/M129, where these residues are better defined than the average, according to their B-factors (40 vs. 42 Å^2^). The B-factors for the V127 structures are consistent with an alteration in the degree of conformational exchange in this loop region of the protein compared to WT PrP, possibly compensating for the reduced conformational variability observed in the β-sheet region. The remaining regions of the V127 PrPs display B-factors that are comparable to WT values. Altered conformational variability also seen in solution {#Sec8} -------------------------------------------------------- Crucially, the altered conformational variabilities are also observed in solution. The effects of the V127 and V129 polymorphisms on the dynamics of PrP were investigated using NMR relaxation data (Supplementary Fig. 
[8](#MOESM1){ref-type="media"}), coupled with *Modelfree* (Fig. [5](#Fig5){ref-type="fig"}) and reduced spectral density analyses (Fig. [6](#Fig6){ref-type="fig"}, Supplementary Fig. [9](#MOESM1){ref-type="media"})^[@CR38],[@CR39]^. The former uses order parameters (*S*^2^) to report internal sub-nanosecond (ns) motions. *S*^2^ values range from 0 for highly flexible to 1 for rigid systems. The β-sheet and helical regions of all PrP variants exhibit *S*^2^ values of 0.8--0.9, typical of structured regions of folded proteins (Fig. [5](#Fig5){ref-type="fig"}). However, a number of residues in structured regions, for example E168 in V127/M129 and D178 in G127/M129 PrP display anomalously low *S*^2^ values. These are subject to millisecond (ms) conformational dynamics described below (Fig. [6](#Fig6){ref-type="fig"}).Fig. 5Degree of order in PrP variants.Order parameters (*S*^2^) for G127 (G127/M129), V127 (V127/M129) and V129 (G127/V129) PrP. Residues of the β1-α1 (residues 134--144) and α2--α3 loops (residues 194--199) display slightly reduced *S*^2^ values (0.6--0.8), reflecting increased flexibility, commonly observed in loop regions of globular proteins^[@CR22],[@CR71],[@CR73]^, and in previous studies of PrP^C^. The N-terminus (residues 119--124) preceding the β-sheet is mobile and disordered, with low *S*^2^ values (0.15--0.5) and a lack of electron density in the crystal structures. These order parameters are mapped onto the structures of the PrP variants in Supplementary Fig. [10](#MOESM1){ref-type="media"}.Fig. 6Conformational dynamics in the PrP variants.**a** Reduced spectral density function J(0), describing the amplitude of zero frequency motions in the PrP variants at 800 MHz. Uncharacteristically large J(0) values, such as those exhibited by β-sheet residues (128--131 and 160--164), and G131 and R164 in V127/M129 PrP in particular, indicate ms -- µs dynamics. 
The dotted lines in the J(0) graphs lie two standard deviations above the mean J(0) for the N-terminus of helix 3 (residues 200--210) of the respective variants. **b** Effect of V127 and V129 polymorphisms on the amplitude of zero frequency J(0) motions at 800 MHz. J(0) changes relative to G127/M129 PrP. Residues which experience significant changes in J(0) due to the V127 substitution include G131, R164 and E168. V129 results in altered J(0) motions for residues 129 and 131, within the first β-strand, M166, and residue 178, located in helix 2. The observed changes are due to differential ms conformational dynamics (see **c**). **c** PrP ms dynamics (*R*~ex~) modelled in the Relax Modelfree analysis. The V127 polymorphism increases ms dynamics (*R*~ex~) within the β-sheet (G131/R164) and β2-α2 loop (E168/Q172), and diminishes those at the C-terminus of helix 3. The V129 polymorphism also increases ms dynamics in the first β-strand (V129/G131), and the C-terminus of helix 3. Residues 166 and 172 at either end of the β2-α2 loop are also perturbed. These *R*~ex~ values are mapped onto the structure of PrP^C^ (see Fig. [7](#Fig7){ref-type="fig"} and also Supplementary Figs. [10](#MOESM1){ref-type="media"} and [11](#MOESM1){ref-type="media"}). The *Modelfree* approach also allows a general separation for each residue of ms conformational dynamics (*R*~ex~ values) from ns and sub-ns motions. These ms timescale motions are often associated with large-scale co-operative conformational changes and highlight residues that populate low-free energy alternative conformations. For each of the PrP variants, a number of residues exhibited significant *R*~ex~ values (Fig. [6c](#Fig6){ref-type="fig"}). These are concentrated in a spatially close region, involving the β-sheet (V129/G131/R164), the β2-α2 loop (M166/E168/Q172) and the C-terminus of helix 3 (I215/T216/Y218/E219/E221; Supplementary Figs.
[3](#MOESM1){ref-type="media"}, [10](#MOESM1){ref-type="media"} and [11](#MOESM1){ref-type="media"}). The line-broadening of resonances D167, Y169, S170 and N171 beyond detection in the HSQC spectra of all three variants also likely reflect ms dynamics. The observed conformational dynamics are consistent with a proposed interconversion of the β2-α2 loop between a more populated 3~10~-helix and a type I β-turn^[@CR27],[@CR30],[@CR32],[@CR40],[@CR41]^ (Supplementary Fig. [12](#MOESM1){ref-type="media"}). The V127 polymorphism results in large increases in ms dynamics for residues G131 and R164 in the β-sheet, and E168 and Q172 in the β2-α2 loop, but decreases in the C-terminus of helix 3 (215--221; Figs. [6](#Fig6){ref-type="fig"} and [7](#Fig7){ref-type="fig"}). In a number of WT PrP crystal structures the sidechain of R164 forms a pair of hydrogen bonds with the carboxyl group of E168 (2.5 and 3.1 Å in PDB 2W9E^[@CR29]^). The introduction of the bulkier valine sidechain at residue 127 appears to sufficiently perturb the side-chain position of R164 such that the interactions with E168 are essentially removed (the equivalent distances are 3.2 and 3.8 Å; Fig. [8](#Fig8){ref-type="fig"}). Significantly, NMR chemical shift changes in the N^ε^ signal from the R164 sidechain in V127/M129 PrP reflect this alteration in side-chain orientation, with the N^ε^ being perturbed by the change in its proximity to the aromatic ring of Y128 and the change in hydrogen bonding of R164 N^H1^. (Supplementary Fig. [6](#MOESM1){ref-type="media"}). The loss of these interactions is a likely source of the increase in the ms dynamics of both residues, which appears to be disseminated along the rest of the β2-α2 loop, as residue Q172, the other visible resonance in the β2-α2 loop, also displays a marked increase.Fig. 
7Effect of V127 polymorphism on the amplitude of PrP ms dynamics (*R*~ex~).The sidechains of residues which experience altered ms dynamics in V127/M129 PrP, relative to G127/M129 PrP are shown (Fig. [6c](#Fig6){ref-type="fig"}), with varying width of backbone and colour. Residues showing increased *R*~ex~ values in the V127 variant, such as G131, R164, E168 and Q172, are coloured red, while those showing a reduction, such as Y218, are coloured blue. Residues for which a comparison is not possible, due to absence of data, are not coloured. The orientations of the R164 and E168 sidechains in G127/M129 PrP are shown in yellow^[@CR29]^, illustrating the loss of hydrogen bonding in V127/M129 PrP, caused by a steric clash between V127 and R164 sidechains in the V127 variant. Also shown is the hydrogen bond between Y169 and D178, showing the close association between the β2-α2 loop and another residue which has a key effect on the aetiology of human prion disease.Fig. 8Perturbation of the R164 sidechain by V127 in V127 PrP crystals.Comparison of residue 127, R164 and E168 side-chain positions in WT G127/M129 (cyan), V127/M129 (green) and V127/V129 PrP (yellow). Glycine 127 is coloured bright red, with the valine sidechains of residue 127 in V127/M129 and V127/V129 PrP coloured dark red. R164 and E168 in wild-type G127/M129 PrP are coloured dark blue, and lighter blue in both V127 variants. In both V127 variants the sidechain of R164 is sufficiently displaced from its position in the wild-type protein to significantly weaken the specific, strong (2.5 Å) hydrogen bonding interaction with E168 observed in wild-type G127/M129 PrP. Similarly, the V129 polymorphism also affects ms dynamics; however, different residues are affected. V129 increases *R*~ex~ values for G131 and itself (Fig. [6c](#Fig6){ref-type="fig"}, Supplementary Figs. [10](#MOESM1){ref-type="media"} and [11](#MOESM1){ref-type="media"}).
This alteration in G131 exchange dynamics has recently been observed in mouse PrP^[@CR42]^. In addition, we observe that the *R*~ex~ values of D178 are markedly reduced in the V129 polymorph. This is illustrated by the reduction of line-broadening of D178 observed in the HSQC spectra of G127/V129 PrP (Supplementary Fig. [13](#MOESM1){ref-type="media"}). This is notable as the residue 129 M/V polymorphism affects the disease phenotype of the pathogenic D178N mutation, which causes inherited prion disease. D178N is associated with the clinico-pathological phenotype Fatal Familial Insomnia (FFI) when residue 129 is methionine, and CJD when it is valine^[@CR37]^. In V127/M129 PrP the D178 HSQC resonance cannot be observed directly as it is heavily overlapped with that of V127, but an analysis of the intensity of signals in V127/M129 3D HNCO NMR spectra indicates that D178 does indeed experience ms dynamics, to a similar extent as wild-type G127/M129 PrP. This suggests that the V129 polymorphism alters PrP conformational variability independently of the residue 127 polymorphism. A number of residues at the C-terminus of helix 3 (I215, Y218, E219, R220, E221 and S222) experience altered ms dynamics in V127/M129 compared with G127/V129 PrP. Residues I215/Y218/E221/S222 are on a face of the helix that interacts with residues in the β2-α2 loop (Fig. [7](#Fig7){ref-type="fig"}, Supplementary Fig. [11](#MOESM1){ref-type="media"}). For example, residues Y218 and S222 closely interact with M166, while I215 and Y218 interact with Q172. In particular, residues Y218 and E221 are subject to marked increases in *R*~ex~ values in G127/V129 PrP, whereas in V127/M129 PrP there is a reduction at the C-terminus. V127 polymorphism does not perturb PrP stability {#Sec9} ------------------------------------------------ To test whether the variations in dynamics have a substantial effect on local stability, hydrogen/deuterium exchange rates were obtained on V127/M129 PrP.
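The exchange analysis that follows rests on amide protection factors, the ratio of intrinsic to observed exchange rates, PF = *k*~int~/*k*~obs~; for residues exchanging from the globally unfolded state under EX2 conditions, the corresponding stability is ΔG~HX~ = *RT* ln PF. A minimal numerical sketch with illustrative, invented rate constants:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def protection_factor(k_int, k_obs):
    """PF = intrinsic exchange rate / observed exchange rate."""
    return k_int / k_obs

def delta_g_hx(pf, temp_k=298.15):
    """Apparent stability (kJ/mol) from a protection factor, assuming
    EX2 exchange from the globally unfolded state."""
    return R * temp_k * math.log(pf) / 1000.0

# Illustrative rates (s^-1): a core residue ~30-fold more protected
# than a beta-strand-1 residue, mirroring the ratio quoted in the text.
pf_core = protection_factor(k_int=1.0, k_obs=1.0e-5)     # PF = 1e5
pf_strand1 = protection_factor(k_int=1.0, k_obs=3.0e-4)  # ~30x lower

print(f"core PF = {pf_core:.0f}, dG = {delta_g_hx(pf_core):.1f} kJ/mol")
print(f"strand-1 PF = {pf_strand1:.0f}")
```

The ~30-fold lower protection of β-strand 1 discussed below translates, via the logarithm, into a modest but measurable reduction in apparent local stability.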
The observed rates of hydrogen/deuterium exchange allow the determination of amide protection factors, which indicate the extent to which hydrogen bonding and burial prevent solvent access. The hydrogen/deuterium exchange data indicate that the protection factors and stabilities of the PrP secondary structure elements are indistinguishable between V127/M129 and G127/M129 PrP (Supplementary Fig. [14](#MOESM1){ref-type="media"})^[@CR22],[@CR43]^. The protection factors of the secondary structure elements, with the notable exception of the first strand of the β-sheet and in the vicinity of the disulphide bond, reflect the equilibrium constant between native and unfolded states of the protein (*K*~F/U~)^[@CR43]^. The majority of residues that display observable protection factors therefore exchange from the globally unfolded state. This is also the case with G127/V129 PrP^[@CR22]^. The V127 and V129 polymorphisms thus do not induce any alternatively folded states in which the core of the protein is destabilised. This is noteworthy as the first strand of the β-sheet (where the residue 127 and 129 polymorphisms lie) displays anomalously low protection factors corresponding to reduced stability (\~30 times less than the other secondary structure elements), suggesting that its stability might be affected more readily by the protective polymorphisms. Notably, a number of regions that are subject to the ms conformational dynamics affected by both protective polymorphisms are in areas that do not display measurable hydrogen protection, for example the β2-α2 loop and the C-terminus of helix 3. Effect of V127 polymorphism on PrP in vitro fibrilisation {#Sec10} --------------------------------------------------------- The lack of major structural perturbation or altered stability of the V127 variant in comparison to WT PrP^C^ suggests that the polymorphism may act primarily by affecting the efficiency of conversion of PrP^C^ to its disease-associated aggregated form.
To assess this we firstly examined the ability of the V127 variant to fibrilise under partially denaturing conditions. When agitated in 2 M GuHCl, PrP can be induced to form amyloid. Binding of the fluorescent thiazole dye thioflavin T to these β-sheet-rich fibrillar structures reports their formation, allowing a quantitative analysis of the kinetics of fibril formation^[@CR44]^. We found that although V127/M129 PrP can be induced to fibrilise within the time scale of the experiment, it did so with a significantly longer lag-time than WT G127/M129 PrP (Fig. [9](#Fig9){ref-type="fig"}). This is particularly interesting as substitutions to valine, and other bulky hydrophobic residues typically promote β-sheet formation and self-association required for amyloid formation^[@CR45]^. These data are however consistent with previously published data which modelled the effect of the V127 mutation on a mouse PrP background, and which indicated that the V127 variant is inherently more resistant to fibrilisation than WT PrP^[@CR46]^.Fig. 9Quantitative analysis of the effect of V127 on PrP fibril formation.**a** Formation of amyloid fibrils as reported by increasing Thioflavin T (ThT) fluorescence. The lines superimposed on the data are non-linear curve fits to Eq. ([2](#Equ2){ref-type=""}), as described in the Methods section. **b**, **c** Fibrillogenesis of V127/M129 PrP occurs with significantly longer mean half- and lag times in comparison to WT G127/M129 PrP (*P* ≤ 0.01, paired *t*-test). Centre lines show the medians for each data set; box limits indicate the 25th and 75th percentiles as determined by R software (<http://shiny.chemgrid.org/boxplotr/>); whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles, outliers are represented by dots. *N* = 5 sample points for each data set. 
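The half-times and lag times reported in panels **b**, **c** come from fitting a sigmoidal growth model to the ThT traces. The paper's Eq. (2) is not reproduced in this excerpt, so the sketch below uses a generic Boltzmann sigmoid as a stand-in, for which the lag time is conventionally taken as *t*~lag~ = *t*~half~ − 2/*k*; all parameter values are invented for illustration:

```python
import math

def tht_sigmoid(t, f0, fmax, t_half, k):
    """Boltzmann sigmoid commonly used for ThT fibrillisation traces
    (a stand-in for the paper's Eq. (2), which is not given here)."""
    return f0 + (fmax - f0) / (1.0 + math.exp(-k * (t - t_half)))

def lag_time(t_half, k):
    """Conventional lag time: intercept of the maximal-slope tangent
    with the baseline, t_lag = t_half - 2/k."""
    return t_half - 2.0 / k

# Invented parameters: WT converts earlier than the V127 variant.
wt = dict(f0=0.0, fmax=1.0, t_half=20.0, k=0.5)    # times in hours
v127 = dict(f0=0.0, fmax=1.0, t_half=35.0, k=0.5)

print(f"WT lag = {lag_time(wt['t_half'], wt['k']):.1f} h")
print(f"V127 lag = {lag_time(v127['t_half'], v127['k']):.1f} h")
```

In practice the parameters would be obtained by non-linear least-squares fitting of this model to the fluorescence traces, with lag and half-times then compared across variants as in the figure.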
Amplification of protease-resistant PrP^Sc^ seed (PMCA) {#Sec11} ----------------------------------------------------- Although fibrillar material can be generated under these partially denaturing conditions, such material has not been shown to be reliably infectious. In contrast, the protein misfolding cyclic amplification (PMCA) technique^[@CR47]^ has been shown to amplify infectious and PK-resistant material with high fidelity. PMCA is a cyclical process where periods of conversion of substrate PrP^C^ by small amounts of PrP^Sc^ "seed" are interspersed with bursts of sonication. We performed PMCA reactions using brain homogenate from mice overexpressing WT G127/M129 PrP (Tg35), WT G127/V129 PrP (Tg152) or V127/M129 PrP (Tg183). Both WT PrP substrates allowed amplification of PK-resistant material. In contrast there was no amplification using V127/M129 as substrate (Fig. [10](#Fig10){ref-type="fig"}). These results are consistent with the observed disease characteristics of the vCJD strain type, which propagates most readily with WT PrP with methionine at residue 129, and which failed to generate protease-resistant PrP or cause disease in transgenic mice expressing solely V127/M129 PrP^[@CR25]^.Fig. 10Western blot indicating the presence of PK-resistant material in PMCA reaction (+) and non-sonicated control (−) samples of PrP^Sc^ amplified with Tg35 (huPrP G127/M129), Tg152 (huPrP G127/V129) and Tg183 (huPrP V127/M129) brain homogenate as substrate.In each case, a small amount of seed can be detected in the non-sonicated control samples, with varying levels of amplification observed in the reactions with different substrates. Of note is the lack of amplification with V127/M129 PrP (Tg183) as substrate. Discussion {#Sec12} ========== This structural and biophysical study was stimulated by the remarkable effect of the V127 polymorphism on human prion propagation.
Transgenic mouse transmissions show that V127 PrP is incapable of supporting prion transmission and propagation, consistent with the human clinical resistance data^[@CR24]^, and is even able to inhibit heterologous propagation of wild-type protein containing glycine at residue 127^[@CR25]^. This differs from the residue 129 polymorphism, where similar studies suggest the importance of homologous protein interactions in prion propagation^[@CR17],[@CR18],[@CR20],[@CR48]^, and the preferential selection of different prion strains by PrP molecules with different primary structure as a result of conformational selection^[@CR2],[@CR16]^. Unlike the residue 129 polymorphism, no strain switching or strain mutation was observed with V127, even in the homozygous state, indicating that it confers complete resistance via the variant protein itself^[@CR25]^, leading to the hypothesis that an altered PrP^C^ fold may be the cause of resistance to prion disease^[@CR34]^. The profound effects that these mutations have on human prion disease pathogenesis may provide key insights into the mechanism of prion conversion, and have a wider relevance to other templated protein mis-folding diseases, where changing a single amino acid could have a similar dramatic effect, with potential significance for therapeutic strategies^[@CR9],[@CR10]^. The structural consequences of the glycine to valine substitution at residue 127 on PrP^C^ is therefore of major and wide potential interest. Here, we have shown that there is a close similarity in overall structure between both V127 variants studied (V127/M129 and V127/V129), and wild-type G127/M129 PrP. Solution spectra (CD and NMR) confirm that the crystal structures faithfully reflect the solution structures of the proteins, allowing detailed analysis of the structural effect of the V127 polymorphism on PrP^C^. 
We find little evidence for a major structural change in the β-sheet, or α-helices, in contrast to a recent NMR structure which identified unique features caused by the V127 polymorphism^[@CR34]^. In particular we do not observe any displacement of amino acid side chains within the β-sheet, which are well defined by the electron density, and note that this crystal structure satisfies the inter-residue β-sheet NOE distance constraints used for the NMR structural study within 0.25 Å^[@CR34],[@CR49]^, apart from two which are satisfied within 0.43 Å and 1.03 Å (both to the Hε of Y162). However, distinct perturbations of key regions which affect prion transmission and propagation are observed. Specifically, the V127 substitution reduces the conformational variability of the protein backbone immediately preceding the first strand of the β-sheet and radically alters the local backbone conformation. This facilitates the formation of an additional two intermolecular hydrogen bonds, which stabilise the native-state dimer association observed in the crystals. This dimeric association has been observed in a number of different PrP structures crystallised in the absence of antibody^[@CR30]--[@CR32]^ (Fig. [4](#Fig4){ref-type="fig"}). In these, the PrP β-sheet interface is composed of two intermolecular hydrogen bonds, as in WT G127/M129 PrP^[@CR29]^. The V127 dimer interface presented here is unique in both the length of the β-sheet interface and number of hydrogen bonds (4), and argues against the proposal that V127 disfavours native-state dimerisation, by reducing main-chain hydrogen-bond interactions^[@CR33],[@CR34]^. Formation of the intermolecular β-sheet has been proposed as a possible initiation point for β-sheet-mediated oligomerisation to explain the genetic susceptibility and prion strain selection determined by the polymorphic residue 129 in human prion disease^[@CR23],[@CR29],[@CR30],[@CR32]^. 
If β-strand interactions in this region of the protein mediate PrP interactions during PrP^Sc^ formation, then the packing and geometry of this segment of the chain would have a strong selective effect on conformation, and also productive prion propagation^[@CR29]^. No displacement of the protein backbone or stabilisation of the PrP^C^ dimer interface is caused by the residue 129 polymorphism^[@CR22],[@CR32]^, which may explain the marked effect of the V127 polymorphism on prion pathogenesis. The PrP^C^ dimer interaction does not appear to be a crystal packing artefact as endogenous PrP^C^ dimers have been detected in N2a cells and purified brain fractions^[@CR50],[@CR51]^ with the dimerisation region mapped to the hydrophobic domain of PrP (residues 112--133)^[@CR52]^. PrP^C^ dimerisation inhibits PrP^Sc^ accumulation and prion replication^[@CR53],[@CR54]^, and has a dominant-negative inhibitory effect on the conversion of monomeric PrP^C[@CR54]^. These findings suggest that it may be possible to halt prion formation by stabilising PrP^C^ dimers. Given the strengthened dimer association seen in the V127 crystals this may be one aspect of the protective mechanism of the V127 mutant. The conformational restriction imposed by the V127 polymorphism may also be sufficient to inhibit homotypic protein--protein contacts in heterodimers of V127 and G127 PrP, or prevent the formation of extended β-sheet structure required to convert the PrP N-terminal unstructured region into protease-resistant β-enriched forms^[@CR41],[@CR42],[@CR46],[@CR55]^. Alternatively, the marked alteration in local backbone conformation and increased stability of intermolecular β-sheet interactions may prevent PrP folding into a thermodynamically permissible prion assembly^[@CR3],[@CR4],[@CR56],[@CR57]^. 
As V127 PrP also inhibits the generation of infectious assemblies of wild-type PrP, this would suggest that it must be either capping nascent prion assemblies or structurally weakening infectious prion assemblies on incorporation. The PMCA data presented here indicates that V127 PrP is not a permissive substrate for amplification of protease-resistant PrP^Sc^ disease seed. It is possible that the polymorphism introduces a protease cleavage site, which would lead to the destruction of a polymer formed of the homomeric V127 protein. PrP V127 incorporation would also dope, with a dose-dependent effect, heteromeric polymers composed of both G127 and V127 PrP variants. Their reduced stability could increase cellular clearance, which would also explain the "dominant negative" effect of V127 on prion propagation^[@CR58]^. In addition to these local structural perturbations, long-range sequence interactions between the protective residue 127 and 129 polymorphisms affect the conformational distribution of a spatially distinct region including the β-sheet, β2-α2 loop, the C-terminus of helix 3 and the disease-associated residue D178^[@CR37]^. These conformationally variable elements have been shown to be key determinants of prion transmission and cross-species prion susceptibility, and a number are associated with inherited forms of prion disease, for example G131V, D167N, V210I, E211Q, Q212P and Q217R^[@CR26],[@CR28],[@CR37],[@CR59]--[@CR64]^. In particular, the β2-α2 loop (residues 165--172) has been proposed to be a key modulator of prion transmission and disease-associated PrP misfolding^[@CR26]--[@CR28],[@CR35],[@CR36],[@CR58]^. V127 alters the structural flexibility of the β-sheet and conformational dynamics of the β2-α2 loop by disrupting the electrostatic interaction between R164 of the β-sheet and E168 within the adjacent β2-α2 loop. 
Loss of this interaction would disrupt hydrogen bonding and close packing between Y169, F175 and D178 and destabilise the dominant 3~10~--helical conformation of the β2-α2 loop^[@CR27],[@CR36],[@CR40]^ (Supplementary Figs. [11](#MOESM1){ref-type="media"} and [12](#MOESM1){ref-type="media"}). Notably, residue 168 (human numbering) is polymorphic in sheep PrP, in which either glutamine or arginine can be accommodated^[@CR65]^. The arginine polymorph, which would also diminish the electrostatic interaction with R164, is associated with resistance to scrapie, and potently inhibits prion conversion^[@CR60],[@CR62]^. These observations support the notion that increased conformational variability associated with the loss of charge interaction between R164 and E168 increases resistance to prion disease. Substitution of arginine for glutamine at the totally conserved Q172, the dynamics of which are significantly altered in the V127 polymorph, also potently inhibits in vitro prion infectivity^[@CR66]^. The recent NMR structural study of V127 PrP also identified significant alterations in the conformational dynamics in the β-sheet and α-helix 2 as a result of the polymorphism^[@CR34]^. Although there are a number of residues perturbed in both studies, the marked effect that we observe on the dynamics of the β2-α2 loop was not observed in the previous study. The differences in protein dynamics may be ascribed to the different solution conditions for data acquisition. In this study PrP was crystallised at pH 8.0, with the NMR solution dynamics acquired at pH 5.5, whereas the previous NMR study used pH 4.5, which may cause glutamate and aspartate side chains to adopt an unphysiological protonated state^[@CR34]^. The calculated p*K*~a~ values of E168 and D167 in V127/M129 PrP are 4.7 and 4.4, respectively^[@CR67]^. The hydrogen-bonding and charge interactions of the β2-α2 loop involving the side-chains of these residues will be weakened through increased protonation.
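The degree of protonation at each pH follows directly from the Henderson--Hasselbalch relation; using the calculated p*K*~a~ of 4.7 for E168, the neutral (protonated) fraction rises more than four-fold on going from pH 5.5 to pH 4.5:

```python
def fraction_protonated(ph, pka):
    """Henderson-Hasselbalch: fraction of an acidic sidechain in the
    neutral (protonated) state at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

pka_e168 = 4.7  # calculated value quoted in the text

for ph in (5.5, 4.5):
    f = fraction_protonated(ph, pka_e168)
    print(f"pH {ph}: {100 * f:.0f}% protonated")
# pH 5.5: ~14% protonated; pH 4.5: ~61% protonated
```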
This will likely cause altered conformational dynamics in the β2-α2 loop and adjacent regions. For example, the lack of assigned resonances for R164 reported by Zheng et al.^[@CR34]^ may be ascribed to intermediate-timescale exchange broadening these signals beyond detection. Given the p*K*~a~ values of the interacting residues, the pH at which structural studies are carried out can thus be highly significant. It is of great interest that both the V127 and V129 polymorphisms have long-range effects on the conformational distribution of these regions of the protein and also the β-sheet. The correlation between these conformationally variable regions of PrP and its propensity to form disease-related isoforms suggests that these regions of the protein are important in prion assembly-PrP^C^ interactions, determining efficient binding and conversion^[@CR27],[@CR36]^. Indeed, relatively subtle variations in the strength and orientation of monomer docking can dramatically affect the productivity of fibrillogenic interactions and determine barriers to amyloid formation^[@CR68]^. Given the effect of the V127 polymorphism on the PrP β-sheet backbone geometry and the intermolecular association of PrP monomers observed here, it is tempting to speculate that dimerisation via formation of the intermolecular PrP β-sheet may be a critical event in PrP oligomerisation and prion propagation, which would explain the exceptional effect of the residue 127 polymorphism on human prion disease.

Methods {#Sec13}
=======

Recombinant PrP and antibodies {#Sec14}
------------------------------

Recombinant human PrP containing residues 119--231 (PrP^119--231^) was produced and purified as previously described^[@CR69]^.
This length of construct was chosen because the PrP N-terminus up to approximately residue 125 is unstructured in full-length (residues 23--231) and truncated (residues 91--231) PrP and compromises the NMR dynamics characterisation through its effect on the rotational tumbling of the structured domain^[@CR70],[@CR71]^. Removal of the N-terminal tail does not affect the structure or local structural fluctuations of the PrP structured globular domain^[@CR72],[@CR73]^. PrP containing valine at residue 127 (V127/M129 and V127/V129), and wild-type PrP with glycine at residue 127 on both 129 methionine and valine backgrounds (G127/M129 and G127/V129), were expressed and purified for biophysical analysis. ICSM 18 was purchased from D-Gen Limited. The Fab fragment of ICSM 18 was prepared by limited papain digestion of the mature antibody, followed by purification using gel filtration chromatography.

Crystallisation conditions {#Sec15}
--------------------------

For preparation of the complex, ICSM 18 Fab and PrP were mixed at a 3:1 molar ratio and incubated at room temperature for 30 min before buffer-exchanging the complex into 50 mM Tris, 150 mM NaCl, pH 8.0 and filtering through a 0.22 µm membrane prior to crystallisation. Crystals of the complex were obtained using the sitting-drop vapour diffusion technique; droplets containing 5--6 mg/mL PrP in 0.4 M and 0.75 M ammonium sulphate, 0.05 M Tris (pH 7.5 and 8.0) were equilibrated over wells containing 0.8 M and 1.5 M ammonium sulphate, 0.1 M Tris (pH 7.5 and 8.0). Round crystals grew over 6 months to 0.05--0.2 mm in diameter.

In situ data collection and analysis {#Sec16}
------------------------------------

Data were collected at room temperature in situ on beamline I03 at Diamond Light Source, with the crystallisation plates sealed in biohazard bags.
Multiple wedges of data were collected from different parts of the same crystal, and from different crystals, and scaled together to provide a complete dataset. We typically collected 15° of data from each crystal in 0.3° oscillations. Data were integrated with *XDS*^[@CR74]^; *BLEND*^[@CR75]^ was then used to analyse how well the different wedges of data scaled together, and the results were used to decide which datasets should be scaled and merged with *AIMLESS*^[@CR76]^.

Structure determination and refinement {#Sec17}
--------------------------------------

The structures were solved by molecular replacement using *PHASER*^[@CR77]^, with the heavy and light chains of the Fab fragment of antibody ICSM 18 and the PrP molecule used as search models (Protein Data Bank accession code 2W9E). Electron density maps were inspected and the models built using *COOT*^[@CR78]^, followed by refinement with *REFMAC5*^[@CR79]^. Data collection and final refinement statistics are summarised in Supplementary Table [1](#MOESM1){ref-type="media"}. Ramachandran statistics for the V127/M129 and V127/V129 structures (in parentheses) are as follows: residues in most favoured regions, 96.7% (96.7%); residues in additionally allowed regions, 3.3% (3.3%); residues in disallowed regions, 0.0% (0.0%)^[@CR80]^. The final coordinates of the V127/M129 and V127/V129 structures have been deposited in the Protein Data Bank (<http://www.rcsb.org>), with accession numbers 6SV2 and 6SUZ, respectively.

NMR sample preparation and spectroscopy {#Sec18}
---------------------------------------

For the NMR study, ^15^N- and ^13^C/^15^N-labelled samples of PrP were prepared.
Following purification, protein samples were either (A) buffer-exchanged into 20 mM sodium acetate, containing 1.5 mM sodium azide (NaN~3~), pH 5.5 through dialysis, then concentrated in Vivaspin 20 centrifugal concentrators to protein concentrations of 0.8--1.2 mM or (B) dialysed against deionised water, then lyophilised and resuspended in 20 mM sodium acetate, 1.5 mM NaN~3~, pH 5.5. 10% D~2~O (v/v) was added to the NMR samples to provide the lock signal, together with TSP (1 mM final concentration) as the chemical shift reference. NMR samples were placed in Sigma FEP NMR sample tube liners (Z286397--1EA), held within Wilmad PP-528 NMR tubes for NMR data acquisition. Assignment spectra for V127/M129 PrP were acquired at 303 K on Bruker DRX-600 and DRX-800 spectrometers, with ^15^N-relaxation measurements for V127 and V129 PrP acquired at 298 K on Bruker Avance III 500 and 800 MHz spectrometers, all equipped with 5 mm ^13^C/^15^N/^1^H triple-resonance probes. Sensitivity-enhanced ^1^H-^15^N HSQC^[@CR81],[@CR82]^ and standard triple-resonance experiments^[@CR83]^ with uniformly ^13^C/^15^N-labelled protein (HNCA, HNCACB, CBCA(CO)NH and HNCO) were used to obtain V127/M129 backbone resonance assignments. Proton chemical shifts were referenced to TSP. ^13^C and ^15^N chemical shifts were calculated relative to TSP, using the gyromagnetic ratios of ^13^C, ^15^N and ^1^H (^15^N/^1^H = 0.101329118; ^13^C/^1^H = 0.251449530). For residue 127, when comparing ^13^C chemical shifts, the difference in residue type was compensated for by subtracting residue-specific random coil shifts (glycine/valine) to generate secondary chemical shifts, which were then subtracted^[@CR84]^. NMR data were processed and analysed using Felix 2007 (Accelrys, San Diego), *Topspin* (v 3.2, Bruker) and CCPN Analysis (v. 2.3.1)^[@CR85]^ software.
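The indirect referencing described above can be sketched in a few lines: the zero-ppm frequency of a heteronucleus is obtained by multiplying the observed ^1^H frequency of TSP by the frequency ratio. The ratios are those given in the text; the 800 MHz ^1^H frequency used below is a placeholder, not a value from the study.

```python
# Frequency ratios relative to 1H, as given in the text
XI = {"15N": 0.101329118, "13C": 0.251449530}

def indirect_zero_frequency(nu_tsp_1h_hz: float, nucleus: str) -> float:
    """Zero-ppm reference frequency for a heteronucleus, derived indirectly
    from the observed 1H frequency of TSP (defined as 0 ppm)."""
    return nu_tsp_1h_hz * XI[nucleus]

def ppm(nu_hz: float, nu_ref_hz: float) -> float:
    """Chemical shift in ppm relative to a zero-point frequency."""
    return (nu_hz - nu_ref_hz) / nu_ref_hz * 1e6

# Example on a nominal 800 MHz spectrometer (hypothetical 1H TSP frequency)
nu_1h = 800.130e6                              # Hz
nu_n0 = indirect_zero_frequency(nu_1h, "15N")  # 15N zero-ppm frequency
print(f"15N zero-ppm frequency: {nu_n0 / 1e6:.6f} MHz")
```

Any observed ^15^N resonance frequency can then be converted to ppm against `nu_n0` with the `ppm` helper.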
Spin relaxation measurements {#Sec19}
----------------------------

Spin relaxation measurements (*T*~1~, *T*~2~ and ^15^N{^1^H}-NOE) were acquired on 1 mM ^15^N-labelled PrP^119--231^ WT (G127/M129) PrP, V127 (V127/M129) PrP and V129 (G127/V129) PrP as described in Yip et al.^[@CR86]^. Briefly, in this methodology heating compensation is improved by the incorporation of a compensation block based on the relaxation block, followed by a pre-scan ^1^H saturation sequence and a constant-length recovery period. The *T*~1~ data were obtained using ^15^N relaxation delays of 50, 100, 200\*, 300, 500, 800\*, 1000, 1500, 2000, 3000, 4000 and 5000 ms. The *T*~2~ data were obtained using ^15^N relaxation delays of 8.5, 17.0, 33.9\*, 50.9, 67.8, 101.8\*, 135.7, 186.6 and 254.4 ms (500 MHz) and 7.8, 15.7, 31.4\*, 47.0, 62.7\*, 94.1, 125.4, 172.5 and 235.2 ms (800 MHz; asterisks denote duplicate measurements). *T*~1~ and *T*~2~ datasets were recorded as pseudo-3D experiments, with a randomised order of time increments. Two separate nitrogen offsets were used to reduce the build-up of off-resonance artefacts during the CPMG block of the *T*~2~ measurements. For the ^15^N{^1^H}-NOE measurement, two two-dimensional spectra were acquired with a relaxation delay of 6 s between scans. Spectra were collected with *t*~1~ acquisition times of 94.7 ms (500 MHz)/59.2 ms (800 MHz) and *t*~2~ (direct) acquisition times of 127.8 ms (500 MHz)/91.8 ms (800 MHz). Errors for time-series *T*~1~ and *T*~2~ data were calculated from the overall standard deviation for duplicate data points in the series. Errors for the NOE data were estimated from measurements of the root mean-square deviation of the base-plane noise in those spectra. Non-linear least-squares (Levenberg-Marquardt) fitting of two-parameter exponential functions to the decay data was performed using in-house routines written in Numerical Python.

*Modelfree* analysis {#Sec20}
--------------------

Protein dynamics were analysed by *Relax* (*v.
3.3.1*)^[@CR38],[@CR39]^, using the *T*~1~, *T*~2~ and ^15^N{^1^H}-NOE spin relaxation data. Reduced spectral density mapping analysis, as implemented by the default *J*(ω) mapping script mode in *Relax*, was used to obtain *J*(ω) values for each field strength. A full *Modelfree* analysis^[@CR87]^ was carried out using the "d'Auvergne" protocol within *Relax*. Extended order parameters (*S*~2~, *S*^f^~2~, *S*^s^~2~), the effective correlation time for fast internal motions (*t*~e~) and intermediate exchange broadening contributions (*R*~ex~) were obtained using this protocol.

Amide exchange protection experiments {#Sec21}
-------------------------------------

Hydrogen-deuterium exchange rates (*k*~ex~) were determined by adding 260 µl of 20 mM sodium acetate, 1 mM sodium azide, pH 4.5, dissolved in 100% (v/v) D~2~O to lyophilised PrP samples, to obtain final protein concentrations of 1 mM. A series of sensitivity-enhanced ^1^H-^15^N HSQC spectra^[@CR81],[@CR82]^ were acquired at 293 K on a Bruker DRX-800 spectrometer. The decay curves of the ^1^H-^15^N HSQC cross-peaks were fitted to single exponential decays with offset, and protection factors (*k*~int~/*k*~ex~) for observable amides were determined using intrinsic amide exchange rates^[@CR88]^ (*k*~int~). Acquisition of the first experiment began \~5 min after mixing, setting a lower limit of \~5 on the detection of protection factors.

Circular dichroism {#Sec22}
------------------

Circular dichroism was measured at 25 °C with a Jasco J-715 spectropolarimeter, using a 0.1 cm pathlength quartz cuvette. The sample temperature was controlled with a circulating water bath. Far-UV (amide) CD spectra were recorded between 180 and 300 nm with 20 μM protein (2 nm bandwidth; data pitch 0.5 nm). In all, 10--50 spectra were averaged.
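The amide-exchange analysis described above reduces to fitting a single exponential with offset to each cross-peak decay and dividing the intrinsic rate by the fitted rate (protection factor written here with the conventional definition *k*~int~/*k*~ex~). A minimal sketch with scipy, assumed here in place of the study's own fitting routines and run on synthetic data; the *k*~int~ value is hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, amplitude, k_ex, offset):
    """Single-exponential decay with offset, as fitted to HSQC peak intensities."""
    return amplitude * np.exp(-k_ex * t) + offset

# Synthetic decay: k_ex = 0.02 min^-1, with a residual-intensity offset
t = np.linspace(0, 300, 30)                     # minutes
intensity = exp_decay(t, 1.0, 0.02, 0.05)

popt, _ = curve_fit(exp_decay, t, intensity, p0=(1.0, 0.01, 0.0))
k_ex_fit = popt[1]

k_int = 1.0                                     # intrinsic rate (min^-1), hypothetical
protection = k_int / k_ex_fit                   # conventional protection factor
print(f"fitted k_ex = {k_ex_fit:.4f} min^-1, protection factor = {protection:.0f}")
```

In practice *k*~int~ would be computed per residue from sequence and conditions (reference 88), not fixed as above.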
Equilibrium unfolding measurements {#Sec23}
----------------------------------

For equilibrium unfolding experiments, 6 μM protein was incubated in 10 mM HEPES, 25 mM NaCl, pH 7.5, with increasing concentrations of the denaturant GuHCl. Molar ellipticity (\[θ\], degree M^−1^ cm^−1^) was recorded at 222 nm (5 nm bandwidth; 20 s integration time). The denaturation profile for each protein was measured in three separate experiments.

Conversion to molar denaturant activity {#Sec24}
---------------------------------------

To allow more accurate extrapolation of the data for calculation of folding parameters in the absence of denaturant and the free energy change of protein folding (Δ*G*), denaturant concentration (\[GuHCl\]) was converted to molar denaturant activity (*D*), as described in Parker et al.^[@CR89]^, using *C*~0.5~ = 7.5.

Equilibrium constant between folded and unfolded states {#Sec25}
-------------------------------------------------------

For the two-state equilibrium unfolding transitions, data were fitted to the following equation, where *K* and *K*~(W)~ are the equilibrium constants between the folded and unfolded states at a given denaturant activity (*D*) and in water, respectively, and *m* describes the sensitivity of the equilibrium to denaturant activity^[@CR89]^.$$\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$${{K}} = {{K}}_{{\mathrm{(W)}}}{\mathrm{exp}}\,\left( {m.{\mathrm{D}}} \right)$$\end{document}$$ For visual representation, the data shown were converted to the proportion folded, *α*~F~ = *K*/(1 + *K*). Data fitting was carried out using *GraFit* (Erithacus Software). The significance of the differences in the free energy of folding and *m* values between the three variants characterised was determined by a paired two-tailed Student's *t* test.
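The two-state fit above can be sketched as follows. GraFit was used in the study; this scipy version is an assumed equivalent run on synthetic, noise-free data, and ln *K*~(W)~ is fitted rather than *K*~(W)~ for numerical stability:

```python
import numpy as np
from scipy.optimize import curve_fit

def fraction_folded(D, lnKw, m):
    """Two-state model: K = K_(W) * exp(m * D); alpha_F = K / (1 + K),
    parameterised by ln K_(W) for a well-conditioned fit."""
    K = np.exp(lnKw + m * D)
    return K / (1.0 + K)

# Synthetic unfolding curve: K_(W) = 1e4 (fully folded in water), m = -6
D = np.linspace(0, 3, 25)                       # molar denaturant activity
alpha = fraction_folded(D, np.log(1e4), -6.0)

popt, _ = curve_fit(fraction_folded, D, alpha, p0=(5.0, -5.0))
lnKw_fit, m_fit = popt

# Free energy of folding in water: dG = -RT ln K_(W) (R in kcal mol^-1 K^-1)
dG_w = -0.001987 * 298.0 * lnKw_fit
print(f"ln K_w = {lnKw_fit:.2f}, m = {m_fit:.2f}, dG(H2O) = {dG_w:.1f} kcal/mol")
```

Real data would be ellipticity rather than *α*~F~, requiring folded- and unfolded-baseline parameters in addition to ln *K*~(W)~ and *m*.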
Quantitative analysis of the kinetics of PrP fibril formation {#Sec26}
-------------------------------------------------------------

Recombinant V127/M129 and WT G127/M129 PrP (residues 119--231) were dialysed into 20 mM sodium acetate, 2 mM sodium azide, pH 6.0, and then denatured by the addition of GuHCl to a final concentration of 6 M. Denatured PrP was then diluted to a final concentration of 10 µM in 20 mM sodium acetate, 2 M GuHCl, 10 mM EDTA, 100 µM Thioflavin T (ThT), pH 6.0. All solutions were filtered through a 0.22-µm filter to remove particulates. Aliquots of 200 µl were placed in silanised Greiner 96-well flat-bottomed plates (\#655077) containing four 0.5-mm diameter zirconium ceramic beads in each well to assist agitation. The plates were incubated at 37 °C with constant agitation in a Tecan Infinite F200 microplate fluorimeter. Fibril formation was monitored through the increase in ThT fluorescence (excitation 430 nm, emission 485 nm), with readings acquired every 600 s. Five replicates were used for each PrP sample. To determine the half- and lag-times for fibril formation, data were fitted to an empirical function described by Nielsen et al.^[@CR90]^.$$\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$${\mathrm{Fi}} + {\mathrm{Ff}}/\left\{ {1 + {\mathrm{exp}}\left[ { - \left( {{{t}}-{{t}}_{\mathrm{m}}} \right)/\tau } \right]} \right\}$$\end{document}$$where Fi is the initial fluorescence reading, Ff is the final fluorescence reading, *t* is time, *t*~m~ is the time taken to reach half-maximal fluorescence and *τ* is the reciprocal of the propagation rate during the rise phase \[1/*k*~(apparent)~\]. The lag-time is defined as *t*~m~ − 2*τ*.
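The half- and lag-time extraction can be sketched by fitting the Nielsen sigmoid to a ThT trace. This is an assumed scipy implementation run on a synthetic trace, not the study's fitting tool:

```python
import numpy as np
from scipy.optimize import curve_fit

def tht_sigmoid(t, Fi, Ff, t_m, tau):
    """Empirical sigmoid of Nielsen et al.: Fi + Ff / (1 + exp(-(t - t_m)/tau))."""
    return Fi + Ff / (1.0 + np.exp(-(t - t_m) / tau))

# Synthetic ThT trace: half-time 20 h, tau 2 h (arbitrary fluorescence units)
t = np.linspace(0, 50, 200)                     # hours
F = tht_sigmoid(t, 100.0, 900.0, 20.0, 2.0)

popt, _ = curve_fit(tht_sigmoid, t, F, p0=(50.0, 500.0, 15.0, 3.0))
Fi, Ff, t_m, tau = popt
lag_time = t_m - 2.0 * tau                      # lag-time as defined in the text
print(f"t_m = {t_m:.1f} h, tau = {tau:.1f} h, lag time = {lag_time:.1f} h")
```

For the five replicate wells per sample, each trace would be fitted independently and the fitted *t*~m~ and lag-time values summarised across replicates.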
Formation of protease-resistant PrP by PMCA amplification {#Sec27}
---------------------------------------------------------

PMCA substrate homogenates were prepared from mice that had been perfused with PBS containing 5 mM EDTA at the time of death. PrP-null (*Prnp*^*o/o*^), Tg35 (homozygous for huPrP G127/M129), Tg152 (homozygous for huPrP G127/V129) or Tg183 (homozygous for huPrP V127/M129) mouse brains^[@CR25]^ were homogenised in cold conversion buffer (PBS containing 150 mM NaCl, 1.0% (v/v) Triton X‐100, 4 mM EDTA and 1× Complete protease inhibitor (Roche Applied Science)), using a Duall tissue grinder to give a 10% (w/v) homogenate. Substrates were seeded with a 1/100 dilution of vCJD (I4618) 10% brain homogenate in PBS. Each reaction mixture was divided in two prior to PMCA, with one half stored at −70 °C as a minus-PMCA control. PMCA consisted of 96 cycles of 30 s sonication every 30 min in a Misonix S3000 at 75% power output (Misonix, Farmingdale, NY); reactions were carried out with 40 µl of substrate in 200-µl thin-walled PCR tubes at 35 °C. Samples were digested with 50 µg ml^−1^ proteinase K (PK) for 1 h at 37 °C. The reaction was stopped by the addition of AEBSF in SDS-loading buffer, and samples were boiled for 10 min before running on 16% Tris-glycine gels. Western blotting was carried out according to Unit protocol, using 3F4 (Merck Inc., NJ, USA) as the primary antibody and goat anti-mouse IgG conjugated to alkaline phosphatase (Sigma A2179) as the secondary antibody.

Statistics and reproducibility {#Sec28}
------------------------------

In the reported experiments, each protein sample was identically engineered. The sample size (*n*) of each experiment is provided in the corresponding figure captions in the main manuscript and supplementary information files. Sample sizes were chosen to support meaningful conclusions. All in vitro folding experiments were replicated at least three times. In vitro fibrillisation assays were replicated five times.
*T*~1~ and *T*~2~ NMR data were recorded with a randomised order of time increments, and each included one duplicate dataset. Replicate experiments were successful. Investigators were not blinded during experimental measurements or data analysis.

Reporting summary {#Sec29}
-----------------

Further information on research design is available in the Nature Research [Reporting Summary](#MOESM7){ref-type="media"} linked to this article.

Supplementary information {#Sec30}
=========================

###### Supplementary Information

###### Supplementary Data 1

###### Supplementary Data 2

###### Supplementary Data 3

###### Peer Review File

###### Description of Additional Supplementary Files

###### Reporting Summary

###### PDB validation report 1

###### PDB validation report 2

**Publisher's note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Supplementary information** is available for this paper at 10.1038/s42003-020-01126-6.

We thank Richard Newton for the preparation of figures. This work was supported by the Medical Research Council. We are grateful to the staff at Diamond Light Source for access to X-ray diffraction data collection facilities. We gratefully acknowledge the longstanding and major contribution of the late Anthony Clarke to our structural studies.

L.L.P.H., R.C., D.S., M.J.C., A.M.H., K.M., R.L.B., G.S.J., J.P.W. and J.C. designed research; L.L.P.H., R.C., D.S., M.B., E.B.S., S.F., M.J.C., A.M.H., K.M., R.L.B. and J.P.W. performed research; D.S. and K.M. contributed new analytic tools; L.L.P.H., R.C., M.J.C., A.M.H., K.M., R.L.B., J.P.W. and J.C. analysed data; L.L.P.H., R.C., K.M., R.L.B., J.B., J.P.W. and J.C. wrote the paper.

The atomic coordinates for the crystal structures described in this paper have been deposited in the Protein Data Bank (<https://www.rcsb.org/>) (accession nos.
6SV2 (V127/M129 PrP) and 6SUZ (V127/V129 PrP)). The data that support the findings of this study are available from the corresponding author upon reasonable request. G.S.J. and J.C. are shareholders, and J.C. is a director, of D-Gen Limited, an academic spin-out company in the field of prion disease diagnosis, decontamination and therapeutics, which provided the ICSM 18 monoclonal antibody used in this study. The remaining authors declare no competing interests.
Long-term effect of Boston brace treatment on renal function in patients with idiopathic scoliosis. The long-term effects of Boston brace treatment on renal function were studied in 20 patients with idiopathic scoliosis. Renal function was tested by clearances of inulin and para-aminohippurate sodium (PAH) when the brace was first applied as well as after four and 12 months of brace treatment. Each function test was performed without and with the brace. The glomerular filtration rate decreased when the brace was first applied, was unchanged after four months, and increased after 12 months. Renal plasma flow decreased when the brace was first applied but was unchanged after four and 12 months. Urinary sodium excretion decreased to values lower than those of control subjects when the brace was first applied, but an adaptive increase was noted after four and 12 months of brace treatment. The acute effects of brace application were observed even after four and 12 months of treatment; an increase in urinary sodium excretion was found when the brace was removed.
Le Spa Mystic Teeth Whitening: How it Works

Teeth whitening is a simple cosmetic procedure if done right. Doing it right means using a proper teeth-whitening agent, an easy and effective method of applying the agent to the teeth, and an appropriate accelerator light. Our cosmetic whitening treatment is considered by industry insiders to be the best. The whitening results are impressive and the "Wow!"-factor is huge.

1. The Whitening Gel: specifically formulated to deliver the maximum whitening results in the shortest amount of time.
2. Our Applia-Brush™ Paint-On Technique: the easy-to-use gel-filled applicator that allows the customer to quickly and effectively apply the Whitening Gel to the teeth to be whitened.
3. The Infinity™ Pro SL Light: correctly tuned to the optimal wavelength and with the highest energy output of any light, it accelerates the whitening process by activating the photosensitive Whitening Gel.