The New Hollywood cinema of the 1970s owes much of its look to a pair of Hungarian film school refugees, Laszlo Kovacs and Vilmos Zsigmond. This documentary by fellow cinematographer Chressanthis is a loving tribute to these two lifelong friends and master craftsmen, with interviews from such colleagues as Peter Bogdanovich, Richard Donner and Dennis Hopper, and clips from classic films shot by Kovacs and Zsigmond, including EASY RIDER, THE DEER HUNTER and CLOSE ENCOUNTERS OF THE THIRD KIND.

In this critically acclaimed sci-fi drama, Sandra Bullock stars as Ryan Stone, who on her first mission is stranded in space when her shuttle is destroyed by debris. The collaboration of Cuarón, cinematographer Emmanuel Lubezki and a team of talented artisans creates cinema that will take your breath away even if your oxygen supply is unlimited. “In depicting the fearful, beautiful reality of the space world above our world, GRAVITY reveals the glory of cinema’s future; it thrills on so many levels.” – Richard Corliss, Time

Eleven-year-old amateur inventor, Francophile and pacifist Oskar Schell (Thomas Horn) discovers a mysterious key among the belongings of his father (Tom Hanks), who died a year earlier in the terrorist attacks on the World Trade Center. Determined to keep his vital connection to the man who playfully cajoled him into confronting his wildest fears, the young boy embarks on an urgent search for the lock the key will open. As Oskar crosses the five New York boroughs on foot - encountering a range of people (including an excellent Max von Sydow) who are each survivors in their own way - he begins to uncover unseen links to the father he misses, to the mother (Sandra Bullock) who has become so emotionally distant, and to the whole noisy, dangerous and often wondrous world around him. Adapted from the acclaimed bestseller by Jonathan Safran Foer. Nominated for two Academy Awards: Best Picture and Best Supporting Actor (von Sydow).
{ "pile_set_name": "Pile-CC" }
Report Shows Even After Snowden, Gov’t Passes Secret Laws for National Security

When former NSA contractor Edward Snowden blew the whistle on the agency’s surveillance practices, Americans went nuts. But who’s to say other, equally alarming programs aren’t currently in place? According to a new report by the Brennan Center for Justice at NYU School of Law, we can’t know for sure.

The report, “The New Era of Secret Law” by Elizabeth Goitein, describes how different branches of government are able to enact and enforce policies without ever telling the public they exist. How is this possible, when statutes are passed by Congress, federal agencies must publish new rules and make them subject to public comment, and court decisions are typically published? Well, it turns out there are other ways to set rules or policies that have the same effect as laws.

For starters, the very surveillance program that Snowden revealed was authorized by the Foreign Intelligence Surveillance Court (FISA Court). The FISA Court ruled in the program’s favor, even though on the surface it may have gone against the PATRIOT Act. The government’s former practice of using methods like waterboarding when interrogating members of Al Qaeda and the Taliban was sanctioned by a legal memorandum from the Justice Department’s Office of Legal Counsel (OLC). Neither one was made available to the public at the time, and only certain members of Congress got to see them.

After Snowden’s big reveal, the Director of National Intelligence made certain FISA Court opinions public, but the report says that most pre-Snowden FISA case law is still secret, based on the Brennan Center’s Freedom of Information Act requests and conversations with Justice Department officials. On top of that, at least 20 percent of the opinions issued by the OLC between 1998 and 2013 were classified. 
No fewer than 74 opinions, memoranda, or letters issued between 2002 and 2009 on national security topics, including the “detention and interrogation of suspected terrorists, intelligence activities, and the law of armed conflict,” are still classified. Justice Department spokesman Kevin Lewis told the Washington Post, “Some opinions may not be appropriate for public release because they could reveal classified national security information or implicate confidential executive branch deliberations.”

A great deal of secret law is created by exemptions in FOIA that shield certain information from disclosure. The first, known as Exemption 1, says that classified information is not subject to disclosure under FOIA. It turns out that rules themselves can be classified, keeping the public from seeing them. Another, known as Exemption 3 and often used by the CIA to hide much of its activity, covers matters “specifically exempted from disclosure by statute.”

A big argument for this is that secret law is often necessary for national security purposes: making the existence of the laws known alerts potentially dangerous parties to issues the government is looking into. The report argues that this practice goes too far. “National security has always required some level of secrecy in the details of operations,” it says. “The law is different. In the case of regulations and similar instruments, these establish general rules for conduct — not plans for specific operations.”

The report suggests that if the government is going to create secret law, it should at least make that law known to all branches of government, as well as to “independent oversight bodies,” so that someone is watching even if the public is kept in the dark. Brian Hale, a spokesman for the Office of the Director of National Intelligence, told the Post that the government is working on increasing transparency, where appropriate. 
“In the last several years the government has engaged in an unprecedented level of transparency regarding its intelligence collection authorities,” Hale said, adding that the government “continues to review for declassification and public release additional older FISC opinions as part of the on-going transparency effort.”
{ "pile_set_name": "Pile-CC" }
SINGAPORE: Will China ever allow a different system of government in Hong Kong? That is “wishful thinking replacing reality” by some protesters, said Singapore’s Home Affairs and Law Minister K Shanmugam.

In an interview with South China Morning Post and Lianhe Zaobao – the transcript of which was released on the Ministry of Law’s website on Sunday (Aug 11) – Mr Shanmugam addressed questions about his views on the situation in Hong Kong. Solutions have to be found, both for the socio-economic and the ideological issues that Hong Kong is facing, he said. To solve the problems, Hong Kong needs a supportive China, and the solutions need to work for both Hong Kong and China, he added.

But with the “deeply entrenched positions” of some protesters on ideological issues, there is “no easy way forward” for Hong Kong, Mr Shanmugam said. “Hong Kong is part of China. Beijing will expect Hong Kong to adapt to the political structure that prevails in China. Adapt, not adopt,” he said. “Some of the protesters seem to think that China will allow a very different system in Hong Kong. That is wishful thinking replacing reality,” he said. “How will China's leaders look at it? You sing the US national anthem, you speak in Mandarin and tell the Chinese tourists to go back and take these ideas back to China. The leaders could think Hong Kong is just the start of something that some people hope to start in the rest of China.”

“IDEOLOGY MUST SQUARE WITH REALITY”

Mr Shanmugam’s comments came amid another tense weekend in Hong Kong, with demonstrators taking to the streets in a movement that began in opposition to a Bill allowing extradition to mainland China but has become a call for greater democratic freedoms. 
The weeks of increasingly violent protests have plunged the city into its biggest political crisis for decades and pose a serious challenge to Beijing, which has condemned the protests and accused foreign powers of fuelling unrest.

In his interview, Mr Shanmugam also criticised international news organisations for their “very superficial analysis” and for “engaging in labelling” on the events in Hong Kong. “All protesters are automatically, generally, democracy fighters. Police, on the other hand, are oppressive, attacking the forces of democracy, using excessive force. ‘They’re negative, they’re an evil force.’” Some of the news coverage reflects a “skewed perspective, from a very ideological lens”, he said.

China has “competent, (the) best people” in its government. And over 35 years, the country has lifted 500 million to 600 million people out of poverty, Mr Shanmugam said. “No country has done that in history, in 35 years,” he said. “Not enough credit is given for that. It’s a huge achievement.” Could that have been achieved under another system of government? Can another political system do better for the people of China, compared to the current system? There is none – and ideology must square with reality, Mr Shanmugam said.

“SINGAPORE BENEFITS FROM STABILITY IN THE REGION”

The minister also dismissed as “superficial” the comments that Singapore benefits from the instability in Hong Kong. “We benefit from stability across the region, including Hong Kong. If China does well, Hong Kong does well, the region does well, we do well,” Mr Shanmugam said. “There’s no profit in seeing instability. And if Hong Kong is at odds with China, it’s a problem for everyone, including us.” Hong Kong’s strengths as a financial centre and its valuable position as an outpost for China are not going to go away overnight, he said. 
Mr Shanmugam also said the majority of Singaporeans think they are lucky that the same things are not happening in their home. “If this happened to us, it would be bad for our economy and we don’t have the advantages that Hong Kong have to weather such a situation,” he said. “Hong Kong has the huge advantage of China’s support. Singapore has no one to support it. “So from that perspective, I think Singaporeans see that and they say if this happens in Singapore, it will be very troublesome and they are grateful that it is not happening here.”
{ "pile_set_name": "OpenWebText2" }
TravelTab rents technology to travelers, including GPS devices, smartphones and tablets loaded with apps, and WiFi hotspots. Their mobile devices are critical to their business. Challenges include users who are not necessarily technically competent or patient, and operations dispersed across many locations without local technical support.

“Our IT team needed to unify endpoint management and streamline app deployments, in addition to providing more secure access to a revolving door of travelers,” said Maria Cotton, mobile device systems administrator at TravelTab. “Using the VMware Workspace ONE managed mobility solutions provided by Vox Mobile as a managed cloud service—along with tapping into Vox Mobile’s expertise in mobility innovation and infrastructure management—helps us transform traveler experiences.”

Rob Seemann, VP of sales and marketing at Vox Mobile, explained that the TravelTab project’s challenges included volume, variety, and velocity:

Volume: With an immediate migration of thousands of devices, the TravelTab environment is larger than most enterprises’. The mobile apps that run on these devices are the most important feature for users and for the business model.

Variety: TravelTab supports a wider variety of devices and serves a range of users, from the least technically sophisticated to the tech-savvy. Add an expansive catalog of custom apps and you have an environment that is far more varied than in our usual enterprise mobile solutions.

Velocity: TravelTab cycles devices through customers much more quickly than most of our enterprise clients, as devices are rented for days or weeks and then need to be reset and ready for the next customer.

Vox Mobile selected Workspace ONE managed mobility solutions from VMware as the ideal solution to tackle these challenges. With Workspace ONE in Vox Mobile’s Managed Cloud, devices are fully configured remotely, over the air. 
Local staff don’t need to know anything about the technology or the configuration of our managed mobility solutions. Workspace ONE is highly available and instantly scalable, with a high-reliability infrastructure that supports bursts of usage or constant expansion. Mobile device management capabilities have never been more easily available: with consumption-based pricing, costs are aligned to needs, eliminating upfront infrastructure costs while delivering the benefits as you need them. There is no need to buy or install a mobile device management (MDM) tool – you just pay for what you use.

Customer information is invariably collected on the devices as travelers use them to connect to web-based services or make purchases. That information needs to be secured while customers have the devices and then erased as soon as the devices are returned. As the leading EMM/UEM platform in the industry, Workspace ONE has these security features and capabilities built in. (https://www.air-watch.com/gartner-report-2018/)
{ "pile_set_name": "Pile-CC" }
Brachytelephalangic chondrodysplasia punctata in a female child. We report the case of a female child born to nonconsanguineous parents who presented at birth with facial dysmorphism, including a flattened and hypoplastic nose, associated with epiphyseal stippling of the tarsal bones, the right hip, and the cervical, lumbar, and sacral regions of the spinal column, and hypoplasia of the distal phalanges of the fingers. The pregnancy history was negative for exposure to alcohol or drugs. The karyotype was normal. The clinical and radiological features strongly suggest brachytelephalangic chondrodysplasia punctata. Previously described only in males, this condition had not been detected in a female; its gene has been assigned to Xp22.3. The present observation of brachytelephalangic chondrodysplasia punctata in a female raises questions about the genetic heterogeneity of this syndrome.
{ "pile_set_name": "PubMed Abstracts" }
Compliance with referral for curative care in rural Burkina Faso. The goal of this study is to contribute to improving the functioning of the referral system in rural Burkina Faso. The main objective is to ascertain the compliance rate for referral and to identify the factors associated with successful referral. A record review of 12 months of curative consultations in eight randomly selected health centres was conducted to identify referral cases. To assess referral compliance, all patient documents at referral hospitals from the day of the referral up to 7 days later were checked to verify whether the referred case arrived or not. Descriptive statistics were then used to compute the compliance rate. Hierarchical modelling was performed to identify patient and provider factors associated with referral compliance. The number of visits per person per year was 0.6 and the referral rate was 2.0%. The compliance rate was 41.5% (364/878). After adjustment, females (OR = 0.71; 95% CI = 0.52-0.98), patients referred during the rainy seasons (OR = 0.56; 95% CI = 0.40-0.78), non-emergency referrals (OR = 0.47; 95% CI = 0.34-0.65) and referrals without a referral slip (OR = 0.30; 95% CI = 0.21-0.43) were significantly less likely to comply. Children between 5 and 14 years old (OR = 0.61; 95% CI = 0.35-1.06) were at a higher risk of non-compliance, but the difference did not reach statistical significance. Moreover, none of the provider characteristics was statistically significantly associated with non-compliance. CONCLUSIONS: In a rural district of Burkina Faso, we found relatively low compliance with referral after the official referral system was organized in 2006. Patient characteristics were significantly associated with a failure to comply. 
Interventions addressing female patients' concerns, increasing referral compliance in non-emergency situations, reducing inconvenience and opportunity costs due to seasonal/climate factors, and assuring the issue of a referral slip when a referral is prescribed may effectively improve referral compliance.
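The study's adjusted odds ratios come from hierarchical models fitted to the full record review; as a minimal sketch of the underlying arithmetic, the crude (unadjusted) odds ratio and its 95% confidence interval can be computed from a 2×2 table with the standard Woolf log-OR method. The cell counts below are purely hypothetical, for illustration only; the overall compliance figure is the one reported in the abstract.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% CI (Woolf method) for a 2x2 table:
    a = group 1 compliant, b = group 1 non-compliant,
    c = group 2 compliant, d = group 2 non-compliant."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Overall compliance rate reported in the abstract: 364 of 878 referrals
print(f"compliance: {364 / 878:.1%}")  # -> 41.5%

# Hypothetical counts (NOT the study's raw data), for illustration only
or_, lo, hi = odds_ratio_ci(a=120, b=180, c=244, d=334)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An OR below 1 with an upper CI bound below 1 is what the abstract reports for females, rainy-season referrals, non-emergency referrals, and referrals without a slip; the child age group's interval crosses 1, matching its non-significant result.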
{ "pile_set_name": "PubMed Abstracts" }
Q: How do I declare a property of a record as an abstract function

I'm new to F# and functional programming and need some help. I come from C#, so my mindset still gets in the way a bit. I need to pass some options to a function and I'm using a record for this. One of the options is a continuation function unit -> Option<'a>. I can't figure out how to define the record type. Below is an example of what I've been trying.

type Func2<'a> = 'a -> 'a option

type ProcessOptions = {
    func1: int -> int option
    func2: Func2<int> // This works...
    //func2: Func2<'a> // ...but this is what I'm trying to achieve - so that I can pass any Func2<'a> using this record.
}

let f1 a =
    let r = Some a
    printfn "f1: %A" r
    r

let f2 (a: 'a) =
    let r = Some a
    printfn "f2: %A" r
    r

let f3 (processOptions: ProcessOptions) =
    processOptions.func1 3 |> ignore
    processOptions.func2 789 |> ignore
    ()

let f4 (processOptions: ProcessOptions) =
    processOptions.func1 4 |> ignore
    //processOptions.func2 "abc" |> ignore // as a result this does not work...
    ()

[<EntryPoint>]
let main argv =
    f1 1 |> ignore
    f2 123 |> ignore
    f2 "abc" |> ignore
    let fo = { func1 = f1; func2 = f2 }
    f3 fo
    let fo1 = { func1 = f1; func2 = f2 }
    f4 fo1
    0

A: A member inside a record cannot be a generic function (that you can call with different types of arguments such as int or string). It will always have one fixed type. 
A trick you can use is to define a simple interface with a generic method:

type Func2 =
    abstract Invoke<'a> : 'a -> 'a option

Now your members in the record can be just of type Func2 (with no generic type arguments), but the Invoke method inside Func2 will be generic:

type ProcessOptions = {
    func1: Func2
    func2: Func2
}

Creating Func2 values is a bit harder than writing ordinary functions, but you can use object expressions:

let f1 =
    { new Func2 with
        member x.Invoke(a) =
            let r = Some a
            printfn "f1: %A" r
            r }

And you can now pass around ProcessOptions and call the Invoke method with different types of arguments:

let f4 (processOptions: ProcessOptions) =
    processOptions.func1.Invoke 4 |> ignore
    processOptions.func2.Invoke "abc" |> ignore

f4 { func1 = f1; func2 = f1 }
{ "pile_set_name": "StackExchange" }
Cholesterol sensing by the ABCG1 lipid transporter: Requirement of a CRAC motif in the final transmembrane domain. The ATP-binding cassette (ABC) transporter ABCG1 is a lipid exporter involved in the removal of cholesterol from cells that has been investigated for its role in foam cell formation and atherosclerosis. The mechanism by which ABC lipid transporters bind and recognise their substrates is currently unknown. In this study, we identify a critical region in the final transmembrane domain of ABCG1 that is essential for its export function and for its stabilisation by cholesterol, a post-translational regulatory mechanism that we have recently identified as dependent on protein ubiquitination. This transmembrane region contains several Cholesterol Recognition/interaction Amino acid Consensus (CRAC) motifs, as well as inverted CARC motifs. Mutational analyses identify one CRAC motif in particular, with Y667 at its core, that is especially important for transport activity to HDL as well as for stability of the protein in the presence of cholesterol. In addition, we present a model of how cholesterol docks to this CRAC motif in an energetically favourable manner. This study identifies for the first time how ABCG1 can interact with cholesterol via a functional CRAC domain, which provides the first insight into the substrate-transporter interaction of an ABC lipid exporter.
{ "pile_set_name": "PubMed Abstracts" }
Abbreviations {#nc005}
=============

PPCI : primary percutaneous coronary intervention
PCI : percutaneous coronary intervention
MT : medical therapy
STEMI : ST-elevation myocardial infarction
ECG : electrocardiogram
LBBB : left bundle branch block
VF : ventricular fibrillation
ICU : intensive care unit
CVDs : cardiovascular diseases
CAD : coronary artery disease
AMI : acute myocardial infarction
HF : heart failure
NSTEMI : non-ST-elevation myocardial infarction
UA : unstable angina
IHD : ischemic heart disease
DM : diabetes mellitus
COPD : chronic obstructive pulmonary disease
HLoS : hospital length of stay
BMI : body mass index
GUSTO-1 : Global Utilization of Streptokinase and Tissue Plasminogen Activator to treat Occluded Arteries
t-PA : tissue plasminogen activator
PAMI-1 : Primary Angioplasty in Myocardial Infarction

Introduction {#s0005}
============

Cardiovascular disease (CVD) is the leading cause of death worldwide, and coronary artery disease (CAD) is the most prevalent manifestation associated with high mortality and morbidity [@b0005]. Heart failure is the end-stage of several cardiovascular diseases such as acute myocardial infarction (AMI), and it remains a major challenge for regenerative medicine because of its high prevalence and incidence in elderly patients [@b0010]. However, the long-term incidence of heart failure (HF) in patients with ST-elevation myocardial infarction (STEMI), non-ST-elevation myocardial infarction (NSTEMI), or unstable angina (UA) is uncertain [@b0015]. Studies report cardiovascular disease as the leading cause of death in the elderly and describe its direct correlation with aging [@b0020]. The elderly may experience higher mortality from STEMI due to severe comorbidities, advanced CAD, as well as mechanical and electrical complications of AMI [@b0025; @b0030]. 
Further, several disorders often coexist in the elderly, such as ischemic heart disease (IHD), hypertension, diabetes mellitus (DM), chronic obstructive pulmonary disease (COPD), chronic renal failure, digestive system disorders, and joint and bone disorders, which occur more often in this group of patients [@b0030; @b0035; @b0040; @b0045; @b0050]. Cardiologists are therefore increasingly confronted with the management challenges of elderly patients presenting with STEMI. Optimal management of acute coronary syndromes in this population has been an area of uncertainty, as there is a paucity of evidence-based data due to their exclusion and under-representation in clinical trials [@b0055]. Primary percutaneous coronary intervention (PPCI) and pharmacologic therapy are widely used and constitute a vital treatment strategy for AMI [@b0060; @b0065; @b0070]. Pharmacologic therapy must be initiated prior to angiography (pretreatment) and continued during the procedure (periprocedural), the recovery phase (in-hospital), and follow-up [@b0075]. The purpose of this study is to evaluate the prognosis of PPCI and medical therapy (MT) in elderly patients presenting with STEMI. Methods {#s0010} ======= We conducted a retrospective study on 301 STEMI patients (aged ⩾80) treated with PPCI and MT at Harefield Hospital, London during the period between January 2005 and February 2010. Sixty-three patients were excluded from the study as they did not have true STEMI, based on a non-diagnostic ECG for STEMI and negative troponin, or had presented with left bundle branch block (LBBB) and had normal coronaries. ST-elevation myocardial infarction was defined as the presence of ST-elevation or new left bundle branch block on electrocardiography in addition to suspicion of ongoing ischemia. Primary PCI was defined as any use of a guidewire for more than diagnostic purposes in patients with STEMI. 
Conventional MT was defined as treatment with anti-platelet and anti-thrombotic medications without thrombolysis. The demographic variables, body mass index (BMI), comorbidities and hospital length of stay (HLoS) were also collected for analysis. The protocol of the study was approved by the research ethics committee of the hospital. Data analysis was carried out using Microsoft Excel 2002 (Microsoft Corporation, Redmond, WA, USA), and the Statistical Package for Social Sciences version 16 (SPSS Inc., Chicago, IL, USA). Data were presented as percentage and mean ± standard error of mean. Chi-square test was used to compare the differences between comorbidities of PPCI and MT, while Mann--Whitney U test was used to compare the HLoS. *P*-value of \<0.05 was considered statistically significant. Results {#s0015} ======= Demographics, BMI, and hospital length of stay of the study population are shown in [Table 1](#t0005){ref-type="table"}. A total of 186 patients were treated with PPCI and 52 patients were treated with conventional MT. There were 107 (45%) males and 131 (55%) females. The mean age of the PPCI group was 83.92 years and 84.76 years for the MT group (*P* \> 0.05). The MT group HLoS (four days) was longer than that of the PPCI group (three days) (*P* = 0.039). The survival of the PPCI group is demonstrated in [Fig. 1](#f0005){ref-type="fig"}. The survival rate of PPCI patients was 86% (*n* = 160) at month 1, followed by 83.9% (*n* = 156) at month 6, and 81.2% (*n* = 151) at month 12. The survival rate of the MT group is demonstrated in [Fig. 2](#f0010){ref-type="fig"}. The survival rate of MT patients was 44.2% (*n* = 23) at month 1, followed by 36.5% (*n* = 19) at month 6, and 34.6% (*n* = 18) at month 12. The Kaplan--Meier survival curves of PPCI are shown in [Fig. 3](#f0015){ref-type="fig"}. The comorbidities of the PPCI and medical groups during admission are shown in [Fig. 4](#f0020){ref-type="fig"}. 
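The survival percentages quoted above follow directly from the survivor counts and group sizes; as a quick arithmetic check (plain Python, no statistics library assumed):

```python
# Survivors at 1, 6 and 12 months, as reported in the Results section
ppci_total, mt_total = 186, 52
ppci_survivors = {1: 160, 6: 156, 12: 151}
mt_survivors = {1: 23, 6: 19, 12: 18}

for month in (1, 6, 12):
    ppci_rate = ppci_survivors[month] / ppci_total * 100
    mt_rate = mt_survivors[month] / mt_total * 100
    print(f"month {month:>2}: PPCI {ppci_rate:.1f}%  MT {mt_rate:.1f}%")
# -> month  1: PPCI 86.0%  MT 44.2%
#    month  6: PPCI 83.9%  MT 36.5%
#    month 12: PPCI 81.2%  MT 34.6%
```

These are crude proportions of the initial cohorts; the paper's Kaplan--Meier curves (Fig. 3) handle censoring, which simple division does not.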
Compared to stroke, renal failure (RF) and cancer, hypertension and DM were the major comorbidities in both groups. Compared to MT, significantly fewer comorbidities were found in the PPCI group. Complications related to PPCI during admission are shown in [Fig. 5](#f0025){ref-type="fig"}. Ventricular fibrillation (VF) (4.8%) and consequent admission to the intensive care unit (7%) were the major complications of PPCI. Discussion {#s0020} ========== The Global Utilization of Streptokinase and Tissue Plasminogen Activator for Occluded Coronary Arteries (GUSTO-1) trial demonstrated that the 30-day mortality rate for STEMI increased tenfold among the elderly, from 3.0% in the age group of \<65 years to 30.3% among those \>85 years of age [@b0080]. This high mortality rate in the very elderly was also confirmed by several other studies [@b0085; @b0090; @b0095; @b0100; @b0105]. On the other hand, several large multi-center trials have demonstrated that reperfusion therapy of STEMI, whether primary percutaneous coronary intervention (PCI) or fibrinolysis, reduced mortality for elderly patients [@b0025]. In the present study, we also observed that treatment with PPCI was feasible and beneficial for patients with STEMI. Limited research has been conducted on the prognosis of PPCI management of STEMI in the elderly. There are few trials comparing PPCI with fibrinolytic therapy that enroll an adequate number of older patients. Existing subset analyses from trials that randomized patients to primary PCI or fibrinolytic therapy suggest that PCI is the preferred strategy in older patients. The Primary Angioplasty in Myocardial Infarction (PAMI-1) study randomized patients to immediate PCI or fibrinolytic therapy, and observed that the greatest benefit was in patients over 65 years. 
There was no significant reduction in the combined endpoint of death/MI in patients under 65 years (0.8% mortality in both groups), but there was a marked reduction in the same endpoint in patients \>65 (death/MI was 8.6% with angioplasty versus 20% with tPA) [@b0110]. In the GUSTO-IIB trial, the largest randomized trial comparing angioplasty with thrombolytic therapy, 1138 patients were randomized to receive either accelerated tPA or primary angioplasty. Although primary angioplasty resulted in better 30-day outcomes (death/myocardial infarction (MI)/stroke occurred in 9.6% versus 13.7% with tPA), there was no significant difference in death/MI at six months (13.3% versus 15%, respectively) [@b0115]. The recent TRatamiento del Infarto Agudo de miocardio eN Ancianos (TRIANA) trial showed a trend towards improved outcomes for patients treated with PCI versus fibrinolysis, with a combined end point of death, recurrent MI and disabling stroke at 30 days. PCI patients had marked improvement in recurrent ischemia. In the earlier SENIOR-PAMI trial, similar results were obtained, but a subgroup analysis of patients \>80 years of age showed no benefit of PCI over fibrinolysis [@b0120]. The present study shows that the survival rate of PPCI patients was 86% (*n* = 160) at month 1, followed by 83.9% (*n* = 156) at month 6, and 81.2% (*n* = 151) at month 12. It also suggests that patients treated with PPCI for STEMI have a good prognosis if they survive the initial months. A recent study reported that PCI is the preferred treatment for patients with STEMI owing to improved vessel patency, decreased infarct size, lower rates of reinfarction, and improved survival compared to pharmacological reperfusion [@b0125]. There is little in the literature to direct and guide STEMI therapy in elderly patients, especially those whose age is greater than or equal to 80 years [@b0025]. 
Elderly patients usually have complex CAD with higher mortality and morbidity, and higher rates of complication following PPCI, such as stroke and renal failure [@b0025; @b0130; @b0135; @b0140]. Advanced age is an independent predictor of mortality after PPCI, and elderly patients show a larger prevalence of female gender, hypertension, and diabetes [@b0085], which was confirmed in our study. We observed that hypertension and DM were the major co-morbidities of both the PPCI and MT groups. Overall, the patients treated with PPCI had fewer comorbidities compared to the patients treated with MT, which means that our hospital had selected the right patients for PPCI to ensure a better outcome. We also found a benefit in HLoS, which was shorter in the PPCI group. The MT group HLoS (four days) was higher than the PPCI group (three days). Major limitations of this study were the relatively small number of patients, the limited number of risk factors examined, the study's retrospective nature, and that samples were from a single hospital. However, despite its retrospective nature, the present study is important in light of the paucity of evidence-based clinical outcome data due to research exclusion and under-representation in the elderly STEMI patient group. In future, absolute and relative risks for efficacy and safety in age subgroups should be reported and trials should make an effort to enroll the elderly in proportion to their prevalence among the treated population. Outcomes of particular relevance to the older adult, such as quality of life, physical function, and independence should also be evaluated and geriatric conditions unique to this age group, such as frailty and cognitive impairment, should be considered for their influence on care and outcomes. With these efforts, treatment risks can be minimized and benefits can be placed within the health context of elderly patients. 
![Survival rate (%, *n*) of primary percutaneous coronary intervention (*n* = 186).](gr1){#f0005}

![Survival rate (%, *n*) of medical therapy (*n* = 52).](gr2){#f0010}

![Survival curve: primary percutaneous coronary intervention patients.](gr3){#f0015}

![Comorbidity in primary percutaneous coronary intervention (*n* = 186) and medical therapy group (*n* = 52).](gr4){#f0020}

![Complications of primary percutaneous coronary intervention patients (*n* = 186) during admission.](gr5){#f0025}

###### The demographic, BMI and hospital length of stay of the study population.

  Variables                   Angioplasty     Medical
  --------------------------- --------------- ------------------------------------------------
  *Gender*                                    
  Male                        80 (43%)        27 (52%)
  Female                      106 (57%)       25 (48%)
  Total                       **186**         **52**
  *Age*                                       
  80--85                      128 (68.8%)     32
  ⩾86--90                     42 (22.58%)     14
  ⩾91--95                     14 (7.52%)      6
  ⩾96                         2 (1.07%)       0
  *BMI*                                       
  Male                        24.4            24.3
  Female                      24.9            25.1
  *Hospital length of stay*                   
  HLoS                        3 (2--5 days)   4 (2--7 days)[⁎](#tblfn1){ref-type="table-fn"}
  --------------------------- --------------- ------------------------------------------------

  ⁎*P* = 0.039, Mann--Whitney test.
Q: PageIndexChanged is not working

I am using a RadGrid with a pager. When I click the next page number in the pager, the data is not displayed (the grid is not rebound). Can anyone help me fix this? Here is my code:

    protected void Page_Load(object sender, EventArgs e)
    {
        try
        {
            if (!IsPostBack)
            {
                Session["SearchRes"] = null;
                if (Session["TaskName"] != null)
                    lblTskName.Text = Session["TaskName"].ToString();
                Session["FilColms"] = null;
                Session["SortExp"] = null;
                Session["FilExp"] = null;
                Session["ViewAll"] = null;
                BindGrid();
            }
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }

    private void BindGrid()
    {
        try
        {
            DataSet dsResult = new DataSet();
            clsSearch_BL clsObj = new clsSearch_BL();
            clsObj.TaskID = (string)Session["TaskID"];
            clsObj.CustName = (string)Session["CustName"];
            clsObj.MarketName = (string)Session["MarketName"];
            clsObj.HeadendName = (string)Session["HeadendName"];
            clsObj.SiteName = (string)Session["SiteName"];
            clsObj.TaskStatus = (string)Session["TaskStatus"];
            clsObj.OrdType = (string)Session["OrdType"];
            clsObj.OrdStatus = (string)Session["OrdStatus"];
            clsObj.ProName = (string)Session["ProName"];
            clsObj.LOC = (string)Session["LOC"];
            clsObj.QuoteID = (string)Session["QuoteID"];
            clsObj.CMNumber = (string)Session["CMNumber"];
            if (Session["SearchRes"] == null)
            {
                dsResult = clsObj.getSearchResults_BL(clsObj);
                Session["SearchRes"] = dsResult;
            }
            else
            {
                dsResult = (DataSet)Session["SearchRes"];
            }
            DataView dataView = dsResult.Tables[0].DefaultView;
            rg200.DataSource = dsResult;
            rg200.DataBind();
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }

    protected void rg200_UpdateCommand(object source, Telerik.Web.UI.GridCommandEventArgs e)
    {
        if (Session["TaskID"] != null)
        {
            string strTaskID = (string)Session["TaskID"];
            if (strTaskID != string.Empty)
            {
                clsTaskUpdates_BL objBL = new clsTaskUpdates_BL();
                GridEditableItem editedItem = e.Item as GridEditableItem;
                // Get the primary key value using the DataKeyValue.
                string OrdID = editedItem.OwnerTableView.DataKeyValues[editedItem.ItemIndex]["orderId"].ToString();
                // Access the textboxes from the edit form template and store the values in string variables.
                string ClarifyAccountNbr = ((GridTextBoxColumnEditor)editedItem.EditManager.GetColumnEditor("Clarify Account Nbr")).TextBoxControl.Text;
                string SiteID = ((GridTextBoxColumnEditor)editedItem.EditManager.GetColumnEditor("Site ID")).TextBoxControl.Text;
                string QuoteID = ((GridTextBoxColumnEditor)editedItem.EditManager.GetColumnEditor("Quote ID")).TextBoxControl.Text;
                CheckBox chkEDP = ((GridCheckBoxColumnEditor)editedItem.EditManager.GetColumnEditor("EDP Created?")).CheckBoxControl;
                //string ClarifyAccountNbr = (editedItem["Clarify Account Nbr"].Controls[0] as TextBox).Text;
                //string SiteID = (editedItem["Site ID"].Controls[0] as TextBox).Text;
                //string QuoteID = (editedItem["Quote ID"].Controls[0] as TextBox).Text;
                //CheckBox chkEDP = (editedItem["EDP Created?"].Controls[0] as CheckBox);
                try
                {
                    objBL.setTask200_Bl(OrdID, ClarifyAccountNbr, SiteID, QuoteID, chkEDP.Checked);
                    Session["SearchRes"] = null;
                    BindGrid();
                }
                catch (Exception ex)
                {
                    rg200.Controls.Add(new LiteralControl("Unable to update Employee. Reason: " + ex.Message));
                    e.Canceled = true;
                }
            }
        }
    }

    protected void rg200_PageIndexChanged(object source, GridPageChangedEventArgs e)
    {
        try
        {
            rg200.CurrentPageIndex = e.NewPageIndex;
            BindGrid();
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }

A: Your code shows that you are using simple binding with DataBind() calls. With this approach you must manually change the page index in the PageIndexChanged handler, assign the data source to the grid, and rebind it. Alternatively, use NeedDataSource binding to spare some manual coding.
Category: Online Essay Writers 247

Top Recommendations of Write The Essay Service

You cannot merely begin composing an essay; you should brainstorm beforehand so that you can produce the content that is necessary. If you wish to understand who to pay for an essay and acquire an authentic...
Zomato can now deliver your food using drones thanks to new government policy Under the Drone 2.0 policy, India is all set to allow the commercial use of drones - as delivery vehicles, air taxis - beyond the visual line of sight. From a blanket ban on drones until a few years ago to rolling out a robust regulatory roadmap for drone operations, the Indian government has come a long way. Now, it is gearing up for the next phase of growth. Zomato's exhibit at Global Aviation Summit At the first-ever Global Aviation Summit organised by FICCI and the Government of India, Minister of State for Civil Aviation Jayant Sinha unveiled the Drone 2.0 policy that will come into effect in March 2019. It is expected to create a spectrum of business opportunities for all stakeholders - equipment manufacturers, service providers, engineers, etc. - in the drone sector. Under the Drone 2.0 policy, India is all set to allow the commercial use of drones - as delivery vehicles, air taxis, and other services - beyond the visual line of sight. That means Zomato can deliver your food on drones. Even Uber can ferry you from Point A to B on air taxis. (Uber Air is already a thing in some Western countries. The aviation ministry hopes India will get there at some point.) Until now, the government had put a ban on the commercial use of drones or unmanned aerial vehicles (UAVs) owing to security reasons. The Drone 1.0 policy unveiled in August 2018 limited the use of drones to only aerial photography, filmmaking, disaster relief, and recreational activities. However, the Ministry of Civil Aviation has addressed these security concerns in its Drone 2.0 policy. To ensure safe and lawful drone operations in the country, the government has proposed a DigitalSky network that will segregate airspace into drone corridors and label them as Red, Yellow and Green Zones. Red Zones are those that bar drones from flying. 
Airports, military areas, and other high-security locations like Rashtrapati Bhavan and Parliament House come under Red Zones. To fly drones in Yellow Zones, operators need to be “NPNT-compliant”. NPNT or ‘No Permission, No Takeoff’ is a protocol developed by the government to control the airspace used by drones. And Green Zones will, of course, allow operators to get easy permissions to fly drones. Source: Ministry of Civil Aviation Further explaining the framework, Sinha stated, “Both the drone service provider and the air traffic management will have full control over the drone’s journey. If you deviate from your drone corridor, the traffic controller can safely land it or send it home. Different drone ports will be created for different types of drones.” He also urged entrepreneurs and engineers to “start developing” drone technologies and applications keeping this framework in mind. “There will be plenty of Green Zones, and you will have enough opportunities to explore,” he added. Except for nano-drones, all other drones have to be registered for lawful operations. “We want to put in place the right standards, the right regulations and the right ecosystem such that India can lead the world in drone technologies. We would also like to partner with other countries to make this happen,” Sinha said. Globally, the commercial drone market is estimated to be $100 billion, with countries in the EU enabling a host of interesting use cases for businesses as well as customers. India is keen to grab a big share of that pie, the minister revealed.
1. Introduction {#sec1} =============== Chronic obstructive pulmonary disease (COPD) is a common, preventable, and treatable disease that is characterized by persistent respiratory symptoms and airflow limitation due to airway and/or alveolar abnormalities, usually caused by significant exposure to noxious particles or gases \[[@B1]\]. COPD was the third leading cause of death in China in 2010, behind only stroke and ischemic heart disease \[[@B2]\]. Inhalation of cigarette smoke or other noxious particles, such as smoke from biomass fuels, causes lung inflammation. The chronic inflammatory response may induce parenchymal tissue destruction (resulting in emphysema) and disruption of normal repair and defense mechanisms (resulting in small airway fibrosis) \[[@B1]\]. c-Jun N-terminal kinase (JNK) and p38 mitogen-activated protein kinase (MAPK) signaling, the main components of the MAPK pathway, are closely correlated with the inflammatory response. The JNK and p38 MAPK pathways can be activated by environmental stimuli, such as tobacco smoke, and by endogenous signals, such as cytokines, growth factors, and inflammation-derived oxidants. Recent studies have suggested that activation of the MAPK pathway contributes to several COPD-associated phenotypes, including mucus overproduction and secretion, inflammation, and cytokine expression \[[@B3]\]. Inflammatory mediators and chemotactic factors, including tumor necrosis factor-*α* (TNF-*α*), interleukin- (IL-) 6, and IL-10, whose production is mediated in part by the p38 MAPK pathway, contribute to the formation of pulmonary emphysema \[[@B4], [@B5]\]. TNF-*α* increases the expression of monocyte chemoattractant protein-1 (MCP-1), at least partly by enhancing phosphorylation of p38 and JNK \[[@B6]\]. 
Several studies have shown that therapies, such as treatment with bone marrow-derived mesenchymal stem cells (MSCs) and Panax ginseng (Ren Shen), may relieve airway inflammation and emphysema via the MAPK pathway \[[@B7], [@B8]\]. In recent years, Traditional Chinese Medicine (TCM) therapies, including internal and external treatments, have played an increasingly important role in stable COPD because of their favorable curative effect and few side effects \[[@B9], [@B10]\]. The pattern of lung-kidney qi deficiency, one type of TCM syndrome, is one of the most common syndromes in the stable phase of COPD \[[@B11]\]. Many different factors, such as cigarette smoking and noxious particles, may lead to lung qi weakness, and the patients will present dyspnea, shortness of breath, weakness, and spontaneous perspiration (worse with exertion); over time, kidney qi is also damaged and becomes weak, and the patients will additionally present tinnitus, vertigo, frequent micturition, frequent urination at night, and soreness and weakness of the waist and knees. Bufei Yishen granules (ZL.201110117578.1), a special prescription for internal treatment of lung-kidney qi deficiency syndrome, were clinically proven effective in relieving clinical symptoms and reducing the frequency of acute exacerbations in stable COPD patients \[[@B12]\]. Additionally, Bufei Yishen granules were also confirmed effective in ameliorating systemic and airway inflammation and remodeling in a cigarette smoke/bacterial exposure-induced COPD rat model and in preventing COPD and its comorbidities, such as ventricular hypertrophy \[[@B13]--[@B15]\]. Shu-Fei Tie (ZL.200810049332.3) is a popular clinically used ointment for acupoint sticking in external therapy that can excite vital qi in the human body and has been proven therapeutic in COPD treatment, with high safety, convenience, and few side effects \[[@B16], [@B17]\]. 
Our previous study has shown that Bufei Yishen granules combined with Shu-Fei Tie can alleviate clinical symptoms, reduce the frequency and duration of acute exacerbation, and improve lung function and quality of life in patients with stable COPD and also showed beneficial effect in a 4-month treatment period and 6 months of follow-up \[[@B18]\]. Our previous animal experimental study has also shown that this approach improves pulmonary function and lung pathological impairment in COPD rats \[[@B19]\]. We also found that this therapy can suppress oxidative stress in COPD rats \[[@B20]\]. However, whether or not it can suppress inflammation in COPD rats remains unclear. We want to know whether the effect of Bufei Yishen granules combined with Shu-Fei Tie is related to anti-inflammation. Thus, our current study was performed to examine the mechanism of Bufei Yishen granules combined with Shu-Fei Tie therapy on inflammation regulated by JNK and p38 MAPK signaling in COPD rats. 2. Materials and Methods {#sec2} ======================== 2.1. Animal Model {#sec2.1} ----------------- Seventy-two Sprague-Dawley rats (equal number of males and females, 2 months old, 180--220 g) were obtained from the Laboratory Animal Center of Henan Province (Special Pathogen Free, SCXK \[Henan\] 2010-0002) and randomly assigned to the Control, Model, Bufei Yishen (BY), acupoint sticking (AS), Bufei Yishen + acupoint sticking (BY + AS), and aminophylline (APL) groups (12 in each group). The methods were performed according to the approved guidelines of the Experimental Animal Care and Ethics Committee of the First Affiliated Hospital, Henan University of Traditional Chinese Medicine, Zhengzhou, China. After accommodating to the facility for 7 days, COPD rats were exposed to cigarette smoke and*Klebsiella pneumoniae* (KP) for model establishment according to previously described methodology \[[@B21]\]. 
Commercial cigarettes (Hongqiqu® Filter Cigarette, Henan, China) were provided by Henan Tobacco Industry Co., Ltd., and each of these cigarettes contained 1.0 mg nicotine, 11 mg CO, and 10 mg tar oil, according to the manufacturer\'s specifications.*Klebsiella pneumoniae* (strain: 46114) was purchased from the National Center For Medical Culture Collection (Beijing, China) and prepared at a concentration of 6 × 10^8^ colony forming units (CFU) per milliliter before being administered to the animals. Animals were exposed to smoke (smoke concentrations, 3,000  ±  500 ppm) for 30 min, twice a day for 12 weeks.*Klebsiella pneumoniae* solution (0.1 ml, 6 × 10^8^ colony forming units/ml) was dropped into the two nostrils in an alternate fashion, once every 5 days, for the first 8 weeks. The successful generation of a COPD rat model was evaluated according to symptoms, lung function, and pulmonary pathology \[[@B22]\]. 2.2. Drugs {#sec2.2} ---------- \(1\) Aminophylline tablets (Xinhua, Shandong, China, 0.1 g/tablet) were crushed prior to administration to the animals. (2) Bufei Yishen granules \[Ren Shen (Ginseng Radix et Rhizoma) 9 g, Huang Qi (Astragali Radix) 15 g, Shan Zhu Yu (Corni Fructus) 12 g, Yin Yang Huo (Epimedii Herba) 9 g, Gou Qi Zi (Lycii Fructus) 12 g, and Wu Wei Zi (Schisandrae Chinensis Fructus) 9 g etc.\] were prepared by the Department of Pharmacology in the First Affiliated Hospital of Henan University of Traditional Chinese Medicine, Zhengzhou, China. (3) Shu-Fei Tie mainly consisted of Bai Jie Zi (Semen Brassicae) 10 g, Yan Hu Suo (Rhizoma Corydalis) 5 g, Xi Xin (Asarum Heterotropoides) 5 g, and Yuan Hua (Daphne Genkwa) 10 g and also included other components, 3.0 g/tubes. The main chemical compounds of Bufei Yishen granules and Shu-Fei Tie had been described in our published article \[[@B20]\]. The main component of Shu-Fei Tie placebo was carbopol, diatomaceous earth, and glycerine, each unit equivalent to 3.0 g. 
The placebo was also similar to the true drug in its appearance, weight, color, and odor. Shu-Fei Tie and its placebo were produced and packed by the Department of Pharmacology in the First Affiliated Hospital of Henan University of TCM, which was the reform base of TCM preparation and dosage formulation. 2.3. Administration {#sec2.3} ------------------- From weeks 9 through 20, rats in the Control and Model groups were intragastrically given normal saline (2 ml/animal, b.i.d) and Shu-Fei Tie placebo (2 times/week); Bufei Yishen granules (4.44 g/kg·d, b.i.d) and Shu-Fei Tie placebo were given to the BY group; normal saline and Shu-Fei Tie were given to the AS group; Bufei Yishen granules (4.44 g/kg·d, b.i.d) and Shu-Fei Tie were given to the BY + AS group, and aminophylline (2.3 mg/kg·d, b.i.d) and Shu-Fei Tie placebo were given to the APL group. Dosage adjustments were made weekly according to body mass. The equivalent dosages were calculated by using the following formula: *D*~rat~ = *D*~human~ × (*I*~rat~/*I*~human~) × (*W*~rat~/*W*~human~)^2/3^. *D*: dose; *I*: body shape index; *W*: body weight. Rats in each group were sacrificed at week 20. Methods of acupoint sticking: as shown in [Figure 1](#fig1){ref-type="fig"}, the acupoint sticking was applied at Dazhui (GV14), Feishu (BL13, both sides), and Shenshu (BL23, both sides). A combination of these five acupoints can improve the lung qi and kidney qi, as well as preventing cough and asthma. The method of acupoint sticking and skin injury treatment was according to \[[@B19]\]. All rats were sacrificed at week 20 and samples were harvested. 2.4. Bronchoalveolar Lavage and Total and Differential Cell Counts {#sec2.4} ------------------------------------------------------------------ Experimental rats were sacrificed, and the left lungs were lavaged twice with 3 ml of PBS via tracheal cannulation after the right main bronchus was ligated. 
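The equivalent-dose formula in Section 2.3 can be checked numerically. A minimal sketch follows; the human dose, body weights, and body shape indices used here are illustrative assumptions, not values reported in the study:

```python
def equivalent_dose(d_human, i_rat, i_human, w_rat, w_human):
    """Rat-equivalent dose by the body-shape-index / surface-area formula:
    D_rat = D_human * (I_rat / I_human) * (W_rat / W_human) ** (2/3).
    D: dose; I: body shape index; W: body weight."""
    return d_human * (i_rat / i_human) * (w_rat / w_human) ** (2.0 / 3.0)

# Hypothetical inputs (NOT from the study): 600 mg/day human dose,
# 60 kg human, 0.2 kg rat, shape indices 10 (human) and 9 (rat).
dose = equivalent_dose(d_human=600.0, i_rat=9.0, i_human=10.0,
                       w_rat=0.2, w_human=60.0)
print(round(dose, 2))  # → 12.05 (mg/day per rat under these assumptions)
```

The 2/3 exponent reflects the standard body-surface-area scaling between species; the shape-index ratio is the study's additional correction factor.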
An equal volume of BALF was collected, and 10 *μ*l was used for total cell counts by using the "cell-count boards" method. The BALF supernatants were obtained by centrifugation (1,500 rpm  ×  10 min) at 4°C, and the samples were stored at −70°C for subsequent enzyme-linked immunosorbent assays (ELISA). The cell sediment was smeared evenly on glass slides and fixed and stained with hematoxylin-eosin. Cells were identified and differentiated into mononuclear cells, neutrophils, and lymphocytes according to standard morphology and staining characteristics. Two hundred cells per slice were quantified, and the absolute number of each cell type was calculated under a light microscope. 2.5. Enzyme-Linked Immunosorbent Assay {#sec2.5} -------------------------------------- MCP-1, IL-2, IL-6, and IL-10 concentrations in BALF were quantified by using a commercial ELISA kit (RapidBio, USA) according to the manufacturer\'s protocol. 2.6. Lung Morphology {#sec2.6} -------------------- After lavage with 10% formaldehyde and fixation for 72 h, the lung tissues were cut into 3 mm thick sections, embedded in paraffin, sliced into 4 *μ*m slices, and stained with a standard method (hematoxylin-eosin) for light microscopy. 2.7. Immunohistochemical {#sec2.7} ------------------------ For additional immunohistochemical staining, 4 *μ*m cuts were obtained. Primary antibodies against MCP-1, IL-2, IL-6, and IL-10 hydroxyguanosine (BOSTER, Wuhan, China) were used for specimen staining with the immunoperoxidase avidin-biotin method in an automatic stainer (Autostainer, Dako, Denmark). The antigen-antibody reaction was visualized with 3,3-diaminobenzidine tetrahydrochloride (DAB). Image-Pro Plus 6.0 was used for image capture and analysis. The integral optical density (IOD) represented the cytokine expression level. 2.8. 
Quantitative Real-Time PCR and Western Blotting Analysis {#sec2.8} ------------------------------------------------------------- The expression of JNK and p38 mRNA in lung tissues was analyzed using quantitative real-time PCR (qRT-PCR). The protein expressions of JNK, p-JNK, p38, and p-p38 in lung tissue were measured by Western blotting. The methods have been described in our previous study \[[@B20]\]. Primers for JNK and p38 MAPK were designed and synthesized by Generay Biotech Co. Ltd. (Shanghai, China), and the sequences used in this study are shown in [Table 1](#tab1){ref-type="table"}. 2^−ΔΔCT^ was used to calculate the changes in the relative expression of the genes in each sample. 2.9. Statistical Analysis {#sec2.9} ------------------------- SPSS 19.0 software (IBM; Armonk, NY, USA) was used for data analysis. Data are expressed as the mean ± standard error (SE). One-way analysis of variance (ANOVA) was employed for multiple comparisons. *P* \< 0.05 was considered a significant difference. 3. Results {#sec3} ========== 3.1. Pulmonary Histopathological Changes {#sec3.1} ---------------------------------------- As shown in [Figure 2(a)](#fig2){ref-type="fig"}, the structure of the pulmonary alveoli and airway was fully intact in Control rats. In contrast, rats in the Model group showed alterations in the submucosal and glandular tissue, including infiltration by inflammatory cells and other severe pathological changes such as epithelial-cell hyperplasia, alveolar cavity expansion, thickened small conducting airways, and connective tissue in the peribronchiolar space. Rats in the BY, AS, BY + AS, and APL groups exhibited small airway wall thickening and connective tissue hyperplasia in the peribronchiolar space, although the pathological changes were alleviated in the treatment groups to different degrees compared with the Model group, particularly in the BY and BY + AS groups. 
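The 2^−ΔΔCT^ relative-expression calculation used in Section 2.8 can be sketched as follows; the Ct values below are hypothetical, with GAPDH as the reference gene as in Table 1:

```python
def delta_delta_ct(ct_target_sample, ct_ref_sample,
                   ct_target_control, ct_ref_control):
    """Relative gene expression by the 2^-ddCt method:
    dCt = Ct(target) - Ct(reference), computed per sample;
    ddCt = dCt(sample) - dCt(control); fold change = 2 ** -ddCt."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values: JNK vs. GAPDH in a model rat and a control rat.
fold = delta_delta_ct(ct_target_sample=24.0, ct_ref_sample=18.0,
                      ct_target_control=26.0, ct_ref_control=18.0)
print(fold)  # dCt = 6 vs. 8, ddCt = -2, fold change = 4.0
```

A fold change above 1 indicates higher expression in the sample than in the control after normalization to the reference gene.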
As shown in [Figure 2(b)](#fig2){ref-type="fig"}, the total white blood cell (WBC) count and the percentage of neutrophils in the Model group were higher than those in the Control group (*P* \< 0.01), whereas the percentages of lymphocytes and monocytes were lower (*P* \< 0.01 or *P* \< 0.05). Compared with those in the Model group, the total WBC count and the percentage of neutrophils in the BY, AS, BY + AS, and APL groups were significantly decreased (*P* \< 0.05 or *P* \< 0.01), whereas the percentage of lymphocytes in the BY and BY + AS groups and the percentage of monocytes in the BY, BY + AS, and APL groups were increased (*P* \< 0.05 or *P* \< 0.01). Compared with that in the APL group, the total WBC count in the BY + AS group was decreased (*P* \< 0.05). The total WBC count and the percentage of neutrophils in the BY + AS group were decreased in comparison to the AS group (*P* \< 0.05). 3.2. IL-2, IL-6, IL-10, and MCP-1 Levels in Bronchoalveolar Lavage Fluid (BALF) {#sec3.2} ------------------------------------------------------------------------------- As shown in [Figure 3](#fig3){ref-type="fig"}, the levels of IL-2, IL-6, and MCP-1 in the Model group were higher compared to those in the Control group, whereas the level of IL-10 was lower (*P* \< 0.01). Compared with those in the Model group, the levels of IL-2, IL-6, and MCP-1 in the BY, AS, BY + AS, and APL groups were significantly decreased, whereas the level of IL-10 was increased (*P* \< 0.05 or *P* \< 0.01). Compared with those in the APL group, the levels of IL-2 and IL-6 in the BY and BY + AS groups were decreased (*P* \< 0.05 or *P* \< 0.01), and the level of MCP-1 in the BY + AS group was also decreased (*P* \< 0.05). The levels of IL-6 in the BY and BY + AS groups were decreased in comparison to the AS group (*P* \< 0.01), whereas the level of IL-10 was increased (*P* \< 0.05). The level of MCP-1 in the BY + AS group was decreased compared with that in the BY group (*P* \< 0.05). 3.3. 
IL-2, IL-6, IL-10, and MCP-1 in Lung Tissue {#sec3.3} ------------------------------------------------ As shown in [Figure 4(a)](#fig4){ref-type="fig"}, IL-2 was mainly detected in the tracheal mucosal epithelium. IL-6 was distributed in the alveolar walls and alveolar interstitium, and IL-10 and MCP-1 were detected in the alveolar interstitium. As shown in Figures [4(b)](#fig4){ref-type="fig"}, [4(c)](#fig4){ref-type="fig"}, [4(d)](#fig4){ref-type="fig"}, and [4(e)](#fig4){ref-type="fig"}, the levels of IL-2, IL-6, and MCP-1 in the Model group were higher than that in the Control group, whereas the level of IL-10 was lower (*P* \< 0.01). Compared with those in the Model group, the levels of IL-2, IL-6, and MCP-1 in the BY, AS, BY + AS, and APL groups were significantly decreased, whereas the level of IL-10 was increased (*P* \< 0.01). Compared with those in the APL group, the levels of IL-2, IL-6, and MCP-1 in the BY and BY + AS groups were decreased, whereas the level of IL-10 was increased (*P* \< 0.05 or *P* \< 0.01). The level of MCP-1 in the AS group was higher than that in the APL group (*P* \< 0.01). In addition, the levels of IL-2, IL-6, and MCP-1 in the BY and BY + AS groups were lower than that in the AS group (*P* \< 0.01), whereas the level of IL-10 was higher (*P* \< 0.01). The level of MCP-1 in the BY + AS group was lower than that in the BY group (*P* \< 0.05). 3.4. The mRNA and Protein Expression of JNK and p38 MAPK in the Lung {#sec3.4} -------------------------------------------------------------------- As shown in [Figure 5(a)](#fig5){ref-type="fig"}, JNK and p38 MAPK mRNA expression in the Model group was increased compared with that in the Control group (*P* \< 0.01). Compared with the Model group, JNK and p38 MAPK mRNA expression in the BY, AS, BY + AS, and APL groups was decreased (*P* \< 0.05 or *P* \< 0.01). 
JNK and p38 MAPK mRNA expression in the BY + AS group was decreased compared with the AS group (*P* \< 0.05), while p38 MAPK mRNA expression was decreased compared with the APL group (*P* \< 0.05). As shown in [Figure 5(b)](#fig5){ref-type="fig"}, the protein expression of JNK and p-JNK in the Model group was higher than that in the Control group (*P* \< 0.01). Compared with those in the Model group, the protein expression levels of JNK and p-JNK in the BY, AS, BY + AS, and APL groups were significantly decreased (*P* \< 0.01). From the highest to the lowest expression, the protein level of p-JNK in each group was as follows: AS group, APL group, BY group, and BY + AS group, with no significant differences among these groups (*P* \> 0.05). As shown in [Figure 5(c)](#fig5){ref-type="fig"}, the protein expression of p38 and p-p38 MAPK in the Model group was increased compared with that in the Control group (*P* \< 0.05, *P* \< 0.01). Compared with those in the Model group, the expression levels of p38 in the BY, BY + AS, and APL groups were decreased (*P* \< 0.05 or *P* \< 0.01), whereas the level of p-p38 was decreased in each group (*P* \< 0.01). The protein expression of p-p38 in the BY, BY + AS, and APL groups was lower than that in the AS group (*P* \< 0.05 or *P* \< 0.01), although p-p38 expression was lower in the BY + AS group compared with the APL group (*P* \< 0.05). 4. Discussion {#sec4} ============= This study was to evaluate the anti-inflammatory efficiency of Bufei Yishen granules, Shu-Fei Tie, and their combination in COPD rat model, and the results suggested that Bufei Yishen granules combined with Shu-Fei Tie therapy were beneficial for relieving lung and airway inflammation in COPD rats and that this effect was mediated via the downregulation of JNK and p38 MAPK signaling pathway. In recent years, increasing attention has focused on the beneficial effects of TCM therapies in patients with stable COPD. 
The syndrome of lung-kidney qi deficiency is one of the most common syndromes in stable COPD. Our previous clinical study has confirmed the beneficial effect of Bufei Yishen granules in stable COPD \[[@B12]\]. Recently, we preliminarily discussed the potential targets of Bufei Yishen granules by using systems pharmacology \[[@B23]\]. Acupoint sticking therapy, a TCM external therapy in which an herbal paste is applied to acupoints, is widely used for many chronic lung diseases in clinical practice. The herbal-paste prescription and the acupoints are chosen according to the intended purpose of treatment. Shu-Fei Tie, an ointment for acupoint sticking, can excite vital qi in the human body and has also been proven effective in preventing acute exacerbation of COPD and improving patients' quality of life \[[@B17]\]. In our previous clinical and animal studies, Bufei Yishen granules combined with Shu-Fei Tie were demonstrated to be beneficial in treating stable COPD; however, the mechanism responsible for these effects remains unclear. Multiple initiating events are involved in the pathogenesis of COPD, including inflammation, protease-antiprotease imbalance, oxidant-antioxidant imbalance, and damage to the parenchyma and airways, leading to tissue remodeling. Chronic inflammation is known to play a major role in the pathological mechanism of COPD, and cytokines such as MCP-1, IL-2, and IL-6 are known to promote inflammation, whereas IL-10 is anti-inflammatory. IL-6 is a key cytokine involved in the etiology of inflammation. Histological studies have revealed that IL-6 expression is increased in patients with COPD, and this cytokine is known to be associated with airway inflammation \[[@B24]\]. IL-2 is a Th1 cytokine, and inhalation of IL-2 induces asthma-like symptoms in humans and aggravates airway inflammation in a mouse model of asthma \[[@B25], [@B26]\]. 
IL-10 is synthesized by CD4+ or CD8+ T-lymphocytes, macrophages, monocytes, eosinophils, and the airway epithelium. IL-10 inhibits and terminates the inflammatory reaction by suppressing the synthesis and release of proinflammatory cytokines. Our study found that the levels of IL-2, IL-6, and MCP-1 in the lungs of COPD rats were increased significantly, whereas that of IL-10 was decreased. All four treatment protocols (Bufei Yishen granules, Shu-Fei Tie, Bufei Yishen granules combined with Shu-Fei Tie, and aminophylline) alleviated the expression of inflammatory cytokines in the lung and airway, whereas Bufei Yishen granules and the combined therapy showed enhanced effects compared to Shu-Fei Tie and aminophylline. Inflammatory cytokines are mediated in part by MAPK signaling transduction pathways, such as JNK and p38 MAPK, which in turn are activated by bacterial products, cytokines, and chemokines \[[@B27]\]. Activation of MAPK pathways can initiate inflammatory cascades, leading to significantly increased production of inflammatory mediators such as cytokines and chemokines. Activation of the p38 and JNK pathways is involved in LPS-induced production of inflammatory molecules \[[@B28]\], and inhibition of the MAPK signaling pathway significantly reduces the secretion of IL-6 and IL-8 \[[@B29]\]. Our study showed that the mRNA and protein expression levels of JNK and p38 MAPK were increased in COPD rats. All four treatment protocols reduced the expression of these inflammatory mediators; however, Bufei Yishen granules combined with Shu-Fei Tie were more effective than Shu-Fei Tie in decreasing the expression of JNK and p38 MAPK mRNA and were more effective than aminophylline in reducing p38 mRNA. Moreover, Bufei Yishen granules combined with Shu-Fei Tie significantly decreased the protein expression of p-p38 and were more effective than Shu-Fei Tie and aminophylline. 
The anti-inflammatory effects of Bufei Yishen granules had been confirmed in our previous studies \[[@B13]\], but in this study we did not find an additive anti-inflammatory effect when they were combined with the Shu-Fei Tie ointment. We have previously found their stimulating effect on pulmonary surfactant proteins \[[@B30]\], but other mechanisms may be involved in the action of Bufei Yishen granules combined with Shu-Fei Tie, which require further study. In summary, our study shows that the p38 MAPK and JNK signaling pathways are involved in regulating the expression of IL-2, IL-6, IL-10, and MCP-1 in the lung and airway in COPD rats. All four treatment protocols can alleviate lung and airway inflammation, and Bufei Yishen granules combined with Shu-Fei Tie were more effective than the other protocols. Their anti-inflammatory effect may involve regulation of the p38 MAPK and JNK signaling pathways. This study was supported by the China National Natural Science Foundation (81403367 and 81130062), the Scientific Research and Specific Fund for the National TCM Clinical Research Base (JDZX2012030), and the Basic Research Program of the Scientific and Technological Research Key Program of the Henan Province Department of Education (15A360027). The authors also thank Ningning Tian for her drawing in [Figure 1](#fig1){ref-type="fig"}. Conflicts of Interest ===================== The authors report no conflicts of interest in this work. Authors\' Contributions ======================= Jiansheng Li, Suyun Li, Yang Xie, and Minghang Wang contributed to the study design. Yange Tian and Ya Li contributed to data analysis and manuscript drafting. In addition, Suxiang Feng and Xuefang Liu contributed to drug preparation and quality control. Yange Tian and Jing Mao contributed to QPCR. Haoran Dong and Wanchun Zheng contributed to Western blotting. 
All authors were involved in the interpretation of the results, drafting and critically reviewing the manuscript for important intellectual content, and approving the final submitted version. ![The acupoint sticking position of Dazhui, Feishu (both sides), and Shenshu (both sides).](ECAM2017-1768243.001){#fig1} ![Lung morphology and total and differential cell counts in BALF of each group. Control: control group; Model: model group; BY: Bufei Yishen group; AS: acupoint sticking group; BY + AS: Bufei Yishen + acupoint sticking group; APL: aminophylline group (the same as below). Pathological changes in the lungs of each group (H&E stained ×100) (a). The orange arrows: alveolar cavity expansion; the red arrow: airway epithelial-cell hyperplasia; the green arrow: thickened small conducting airways. The total and differential cell counts in BALF (b): values are expressed as the mean ± SEM. ^AA^*P* \< 0.01, ^A^*P* \< 0.05 versus Model group; ^C^*P* \< 0.05 versus AS group; ^D^*P* \< 0.05 versus BY + AS group.](ECAM2017-1768243.002){#fig2} ![Changes in inflammatory cytokines in BALF in all treatment groups. Values are expressed as the mean ± SEM. ^AA^*P* \< 0.01, ^A^*P* \< 0.05 versus Model group; ^BB^*P* \< 0.01, ^B^*P* \< 0.05 versus BY group; ^CC^*P* \< 0.01, ^C^*P* \< 0.05 versus AS group; ^DD^*P* \< 0.01, ^D^*P* \< 0.05 versus BY + AS group.](ECAM2017-1768243.003){#fig3} ![Changes of inflammatory cytokines in the lung in all treatment groups. Immunohistochemical staining of lung sections (magnification, ×400) (a); IL-2, IL-6, IL-10, and MCP-1 were quantitatively analyzed (b, c, d, and e). Values are expressed as the mean ± SEM. ^AA^*P* \< 0.01 versus Model group; ^BB^*P* \< 0.01, ^B^*P* \< 0.05 versus BY group; ^CC^*P* \< 0.01 versus AS group; ^DD^*P* \< 0.01 versus BY + AS group.](ECAM2017-1768243.004){#fig4} ![The mRNA and protein expression of JNK and p38 MAPK in the lung in all treatment groups. 
(a) JNK, p38 MAPK mRNA in each group; (b) the protein expression of JNK and p-JNK in each group; (c) the protein expression of p38 and p-p38 in each group. Values represent the mean ± SEM. ^AA^*P* \< 0.01, ^A^*P* \< 0.05 versus Model group; ^B^*P* \< 0.05 versus BY group; ^CC^*P* \< 0.01, ^C^*P* \< 0.05 versus AS group; ^D^*P* \< 0.05 versus BY + AS group.](ECAM2017-1768243.005){#fig5}

###### Primer sequences of JNK and p38 MAPK mRNA.

| Gene     | Primer | Sequence (5′ → 3′)    |
|----------|--------|-----------------------|
| GAPDH    | FW     | ACAGCAACAGGGTGGTGGAC  |
| GAPDH    | RV     | TTTGAGGGTGCAGCGAACTT  |
| JNK      | FW     | TACAGAGCACCCGAGGTCATC |
| JNK      | RV     | AGAGGATTTTGTGGCAAACCA |
| p38 MAPK | FW     | GGCTCTGGCGCCTATGG     |
| p38 MAPK | RV     | CCACACGTAACCCCGTTTTT  |

FW, forward; RV, reverse.

[^1]: Academic Editor: Ji H. Kim
Lasek, Łódź Voivodeship Lasek is a village in the administrative district of Gmina Warta, within Sieradz County, Łódź Voivodeship, in central Poland. It lies approximately east of Warta, north of Sieradz, and west of the regional capital Łódź. The village has a population of 150. References Category:Villages in Sieradz County
Lafayette Sanctuary City Meeting

March 26 @ 11:45 am - 12:45 pm | Free

Interested in helping turn Lafayette into a sanctuary city? We are looking for people to help work with our councils, law enforcement, religious institutions, and schools to build a diverse base of local support. Lamorinda Sanctuary will be hosting meetups in Moraga, Orinda, and Lafayette to strategize and assign roles for turning each city into a sanctuary. Please RSVP here (https://goo.gl/forms/WcylLVCVeXHVwOp83). Even if you don’t have time for a more time-intensive role, there are other ways to be involved! All are welcome. Rev. Will McGarvey Jessica Natal

Our Mission

We as people from a diversity of religions, spiritual traditions and sectors of society, gather to manifest our unity as we promote the spirit of community, service and cooperation through the work of the Interfaith Council.
---
abstract: 'We have analyzed in detail a set of Rossi X-ray Timing Explorer (RXTE) observations of the galactic microquasar GRS 1915+105 corresponding to times when quasi-periodic oscillations in the infrared have been reported. From time-resolved spectral analysis, we have estimated the mass accretion rate through the (variable) inner edge of the accretion disk. We compare this accretion rate to an estimate of the mass/energy outflow rate in the jet. We discuss the possible implications of these results in terms of disk-instability and jet ejection, and in particular note an apparent anti-correlation between the accretion and ejection rates, implying that the gas expelled in the jet must leave the accretion disk before reaching its innermost radius.'
author:
- 'Belloni, T., Migliari, S., Fender, R.P.'
date: 'Received ; accepted 15 May 2000'
title: Disk mass accretion rate and infrared flares in GRS 1915+105
---

Introduction
============

GRS 1915+105 is a transient X-ray source discovered in 1992 with WATCH (Castro-Tirado, Brandt & Lund 1992). Since then it has probably never switched off completely and it has remained a highly variable bright X-ray source (see Sazonov et al. 1994; Paciesas et al. 1996; Bradt et al. 2000). It is the first Galactic object that was found to show superluminal expansion in the radio (Mirabel & Rodríguez 1994). The interpretation of this phenomenon in terms of relativistic jets (Rees 1966) implies bulk velocities of the ejecta of $\geq 0.9c$ at an angle of 60–70 degrees to the line of sight (Mirabel & Rodríguez 1994, Fender et al. 1999, Rodríguez & Mirabel 1999). Because of the high value of the extinction on the line of sight, no optical counterpart is available, but an infrared counterpart has been found (Mirabel et al. 1994). The source is suspected to host a black hole because of its high X-ray luminosity and its similarity with another Galactic superluminal source, GRO J1655-40 (Zhang et al.
1994), for which a dynamical estimate of the mass is available (Orosz & Bailyn 1997). Four years of monitoring with the All-Sky Monitor (ASM) on board RXTE showed that the 2-10 keV flux of GRS 1915+105 is extremely variable, considerably more than any other known X-ray source (see Bradt et al. 2000). See Belloni et al. (2000) for a complete reference list of RXTE observations of the source. Belloni et al. (1997a,b), from the analysis of selected X-ray spectra, showed that the X-ray variability of the source can be interpreted as the repeated appearance/disappearance of the inner portion of the accretion disk, caused by a thermal-viscous instability. During the low-flux intervals, when the source spectrum hardens considerably, the inner disk up to a certain radius becomes unobservable and is slowly refilled. A more complete picture of these variations, in which the observations were classified into twelve different classes and another type of (soft) low-flux interval was identified, was presented by Belloni et al. (2000). Additional spectral analysis has been presented by Markwardt et al. (1999) and Muno et al. (1999), who analyzed in detail the connection between QPOs and energy spectra in GRS 1915+105. One of the problems caused by the exceptional variability of the source is that it is difficult to estimate the accretion rate through the disk or even to rank observations according to accretion rate. Quasi-periodic variability in the radio, infrared and millimetre bands has been discovered (Pooley 1995, Pooley & Fender 1997; Fender et al. 1997; Fender & Pooley 2000). Fender et al. (1997) suggested that these oscillations could correspond to small ejections of material from the system. Indeed, these oscillations have been found to correlate with the disk-instability as observed in the X-ray band (Pooley & Fender 1997; Eikenberry et al. 1998,2000; Mirabel et al. 1998). This suggests that (some of) the gas is ejected from the inner disk during each low-flux interval. 
On longer time scales an analogous pattern is observed in the form of major relativistic ejections occurring at the end of a 20-day X-ray dip or ‘plateau’ (Fender et al. 1999). In this Letter we present the results of detailed time-resolved spectral analysis of RXTE/PCA data of observations when (quasi-)simultaneous infrared data are available. We estimate the value of the accretion rate through the disk for each observation and show that it is anticorrelated with the estimated jet power.

Data analysis
=============

The published infrared observations of GRS 1915+105 for which there are simultaneous or quasi-simultaneous (i.e. within 2 days) RXTE/PCA data are those from Mirabel et al. (1998), Eikenberry et al. (1998), Fender et al. (1998), Eikenberry et al. (2000) and Fender & Pooley (2000). All observations reveal very variable X-ray light curves (see Table 1), corresponding to classes $\beta$, $\nu$ and $\theta$ in the classification by Belloni et al. (2000).

| Date | Obs\# | Class | T$_{\rm start}$ (UT) | $\Delta$t (s) | R$_{\rm max}$ (km) | $\dot{M}_{\rm disk}$ (M$_\odot$/yr) | $\dot{M}_{\rm J}$ (M$_\odot$/yr) | $P_{\rm J}$ (erg s$^{-1}$) |
|---------|----------------|----------|-------|---------------|--------------|---------------------|--------------------------|-------------------|
| 14/8/97 | 20186-03-03-01 | $\beta$  | 4:02  | 530-690       | 170 $\pm$ 14 | 1.3$\times 10^{-7}$ | 6$\times 10^{-7}$ (a)    | $9\times 10^{37}$ |
| 09/9/97 | 20402-01-45-03 | $\beta$  | 6:00  | 500-720       | 128 $\pm$ 13 | 7.1$\times 10^{-8}$ | 3$\times 10^{-7}$ (b)    | $1\times 10^{38}$ |
| 15/9/97 | 20186-03-02-00 | $\theta$ | 12:31 | 600-1000      | —$^c$        | —$^c$               | 5$\times 10^{-7}$ (d\*)  | $9\times 10^{37}$ |
| 10/7/98 | 30182-01-03-00 | $\nu$    | 5:05  | 2250-3500$^e$ | 288 $\pm$ 27 | 2.7$\times 10^{-7}$ | $10^{-7}$ (f)            | $4 \times 10^{37}$ |
| 22/5/99 | 40702-01-02-00 | $\nu$    | 20:41 | 1100-1370     | 55 $\pm$ 13  | 8.0$\times 10^{-9}$ | 2$\times 10^{-6}$ (g\*)  | $3 \times 10^{38}$ |

$^{\mathrm{a}}$ from Eikenberry et al. (1998); $^{\mathrm{b}}$ from Mirabel et al. 
(1998); $^{\mathrm{c}}$ not measurable; $^{\mathrm{d}}$ from Fender & Pooley (1998) determined from IR data; $^{\mathrm{f}}$ from Eikenberry et al. (2000); $^{\mathrm{g}}$ from Fender & Pooley (2000); $^{\mathrm{*}}$ quasi-simultaneous.

For each observation, we produced light curves at 1s time resolution (from [Standard1]{} data) and isolated the long hard low-flux intervals corresponding to state C (unobservable inner disk) of Belloni et al. (2000). For each interval, we measured its length from the light curve (see Table 1). Then we accumulated spectra on a time scale of 16 seconds from [Standard2]{} data, thus retaining the full energy resolution and coverage of the PCA. From each spectrum, we subtracted the background estimated with [pcabackest]{} vers. 2.1b. We did not correct for deadtime effects, but we do not expect this effect to be important. For each observation in PCA epoch 3 we produced a detector response matrix using [pcarsp]{}, while for epoch 4 we used the response provided online by K. Yahoda [^1]. We fitted each spectrum with the “standard” model used for black-hole candidates, consisting of the superposition of a multicolor disk-blackbody and a power law. By assuming a distance of 12.5 kpc and a disk inclination of 70$^{\circ}$ (Mirabel & Rodríguez 1994), we can derive from the fits the inner radius of the accretion disk. Correction for interstellar absorption (fixed to $6\times 10^{22}$cm$^{-2}$, see Belloni et al. 2000) and an additional emission line (fixed at 6.4 keV) were also included. A systematic error of 1% was added. The value of the reduced $\chi^2$ was usually around 1, although some fits were slightly worse. The resulting parameters of interest (inner disk radius and temperature, slope of the power law) are shown as a function of time in Fig. 1 for three of the five observations, for which this automated procedure gave good results. The remaining two observations had to be treated more carefully. 
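The conversion from a disk-blackbody fit to a physical inner radius, given the assumed distance of 12.5 kpc and inclination of 70°, can be sketched as follows. This is a hedged illustration assuming the conventional diskbb-style normalization, $N = (R_{\rm in}[{\rm km}]/D_{\rm 10\,kpc})^2 \cos i$; the paper does not spell out its exact conversion, and no color-temperature or relativistic corrections are applied, so the result is an apparent radius only.

```python
import math

def diskbb_inner_radius_km(norm, distance_kpc=12.5, inclination_deg=70.0):
    """Apparent inner disk radius (km) from a disk-blackbody fit.

    Assumes the conventional normalization
        norm = (R_in[km] / D_10kpc)**2 * cos(i),
    which is an assumption here -- the paper does not state its
    conversion. No color or relativistic corrections are applied.
    """
    d10 = distance_kpc / 10.0
    cos_i = math.cos(math.radians(inclination_deg))
    return d10 * math.sqrt(norm / cos_i)
```

Note that a high inclination (small $\cos i$) inflates the inferred radius for a given normalization, which is why the assumed 70° inclination of GRS 1915+105 matters for the absolute radii quoted in Table 1.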
The observation from 1997 Sep 15th, the only one from class $\theta$, resulted in an extremely strong power law component, with a photon index steeper than 3. The softness and intensity of this component made it impossible to obtain sensible values for the disk parameters, although there is evidence of its presence. This enhanced power law is probably the reason for the difference between this class and the others (see Belloni et al. 2000). The observation from 1998 July 10th did not include full state-C intervals: in this case, we measured the length of the intervals from the infrared (Eikenberry et al. 2000). Also, the inner disk radius turned out to be larger and therefore more difficult to measure, as this component is softer. In order to estimate the disk parameters, we produced a 32s spectrum corresponding to the bottom of the dip only and obtained the best-fit parameters, corresponding to the largest inner radius. This is why there is only one point for this observation in Fig. 2.

Results
=======

In principle, the accretion rate through the measured inner radius of the disk could be estimated from the values of kT$_{\rm in}$ and R$_{\rm in}$ of each spectrum (see Belloni et al. 1997a) by using the expression for a standard thin accretion disk. However, given the errors on these parameters, this measurement is too uncertain. In order to obtain an improved estimate of the disk accretion rate or, better, a ranking of the observations in terms of accretion rate (since the actual values of the inner disk radii obtained with the multicolor disk-blackbody model are probably underestimates, see Merloni, Fabian & Ross 1999), we plotted the values corresponding to the deepest parts of the X-ray light curves in a kT$_{\rm in}$ vs. R$_{\rm in}$ plane (see Fig. 2). 
If the disk accretion rate were constant within each observation, the points should lie on the diagonal lines corresponding to a slope $-$3/4 (as, for a given $\dot{M}$, $T \propto R^{-3/4}$ – Belloni et al. 1997a). Their actual distribution is flatter, showing that there is a deviation from the expected law, but it is interesting to note that the distributions lie on parallel curves in the log-log plane. This indicates different values of the disk accretion rate. Lines corresponding to the largest measured radius for each of the four observations are shown in Fig. 2 with their associated accretion rate value. Typical 1$\sigma$ errors are also shown. Although the actual values for the accretion rate are probably not accurate, on the basis of this plot we can rank the observations by accretion rate. It is important to note that the accretion rate measured this way corresponds only to matter passing [*through*]{} the observed inner radius of the disk: if some matter leaves the disk before that radius, its presence cannot be detected with this procedure. This estimate of accretion rate can be double-checked by considering the length of the state C intervals, which Belloni et al. (1997a,b) interpreted as the viscous time scale of the disk at the edge of the unobservable region being refilled. The observation from 1999 May 22nd has a smaller inner disk radius (see Fig. 2) than the 1997 ones and a longer re-fill time (Tab. 1), indicating a lower value of the accretion rate. The 1998 July 10th observation has a much larger inner disk radius than the 1997 ones, by factors of 1.7 and 2.3, which would correspond to a re-fill time longer by factors of 6.4 and 18 respectively; instead it is much shorter, indicating a higher accretion rate. 
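The ranking described above can be illustrated by inverting the standard thin-disk temperature profile, $T(R) = [3GM\dot{M}/(8\pi\sigma R^3)]^{1/4}$, for $\dot{M}$ at the measured inner edge. This is a minimal sketch under stated assumptions: the 14 M$_\odot$ black hole mass is an illustrative value not taken from the paper, and the inner boundary factor and color corrections are ignored, so only the relative ranking of observations is meaningful.

```python
import math

G = 6.674e-8          # gravitational constant (cgs)
SIGMA_SB = 5.670e-5   # Stefan-Boltzmann constant (cgs)
M_SUN = 1.989e33      # solar mass (g)
SEC_PER_YR = 3.156e7
KEV_TO_K = 1.1605e7   # 1 keV expressed as a temperature (K)

def thin_disk_mdot(kT_in_keV, R_in_km, mass_msun=14.0):
    """Accretion rate (M_sun/yr) through radius R_in for a standard
    thin disk, inverting T(R) = (3 G M mdot / (8 pi sigma R^3))**0.25.
    The 14 M_sun black hole mass is an assumed illustrative value;
    the inner boundary factor (1 - sqrt(R_isco/R)) is ignored.
    """
    T = kT_in_keV * KEV_TO_K
    R = R_in_km * 1.0e5
    mdot_cgs = 8.0 * math.pi * SIGMA_SB * R**3 * T**4 / (3.0 * G * mass_msun * M_SUN)
    return mdot_cgs * SEC_PER_YR / M_SUN
```

Along a line of constant $\dot{M}$ this expression is invariant under $T \propto R^{-3/4}$, which is exactly the slope of the diagonal lines in Fig. 2; distributions flatter than that slope therefore imply a disk accretion rate that decreases as the inner radius shrinks.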
Discussion
==========

The results of our analysis indicate that, at least for observations of class $\nu$ and $\beta$ (which have many similar traits), we have a way to estimate the disk accretion rate during an instability event, when the inner disk radius grows from its “minimum” value of $\sim$30 km and slowly moves back to it. Although we know that the measured value is only an underestimate, it is natural to associate this minimum value with the innermost stable orbit. It is interesting to compare these values, or at least their ranking, with the rate of ejection in the jets. As we mentioned above, the accretion rate measured through this procedure is associated with matter flowing [*through*]{} the observable inner edge of a geometrically thin accretion disk. Some of the accreting gas must leave the accretion disk to form the jet, unless the jet is entirely composed of pairs generated by photon-photon interactions; how this happens is basically unknown. There are two extreme possibilities: either matter ejected in the jet leaves the accretion disk before entering the innermost regions, thus not contributing to our measured disk accretion rate (case 1), or it leaves after passing through our measured inner disk radius, in which case it is a fraction of the accretion rate we measure (case 2). In case 1, if the fraction of matter in the jet is constant and the total external accretion rate (disk+jet) is variable, we expect a positive correlation between the disk accretion rate (from X rays) and the ejection rate (from the infrared). If the fraction is variable and the total is constant, these quantities should be anticorrelated. In case 2, if the fraction of matter in the jet is constant, we expect a positive correlation, while a constant total is not possible in this case, as the total would then be what we measure, which is not observed to be constant. If both fraction and total vary, the situation is complicated. 
Of course, there is a spectrum of intermediate possibilities, in which the jet production is connected to the inner region of the disk in a way that would not allow us to dissociate the two processes. Given the paucity of existing data, we limit ourselves to the extreme cases. Notice that measuring an anti-correlation would be an indication against case 2. Table 1 also lists an estimate of the mass ejection rate $\dot{M}_{\rm J}$. This is based upon an equipartition calculation for one proton for each electron, negligible kinetic energy associated with the repeated ejection events, and an average over the repetition period of the oscillations. Note that there is a systematic uncertainty in these numbers due to lack of knowledge of the intrinsic electron spectrum which corresponds to the observed flat-spectrum radio–infrared emission. However, unless the spectral form of the distribution changes between observations, the effect is the same for all data sets and the ranking remains the same. Of course we may be observing synchrotron emission from a pair plasma with no baryonic content, in which case the amount of power being supplied to the jet, $P_{\rm J}$, makes a more useful comparison with the accretion rate; this value is also listed in Table 1. For more details of how these quantities are calculated, see Fender & Pooley (2000). Either way, there appears to be an [*anticorrelation*]{} between the accretion rate inferred from the X-ray spectral fits and the outflow rate of mass/energy in the jet. The low number of points in our sample prevents us from drawing firmer conclusions. Notice that an anticorrelation is also suggested by the strong flat-spectrum radio emission observed during long ‘plateau’ intervals, periods when Belloni et al. (2000) estimate that the accretion rate must be very low. We also note that the faint infrared flares reported by Eikenberry et al. 
(2000) do not appear to be different from the others in other respects, although the X-ray light curves are too undersampled to allow a detailed comparison. If future observations show that the disk accretion rate and the jet ejection rate are indeed anti-correlated, one can speculate on the following scenario. A fraction of the accreting gas leaves the geometrically thin accretion disk before reaching the inner edge (from which it would fall into the black hole) and goes into a hot corona. The details are not known, but our results indicate that this does not happen after the inner edge. As the disk refills, the inner radius moves inwards and more soft photons from the disk reach the corona, causing its Comptonization emission to soften gradually. At the end of the instability period, when the disk is refilled down to the innermost stable orbit, this “reservoir” of hot gas is expelled to produce the jet: this results in the observed infrared / mm / radio emission, causes the power-law component to steepen dramatically, and produces the sudden change in the X-ray count rate and spectral parameters. Notice that, as we remarked earlier, the distributions of points in Fig. 2 are flatter than the curve expected for a constant disk accretion rate according to a standard thin disk: in other words, as the inner disk radius decreases, the disk accretion rate seems to decrease as well. This could mean that the process that re-routes some gas from the disk to the corona becomes more efficient closer to the central object, and therefore the fraction of matter going into the corona increases as the disk refills. We thank G. Ghisellini and M. Tagger for useful discussions. 
Belloni, T., Méndez, M., King, A.R., van der Klis, M, & van Paradijs, J., 1997, ApJ, 479, L145 Belloni, T., Méndez, M., King, A.R., van der Klis, M, & van Paradijs, J., 1997, ApJ, 488, L109 Belloni, T., Klein-Wolt, M., Méndez, M., van der Klis, M., van Paradijs, J., 2000, A&A, 355, 271 Bradt, H., Levine, A.M., Remillard, R.A., Smith, D.A., 2000, Mem SAIt, Vol. 71, in press. Castro-Tirado, A. J., Brandt, S., & Lund, S. 1992, IAU Circ., 5590 Eikenberry, S.S., Matthews, K., Morgan, E.H., Remillard, R.A., Nelson, R.W., 1998, ApJ, 494, L61 Eikenberry, S., Matthews, K., Muno, M., Blanco, P., Morgan, E., Remillard, R., 2000, ApJ, 532, L33 Fender, R.P., Pooley, G.G., Brocksopp, C., Newell, S.J., 1997, MNRAS, 290, L65 Fender, R.P. & Pooley, G.G., 1998, MNRAS, 300, 573 Fender, R.P. & Pooley, G.G., 2000, MNRAS, submitted Fender, R.P., Garrington, S.T., McKay, D.J., et al., 1999, MNRAS, 304, 865 Markwardt, C.B., Swank, J.H., Taam, R.E., 1999, ApJ, 513, L37 Merloni, A., Fabian, A.C., Ross, R.R., 2000, MNRAS, in press. Mirabel, I. F., & Rodríguez, L. F. 1994, Nature, 371, 46 Mirabel, I.F., Duc, P.A., Rodríguez, P.A., et al., 1994, A&A, 282, L17 Mirabel, I.F., Dhawan, V., Chaty, S., et al., 1998, A&A, 330, L9 Muno, M.P., Morgan, E.H., Remillard, R.A., 1999, ApJ, 527, 321 Orosz J., Bailyn C.D., 1997, ApJ, 477, 876 Paciesas, W.S., Deal, K.J., Harmon, B.A., et al., 1996, A&AS, 120, 205 Pooley, G.G., 1995, IAU Circ., 6269 Pooley, G.G., & Fender, R.P., 1997, MNRAS, 292, 925 Rees, M.J., 1966, Nature, 211, 468 Rodríguez, L. F., & Mirabel, I. F., 1999, ApJ, 511, 398 Sazonov, S.Y., Sunyaev, R.A., Lapshov, I.Y., et al., 1994, Astr. Lett., 20, 787 Zhang, S. N., Wilson, C. A., Harmon, B. A., et al., 1994, IAU Circ., 6046 [^1]: http://lheawww.gsfc.nasa.gov/users/keith/epoch4/
Q: How to add a custom field to apache solr?

How do you create your own solr field using Drupal 8's search api?

A: 1) Copy /modules/contrib/search_api/src/Plugin/search_api/processor/AddURL.php into your own custom module at /src/Plugin/search_api/processor
2) Rename and rework (see below)
3) Add your extra field to the index at /admin/config/search/search-api/index/myindex/fields
4) Enable the processor on your search_api index
5) Reindex content and verify in solr that your new custom field is indexed: http://127.0.0.1:8983/solr/#/mysolrcore/schema-browser?field=sm_mymodule_content_type, click on "Load Term Info" to see data loaded in.

Here's an example:

```php
namespace Drupal\mymodule\Plugin\search_api\processor;

use Drupal\search_api\Datasource\DatasourceInterface;
use Drupal\search_api\Item\ItemInterface;
use Drupal\search_api\Processor\ProcessorPluginBase;
use Drupal\search_api\Processor\ProcessorProperty;

/**
 * Adds a custom type filter to the indexed data.
 *
 * @SearchApiProcessor(
 *   id = "mycustom_field",
 *   label = @Translation("Custom Field"),
 *   description = @Translation("Add a custom field to search index"),
 *   stages = {
 *     "add_properties" = 0,
 *   },
 *   locked = true,
 *   hidden = false,
 * )
 */
class CustomField extends ProcessorPluginBase {

  /**
   * Machine name of the processor.
   *
   * @var string
   */
  protected $processor_id = 'mycustom_field';

  /**
   * {@inheritdoc}
   */
  public function getPropertyDefinitions(DatasourceInterface $datasource = NULL) {
    $properties = array();
    if (!$datasource) {
      $definition = array(
        'label' => $this->t('Custom Field'),
        'description' => $this->t('custom field'),
        'type' => 'string',
        'processor_id' => $this->getPluginId(),
      );
      $properties[$this->processor_id] = new ProcessorProperty($definition);
    }
    return $properties;
  }

  /**
   * {@inheritdoc}
   */
  public function addFieldValues(ItemInterface $item) {
    $entity = $item->getOriginalObject()->getValue();
    $custom_field = '';
    // Use $entity to get custom field.
    $fields = $this->getFieldsHelper()
      ->filterForPropertyPath($item->getFields(), NULL, $this->processor_id);
    foreach ($fields as $field) {
      $field->addValue($custom_field);
    }
  }

}
```
UK film premiere @ Museum of British Surfing – Going Vertical The Museum of British Surfing is proud to present another UK film premiere – ‘Going Vertical‘ – the epic search to unravel one of surfing’s greatest mysteries… who started the shortboard revolution? The debate still rages across the Pacific – who really lit the fuse of this massive shift in surfboard design during the tumultuous 1967 ‘summer of love’ that turned surfing upside down. On Thursday August 30th 2012 at 7.30pm we’ll be showing the movie at Croyde Village Hall in association with our fabulous friends at the Croyde Deckchair Cinema – the first 38 tickets sold will get deckchairs at the front of the hall. Advance tickets only – priced £5 (cash only) – are available now from the Museum of British Surfing in the Caen Street car park, Braunton between 10am & 4pm every day. 10% discount for season ticket holders.
Just Siberian Huskies 2013 Wall Calendar List Price $13.99 Our Price $3.49 ID: 201300002888 Just Siberian Huskies Wall Calendar: Siberian Huskies are creatures of delightful contrasts: athletic yet elegant; playful yet mild-mannered. These twelve dazzling, full-color photographs embody all the terrific traits of this popular breed. The large format wall calendar features daily grids with ample room for jotting reminders; four bonus months of September through December 2012; moon phases; and U.S. and international holidays. UPC: 709786022915 EAN: 9781607556381
What does it mean to say “if you like your health insurance, you can keep it”? Some will remember this as a defining debate around the Affordable Care Act. One lesson Democrats took from the collapse of the Clinton administration’s 1994 reforms was that Americans hated the idea of the government canceling their insurance plans. In deference to that view, the ACA was designed to leave most existing health coverage intact, and President Obama repeatedly promised that no one would lose insurance they liked. Even so, about 3 million plans did get canceled because they were beneath the ACA’s minimum standards for health insurance, and the political backlash was fierce. This debate has reemerged in the runup to 2020. Bernie Sanders’s Medicare-for-all plan, as currently written, would cancel every private insurance plan in the country. Polling suggests that’s lethal: When told that Medicare-for-all would abolish private insurance, respondents flip from favoring the plan by a 56 percent to 38 percent margin to opposing it by a 58 percent to 37 percent margin. These numbers, when combined with the Obamacare backlash and the Clintoncare experience, have underscored reformers’ view that a plan that takes away the private insurance people have and like is doomed. In response, supporters of Medicare-for-all have struck back with an ambitious reframing: The idea that anyone can keep the health insurance they like under any system but Medicare-for-all is the true lie, they say. Matt Bruenig, of the People’s Policy Project, is the most aggressive proponent of this view. “The truth is that people who love their employer-based insurance do not get to hold on to it in our current system,” he writes. “Instead, they lose that insurance constantly, all the time, over and over again. It is a complete nightmare.” The only way to enjoy true health security, where your insurance can never be taken away, is “a seamless system where people do not constantly churn on and off of insurance. 
Medicare for All offers that.” There’s power to this argument. As a comparison between Medicare-for-all and the status quo, there’d be far more security under Medicare-for-all. But even as he accuses others of dishonesty, Bruenig is weaponizing this point against plans that it doesn’t apply to — most recently, the Center for American Progress’s Medicare Extra plan — and doing so in ways that confuse the underlying issue. If this were just an internecine debate between supporters of different health reform plans that would all be vast improvements over what we have now, it wouldn’t much matter. But it raises some of the hardest issues in health politics. Private health insurance isn’t theoretical. More than 150 million Americans get insurance through their employers right now. They live in the world that pundits and think-tankers are arguing over. So if the private insurance market as it exists is such a nightmare, then why are people so loath to see it replaced? This is a question that has crushed past efforts at health reform — and not just federally. In the past few years, Vermont saw its effort to pass single-payer collapse and 79 percent of Coloradans voted down a single-payer ballot initiative. So why aren’t voters more willing to abandon a system that’s clearly failing? And what kinds of reforms will they accept? The two types of “insurance churn” The key idea in Bruenig’s argument is something experts call “insurance churn.” Importantly, though, the term is being used to refer to two different things: Researchers use insurance churn to refer to any change in health insurance plans. If I lose my job and become uninsured, that’s churn. If my employer switches insurance providers, that’s churn. If I move from my current job and insurance coverage to another job with a different insurance plan, that’s churn. And so on. In punditry, though, people will often simplify churn to the question of losing and gaining health insurance. 
In this meaning, it’s not churn if my employer switches coverage providers, but it is if my employer fires me and I become uninsured. You can see the way one definition slides into the other in Bruenig’s post. For most of the analysis, he’s using the first, more technical, definition of churn. He relies on a study of insurance churn in Michigan, for instance, to write: Among those who had employer-sponsored insurance in 2014, only 72 percent were continuously enrolled in that insurance for the next 12 months. This means that 28 percent of people on an employer plan were not on that same plan 1 year later. You like your employer health plan? You better cross your fingers because 1 in 4 people on employer plans will come off their plan in the next 12 months. The study he’s referencing found that “Ninety-four percent of respondents with employer-sponsored plans maintained coverage continuously all year.” The lower, 72 percent number includes data showing that “16 percent directly switched to a different employer-sponsored plan and 6 percent gained coverage through either an individually purchased plan, Medicaid, or Medicare.” So that’s the first type of churn, which includes changes to the plans people use, not just changes to whether people are insured or not. But at the end, when Bruenig says that Medicare-for-all is “a seamless system where people do not constantly churn on and off of insurance,” he’s quietly switching to the second definition of churn. The core insight here is real: So long as a third party is providing your health insurance, you don’t have full control of its future. You may like the health plan your employer provides now, but they could change that plan, or you could change jobs, or be laid off. The problem is that point applies to public insurance too. Imagine President Bernie Sanders passes Medicare-for-all in 2022. In 2024, amid a backlash to rising tax rates, Sanders loses reelection to Ohio Sen. Rob Portman. 
Working with a Republican Congress, Portman restructures Medicare-for-all in a few ways. Where Sanders included coverage for abortion, Portman bars it totally. Where Sanders designed the program to avoid copays and deductibles, Portman, a believer in health savings accounts, reworks it to frontload the cost-sharing. Where Sanders guaranteed coverage to everyone, including unauthorized immigrants, Portman restricts it to legal residents, and adds a work requirement for able-bodied adults. If any of that sounds far-fetched, consider that Republicans were a single vote away from repealing Obamacare in 2017, and the Trump administration approved Wisconsin’s request to add premiums and a work requirement to its Medicaid program in 2018. The skepticism Bruenig brings to private insurance is the same skepticism many bring to single-payer insurance. You like your government-provided health plan? Better cross your fingers, because your side just lost the White House, and the incoming administration wants to slash health care spending by 15 percent, drug-test all beneficiaries, and turn the whole thing over to private contractors. Not all churn is equal A problem with the term “churn” is that it collapses both good and bad dynamics into the same label. Involuntary churn is a problem. But voluntary churn typically goes by another name: choice. Take Medicare Advantage, the suite of private insurance options offered inside the Medicare program. About a third of Medicare enrollees choose Medicare Advantage, and about 10 percent of Medicare Advantage members voluntarily choose to switch to another plan each year. Medicare Advantage’s enrollees are slightly more satisfied with their coverage than those in traditional Medicare, so it’s reasonable to assume they appreciate the opportunity for churn. Last week, I wrote about the Center for American Progress’s Medicare Extra plan. 
I won’t recap the entire piece here, but the short version is that Medicare Extra rebuilds the health system around a revamped Medicare plan, but it allows people to remain in employer-sponsored insurance, traditional Medicare, or VA care if they so choose. It also preserves some private options in Medicare for those that want them. It’s an effort to capture the main benefits of single-payer — universal, guaranteed coverage alongside Medicare-based pricing — without taking away the insurance plans most people have and like. Bruenig was, to say the least, not happy with my description: This is a good example of the slipperiness in the way Bruenig deploys this argument. Under Medicare Extra, there’s no churn on and off health insurance: it’s a universal program. If you chose to remain on your employer-sponsored insurance and then you got fired, you’d just be added to Medicare Extra. Bruenig’s claim that this is a lie rests on switching his definition to the first type of churn, the one that counts any change in plan. Under Medicare Extra, it’s true, you could end up switching from one employer-sponsored insurance plan to another, or from an employer who offered Cigna to one who bought into Medicare. Or you could choose to switch from Medicare to your employer’s insurance, or to the private options offered under Medicare Choice. Bruenig says this would be “entirely out of your control.” That’s true in some cases, and not in others. Either way, Medicare-for-all is just as vulnerable to that critique: You can keep Bernie Sanders’s Medicare-for-all plan until the moment some other president and Congress decides to change the law. It’s also outside of your control. (And in a system where the White House and the Senate are both held by the party that won fewer votes, the idea that it is in your control because the government reflects the will of the people isn’t particularly convincing.) 
This is the problem with the rhetorical game Medicare-for-all’s supporters are playing. If you narrow the definition of insurance churn to whether or not you have insurance, the argument doesn’t work as a cudgel against some of the competing plans like Medicare Extra. If you widen the definition of insurance churn to any change in your insurance plan, then it becomes a cudgel against Medicare-for-all too. You can’t talk yourself out of public opposition This gets to the broader context of this debate. There’s an effort among Medicare-for-all’s supporters to argue that nothing but pure single-payer can solve the problems of the health care system, and so any other plan, even if it’s more aligned with what the public wants, is an unacceptable half-measure. Since those plans make a virtue out of offering people more insurance choices, and since Medicare-for-all suffers in polls for abolishing private insurance, you need to make the offering of those choices into a weakness. But in doing so, you end up sidestepping the hard political question here, the one those other plans are trying to respond to: If the private insurance market is such a nightmare, why is the public so loath to abandon it? Why have past reformers so often been punished for trying to take away what people have and replace it with something better? It’s simply not the case that when you say, in normal English, “if you like your X, you can keep it,” people believe you’re protecting them from all exigent circumstances. People live in the employer-based health insurance market now. They’re dealing with the instability Bruenig and others are pointing out as we speak. The fact that they’re not clamoring for the government to take it over demands exploration, not rebuttal. 
Trying to redefine “it’s possible to lose the insurance you have now” as equivalent to “the government will take away the insurance you have now and move you to something different” isn’t a way of answering the concerns people have — it’s a way of trying to talk yourself out of answering them. And that’s a dangerous strategy. It’s particularly unwise when the public’s views are as clear as they are here. A new Marist/NPR poll tested support for both “Medicare for all that want it — that is, allow all Americans to choose between a national health insurance program or their own private health insurance” and “Medicare for All — that is, a national health insurance program for all Americans that replaces private health insurance.” “Medicare for all that want it” polled at 71 percent. Medicare-for-all that replaces private insurance polled at 41 percent. Supermajority support becomes a minority position. Why? There are different interpretations of what’s going on here. Many Americans simply don’t trust the government. Others don’t much like radical change, even if they do trust the government. Some reasonably want private options as escape hatches if the government option is wrecked by a future Congress, or marred by incompetent administration. Many don’t realize how much their insurance costs, because employers pay, on average, 70 percent of premiums, even if that quietly comes out of wages. And while about 60 percent of Americans think it’s the government’s responsibility to ensure everyone can access health care, that leaves 40 percent of the country that disagrees. Indeed, the political risk of a plan like Medicare Extra isn’t that it changes too little for the public’s comfort, but that, like Medicare-for-all, it changes too much. Everyone on Medicaid gets forced into the new system. Everyone on Obamacare’s individual markets gets forced into the new system. 
The new plan will be better and more comprehensive than what came before it, so in theory, it should be a welcome change. But if “in theory, it should be a welcome change” always cashed out in practice, we would have fixed the health system long ago. The political upsides of Medicare Extra are that it won’t require tax increases as large, it won’t upend employer-based insurance, and it can at least claim that private choices will be available, but for all that, it will still mean huge disruption, far more than attended Obamacare. For the record, I’m not opposed to Medicare-for-all. It’s one of many health systems that I think would be a vast, vast improvement on what we have now. I want more people to have better health care, and my fear is that in treating public opinion as infinitely malleable or simply confused, Medicare-for-all’s supporters will trigger a backlash that destroys the effort, just as so many health reformers have before them. Risk aversion here is real, and it’s dangerous. Health reformers don’t tiptoe around it because they wouldn’t prefer to imagine bigger, more ambitious plans. They tiptoe around it because they have seen its power to destroy even modest plans. There may be a better strategy than that. I hope there is. But it starts with taking the public’s fear of dramatic change seriously, not trying to deny its power. Further reading:
• The lessons of 1994. It’s been 25 years since the Clintons tried, and failed, to fundamentally restructure the American health care system. What went wrong then is worth studying.
• How to get to universal coverage without single-payer. How Medicare Extra works, and where it does and doesn’t differ from Medicare-for-all.
• 7 health care questions the 2020 Democrats should answer. I argue in this piece that centering the entire Democratic health reform debate on whether or not you want to abolish private insurance is the wrong question. Here are some better ones.
#region Copyright Syncfusion Inc. 2001-2020.
// Copyright Syncfusion Inc. 2001-2020. All rights reserved.
// Use of this code is subject to the terms of our license.
// A copy of the current license can be obtained at any time by e-mailing
// licensing@syncfusion.com. Any infringement will be prosecuted under
// applicable laws.
#endregion

using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

// General Information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
[assembly: AssemblyTitle("SampleBrowser.SfStepProgressBar.iOS")]
[assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("Syncfusion Inc.")]
[assembly: AssemblyProduct("SampleBrowser.SfStepProgressBar.iOS")]
[assembly: AssemblyCopyright("Copyright © 2001-2020 Syncfusion Inc.")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]

// Setting ComVisible to false makes the types in this assembly not visible
// to COM components. If you need to access a type in this assembly from
// COM, set the ComVisible attribute to true on that type.
[assembly: ComVisible(false)]

// The following GUID is for the ID of the typelib if this project is exposed to COM
[assembly: Guid("72bdc44f-c588-44f3-b6df-9aace7daafdd")]

// Version information for an assembly consists of the following four values:
//
//      Major Version
//      Minor Version
//      Build Number
//      Revision
//
// You can specify all the values or you can default the Build and Revision Numbers
// by using the '*' as shown below:
// [assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
Pecorino
Pecorino is the name given to all Italian cheeses made from sheep's milk. It covers a wide variety of cheeses produced around the country, but specifically it refers to four main varieties of Pecorino, all of which enjoy PDO protection. These hard ewes’ milk cheeses from central Italy and the island of Sardinia have established a very good export market outside Italy. Of these four, Pecorino Romano from Sardinia, Lazio and the Tuscan Province of Grosseto is the most widely known outside of Italy. The remaining three mature PDO cheeses are Pecorino Sardo from Sardinia, Pecorino Siciliano from Sicily and Pecorino Toscano from Tuscany. Pecorinos are traditional, creamery, hard, drum-shaped cheeses. They come in a variety of flavours determined by their age. Aged Pecorinos, referred to as ‘stagionato’, are hard and crumbly in texture with buttery and nutty flavours. Young or ‘semi-stagionato’ and ‘fresco’ Pecorinos feature a softer texture with mild, creamy flavours. A good Pecorino will have a smooth, hard rind that is pale straw to dark brown in colour. The rind will vary in colour, depending on the age of the cheese, and may include a protective coating of lard or oil. Its compact interior is white to pale yellow in colour, with irregular, small eyes. Today, this classic Italian cheese is available in many flavours including Pecorino Pepato spiced with black peppercorns or red chili. Pecorino is a preferred cheese in many pasta dishes and an obvious choice in Italian regions where the cheese is produced. It also serves as a good substitute for the more expensive Parmigiano-Reggiano.
Q: How to install Ubuntu while keeping my files' backup disk partition I made with Windows (D:)?

So I have my 2TB disk split into two partitions (C:) and (D:). I don't care about (C:), which hosts Windows. I want only Ubuntu as an operating system while keeping everything on the (D:) partition. Any idea if that is possible?

A:

A) UEFI/BIOS
Set to "UEFI mode only" (no legacy/CSM).
Disable "secure boot".
Disable "Intel Rapid Start" (if equipped).
Disable "fast boot" in UEFI (note this is different from the "fast startup" setting in Windows 8/10).
The options in your UEFI/BIOS might say something like Full/Minimal/Automatic for boot mode. Select Full (or thorough, or complete, etc., whatever your UEFI vendor has chosen to call it).

B) Advanced Power Options (Fastboot)
Disable fast startup in Windows 8/10 under "advanced power options".
Restart the computer to ensure that this subsequent boot and the next reboot/shutdown will be in "normal" mode.

C) Bitlocker (for Windows 10 Pro)
If you have Windows 10 Pro and encrypted the drive with "Bitlocker", remove the encryption.

D) Rufus / Bootable USB stick
Use Rufus to create a bootable USB stick with your choice of Ubuntu-based distro.
Make sure in Rufus that you CHOOSE the option UEFI/GPT only. This ensures the Linux environment boots only into UEFI mode during your install.

E) Boot Menu
Reboot your computer and press the key for the one-time boot menu (Dell is typically F12).
Select your USB stick from the boot options.
Note: Make sure it says UEFI in front of the USB stick in the boot menu. If not, return to Windows and recreate your USB stick with Rufus, ensuring you choose the UEFI/GPT (only) option.

F) Boot into USB Stick
Boot into the Linux live environment and begin the install.

G) Installation type
When you get to the installation option, choose "Something else" at the bottom of the Ubiquity installer.

H) Create partitions
Find your C: partition. It will be called something like /dev/sda1.
Select it and press the "-" sign to delete it and create "free space".

1st Partition / Root (all the software you install is stored here)
Select the "free space" that you created by deleting the C: drive.
Select "+" and partition the target drive as follows:
Size: min. 10 GB (25 GB+ recommended; I have 40 GB)
Type for the new partition: Primary
Location for the new partition: Beginning of this space
Use as: ext4
Mount point: Choose "/"

2nd Partition / Swap (only needed if you want to hibernate)
Select "free space".
Select "+" and partition the target drive as follows:
Size: Depends on your RAM. See the Swap FAQ.
Type for the new partition: Primary
Location for the new partition: Beginning of this space
Use as: swap

3rd Partition / Home (only needed if you want to keep your personal files separate from the / Root partition. It seems you save your personal files in D:, so you may not need this.)
Select "free space".
Select "+" and partition the target drive as follows:
Size: Remainder of "free space" or any size you want. (You will need to leave some space if you want to make another partition. Of course, you can always shrink the "/home" partition later.)
Type for the new partition: Primary
Location for the new partition: Beginning of this space
Use as: ext4
Mount point: Choose "/home"

I) Installation & Reboot
Finish the installation process and reboot (removing the USB stick when your UEFI/BIOS screen logo appears).

J) Upon reboot
Boot into Linux.
Install any updates.

K) Set up automount for D:
Open the "Disks" utility.
Select the partition you want to automount (D:).
Click the icon with 2 cogwheels (Additional partition options).
Select "Edit Mount Options".
Turn the "User Session Defaults" option to "Off".
Check the box "Mount at system startup".
Make sure "Show in the user interface" is also checked.
Replace the line that says nosuid,nodev,nofail,x-gvfs-show with users,uid=1000,dmask=027,fmask=137,x-gvfs-show,utf8.
Change the "Mount point" to something less complicated like /mnt/D or /mnt/MyFiles.
Change the "Filesystem Type" to ntfs-3g.
Click "OK" and you are all set!

Borrowed and edited from user613363's answer. (Dual Boot Windows 10 and Linux Ubuntu on Separate Hard Drives)
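For reference, the Disks settings in step K correspond to a single /etc/fstab entry along these lines. This is only a sketch: the UUID shown is a placeholder, and you would substitute the value `sudo blkid` reports for your D: partition.

```
# Placeholder UUID — replace with the one `sudo blkid` shows for the D: partition
UUID=XXXX-XXXX  /mnt/D  ntfs-3g  users,uid=1000,dmask=027,fmask=137,x-gvfs-show,utf8  0  0
```

Editing /etc/fstab by hand and using the Disks utility are two routes to the same result, so pick one and don't duplicate the entry.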
The Turkish Einstein, Oktay Sinanoglu The Turkish Einstein Oktay Sinanoglu () is a book in which Scientist Oktay Sinanoğlu tells the story of his life and works. Interviewee Sinanoglu replies to the questions of interviewer Emine Çaykara. Sinanoglu details his journey from Ankara, Turkey, to the United States when he was sixteen, to attend the University of California - Berkeley, and to subsequently earn Masters and Doctoral degrees from MIT and Berkeley, before becoming one of the youngest full professors in Yale University's history, where Sinanoglu remained on the faculty for over forty years. The book was first published in 2001 and 58,000 copies were sold out in record time. Only the pirated publication of a further 150,000 copies was able to satisfy demand. References Category:Books about scientists Category:2001 books Category:Turkish books
Hypnosis was reworked to be more streamlined. You now burn one stick of incense for one full hypnosis session, instead of using one for every single turn. The rate of hypnosis is still the same, so the cost of hypno incense has been increased to compensate. Overall, the hypnosis process should be much more intuitive than before. Three new types of hypnosis training were added: autozoophilia, muscle minded and sadism. These grant three new types of titles: domesticated, superjock and sadist. They are all ‘basic’ hypnosis types. 18 new titles have been added due to the new title types. They all range from common to unique; there are no new rare titles added this update.

Changes

hypnotism
hypnotism system reworked
hypnotism items cost 4x normal price
Sadism
Superjock
Autozoophilia

titles

misc
cumsar title description added
slut titles increase ranch price
Sex Slave [submissive + masochistic] [uncommon]
new titles added to guild requests
title descriptions now show short bonus descriptions
bimbo slut change
title texts now match

feminine/masculine update
feminine/masculine moved down a rank
feminine/masculine no longer requires fitness
Femme Sub [feminine + submissive] [uncommon]
Bimboy Title [bimbo + masculine] [uncommon]
Feminine Bimbo Title [feminine + bimbo] [uncommon]
Feminine Sissy Title [feminine + sissy] [uncommon]
Gigolo Title [masculine + strumpetry] [uncommon]
Feminine Sissy Bimbo Title [feminine + sissy + bimbo] [unique]

muscle titles
muscle titles increase fitness gain (workout only)
Muscle Title [Superjock] [common]
Muscle Bimbo Title [Superjock + Bimbo] [uncommon]
Masculine Muscle Bimbo Title [Masculine + Bimbo + Superjock] [unique]
Muscle Bitch Title [masculine + muscle + submissive]

domesticated titles
domesticated titles increase slave produce yield
Domesticated Title [Domesticated] [common]
Domesticated Bitch Title [Domesticated + strumpetry] [uncommon]

sadist titles
Sadism Title [Sadistic] [common]
Man Eater Title [strumpetry + sadism] [uncommon]
Succubus Title [strumpetry + sadism + cumsar] [unique]
Incubus Title [masculine + sadism + cumsar] [unique]
Dominatrix Title [feminine + sadism + dominant] [unique]

misc
height affected by leg length
dominance focus in brothel now affected by masochism/submissive stats
increase player sensitivities through sex
removed guild bonus

bug fixes
taken contracts now show correct names in text
fix brothel customer bug
fix sex shop
player piercings status removed [not implemented yet]
fix slave piercings

Download Link
Version 0.5.8.0 [Patreon]
Version 0.5.7.0 [Public]
Q: Get Position of cursor on panel with scrollbars

I've built a simple program (so far) that has a large panel as the "WorkArea" of the program. I draw a grid onto it, and have some functionality that snaps my cursor to the closest point on the grid, etc. I have a status bar on the bottom of the window which displays my current position on the panel. However, regardless of where I've scrolled to (let's say the vertical bar is at 10% relative to the top and the horizontal is at 25%), it displays my cursor position with regards to the actual window. I have an OnMouseMove event that handles this:

private void WorkArea_MouseMove(object sender, MouseEventArgs e)
{
    GridCursor = grid.GetSnapToPosition(new Point(e.X, e.Y));
    toolStripStatusLabel1.Text = grid.GetSnapToPosition(new Point(e.X, e.Y)).ToString();
    Refresh();
}

It works as I'd expect, giving the points of the cursor, drawing it to the correct place, and so on. However, if I scroll out, I still get the same readings. I could be scrolled half way out on the vertical and horizontal scrollbars, put my cursor in the upper left-hand corner, and read 0,0, when it should be something more like 5000,5000 (on a panel 10k by 10k). How can one go about getting the absolute position within a panel with respect to its scrollbars?

A: You need to offset the location by the scroll position:

private void panel1_MouseMove(object sender, MouseEventArgs e)
{
    Point scrolledPoint = new Point(
        e.X - panel1.AutoScrollPosition.X,
        e.Y - panel1.AutoScrollPosition.Y);
    ..
}

Note that the AutoScrollPosition values are negative: The X and Y coordinate values retrieved are negative if the control has scrolled away from its starting position (0,0). When you set this property, you must always assign positive X and Y values to set the scroll position relative to the starting position. For example, if you have a horizontal scroll bar and you set x and y to 200, you move the scroll 200 pixels to the right; if you then set x and y to 100, the scroll appears to jump to the left by 100 pixels, because you are setting it 100 pixels away from the starting position. In the first case, AutoScrollPosition returns {-200, 0}; in the second case, it returns {-100,0}.
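The sign convention the documentation describes can be sanity-checked with plain arithmetic. Here is a minimal sketch in Python, just to illustrate the translation; `to_content_coords` is a hypothetical helper, not part of the WinForms API:

```python
def to_content_coords(mouse, autoscroll):
    """Translate a client-area mouse point into content (panel) coordinates.

    AutoScrollPosition is negative once the panel has scrolled away from
    (0, 0), so subtracting it adds the scroll offset back in.
    """
    return (mouse[0] - autoscroll[0], mouse[1] - autoscroll[1])

# Scrolled 200 px to the right: AutoScrollPosition reports (-200, 0),
# so a mouse at client (0, 0) maps to content point (200, 0).
print(to_content_coords((0, 0), (-200, 0)))
```

The same subtraction works for both axes, which is why the accepted answer applies it to e.X and e.Y symmetrically.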
Q: Reading file backward while using variable from first line

I want to read a file backward while using a variable present in the first line (here: 2636).

My file:

nx_ 2355 ny_ 2636
0.000000 0.000000 0.000000
1.000000 68.000000 0.428139
2.000000 68.000000 0.939878
3.000000 67.000000 0.757181
4.000000 68.000000 0.000000
5.000000 69.000000 -1.229728

To read the file forward, and process it, I used:

cat $1 | awk 'NR==1 {nb=$4} NR>1 {up=nb-$1; print $2,up,$3}'

To read the file backward it seems I should use tac, but I don't know how to retrieve the variable in the first line, and avoid processing the last line. I am searching for something like this:

tac $1 | awk 'NR==END {nb=$4} NR<END {up=nb-$1; print $2,up,$3}'

I want to have as output:

69.000000 2631 -1.229728
68.000000 2632 0.000000
67.000000 2633 0.757181
68.000000 2634 0.939878
68.000000 2635 0.428139
0.000000 2636 0.000000

A: Is this what you're trying to do?

$ awk 'NR==1{nb=$4; next} {print $2, nb-$1, $3}' file | tac
69.000000 2631 -1.229728
68.000000 2632 0.000000
67.000000 2633 0.757181
68.000000 2634 0.939878
68.000000 2635 0.428139
0.000000 2636 0.000000

If you really did have to do what you said you wanted to do then that'd be this:

$ read -r _ _ _ nb < file; tail -n +2 file | tac | awk -v nb="$nb" '{print $2, nb-$1, $3}'
69.000000 2631 -1.229728
68.000000 2632 0.000000
67.000000 2633 0.757181
68.000000 2634 0.939878
68.000000 2635 0.428139
0.000000 2636 0.000000

or this:

$ read -r _ _ _ nb < file; tac file | awk -v nb="$nb" 'NR>1{print p[2], nb-p[1], p[3]} {split($0,p)}'
69.000000 2631 -1.229728
68.000000 2632 0.000000
67.000000 2633 0.757181
68.000000 2634 0.939878
68.000000 2635 0.428139
0.000000 2636 0.000000

or similar, but it seems unlikely that's what you really need given the script you posted. If the above doesn't answer your question then edit your question to clarify your requirements and provide the expected output given your posted sample input.
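As a quick end-to-end check, the accepted one-liner can be reproduced from scratch. This is a sketch; `data.txt` is a hypothetical stand-in for the question's file:

```shell
# Recreate the question's sample input
cat > data.txt <<'EOF'
nx_ 2355 ny_ 2636
0.000000 0.000000 0.000000
1.000000 68.000000 0.428139
2.000000 68.000000 0.939878
3.000000 67.000000 0.757181
4.000000 68.000000 0.000000
5.000000 69.000000 -1.229728
EOF

# Grab nb from the first line, compute nb - $1 for the remaining lines,
# then reverse the result with tac
awk 'NR==1{nb=$4; next} {print $2, nb-$1, $3}' data.txt | tac
```

Processing forward and reversing the finished output sidesteps the original problem entirely: awk sees the header first, so nb is already set before any data line is processed.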
It is well known in motor vehicles to provide an air bag module mounted to a vehicle steering wheel. A typical driver's side air bag module includes a generally circular inflator positioned partially within a bag opening of an air bag for discharging inflator gas to inflate the air bag upon sensing certain predetermined vehicle conditions. The inflator, cover, and air bag are each mounted to the base plate to form the air bag module. The cover of the air bag module overlies the air bag, inflator and other module components to form an aesthetically pleasing cover which is durable for normal vehicle use. The cover commonly has tear lines or weakened portions that allow the cover to open during air bag inflation. It is known to connect the cover of the air bag module to the base plate of a driver's side module or the housing of a passenger's side module by a plurality of fasteners. The use of multiple fasteners increases assembly time. The prior art has also suggested the use of mating overlapping tabs on the cover and base plate or housing. However, assembly is still difficult since the tabs on the plate or housing are rigid metal which must be bent into place. The suggestion has also been made to form tabs in the cover which are trapped between the base plate and an additional relatively heavy plate-like structure needed to hold the cover in place during air bag inflation. Many of the prior art covers have the significant disadvantage of being difficult to disassemble from the module, thus limiting access beneath the cover for serviceability of components, such as a horn switch. In addition, the module cover and fasteners in the prior art are likely to be damaged during disassembly of the cover from the module.
Introduction ============ Bacterial-incited plant diseases account for significant production losses to agricultural crops. Disease control is a major challenge as a result of various factors including pathogen variation, ability to overcome plant genetic resistance, lack of effective bactericides as a result of strains developing tolerance, and the pathogen's ability to reach high populations in a relatively short period of time when conditions are favorable for disease development. Antibiotics and copper-based compounds have been the principal bactericides used for disease control. Copper has been the most widely used bactericide; however, copper resistance is present in many plant pathogenic bacteria.[@R1]^-^[@R7] Antibiotics have also been used as part of a management strategy for various bacterial diseases since the 1950s.[@R8]^-^[@R10] Streptomycin, an aminoglycoside antibiotic, was used extensively for control of bacterial diseases and as a result, streptomycin-resistant strains became prevalent, resulting in reduced disease control efficacy of bacterial spot of tomato and pepper[@R8] as well as fireblight of apple and pear.[@R11] An alternative to conventional bactericides has been to use systemic acquired resistance (SAR) inducing compounds also known as plant activators, which have provided a level of control against various bacterial diseases,[@R12]^-^[@R16] but may have negative physiological effects on plant growth and yield.[@R15]^,^[@R16] Bacteriophages (phages) offer an alternative to conventional management strategies for controlling bacterial plant diseases.[@R17]^-^[@R28]^,^ Although many studies provided positive results using phage, phage therapy has not been considered a good strategy for controlling plant pathogenic bacteria because of its unreliability[@R29] and the narrow spectrum of activity intrinsic to phages.[@R30] Additionally, the plant environments in which phage are required to operate are less than ideal. 
Within the phyllosphere, UV exposure, intense visible light and desiccation are all factors that reduce phage viability and disease control efficacy.[@R31] In studies examining persistence in the phyllosphere, phages applied to tomato leaves during the early morning in late May or early June were unrecoverable 24 h after application.[@R32] Compared with the phyllosphere, the rhizosphere environment is less harsh, but the phages have significant obstacles including a relatively low diffusion rate through heterogeneous soil matrices that changes as a function of available free water, biofilms that can trap phages,[@R33] soil clay particles that can reversibly adsorb phages,[@R34] and low soil pH that can inactivate phages.[@R35] In natural environments, as a result of low rates of phage diffusion and high rates of phage inactivation, low numbers of viable phages are available to lyse target bacteria.[@R31] One additional factor needed for a high degree of success is that high populations of both phage and bacterium must exist in order to initiate a chain reaction of bacterial lysis.[@R31] Although some success has been achieved with phage for controlling bacterial foliar plant diseases,[@R36] deployment of phages in agricultural systems is challenging given the need to maintain high phage populations on plant surfaces and the inability of phages to persist on leaf surfaces for extended periods of time,[@R32] as well as the inability to deliver phages at sufficient quantities to the appropriate sites. Balogh et al.[@R37] improved efficacy by applying phages in the evening to extend the time phages persisted on the leaf surface and by identifying several formulations that extended the persistence of phages on leaf surfaces. Obradovic et al.,[@R17] used these findings and demonstrated that phages effectively reduced the bacterial spot pathogen in three different field trials, providing better disease control than the standard bactericide treatment, copper-mancozeb. 
Another approach for maintaining high phage populations in the phyllosphere is to co-apply them with bacteria that are able to persist in the plant environment and that are sensitive to the phage. Thus if the bacterial populations are maintained at fairly high concentrations, they will serve as hosts for the phage and potentially maintain high phage populations. Svircev et al.,[@R38] controlled fire blight of pear by utilizing a strain of *P. agglomerans* for delivering and sustaining a mixture of four phages, which were able to lyse strains of both *P. agglomerans* and *E. amylovora,* the causal agent of fire blight. A similar strategy was used for controlling tobacco bacterial wilt, where phages were applied together with a phage-sensitive avirulent strain of the pathogen *Ralstonia solanacearum* to control the disease.[@R28] Using a similar approach, Balogh[@R39] determined in greenhouse experiments that phage persisted for extended periods of time on tomato foliage colonized by a mildly pathogenic strains of the bacterial spot of tomato pathogen, but not on non-colonized leaves. A second challenge in using phage relates to delivery site and application timing. The phage must come in direct contact with the pathogen prior to the bacterium entering the host. Therefore delivery of the phage in close proximity to potential infection sites is necessary for disease control. *Ralstonia solanacearum*, a soil inhabitant and causal agent of bacterial wilt of tomato, infects roots and then proceeds to colonize the vascular system in the stems, eventually causing the plants to wilt and die. Several studies have demonstrated control of bacterial wilt using phages.[@R22]^,^[@R27]^,^[@R40] Timely delivery of phages to the root zone prior to infection to allow for the phages to interact with the pathogen will likely be a critical factor in disease control. 
A second possible scenario relates to the phages' ability to be taken up by roots and then translocated in the xylem vessels. Translocation of phage and an associated reduction in crown gall incidence and severity was previously reported.[@R41] Therefore, control of bacterial wilt by using phages as therapeutants following infection by the bacterium may be possible. In this study, we tested two strategies for enhancing the use of bacteriophages for bacterial disease control on plants. The objectives of this study were to: (1) address the systemic nature and persistence of soil-applied phage in tomato plants, (2) assess the effectiveness of a commercial phage mixture against *R. solanacearum* for the control of tomato bacterial wilt, and (3) evaluate the use of an attenuated *X. perforans* strain to improve phage persistence on tomato leaf surfaces.

Results
=======

Systemic movement of phages in tomato plants
--------------------------------------------

Phage from a commercial phage mixture specific to *X. perforans* strain 97-2 remained at detectable levels in the absence of the host bacterium in tomato roots for more than 14 d after root application ([Fig. 1](#F1){ref-type="fig"}). Phage were also detected in foliar plant tissues at levels as high as 10^6^--10^7^ PFU/g tissue in the upper leaves and stems 2 d after initial application. Phage reached concentrations of up to 10^5^ PFU/g in root tissues on the 15th day of sampling, regardless of whether roots were damaged or left undamaged at the time of phage application. Phage levels in upper leaves and upper stems fell below the limit of detection by the 7th day in plants with damaged roots and by the 15th day in plants with undamaged roots. By the 10th day, phage were still detectable at 10^2^ to 10^4^ PFU/g in lower stem and leaf tissues of plants with either damaged or undamaged roots ([Fig. 1](#F1){ref-type="fig"}). 
![**Figure 1.** Systemic movement and persistence of *X. perforans* 97-2 specific phage mixture in tomato cultivar Bonny Best with undamaged (**A**) and damaged (**B**) roots. Four-week-old plants were drenched with 30 ml of a commercial phage mixture provided by OmniLytics Inc. at a concentration of 10^8^ PFU/ml. Control plants were treated with water. Destructive sampling was performed to evaluate phage presence in roots, upper leaves, and upper stems after 1, 2, 3, 5, 7, 10 and 15 d. Lower stems and lower leaves were evaluated after the 10th and 15th day only. Presented values are the average of two experiments.](bact-2-215-g1){#F1}

In the second set of experiments, using the single phage strain ФMI2, the concentration of phage particles detected in the roots 13 d after application dropped only one log unit compared with the phage concentration 4 h after initial application ([Fig. 2A](#F2){ref-type="fig"}). Phages were continually detected in the first and second internode within the two-week period ([Fig. 2A and B](#F2){ref-type="fig"}). Although the concentration was lower than in roots, phages were detected within 24 h following application to the soil, and remained viable in plant tissue in the absence of the host bacterium. Three days after application, phages were detected in the first and second leaf, followed by detection in the third and fourth internode two days later ([Fig. 2A](#F2){ref-type="fig"}), but this distribution was not confirmed in the second repetition of this trial ([Fig. 2B](#F2){ref-type="fig"}).

![**Figure 2.** Systemic movement and persistence of *X. euvesicatoria* ФMI2 in tomato cultivar Bonny Best. Four-week-old plants were drenched with 30 ml of ФMI2 suspension at a concentration of 3.7 × 10^8^ PFU/ml in the first experiment (**A**) and 1.7 × 10^8^ PFU/ml in the second experiment (**B**). Control plants were treated with water. 
Destructive sampling was performed to evaluate phage presence in roots, 1st and 2nd internode, 1st leaf, 3rd and 4th internode and 2nd leaf after 1, 2, 3, 5, 7, 10 and 14 d.](bact-2-215-g2){#F2}

In the third set of experiments, ФRS5, a phage specific to *R. solanacearum*, was tested for systemic movement in tomato plants after a phage suspension was applied to the soil. Phage ФRS5 was detected 24 and 48 h after application in all plant sections except the second leaf ([Fig. 3](#F3){ref-type="fig"}). The concentration of ФRS5 was highest in the roots and decreased progressively up the plant. Five days after application, ФRS5 was detected only in the roots.

![**Figure 3.** Systemic movement and persistence of *Ralstonia solanacearum* ФRS5 in tomato cultivar Bonny Best. Four-week-old plants were drenched with 30 ml of ФRS5 suspension at a concentration of 1 × 10^8^ PFU/ml. Control plants were treated with water. Destructive sampling was performed to evaluate phage presence in roots, 1st and 2nd internode, 1st leaf, 3rd and 4th internode and 2nd leaf after 1, 2, 3, 5, 7, 10 and 14 d. Presented values are the average of two experiments.](bact-2-215-g3){#F3}

Control of tomato bacterial wilt with phages
--------------------------------------------

When phage was applied at various time points prior to and following the application of *R. solanacearum* to the soil, the most effective wilt control was achieved in the treatments where the commercial RS5-specific phage mixture (ФRS5mix) was applied immediately after inoculation ([Fig. 4A](#F4){ref-type="fig"}). However, there was no effect on disease control when the single phage ФRS5 was applied immediately after inoculation ([Fig. 4B](#F4){ref-type="fig"}). Plants that were not treated with the ФRS5mix started wilting 3--5 d after inoculation (smaller, weaker plants wilted first). 
Different stages of plant wilt were observed mainly in plants that did not receive the commercial RS5-specific phage mixture. Both ФRS5mix and ФRS5 treatments were less effective when applied 3 d before inoculation and ineffective when applied 3 d after inoculation.

![**Figure 4.** Control of tomato bacterial wilt with a commercial phage mixture or a purified phage against *Ralstonia solanacearum* strain RS5. (**A**) Four-week-old tomato cv Solar Set was inoculated with *R. solanacearum* RS5 and treated with 10^8^ PFU/ml of a commercial phage mixture specific to *R. solanacearum* strain RS5 (ФRS5mix) provided by OmniLytics, Inc. (**B**) Four-week-old tomato cv Bonny Best was inoculated with RS5 and treated with a single phage strain (ФRS5). Treatments from bottom to top were: (1) ФRS5mix immediately after inoculation (ФRS5mix, ia RS5), (2) ФRS5mix and non-inoculated (ФRS5mix, No RS5), (3) untreated, inoculated (No ФRS5mix, RS5), (4) untreated, non-inoculated (No ФRS5mix, No RS5), (5) ФRS5mix 3 d before inoculation (ФRS5mix, 3db RS5), (6) (ФRS5mix, 3db and ia RS5), (7) ФRS5mix 3 d after inoculation (ФRS5mix, 3da RS5), (8) (ФRS5mix, 3db, ia and 3da RS5), and (9) (ФRS5mix, ia and 3da RS5). Presented values are the average of two experiments. Means followed by the same letter are not significantly different based on Fisher's protected LSD method (α = 0.05).](bact-2-215-g4){#F4}

Effect of OPG mutant on phage persistence in greenhouse conditions
------------------------------------------------------------------

In the greenhouse, phage persistence was consistently higher on leaflets from plants treated with attenuated mutants compared with leaflets that only received phage ([Fig. 5](#F5){ref-type="fig"}). 
Although phage populations were below the limit of detection 7 d after the phage application on leaflets that did not receive an attenuated mutant, phages were still recovered from leaves that were pre-treated with the attenuated *Xanthomonas perforans* strains 91-118:Δ*opgH,* 91-118:Δ*gumD* and 91-118::Δ*opgH*Δ*gumD* even 10 d after the initial phage application. Calculated AUPPC values were statistically lower (p = 0.0249) in phage-alone applications compared with phage treatments that included the attenuated mutants ([Table 1](#T1){ref-type="table"}).

![**Figure 5.** The effect of attenuated *Xanthomonas perforans* strains 91-118:Δ*opgH,* 91-118:Δ*gumD* and 91-118::Δ*opgH*Δ*gumD* on the persistence of phage Xv3-1 (Ф), specific to *X. perforans* 91-118, on tomato leaf surfaces over time. Three plants (3--4 weeks old) were dipped in a 10^6^ cfu/ml suspension of one of the *X. perforans* strains. Phage was applied at 5 × 10^8^ PFU/mL 3 d later. Additional plants were treated with only Ф Xv3-1 as a control. A single leaflet from each plant was collected after phage application (0), and 1, 2, 4 and 7 d later, and washed to enumerate phage levels, expressed as the number of plaque forming units (PFU) per gram of leaf tissue with standard error based on 3 replicate plants per treatment.](bact-2-215-g5){#F5}

###### **Table 1.** Effect of attenuated *Xanthomonas perforans* mutants on phage (Ф) persistence based on the area under phage population curve (AUPPC) on greenhouse grown tomato plants

  Treatment^§^     AUPPC^‖^
  ---------------- ----------
  *OPG* + Ф        21.5 a^¶^
  *GumD* + Ф       23.4 a
  *GumD-OPG* + Ф   23.3 a
  Phage            15.0 b
  ***P~TRT~ =***   0.0249

^§^ Three plants were dipped in a solution (10^6^ cfu/ml) of attenuated *X. perforans* strains 91-118:Δ*opgH,* 91-118:Δ*gumD* and 91-118::Δ*opgH*Δ*gumD.* Phage Xv3-1 was applied at 5 × 10^8^ PFU/mL 3 d later. Additional plants were treated with phage alone as a control. 
^‖^ AUPPC was calculated using the formula: Σ (\[(x~i~ + x~i-1~) / 2\] (t~i~ − t~i-1~)), where x~i~ is the phage population (log PFU/ml) at each evaluation time and (t~i~ − t~i-1~) is the time between evaluations. ^¶^ Means followed by the same letter are not significantly different based on Fisher's protected LSD method (α = 0.05).

Effect of OPG mutant application on phage persistence in field conditions
-------------------------------------------------------------------------

In summer 2011, plots were sampled over a 7 d period on three separate occasions ([Fig. 6A--C](#F6){ref-type="fig"}). During the three sampling periods (May 23--29, June 6--12 and June 20--25), the trends in phage populations were quite similar ([Fig. 6A--C](#F6){ref-type="fig"}). In the absence of the OPG mutant, phage populations on tomato leaves dropped to ≤ 10 PFU/g by days 2, 4 and 2 after initial phage application during the respective sampling periods. The addition of the OPG mutant, regardless of rate, improved phage population levels beginning at day 1 for the first two sampling periods ([Fig. 6A and B](#F6){ref-type="fig"}) and at day 2 for the third sampling period ([Fig. 6C](#F6){ref-type="fig"}), and greatly extended phage persistence on leaf surfaces at detectable levels for at least 5, 3 and 5 d, respectively ([Fig. 6A--C](#F6){ref-type="fig"}). In the 2011 fall season, only one sampling period (December 8--14) was done, with similar results: by day 4, phage populations on leaves treated with the OPG mutant were higher than those treated with phage alone ([Fig. 6D](#F6){ref-type="fig"}). AUPPC analysis substantiated that the application of the attenuated OPG mutant (at both rates) statistically improved phage persistence over time compared with phage applied alone to leaf surfaces during the first two sampling periods in the summer of 2011 ([Table 2](#T2){ref-type="table"}). 
Only the OPG mutant applied at 10^7^ cfu/ml statistically improved phage levels over the phage-only treatment during the third sampling period of summer 2011 based on AUPPC. In the fall of 2011, the addition of the OPG mutant at 10^7^ or 10^8^ cfu/ml resulted in only numerically higher AUPPC values compared with phage applied alone.

![**Figure 6.** The effect of the attenuated *Xanthomonas perforans* strain 91-118:Δ*opgH* (OPG) on the persistence of phage Xv3-1 (Ф) on tomato leaf surfaces over time. Three field trials were performed during the summer of 2011 (**A--C**) and a single field trial during the fall of 2011 (**D**). For each field trial, OPG was applied to tomato plants every 2 (**A--C**) or 3 (**D**) weeks as either a 10^7^ or 10^8^ cfu/ml bacterial suspension. Five random leaflets from each plot were collected after phage application (0), and 1, 2, 4 and 7 d later, and washed to enumerate phage levels, expressed as the number of plaque forming units (PFU) per gram of leaf tissue with standard error based on 4 replicate plots per treatment. Additional plots were treated with either phage or OPG, or left untreated as a control.](bact-2-215-g6){#F6}

###### **Table 2.** Effect of *Xanthomonas perforans OPG* mutant on phage (Ф) persistence on field grown tomato plants

  Treatment^§^        Exp I-1        Exp I-2     Exp I-3    Exp II
  ------------------- -------------- ----------- ---------- -----------
  *OPG (10^7^)* + Ф   25.8^‖^ a^¶^   20.5 a      20.4 a     20.0 a
  *OPG (10^8^)* + Ф   27.4 a         21.8 a      16.6 ab    19.3 a
  Ф (Phage only)      9.0 b          10.0 b      11.7 bc    16.6 a
  *OPG (10^7^)*       2.2 c          0.0 c       5.5 cd     1.7 b
  Untreated control   0.0 c          0.0 c       3.5 d      1.3 b
  *P~TRT~ =*          \< 0.0001      \< 0.0001   0.0002     \< 0.0001

^§^ Field plots were sprayed with a solution (10^7^ or 10^8^ cfu/ml) of the attenuated *X. perforans* strain 91-118:Δ*opgH* (OPG) every two (Exp I-1, -2 and -3) or three (Exp II) weeks. 
Phage Xv3-1 was applied to foliage at 5 × 10^8^ PFU/mL on May 23 (Exp I-1), June 6 (Exp I-2), June 20 (Exp I-3) and Dec 8 (Exp II). Additional plots were treated with either phage or OPG alone, or left untreated as controls. Five leaflets were collected from each plot immediately after phage application, and 1, 2, 4 and 7 d later, to enumerate phage levels. ^‖^ Values indicate AUPPC, which was calculated using the formula: Σ (\[(x~i~ + x~i-1~) / 2\](t~i~ − t~i-1~)), where x~i~ is the phage population (log PFU/ml) at each evaluation time and (t~i~ − t~i-1~) is the time between evaluations. ^¶^ Means followed by the same letter are not significantly different based on Fisher's protected LSD method (α = 0.05).

Discussion
==========

Translocation experiments using individual phage and commercial phage mixtures demonstrated that phage could move from the root zone to the lower foliar portions of the plant for short periods of time. We also noted that phage could be maintained at high concentrations in the roots for at least 15 d, regardless of whether roots were damaged or left intact. Phage levels declined more rapidly in upper leaves and stems of tomato plants in which roots had been damaged, and were detected for a week longer in plants whose roots were not damaged. These results differed from those reported by Ward and Mahler,[@R42] who studied phage f2 uptake and translocation to distal tissues of soybean and corn grown in hydroponic solutions. They observed that uptake of phage f2, through the cut roots of corn and soybean plants, was consistently higher in the sampled upper tissues. Reduction of the phage population below detectable levels in stems and foliage of the plants with damaged roots 5 d after treatment ([Fig. 1B](#F1){ref-type="fig"}) may indicate reduced phage-absorbing capacity in plants with injured roots. In the latter two experiments the level of phage uptake and the extent of systemic movement of phages were much lower ([Figs. 
2](#F2){ref-type="fig"} and [3](#F3){ref-type="fig"}). One reason for this difference might be that single phage strains were used in the latter experiments, as opposed to the commercial phage mixture used in the first set of experiments; the difference may also reflect differences in phage virion properties. Our findings were similar to those of previous studies[@R42] demonstrating that phage uptake, irrespective of root damage, will vary depending on the type of phage, plant species, plant size, plant age and, most likely, the kind of soil or medium in which the plant is grown. In our experiments using commercial phage mixtures, *X. perforans* 97-2 specific phage were recovered from roots 1 d after application at levels that did not differ significantly from the concentrations applied. However, in the case of the pure phage strains ФMI2 and ФRS5, the highest phage levels recovered from the roots occurred within 2--3 d after the initial application ([Figs. 2](#F2){ref-type="fig"} and [3](#F3){ref-type="fig"}), approximately two log units lower than the initial concentration. This could be due either to differences in phage properties or to phage trapping by substrate particles. In all three sets of phage translocation experiments, we observed similar trends in phage levels within the root system. The highest phage levels in roots typically occurred 2--3 d after initial soil application, regardless of the phage strain or root damage ([Figs. 1](#F1){ref-type="fig"}--[3](#F3){ref-type="fig"}). Our experiments also showed that phage could initially be recovered at higher levels from upper plant parts, which then rapidly declined from the 5th to the 15th day. The decline may have been due to several factors, possibly plant defense responses or photosynthesis, since chlorophyll absorbs solar energy that might be detrimental to phage survival in the absence of a host bacterium. 
In addition, the phage appeared to differ in their ability to persist in above-ground tissues across experiments. The persistence of ФRS5 inside stem and leaf tissue was limited to 3--5 d, whereas the *X. perforans* 97-2 specific phage mixture and the *X. euvesicatoria* specific ФMI2 were recovered 7--15 d after application. However, these differences in phage persistence in above-ground tissues might be due to the age of the plant at the time of phage application: plants used to evaluate the *X. perforans* 97-2 specific phage mixture and the *X. euvesicatoria* specific ФMI2 were older than plants used to evaluate ФRS5. Regardless of whether individual phage or commercial phage mixtures were applied to the soil, phage persisted longer in tomato roots and reduced bacterial wilt severity ([Fig. 4](#F4){ref-type="fig"}), although efficacy varied depending on the timing of phage treatments relative to the pathogen, *R. solanacearum*. Fujiwara et al.[@R22] recovered a large number of phages from the phage-treated roots of *R. solanacearum* inoculated and non-inoculated tomato plants for 4 mo after phage were applied. Phage titers recovered from *R. solanacearum* inoculated plants were 10 times higher than from non-inoculated plants, which would be expected considering that phage need bacterial cells to replicate. In the presence of a suitable bacterial host, phage would be expected to persist longer in roots and confer continued protection against further bacterial infection, unless the bacterium develops resistance to the phages. Phage persistence in the phyllosphere is a limitation for the successful use of phages for control of foliar pathogens.[@R32] In greenhouse and field trials, phage persistence was dramatically improved with the prior application of a phage-sensitive, virulence-attenuated bacterial strain. This attenuated strain became established in the tomato phyllosphere and supported higher phage titers over a 7 d sampling period. 
In this study we demonstrated two different approaches for applying bacteriophages that may prove useful for managing bacterial plant diseases. We demonstrated that some phages under certain conditions can be systemically translocated inside plants and retain their viability there for days. In our present study, a commercial *X. perforans* 97-2 specific phage mixture reached the upper leaves of tomato and maintained a concentration of 10^4^ PFU/g leaf tissue for 7 d, whereas a typical foliar application would generally drop to undetectable levels within 1 or 2 d. Therefore, regular drench/drip applications could maintain a higher phage population in the tomato foliage than foliar sprays can provide. Of course, it is not known whether phages present inside the leaves could contact foliar bacterial pathogens to a degree that would affect foliar disease development. It is also unknown whether the phage concentration achievable by root absorption is high enough to be effective against vascular pathogens, like *R. solanacearum*. Based on the importance of bacterial plant diseases and the need for effective control methods, further investigation on this topic would have merit. We also showed that phage populations could be maintained at significantly higher levels in a tomato phyllosphere colonized by an attenuated strain of the host bacterium. Although the attenuated strain caused visible disease on tomato leaves, it is plausible that other mutants can be identified that would colonize the phyllosphere without causing disease and serve as suitable hosts for the phages.

Materials and Methods
=====================

Bacterial strains and phages
----------------------------

Bacterial strains used in these studies were stored at −80°C in sterile DI water with 30% glycerol, and phages were stored at 4°C in the dark. 
For all experiments, the strains used were grown on nutrient agar (NA) medium (0.8% \[wt/vol\]; BBL, Becton Dickinson and Co.) at 28°C. Bacterial suspensions were prepared from 24 h cultures grown on NA medium, adjusted to 5 × 10^8^ cfu/ml (A~600~ = 0.3), and then diluted appropriately.

Phage propagation
-----------------

For field studies, phage-sensitive bacteria were grown in liquid Nutrient Broth (NB) (BBL, Becton Dickinson and Co.) or Luria-Bertani (LB) media with shaking at 200 rpm at 28°C. After the addition of the phage and a 5 min incubation period on the bench top, the culture was shaken at 150 rpm at 28°C for 16--18 h. The culture was then sterilized, enumerated and stored at 4°C in the dark until use. This method yielded phage titers of approximately 10^10^ PFU/ml.[@R39]

Systemic movement of phage in tomato
------------------------------------

For the first set of experiments, a proprietary mixture of phage (OmniLytics Inc.) active against *X. perforans* strain 97-2 was studied using tomato plants cv Bonny Best grown in 10-cm pots containing soilless medium. Plants were maintained in a greenhouse, watered daily, and fertilized every 14 d with a soluble 20-20-20 (N-P-K) fertilizer (0.4 g/pot; Peter's Fertilizer Products, W.R. Grace & Co.). The soil surrounding 4-week-old tomato plants was drenched with 30 ml of the phage mixture (10^8^ PFU/ml). Treatments consisted of (1) root-injured plants treated with phage, (2) non-injured plants treated with phage, and (3) non-injured and non-phage-treated control plants. Roots were injured in treatment 1 by stabbing the root system with a knife at four different locations in the pot close to the base of each plant. Each treatment consisted of 21 plants, with three plants used for destructive sampling at each time point 1, 2, 3, 5, 7, 10 and 15 d after treatment. 
At each time point, the weights of washed roots, upper leaves, upper stems, lower leaves and lower stems (when plants had more than 3 leaves) were determined before blending individual samples in 25 ml nutrient broth. Blended plant tissue was transferred to a 50 ml centrifuge tube and held for about 5 min at room temperature while plant material settled to the bottom of the tube. One milliliter of supernatant was transferred to a 1.5 ml micro-centrifuge tube and 100 µl of chloroform was added. From this tube serial dilutions were made and plated with a bacterial suspension of *X. perforans* strain 97-2 for quantifying plaques after a 24 h incubation at 28°C as previously described.[@R37] The experiment was performed twice. The next set of experiments was performed similarly, but used phage strain MI2 active against *X. euvesicatoria* strain KFB189, which was isolated from the roots of field-grown pepper plants in Serbia. Treatments were similar to the previous study, except the injured-root treatment was not included. Following the phage drench application (30 ml/plant), three treated and three non-treated control plants were sampled 1, 2, 3, 5, 7, 10 and 14 d after the initial phage application. The aerial portions of the plants were carefully collected to avoid contaminating the stem and foliage samples with the phage-treated substrate. The substrate was thoroughly washed from the roots with tap water, followed by removal of free water from the plant surface by blotting with paper tissue. Plants were sectioned using a sterile scalpel into the following five sections: (1) root; (2) first and second internode; (3) first leaf; (4) third and fourth internode; and (5) second leaf. Phage was enumerated as in the first set of experiments, except plant tissues were homogenized in sterile water (1 ml water per gram of tissue) using a mortar and pestle. The experiment with the pure phage strain MI2 was performed twice. 
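The plate-count arithmetic behind these titer determinations, back-calculating PFU per gram of tissue from a plaque count, dilution and plated volume, can be sketched as follows; the function name and all numbers are illustrative, not values taken from the study:

```python
def pfu_per_gram(plaques, dilution, plated_volume_ml,
                 homogenate_volume_ml, tissue_weight_g):
    """Back-calculate phage titer per gram of tissue from a plate count.

    plaques              -- plaques counted on the countable plate
    dilution             -- dilution of the homogenate on that plate (e.g. 1e-4)
    plated_volume_ml     -- volume plated or spotted (e.g. 0.01 ml for a 10 ul spot)
    homogenate_volume_ml -- volume the tissue was blended or homogenized in
    tissue_weight_g      -- weight of the tissue sample
    """
    pfu_per_ml = plaques / (dilution * plated_volume_ml)
    return pfu_per_ml * homogenate_volume_ml / tissue_weight_g

# Illustrative: 23 plaques from a 10 ul spot of a 10^-4 dilution,
# 2 g of tissue blended in 25 ml of broth:
titer = pfu_per_gram(23, 1e-4, 0.01, 25.0, 2.0)  # ~2.9e8 PFU/g
```

Counts obtained this way are then log~10~-transformed before averaging across replicates, as described for the translocation trials.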
A third set of phage trials was performed to test the systemic nature of phages specific to *R. solanacearum*. These tomato experiments followed the same experimental procedure used in the previous MI2 phage trials, except that phage strain RS5 (ФRS5), compatible with *R. solanacearum* strain RS5, was used. The experiment was performed twice. Experimental data from all six trials were collected as the number of plaque forming units (PFU) per g of plant tissue and log~10~ transformed prior to calculating the mean value from the three replications and performing statistical analyses.

Control of tomato bacterial wilt with ФRS5
------------------------------------------

In the first set of experiments, a 10^8^ PFU/ml phage mixture specific to *Ralstonia solanacearum* strain RS5 (*ФRS5mix*) provided by OmniLytics, Inc. was used. Inoculum of *R. solanacearum* strain RS5 was grown overnight in casamino acid peptone glucose broth in a shaker at 28°C. The inoculum concentration was determined with a spectrophotometer and adjusted to 10^8^ cfu/ml with the same broth. For this experiment, 4-week-old tomato plants cv Solar Set were transplanted into 10 cm pots containing plant growth medium and placed over individual saucers that were also used for watering, to avoid cross-contamination and maintain high moisture content. The experiment had nine treatments replicated six times and arranged on a greenhouse bench in a randomized complete block design. Bacterial inoculum (6 ml) was applied as a drench around the plant using a 10 ml pipet. 
Similarly, 5 ml of *ФRS5mix* (MOI = 1) was applied according to the following treatments: (1) *ФRS5mix* immediately after (*ia*) inoculation (*ФRS5mix ia RS5*), (2) *ФRS5mix* and non-inoculated (*ФRS5mix*, *No RS5*), (3) untreated, inoculated (*No ФRS5mix*, *RS5*), (4) untreated, non-inoculated (*No ФRS5mix*, *No RS5*), (5) *ФRS5mix* 3 d before (*3db*) inoculation (*ФRS5mix 3db RS5*), (6) (*ФRS5mix 3db* and *ia RS5*), (7) *ФRS5mix* 3 d after (*3da*) inoculation (*ФRS5mix 3da RS5*), (8) (*ФRS5mix 3db, ia* and *3da RS5*) and (9) (*ФRS5mix ia* and *3da RS5*). For the second experiment, 4-week-old tomato plants cv Bonny Best were transplanted, moved to a growth chamber (16 h light/8 h dark; 26°C) and treated similarly to the previous experiment. In this experiment, treatments were applied 10 d after transplanting to give the roots time to heal and resume normal growth. Plants were similarly drenched with 6 ml of *R. solanacearum* RS5 inoculum, but this time a single phage strain, *ФRS5*, prepared as previously described at 10^8^ PFU/ml (MOI = 1), was used instead of the OmniLytics *ФRS5mix*. To avoid cross-contamination, six plants per treatment were placed in the same tray and the substrate was kept moist throughout the 14 d observation period by adding water to the trays. For both experiments, the percentage of wilted plants per treatment was evaluated after 14--21 d, and each experiment was performed twice.

Role of attenuated strains of *X. perforans* in phage persistence in phyllosphere
---------------------------------------------------------------------------------

### Greenhouse experiment

Three- to 4-week-old tomato plants of cv Bonny Best grown in 10-cm pots were maintained in the greenhouse with temperatures ranging from 25--35°C. Plants were inoculated with *X. 
perforans* 91-118:Δ*opgH,* 91-118:Δ*gumD* or 91-118:Δ*opgH*Δ*gumD* strains[@R43]^,^[@R44] separately by dipping three plants each in the appropriate bacterial suspension adjusted to 10^6^ cfu/mL and amended with 0.025% Silwet L-77 (Loveland Industries, Co.). Once disease symptoms were observed on inoculated plants, a phage suspension of 5 × 10^8^ PFU/mL (MOI = 100) was sprayed once on all treatments. Phage suspensions used in the greenhouse studies were a mixture of Agriphage (OmniLytics, Inc.) and phage stock Xv 3-1, propagated on *Xanthomonas perforans* 91-118:Δ*opgH* from the phage stocks prepared for field trials. The titer of the phage was determined over a 7 d period by sampling one leaflet from each of three plants and quantifying the phage concentrations as described above.

### Field experiments

The field experiments were located at the University of Florida's Gulf Coast Research and Education Center (GCREC). Experiments were prepared along three plastic-mulched raised beds, 100 m in length on 1.5 m bed center spacing. Each group of 3 beds was separated by a 4.6 m ditch area. Individual plots consisted of three adjacent 6.4 m bed lengths with plants spaced every 46 cm, and included a 3.7 m non-planted buffer area between plots on the same beds to minimize inter-plot movement of phage and bacterial treatments. Treatments were replicated 4 times and arranged in a randomized complete block design. All treatments and measurements were made on the center bed of each plot, using plants in the outer beds to minimize inter-plot interference. Field experiments were conducted in the summer and fall of 2011 with tomato cultivar SecuriTY 28, and the *X. perforans* 91-118:Δ*opgH* attenuated mutant as the host strain for phage persistence studies. Either a 10^7^ or 10^8^ cfu/ml suspension of *X. perforans* 91-118:Δ*opgH* was prepared in 10 mM MgSO~4~ and applied to tomato foliage in select plots before sunrise with a backpack sprayer. An enriched phage ФXv 3-1 specific to *X. 
perforans* 91-118:Δ*opgH* in a 0.75% (wt/V) skim milk suspension was applied weekly in the evening to specific plots at 10^8^ PFU/ml after the first *X. perforans* 91-118:Δ*opgH* applications were made (corresponding to an MOI of 0.1 and 1 for plots treated with 10^7^ or 10^8^ cfu/ml suspension of *X. perforans* 91-118:Δ*opgH*, respectively). Treatments included: (1) *X. perforans* 91-118:Δ*opgH* applied alone at 10^7^ cfu/ml, (2) *X. perforans* 91-118:Δ*opgH* applied at 10^7^ cfu/ml followed by phage, (3) *X. perforans* 91-118:Δ*opgH* applied at 10^8^ cfu/ml followed by phage; (4) a phage-alone control; and (5) a non-treated control. Initially, weekly applications of *X. perforans* 91-118:Δ*opgH* were made for the first 2 weeks, and then once every 2 weeks for the remainder of the summer trial and once every 3 weeks for the remainder of the fall trial.

### Phage isolation from phyllosphere and quantification of phyllosphere populations

For detection of phage in the greenhouse and field studies, leaflets were sampled to monitor phage persistence on the leaf surface on days 0, 1, 2, 4 and 7. For greenhouse studies, samples were also collected on day 10. For field trials, five leaflets were removed from the middle part of each plant to create a composite sample for each plot, while for greenhouse trials three leaflets were taken from the middle part of each plant. The samples were placed in a portable Styrofoam cooler, immediately carried to the laboratory and processed for phages as described above. The leaflets were placed in Erlenmeyer flasks containing 100 ml or 50 ml sterile DI water for field and greenhouse trials, respectively, and agitated for 15 min. One-milliliter aliquots of the rinsate were transferred to 1.5 ml microcentrifuge tubes, and 100 µL of chloroform was added to each tube. Tubes were incubated on a rotary shaker for 30 min. The chloroform was pelleted by centrifugation at 13,000 rpm for 15 min. 
The aqueous top phase was transferred into new centrifuge tubes. The tubes were centrifuged at 13,000 rpm for 15 min to remove cellular debris. The supernatant was used for enumeration of the phage titer after dilution. For enumeration of the phage titer in greenhouse and field trials, soft nutrient agar yeast extract medium (NYA) \[0.8% Nutrient Broth, 0.6% Bacto Agar and 0.2% Yeast Extract (Difco, Becton Dickinson and Co.)\] was used. Bacterial cells from 24 h-old cultures were suspended in 2 ml MgSO~4~ and 100 µL of the concentrated bacterial suspension was added to empty Petri dishes. Sixteen milliliters of warm (48°C) NYA medium was poured into the plate. The dishes were gently swirled for even distribution of the bacteria. After the medium solidified, 10 µL of each dilution of the phage suspension was spot-inoculated. After the phage suspension dried, the plates were transferred to 28°C incubators and after 24 or 48 h the plaques were counted at the appropriate dilutions. The phage concentration was calculated from the plaque number and the specific dilution and expressed as PFU/ml. Population data were log-transformed and standard errors were determined. The overall growth curve was characterized by calculating the area under the population progress curve (AUPPC). The AUPPC is a modification of the area under the disease progress curve (AUDPC), which has been used to analyze population progress:[@R45] standardized AUPPC = Σ (\[(x~i~ + x~i-1~) / 2\] (t~i~ − t~i-1~)), where x~i~ is the phage population (log PFU/ml) at each evaluation time and (t~i~ − t~i-1~) is the time in days between evaluations. The data were then subjected to an analysis of variance in SAS version 9.2 (SAS Institute, Inc.) using PROC GLIMMIX to assess the effect of treatments on AUPPC or phage populations over time. For the analyses of AUPPC data, block and the interaction of block × treatment were considered random effects in the model.
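As a minimal sketch of the two calculations described above — titer from a spot-plate count, and the trapezoidal AUPPC over the sampling days — the snippet below reproduces the arithmetic in Python. The function names and the example numbers are illustrative, not from the original study, whose analysis was run in SAS.

```python
def titer_pfu_per_ml(plaques, spot_volume_ml, dilution):
    """Phage concentration from a spot-plate count: plaques / (volume plated x dilution)."""
    return plaques / (spot_volume_ml * dilution)

def auppc(days, log_pops):
    """Standardized AUPPC: sum of trapezoids [(x_i + x_{i-1}) / 2] * (t_i - t_{i-1})."""
    return sum((log_pops[i] + log_pops[i - 1]) / 2 * (days[i] - days[i - 1])
               for i in range(1, len(days)))

# Hypothetical example: 25 plaques in a 10 uL spot of a 10^-6 dilution
print(titer_pfu_per_ml(25, 0.01, 1e-6))   # 2.5e9 PFU/ml
# Sampling days 0, 1, 2, 4, 7 with declining log PFU/ml populations
print(auppc([0, 1, 2, 4, 7], [8.0, 7.0, 6.0, 5.0, 4.0]))  # 38.5
```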
Repeated measures analyses were performed to examine phage populations over time in field and greenhouse trials, with block and the interaction of block × time fitted to a heterogeneous compound-symmetry covariance structure as a random effect in the analyses. Mean separation was based on Fisher's protected LSD method (α = 0.05). The second author was supported by a Fulbright scholarship and by National Project III46008, Ministry of Education and Science, Serbia. Previously published online: [www.landesbioscience.com/journals/bacteriophage/article/23530](http://www.landesbioscience.com/journals/bacteriophage/article/23530/) No potential conflicts of interest were disclosed. [^1]: These first authors contributed equally to this work. [^2]: These senior authors contributed equally to this work.
/*
 * DO NOT EDIT. THIS FILE IS GENERATED FROM e:/builds/moz2_slave/mozilla-1.9.1-win32-xulrunner/build/editor/txtsvc/public/nsITextServicesFilter.idl
 */

#ifndef __gen_nsITextServicesFilter_h__
#define __gen_nsITextServicesFilter_h__

#ifndef __gen_nsISupports_h__
#include "nsISupports.h"
#endif

/* For IDL files that don't want to include root IDL files. */
#ifndef NS_NO_VTABLE
#define NS_NO_VTABLE
#endif

class nsIDOMNode; /* forward declaration */

/* starting interface:    nsITextServicesFilter */
#define NS_ITEXTSERVICESFILTER_IID_STR "5bec321f-59ac-413a-a4ad-8a8d7c50a0d0"

#define NS_ITEXTSERVICESFILTER_IID \
  {0x5bec321f, 0x59ac, 0x413a, \
    { 0xa4, 0xad, 0x8a, 0x8d, 0x7c, 0x50, 0xa0, 0xd0 }}

class NS_NO_VTABLE NS_SCRIPTABLE nsITextServicesFilter : public nsISupports {
 public:

  NS_DECLARE_STATIC_IID_ACCESSOR(NS_ITEXTSERVICESFILTER_IID)

  /**
   * Indicates whether the content node should be skipped by the iterator
   *  @param aNode - node to skip
   */
  /* boolean skip (in nsIDOMNode aNode); */
  NS_SCRIPTABLE NS_IMETHOD Skip(nsIDOMNode *aNode, PRBool *_retval NS_OUTPARAM) = 0;

};

NS_DEFINE_STATIC_IID_ACCESSOR(nsITextServicesFilter, NS_ITEXTSERVICESFILTER_IID)

/* Use this macro when declaring classes that implement this interface. */
#define NS_DECL_NSITEXTSERVICESFILTER \
  NS_SCRIPTABLE NS_IMETHOD Skip(nsIDOMNode *aNode, PRBool *_retval NS_OUTPARAM);

/* Use this macro to declare functions that forward the behavior of this interface to another object. */
#define NS_FORWARD_NSITEXTSERVICESFILTER(_to) \
  NS_SCRIPTABLE NS_IMETHOD Skip(nsIDOMNode *aNode, PRBool *_retval NS_OUTPARAM) { return _to Skip(aNode, _retval); }

/* Use this macro to declare functions that forward the behavior of this interface to another object in a safe way. */
#define NS_FORWARD_SAFE_NSITEXTSERVICESFILTER(_to) \
  NS_SCRIPTABLE NS_IMETHOD Skip(nsIDOMNode *aNode, PRBool *_retval NS_OUTPARAM) { return !_to ? NS_ERROR_NULL_POINTER : _to->Skip(aNode, _retval); }

#if 0
/* Use the code below as a template for the implementation class for this interface. */

/* Header file */
class nsTextServicesFilter : public nsITextServicesFilter
{
public:
  NS_DECL_ISUPPORTS
  NS_DECL_NSITEXTSERVICESFILTER

  nsTextServicesFilter();

private:
  ~nsTextServicesFilter();

protected:
  /* additional members */
};

/* Implementation file */
NS_IMPL_ISUPPORTS1(nsTextServicesFilter, nsITextServicesFilter)

nsTextServicesFilter::nsTextServicesFilter()
{
  /* member initializers and constructor code */
}

nsTextServicesFilter::~nsTextServicesFilter()
{
  /* destructor code */
}

/* boolean skip (in nsIDOMNode aNode); */
NS_IMETHODIMP nsTextServicesFilter::Skip(nsIDOMNode *aNode, PRBool *_retval NS_OUTPARAM)
{
  return NS_ERROR_NOT_IMPLEMENTED;
}

/* End of implementation class template. */
#endif

#endif /* __gen_nsITextServicesFilter_h__ */
Estarabad Mahalleh Estarabad Mahalleh (, also Romanized as Estarābād Maḩalleh; also known as Mahalleh) is a village in Balatajan Rural District, in the Central District of Qaem Shahr County, Mazandaran Province, Iran. At the 2006 census, its population was 188, in 55 families. References Category:Populated places in Qaem Shahr County
WASHINGTON - On Wednesday, August 16th, the Kaper-Dale for Governor campaign announced that Milltown and Newark, two New Jersey cities, have water lead toxicity levels equal to or higher than those of Flint, Michigan. The announcement was made at an event held at the NJ Department of Environmental Protection in Trenton, New Jersey. Workers and passersby stopped to witness and join the rally. Workers also sympathized with the protest, but cited their lack of resources as a reason for the problems. "There is another kind of violence these first weeks of August than what we saw in Charlottesville. The violence of silence. Silence regarding a public health crisis in Newark and Milltown that also threatens all of us," said Seth Kaper-Dale in his speech. The campaign has announced that it intends to pay Newark residents $15/hr to help increase education about this water crisis. Kaper-Dale is running for Governor on the Green Party of New Jersey ticket, alongside running mate Lisa Durden, whom the campaign calls "a voice for the marginalized, from Black lives to women and children; a voice who puts the last, first." Other speakers included Barry Bendar, Green Party candidate for Freeholder in Ocean County; Aaron Hyndman, Green Party NJ co-chair and LD 24 Assembly candidate; Troy Knight-Napper, LD 28 State Senate candidate; Sean Stratton, LD 18 Assembly candidate; and Carol Gay, Our Revolution Ocean County & NJ Industrial Workers Union. The general election to elect the next Governor of New Jersey will take place on November 7th. ###
At least two Chevy dealerships in the U.S. and two in Canada are offering a retro Big 10 conversion on the 2018 Silverado, which is quickly gaining traction among fans of the iconic truck. Introduced in the early 1970s, the Cheyenne Super 10 is easy to spot thanks to its distinct two-tone paint job and Big 10 badging. Look for it again on the 2018 Silverado, which is being offered by at least two Chevy dealerships: Blake Greenfield Chevrolet Buick in Wells, Minn., and Valley Chevrolet in Wilkes-Barre, Penn. Regardless of who's selling it (Blake Greenfield Chevy claims to be the first), the retro truck's getting a lot of praise, including a big endorsement from longtime Chevy devotee and retired NASCAR great Dale Earnhardt Jr. "Damn this is a great idea," Earnhardt Jr. posted on Twitter along with a picture of a red and white 2018 Silverado Retro Big 10. In a video posted below, Valley Chevrolet lists the following details on its Retro Big 10 conversion: 3.5-inch lift kit, 18-inch rally wheels with BF Goodrich white-lettered tires (All-Terrain T/A), a two-tone decal package, Big 10 special emblems, chrome mirror covers and chrome door handles. The $8,000 conversion raises the MSRP to $57,474. Blake Greenfield Chevy lists their four-wheel-drive, double-cab Z71 2018 Big 10 at $51,410. Silverados from model years 2014 to 2018 are eligible for the conversion at the Minnesota dealership. The retro Big 10 continues to win over fans, including discerning Bow Tie followers. "Considering how far truck styling has come since the Cheyenne was introduced in 1971, you'd think that any attempt to apply it to a new Silverado might turn out to be an unmitigated mess. But you'd be wrong. Because as you can see, the fine folks at Blake Greenfield Chevrolet Buick pulled it off better than we could have ever imagined," reports chevroletforum.com. Canadian dealerships selling the retro package include Northgate GM in Edmonton, Alberta, and Barber Motors in Weyburn, Saskatchewan.
The association of estimated whole blood viscosity with hemodynamic parameters and prognosis in patients with heart failure. We aimed to investigate the association of estimated whole blood viscosity (WBV) with hemodynamic parameters and prognosis in patients with heart failure with reduced ejection fraction. A total of 542 patients were included and followed up for a median of 13 months. The WBV parameters had a negative relationship with right atrium pressure and a positive correlation with cardiac index. The WBV parameters were found to be independent predictors of the composite end point (CEP) and all-cause mortality. Each one-cP increase in WBV(h) and WBV(l) was associated with 17% and 1% reductions in CEP, respectively. In Kaplan-Meier analysis, patients in the low WBV quartiles were found to have significantly more CEP. Being an easily accessible and costless prognosticator, WBV seems to be a novel marker for determining prognosis and an emerging tool for individualizing the management of heart failure with reduced ejection fraction.
🔼Peresh meaning For a meaning of the name Peresh, NOBSE Study Bible Name List reads Dung and Jones' Dictionary of Old Testament Proper Names proposes Excrement. BDB Theological Dictionary does not interpret our name but does confirm that it is identical to the noun פרש (peresh). Note that our name doesn't specifically refer to fecal matter but rather more generally to Belly Content, and brings to mind the mass of slithering intestines that emerges when an animal is slaughtered and disemboweled. Peresh's brother is named Sheresh, which appears to mean To Uproot, and these two names reflect the same archetypal duality as do the characters of Cain and Abel: the farmer and the shepherd. Our names Peresh and Sheresh reflect the first acts of the process of food preparation: the gutting of an animal and the uprooting of a plant.
angular.module('Bastion.routing', ['ui.router', 'ui.router.state.events']);

(function () {
    'use strict';

    /**
     * @ngdoc config
     * @name Bastion.routing.config
     *
     * @requires $urlRouterProvider
     * @requires $locationProvider
     *
     * @description
     *   Routing configuration for Bastion.
     */
    function bastionRouting($stateProvider, $urlRouterProvider, $locationProvider) {
        var oldBrowserBastionPath = '/bastion#',
            getRootPath, shouldRemoveTrailingSlash;

        getRootPath = function (path) {
            var rootPath = null;
            if (path && angular.isString(path)) {
                rootPath = path.replace('_', '-').split('/')[1];
            }
            return rootPath;
        };

        shouldRemoveTrailingSlash = function (path) {
            var whiteList = ['pulp'],
                remove = true;

            if (path.split('/')[1] && whiteList.indexOf(path.split('/')[1]) >= 0) {
                remove = false;
            }
            return remove;
        };

        $stateProvider.state('404', {
            permission: null,
            templateUrl: 'layouts/404.html'
        });

        $urlRouterProvider.rule(function ($injector, $location) {
            var $sniffer = $injector.get('$sniffer'),
                $window = $injector.get('$window'),
                path = $location.path();

            if (!$sniffer.history) {
                $window.location.href = oldBrowserBastionPath + $location.path();
            }

            // removing trailing slash to prevent endless redirect if not in ignore list
            if (path[path.length - 1] === '/' && shouldRemoveTrailingSlash(path)) {
                return path.slice(0, -1);
            }
        });

        $urlRouterProvider.otherwise(function ($injector, $location) {
            var $window = $injector.get('$window'),
                $state = $injector.get('$state'),
                rootPath = getRootPath($location.path()),
                url = $location.url(),
                foundParentState;

            // ensure we don't double encode +s
            url = url.replace(/%2B/g, "+");

            // Remove the old browser path if present
            url = url.replace(oldBrowserBastionPath, '');

            if (rootPath) {
                foundParentState = _.find($state.get(), function (state) {
                    var found = false,
                        stateUrl = $state.href(state);
                    if (stateUrl) {
                        found = getRootPath(stateUrl) === rootPath;
                    }
                    return found;
                });
            }

            if (foundParentState) {
                $state.go('404');
            } else {
                $window.location.href = url;
            }

            return $location.url();
        });

        $locationProvider.html5Mode({enabled: true, requireBase: false});
    }

    angular.module('Bastion.routing').config(bastionRouting);

    bastionRouting.$inject = ['$stateProvider', '$urlRouterProvider', '$locationProvider'];
})();
13.1: Electric Circuits: Batteries and Resistors

The name electric current is given to the phenomenon that occurs when an electric field moves down a wire at close to the speed of light. Voltage is the electrical energy density (energy divided by charge), and differences in this density (voltage) cause electric current. Resistance is the amount a device in the wire resists the flow of current by converting electrical energy into other forms of energy. A device, the resistor, could be a light bulb, transferring electrical energy into heat and light, or an electric motor that converts electric energy into mechanical energy. The difference in energy density across a resistor or other electrical device is called the voltage drop.

In electric circuits (closed loops of wire with resistors and constant voltage sources) energy must be conserved. It follows that the changes in energy density, the algebraic sum of voltage drops and voltage sources, around any closed loop will equal zero. In an electric junction there is more than one possible path for current to flow. For charge to be conserved at a junction, the current into the junction must equal the current out of the junction.

Key Concepts

Ohm's law, \begin{align*}V = IR\end{align*}, is the main equation for electric circuits but it is often misused. In order to calculate the voltage drop across a light bulb, use the formula \begin{align*}V_{lightbulb} = I_{lightbulb}R_{lightbulb}\end{align*}. For the total current flowing out of the power source, you need the total resistance of the circuit and the total voltage: \begin{align*}V_{total} = I_{total}R_{total}\end{align*}.

Power is the rate at which energy is released. The units for power are Watts \begin{align*}(W)\end{align*}, which equal Joules per second \begin{align*}[W] = [J]/[s]\end{align*}. Therefore, a \begin{align*}60\;\mathrm{W}\end{align*} light bulb releases \begin{align*}60\end{align*} Joules of energy every second.

The equation used to calculate the power dissipated in a circuit is \begin{align*}P=IV\end{align*}. As with Ohm's Law, one must be careful not to mix apples with oranges. If you want the power of the entire circuit, then you multiply the total voltage of the power source by the total current coming out of the power source. If you want the power dissipated (i.e. released) by a light bulb, then you multiply the voltage drop across the light bulb by the current going through that light bulb.

Table of electrical symbols and units

Voltage (\begin{align*}V\end{align*}). Units: Volts \begin{align*}(V)\end{align*}. Analogy: A water dam with pipes coming out at different heights. The lower the pipe along the dam wall, the larger the water pressure, thus the higher the voltage. Examples: Battery, the plugs in your house, etc.

Current (\begin{align*}I\end{align*}). Units: Amps \begin{align*}(A)\end{align*}, where \begin{align*}A = \mathrm{C/s}\end{align*}. Analogy: A river of water. Objects connected in series are all on the same river, thus receive the same current. Objects connected in parallel make the main river branch into smaller rivers. These guys all have different currents. Examples: Whatever you plug into your wall sockets draws current.

Resistance (\begin{align*}R\end{align*}). Units: Ohms \begin{align*}(\Omega)\end{align*}. Analogy: If current is analogous to a river, then resistance is the amount of rocks in the river. The bigger the resistance the less current that flows. Examples: Light bulb, Toaster, etc.

Resistors in Series: All resistors are connected end to end. There is only one river, so they all receive the same current. But since there is a voltage drop across each resistor, they may all have different voltages across them. The more resistors in series the more rocks in the river, so the less current that flows.

Resistors in Parallel: All resistors are connected together at both ends. There are many rivers (i.e.
The main river branches off into many other rivers), so all resistors receive different amounts of current. But since they are all connected to the same point at both ends they all receive the same voltage. DC Power: Voltage and current flow in one direction. Examples are batteries and the power supplies we use in class. AC Power: Voltage and current flow in alternate directions. In the US they reverse direction 60 times a second. (This is a more efficient way to transport electricity and electrical devices do not care which way it flows as long as current is flowing. Note: your TV and computer screen are actually flickering 60 times a second due to the alternating current that comes out of household plugs. Our eyesight does not work this fast, so we never notice it. However, if you film a TV or computer screen the effect is observable due to the mismatched frame rates of the camera and TV screen.) Electrical current coming out of your plug is an example. Ammeter: A device that measures electric current. You must break the circuit to measure the current. Ammeters have very low resistance; therefore you must wire them in series. Voltmeter: A device that measures voltage. In order to measure a voltage difference between two points, place the probes down on the wires for the two points. Do not break the circuit. Volt meters have very high resistance; therefore you must wire them in parallel. Voltage source: A power source that produces fixed voltage regardless of what is hooked up to it. A battery is a real-life voltage source. A battery can be thought of as a perfect voltage source with a small resistor (called internal resistance) in series. The electric energy density produced by the chemistry of the battery is called emf, but the amount of voltage available from the battery is called terminal voltage. The terminal voltage equals the emf minus the voltage drop across the internal resistance (current of the external circuit times the internal resistance.) 
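The series/parallel rules and Ohm's law from the Key Concepts above can be sketched in a few lines of Python. The function names and the sample values (taken from the problem set's 80 Ω bulb on a 9 V battery, and the three 82 Ω plus one 12 Ω parallel combination) are illustrative.

```python
def series(*resistances):
    """Equivalent resistance in series: R_eq = R1 + R2 + ..."""
    return sum(resistances)

def parallel(*resistances):
    """Equivalent resistance in parallel: 1/R_eq = 1/R1 + 1/R2 + ..."""
    return 1.0 / sum(1.0 / r for r in resistances)

def current(voltage, resistance):
    """Ohm's law, I = V / R."""
    return voltage / resistance

def power(voltage, amps):
    """Power dissipated, P = I * V."""
    return voltage * amps

# An 80-ohm light bulb on a 9 V battery:
i = current(9, 80)               # 0.1125 A
p = power(9, i)                  # ~1.01 W
# Three 82-ohm resistors and one 12-ohm resistor in parallel:
r_eq = parallel(82, 82, 82, 12)  # ~8.34 ohms
```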
Key Equations \begin{align*}I = \triangle q/ \triangle t\end{align*} current is the rate at which charge passes by; the units of current are Amperes \begin{align*}(1\;\mathrm{A} = 1\;\mathrm{C/s})\end{align*}. \begin{align*}\triangle V = I \cdot R\end{align*} the current flow through a resistor depends on the applied electric potential difference across it; the units of resistance are Ohms \begin{align*}(1 \Omega = 1\;\mathrm{V/A})\end{align*}. \begin{align*}P = I \cdot \triangle V\end{align*} the power dissipated by a resistor is the product of the current through the resistor and the applied electric potential difference across it; the units of power are Watts \begin{align*}(1\;\mathrm{W} = 1\;\mathrm{J/s})\end{align*}. Electric Circuits Problem Set The current in a wire is 4.5 A. How many coulombs per second are going through the wire? How many electrons per second are going through the wire? A light bulb with resistance of \begin{align*}80\ \Omega\end{align*} is connected to a \begin{align*}9\;\mathrm{V}\end{align*} battery. What is the electric current going through it? What is the power (i.e. wattage) dissipated in this light bulb with the \begin{align*}9\;\mathrm{V}\end{align*} battery? How many electrons leave the battery every hour? How many Joules of energy leave the battery every hour? A \begin{align*}120\;\mathrm{V}, 75\;\mathrm{W}\end{align*} light bulb is shining in your room and you ask yourself… What is the resistance of the light bulb? How bright would it shine with a \begin{align*}9\;\mathrm{V}\end{align*} battery (i.e. what is its power output)? A bird is standing on an electric transmission line carrying \begin{align*}3000\;\mathrm{A}\end{align*} of current. A wire like this has about \begin{align*}3.0 \times 10^{-5}\ \Omega\end{align*} of resistance per meter. The bird’s feet are \begin{align*}6\;\mathrm{cm}\end{align*} apart. 
The bird, itself, has a resistance of about \begin{align*}4 \times 10^5\ \Omega.\end{align*} What voltage does the bird feel? What current goes through the bird? What is the power dissipated by the bird? By how many Joules of energy does the bird heat up every hour? Which light bulb will shine brighter? Which light bulb will shine for a longer amount of time? Draw the schematic diagram for both situations. Note that the objects on the right are batteries, not resistors. Regarding the circuit to the right. If the ammeter reads \begin{align*}2\;\mathrm{A}\end{align*}, what is the voltage? How many watts is the power supply supplying? How many watts are dissipated in each resistor? Three \begin{align*}82\ \Omega\end{align*} resistors and one \begin{align*}12\ \Omega\end{align*} resistor are wired in parallel with a \begin{align*}9\;\mathrm{V}\end{align*} battery. Draw the schematic diagram. What is the total resistance of the circuit? What will the ammeter read for the circuit shown to the right? Draw the schematic of the following circuit. What does the ammeter read and which resistor is dissipating the most power? Analyze the circuit below. Find the current going out of the power supply How many Joules per second of energy is the power supply giving out? Find the current going through the \begin{align*}75\ \Omega\end{align*} light bulb. Find the current going through the \begin{align*}50\ \Omega\end{align*} light bulbs (hint: it’s the same, why?). Order the light bulbs in terms of brightness If they were all wired in parallel, order them in terms of brightness. Find the total current output by the power supply and the power dissipated by the \begin{align*}20\ \Omega\end{align*} resistor. You have a \begin{align*}600\;\mathrm{V}\end{align*} power source, two \begin{align*}10\ \Omega\end{align*} toasters that both run on \begin{align*}100\;\mathrm{V}\end{align*} and a \begin{align*}25\ \Omega\end{align*} resistor. 
Show me how you would wire them up so the toasters run properly. What is the power dissipated by the toasters? Where would you put the fuses to make sure the toasters don’t draw more than 15 Amps? Where would you put a \begin{align*}25\end{align*} Amp fuse to prevent a fire (if too much current flows through the wires they will heat up and possibly cause a fire)? Look at the following scheme of four identical light bulbs connected as shown. Answer the questions below giving a justification for your answer: Which of the four light bulbs is the brightest? Which light bulbs are the dimmest? Tell in the following cases which other light bulbs go out if: bulb \begin{align*}A\end{align*} goes out bulb \begin{align*}B\end{align*} goes out bulb \begin{align*}D\end{align*} goes out Tell in the following cases which other light bulbs get dimmer, and which get brighter if: bulb \begin{align*}B\end{align*} goes out bulb \begin{align*}D\end{align*} goes out Refer to the circuit diagram below and answer the following questions. What is the resistance between \begin{align*}A\end{align*} and \begin{align*}B\end{align*}? What is the resistance between \begin{align*}C\end{align*} and \begin{align*}B\end{align*}? What is the resistance between \begin{align*}D\end{align*} and \begin{align*}E\end{align*}? What is the the total equivalent resistance of the circuit? What is the current leaving the battery? What is the voltage drop across the \begin{align*}12\ \Omega\end{align*} resistor? What is the voltage drop between \begin{align*}D\end{align*} and \begin{align*}E\end{align*}? What is the voltage drop between \begin{align*}A\end{align*} and \begin{align*}B\end{align*}? What is the current through the \begin{align*}25\ \Omega\end{align*} resistor? What is the total energy dissipated in the \begin{align*}25\ \Omega\end{align*} if it is in use for 11 hours? 
In the circuit shown here, the battery produces an emf of \begin{align*}1.5\;\mathrm{V}\end{align*} and has an internal resistance of \begin{align*}0.5\ \Omega\end{align*}. Find the total resistance of the external circuit. Find the current drawn from the battery. Determine the terminal voltage of the battery Show the proper connection of an ammeter and a voltmeter that could measure voltage across and current through the \begin{align*}2\ \Omega\end{align*} resistor. What measurements would these instruments read? Students measure an unknown resistor and list their results in the Table (below); based on their results, complete the following: Show a circuit diagram with the connections to the power supply, ammeter and voltmeter. Graph voltage vs. current; find the best-fit straight line. Use this line to determine the resistance. How confident can you be of the results? Use the graph to determine the current if the voltage were \begin{align*}13\;\mathrm{V}\end{align*}. Voltage \begin{align*}(v)\end{align*} Current \begin{align*}(a)\end{align*} \begin{align*} 15\end{align*} \begin{align*}.11\end{align*} \begin{align*}12\end{align*} \begin{align*}.08\end{align*} \begin{align*}10\end{align*} \begin{align*}.068\end{align*} \begin{align*}8\end{align*} \begin{align*}.052\end{align*} \begin{align*}6\end{align*} \begin{align*}.04\end{align*} \begin{align*}4\end{align*} \begin{align*}.025\end{align*} \begin{align*}2\end{align*} \begin{align*}.01\end{align*} Students are now measuring the terminal voltage of a battery hooked up to an external circuit. They change the external circuit four times and develop the Table (below); using this data, complete the following: Graph this data, with the voltage on the vertical axis. Use the graph to determine the emf of the battery. Use the graph to determine the internal resistance of the battery. What voltage would the battery read if it were not hooked up to an external circuit? 
Terminal Voltage (V)   Current (A)
14.63                  0.15
14.13                  0.35
13.62                  0.55
12.88                  0.85

Students are using a variable power supply to quickly increase the voltage across a resistor. They measure the current and the time the power supply is on. Use the Table (below) that they developed to complete the following: Graph voltage vs. current. Explain the probable cause of the anomalous data after \begin{align*}8\end{align*} seconds. Determine the likely value of the resistor and explain how you used the data to support this determination. Graph power vs. time. Determine the total energy dissipation during the \begin{align*}18\end{align*} seconds.

Time (s)   Voltage (V)   Current (A)
0          0             0
2          10            1.0
4          20            2.0
6          30            3.0
8          40            3.6
10         50            3.8
12         60            3.5
14         70            3.1
16         80            2.7
18         90            2.0

You are given the following three devices and a power supply of exactly \begin{align*}120\;\mathrm{v}\end{align*}.
\begin{align*}^*\text{Device}\ X\end{align*} is rated at \begin{align*}60\;\mathrm{V}\end{align*} and \begin{align*} 0.5\;\mathrm{A}\end{align*}\begin{align*}^*\text{Device}\ Y\end{align*} is rated at \begin{align*}15\;\mathrm{w}\end{align*} and \begin{align*}0.5\;\mathrm{A}\end{align*}\begin{align*}^*\text{Device}\ Z\end{align*} is rated at \begin{align*}120\;\mathrm{V}\end{align*} and \begin{align*}1800\;\mathrm{w}\end{align*} Design a circuit that obeys the following rules: you may only use the power supply given, one sample of each device, and an extra, single resistor of any value (you choose). Also, each device must be run at their rated values. Given three resistors, \begin{align*}200\ \Omega, 300\ \Omega\end{align*} and \begin{align*}600\ \Omega\end{align*} and a \begin{align*}120\;\mathrm{V}\end{align*} power source connect them in a way to heat a container of water as rapidly as possible. Show the circuit diagram How many joules of heat are developed after 5 minutes? Construct a circuit using the following devices: a \begin{align*}120\;\mathrm{V}\end{align*} power source. Two \begin{align*}9\ \Omega\end{align*} resistors, device A rated at \begin{align*}1\;\mathrm{A}\end{align*}, \begin{align*}6\;\mathrm{V}\end{align*}; device \begin{align*}B\end{align*} rated at \begin{align*}2\;\mathrm{A}\end{align*}, \begin{align*}60\;\mathrm{V}\end{align*}; device \begin{align*}C\end{align*} rated at \begin{align*}225\;\mathrm{w}\end{align*}, \begin{align*}3\;\mathrm{A}\end{align*}; device \begin{align*}D\end{align*} rated at \begin{align*}15\;\mathrm{w}\end{align*}, \begin{align*}15\;\mathrm{V}\end{align*}. You have a battery with an emf of \begin{align*}12\;\mathrm{V}\end{align*} and an internal resistance of \begin{align*}1.00\ \Omega\end{align*}. Some \begin{align*}2.00\;\mathrm{A}\end{align*} are drawn from the external circuit. 
What is the terminal voltage The external circuit consists of device \begin{align*}X\end{align*}, \begin{align*}0.5\;\mathrm{A}\end{align*} and \begin{align*}6\;\mathrm{V}\end{align*}; device \begin{align*}Y\end{align*}, \begin{align*}0.5\;\mathrm{A}\end{align*} and \begin{align*}10\;\mathrm{V}\end{align*}, and two resistors. Show how this circuit is connected. Determine the value of the two resistors. Students use a variable power supply an ammeter and three voltmeters to measure the voltage drops across three unknown resistors. The power supply is slowly cranked up; use the data Table (below) to complete the following: Draw a circuit diagram, showing the ammeter and voltmeter connections. Graph the above data with voltage on the vertical axis. Use the slope of the best-fit straight line to determine the values of the three resistors. Quantitatively discuss the confidence you have in the results What experimental errors are most likely might have contributed to any inaccuracies.
(-10)/6*18/(-3). 8 Calculate the remainder when 36 is divided by (1/2)/((-1)/(-38) + 0). 17 Suppose -3*o - 5*a = -111, 4*a - 9*a = 4*o - 153. Calculate the remainder when o is divided by 15. 12 Suppose 0 = 4*y + k - 0*k - 1109, 2*k + 266 = y. Suppose 0 = -4*c + y - 60. What is the remainder when c is divided by 19? 16 Let j(i) = 4*i + 7. Let k(p) = p**3 - p**2 + 14. What is the remainder when j(8) is divided by k(0)? 11 Let r(x) = -x**2 + 4*x + 6. Let a be r(5). Let h be -36 + 0 - 2 - 0. What is the remainder when (3 - 2) + a - h is divided by 14? 12 Let b be -1 - 6/(-3) - -1. Suppose -b*m = -17 - 133. What is the remainder when m is divided by 19? 18 Suppose -4*k = -5*o + 116, 12 = 4*o - o. Let f(n) = n**2 - 4*n - 5. What is the remainder when (-118)/(-6) - 8/k is divided by f(6)? 6 Calculate the remainder when 9 is divided by (-6)/(-4)*(2 + 0). 0 Let y(n) = 8*n - 20. Calculate the remainder when y(7) is divided by 20. 16 Suppose u + 8 = 2*o - 38, 4*o = u + 94. Calculate the remainder when o is divided by 15. 9 Let u = 9 + 10. Calculate the remainder when 56 is divided by u. 18 Let j = 11 + -7. Suppose -3*v + 42 = -0*v. Calculate the remainder when v is divided by j. 2 Let i(t) = t. Suppose 2*n + 1 + 1 = 0. Let f(p) = -2*p**2 + 14*p - 2. Let b(m) = n*f(m) + 5*i(m). Calculate the remainder when b(6) is divided by 7. 6 Let h(t) = 51*t + 8. Calculate the remainder when h(5) is divided by 53. 51 Let k be 26/8 - (-2)/(-8). Suppose 3*d - 6*d - k = 0, 0 = 3*x - 2*d - 17. What is the remainder when (-2)/(-8) - (-55)/4 is divided by x? 4 Suppose -4*w + 48 = -3*v, -3*v = 12 - 0. Calculate the remainder when 60 is divided by w. 6 Let w = 15 - -8. Calculate the remainder when 42 is divided by w. 19 Let n = -1 + 5. What is the remainder when ((-7)/(-5))/(9/45) is divided by n? 3 Let o = -15 - -25. What is the remainder when 3 is divided by 22/8 + o/40? 0 Calculate the remainder when 8/(-20) - (1314/(-10) + -3) is divided by 20. 14 Suppose 0 = -4*f - 0*f + 84. 
Suppose x - 3*j = -4*j + 1, j + 15 = -5*x. Calculate the remainder when (-622)/(-10) - x/(-20) is divided by f. 20 Let x = -26 + 70. Suppose 0 = a - 2*a - 5*h, 3*h + 9 = 0. Calculate the remainder when x is divided by a. 14 Let h be (-3 + -1)*(-1)/2. What is the remainder when 19 is divided by 34/6 + h/(-3)? 4 Suppose 3 = -2*d + 15. Let i = d - -2. What is the remainder when 23 is divided by i? 7 Suppose w - 72 = -5*w. Calculate the remainder when 69 is divided by w. 9 Let w(d) = -8*d**2 + 2*d - 1. Let k be w(-2). Let c = k - -148. What is the remainder when c is divided by 28? 27 Suppose 0 = 2*d + 5*h - 23, 11 + 0 = -d + 5*h. Let b(l) = -l**2 - 5*l + 7. Let n be b(-5). Suppose n*t - 24 = d*t. What is the remainder when t is divided by 3? 2 Suppose n + 174 = 6*n - c, -4*c - 16 = 0. Let p(w) = 2*w + 2. Calculate the remainder when n is divided by p(5). 10 Let s(j) = j**3 + 7*j**2 - 9*j + 11. Let w be -3*(16/3)/2. Calculate the remainder when s(w) is divided by 7. 5 Suppose 15 = -j + 4*j, 5*j - 345 = -5*q. Let d = 1 + q. What is the remainder when d is divided by 17? 14 Suppose -2*v + s + 45 = 0, -6 - 42 = -2*v - 2*s. Calculate the remainder when v is divided by 6. 5 Calculate the remainder when (-44*3/2)/((-6)/4) is divided by 16. 12 Let z = -58 - -68. Let k(t) = t + 6. Let w be k(0). Calculate the remainder when (38/2)/(3/w) is divided by z. 8 Let f(d) = d**2 + 3. Let q be f(4). Let y be 6 + 0/(-2) + -1. Suppose -21 = -y*o + q. Calculate the remainder when 23 is divided by o. 7 Let t be 3/6 - 5/(-2). Suppose -3*b + 49 = -5*l, t*b - 85 = -6*l + 2*l. Calculate the remainder when 44 is divided by b. 21 Let r = -31 - -52. Calculate the remainder when r is divided by 12. 9 Let t be 4/(-12)*(-5 + -1). Suppose -t*x + 36 = x. What is the remainder when 22 is divided by x? 10 Let r(m) = -49*m - 1. Suppose -4*a - 4 = 4*n, -8*n + 5 = -3*n. Calculate the remainder when r(a) is divided by 25. 22 Suppose -2*h + w + 3*w = -48, -2*w = 4*h - 126. 
Calculate the remainder when 58 is divided by h. 28 Let s(y) = -y**3 - y**2 - y + 4. Let z be s(0). What is the remainder when 37 is divided by (z - 3)/((-1)/(-13))? 11 Let d(x) = 6*x**2 + 3. Let u be d(-3). Let v = -22 + u. Calculate the remainder when v is divided by 12. 11 Let q(t) = -4*t**3 + 3*t**2 + 4*t + 2. Calculate the remainder when 18 is divided by q(-1). 3 Suppose 5*x - 25 = 0, -5*v + x = -3*x - 15. What is the remainder when 26 is divided by v? 5 Let n(l) = 9*l**2 - 2*l + 2. Let h be (12/(-10))/((-3)/(-10)). Let a = h - -16. Calculate the remainder when n(2) is divided by a. 10 Let v be 2/12 + (-868)/24. What is the remainder when (-4)/2 - (v - 1) is divided by (-345)/(-39) + 2/13? 8 Let f be -1*1 - 280/(-5). Let g be (3/(-5))/((-1)/f). Suppose 33 + g = 3*k. Calculate the remainder when k is divided by 6. 4 Let h be (-1)/(-2) - (-3)/6. Let l(q) = 4*q**2 + q - 1. Let n be l(-2). Let a = n + h. What is the remainder when 53 is divided by a? 11 Suppose 2*c + 2 = 3*c. Suppose 4*l - 10 = 2, -5*l + 11 = -k. Suppose -c*f - 20 = -k*f. Calculate the remainder when 18 is divided by f. 8 Let f(k) = k**3 + 11*k**2 + 9*k + 5. What is the remainder when 43 is divided by f(-10)? 13 Let n = 24 + 8. Calculate the remainder when 222 is divided by n. 30 Let d(q) be the third derivative of q**6/120 - q**5/60 + q**4/12 - q**3/6 + 3*q**2. Calculate the remainder when 89 is divided by d(3). 20 Let y(p) = p**3 + 5*p**2 - 2*p - 5. Let g be y(-5). Suppose 0 = 2*b + 2*b + g*t - 26, 0 = 5*b - 2*t - 16. What is the remainder when b is divided by 3? 1 Let p = -12 + 37. Calculate the remainder when 74 is divided by p. 24 Suppose -4*w = 3*t - 0*w - 70, 2*w - 2 = 0. Calculate the remainder when t is divided by 8. 6 Suppose -5*y - 5*v + 28 = -v, -14 = -3*y - v. Let n be (-1)/(-5) + (-645)/(-25). Suppose -3*d = 8 - n. What is the remainder when d is divided by y? 2 Suppose 28 = s - 2*i, -5*i = -0*s - 4*s + 103. Let a = -3 + 9. What is the remainder when s is divided by a? 
4 Let q(w) = -3*w - 3. Let j be q(-2). Suppose j*s - 5*n = 78, -4*n = s - 5 - 4. Calculate the remainder when s is divided by 11. 10 Suppose a = -5*n + 6*n - 21, 0 = 3*n - 4*a - 59. Let g = n + -14. What is the remainder when (-1)/4 - 375/(-12) is divided by g? 9 Let y(b) = -b**2 - 10*b + 7. Suppose 3*f = -2*a - 2*f - 36, -54 = 3*a - 3*f. Let g = a + 10. What is the remainder when y(g) is divided by 12? 11 Suppose -d = -5*d + 48. What is the remainder when 35 is divided by d? 11 Let c = 124 + -60. Calculate the remainder when c is divided by 17. 13 Suppose 4*w + 91 - 31 = 0. Suppose -3*h + 108 + 15 = 0. What is the remainder when h is divided by 6/w*(-35 + 0)? 13 Let c(u) = -u**2 + 4*u - 2. Let w(g) = g**2 - 3*g + 2. Let y(a) = 4*c(a) + 5*w(a). What is the remainder when 14 is divided by y(-3)? 6 Calculate the remainder when (-8 - -2)/((-6)/4) is divided by 3. 1 Suppose 2*u - 34 = -4*i, -29 = 3*u - 4*u - 5*i. Calculate the remainder when u is divided by 6. 3 Let c(p) = -p**3 + p + 2. Let t be c(0). Suppose -4*y = -i - 55, -t*y - 4*i = -0*y - 32. What is the remainder when 26 is divided by y? 12 Suppose 2*p = 0, 0 = m + 3*p - 2 - 2. What is the remainder when (39/(-2))/(m/(-8)) is divided by 10? 9 Suppose 2*d + 19 - 129 = k, 5*k = -d + 66. What is the remainder when d is divided by 15? 11 Suppose -6 = 2*q, -2*q - 51 = -3*o + 2*q. Suppose 8 = -4*h - 0. Calculate the remainder when ((-2)/(-4))/(h/(-152)) is divided by o. 12 Suppose 3*h = -0*h + 6. Let d(y) = 7*y + 2. Suppose -4*o + 132 + 15 = 5*g, -72 = -2*g + 5*o. Calculate the remainder when g is divided by d(h). 15 Suppose -2*p = -3*r + 8, -p + 1 + 0 = r. Let q = 13 - p. Calculate the remainder when 25 is divided by q. 11 Let j = 93 - 85. Suppose 0 = 3*u - 0*u - 4*f - 27, -3*f = 9. Calculate the remainder when j is divided by u. 3 Calculate the remainder when 15 is divided by (-4)/20 + 3/((-30)/(-52)). 0 Suppose -3*n - 2*n + 95 = 0. Let v be 2 - 0 - (1 + -26). Let k = v - -10. 
What is the remainder when k is divided by n? 18 Let c = -16 + 53. What is the remainder when c is divided by 10? 7 Suppose -x - 405 = -4*x. What is the remainder when x is divided by 34? 33 Suppose 2*c + 4*q = -10, 0 = -5*c + 2*c - q. Calculate the remainder when 13 is divided by (c/3)/(2/30). 3
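These remainder problems can be verified mechanically. A small sketch for one fully stated example from the list above ("Let y(n) = 8*n - 20. Calculate the remainder when y(7) is divided by 20."):

```python
# Define the linear function from the problem statement.
def y(n):
    return 8 * n - 20

# y(7) = 36, and 36 mod 20 = 16, matching the listed answer.
print(y(7) % 20)   # -> 16
```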
A reading device disclosed in Japanese Patent Application Publication No. 2010-50522 uses a reading unit to read images on original sheets conveyed by, for example, an automatic document feeder (ADF), and displays previews of these read images on a display panel.
309th Military Intelligence Battalion (United States)

The 309th Military Intelligence Battalion is a training unit of the United States Army. It aims to conduct initial entry, collective, and functional training to produce competent, disciplined, and physically fit military intelligence soldiers, instilled with the Army values, ready to join the Army at war.

Lineage

Constituted 19 September 1952 in the Army Reserve as Headquarters and Headquarters Detachment, 309th Communication Reconnaissance Battalion. Activated 1 November 1952 at Los Angeles. Reorganized and redesignated 25 January 1956 as Headquarters and Headquarters Company, 309th Communication Reconnaissance Battalion (organic elements constituted 29 December 1955 – 4 March 1956 and activated 1 February 1956 – 5 March 1956). Redesignated 1 October 1956 as the 309th Army Security Agency Battalion. Inactivated 15 July 1959 at Los Angeles. Activated 15 September 1962 with headquarters at Bell, California. Companies A, B, C, and D reorganized and redesignated 15 August 1966 as the 518th, 519th, 520th, and 521st Army Security Agency Companies – hereafter separate lineages.

Headquarters and Headquarters Company

Inactivated 15 July 1986 at Bell, California. Redesignated 1 February 1990 as Headquarters and Headquarters Company, 309th Military Intelligence Battalion; concurrently withdrawn from the Army Reserve and allotted to the Regular Army. Headquarters transferred 17 August 1990 to the United States Army Training and Doctrine Command and activated at Fort Huachuca, Arizona as an element of the United States Army Intelligence Center's 111th Military Intelligence Brigade.

Honors

Campaign Participation Credit: None
Decorations: None

Current Mission

Company A (nicknamed "Apaches") supports and trains counterintelligence students in the Counterintelligence Special Agent Course (CISAC) and the CI Officer's Course (CIOC).
Company B (nicknamed "Blackfoot") supports and trains human intelligence students in the Human Intelligence Collector (MOS 35M) course and the Linguist (MOS 09L) course.
Company C (nicknamed "Comanches") provides administrative and logistical support staff to the battalion, and also supports and trains MI System Maintainers/Integrators (MOS 35T).

Insignia

Distinctive Unit Insignia

Description: A gold color metal and enamel device 1 5/32 inches (2.94 cm) in height overall, consisting of a shield blazoned as follows: Argent, on a pale emitting in saltire four lightning flashes Azure (Teal Blue) a key bit to dexter in base, the bow a bear's head, Or. Attached above the shield is a Gold triparted scroll inscribed "Sentinels of Security" in black letters.

Symbolism: Teal blue and silver refer to the colors formerly used for the U.S. Army Security Agency. The key symbolizes the unit's mission, which is providing security. The golden bear's head on the key represents California, where the unit activated. The lightning flashes, symbolizing electricity, relate to the importance of electronic communications as part of the unit's functions.

Background: The distinctive unit insignia was originally approved for the 309th Army Security Agency Battalion, Army Reserve, on 12 February 1959. It was assigned for use by the 325th U.S. Army Security Agency Battalion on 5 August 1959. It was reassigned to the 309th U.S. Army Security Agency Battalion on 2 August 1965. The insignia was redesignated for the 309th Military Intelligence Battalion on 2 May 1990.

Coat of Arms

Shield: Argent, on a pale emitting in saltire four lightning flashes Azure (Teal Blue) a key ward to dexter in base, the bow a bear's head, Or.

Crest: 1990–present: None. 1959–1986: That for the regiments and separate battalions of the Army Reserve: On a wreath of the colours, argent and azure, the Lexington Minuteman proper. The statue of the Minuteman, Capt. John Parker (Henry Hudson Kitson, sculptor), stands on the Common in Lexington, Massachusetts.

Motto: "Sentinels of Security".

Symbolism: Teal blue and white were the colors used for the U.S. Army Security Agency, the original designation of the organization. The key symbolizes the unit's mission, the guarding of security, and the golden bear's head on the key represents the State of California, where the unit was activated. The lightning flashes, symbolic of electricity, relate to the importance of electronic communications as part of the unit's functions.

Background: The coat of arms was originally approved for the 309th Army Security Agency Battalion, Army Reserve, on 12 February 1959. It was assigned for use by the 325th U.S. Army Security Agency Battalion, Army Reserve, on 5 August 1959. It was reassigned for use by the 309th U.S. Army Security Agency Battalion on 2 August 1965. It was cancelled on 6 June 1975. The coat of arms was reinstated and redesignated for the 309th Military Intelligence Battalion on 10 October 1995.
A sonographic investigation for the development of ultrasound-guided paravertebral brachial plexus block in dogs: cadaveric study. To describe a novel in-plane ultrasound (US)-guided approach to the sixth (C6), seventh (C7), and eighth (C8) cervical and to the first thoracic (T1) spinal nerves. Prospective, descriptive, experimental anatomic study. A total of seven canine Beagle cadavers were used. Phase 1: One cadaver was used to define bony landmarks for the C6-T1 spinal nerves using computed tomography (CT) and magnetic resonance imaging. An US transducer was positioned lateral to the C6 vertebra. Methylene blue (0.05 mL kg-1) was injected cranial and caudal to the transverse process of C6. The probe was moved caudally to identify the cranial costal fovea of T1, and 0.1 mL kg-1 of methylene blue was injected. Full cadaver dissection was performed to assess the staining of the spinal nerves. Phase 2: The technique was repeated using a 50:50 mixture of iohexol and methylene blue in six dogs. CT verified the proximity of contrast to the C6, C7, C8 and T1 nerves. Mediastinal, epidural, intravascular and pleural contamination was recorded. Methylene blue staining of the phrenic nerve was assessed by dissection. Phase 1: The identified bony landmarks were the lamina ventralis of C6, the transverse processes of C6 and C7, the T1 vertebra and the first rib. Phase 2: At all 12 sites, the C6, C7 and C8 nerves were in contact with contrast material. Contrast was demonstrated in close proximity to the anatomical location of the T1 nerve at 11/12 sites. Mediastinal, epidural and intravascular contamination was observed in six, four and two cadavers, respectively. Pleural contamination was not observed. The phrenic nerve was stained on 2/12 sides. In-plane US-guided blockade of the spinal roots is a feasible technique. However, because of the undesirable spread of contrast, further research is needed to reduce contamination of vital structures.
Q: Is HTTPS secure? I'm developing an iPhone app that connects to an https:// link to authenticate the user. From what I understood, all traffic that goes to a server that has a 256-bit private key is secured and cannot be intercepted, so there is no need to encrypt the data again and it can be sent as plain text over the HTTPS connection. After reading this blog post: http://wirewatcher.wordpress.com/2010/07/20/decrypting-ssl-traffic-with-wireshark-and-ways-to-prevent-it/ I don't understand how that traffic can be captured with Wireshark if it's secure. Edit: I've re-read the article and from what I understand you have to have access to the server's private key to do this. What I don't understand is how this guy did it here, because I don't think he had access to that. http://techcrunch.com/2013/11/25/quizup-privacy-violations/ http://kylerichter.com/our-responsibility-as-developers/ A: If you read carefully: [...] the output of this command is two files, testkey.pem (containing a 1024 bit RSA private key) and testcert.pem (containing a self signed certificate) And further down: Once SSL is selected, there’s an option on the right to enter an “RSA keys list”. Enter something like this: 10.16.8.5,443,http,c:\openssl-win32\bin\testkey.pem You’ll need to edit the server IP address and path to testkey.pem as appropriate. And more: Protection of one’s private key is at the core of any system using asymmetric keys. If your private key is compromised, the attacker can either masquerade as you or they can attempt to carry out decryption as outlined above. Basically, in the tutorial the author is intentionally giving Wireshark the private key to decrypt the traffic. That private key MUST be kept secret on the server and shouldn't be accessible by anyone in a real-life scenario. He gives some tips on enhancing security, like using the Diffie-Hellman method of exchanging keys. Bottom line: YES, it's secure.
Here are some nice videos that explain public key cryptography: http://www.youtube.com/playlist?list=PLB4D701646DAF0817
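The Diffie-Hellman key exchange mentioned in the answer can be sketched with toy numbers. This is purely illustrative: real TLS uses 2048-bit+ groups or elliptic curves and authenticates the handshake, but the core idea — that both sides derive the same secret while only the public values cross the wire — looks like this:

```python
import secrets

# Textbook-sized parameters for illustration only; never use these
# in practice (real deployments use vetted large groups or curves).
P, G = 23, 5

a = secrets.randbelow(P - 2) + 1   # client's ephemeral secret
b = secrets.randbelow(P - 2) + 1   # server's ephemeral secret

A = pow(G, a, P)                   # exchanged in the clear
B = pow(G, b, P)                   # exchanged in the clear

# Both sides compute the same shared secret; an eavesdropper who sees
# only A and B (and even the server's long-term key) cannot.
shared_client = pow(B, a, P)
shared_server = pow(A, b, P)
assert shared_client == shared_server
```

With ephemeral keys like these, even a later compromise of the server's long-term private key does not let an attacker decrypt previously captured traffic, which is the forward-secrecy property the answer alludes to.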
Q: math operations between columns in a multiindex dataframe

I have a dataframe with a column multiindex that I need to slice, performing math operations between the slices.

import numpy as np
import pandas as pd

# sample df
idx = pd.IndexSlice
np.random.seed(123)
tuples = list(zip(*[['one', 'one', 'two', 'two', 'three', 'three'],
                    ['foo', 'bar', 'foo', 'bar', 'foo', 'bar']]))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
df = pd.DataFrame(np.random.randn(3, 6), index=['A', 'B', 'C'], columns=index)

If I wanted to perform, say, addition/subtraction between individual columns, I could use an index slice and do it like this:

df.loc[:, idx['three', 'foo']] - df.loc[:, idx['two', 'foo']]

However, if I want to use a higher-level slice it doesn't work and returns NaNs:

# not working
df.loc[:, idx['three', :]] - df.loc[:, idx['two', :]]

Is there an easy way to use higher-level slices of the df and add/subtract corresponding columns only? My dataframe potentially contains hundreds of columns in the multiindex. Thanks

A: If you need a MultiIndex in the output, use rename on the matching level of the MultiIndex:

df = df.loc[:, idx['three', :]] - df.loc[:, idx['two', :]].rename(columns={'two': 'three'})
print(df)
first      three          
second       foo       bar
A      -0.861579  3.157731
B      -1.944822  0.772031
C       2.649912  2.621137

The advantage is that you can rename both levels to a new index name and join back to the original:

df = (df.join(df.loc[:, idx['three', :]].rename(columns={'three': 'four'}) -
              df.loc[:, idx['two', :]].rename(columns={'two': 'four'})))
print(df)
first        one                 two               three                four          
second       foo       bar       foo       bar       foo       bar       foo       bar
A      -1.085631  0.997345  0.282978 -1.506295 -0.578600  1.651437 -0.861579  3.157731
B      -2.426679 -0.428913  1.265936 -0.866740 -0.678886 -0.094709 -1.944822  0.772031
C       1.491390 -0.638902 -0.443982 -0.434351  2.205930  2.186786  2.649912  2.621137

If a MultiIndex is not necessary, use DataFrame.xs:

df1 = df.xs('three', axis=1, level=0) - df.xs('two', axis=1, level=0)
print(df1)
second       foo       bar
A      -0.861579  3.157731
B      -1.944822  0.772031
C       2.649912  2.621137

If you need a first level, one possible solution is MultiIndex.from_product:

df1 = df.xs('three', axis=1, level=0) - df.xs('two', axis=1, level=0)
df1.columns = pd.MultiIndex.from_product([['new'], df1.columns], names=['first', 'second'])
print(df1)
first        new          
second       foo       bar
A      -0.861579  3.157731
B      -1.944822  0.772031
C       2.649912  2.621137
LONDON - North Korea offered intermediate-range ballistic missiles with a range of 3,500 km to UK arms dealer Michael Ranger, a new UN report has revealed for the first time. Representatives from the North Korean front company Hesong Trading Corporation allegedly offered Mr. Ranger modern and vintage small arms and light weapons, GPS jammers, multiple launch rocket systems, and, "extraordinarily," ballistic missiles with a range of up to 3,500 km.
Nearly every morning of his life, Mister Rogers has gone swimming, and now, here he is, standing in a locker room, seventy years old and as white as the Easter Bunny, rimed with frost wherever he has hair, gnawed pink in the spots where his dry skin has gone to flaking, slightly wattled at the neck, slightly stooped at the shoulder, slightly sunken in the chest, slightly curvy at the hips, slightly pigeoned at the toes, slightly aswing at the fine bobbing nest of himself... and yet when he speaks, it is in that voice, his voice, the famous one, the unmistakable one, the televised one, the voice dressed in sweater and sneakers, the soft one, the reassuring one, the curious and expository one, the sly voice that sounds adult to the ears of children and childish to the ears of adults, and what he says, in the midst of all his bobbing-nudity, is as understated as it is obvious: "Well, Tom, I guess you've already gotten a deeper glimpse into my daily routine than most people have." ONCE UPON A TIME, a long time ago, a man took off his jacket and put on a sweater. Then he took off his shoes and put on a pair of sneakers. His name was Fred Rogers. He was starting a television program, aimed at children, called Mister Rogers' Neighborhood. He had been on television before, but only as the voices and movements of puppets, on a program called The Children's Corner. Now he was stepping in front of the camera as Mister Rogers, and he wanted to do things right, and whatever he did right, he wanted to repeat. And so, once upon a time, Fred Rogers took off his jacket and put on a sweater his mother had made him, a cardigan with a zipper. Then he took off his shoes and put on a pair of navy-blue canvas boating sneakers. He did the same thing the next day, and then the next... until he had done the same things, those things, 865 times, at the beginning of 865 television programs, over a span of thirty-one years. 
The first time I met Mister Rogers, he told me a story of how deeply his simple gestures had been felt, and received. He had just come back from visiting Koko, the gorilla who has learned--or who has been taught--American Sign Language. Koko watches television. Koko watches Mister Rogers' Neighborhood, and when Mister Rogers, in his sweater and sneakers, entered the place where she lives, Koko immediately folded him in her long, black arms, as though he were a child, and then... "She took my shoes off, Tom," Mister Rogers said. Koko was much bigger than Mister Rogers. She weighed 280 pounds, and Mister Rogers weighed 143. Koko weighed 280 pounds because she is a gorilla, and Mister Rogers weighed 143 pounds because he has weighed 143 pounds as long as he has been Mister Rogers, because once upon a time, around thirty-one years ago, Mister Rogers stepped on a scale, and the scale told him that Mister Rogers weighs 143 pounds. No, not that he weighed 143 pounds, but that he weighs 143 pounds.... And so, every day, Mister Rogers refuses to do anything that would make his weight change--he neither drinks, nor smokes, nor eats flesh of any kind, nor goes to bed late at night, nor sleeps late in the morning, nor even watches television--and every morning, when he swims, he steps on a scale in his bathing suit and his bathing cap and his goggles, and the scale tells him he weighs 143 pounds. This has happened so many times that Mister Rogers has come to see that number as a gift, as a destiny fulfilled, because, as he says, "the number 143 means `I love you.' It takes one letter to say 'I' and four letters to say `love' and three letters to say `you.' One hundred and forty-three. `I love you.' Isn't that wonderful?" THE FIRST TIME I CALLED MISTER ROGERS on the telephone, I woke him up from his nap. 
He takes a nap every day in the late afternoon--just as he wakes up every morning at five-thirty to read and study and write and pray for the legions who have requested his prayers; just as he goes to bed at nine-thirty at night and sleeps eight hours without interruption. Esquire, November 1998
Introduction {#s1}
============

Growth hormone (**GH**) deficiency (**GHD**) can be congenital or acquired. The incidence of congenital GHD has been estimated at between 1/4000 and 1/10 000 [@pone.0016367-Sizonenko1], [@pone.0016367-Vimpani1], [@pone.0016367-Bao1], [@pone.0016367-Lindsay1]. The pituitary stalk interruption syndrome (**PSIS**) is a sign of congenital and permanent GHD [@pone.0016367-Tauber1], [@pone.0016367-Marcu1], [@pone.0016367-Argyropoulou1]. It is diagnosed by magnetic resonance imaging (MRI) and includes the absence of both a visible pituitary stalk and normal posterior lobe hyperintense signals in the sella turcica, together with the presence of a hyperintense nodule in the region of the infundibular recess of the third ventricle. Familial forms of PSIS and associated malformations suggest that its origin is antenatal [@pone.0016367-Pinto1]. It is important to diagnose GHD and start treatment as soon as possible because this deficiency is associated with excess mortality and substantial morbidity [@pone.0016367-Taback1], [@pone.0016367-Mills1]. Moreover, because insufficient height at the onset of puberty leads to short final height, early diagnosis and treatment of GHD are necessary to allow catch-up growth to optimal height before puberty [@pone.0016367-Grumbach1]. Signs of congenital GHD in neonates include hypoglycemia, prolonged jaundice, and microphallus [@pone.0016367-Sizonenko1], [@pone.0016367-Pinto1], [@pone.0016367-Pinto2], [@pone.0016367-Rottembourg1]. In older children, the diagnosis is based on short stature or growth failure. Height for age is the most common criterion for referral for GH evaluation [@pone.0016367-Grote1]. However, the mean ages reported for diagnosis of symptomatic PSIS in various studies range from 4 to 9 years and suggest important diagnostic delay [@pone.0016367-Tauber1], [@pone.0016367-Argyropoulou1], [@pone.0016367-Pinto2], [@pone.0016367-Rottembourg1], [@pone.0016367-Maghnie1].
In 2000, the GH Research Society (**GHRS**) published guidelines based on height for age but also five other auxological criteria (see below), to ensure that children and adolescents with GHD are appropriately identified and treated [@pone.0016367-Consensus1]. A survey has shown that these criteria are not currently applied, probably because the concomitant use of six auxological criteria might be difficult in day-to-day routine practice [@pone.0016367-Grote1]. Moreover, the performance (notably sensitivity for early diagnosis) of these guidelines has never been tested. The objective of this study was therefore to study the diagnostic delay for PSIS with GHD and the sensitivity of the auxological criteria of the GHRS, to identify the most useful ones and simplify their routine use.

Results {#s2}
=======

Characteristics of the population {#s2a}
---------------------------------

During the study period, 67 patients seen for growth failure had PSIS and/or GHD: 38 (57%) had GHD with a normal MRI or an isolated hypoplastic anterior pituitary gland; 2 (3%) had GHD and PSIS but had been adopted; and 6 (9%) had GHD and PSIS diagnosed in the neonatal period. The study thus included 21 (31%) patients with GHD and PSIS ([Table 1](#pone-0016367-t001){ref-type="table"}), 76% of them boys. One patient was born preterm, and nine were delivered by cesareans (43%) (confidence interval, **CI** = 22--64), including three in breech presentation. One patient had midline abnormalities, including bilateral optic nerve hypoplasia.

10.1371/journal.pone.0016367.t001

###### Patient characteristics.
![](pone.0016367.t001){#pone-0016367-t001-1}

| | n′ | Isolated GHD (n = 16) | n′ | MPD (n = 5) | n′ | TOTAL (n = 21) |
|---|---|---|---|---|---|---|
| **Neonatal symptoms** | | percentage | | percentage | | percentage |
| Breech delivery | 2 | 12.5% (CI 0--29) | 1 | 20% (CI 0--55) | 3 | 14% (CI 0--29) |
| Cesarean delivery | 5 | 31% (CI 8--54) | 4 | 80% (CI 45--100) | 9 | 43% (CI 22--64) |
| **At diagnosis** | | median (range) (IQR) | | median (range) (IQR) | | median (range) (IQR) |
| Age (yr) | 16 | 3.2 (1; 13.6) (IQR 2.6; 4.9) | 5 | 5.1 (1; 10.5) (IQR 5; 5.6) | 21 | 3.6 (1; 13.6) (IQR 2.6; 5.5) |
| Bone age (yr) | 12 | 1.5 (0.5; 9.5) (IQR 1.2; 2.3) | 4 | 2.2 (0.5; 4) (IQR 1.6; 2.9) | 16 | 1.7 (0.5; 9.5) (IQR 1.2; 2.5) |
| Bone age delay (yr) | 12 | 1.3 (0.5; 4.1) (IQR 1; 1.7) | 4 | 2.8 (0.5; 6.4) (IQR 2; 3.9) | 16 | 1.4 (0.5; 6.4) (IQR 1; 2.6) |
| Target height (SDS) | 16 | −0.2 (−1.6; 1.5) (IQR −0.7; 0.3) | 5 | −0.3 (−1.5; 0.6) (IQR −0.6; 0.4) | 21 | −0.3 (−1.6; 1.5) (IQR −0.6; 0.4) |
| Height (SDS) | 16 | −2.7 (−4.3; −1.3) (IQR −3.7; −2.3) | 5 | −2.2 (−2.4; −2) (IQR −2.2; −2) | 21 | −2.5 (−4.3; −1.3) (IQR −3.5; −2) |
| Height velocity (SDS) | 16 | −3 (−4.1; 0.3) (IQR −3.3; −1.6) | 5 | −3.3 (−4.2; 0) (IQR −3.4; −3.2) | 21 | −3.1 (−4.2; 0.3) (IQR −3.4; −1.6) |
| Weight (SDS) | 16 | −2.5 (−4; −0.4) (IQR −3; −1.9) | 5 | −0.7 (−1.3; 1.1) (IQR −1.2; −0.3) | 21 | −2.4 (−4; 1.1) (IQR −2.8; −1) |
| BMI (SDS) | 16 | −0.9 (−3.7; 2.2) (IQR −1.5; 0.2) | 5 | 1.3 (−0.2; 4) (IQR −0.1; 1.7) | 21 | −0.23 (−3.7; 4) (IQR −1.1; 0.5) |
| GH peak (ng/mL) | 16 | 3.2 (1.5; 23) (IQR 2; 6.7) | 5 | 2.1 (0.5; 4.1) (IQR 0.9; 3.1) | 21 | 3 (0.5; 23) (IQR 2; 5.5) |
| IGF-1 (ZS) | 16 | −2.9 (−5.1; −2) (IQR −4; −2.4) | 5 | −4.8 (−5; −4.1) (IQR −4.9; −4.4) | 21 | −3.1 (−5; −2) (IQR −4.4; −2.7) |

CI: confidence interval 95%. IQR: interquartile range. GHD: growth hormone deficiency. MPD: multiple pituitary deficiencies. SDS: standard deviation score. ZS: Z-score.
Median age at diagnosis was 3.6 years (range 1--13.6; interquartile range **IQR**: 2.6--5.5), and all patients were prepubertal ([Table 1](#pone-0016367-t001){ref-type="table"}). Sixteen patients (76%) (CI 58--94) had isolated GHD and five (24%) (CI 6--42) had multiple pituitary deficiencies (**MPD**), with thyroid stimulating hormone deficiency in four and adrenocorticotrophin deficiency in two. The median height was −2.5 SDS (range −4.3; −1.3) (IQR −3.5; −2) and median BMI −0.23 SDS (range −3.7; 4) (IQR −1.1; 0.5). The median height velocity was −3.1 SDS (range −4.2; 0.3) (IQR −3.4; −1.6).

Medical and growth history {#s2b}
--------------------------

Nine families (43%) (CI 22--64) first consulted a private-practice pediatrician about growth failure, and 12 families consulted an outpatient pediatric department (57%) (CI 36--78). One family sought care directly from our team. Eleven patients (52%) (CI 31--73) had undergone laboratory testing for growth retardation before consulting our team: two had had a GH stimulation test, and three had had serum IGF-1 measured. Both GH stimulation tests were normal, and serum IGF-1 was less than −2 SDS, but no further diagnostic procedures were performed to rule out GHD. The patient with bilateral optic nerve hypoplasia had had neonatal hypoglycemia and microphallus but was not evaluated for GH secretion until the age of one year, and then for growth failure. His pediatrician had ordered an MRI at 2 months of age because his eyes were not yet following objects. At 5 months of age, his growth rate started to decrease, and at one year of age he was referred to our department for growth retardation. The PSIS diagnosis was based on the MRI performed at 2 months of age. No episodes of severe hypoglycemia or adrenal crisis were observed before diagnosis, and no child had any neurological deficiency.
Performance of GHRS criteria {#s2c}
----------------------------

[Table 2](#pone-0016367-t002){ref-type="table"} summarizes the performance of each GHRS criterion. The criterion of height more than 2 SDS below the mean + height velocity over 1 year more than 1 SDS below the mean for chronological age had a frequency at final diagnosis of 100%. Height more than 1.5 SDS below the target height was the most effective criterion: 90% of the patients had met it before diagnosis, at a median age of 1 year (range 0; 9) (IQR 0.5; 1.8), and it was the first criterion to be met for 84% of the patients. Its use could have reduced diagnostic delay by 2.1 years (range 0; 12.6) (IQR 1.5; 2.9). The combined use of these two criteria (height more than 2 SDS below the mean + height velocity over 1 year more than 1 SDS below the mean for chronological age, and height more than 1.5 SDS below the target height) might also have reduced diagnostic delay by 2.1 years (range 0; 12.6) (IQR 1.5; 3); the median age at first validation of one of these criteria, that is, the first visit at which a doctor could have determined that the criterion had been met, was 1 year (range 0; 4.7) (IQR 0.6; 2).

10.1371/journal.pone.0016367.t002

###### Individual analysis of auxological GHRS criteria.
![](pone.0016367.t002){#pone-0016367-t002-2}

| | Height \<−3 SDS | Height \<−1.5 SDS below the target height | Height \<−2 SDS and height velocity \<−1 SDS[\*](#nt107){ref-type="table-fn"} | Height \<−2 SDS and height diminution \>0.5 SDS[\*\*](#nt108){ref-type="table-fn"} | Normal height and height velocity \<−2 SDS[\*](#nt107){ref-type="table-fn"} | Normal height and height velocity \<−1.5 SDS[\*\*\*](#nt109){ref-type="table-fn"} | At least one of the 6 criteria |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Criterion completed at diagnosis, n (%) (CI) | 11 (52%) (31--73) | 19 (90%) (77--100) | 21 (100%) (100--100) | 11 (52%) (31--73) | 5 (24%) (6--42) | 4 (19%) (2--36) | |
| Age at criterion completion (yr), median (range) (IQR) | 1 (0.6; 10) (0.7; 2.2) | 1 (0; 9) (0.5; 1.8) | 2 (1; 9) (1; 3.9) | 3 (2; 6) (3; 4.3) | 3 (2; 6) (3; 4) | 3 (2; 4) (2.7; 3.2) | 1 (0; 4) (0.6; 2) |
| Number of patients who completed the criterion first, n (%) (CI) | 2 (18%) (0--41) | 16 (84%) (67--100) | 4 (19%) (2--36) | 0 (0%) (0--0) | 2 (40%) (3--83) | 1 (25%) (0--67) | |
| Potential reduction of diagnostic delay among the patients who completed the criterion (yr), median (range) (IQR) | 2 (0; 6.8) (0.1; 3.3) | 2.1 (0; 12.6) (1.5; 2.9) | 1.5 (0; 9.6) (0; 3) | 0 (0; 1.5) (0; 0.3) | 2 (0.6; 4.5) (0.7; 2.1) | 2.7 (0.5; 6.5) (1.6; 4.2) | |
| Potential reduction of diagnostic delay among all patients (yr), median (range) (IQR) | 0 (0; 6.8) (0; 2) | 2 (0; 12.6) (0.6; 2.8) | 1.5 (0; 9.6) (0; 3) | 0 (0; 1.5) (0; 0) | 0 (0; 4.5) (0; 0) | 0 (0; 6.5) (0; 0) | 2.3 (0; 12.6) (1.5; 3.6) |

\*over 1 year.
\*\*over 1 year in children older than 2 years of age. \*\*\*over 2 years. CI: 95% confidence interval. IQR: interquartile range. SDS: standard deviation score. Late Diagnosis {#s2d} -------------- Median age at diagnosis was 3.6 years (range 1; 13.6) (IQR 2.6; 5.5). Median age when the earliest auxological criterion was met was 1 year (range 0; 4) (IQR 0.6; 2). The median diagnostic delay was 2.3 years (range 0; 12.6) (IQR 1.5; 3.6), with late diagnosis in 17 patients (81%). Discussion {#s3} ========== Main results {#s3a} ------------ We analyzed the diagnostic delay and the sensitivity of the GHRS auxological criteria in the largest reported cohort of children seen for PSIS with GHD since the publication of these criteria. We studied the GHRS guidelines rather than other rules, such as the Dutch consensus guidelines or the UK guidelines, because it has already been demonstrated that both of these European guidelines lack specificity [@pone.0016367-Grote2], [@pone.0016367-vanBuuren1], [@pone.0016367-Grote3] or sensitivity [@pone.0016367-Grote2], [@pone.0016367-Grote3]. A Dutch team recently proposed another algorithm to identify children with short stature who require a diagnostic work-up, but this algorithm did not target PSIS with GHD as a key diagnosis [@pone.0016367-Oostdijk1]. We chose to study patients with GHD and PSIS because they comprise a homogeneous population with a permanent GHD, and because the real clinical significance of GHD without PSIS (diagnosed by a low GH response after 2 pharmacological stimulation tests and normal MRI) remains a matter of debate [@pone.0016367-Louvel1]. In all, 71% of patients had a diagnostic delay greater than 1 year. Correct application of the GHRS auxological criteria could have allowed diagnosis of these patients and the beginning of their treatment 2 years earlier.
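The delay arithmetic used here (age at diagnosis minus the age at which the earliest criterion was met, with one year or more counted as late) is simple enough to state as code. This is an illustrative sketch only; the function and argument names are hypothetical, not from the study:

```python
# Sketch of the diagnostic-delay definition used in this study (hypothetical
# names): delay = age at diagnosis - age at which the earliest GHRS criterion
# was met; a delay of one year or more is classed as a late diagnosis.

def diagnostic_delay(age_at_diagnosis_yr, ages_criteria_met_yr):
    """Return (diagnostic delay in years, True if late diagnosis)."""
    earliest = min(ages_criteria_met_yr)
    delay = age_at_diagnosis_yr - earliest
    return delay, delay >= 1.0

# A child diagnosed at 5.5 yr whose first criterion was met at 2.0 yr
# has a delay of 3.5 yr, a late diagnosis.
delay, late = diagnostic_delay(5.5, [2.0, 3.1])
```

Note that the median of the per-patient delays (2.3 years in this cohort) need not equal the difference between the median age at diagnosis (3.6 years) and the median age at first criterion (1 year), since medians do not subtract.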
Of the GHRS criteria, the most effective for early and frequent diagnosis was height more than 1.5 SDS below the target height, and the criterion met by all patients was height more than 2 SDS below the mean + height velocity over 1 year more than 1 SDS below the mean for chronological age. Height velocity and distance to target height have already been described by other teams as effective markers for detecting other growth disorders, such as Turner\'s syndrome, GHD and celiac disease [@pone.0016367-Grote2], [@pone.0016367-Grote4], [@pone.0016367-vanBuuren2]. Distance to target height and height velocity are still underused in routine practice [@pone.0016367-Grote1]. Interestingly, height velocity is not included in the UK consensus guideline [@pone.0016367-Grote3], [@pone.0016367-Hall1], [@pone.0016367-Fayter1], nor as a growth monitoring indicator in the national French pediatric health notebook. It is not provided by the World Health Organization (WHO) growth charts after 24 months [@pone.0016367-Grote3], is not included in any study evaluating the effectiveness of height-screening programs [@pone.0016367-Fayter1], and was used by fewer than 50% of European pediatric endocrinologists in a 2002 survey [@pone.0016367-Grote1]. The specificity of each of the best criteria identified by our study (height more than 1.5 SDS below the target height, as well as height more than 2 SDS below the mean + height velocity over 1 year more than 1 SDS below the mean for chronological age) could not be determined but can be compared indirectly with those of other studies [@pone.0016367-Grote2], [@pone.0016367-Grote4], [@pone.0016367-vanBuuren2]. The specificity of the Dutch guidelines for short stature was tested on longitudinal growth data from 870 children born in a geographical area of the Netherlands [@pone.0016367-vanBuuren1].
Of the six criteria of the Dutch guidelines, the criteria of height below −1.3 SDS and of height more than 1.3 SDS below the target height, which are close to one of our best criteria, had a specificity of 94%. Although it may be somewhat difficult to use all GHRS criteria in routine practice to detect growth anomalies, our results for patients with GHD and PSIS, as well as results from a larger population [@pone.0016367-Grote4], indicate that distance to target height should be used routinely as a warning sign for growth anomalies, to select the patients who require further investigation. It should replace height for age, which is relatively insensitive for the detection of clinically relevant growth disorders [@pone.0016367-Grote3]. Our work shows that a normal GH peak is not enough to rule out a diagnosis of GHD. Indeed, GHD had been ruled out for 2 (10%) of the 21 patients included during their medical care because of normal GH peaks, despite serum IGF-1 less than −2 SDS. This observation supports the current modification of the use of GH provocative tests in the evaluation of GHD [@pone.0016367-Louvel1], [@pone.0016367-Duche1], [@pone.0016367-Juul1], [@pone.0016367-Ghigo1]. Indeed, they are expensive, labor intensive, occasionally risky, and their results are not very reproducible [@pone.0016367-Ghigo1], [@pone.0016367-Badaru1], [@pone.0016367-Lemaire1]. Their use has declined over the past two decades. Serum IGF-1, together with the growth rate, provides high-quality diagnoses that are practical, simple and very accurate [@pone.0016367-Lemaire1]. Patients suspected of GHD, with a BMI between −2 and +2 SDS and very low IGF-1 levels, should skip GH provocative tests and should be prescribed an MRI [@pone.0016367-Badaru1], [@pone.0016367-Lemaire1]. Study limitations {#s3b} ----------------- We used the national growth charts included in the French health notebook, developed in 1979 [@pone.0016367-Sempe1].
In 2006, the WHO multicentre growth reference study published growth charts for healthy breastfed infants living in good hygiene conditions [@pone.0016367-WHO1]. The comparison of the anthropometric measurements of French children with the new WHO growth standards showed similarities for the neonatal measurements but substantial differences thereafter, with French measurements (height, weight and BMI) lower from 1 to 6 months and French height lower but BMI higher from 6 months to 5 years of age [@pone.0016367-Peneau1]. The GHRS consensus guidelines do not make it clear which growth charts should be used. Testing the sensitivity of the GHRS criteria with the WHO growth charts is thus probably necessary. Our study was limited to a single center, a design that can result in recruitment bias. The presence of such a bias is supported by the mean age at diagnosis of symptomatic PSIS in our cohort, 3.6 years, compared to the ages reported in the literature, 4 to 9 years [@pone.0016367-Tauber1], [@pone.0016367-Argyropoulou1], [@pone.0016367-Pinto2], [@pone.0016367-Rottembourg1], [@pone.0016367-Maghnie1], [@pone.0016367-Louvel1]. It is thus possible that diagnostic delays are greater in the general population and that application of the GHRS criteria would reduce diagnostic delays still more than it would have in our patients. Adoption and uncertain paternity are common, limiting the utility of the "target height" criterion. For this reason, it may be useful to consider the combined use of our two best criteria: height more than 1.5 SDS below the target height and height more than 2 SDS below the mean + height velocity over 1 year more than 1 SDS below the mean for chronological age.
Unexpected findings {#s3c} ------------------- We were surprised by the high proportion (14%) of breech presentation vs 4% in the general population in France [@pone.0016367-Carayol1], and by the high proportion of cesareans (43%) vs 25% in the general population in France [@pone.0016367-LeRay1]. Of patients with MPD, 20% were born in breech presentation, and 80% (including all patients with thyrotrophic insufficiency) were born by cesarean delivery. If we include in the analysis the six excluded patients with PSIS diagnosed during the neonatal period, 22% of patients had breech presentations and 56% cesarean births, as all six were born by cesarean delivery, three in breech presentation. We were not able to identify a selection bias that could explain this unexpected finding. The frequency of breech presentation and cesarean delivery for GHD patients in the literature varies from 7% to 60% and from 30% to 40%, respectively [@pone.0016367-Hanew1], [@pone.0016367-Arrigo1], [@pone.0016367-AlbertssonWikland1]. TSH and/or ACTH deficiency and/or GHD may play a role in labor or fetal mobility and lead to breech presentation and/or cesarean delivery. Although we certainly do not recommend a pituitary MRI for all newborns delivered by cesarean or in breech presentation, clinicians should be aware of this finding in determining which newborns with hypoglycemia require a diagnostic workup for GHD. Perspectives {#s3d} ------------ Screening rules based on growth monitoring are currently a topic of debate [@pone.0016367-vanBuuren1], [@pone.0016367-Grote3], [@pone.0016367-Oostdijk1]. Evidence-based strategies must be tested, both for their sensitivity for early diagnosis in case-cohort series of given target diseases (e.g., GHD, celiac disease, and Turner\'s syndrome) and for their specificity in healthy populations [@pone.0016367-vanBuuren2], [@pone.0016367-Hindmarsh1].
The introduction of some of the GHRS criteria (especially height more than 1.5 SDS below the target height and height more than 2 SDS below the mean + height velocity over 1 year more than 1 SDS below the mean for chronological age) would probably be helpful for the early diagnosis of the target disease here, PSIS with GHD. However, the precise specificity of these criteria and their performance for the early diagnosis of other target diseases involving growth monitoring must be tested. Methods {#s4} ======= Study design {#s4a} ------------ This single-center retrospective case-cohort study included all patients seen for PSIS with GHD by a senior pediatric endocrinologist (R Brauner) from January 2000 to December 2007. During the study period, the local routine protocol called for the systematic prescription of GH stimulation tests for all patients seen for growth failure and for systematic MRI of the hypothalamic-pituitary area of those with GHD (as defined below). All patients whose computerized hospital chart or discharge codes contained the words "growth hormone deficiency" and "pituitary stalk interruption syndrome" were considered for inclusion. The Institutional Review Committee (Comité de Protection des Personnes Ile de France III) stated that "this research was found to conform to generally accepted scientific principles and research ethical standards and to be in conformity with the laws and regulations of France, where the research experiment was performed." Written informed consent of the patients or their parents was not judged necessary for this kind of retrospective study. The data of some of the patients included in the present study were previously used for other purposes [@pone.0016367-Duche1], [@pone.0016367-Lemaire1]. Inclusion criteria {#s4b} ------------------ We included all patients seen consecutively for GHD and PSIS.
GHD was diagnosed by a GH peak of 10 ng/mL or less (20 mIU/L or less) after two pharmacological stimulation tests or by a very low level of insulin-like growth factor (IGF)-1 (less than −2 standard deviation scores (**SDS**)) [@pone.0016367-Trivin1]. PSIS was diagnosed by MRI, according to the criteria described above. Patients with GHD but with a normal MRI or an isolated hypoplastic anterior pituitary gland were excluded, as were adopted patients (because perinatal history and parental heights were not available). Patients with a diagnosis of PSIS in the neonatal period were also excluded because their growth rate before diagnosis could not be calculated. Collected data {#s4c} -------------- Social, demographic, and medical data were extracted from the medical report: sex, parental height, and perinatal history. Signs observed before diagnosis and medical and growth records were noted. Data related to the GHRS clinical and auxological criteria were also extracted. During the neonatal period, these criteria are hypoglycemia, prolonged jaundice, microphallus, or traumatic delivery. In the post-neonatal period, they include severely short stature, defined as a height more than 3 SDS below the mean; height more than 1.5 SDS below the target height; height more than 2 SDS below the mean and a height velocity during the previous year more than 1 SDS below the mean for chronological age, or a decrease in height SDS of more than 0.5 over 1 year in children older than 2 years; in the absence of short stature, a height velocity more than 2 SDS below the mean over 1 year or more than 1.5 SDS below the mean over 2 years [@pone.0016367-Consensus1]. Definitions {#s4d} ----------- Target height was calculated from parental height [@pone.0016367-Tanner1] and expressed in SDS. Microphallus was defined as a penis length of 2.5 cm or less (−2 SDS) [@pone.0016367-Pinto2].
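The post-neonatal auxological criteria listed above lend themselves to a direct screening check. The following is a minimal sketch, not code from the study: the thresholds come from the text, but the function and argument names are hypothetical, and all inputs are assumed to be pre-computed SDS values.

```python
# Sketch of the post-neonatal GHRS auxological criteria described above.
# Heights and velocities are standard deviation scores (SDS) for
# chronological age; names are illustrative.

def ghrs_criteria_met(height_sds, target_height_sds, velocity_1yr_sds,
                      height_sds_drop_1yr, velocity_2yr_sds, age_years):
    """Return the list of GHRS post-neonatal auxological criteria met."""
    met = []
    if height_sds < -3:                                  # severely short stature
        met.append("height more than 3 SDS below the mean")
    if height_sds < target_height_sds - 1.5:             # distance to target height
        met.append("height more than 1.5 SDS below the target height")
    if height_sds < -2 and velocity_1yr_sds < -1:
        met.append("height < -2 SDS with 1-yr height velocity < -1 SDS")
    if height_sds < -2 and age_years > 2 and height_sds_drop_1yr > 0.5:
        met.append("height < -2 SDS with a height SDS drop > 0.5 over 1 yr")
    if height_sds >= -2 and velocity_1yr_sds < -2:       # no short stature
        met.append("normal height with 1-yr height velocity < -2 SDS")
    if height_sds >= -2 and velocity_2yr_sds < -1.5:
        met.append("normal height with 2-yr height velocity < -1.5 SDS")
    return met
```

For example, a 3.6-year-old at −2.5 height SDS with a target height of 0 SDS and a 1-year height velocity of −3.1 SDS would trigger the target-height criterion and both short-stature-plus-slow-growth criteria.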
Height, weight, body mass index (**BMI**, weight in kg/height in m^2^) and height velocity were expressed in SDS for chronological age [@pone.0016367-Sempe1], [@pone.0016367-RollandCachera1]. Bone age was evaluated by one of us (RB) according to the Greulich and Pyle method [@pone.0016367-Greulich1]. Bone age delay was defined as the difference in years between chronological and bone ages. Thyroid stimulating hormone deficiency was defined by a thyroxin level less than 12 pmol/L and adrenocorticotrophin deficiency by a basal blood cortisol at 08.00 h less than 70 µg/L. Analysis {#s4e} -------- We first analyzed population characteristics at diagnosis of PSIS with GHD and then studied the medical and growth history, symptoms, and clinical signs through diagnosis. Comparing each GHRS auxological criterion with the growth charts allowed us to establish the age at which each criterion was met, to classify the criteria in chronological order of fulfillment, and then to evaluate the diagnostic delay, defined as the difference between the age at which the earliest GHRS criterion was met and the age at diagnosis of PSIS with GHD. We arbitrarily considered a diagnostic delay of one year or more as late diagnosis. Finally, we analyzed each GHRS criterion for how early and with what frequency it was met and arbitrarily defined the most effective criterion as the one that was most sensitive and earliest. **Competing Interests:**The authors have declared that no competing interests exist. **Funding:**The authors have no support or funding to report. [^1]: Conceived and designed the experiments: GGL RB MC. Analyzed the data: GGL RB MC. Wrote the paper: GGL RB MC. Collected data: GGL RB LD.
Autosomal recessive mutations in the *PARK7* gene, which encodes the protein DJ-1, result in a loss of function and are a cause of familial Parkinson's disease (PD), while increased wild-type DJ-1 protein levels are associated with some forms of cancer. Several functions of DJ-1 have been described, with the greatest evidence indicating that DJ-1 is a redox-sensitive protein involved in the regulation of oxidative stress and cell survival. We have recently reported that the levels of DJ-1 oxidized at cysteine 106 (C106) were decreased in the cortex of idiopathic PD brains (Piston et al., 2017). Furthermore, we found that DJ-1 forms high molecular weight complexes in human brain and in the dopaminergic SH-SY5Y neuroblastoma cell line, and that these complexes could be oxidized at C106. Proteomics indicated that proteins involved in RNA transcription/translation were associated with these DJ-1 complexes, and that the composition of the complexes was affected by oxidation of DJ-1. RNA sequencing highlighted that transcripts associated with the catecholamine system, including dopamine (DA) metabolism, tended to be increased when complexes contained DJ-1 mimicking oxidation at C106. DJ-1 knock down (KD) cells also had increased intracellular DA and noradrenaline (NA) levels. In this perspective we will discuss the implications of DJ-1 acting as a redox sensor that directly affects RNA metabolism and, with respect to PD, how dysregulation of catecholamine metabolism in both familial and idiopathic PD might contribute to some prodromal features of the disease and to the increased susceptibility of specific neuronal populations to neurodegeneration.
**DJ-1 complexes associated with RNA metabolism and affected by oxidation at C106**: We have found that DJ-1 forms high molecular weight complexes in human brain and neuroblastoma cells, with these complexes appearing to contain RNA metabolism proteins such as heterogeneous ribonucleoproteins (hnRNP) and polyadenylate binding protein 1 (PABP1). Association of DJ-1 with these proteins was affected by the oxidation status of DJ-1 (Piston et al., 2017). Our finding adds to previous reports of DJ-1 being directly associated with RNA (Hod et al., 1999; van der Brug et al., 2008) and likely introduces another level of transcriptional/translational control to signaling pathways previously reported to be affected by the redox status of DJ-1, such as antioxidant defence and survival/apoptosis (Biosa et al., 2017), in addition to our finding in neuroblastoma cells on catecholamine metabolism. Several DJ-1 complexes were detected in human brain lysates and could well reflect different compositions in neurons and glia, where several different functions of DJ-1 have been described in the two cell types. It is likely that alternate complex composition might also contribute to the differing effects of DJ-1 described in several types of cancer and in cardiomyocytes following ischaemia. It is becoming increasingly apparent that the oxidation status of C106 is not a simple 'on-off' switch, but rather biphasic. C106 resides in a pocket, and the transition from oxidized C106 (SO~2~^--^) to over-oxidized C106 (*e.g*., SO~3~^--^) has been proposed to change the local conformation of the protein, resulting in DJ-1 becoming destabilized and losing function (Cao et al., 2014; Kiss et al., 2017). Therefore, it can be envisaged that the composition of DJ-1 complexes under acute or mild oxidative stress will differ from that under chronic, excessively oxidizing conditions, thus altering the physiological response of the cell over time, for example from pro-survival to apoptotic (Cao et al., 2014).
The observation that DJ-1 complexes are oxidized in idiopathic PD brains, and that DJ-1 oxidation appears to be decreased in the frontal cortex of PD brains, raises the prospect that changes in RNA metabolism also occur in idiopathic PD. Oxidized DJ-1 at C106 has also been reported by immunohistochemistry in neurons and glia of PD brains, and seems to diminish with disease progression (Saito et al., 2014). DJ-1 has been proposed as a biomarker for PD and certain types of cancer, such as breast and lung. Given the apparently important role of C106 oxidation in controlling DJ-1 function, in future, measuring the ratio of oxidized to reduced DJ-1 in cerebro-spinal fluid (CSF)/serum might prove more informative. **DJ-1 and the catecholamine system**: In terms of PD, the finding in neuroblastoma cells expressing an oxidation mimic of C106 that several transcripts associated with catecholamine metabolism were affected fits well with both the motor and non-motor symptoms of the disease. Upregulated transcripts included dopa decarboxylase (DDC), involved in DA synthesis; vesicular monoamine transporter 1 (VMAT1), required for the sequestration of DA into vesicles; and dopamine beta hydroxylase (DBH), which converts DA to NA in vesicles (Piston et al., 2017). Furthermore, in DJ-1 KD cells, in which no DJ-1 complexes were detectable (presumably resulting in the loss of transcriptional/translational control of transcripts), the levels of both DA and NA were significantly increased (Piston et al., 2017). DJ-1 function has previously been associated with DA metabolism, including both transcriptional control of tyrosine hydroxylase (TH), the first step in DA synthesis, and direct binding to and activation of TH and DDC by DJ-1. DJ-1 knockout mice also exhibit increased DA reuptake, linked to increased activity of the DA transporter DAT, and perhaps to inhibition of the DA D2 receptor, involved in the control of DA synthesis and release.
Further work is required to establish whether the elevation in these two neurotransmitters in DJ-1 KD cells is due to increased synthesis, decreased turnover, and/or accumulation of these molecules in the cytoplasm, rather than vesicles. **Dysregulated catecholamine metabolism in familial and idiopathic PD**: Both DA and NA are readily oxidized, forming toxic intermediates that are implicated in neurodegeneration. It is notable that two brain regions with considerable neuronal loss in PD are the DA neurons of the substantia nigra pars compacta and NA neurons of the locus coeruleus. Both neuronal populations contain the pigment neuromelanin, which contains oxidation products of DA and NA, and is formed when these neurotransmitters are not sequestered in vesicles. Furthermore, many of the non-motor functions associated with PD including prodromal features such as constipation, sleep disruption and depression, and later features such as cognitive impairment can all be linked to DA and/or NA systems (Schapira et al., 2017) ([**Figure 1**](#F1){ref-type="fig"}). It has been proposed that the broad action potentials and autonomous pacemaking, which results in large oscillations of cytosolic calcium, contribute to the selective vulnerability of DA neurons in the substantia nigra pars compacta. Recently Burbulla et al. (2017) have shown that levels of oxidised DA in the cytosol increase over time in midbrain DA neurons with DJ-1 KO or DJ-1 mutations. They propose that increased oxidative stress as a result of dysfunctional DJ-1 and high calcium levels is responsible. Intriguingly, one consequence of this is inhibition of the lysosomal enzyme glucocerebrosidase (GCase). Mutations in the *GBA* gene, which encodes GCase, cause 5--10% of all PD cases. Furthermore, GCase activity is decreased in idiopathic PD brains, with the substantia nigra showing the greatest deficit (Gegg et al., 2012). 
In light of our findings, we propose that dysregulation of catecholamine homeostasis, and in particular increased DA levels, might also contribute to this phenomenon ([**Figure 1**](#F1){ref-type="fig"}). Indeed, Burbulla et al. indicate that increasing DA levels in mouse neurons with L-DOPA elevated DA oxidation and inhibition of GCase. Oxidized DA was also increased, although to a lesser extent, in neurons with *PINK1* and *parkin* mutations, and GCase activity is diminished in dopaminergic cells lacking these functional proteins (Gegg et al., 2012; Burbulla et al., 2017). Mutations in both proteins are also associated with increased oxidative stress and mitochondrial dysfunction and likely contribute to this phenomenon. DJ-1, *PINK1* and *parkin* mouse models all exhibit perturbed striatal DA function that precedes nigral-striatal degeneration, once again implicating dysregulated catecholamine metabolism (Kitada et al., 2009). ![Pathogenic pathways of idiopathic and familial forms of Parkinson's disease (PD).\ TH: Tyrosine hydroxylase; DDC: dopa decarboxylase; DBH: dopamine beta hydroxylase; GBA: glucosylceramidase beta (also known as glucocerebrosidase); LRRK2: leucine rich repeat kinase 2; ATP13A2: ATPase cation transporting 13A2; VPS35: VPS53, retromer complex component; SNCA: synuclein alpha.](NRR-13-815-g001){#F1} **Can other genetic causes of PD or the idiopathic forms of the disease also be linked to catecholamine dyshomeostasis?** The accumulation of α-synuclein in Lewy bodies is a hallmark of PD. While the exact function(s) of α-synuclein is still unclear, the protein is known to predominantly localize to presynaptic terminals and bind highly curved membranes, including neurotransmitter vesicles. α-synuclein has been proposed to be involved in neurotransmitter release, but in a regulatory, rather than an essential role, as it is generally lacking in inhibitory neurons, is one of the last proteins to reach the synapse and is absent in invertebrates (Burré, 2015).
α-Synuclein has been proposed to play roles in vesicle filling, clustering, soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) complex formation and the synaptic vesicle cycle (Burré, 2015). Increased levels of α-synuclein, as a result of duplication or triplication of the α-synuclein gene, or aberrant membrane binding due to point mutations, could all disrupt the loading/release of catecholamines and increase toxic oxidized species. Lysosomal function is known to decline with age, the greatest risk factor for PD, and inhibition of the autophagy-lysosomal pathway causes accumulation and/or oligomerization/fibrillation of α-synuclein. Mutations in *GBA*, *LRRK2*, *VPS35* and *ATP13A2* are all known to inhibit autophagy, resulting in accumulation of α-synuclein. *Lrrk2* has also been linked with rab proteins, which are important for vesicle trafficking. Midbrain DA neurons containing Lrrk2 mutations or triplication of the α-synuclein gene exhibit increased levels of oxidised DA (Burbulla et al., 2017). Therefore in addition to the mitochondrial dysfunction and oxidative stress discussed above, impaired lysosomal function and impaired vesicle trafficking could also contribute to dysregulated intracellular catecholamine levels and neurodegeneration, and likely feedback on one another in both genetic and idiopathic PD ([**Figure 1**](#F1){ref-type="fig"}). **Conclusion**: Measuring the levels of monomeric and oxidized DJ-1 described above, as well as DA, NA and their metabolites in cell models and CSF of PD patients could provide an insight to disease mechanisms. For example the ratio of DA to its precursor DOPA or oxidized forms such as 3,4-dihydroxyphenylacetic acid (DOPAC), which results from the oxidation of cytosolic DA in neurons by monoamine oxidase A, might indicate the points at which DA/NA metabolism are affected (*e.g*., synthesis or cytosolic accumulation of neurotransmitters). 
Furthermore, given the involvement of the catecholamine system in several prodromal features of PD, measurement of these metabolites in the CSF from at risk individuals such as mutant *GBA* carriers could help identify useful biomarkers for disease progression. *Due to the restriction on both the number of words and references we would like to thank the authors of the papers we were unable to cite in this perspective*. *This work was funded by a Medical Research Council (UK) Experimental Medicine grant \[MR/M006646/1\]*. ***Copyright license agreement:** The Copyright License Agreement has been signed by all authors before publication*. ***Plagiarism check:** Checked twice by iThenticate*. ***Peer review**: Externally peer reviewed*. ***Open peer review report***: ***Reviewer:** Chih-Li Lin, Chung Shan Medical University, China*. ***Comments to authors**: Authors reported a perspective review discussing the neuroprotective function of DJ-1 in PD. According to their recent relevant publication, authors provided an insight for the role of DJ-1 in PD pathogenesis, particularly for that the oxidative changes to DJ-1 are concomitant with changes in mRNA transcripts and involved in catecholamine metabolism. This is a well-written short manuscript that deals with molecular mechanisms of microglial regulation in the development of brain*.
Cerebrovascular segmentation of TOF-MRA based on seed point detection and multiple-feature fusion. The accurate extraction of cerebrovascular structures from time-of-flight (TOF) data is important for the diagnosis of cerebrovascular diseases and for the planning and navigation of neurosurgery. In this study, we proposed a cerebrovascular segmentation method based on automatic seed point detection and vascular multiple-feature fusion. First, the brain mask is detected in the T1-MR image to enable extraction of the TOF brain structure, with the TOF image and its corresponding T1-MRI acquired simultaneously. Second, local maximum points are detected on three maximum-intensity projections of the TOF-MRA data and then traced back in three-dimensional space to detect seed points for the initialization of vascular segmentation. Third, the TOF-MRA image and its corresponding vesselness image are fused to enhance vascular features on the basis of fuzzy inference for the extraction of whole cerebrovascular structures, particularly minuscule cerebral vessels. Finally, the detected seed points and the multiple-feature fused enhanced images are provided to a region-growing procedure, and the cerebrovascular structures are segmented. Experimental results show that, compared with traditional methods, the proposed method has higher accuracy for vascular segmentation and can avoid over- and under-segmentation. The proposed cerebrovascular segmentation method is not only effective but also accurate. Therefore, it has potential clinical applications.
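The seed-detection and region-growing steps can be sketched in a toy form. The following is a simplified reimplementation under assumptions, not the authors' code: it uses only NumPy, replaces the fuzzy-inference feature fusion with a plain intensity threshold, and assumes a nonnegative volume so that background pixels can be excluded from the local-maximum test.

```python
import numpy as np
from collections import deque

def mip_seed_points(volume, axis=0, n_seeds=10):
    """Project along one axis, find local maxima on the maximum-intensity
    projection (MIP), and trace each maximum back to the voxel that
    produced it (the argmax depth along the projection axis)."""
    mip = volume.max(axis=axis)
    argmax = volume.argmax(axis=axis)  # depth of each MIP pixel
    # Local-maximum test without SciPy: compare each pixel to the max of
    # its 3x3 neighborhood (a dilation built from 9 shifted views).
    padded = np.pad(mip, 1, mode="constant", constant_values=-np.inf)
    neigh = np.max([padded[i:i + mip.shape[0], j:j + mip.shape[1]]
                    for i in range(3) for j in range(3)], axis=0)
    local_max = (mip == neigh) & (mip > 0)  # assumes nonnegative background
    coords = np.argwhere(local_max)
    order = np.argsort(mip[tuple(coords.T)])[::-1][:n_seeds]  # brightest first
    seeds = []
    for r, c in coords[order]:
        idx = [int(r), int(c)]
        idx.insert(axis, int(argmax[r, c]))  # trace back into 3-D space
        seeds.append(tuple(idx))
    return seeds

def region_grow(volume, seeds, threshold):
    """6-connected breadth-first region growing from the seed voxels,
    accepting any voxel whose intensity is at or above the threshold."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque(s for s in seeds if volume[s] >= threshold)
    for s in queue:
        mask[s] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in offsets:
            n = (x + dx, y + dy, z + dz)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                    and not mask[n] and volume[n] >= threshold:
                mask[n] = True
                queue.append(n)
    return mask
```

On a synthetic volume containing a single bright "vessel" line, the seeds land inside the line and region growing recovers exactly its voxels; the real method would instead grow through the fuzzy-fused enhancement image.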
Introduction {#sec1} ============ Functional imaging has become a mainstay in both preclinical and clinical studies. Among the imaging modalities, magnetic resonance imaging (MRI) is widely accepted because it employs nonionizing radiation and yields detailed structural information owing to its inherent high soft-tissue contrast.^[@ref1]^ Between longitudinal (*T*~1~)- and transverse (*T*~2~)-weighted MRI, *T*~2~-imaging is particularly useful in visualizing pathologies involving changes in water content, such as edema and inflammation. Although qualitative approaches based on the clustering of iron oxide nanoparticles (NPs) in response to matrix metalloprotease activity in vitro have been reported,^[@ref2]^*T*~1~ and *T*~2~ MRI cannot yield quantitative information on biological processes. In this regard, fluorescence molecular tomography (FMT), an optical technique that uses near-infrared (NIR) light, which is weakly absorbed by the tissue and vasculature, allows for temporal assessment of quantitative changes to tissue function in live subjects^[@ref3]^ and has emerged as a powerful imaging modality.^[@ref4]^ FMT has been successfully used to visualize biological processes such as protease activity in atherosclerosis,^[@ref5]^ tumor growth,^[@ref3]^ and skeletal structures.^[@ref6]^ Therefore, by combining MRI with FMT, fluorescence information can be assigned to body compartments,^[@ref3],[@ref7]^ thereby providing information on biological processes. Pathologies associated with bone such as Paget's disease and osteoporosis involve changes to the bone mineral phase (BMP). Hydroxyapatite (HAp), the mineral phase of bone, is embedded in an organic collagen matrix.
Bone undergoes constant remodeling via crosstalk between osteoclasts (bone-resorbing cells) and osteoblasts (collagen- and mineral-depositing cells),^[@ref8]^ with 5--10% of bone mass being renewed annually.^[@ref9]^ Deregulation of this crosstalk can lead to accelerated bone loss, as observed in osteoporosis and cancer metastasis to bone.^[@ref10]^ Detecting early indications of changes to BMP and bone mineral density (BMD) could aid the early diagnosis of bone pathologies. Therefore, imaging agents that bind to BMP and provide contrast in MRI and FMT can provide new insights and correlations between BMD and protease activity. Toward the objective of realizing multimodal polymeric NPs with affinity to bone, we exploited the known affinity of alendronic acid (Aln), a bisphosphonate derivative, for BMP.^[@ref11]^ Targeting of anticancer drugs such as paclitaxel to bone using polymers,^[@ref12]^ micelles,^[@ref13]^ and dendrimers^[@ref14]^ through conjugation with bisphosphonate is well established^[@ref15]^ and has been explored for the treatment of Paget's disease and osteoporosis.^[@ref16]^ In the context of imaging, conjugation with bisphosphonate derivatives has also been exploited to target scintigraphy agents, fluorescent probes, radionuclides for positron emission tomography,^[@ref17],[@ref18]^ and gadolinium chelates for *T*~1~-weighted MRI, as discussed in a recent exhaustive review by Cole et al.^[@ref19]^ Because bone pathologies are accompanied by loss of the mineral phase, they are well suited for *T*~2~-weighted MRI^[@ref20]^ as changes in the mineral phase are expected to also change the water content in the bone.^[@ref21]^ Therefore, bone-targeting agents, if properly designed, could not only serve as targeted drug delivery systems but also exploit changes in the bone mineral content, serving as probes that enable early identification of these processes. 
For *T*~2~-weighted MRI, ultrasmall superparamagnetic iron oxide nanoparticles (USPIONs) are the contrast agent of choice because they can disrupt the magnetic field within the tissue, providing dark contrast (negative contrast).^[@ref22],[@ref23]^ Although the binding of γ-Fe~2~O~3~ NPs modified with a bisphosphonate derivative to synthetic hydroxyapatite has been reported,^[@ref24]^ surprisingly, to date, no *T*~2~-contrast agents capable of binding to BMP have been described.^[@ref19]^ The use of USPIONs for imaging tissues can pose a few challenges. Free USPIONs are rapidly picked up by the cells of the reticuloendothelial system and show accumulation in the liver, which can lead to potential hepatic toxicity.^[@ref25]^ However, this can be mitigated by altering the morphology of the USPIONs^[@ref26]^ and encapsulating them within a polymer carrier. However, with respect to marrying USPIONs with bisphosphonate, the chelation of the phosphate groups with the iron in the USPIONs^[@ref27]^ poses a severe limitation in encapsulating USPIONs using polymers conjugated to bisphosphonate. In this study, we present a novel postmodification strategy to overcome this drawback, thereby synthesizing for the first time polymeric nanoprobes with high affinity to BMP, which also provide contrast in *T*~2~-weighted MRI and the simultaneous visualization of BMP in NIR optical tomography. We achieved this by exploiting a blending approach to produce poly([dl]{.smallcaps}-lactic acid-*co*-glycolic acid) (PLGA) NPs stabilized with polyethylene glycol (PEG) and having chemically accessible *N*-hydroxysuccinimide (NHS) groups on the surface, which upon postfunctionalization with Aln yielded NPs with a PEG- and Aln-rich surface. These PLGA--PEG/Aln NPs showed excellent solution stability, high binding capacity to BMP, and *T*~2~-contrast enhancement in MRI. 
By incorporating NIR-fluorophore-labeled PLGA in the blending step, NPs with high affinity for BMP that can be imaged in MRI and optical modalities have been realized for the first time ([Figure [1](#fig1){ref-type="fig"}](#fig1){ref-type="fig"}). ![Schematic of the fabrication of multimodal NPs that can provide contrast in optical and MRI modalities. Note: VT750 is an NIR dye that is conjugated to PLGA.](ao-2016-00088m_0001){#fig1} Results and Discussion {#sec2} ====================== NHS--PLGA Surface Functionalized NPs Containing USPIONs {#sec2-1} ------------------------------------------------------- Blending of PLGA with PLGA--PEG block copolymers to produce NPs with a PEGylated surface has been explored extensively.^[@ref28],[@ref29]^ NPs bearing Aln on the surface have been prepared by blending PLGA--PEG with PLGA--PEG--Aln and shown to accumulate in bone in a multiple myeloma model in mice.^[@ref12]^ Because Aln has high chemical affinity for oxidized iron and therefore shows chelation towards USPIONs, it has been exploited to synthesize SPECT/MRI agents.^[@ref17]^ We first verified the capacity of the USPIONs to interfere with NP formation using PLGA--Aln^[@ref30]^ (see [Materials and Methods](#sec4){ref-type="other"} for details of the synthesis) and found that the presence of USPIONs indeed disrupted NP formation in the nanoprecipitation method.^[@ref31]^ The NPs produced in the presence of USPIONs were composed of large aggregates (size: 890 ± 8 nm, polydispersity index (PDI): 0.23, [Figure S1](http://pubs.acs.org/doi/suppl/10.1021/acsomega.6b00088/suppl_file/ao6b00088_si_001.pdf)) and as such were too large for intravenous administration. We therefore postulated that a postfunctionalization approach, wherein Aln is introduced for surface modification after the NP is formed, could enable the preparation of USPION-containing NPs with bone-targeting Aln moieties. 
This approach would have an added advantage of allowing tunability of Aln density on the NP surface. We therefore end-functionalized PLGA with NHS (PLGA--NHS) as described in [Scheme [1](#sch1){ref-type="scheme"}](#sch1){ref-type="scheme"}A and then blended PLGA--NHS with PLGA--PEG to yield NPs with a reactive NHS-rich surface. Furthermore, to confer visibility to the NPs in the NIR spectrum, PLGA modified with the NIR fluorescent dye VivoTag-750 (VT750) was synthesized starting from PLGA--NHS using a two-step synthesis as shown in [Scheme [1](#sch1){ref-type="scheme"}](#sch1){ref-type="scheme"}B and incorporated in the blending step. ![(A) Functionalization of PLGA with NHS and (B) Functionalization of PLGA--NHS with VT750 NIR Dye](ao-2016-00088m_0010){#sch1} The premise behind the incorporation of PLGA--PEG was to confer stability on the NPs via steric stabilization. However, a high concentration of PEG chains on the NP surface could diminish the accessibility and hence the reactivity of Aln to the NHS groups. Therefore, an optimization study was undertaken to identify the minimum weight-percentage of PLGA--PEG that would yield NPs with narrow polydispersity, in a size range that is suitable for intravenous administration while ensuring stability under serum conditions ([Table S1](http://pubs.acs.org/doi/suppl/10.1021/acsomega.6b00088/suppl_file/ao6b00088_si_001.pdf)). On the basis of this study, a blend composition of 80% PLGA--NHS, 15% PLGA--PEG, and 5% PLGA--VT750 (total polymer concentration 5 mg/mL) was deemed optimum because it yielded NPs below 150 nm with a PDI of less than 0.2. 
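As a worked example of the formulation arithmetic behind this blend (the helper function and names are illustrative, not from the paper), the component masses for a 1 mL batch at 5 mg/mL follow directly from the weight-percentages:

```python
def blend_masses(total_mg, weight_percent):
    """Split a total polymer mass according to a weight-percent blend."""
    assert abs(sum(weight_percent.values()) - 100.0) < 1e-9
    return {name: total_mg * pct / 100.0
            for name, pct in weight_percent.items()}

# 80/15/5 blend at 5 mg/mL in 1 mL of solvent (5 mg of polymer in total)
masses = blend_masses(5.0, {"PLGA-NHS": 80, "PLGA-PEG": 15, "PLGA-VT750": 5})
# PLGA-NHS: 4.0 mg, PLGA-PEG: 0.75 mg, PLGA-VT750: 0.25 mg
```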
The presence of a surface rich in NHS groups was confirmed by the charge inversion of the NP from negative (∼−24 mV) to positive (∼+8 mV) ([Table [1](#tbl1){ref-type="other"}](#tbl1){ref-type="other"}), and this is consistent with the p*K*~a~ of NHS, which is 7.8.^[@ref32]^ Because the size, PDI, and surface chemistry of the USPIONs can impact the encapsulation efficiency, USPIONs were synthesized by thermal decomposition of iron pentacarbonyl at high temperatures, which yielded particles with an average size of 5 nm, a narrow PDI, and excellent magnetic properties.^[@ref26]^ Furthermore, using this synthesis approach, the surface of the USPIONs is coated with oleylamine, ensuring that the particles are stable and dispersible in the polymer solution during the preparation of the NPs using the nanoprecipitation method.^[@ref31]^ The NPs containing USPIONs were prepared using the blend system, and the uniform distribution of the USPIONs within the NPs was verified using transmission electron microscopy (TEM) ([Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}A). ![Characterization of freeze-dried Aln--PLGA NPs. (A) TEM image of the encapsulated USPIONs in Aln--PLGA NPs. (B) EDX spectra of the Aln--PLGA NPs. (C) ^31^P NMR spectra (12 800 scans) of Aln--PLGA NP suspensions.](ao-2016-00088m_0002){#fig2}

###### Size and Zeta Potential of Nanoparticle Preparations

| NPs | size (average ± SD) (nm) | PDI | zeta potential (average ± SD) (mV) |
| --- | --- | --- | --- |
| PLGA | 175 ± 1 | 0.16 | −24.7 ± 0.5 |
| PLGA--NHS | 148 ± 3 | 0.18 | +8.2 ± 0.7 |
| blended NHS-functionalized NPs[a](#t1fn1){ref-type="table-fn"} | 139 ± 2 | 0.16 | +4.7 ± 0.5 |

[a]{#t1fn1}: PLGA--NHS/PLGA--PEG/PLGA--VT750 at 80/15/5 weight-percentage, with 0.1 mM USPIONs (23.1 μg of Fe~3~O~4~ per 5 mg of PLGA). 
Postmodification of NHS--PLGA NPs Containing USPIONs with Alendronic Acid {#sec2-2} ------------------------------------------------------------------------- NHS--PLGA NPs encapsulating USPIONs were first dialyzed against distilled water to remove residual organic solvents, and the NP suspension was postmodified by mixing with an aqueous Aln solution (0.2 mg/mL) before it was again dialyzed against water to remove unbound molecules ([Scheme [2](#sch2){ref-type="scheme"}](#sch2){ref-type="scheme"}). Aln sodium salt was synthesized by a previously published method^[@ref16]^ with minor modification, notably substituting PCl~5~ for PCl~3~, resulting in a 78% yield, and its structure was verified using IR, ^1^H nuclear magnetic resonance (NMR), and ^31^P NMR spectroscopy ([Figure S2](http://pubs.acs.org/doi/suppl/10.1021/acsomega.6b00088/suppl_file/ao6b00088_si_001.pdf)). ![Surface Modification of NHS--PLGA NPs with Alendronic Acid](ao-2016-00088m_0003){#sch2} Zeta potential analysis of the NP suspensions after the reaction revealed an inversion of the surface charge from positive to highly negative (−32.2 ± 1.7 mV), which is consistent with the covalent linkage of Aln to the NP surface. Importantly, the modification step had minimal impact on the average size of the NPs, which remained relatively unchanged with a narrow PDI (163 ± 1 nm, PDI = 0.16), suggesting that the encapsulated USPIONs did not interfere with the postmodification with Aln. Further evidence for the presence of Aln sodium salt was obtained by energy-dispersive X-ray (EDX) analysis of lyophilized NPs, where strong signals corresponding to phosphorus and sodium were observed in addition to iron from the encapsulated USPIONs ([Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}B), whereas in the case of the NPs prior to modification (PLGA--NHS), no such peaks were detected ([Figure S3](http://pubs.acs.org/doi/suppl/10.1021/acsomega.6b00088/suppl_file/ao6b00088_si_001.pdf)). 
Additional evidence for the presence of Aln on the NP surface was gathered from the ^31^P NMR spectra ([Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}C), where the presence of a lone phosphorus peak indicated that a single phosphorus-containing species was present on the NP, thus effectively excluding nonspecific chemisorption of Aln during the modification step. In comparison, when the PLGA--PEG/PLGA NPs were incubated with Aln, no ^31^P peak was detected, suggesting that Aln is not adsorbed through electrostatic interactions but is indeed covalently bound to the NP surface ([Figure S4](http://pubs.acs.org/doi/suppl/10.1021/acsomega.6b00088/suppl_file/ao6b00088_si_001.pdf)). The postfunctionalized Aln--PLGA NPs were found to be stable for 7 days even in serum-containing cell media ([Figure S5](http://pubs.acs.org/doi/suppl/10.1021/acsomega.6b00088/suppl_file/ao6b00088_si_001.pdf)). Dose-Dependent Binding of Aln--PLGA NPs to Synthetic and Biogenic HAp {#sec2-3} --------------------------------------------------------------------- PEG--PLGA NPs, NHS--PLGA NPs, and Aln--PLGA NPs were incubated with HAp granules for 3 h and analyzed using a scanning electron microscope (SEM) ([Figure [3](#fig3){ref-type="fig"}](#fig3){ref-type="fig"}A--C). While the PEG--PLGA NPs, as expected, did not bind to the surface, the NHS--PLGA NPs showed slight binding towards HAp, which may be attributed to the affinity between the slightly positively charged surface of the NHS--PLGA NPs and the negatively charged surface of the HAp granules. However, the Aln--PLGA NPs showed a remarkable binding capacity to the HAp surface, with complete coverage of the surface by the NPs ([Figure [3](#fig3){ref-type="fig"}](#fig3){ref-type="fig"}C). To understand the binding behavior, a time-course study was undertaken using Aln--PLGA NPs modified with VT750, and the NP binding was quantified using FMT. 
Total fluorescence associated with the NP stock solution was first quantified within an agarose phantom, and this value was used to determine the fraction of the adsorbed NPs. A representative FMT volumetric projection of the HAp granules treated with VT750-labeled Aln--PLGA NPs and then embedded in an agarose phantom is shown in [Figure [3](#fig3){ref-type="fig"}](#fig3){ref-type="fig"}D. It was found that the adsorption of the Aln--PLGA NPs on the HAp surface showed first-order saturation kinetics, with rapid adsorption in the first hour followed by saturation in 2--3 h ([Figure [3](#fig3){ref-type="fig"}](#fig3){ref-type="fig"}E), with a maximum of ∼8% of the NPs being adsorbed onto the HAp surface. In comparison, adsorption of PEG--PLGA and NHS--PLGA, both of which lack Aln, was less than 2% after 3 h. More importantly, the NPs produced using PLGA premodified with Aln showed poor binding behavior that was in the same range as the nonspecific controls (NHS-NPs and PEG-NPs). Furthermore, the binding of the postmodified Aln--PLGA NPs was fourfold greater than that of the premodified NPs after 3 h; this difference was statistically significant (*p* = 0.023). This validated our premise that the postmodification strategy yields superior outcomes. Moreover, the binding of the Aln--PLGA NPs to the HAp surface was preserved in the cell culture medium supplemented with serum, which is physiologically more relevant ([Figure S6](http://pubs.acs.org/doi/suppl/10.1021/acsomega.6b00088/suppl_file/ao6b00088_si_001.pdf)). Taken together, these findings offer compelling evidence for the postmodification strategy presented herein. SEM analysis provided further qualitative evidence for the increased NP binding with longer incubation time (1--6 h) ([Figure S7](http://pubs.acs.org/doi/suppl/10.1021/acsomega.6b00088/suppl_file/ao6b00088_si_001.pdf)). 
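The first-order saturation behavior described above can be modeled as A(t) = A~max~(1 − e^−kt^). The sketch below recovers the rate constant by linearization on synthetic data; the values of A~max~ and k are invented for illustration and are not the measured kinetics:

```python
import numpy as np

def saturation(t, a_max, k):
    """First-order adsorption kinetics: A(t) = A_max * (1 - exp(-k t))."""
    return a_max * (1.0 - np.exp(-k * t))

def fit_rate(t, a, a_max):
    """Recover k from ln(1 - A/A_max) = -k t by least-squares fit."""
    slope, _ = np.polyfit(t, np.log(1.0 - np.asarray(a) / a_max), 1)
    return -slope

# synthetic time course mimicking rapid adsorption, then saturation by 2-3 h
t_h = np.array([0.25, 0.5, 1.0, 2.0, 3.0])   # hours
a_pct = saturation(t_h, a_max=8.0, k=2.0)    # ~8 % adsorbed at saturation
k_fit = fit_rate(t_h, a_pct, a_max=8.0)
```

With real, noisy measurements a nonlinear fit over both parameters would be preferable; the linearization keeps the example dependency-light.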
The saturation kinetics suggested that the limiting factor in the adsorption of the Aln--PLGA NPs onto the HAp surface was the availability of surface area. Because bone is highly porous, a linear scaling relationship between NP-associated fluorescence and surface area is necessary for quantitative analysis. Therefore, the VT750-labeled Aln--PLGA NPs were incubated with an increasing amount of HAp granules, which corresponds to an increase in the surface area, and a linear correlation was observed up to a 10-fold increase in HAp mass ([Figure [3](#fig3){ref-type="fig"}](#fig3){ref-type="fig"}F). This suggests that the system described herein can be used to quantify bone volume (mass). ![Visualization and quantification of Aln--PLGA NP adsorption on synthetic HAp granules. SEM images of the surface of HAp granules after incubation with NPs for 3 h: (A) PEG--PLGA NPs, (B) NHS--PLGA NPs, and (C) Aln--PLGA NPs. (D) Representative reconstructed FMT volume of HAp in an agarose phantom after incubation with the Aln--PLGA NPs. (E) Adsorption behavior of the Aln--PLGA NPs (*n* = 3) (postmodified) versus NHS--PLGA (*n* = 3) and PEG--PLGA (*n* = 3) (nonbinding controls) and PLGA NPs premodified with Aln (binding control, *n* = 3). The asterisk indicates statistical significance between post- and premodified NPs with a *p*-value of 0.023. (F) Adsorption of Aln--PLGA NPs as a function of HAp mass.](ao-2016-00088m_0005){#fig3} Cytocompatibility and Binding Affinity of Aln--PLGA NPs toward Biogenic HAp {#sec2-4} --------------------------------------------------------------------------- ### Cytocompatibility of Aln--PLGA--VT-750 NPs {#sec2-4-1} Toxicity of nanomaterials can be a limiting factor in clinical translation. We therefore investigated the cytocompatibility of the Aln--PLGA NPs toward osteoblasts because they are responsible for the mineralization of the mammalian skeleton. 
Osteoblasts were differentiated from human bone marrow-derived mesenchymal stem cells (MSCs) for 21 days, and their osteogenic lineage was confirmed by the upregulation of mRNA for type-1 collagen, alkaline phosphatase, and bone sialoprotein using real-time quantitative polymerase chain reaction (RT-qPCR) on day 7 of differentiation and deposition of mineral phase on day 21 of differentiation ([Figure S8](http://pubs.acs.org/doi/suppl/10.1021/acsomega.6b00088/suppl_file/ao6b00088_si_001.pdf)). In addition to osteoblasts, toxicity screening against Hepa 1-6, a mouse hepatoma-derived cell line that is routinely used for screening liver toxicity,^[@ref26]^ and mouse macrophages (RAW 264.7) was undertaken. The reason for including macrophages in this screening is that NPs are cleared from circulation primarily by macrophages. Cells were incubated with the Aln--PLGA NPs at different concentrations, ranging from 31 to 500 μg/mL, for 24 h, and the cytocompatibility was assessed by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. The Aln--PLGA NPs showed no appreciable toxicity in all three cell systems at concentrations up to 125 μg/mL, but a small decrease in cell viability was observed at 250 and 500 μg/mL in all cell types ([Figure [4](#fig4){ref-type="fig"}](#fig4){ref-type="fig"}). These data are consistent with the published reports on NP dose escalation and cell viability.^[@ref26]^ ![Cell viability after 24 h of incubation with increasing concentrations of Aln--PLGA NPs (31--500 μg/mL).](ao-2016-00088m_0006){#fig4} ### Binding of Aln--PLGA--VT-750 to Biogenic HAp {#sec2-4-2} Because the mineral deposits within biological environments are of much smaller dimensions and possibly of different chemical composition than synthetic HAp, the ability of the Aln--PLGA NPs to bind to mineral deposits secreted by osteoblasts was ascertained. 
Osteoblasts were differentiated from human MSCs for 21 days, and the secretion of BMP by the osteoblasts was verified by Alizarin Red staining ([Figure S9](http://pubs.acs.org/doi/suppl/10.1021/acsomega.6b00088/suppl_file/ao6b00088_si_001.pdf)). Osteoblasts were incubated with Aln--PLGA NPs and control NPs (premodified PLGA--Aln) for 2 h in the presence of serum, and the cells were incubated with Quin-2, which is an established fluorescent dye used for the visualization of intracellular and extracellular calcium.^[@ref33]^ Fluorescence microscopy revealed that both post- and premodified NPs (imaged in the 750 nm channel) always co-localized with Quin-2 (480 nm). However, as observed with the binding studies with HAp granules, Aln--PLGA NPs prepared by the postmodification method showed a qualitatively higher binding to the mineral phases in comparison to that of NPs prepared by premodification with Aln ([Figure [5](#fig5){ref-type="fig"}](#fig5){ref-type="fig"}). Pixel analysis confirmed an 8.4-fold increase in fluorescence signal (normalized to Quin-2 intensity, i.e., mineral phase) in the case of postmodified Aln--PLGA NPs versus premodified NPs, suggesting that for a similar content of mineral phase, a greater number of NPs were associated in the case of the former. Because the amount of VT-750-labeled PLGA was the same (5 wt %) in both NP preparations, this observed enhancement of nearly 1 order of magnitude can be unequivocally attributed to the higher binding efficiency of the postmodified NPs to biogenic HAp. ![Binding of the PLGA NPs containing USPIONs post- and premodified with Aln \[Aln--PLGA (upper panel) and PLGA--Aln (lower panel), respectively\] to biogenic HAp secreted by osteoblasts. The premodified NPs served as controls. (A and E) Quin-2 labeling of biogenic HAp deposits, (B and F) NPs, and (C and G) merged images showing colocalization of biogenic HAp (Quin-2) deposits with NPs (VT-750), and blue-colored cell nuclei (DAPI). 
Bright field images showing the mineral deposits (dark spots) associated with osteoblasts are shown in (D and H) for comparison. The scale bar is 10 μm.](ao-2016-00088m_0007){#fig5} Aln--PLGA NPs Containing USPIONs Decrease *T*~2~ Relaxation Times in Agarose Phantoms {#sec2-5} -------------------------------------------------------------------------------------- Having demonstrated that Aln--PLGA NPs with encapsulated USPIONs possess favorable cytocompatibility and show high affinity for both synthetic and biogenic HAp, we ascertained the ability of this system to function as an MRI *T*~2~-contrast agent. The system performance of NMR and MRI devices is usually tested with phantom systems.^[@ref34],[@ref35]^ Agarose is one of the most suitable components available to fabricate imaging phantoms because its *T*~2~ relaxation times (40--150 ms) are comparable to those of human tissue.^[@ref34]^ Hence, the Aln--PLGA NPs were first associated with HAp powder and then dispersed in agarose gelled in an NMR tube to simulate a bonelike tissue environment, as shown in [Figure [6](#fig6){ref-type="fig"}](#fig6){ref-type="fig"}A. The concentration of USPIONs in the NPs was varied from 0.07 to 0.35 mmol/L, and water *T*~2~ relaxation times were measured using the Carr--Purcell--Meiboom--Gill (CPMG) pulse sequence. USPIONs encapsulated within the Aln--PLGA NPs affected the transverse *T*~2~ relaxation as a function of the encapsulated amount of iron ([Figure [6](#fig6){ref-type="fig"}](#fig6){ref-type="fig"}B). This clear inverse dependence of *T*~2~ on concentration (see [Figure S10](http://pubs.acs.org/doi/suppl/10.1021/acsomega.6b00088/suppl_file/ao6b00088_si_001.pdf) for *R*^2^ values and *r*~2~) suggests that the Aln surface functionalization does not influence the ability of the encapsulated USPIONs to provide contrast. 
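An inverse dependence of *T*~2~ on iron concentration is conventionally summarized by the transverse relaxivity *r*~2~, obtained from the linear model 1/*T*~2~ = 1/*T*~2,0~ + *r*~2~·*C*. A minimal sketch of that fit, run on synthetic relaxation data (the assumed baseline *T*~2,0~ and *r*~2~ are illustrative, not the measured values):

```python
import numpy as np

def transverse_relaxivity(conc_mM, t2_ms):
    """Fit 1/T2 = 1/T2_0 + r2 * C.

    Returns (r2 in s^-1 mM^-1, baseline T2_0 in ms)."""
    rates = 1000.0 / np.asarray(t2_ms, dtype=float)   # ms -> rate in s^-1
    r2, r0 = np.polyfit(np.asarray(conc_mM, dtype=float), rates, 1)
    return r2, 1000.0 / r0

# synthetic phantom series: baseline T2 of 80 ms, assumed r2 of 100 s^-1 mM^-1
conc = np.array([0.0, 0.07, 0.14, 0.21])
t2 = 1000.0 / (1000.0 / 80.0 + 100.0 * conc)
r2, t2_0 = transverse_relaxivity(conc, t2)
```

In practice, only the linear (pre-plateau) part of a concentration series should be included in such a fit.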
Even at a moderate concentration of 0.14 mmol Fe/L, a significant reduction in *T*~2~ times of more than 50%, that is, from 78 to 42 ms, was observed, with a further reduction to ∼35 ms at 0.21 mmol/L and a plateau beyond 0.28 mmol/L. These findings confirm the suitability of Aln--PLGA NPs as the carrier for bone-targeted MRI contrast agents. ![Water *T*~2~ relaxation by the USPIONs encapsulated in the Aln--PLGA NPs: (A) schematic representation of experimental setup. (B) Water *T*~2~ relaxation times in the presence of USPIONs of different concentrations within the NPs.](ao-2016-00088m_0008){#fig6} Aln--PLGA/VT750 Encapsulating USPIONs Enable Multimodal Imaging of Bone Environment in MR and NIR Optical Imaging {#sec2-6} ----------------------------------------------------------------------------------------------------------------- NIR probes have been successfully used to noninvasively gain information on biological processes.^[@ref4],[@ref5]^ Combining the benefits of NIR imaging with the structural details gained from MRI can offer new means of correlating biological processes within dense structures such as bone. Having demonstrated that USPIONs encapsulated in Aln--PLGA NPs with high affinity toward biogenic HAp can decrease *T*~2~ times of water, we investigated the potential of these novel nanoprobes to provide contrast in *T*~2~-weighted MRI and NIR tomography using fresh bovine bone samples as a model system. Bovine bone samples (dimensions 4 × 2 × 6 mm^3^) were partially immersed for 3 h in a solution of Aln--PLGA NPs modified with VT750 and containing USPIONs. In this experimental design, the unimmersed half of the bone sample served as the internal control. The sample was washed four times with deionized water at 700 rpm to remove unbound NPs and then imaged using μ-computed tomography (μCT), MRI, and FMT ([Figure [7](#fig7){ref-type="fig"}](#fig7){ref-type="fig"}). 
μCT can clearly distinguish spongy bone and was used to provide a reference point for the MRI and FMT image reconstruction. As postulated, a darkening in the MRI image was found exclusively in the region of the bone sample that was exposed to the Aln--PLGA NPs, thus confirming the ability of the NPs to not only bind to the bone tissue but also function as a *T*~2~ contrast agent for MRI. The affinity of Aln--PLGA NPs to the bone was further verified using FMT, where a cumulative fluorescence of 32 pmol of VT750 was found. This corresponds to a theoretical amount of 8.7 μg of USPIONs. The ability to gain information in multiple modalities using these novel bone-targeting nanoprobes was verified by co-registration of the MRI and FMT data sets, where a clear overlap between the darkened regions (MRI modality) and the fluorescence signal (optical modality) was observed. ![Imaging of fresh bovine femoral head spongy bone after incubation with the Aln--PLGA NPs labeled with VT750 and loaded with the USPIONs by μCT (A), MRI (B), FMT (C), and MRI/FMT fused image (D). The dashed red line indicates the demarcation between the portion of the bone sample immersed in the NP solution to the bottom and the untreated region to the top. The color scale shows the concentration of NIR fluorophore VT750 in a given voxel (1 mm^3^).](ao-2016-00088m_0009){#fig7} Conclusions {#sec3} =========== In this study, by combining a blending approach followed by postmodification with Aln, polymeric (PLGA) nanoprobes with bone-targeting properties were synthesized. The postmodification strategy enabled the encapsulation of USPIONs without interference from Aln. Furthermore, by introducing an NIR-conjugated PLGA in the blending step, NPs possessing characteristics that can be interrogated in both MRI (USPIONs) and NIR optical tomography (NIR-dye) were realized for the first time. 
These novel multimodal nanoprobes show affinity to synthetic and biogenic HAp secreted by osteoblasts and found in spongy bone, and enable the interrogation of the bone tissue in both *T*~2~-weighted MRI and optical modalities. The multimodal nanoprobes present an important platform to interrogate the bone environment to understand bone pathologies and develop bone-targeted therapeutics at an early stage. Materials and Methods {#sec4} ===================== Materials {#sec4-1} --------- Hydroxyapatite (HAp) was purchased from Finceramica, Faenza (RA) Italy. Pro Osteon 200 was purchased from Zimmer Biomet in Winterthur, Zürich. Poly([d]{.smallcaps},[l]{.smallcaps}-lactide-*co*-glycolide) (PLGA), *M*~w~ 14 000--38 000; poly(ethylene glycol) methyl ether-*block*-poly(lactide-*co*-glycolide) (PLGA--PEG), PEG average *M*~n~ = 5000 g/mol, PLGA average *M*~n~ 55 000 g/mol; *N*-(3-dimethylaminopropyl)-*N*′-ethylcarbodiimide hydrochloride (EDC), ≥98%; NHS, 98%; aminobutyric acid, ≥98%; phosphorus pentachloride, ≥98%; ethylenediamine, \>99%; methanesulfonic acid, ≥98%; sodium hydroxide, ≥98%; dimethyl sulfoxide (DMSO), ≥99.8%; Alizarin, dye content, 97%; Quin-2, ≥95%; dexamethasone, ≥98%; [l]{.smallcaps}-ascorbic acid 2-phosphate sesquimagnesium salt hydrate, ≥95%; glycerol phosphate disodium salt pentahydrate, ≥98%; and tetrahydrofuran (THF), ≥99.9%, were purchased from Sigma-Aldrich, St. Louis, MO, USA. Methanol, ≥99.8%, and agarose type I, of molecular-biological grade, were purchased from Merck Millipore, MA, USA. Magnesium sulfate, ≥99%, and dialysis membranes \[molecular weight cutoff (MWCO), 50 000 and 10 000\] were purchased from Carl Roth, Germany. Formaldehyde 37% was purchased from AppliChem, Germany. Dichloromethane (DCM), ≥99%, was purchased from Fischer Scientific, France. VivoTag-750 was purchased from Perkin Elmer, MA, USA. Acetone, ≥99.9%, and diethyl ether, ≥99.9%, were purchased from VWR International, PA, USA. Human marrow-derived MSCs were a gift from Prof. Dr. 
Ivan Martin from University Hospital Basel. Hepes buffer was purchased from PAN Biotech, Germany. Fetal bovine serum was purchased from Invitrogen, CA, USA. Penicillin--streptomycin and phosphate-buffered saline (PBS) were purchased from Gibco/Life Technologies, CA, USA. Fibroblast growth factor was purchased from R&D Systems, Germany. Synthesis of PLGA--NHS {#sec4-2} ---------------------- Carboxylic acid end-functionalized PLGA (2.9 × 10^--5^ mol) was mixed with EDC (2.08 × 10^--4^ mol) in 10 mL of DCM under an argon atmosphere. After 30 min, NHS (2.08 × 10^--4^ mol) was added to the reaction flask and stirred overnight. A translucent solution was obtained. The reaction mixture was filtered through a Teflon syringe filter with a cutoff of 200 nm and dripped into 10 mL of diethyl ether at 0 °C to form a precipitate. The obtained suspension was filtered, and the precipitate dried under reduced pressure to obtain PLGA--NHS. VT-750 Functionalization of PLGA--NHS {#sec4-3} ------------------------------------- PLGA--NH~2~ (0.32 μmol) was dissolved in 1 mL of acetone. VivoTag-750 (1 mg) was diluted in 100 μL of DMSO and added to the PLGA--NH~2~ solution. The sample was put on a shaker for 3 h at 700 rpm. The acetone was removed under reduced pressure. Saturated NaCl (0.4 mL), DCM (2 mL), and water (2 mL) were combined in a 5 mL Eppendorf tube, and the resultant mixture was shaken thoroughly. The water phase was removed, and the organic phase washed again with 2 mL of water. This step was repeated 5 times. The organic phase was dried with MgSO~4~, and the solvent evaporated under reduced pressure to yield PLGA--VT750. Synthesis of PLGA--NH~2~ from PLGA--NHS {#sec4-4} --------------------------------------- PLGA--NHS (100 mg, 3.12 μmol) was dissolved in 10 mL of DCM, and then 10 equiv of ethylenediamine was added to the solution and allowed to react at room temperature (RT) for 3 h. 
The reaction mixture was precipitated in cooled diethyl ether, leading to a white precipitate, which was filtered and washed with diethyl ether. The filter cake was dried, dissolved in DCM, and reprecipitated in dried alcohol to remove unreacted amine. Synthesis of Alendronic Acid {#sec4-5} ---------------------------- In a 250 mL flask under argon, aminobutyric acid (0.095 mol) and phosphorous acid (0.095 mol) were dissolved while stirring at 65 °C in 50 mL of methanesulfonic acid. After 20 min, 0.2 mol of PCl~5~ was added. The solution was stirred under reflux at 65 °C for 22 h. The translucent mixture obtained was quenched with 200 mL of water under vigorous stirring. The formation of gas was observed, and the gas was absorbed in 0.1 mol/L NaOH. The reaction mixture was then stirred at reflux at 160 °C for 5 h. The reaction was cooled down using ice-cold water. The pH in the initial reaction flask (−0.34) was brought to 1.85 by the addition of 57 mL of 50% NaOH solution. The resulting solution was added dropwise to 500 mL of methanol, which was cooled with an ice bath. A white precipitate was formed. The suspension formed was filtered and washed with 3 × 100 mL of methanol. The resulting solid was solubilized in water (30 mL) and was poured over 500 mL of methanol at RT under vigorous agitation. The resulting white suspension was filtered to yield a white solid (0.074 mol). The remaining solvent was removed by heating in an oven for 1 h at 70 °C followed by 20 min at 90 °C. Nanoprecipitation of NHS--PLGA NPs {#sec4-6} ---------------------------------- The NPs were synthesized using the nanoprecipitation method.^[@ref31]^ Briefly, PLGA--VT750, PLGA--PEG, and PLGA--NHS (5:15:80 w/w %) were dissolved, together with USPIONs (0.1 mmol/L), in THF (1 mL) at a total polymer concentration of 5 mg/mL. An equal volume of water was then added rapidly to the organic phase to form the NPs. 
The remaining organic solvent was removed by dialysis (MWCO = 50 000/50 kDa), stirring at 200 rpm overnight at RT.

Alendronic Acid Conjugation on the Surface of NHS--PLGA NPs {#sec4-7}
-----------------------------------------------------------

Alendronic acid (500 μL, 0.2 mg/mL) was mixed with 500 μL of NHS--PLGA NPs. The sample was shaken for 2 h at 700 rpm at RT. To remove the unreacted alendronic acid, the sample was dialyzed overnight against water (MWCO = 100 000/100 kDa).

Synthesis of PLGA--Aln from PLGA--NHS {#sec4-8}
-------------------------------------

PLGA--NHS (10 mg, 0.32 μmol) was dissolved in 5 mL of acetone. Water (5 mL) was slowly added under magnetic stirring. A milky suspension of NHS--PLGA NPs was obtained. The solvent was removed by dialysis (MWCO = 50 000/50 kDa) overnight. The NHS--PLGA NPs were diluted with water to reach a concentration of 0.25 mg/mL. Alendronic acid (1 mL; 0.1 mg/mL) was added dropwise to the NP suspension while stirring. The NPs started to aggregate, indicating the formation of an amide bond between alendronic acid and PLGA--NHS. The reaction was stirred for 2 h at RT. The precipitate was centrifuged (2000 rpm), and the supernatant was discarded. The precipitate was dissolved in DCM and again precipitated in diethyl ether. To remove the unreacted alendronic acid, the sample was again dialyzed overnight against water (MWCO = 100 000/100 kDa), which yielded PLGA--Aln. The resulting PLGA--Aln was used for the control experiments.

Cell Culture {#sec4-9}
------------

Hepa 1-6 cells were cultured in Dulbecco's Modified Eagle's Medium supplemented with 10% fetal bovine serum (Thermo Fisher Scientific, Waltham, MA, USA) and 1% penicillin/streptomycin (P/S) (100 U/mL; PAN-Biotech GmbH, Aidenbach, Germany). Human MSCs, a gift from Prof.
Ivan Martin (University Hospital Basel; reference number of the local ethical committee 78/07), were differentiated into osteoblasts for 21 days with α-minimum essential medium (α-MEM) supplemented with 10% fetal bovine serum, 100 U/mL penicillin, 100 μg/mL streptomycin, 1 mM sodium pyruvate, 10 mM HEPES buffer, 5 ng/mL basic fibroblast growth factor, 10 nM dexamethasone, 0.1 mM [l]{.smallcaps}-ascorbic acid-2-phosphate, and 10 mM β-glycerol phosphate. Osteoblast differentiation was tested after 7 days with RT-qPCR and after 21 days with Alizarin Red-S staining. Mouse macrophages (RAW 264.7, Sigma-Aldrich, Germany) were cultured in Roswell Park Memorial Institute medium supplemented with 5% fetal bovine serum and 1% P/S (100 U/mL; PAN-Biotech GmbH, Aidenbach, Germany). All cells were maintained in a humidified incubator at 37 °C with a 5% CO~2~ atmosphere.

Cellular Metabolic Activity Assessment {#sec4-10}
--------------------------------------

MTT is an assay used to investigate the mitochondrial activity of cells. Cells were exposed to the MTT dye (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide), which is transformed into insoluble purple formazan crystals. After exposure, the crystals were dissolved in DMSO, and the absorbance at 530 nm was measured in a plate reader (Synergy II, BioTek Instruments, Inc.). The amount of transformed MTT is proportional to the metabolic activity and can therefore be used to determine the effect of particles on cell viability. We seeded 20 × 10^3^ cells per well in a 96-well plate and allowed them to attach overnight. On the following day, the supernatant was replaced with media containing Aln--PLGA NPs at different concentrations (31, 62, 125, 250, and 500 μg/mL). After 24 h of incubation, cell viability was investigated following the protocol provided by the supplier (Invitrogen). Untreated cells served as controls for the experiment.
Osteogenic Differentiation of Human MSCs {#sec4-11}
----------------------------------------

MSCs were differentiated to osteoblasts for 21 days with α-MEM supplemented with 10% fetal bovine serum, 100 U/mL penicillin, 100 μg/mL streptomycin, 1 mM sodium pyruvate, 10 mM HEPES buffer, 5 ng/mL basic fibroblast growth factor, 10 nM dexamethasone, 0.1 mM [l]{.smallcaps}-ascorbic acid-2-phosphate, and 10 mM β-glycerol phosphate. Osteoblast differentiation was tested after 7 days with RT-qPCR and after 21 days with Alizarin Red-S staining.

Alizarin Red-S Staining {#sec4-12}
-----------------------

After three weeks, the cell layer in the petri dish was washed with PBS 3 times, fixed for 10 min in 3.7% formalin at RT, stained with 2% Alizarin Red for 20 min, and washed with tap water. Human MSCs were able to mineralize the matrix abundantly after 3 weeks of culture in osteogenic medium, as demonstrated by a strong Alizarin Red-S staining.

Transmission Electron Microscopy {#sec4-13}
--------------------------------

To determine the size and the presence of USPIONs in the polymeric NPs, TEM was used [Zeiss LEO 912 Omega (Leo Elektronenmikroskopie GmbH, Oberkochen, Germany)]. An aliquot of Aln--PLGA NPs in water was placed on a carbon grid. After removing the solvent, the samples were analyzed at an electron energy of 120 keV.

Scanning Electron Microscopy {#sec4-14}
----------------------------

SEM micrographs were obtained using a Quanta 250 FEG (FEI). The dried HAp samples (Finceramica), after incubation with the Aln--PLGA NPs, were placed on a carbon grid and coated with gold for high-resolution imaging under reduced pressure. For elemental analysis (EDX), an Oxford INCA x-act (Oxford Instruments, UK) was used. The samples were analyzed using the INCA software.
Fourier-Transform Infrared Spectroscopy (FTIR) {#sec4-15}
----------------------------------------------

FTIR spectra were recorded on a Vector 22 instrument (Bruker Optics), and the software provided by the manufacturer was used to import and analyze the spectra.

Dynamic Light Scattering {#sec4-16}
------------------------

The particle size was analyzed using dynamic light scattering with a Delsa Nano C (Beckman Coulter Inc., USA) equipped with a laser diode operating at 658 nm. Measurements were conducted at a scattering angle of θ = 165° to the incident beam. Samples were equilibrated at 25 °C for at least 30 min prior to the analysis. The data were processed using CONTIN algorithms with Delsa Nano UI Software version 3.73. The size and PDI were expressed as the average of at least three measurements (±standard deviation).

Surface Charge Measurements {#sec4-17}
---------------------------

A Delsa Nano C (Beckman Coulter Inc., USA) analyzer was used to measure ζ potential values. The ζ potential of the particles in aqueous suspension (1:9 dilution with deionized H~2~O) was obtained by measuring the electrophoretic movement of charged particles under an applied electric field. Scattered light was detected at a 30° angle at 25 °C.

*T*~2~ Measurements {#sec4-18}
-------------------

A Bruker Avance 300 nuclear magnetic resonance spectrometer (Bruker Biosciences Corp., Billerica, MA) with a CPMG sequence was used to measure *T*~2~ relaxation times. The USPION-containing Aln NPs were incubated with 20 mg HAp powder for 3 h at RT. The supernatant was removed, and the samples were washed 5 times with 0.2 mL of double-distilled deionized water. The samples were then dried under reduced pressure for 6 h.
A 2% w/v agarose solution prepared in deionized water was heated to 90 °C, and while it cooled, 20 mg HAp mixed with 0.7 mL of agarose was added to an NMR tube (180 × 5 mm); the suspension was then gelled by placing it on ice. The NMR data were processed using TopSpin 3.2 software (Bruker), and Dynamic Center 2.1.8 was used to calculate *T*~2~ relaxation times.

Microscopy {#sec4-19}
----------

For fluorescence microscopy, cells were seeded in 8-well chamber slides. Following incubation of the osteoblasts with Aln--PLGA NPs for 2 h, the cells were washed three times with PBS to remove unbound and weakly bound NPs, stained with 4′,6-diamidino-2-phenylindole (DAPI) nuclear stain, fixed with 3.7% (v/v) paraformaldehyde for 15 min, mounted onto coverslips, and then imaged using the Zeiss Cell Observer Z1. The images of the NPs (750 channel) were exported and analyzed using pixel analysis in ImageJ to compare the amount of NPs (pre- and postmodified) bound to the calcium deposits.

FMT Imaging {#sec4-20}
-----------

After incubation with fluorescence-labeled NPs, the HAp (Osteoporon) was dried under reduced pressure and sealed in a plastic tube (3.0 × 18.0 mm), which was placed in an agarose phantom (1% in Milli-Q water). Samples were analyzed with an FMT 2500 (PerkinElmer) using the 750 nm channel (excitation 745 nm, emission 770--800 nm). Reflectance images were analyzed using TrueQuant 3D (PerkinElmer, version 2.2.0.24).

Image Co-registration {#sec4-21}
---------------------

μCT volumes (isometric voxel size: 0.0844 mm; 908 × 428 pixels, 500--1200 slices) and FMT volumes were imported into AMIDE (64-bit v. 1.0.4). The volumes were corrected for axial rotations and co-registered based on the reflectance images from FMT. The co-registered datasets were exported in DICOM format and imported into OsiriX (v.
5.5.1, 64-bit), where the image volumes were synchronized, fused, and visualized using 3D volume rendering.

MRI {#sec4-22}
---

A commercially available mouse-head 2-element quadrature cryogenic coil system with a 7 T Bruker BioSpec small horizontal animal scanner (bore size 20 cm, maximum gradient amplitude 676 mT/m) was used to adapt MR pulse sequences for bone sample imaging. The standard setup, designed for imaging live mice, was modified by the addition of a 3D-printed sample holder made of polylactic acid (PLA). Bone samples incubated with NPs (spongiosa, 4 × 2 × 6 mm^3^) were then inserted into a poly(methyl methacrylate) (PMMA) container (height 3 mm) filled with saline solution and sealed on one side with an adhesive PCR tape. For imaging, a 3D FLASH sequence was applied with TR (40 ms), TE (6.2 ms), flip angle (50°), averages (1), BW (50 000 Hz), cryocoil, pulse length (1.400 ms), BW (3000.0 Hz), and a physical resolution of 39 × 38 × 94 μm^3^; the image was acquired in 22 min. The binding of the NPs on the bone samples was subsequently compared with the findings from FMT, microscopy, and μCT.

μCT Imaging {#sec4-23}
-----------

Bovine bone samples were scanned using a SkyScan 1178 (SkyScan, Belgium; version 1.3) high-speed in vivo micro-CT imager over an angle of 180° with rotation steps of 1°, at a resolution of 83 μm and a field of view of 86 mm. Each image was acquired at 65 kV and 615 μA with an average of three frames. Using the NRecon module (SkyScan 2010, v. 6.3.3), the data sets underwent postalignment, beam-hardening correction, and ring-artifact reduction (a parameter set with the integrated fine-tuning tool), and the 3D reconstructions were exported in DICOM format and presented as a 3D image in OsiriX v. 5.5.1.
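The *T*~2~ data collected above are typically reduced to relaxation rates (*R*~2~ = 1/*T*~2~) and fitted linearly against USPION concentration to estimate a relaxivity, as in the *r*~2~-vs-concentration analysis referenced in the Supporting Information. A minimal sketch of that reduction; the concentrations and *T*~2~ values below are illustrative assumptions, not measured data:

```javascript
// Least-squares line fit: returns slope and intercept of y = a*x + b.
// Used here to estimate relaxivity r2 (slope of R2 = 1/T2 vs. iron concentration).
function linearFit(xs, ys) {
  const n = xs.length;
  const mx = xs.reduce((s, v) => s + v, 0) / n;
  const my = ys.reduce((s, v) => s + v, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    den += (xs[i] - mx) * (xs[i] - mx);
  }
  const slope = num / den;
  return { slope, intercept: my - slope * mx };
}

// Illustrative (hypothetical) values: concentration in mM, measured T2 in seconds.
const concentration = [0.025, 0.05, 0.1, 0.2];
const t2 = [0.200, 0.105, 0.054, 0.0275];
const r2rates = t2.map((t) => 1 / t); // R2 = 1/T2 in s^-1
const fit = linearFit(concentration, r2rates);
console.log(fit.slope.toFixed(1)); // relaxivity estimate in s^-1 mM^-1 (~179 for these made-up numbers)
```

The slope of this fit is the relaxivity; a larger slope means the particles shorten *T*~2~ more effectively per unit of iron.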
Statistical Analysis {#sec4-24}
--------------------

The statistical analysis was performed using Student's *t*-test (paired, type 2, and 2-tailed) in Microsoft Excel, and *p*-values \< 0.05 were considered statistically significant. All values are presented as mean ± standard deviation (SD).

The Supporting Information is available free of charge on the [ACS Publications website](http://pubs.acs.org) at DOI: [10.1021/acsomega.6b00088](http://pubs.acs.org/doi/abs/10.1021/acsomega.6b00088). TEM of premodified Aln--PLGA NPs (from PLGA--Aln) containing USPIONs; evaluation of PLGA--PEG weight percentage to yield stable NPs; stability of Aln--PLGA/PEG--PLGA NPs in serum-conditioned media; reaction scheme and structure analysis of alendronic acid; EDX spectra of NHS--PLGA NPs; ^31^P spectra of NHS--PLGA NPs; SEM images of Aln--PLGA NP adsorption on HAp over time; SEM images of pristine HAp and Aln--PLGA NPs incubated in cell-conditioned media; PLGA--PEG NPs; osteogenic lineage of osteoblasts; Alizarin Red staining of calcium deposits secreted by osteoblasts; and *R*^2^ values and *r*^2^ as a function of USPION concentration in Aln--PLGA NPs ([PDF](http://pubs.acs.org/doi/suppl/10.1021/acsomega.6b00088/suppl_file/ao6b00088_si_001.pdf))

Supplementary Material
======================

###### ao6b00088_si_001.pdf

The authors declare no competing financial interest.

This work was funded by the Fifth INTERREG Upper Rhine Program (Project A21: NANO\@MATRIX), the Excellence Initiative of the German Federal and State Governments (Grant EXC 294, Centre for Biological Signalling Studies), the Helmholtz Zentrum Geesthacht through the Helmholtz Virtual Institute on Multifunctional Biomaterials for Medicine, and the University of Freiburg. The authors thank R. Thomann for assistance with acquiring TEM images, V. Ahmadi for SEM analysis, D. Vonwil for technical advice on FMT, J. Christensen for valuable input in image co-registration, S.
Shah for assistance with FMT imaging of HAp samples, V. Hugo Pacheco Torres for assistance with NMR measurements, and K. Göbel-Guéniot for preparing the sample holder for the MRI measurements.
Efficacy of a triple therapy with rabeprazole, amoxicillin, and faropenem as second-line treatment after failure of initial Helicobacter pylori eradication therapy. Triple therapy consisting of lansoprazole, amoxicillin, and clarithromycin (LAC regimen) is widely used to eradicate Helicobacter pylori in Japan. However, the need for appropriate treatment after failure of initial therapy to eradicate H. pylori has been increasing. We therefore assessed the efficacy of a combination of rabeprazole, amoxicillin, and faropenem for second-line eradication therapy. The subjects were 116 patients positive for H. pylori infection. Patients initially received lansoprazole 60 mg/day, amoxicillin 1500 mg/day and clarithromycin 400 mg/day in two divided doses for 7 days. Patients in whom eradication treatment failed were given rabeprazole 20 mg/day and amoxicillin 1500 mg/day in two divided doses, and faropenem 600 mg/day in three divided doses (RAF regimen) for 7 consecutive days. H. pylori status was assessed by the 13C-urea breath test combined with rapid urease test or H. pylori culture method 8 weeks after completion of therapy. Susceptibility to clarithromycin was determined by the agar dilution method, and genetic polymorphism of CYP2C19 was analyzed by polymerase chain reaction-restriction fragment length polymorphism. The initial H. pylori eradication rate with the LAC regimen was 76.4% (84/110). Assessment of the CYP2C19 genotypes of the patients in whom eradication therapy failed revealed that homozygous extensive metabolizers accounted for 70.0% (16/23) and heterozygous extensive metabolizers for 30.0% (7/23), with no poor metabolizers. The acquired resistance rate for clarithromycin was 52.0% (12/23). The success rate of re-eradication with the RAF regimen was 91.3% (21/23) with no serious adverse effects. Triple therapy comprising rabeprazole, amoxicillin, and faropenem is effective for second-line eradication treatment of H. 
pylori infection, regardless of the genetic polymorphism of CYP2C19 or the presence of resistance to clarithromycin.
Screening of multiple metal- and antibiotic-resistant isolates and their plant growth promoting activity. Heavy metal contamination has accelerated due to rapid industrialization worldwide. Accumulation of metals in excess can modify the structure of essential proteins or can replace an essential element. Bradyrhizobium strains showed tolerance to cadmium, chromium, nickel, lead, zinc, and copper. All the isolates showed maximum tolerance towards lead and zinc, followed by nickel and chromium. These strains also showed tolerance towards most of the antibiotics. Bradyrhizobium strains were also tested for their plant growth promoting (PGP) substances; all isolates produced a good amount of indole acetic acid and were positive for ammonia, but only three strains (RM1, RM2, and RM8) were positive for HCN and siderophore; the remaining isolates showed negative results. Based on the above intrinsic abilities of these Bradyrhizobium species, the strains can be used for growth promotion, as well as for the detoxification of heavy metals in metal-polluted soils.
The second Adventurer's Challenge brings exclusive rewards all month long, plus some new twists. Read on to see what's coming! November 1, 2018 at 11:05 AM by PrintsKaspian With Halloween over, darkness has fallen over Albion... and with the new month comes a new Adventurer's Challenge! For all of November, take the Grim Challenge and claim a piece of the darkness for yourself. Earn points for open-world activities, unlock chests to get valuable loot, and claim your own unique plague-themed mount to sow fear and confusion among your foes. Pestilential Prizes For Intrepid Adventurers NEW MOUNT: November's Grim Challenge introduces the Pest Lizard, a fast-moving mount with two different poison-based attacks that confuse and disorient enemies. (Note: while they are not officially battle mounts, Pest Lizards cannot be used in GvGs due to their unique poison spells.) NEW AVATAR BORDER: This month, reach your goals to win the Grim Challenge 2018 avatar border, a thematic decoration that will grimly frame your avatar for all eternity. This non-tradable item permanently unlocks this exclusive avatar border for one character. SEASONAL SPECIALS: Unlock Grim Challenge chests to get a mix of classic and new vanity items. We're bringing back a classic costume set that matches the pestilential theme, and introducing two new furniture items sure to bring menace and gloom to even the cheeriest setting. Updated Points and Rewards - Farmers Rejoice! Starting this month, Fame earned through farming now gives Challenge Points as well! Now you can earn points through PvE, gathering, fishing, and farming as you work toward weekly and monthly rewards. The formula for earning points has also been adjusted to bring high-end and low-end Fame values closer together, and the daily bonus has been increased to reward regular play. And as an added bonus, you'll now have a chance of getting City Heart resources with each chest you open. 
And finally, with the start of this second challenge, the UI has been updated to show all past challenges. You can now view previous months' challenges via tabs in the Adventurer's Challenge UI, so if you've unlocked something in a past month but never claimed it, your rewards are safe. Log in now to earn Challenge Points and unlock these exclusive rewards!
Q: How to validate the CKEditor using jQuery

I am new to JavaScript and I am developing a project in CodeIgniter. I face a problem with CKEditor validation: when I fill all fields and click the submit button, a message shows "CKEditor is a required field", but when I click the submit button again, the data is successfully submitted. I don't know how to resolve this problem; I have already searched a lot, but the problem remains. I am using this code, but it gives the error "getData() is not defined". I don't know what getData is or where I have to use it.

function CheckForm(theForm) {
    textbox_data = CKEDITOR.instances.mytextbox.getData();
    if (textbox_data === '') {
        alert('please enter a comment');
    }
}

A: Change this:

textbox_data = CKEDITOR.instances.mytextbox.getData();

to:

textbox_data = CKEDITOR.instances['mytextbox'].getData();

In the brackets there should be the id of the textbox.

NOTE: Don't forget to add the CKEditor JS.
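A likely reason the second click submits anyway is that the handler never returns false to block the form. A sketch of a more robust check, splitting the validation into a testable helper; the instance id "mytextbox" and the form wiring are assumptions taken from the question:

```javascript
// Pure helper: true when the editor payload contains visible text.
// CKEditor often reports an "empty" document as markup such as "<p>&nbsp;</p>",
// so tags and non-breaking spaces must be stripped before the length test.
function editorHasContent(data) {
  var text = String(data)
    .replace(/<[^>]*>/g, ' ')  // drop HTML tags
    .replace(/&nbsp;/g, ' ')   // drop non-breaking spaces
    .trim();
  return text.length > 0;
}

// Browser wiring (assumed markup: <form onsubmit="return checkForm(this);">):
function checkForm(theForm) {
  var data = CKEDITOR.instances['mytextbox'].getData();
  if (!editorHasContent(data)) {
    alert('please enter a comment');
    return false; // block submission; without this the second click submits
  }
  return true; // allow the form to submit
}
```

Returning false from the onsubmit handler is what actually prevents the browser from posting the form while the editor is empty.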
Doggy Hydrant

The Doggy Hydrant was originally conceived out of the idea to help isolate the potty area of man's best friend and save expensive landscaped yards, or to encourage the use and placement of indoor potty areas. Even though it is still used as a dog training aid to make your pet piddle in the same spot each time, it has become a great accessory and decor piece for the most pampered dogs and has become popular as a photography prop and set decoration. This miniature replica fire hydrant was designed using data from real fire hydrants and is surfaced to mimic the actual cast iron finish of real hydrants. Manufactured in the heartland of the USA using 100% USA-made materials and labor, these replicas are 20 inches tall and weigh just 5 lbs. They come with a fitted plug in the bottom so they can be filled with sand or other materials to keep them stable, and are constructed of durable exterior-grade urethane materials and finished with UV-stable paints and clear coats which will keep these hydrants looking good for years. The Doggy Hydrant is currently available in three colors: Pink, Blue, and Classic Red.
{% assign calendar-id = include.calendar-id | default: 'main' %} <div id="calendar-{{ calendar-id }}" class="card-calendar"></div> {% capture_global scripts %} <script> document.addEventListener('DOMContentLoaded', function () { {% if jekyll.environment == 'development' %}window.tabler_calendar = window.tabler_calendar || {};{% endif %} var calendarEl = document.getElementById('calendar-{{ calendar-id }}'), today = new Date(), y = today.getFullYear(), m = today.getMonth(), d = today.getDate(); window.FullCalendar && ({% if jekyll.environment == 'development' %}window.tabler_calendar["calendar-{{ calendar-id }}"] = {% endif %}new FullCalendar.Calendar(calendarEl, { plugins: [ 'interaction', 'dayGrid' ], themeSystem: 'standard', header: { left: 'title', center: '', right: 'prev,next' }, selectable: true, selectHelper: true, nowIndicator: true, views: { dayGridMonth: { buttonText: 'month' }, timeGridWeek: { buttonText: 'week' }, timeGridDay: { buttonText: 'day' } }, defaultView: 'dayGridMonth', timeFormat: 'H(:mm)', events: [ { title: 'All Day Event', start: new Date(y, m, 1), className: 'bg-blue-lt' }, { id: 999, title: 'Repeating Event', start: new Date(y, m, 7, 6, 0), allDay: false, className: 'bg-blue-lt' }, { id: 999, title: 'Repeating Event', start: new Date(y, m, 14, 6, 0), allDay: false, className: 'bg-lime-lt' }, { title: 'Meeting', start: new Date(y, m, 4, 10, 30), allDay: false, className: 'bg-green-lt' }, { title: 'Lunch', start: new Date(y, m, 5, 12, 0), end: new Date(y, m, 5, 14, 0), allDay: false, className: 'bg-red-lt' }, { title: 'LBD Launch', start: new Date(y, m, 19, 12, 0), allDay: true, className: 'bg-azure-lt' }, { title: 'Birthday Party', start: new Date(y, m, 16, 19, 0), end: new Date(y, m, 16, 22, 30), allDay: false, className: 'bg-orange-lt' } ] })).render(); }); </script> {% endcapture_global %}
Sunday, June 22, 2014

Corroborating Haynes Labels

A drum in my personal collection manufactured by J. C. Haynes & Co. bears a fantastic label inside which reads in part: "Manufacturers and Importers of Brass and German Silver Musical Instruments. / J. C. Haynes & Co., / Importers, Wholesale and Retail Dealers in / Musical Instruments, Strings, Sheet Music, and Musical Merchandise. / 33 COURT ST., opp. the Court House. / John C. Haynes. Oliver Ditson. C. H. Ditson. J. E. Ditson." Left incomplete, however, are the blanks where the date and owner's name can be filled in. Even upon close inspection, no handwriting can be made out. One theory as to why this information isn't present is that the ink has simply faded over time. It now appears more likely that these blanks were never filled in at all. We can say this with a bit of confidence after comparing it with another similar instrument.

J. C. Haynes & Co. Drum, ca. 1870s - 1880s

J. C. Haynes & Co. Drum Label, ca. 1870s - 1880s

The example seen below was recently offered up on ebay by a seller from Texas with the username "all_things_peacock". The drum is quite similar to mine, especially upon viewing the shells from the inside. Both drums appear to be made of a dark hardwood and have narrow reinforcing rings made of a lighter-colored wood at each bearing edge. The labels on these two instruments are a perfect match, which helps solve a bit of a mystery as to how old my own drum is. Past research showed that the address on a Haynes label by itself was not enough to accurately date a drum beyond a decades-wide window spanning most of the latter half of the 19th century. But this new label, complete with a handwritten date, provides a firm point on the timeline. September 9th, 1880 it reads, which happened to be a Thursday for what it's worth.
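The weekday aside at the end is easy to check programmatically; a small sketch using JavaScript's built-in Date object (UTC, to keep time zones out of it):

```javascript
// Returns the weekday name for a Gregorian calendar date.
function weekdayOf(year, month, day) { // month is 1-12
  const names = ['Sunday', 'Monday', 'Tuesday', 'Wednesday',
                 'Thursday', 'Friday', 'Saturday'];
  return names[new Date(Date.UTC(year, month - 1, day)).getUTCDay()];
}

console.log(weekdayOf(1880, 9, 9)); // → Thursday
```

The same call confirms the post's own dateline: June 22, 2014 was indeed a Sunday.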
Demonstration that the leukocyte common antigen CD45 is a protein tyrosine phosphatase. It has been proposed on the basis of amino acid sequence homology that the leukocyte common antigen CD45 represents a family of catalytically active, receptor-linked protein tyrosine phosphatases [Charbonneau, H., Tonks, N. K., Walsh, K. A., & Fischer, E. H. (1988) Proc. Natl. Acad. Sci. U.S.A. 85, 7182-7186]. The present study confirms that CD45 possesses intrinsic protein tyrosine phosphatase (PTPase) activity. First, a mouse monoclonal antibody to CD45 (mAb 9.4) specifically eliminated, by precipitation, PTPase activity from a high Mr fraction containing CD45, prepared by gel filtration (Sephacryl S200) of a Triton X-100 extract of human spleen. Second, PTPase activity was demonstrated in a highly purified preparation of CD45 that was eluted with a high pH buffer from an affinity column, constructed from the same antibody. Third, on sucrose density gradient centrifugation, PTPase activity was only found in those fractions that contained CD45 as determined by Western analysis. When CD45 was caused to aggregate, first by reacting it with mAb 9.4 and then adding a secondary, cross-linking anti-mouse mAb, the PTPase activity shifted to the same higher Mr fractions that contained CD45. No shift in CD45 or PTPase was observed following addition of a control IgG2a. On this basis, it is concluded that CD45 is a protein tyrosine phosphatase.
Q: How we hear sound, computer vs. human experiment - amplitude and volume

Suppose the following: we open song A from YouTube on one computer in 5 tabs (5 copies of the same song started at the same time) at the same volume. In practice it sounds like the same song played once at the same volume; the volume is not multiplied by 5.

Now, in real life, imagine 5 identical singers (think of them as copies) who start to sing song A at the same time, at the same volume, from the same location (think of them as points bound together, so their distance to the listener is the same). My intuition says we would hear 5x the amplitude of song A, so by the logarithmic nature of hearing, the loudness we perceive would increase differently than in the computer example.

My dilemma is how the computer sends 5 copies of song A to the speaker but gives the same volume as one, while the second example differs. It might be something that is not often thought about. Can someone explain the mechanism?

A: I think people have misunderstood that this is a hypothetical question. Anyway, in short, playing two sound sources simultaneously on your computer will add them together (double the power / +3 dB, etc.). This can be observed by having multiple mono tracks in a DAW, using your computer's motherboard output if necessary, muting all tracks and progressively un-muting them. The thing with a computer output is that it has a controllable amplitude. So whilst you may be combining the audio signals, your overall output amplitude will be capped, which will simply result in distorted audio when you've 'added' too many signals. As @Andy aka mentioned above, some computer soundcards will mix/limit audio output so that it doesn't distort. In the real world, such an amplification cap does not exist.
Whilst, as @Richard Crowley mentioned in his answer, the addition and combination of sound waves in air is incredibly complex, in theory, if you were able to keep creating multiple identical sounds from the exact same source location, they would continue to 'add' together.
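The "adding" described above can be put into numbers. For N identical sources, incoherent (uncorrelated) addition sums powers, giving a gain of 10·log10(N) dB, while perfectly coherent in-phase addition sums amplitudes, giving 20·log10(N) dB; real rooms fall somewhere between. This is a generic acoustics sketch, not tied to any particular sound card or player:

```javascript
// Level gain in dB from stacking n identical sources.
// coherent = true  : amplitudes add (in-phase copies)  -> 20 * log10(n)
// coherent = false : powers add (uncorrelated sources) -> 10 * log10(n)
function levelGainDb(n, coherent) {
  return (coherent ? 20 : 10) * Math.log10(n);
}

console.log(levelGainDb(2, false).toFixed(2)); // → 3.01 (the "+3 dB" rule of thumb)
console.log(levelGainDb(5, true).toFixed(2));  // → 13.98 (five in-phase copies)
```

So five perfectly synchronized singers would raise the level by roughly 14 dB, while five uncorrelated ones would raise it by only about 7 dB.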
Equivalence (trade)

Equivalence is a term applied by the Uruguay Round Agreement on the Application of Sanitary and Phytosanitary Measures. WTO Member countries shall accord acceptance to the Sanitary and Phytosanitary (SPS) measures of other countries (even if those measures differ from their own or from those used by other Member countries trading in the same product) if the exporting country demonstrates to the importing country that its measures achieve the importer's appropriate level of sanitary and phytosanitary protection.
315 S.W.3d 685 (2010)

In re Art HARRIS, Relator.

No. 01-09-00771-CV.

Court of Appeals of Texas, Houston (1st Dist.).

July 1, 2010. Rehearing Overruled August 6, 2010.

*687 Amanda L. Bush, Charles L. Babcock, Nancy Hamilton, Jackson Walker, LLP, Houston, TX, for Appellant.

*688 Bonnie Stern, Beverly Hills, CA, Diana E. Marshall, Marshall & Lewis, LLP, Harry Paul Susman, Richard Wolf Hess, Susman Godfrey LLP, Michael Meyer, Neil C. McCabe, The O'Quinn Law Firm, Lyndal Harrington, Houston, TX, Keith Miles Aurzada, Walter A. Herring, Bryan Cave, L.L.P., Dallas, TX, L. Lin Wood, Luke A. Lantta, Bryan Cave, LLP, Atlanta, GA, Teresa Stephens, North Richland Hills, TX, Nelda Turner, Gladewater, TX, for Appellee.

Panel consists of Justices KEYES, ALCALA, and HANKS.

OPINION ON REHEARING

EVELYN V. KEYES, Justice.

On April 22, 2010, a panel of this Court conditionally granted relator Art Harris's petition for writ of mandamus. Real party in interest Virgie Arthur filed a motion for rehearing on May 5, 2010. We deny Arthur's motion for rehearing, but we withdraw our April 22, 2010 opinion and issue this opinion in its place.
This is a petition for writ of mandamus filed by relator, Art Harris, requesting that we direct the trial court to withdraw discovery orders against Art Harris issued on January 27, 2009, May 11, 2009, and August 28, 2009.[1] In five issues, Harris argues that the trial court abused its discretion: (1) in ordering Harris to turn over "electronic media" for forensic examination when there was neither a pending request for production nor any request for production of documents with which he had not complied; (2) in ordering Harris to respond to the Special Master's questions and to assess usage and contents of other electronic media listed in the Special Master's August 17, 2009 email; (3) in refusing to apply Texas Rule of Civil Procedure 193.3 and other discovery procedures on the treatment of privileged documents and creation of privilege logs; (4) by failing to consider Rule 171 in appointing a special master to conduct forensic computer examinations; (5) by appointing a special master to investigate and inquire into patterns of discovery abuse, or, in the alternative, by failing to remove a special master who is acting outside the limitations and specifications stated in the order appointing him, including reading attorney-client communications. Background On April 28, 2008, Virgie Arthur filed the underlying proceeding against Howard K. Stern, Bonnie Stern, Lyndal Harrington, Art Harris, Nelda Turner, Teresa Stephens, Larry Birkhead, Harvey Levin, and TMZ Productions, Inc., alleging that certain syndicated television broadcasts and internet publications defamed her and harmed her efforts to seek custody and visitation of her granddaughter, who is the child of Vickie Lynn Marshall, also known as Anna Nicole Smith. 
Art Harris is a correspondent for Entertainment Tonight, and Arthur alleges in her petition that he participated in defaming Arthur through internet postings, news articles, and an interview with a relative of Vickie Lynn Marshall's that was broadcast on Entertainment Tonight and that Harris conspired with Howard K. Stern, Marshall's former lawyer and companion, and others to defame Arthur. *689 On August 1, 2008, Arthur served Art Harris with her First Request for Production. The requests for production instructed Harris to "[p]roduce documents and tangible things in the forms as they are kept in the ordinary course of business" and to "[p]roduce electronically stored information in native format." The instructions in the request for production further stated that, for any electronically stored information, Harris should: [P]roduce a discovery log that details the type of information, the source of information, the discovery request to which the information corresponds, and the information's electronic ID number. [W]rite all of the electronically stored information to reasonably usable storage media, such as CD, DVD, or flash drive. [I]dentify every source containing potentially responsive information that [Harris] is not searching for production [and,] [f]or any materials that [Harris] claims no longer exist or cannot be located, provide all of the following: (1) A statement identifying the material. (2) A statement of how and when the material passed out of existence of when it could no longer be located. (3) The reasons for the material's nonexistence or loss. (4) The identity, address, and job title of each person having knowledge about the nonexistence or loss of the material. (5) The identity of any other materials evidencing the nonexistence or loss of the material or any facts about the nonexistence or loss. 
Arthur's request for production number one requested that Harris "produce copies of all communications, including but not limited to email and other electronic communications, for the period September 2006 to present," between Harris and 38 listed email addresses. Arthur's request for production number two requested that Harris "[p]roduce all documentation of the identity and/or contact information" for the thirty-eight email addresses listed in request number one, including "website registration information, names, physical addresses, telephone numbers, email addresses, and IP addresses." Request for production number three requested that Harris "[p]roduce copies of all communications, including but not limited to email and other electronic communications, for the period September 2006 to the present, between you and the following or about the following." The request then listed thirty-nine individuals or entities related to Arthur's claims against Harris and the other defendants in the case, including several parties' attorneys. At this time Harris, Bonnie Stern, Lyndal Harrington, and Nelda "Rose" Turner were all represented by attorney William Ogden. On August 28, 2008, Harris served Arthur with his objections and responses to Arthur's document requests. Harris objected to the requested discovery based on a qualified privilege due to his status as a professional journalist, arguing that "[t]he qualified journalist privilege... arises as a matter of common law and constitutional law." Harris "invoke[d] the privilege and request[ed] a protective order against the production of material obtained in newsgathering." 
Harris also objected to the requests as "unreasonably overbroad, prohibitively expensive, and unduly burdensome" under Texas Rule of Civil Procedure 192.4(a) and argued that "the burden and expense of the proposed discovery outweighs its likely benefit, taking into account the needs of the case, the party's resources and the issues at stake in the litigation," citing Texas Rule of Civil Procedure 192.4(b). Finally, he objected that the requests constituted "an unreasonable *690 and unwarranted invasion of personal privacy."

On October 12, 2008, Arthur filed a motion to compel, arguing that Harris and the other defendants represented by Ogden had failed to produce relevant documents. Ogden responded on behalf of all four defendants, and a hearing was held on November 21, 2008. On December 4, 2008, Harris produced more than 300 pages of documents.[2]

On December 8, 2008, Arthur filed her Second Motion to Compel Responses to Requests for Production from Defendant Bonnie Stern. The motion stated that Arthur had served Bonnie Stern with requests for production but had not received any responsive documents from her, and it requested that the trial court compel Bonnie Stern to produce the requested documents and order her to "make her computer hard drive available to [Arthur] for forensic examination and capture of information."

On December 11, 2008, the trial court held another hearing on the discovery disputes. Arthur's counsel represented that Arthur had motions on discovery disputes pending for Bonnie Stern, Art Harris, Lyndal Harrington, and Rose Turner. Ogden appeared as counsel for Harris, Bonnie Stern, Lyndal Harrington, and Rose Turner. Arthur's counsel requested production of Bonnie Stern's hard drive at the hearing, arguing spoliation of the evidence based on some emails that were produced from another source. The following exchange occurred:

[Trial Court]: Okay.
So your request for production does not include her [Bonnie Stern's] hard drive but that's what you're asking for right now. Is that right?

[Arthur]: Yes, Your Honor.

[Ogden]: We have never been served with the request, discovery request to produce her hard drive which we would oppose.

....

[Trial Court]: [P]rocedurally it seems to me a little off track for—

[Arthur]: Your Honor, it's not exactly true that we haven't ever requested access to the hard drive. We contacted Mr. Ogden and the other defendants earlier in the litigation and suggested to them that the way to go, given the experts I've talked to, is to have the Court—apply to the Court for the appointment of an independent computer forensic examiner, a master. Under the rules that can be done and that person, the neutral, can examine the hard drive that needs to be examined....

....

[Trial Court]: Okay. Since we all actually know what he wants, do we really care if he files a formal request or do *691 we just decide to rule on the request and save a hearing?

[Ogden]: I do care because I don't think it's appropriate for anyone to go—

....

[Trial Court]: I understand that you object to producing the hard drive and to having somebody fish through it.

[Ogden]: Correct.

[Trial Court]: But do you object to having the Court rule one way or the other on it today even though there's no actual written request for production for that?

[Ogden]: I do respectfully because I'd like the opportunity to brief that and file a response.

After discussing the relationship between Arthur's claims against Bonnie Stern and her brother, Howard K. Stern, Arthur's counsel stated that he would "file a motion this afternoon for an independent forensic examiner."[3] The trial court asked if the request was "[f]or both computers or just Bonnie's?"
Arthur's counsel responded:

I think the—it would probably be best to proceed step by step; that is, to see if the Court would approve for Bonnie the appointment of the forensic computer examiner. If we need to utilize his or her services more down the road, then the order could be amended to do that. I suspect we will have to because Teresa Stephens, who was very much involved in this conspiracy ... has written the Court and has informed this counsel that all of her e-mails were on a[n] external server and they all disappeared while her computer was locked up in a back vault. That's not to suggest that our discovery dispute with Rose Turner is over because we would at some point probably want to approach the Court and say that she has a lot more that we've requested other than the emails that she's produced that she should produce.

Ogden left the hearing to call Bonnie Stern in order to consult with her regarding production of her hard drive to a forensic examiner. On returning to the hearing, Ogden stated that Bonnie Stern agreed to produce her hard drive to a forensic examiner to "look for and copy any emails or exchanges between the 40 web addresses and email addresses that are listed in the request for production," provided that her bookkeeping business records would not be interfered with. The trial court stated that she could limit the order so that her unrelated business would not be examined, and Ogden replied, "That sounds perfect."

The parties then discussed whether Ogden's other clients, including Harris, would submit to forensic examination of their computers. Ogden stated on the record that he was not able to reach them, and the following exchange occurred:

[Trial Court]: All right. Well, we'll go back then to the notion that I'm granting the motion to compel as to one and three with regard to each of those and you all can either do it the unagreed way or the more or less agreed way after you get out of here.
[Ogden]: I'm not sure what the unagreed way is but I will—

[Trial Court]: The unagreed way is she just has to turn it over—or they, whoever they are, just have to turn it over.

[Ogden]: They—they have—they've done that to the extent they can.

[Trial Court]: No, I'm ordering the whole thing, not just what you chose to produce.

*692 [Ogden]: When you say "turn it over," you mean hand [Arthur's counsel] their computer—

[Trial Court]: No.... I mean that as to Request for Productions [sic] 1 and 3, it's granted. Produce.

[Ogden]: So they can—they can print those off and deliver the printed copies?

[Trial Court]: Right.... Or you can agree after you get out of here and get a chance to talk to them for what we are doing with Ms. Bonnie Stern's computer.

The trial court then informed the parties that it was their responsibility to select an examiner. Over the following weeks, Ogden and Arthur's counsel communicated regarding the selection of the forensic examiner. Ogden suggested Craig Ball, along with several other candidates, and the parties eventually selected Ball.

On December 17, 2008, Arthur's counsel stated in an email that he had been informed that Turner and Harrington also desired to have their computers examined by Craig Ball, and he asked, "What is Art Harris's position?" Ogden's co-counsel responded that Harris had opted to produce non-privileged documents rather than agree to the independent forensic examination.

On January 2, 2009, Ogden and his firm filed an unopposed motion to withdraw as attorney for all four defendants, including Harris. On January 5, 2009, Ogden confirmed in an email that Ball was acceptable as a candidate for independent forensic examiner and stated generally in a letter that "Defendants agree to using Craig Ball as the independent forensic examiner. You may file this agreement with the Court pursuant to [Texas Rule of Civil Procedure] 11."
On January 20, 2009, Arthur's counsel filed the letter from Ogden in the trial court as a Rule 11 agreement. On January 21, 2009, the trial court granted Ogden's motion to withdraw as counsel.

On January 27, 2009, the trial court entered an "Order Compelling Production and Appointing Independent Computer Forensic Examiner," which ordered Harris to "produce the documents requested by [Arthur] in her Requests for Production Nos. 1 and 3 for the period of September 20, 2006 through March 14, 2008" and appointed Craig Ball as a "Special Master under the terms and conditions of the Consulting Agreement attached to this Order ... to conduct an independent forensic examination of the relevant computer hard drives [including Harris's], external hard drives, jump drives, and other such repositories of electronic communications in the possession or control of ... ART HARRIS ... for the purpose of locating documents responsive to Plaintiff's Request for Production."[4]

*693 Attached to this January 27, 2009 order was a consulting agreement, effective December 18, 2008, between Arthur's counsel's firm and Craig Ball. It identified the firm as "the Client" and stated that the client "desires to engage Ball as a court-appointed neutral computer forensics examiner in Cause No. 2008-24181 in the 280th Judicial District Court of Harris County, Texas on the terms and conditions set forth herein." It further specifies that Ball

is an independent contractor who will serve as the duly-appointed neutral agent of the Court and is not an employee or agent of Client. Ball does not serve as legal counsel to those Client serves.... The obligation to compensate and reimburse Ball timely and fully under this Agreement is not contingent upon the outcome of any claim or action, upon collection of monies from third parties or upon the opinions or testimony that Ball may offer.

On February 2, 2009, Harris's new counsel filed a notice of appearance.
On February 3, 2009, Harris filed a motion to clarify the January 27, 2009 order compelling production and appointing the independent computer forensic examiner. A hearing was originally set for February 6, 2009, but was continued until May 8, 2009. At this hearing, Harris's new counsel argued that the trial court had improperly included Harris in the order requiring him to produce his computer and storage devices because Harris had not agreed to surrender his computer. The trial court responded, "It doesn't have to be an agreement. It wasn't an agreement. It was the Court's order and I think I said, All right. *694 We're going to have this stuff from all of these people and I think they were all the subject of the hearing and I'm not quite sure what makes you think they weren't." Harris's counsel also addressed the case In re Weekley Homes,[5] which was then pending before the Texas Supreme Court, and argued that there were no requests for production of documents that they had not complied with. On May 11, 2009, the trial court entered an order denying Harris's motion to clarify. The trial court ordered Harris to "produce the relevant computer hard drives, external hard drives and jump drives ("electronic media") to Special Master Craig Ball in accordance with the Order dated January 27, 2009 and the Consulting Agreement attached thereto, except as where other procedures are specified herein." The trial court ordered Harris to contact the Special Master and to deliver the electronic media to him on or before May 19, 2009 at noon, under the terms specified by the Special Master. Immediately upon completion of producing a forensically sound image of the hard drive or other electronic media, as defined in this order, Special Master Ball shall return the original electronic media and computer, if applicable, to Defendant Art Harris. 
The Special Master shall promptly capture and produce to Defendant Harris a copy of all documents as set out in the order of January 27, 2009. This order gave Harris fourteen days from the date he received the captured documents to produce a privilege log to the Special Master "listing all documents submitted by Special Master Ball to Defendant Art Harris, which Defendant Art Harris is withholding from Plaintiff and the reasons for withholding the documents from production," and it ordered that Ball "produce all documents not listed on the privilege log to [Arthur]" and "maintain for the remainder of this lawsuit the electronic media and documents listed on the privilege log." The trial court ordered that Arthur pay the costs of the Special Master. Harris then turned over electronic media to the special master, including a Dell desktop computer with an 80 GB hard drive, a Dell laptop computer with a 160 GB hard drive, and an external 200 GB hard drive. The special master sent emails to Harris's counsel on August 7 and August 11 raising questions regarding Harris's replacement of his hard drive[6] and requesting that Harris give him more information and produce more electronic devices. The special master also sent other defendants emails regarding Harris's electronic media that were eventually posted on Nelda Turner's blog. On August 14, 2009, Harris's counsel responded by letter to the emails requesting more information and answering the special master's questions regarding the alleged replacement of the hard drive. The special master was not satisfied with this explanation and again requested more information and production of more electronic media from Harris in a series of emails from August 17, 2009 to August 23, 2009. 
On August 23, 2009, Harris filed a motion to reconsider the appointment of the special master and request for protective order and stay of appointment, arguing *695 that the appointment of Craig Ball as a special master was made in violation of the requirements of Texas Rule of Civil Procedure 171, that Harris was not properly a subject of the order compelling production of his hard drive to Ball, and that Ball acted outside the role of special master. Harris maintained that he had already produced more than three million pages of emails and that the trial court should "stay and terminate the role of the Special Master immediately." Harris also argued that Ball "has made sarcastic, editorial, and prejudicial comments about [Harris] regarding his data and his style of writing, as well as disclosing information gleaned from emails, some of which we believe were attorney-client communications."

The trial court held a hearing, and, on August 28, 2009, it signed an order denying Harris's motion to reconsider. The trial court ordered that Harris "shall within 14 days of this Order respond to Special Master Craig Ball's August 17, 2009 email inquiry and evaluate whether the electronic media mentioned in the email contains communications from the relevant time period." The trial court also ordered that Harris "shall not produce the electronic media referred to in Special Master Craig Ball's August 17, 2009 email at this time" but that "nothing shall be deleted or destroyed from Defendant Art Harris's electronic media referred to in Special Master Craig Ball's August 17, 2009 email inquiry." Finally, the trial court ordered that Harris "has until September 28, 2009, to produce a privilege log pertaining to the CD provided to [Harris's] counsel by Special Master Craig Ball on August 28, 2009, and submit it along with the captured documents to the Court for in-camera inspection."

Harris filed this petition for writ of mandamus on September 4, 2009.
We granted his motion for emergency temporary relief suspending the trial court's enforcement of the three disputed orders.

Standard of Review

Mandamus relief is appropriate only if a trial court abuses its discretion and no adequate appellate remedy exists. In re CSX Corp., 124 S.W.3d 149, 151 (Tex.2003). The heavy burden of establishing an abuse of discretion and an inadequate appellate remedy is on the party resisting discovery. Id. A trial court commits a clear abuse of discretion when its action is "so arbitrary and unreasonable as to amount to a clear and prejudicial error of law." Id. (citing CSR Ltd. v. Link, 925 S.W.2d 591, 596 (Tex.1996) (orig. proceeding)).

Orders Compelling Discovery

In his first issue, Harris argues that the trial court abused its discretion by ordering him to turn over "electronic media" for forensic examination when there was neither a pending request for production nor any request for production of documents with which he had not complied, he had filed a motion for a protective order, and no motion to compel production was pending against him. In his third issue, he argues that the trial court erred in refusing to apply Texas Rule of Civil Procedure 193.3 and other discovery procedures on the treatment of privileged documents and creation of privilege logs. We address these issues together.

A. Order to Turn Over Documents Without Pending Request for Production or Motion to Compel

Discovery in this case is governed by Texas Rules of Civil Procedure 192.3, *696 192.4, 193, and 196.4.[7] Rule 192.3 allows a party to "obtain discovery regarding any matter that is not privileged and is relevant to the subject matter of the pending action, whether it relates to the claim or defense of the party seeking discovery or the claim or defense of any other party." TEX.R. CIV. P. 192.3(a).
The comments to Rule 192 state, "While the scope of discovery is quite broad, it is nevertheless confined by the subject matter of the case and reasonable expectations of obtaining information that will aid resolution of the dispute." TEX.R. CIV. P. 192 cmt. 1; see also CSX, 124 S.W.3d at 152 ("Although the scope of discovery is broad, requests must show a reasonable expectation of obtaining information that will aid the dispute's resolution.").

Rule 192.4 imposes limitations on the scope of discovery. TEX.R. CIV. P. 192.4. It states:

The discovery methods permitted by these rules should be limited by the court if it determines, on motion or on its own initiative and on reasonable notice, that:

(a) the discovery sought is unreasonably cumulative or duplicative, or is obtainable from some other source that is more convenient, less burdensome, or less expensive; or

(b) the burden or expense of the proposed discovery outweighs its likely benefit, taking into account the needs of the case, the amount in controversy, the parties' resources, the importance of the issues at stake in the litigation, and the importance of the proposed discovery in resolving the issues.

Id.

Determinations regarding the scope of discovery are largely within the trial court's discretion. In re Colonial Pipeline Co., 968 S.W.2d 938, 941 (Tex.1998) (citing Dillard Dep't Stores, Inc. v. Hall, 909 S.W.2d 491, 492 (Tex.1995) (orig. proceeding)). However, the discovery rules "explicitly encourage trial courts to limit discovery when `the burden or expense of the proposed discovery outweighs its likely benefit, taking into account the needs of the case, the amount in controversy, the parties' resources, the importance of the issues at stake in the litigation, and the importance of the proposed discovery in resolving the issues.'" In re Alford Chevrolet-Geo, 997 S.W.2d 173, 181 (Tex.1999) (orig. proceeding) (quoting TEX.R. CIV. P. 192.4(b)).
"[A] discovery order that compels overly broad discovery `well outside the bounds of proper discovery' is an abuse of discretion for which mandamus is the proper remedy." Dillard, 909 S.W.2d at 492 (quoting Texaco, Inc. v. Sanderson, 898 S.W.2d 813, 815 (Tex.1995) (orig. proceeding)). Rule 193 "imposes a duty upon parties to make a complete response to written discovery based upon all information reasonably available, subject to objections and privileges." TEX.R. CIV. P. 193 cmt. 1. It permits a party to object to discovery as overbroad and to refuse to comply with it entirely. Id. at cmt. 2 (citing Loftin v. Martin, 776 S.W.2d 145 (Tex.1989) (orig. proceeding)). "A central consideration in determining overbreadth is whether the request could have been more narrowly tailored to avoid including tenuous information and still obtain the necessary, pertinent information." CSX, 124 S.W.3d at 153. "[D]iscovery may not be used as a fishing expedition or to impose unreasonable discovery expenses on the opposing party." Alford Chevrolet-Geo, *697 997 S.W.2d at 181 (citing K Mart Corp. v. Sanderson, 937 S.W.2d 429, 431 (Tex.1996) (orig. proceeding) (holding that not only must discovery requests be reasonably tailored to include only matters relevant to case, but discovery requests may not be used as fishing expedition or to impose unreasonable discovery expenses on opposing party)); see also In re Am. Optical Corp., 988 S.W.2d 711, 713 (Tex. 1998) (orig. proceeding).

Here, Arthur requested all correspondence between Harris and a list of 38 other email addresses and people, some of whom were business associates and attorneys for parties to the litigation who were not alleged to have been co-conspirators to defame Arthur. Arthur's requests also delved into information potentially protected by Harris's privilege as a journalist. After Arthur served her discovery requests on him, Harris responded by filing objections based on privilege as a journalist and scope.
Harris objected to the requests as "unreasonably overbroad, prohibitively expensive, and unduly burdensome" under Texas Rule of Civil Procedure 192.4(a), and he argued that "the burden and expense of the proposed discovery outweighs its likely benefit, taking into account the needs of the case, the party's resources and the issues at stake in the litigation," citing Texas Rule of Civil Procedure 192.4(b). Finally, he objected that the requests constituted "an unreasonable and unwarranted invasion of personal privacy." On October 12, 2008, Arthur filed her motion to compel production from Harris, and, following a hearing on November 21, 2008, Harris produced approximately 300 pages of emails and other documents that he determined were responsive to the discovery requests. Arthur made no further motion to compel discovery from Harris, and she never served Harris with any further discovery requests. At the December 11, 2008 discovery hearing held on Arthur's motion to compel discovery from Harris's co-defendant Bonnie Stern, Arthur made only limited references to Harris, and the trial court did not address any arguments or objections raised by Harris. Arthur never established that the scope of discovery requested from Harris was required for her to establish her claims of defamation and conspiracy. Nevertheless, following the hearing on Arthur's motion to compel production from Bonnie Stern, the court ordered Harris to turn over his computer hard drive, external drives, and jump drives to the court-appointed "Special Master" and forensic examiner, Craig Ball. Harris's February 3, 2009 motion to clarify the January 27 order made it clear that Harris wished to reassert his previous objections that the discovery ordered by the trial court was overbroad, prohibitively expensive, and unduly burdensome. The trial court denied the motion. 
Because Arthur did not file a motion to compel further discovery from Harris following the November 21, 2008 hearing, Harris had no opportunity to urge his objections and motion for a protective order prior to being ordered to produce the documents sought by Arthur. We hold that in compelling discovery from Harris without requiring Arthur to identify specific discovery requests with which Harris had not complied and without having before it a motion to compel discovery from Harris, the trial court acted arbitrarily and without considering the discovery rules. See TEX.R. CIV. P. 215.1, 215.2, 215.3; In re Ford Motor Co., 165 S.W.3d 315, 317 (Tex. 2005) (holding that mandamus relief is available when trial court does not follow guiding rules and principles and reaches arbitrary and unreasonable decision). We further hold that the trial court abused its *698 discretion in ordering overbroad discovery and in failing to determine whether the documents sought by Arthur from Harris were privileged, as Harris claimed, or even whether they were relevant or reasonably calculated to lead to the discovery of evidence relevant to Arthur's claims. See TEX.R. CIV. P. 192.3 (stating that "a party may obtain discovery regarding any matter that is not privileged and is relevant to the subject matter of the pending action").

B. Orders to Produce Electronic Media

Also in his first issue, Harris argues that the trial court abused its discretion by ordering him to produce his electronic media for computer forensic examination because Arthur had made no request for the electronic hardware and no showing that the benefits of production outweigh the costs as required by Rule 196.4.
He cites In re Weekley Homes to support his argument.[8] Rule 196.4 provides:

To obtain discovery of data or information that exists in electronic or magnetic form, the requesting party must specifically request production of electronic or magnetic data and specify the form in which the requesting party wants it produced. The responding party must produce the electronic or magnetic data that is responsive to the request and is reasonably available to the responding party in its ordinary course of business. If the responding party cannot—through reasonable efforts—retrieve the data or information requested or produce it in the form requested, the responding party must state an objection complying with these rules. If the court orders the responding party to comply with the request, the court must also order that the requesting party pay the reasonable expenses of any extraordinary steps required to retrieve and produce the information.

TEX.R. CIV. P. 196.4.

In Weekley Homes, the Texas Supreme Court held that Rule 196.4 requires a specific request "to ensure that requests for electronic information are clearly understood and disputes avoided." 295 S.W.3d at 314. It set out the appropriate procedure for requesting electronic information under the rules:

When a specific request for electronic information has been lodged, Rule 196.4 requires the responding party to either produce responsive electronic information that is "reasonably available to the responding party in its ordinary course of business," or object on grounds that the information cannot through reasonable efforts be retrieved or produced in the form requested. Once the responding party raises a Rule 196.4 objection, either party may request a hearing at which the responding party must present evidence to support the objection. TEX.R. CIV. P. 193.4(a).
To determine whether requested information is reasonably available in the ordinary course of business, the trial court may order discovery, such as requiring the responding party to sample or inspect the *699 sources potentially containing information identified as not reasonably available. Id. at 315. If the responding party fails to meet its burden of production, the trial court may order production subject to the discovery limitations imposed by Texas Rule of Civil Procedure 192.4. Id. The supreme court recognized that "[p]roviding access to information by ordering examination of a party's electronic storage device is particularly intrusive and should be generally discouraged, just as permitting open access to a party's file cabinets for general perusal would be." Id. at 317. It stated: As a threshold matter, the requesting party must show that the responding party has somehow defaulted in its obligation to search its records and produce the requested data. The requesting party should also show that the responding party's production "has been inadequate and that a search of the opponent's [electronic storage device] could recover deleted relevant materials." Courts have been reluctant to rely on mere skepticism or bare allegations that the responding party has failed to comply with its discovery duties. Even if the requesting party makes this threshold showing, courts should not permit the requesting party itself to access the opponent's storage device; rather, only a qualified expert should be afforded such access, and only when there is some indication that retrieval of the data sought is feasible. Due to the broad array of electronic information storage methodologies, the requesting party must become knowledgeable about the characteristics of the storage devices sought to be searched in order to demonstrate the feasibility of electronic retrieval in a particular case. 
And consistent with standard prohibitions against "fishing expeditions," a court may not give the expert carte blanche authorization to sort through the responding party's electronic storage device. Instead, courts are advised to impose reasonable limits on production. Finally, federal courts have been more likely to order direct access to a responding party's electronic storage devices when there is some direct relationship between the electronic storage device and the claim itself.

Id. at 317-19 (internal citations omitted).

Weekley Homes further held that even "[i]f the responding party meets its burden by demonstrating that retrieval and production of the requested information would be overly burdensome, the trial court may nevertheless order targeted production upon a showing by the requesting party that the benefits of ordering production outweigh the costs." Id. at 315 (citing TEX.R. CIV. P. 192.4). We first address Arthur's argument that Harris waived any complaints arising under Weekley Homes.

1. Preservation

Arthur argues that Harris never made any arguments based on Weekley Homes before the trial court and, therefore, failed to preserve those complaints under Texas Rule of Appellate Procedure 33. She notes that Weekley Homes was decided on August 28, 2009, after the January 27 and the May 11 orders were signed. However, the transcript of the May 8, 2009 hearing clearly reflects that Harris's counsel did bring the Weekley Homes case to the attention of the trial court and that Harris reasserted similar arguments in his August 23, 2009 motion to reconsider, in which he argued, among other things, that the trial court had not followed the correct procedure and that this case was not appropriate to compel production of the actual hard drives. *700 We conclude that Harris's actions were sufficient to put the trial court on notice regarding his complaints as raised in this petition for writ of mandamus, and this issue was preserved. See TEX.R.APP. P. 33.1.

2. Production of Electronic Discovery

The trial court's January 27, 2009 order required Harris to produce "the relevant computer hard drives, external hard drives, jump drives, and other such repositories of electronic communications in [his] possession or control" for an "independent forensic examination." On February 3, 2009, Harris filed a motion to clarify this order, arguing that he should not have been included in the order to turn over hard drives for forensic examination. After a hearing on May 8, 2009, the trial court denied the motion to clarify. The trial court's May 11, 2009 order again ordered Harris to "produce the relevant computer hard drives, external hard drives and jump drives."

Harris argues that the trial court erred in failing to follow the provisions of Rule 196.4, as described in Weekley Homes, in compelling him to produce his hard drives in the January 27 and May 11 orders. We agree.

Arthur's original requests for production specifically requested that Harris produce emails and other electronic communications in their native format.[9] See TEX.R. CIV. P. 196.4. After Harris filed objections, arguing, in part, that the requests were prohibitively expensive and unduly burdensome, Arthur filed a motion to compel Harris to comply with the discovery requests. In response, Harris produced 300 documents that were "responsive to the request and [were] reasonably available to [him as] the responding party in [his] ordinary course of business." See id. Arthur did not file any other motions to compel discovery from Harris, nor did Arthur ever serve Harris with a discovery request for his hard drives. Thus, Arthur failed to follow the first step required by Rule 196.4 and Weekley Homes by failing to make a specific request for production of the hard drives themselves. See id.
("To obtain discovery of data or information that exists in electronic or magnetic form, the requesting party must specifically request production of electronic or magnetic data and specify the form in which the requesting party wants it produced."); Weekley Homes, 295 S.W.3d at 314 (holding that specific request is required "to ensure that requests for electronic information are clearly understood and disputes avoided"). The trial court also failed to follow any of the other provisions of Rule 196.4 as described in Weekley Homes. Nor, as stated above, did the trial court ever address Harris's objections to discovery. In fact, the record of the December 11, 2008 hearing does not contain any argument by Arthur's counsel that Harris's production as of the date of that hearing had been insufficient. Rather, Bonnie Stern, and not Harris, was the subject of the December 11 hearing, and, thus, here there is less than the assertion of "mere skepticism or bare allegations" that Weekley Homes had deemed insufficient to compel discovery of a hard drive or other electronic storage device. See 295 S.W.3d at 317-18, 320 (holding that "conclusory statements that the deleted emails it seeks 'must exist' and that deleted emails are in some cases recoverable is not enough to justify the highly intrusive method of discovery the trial court ordered, which afforded *701 the forensic experts 'complete access to all data stored on [the Employees'] computers'"). Following the trial court's January 27, 2009 order appointing a special master and forensic examination and requiring Harris to produce his hard drives and jump drives, Harris filed a motion to clarify the order, arguing that he was improperly ordered to produce the drives.
In response, the trial court held a hearing on May 8, 2009 on Harris's motion to clarify, but it did not require Arthur to make any showing that Harris "has somehow defaulted in [his] obligation to search [his] records and produce the requested data" or that Harris's production had been "inadequate and that a search of [his electronic storage devices] could recover deleted relevant materials." See id. at 317. Nor did Arthur offer any evidence supporting her effort to obtain the hard drives or any evidence regarding which, if any, of Harris's electronic storage devices could be expected to contain discoverable documents at the hearing on Harris's motion to clarify the January 27 order. See id. Weekley Homes also held that direct access to a responding party's electronic storage devices is more likely to be appropriate "when there is some direct relationship between the electronic storage device and the claim itself." Id. at 317-19 (recognizing that "ordering examination of a party's electronic storage device is particularly intrusive and should be generally discouraged, just as permitting open access to a party's file cabinets for general perusal would be" and citing cases where employers sued former employees for misuse of company computers as instances where close relationship between claims and defendant's computer equipment justified production of computers themselves). Arthur made no such showing either at the December 11, 2008 hearing or at the May 8, 2009 hearing or in any motion to compel. 
Moreover, even if we could conclude that the record supported a finding by the trial court that Harris's electronic storage devices could be expected to contain discoverable documents and that direct access to those devices was justified by some direct relationship between the storage devices and Arthur's claims, Arthur also failed to demonstrate that the "particularities of [the] electronic information storage methodology [would] allow retrieval of emails that have been deleted or overwritten, and what that retrieval [would] entail." Id. at 320. In sum, the record does not contain any evidence sufficient to satisfy the stringent standard for compelling production of Harris's electronic storage devices. Finally, the trial court failed to consider whether the benefits of production to Arthur outweighed the burdens of the appointment of a special master and forensics expert to obtain the information sought when ordering the production of Harris's computer hard drive, external drives, and jump drives to the court-appointed Special Master. See TEX.R. CIV. P. 192.4 (requiring trial courts to weigh benefits of production against burdens imposed when requested information is not reasonably available in ordinary course of business). Thus, even if Arthur had shown that the documents sought from Harris were not privileged, were relevant to her claims against Harris, and could not have been reasonably obtained other than by ordering him to turn over his hard drives, and that there was a direct relationship between the hard drives and Arthur's claims, which she has not, she still would not be entitled to discovery of Harris's hard drives. See id. 
We conclude that the trial court abused its discretion not only by compelling production of overly broad discovery without *702 addressing Harris's objections and without a motion to compel discovery from Harris before it, but also by issuing its even more invasive order that Harris produce his hard drives and by failing to require Arthur to make any showing that the benefit of the discovery she sought outweighed the burden and expense to Harris. Thus, we hold that the trial court abused its discretion by issuing the January 27, 2009 order compelling Harris to produce documents in response to Arthur's requests for production and to produce his hard drives and by issuing its May 11, 2009 order denying Harris's motion to clarify. See Alford Chevrolet-Geo, 997 S.W.2d at 181 (holding that although trial court has broad discretion to define scope of discovery, it can abuse its discretion by acting unreasonably). We sustain Harris's first issue.

C. Refusal to Apply Rule 193.3 on Treatment of Privileged Documents

In his third issue, Harris argues that the trial court abused its discretion by refusing to recognize the discovery procedures of Texas Rule of Civil Procedure 193.3 in the treatment of privileged documents and the creation of privilege logs. Rule 193.3 provides, "A party may preserve a privilege from written discovery in accordance with this subdivision." TEX.R. CIV. P. 193.3. It further provides that a party claiming that "material or information responsive to written discovery is privileged may withhold the privileged material or information from the response" and must provide a withholding statement describing the discovery being withheld, and it provides that the requesting party may then request that the "withholding party identify the information and material withheld." Id. Because we have already determined that the trial court erred in the ways set forth above, this issue is moot.
We overrule Harris's third issue.

Appointment of Special Master

In his fourth and fifth issues, Harris argues that the trial court abused its discretion in appointing Craig Ball as a special master to conduct a forensic examination of Harris's computers without following Texas Rule of Civil Procedure 171. Arthur responds that Harris consented to the appointment of the special master, citing a series of emails and other negotiations between the parties that culminated in the filing of the Rule 11 agreement on January 20, 2009 and the trial court's order of January 27, 2009. Arthur also argues that Harris's objection to the special master is barred by laches because the special master was appointed on January 27, 2009, and Harris cooperated with the special master beginning May 14, 2009, but did not seek mandamus relief until September 4, 2009.

A. Consent & Laches

Parties may consent to the appointment of a special master. See Simpson v. Canales, 806 S.W.2d 802, 811 (Tex.1991) (orig. proceeding). However, the trial court's statements at the May 8, 2009 hearing that "[i]t wasn't an agreement" and that the trial court acted on her own authority in appointing Ball as special master defeat Arthur's argument that the parties consented to Ball's appointment as a special master. Moreover, the January 20, 2009 Rule 11 Agreement between Ogden and Arthur's counsel was an agreement to use Craig Ball as "the independent forensic examiner" as ordered by the trial court. It was not an agreement that a special master be appointed. And it was both executed and filed after Ogden had withdrawn as Harris's counsel. *703 Arthur also argues that delay alone is a valid ground for denying Harris's request for mandamus relief and that Harris fatally delayed in asserting his objections to the appointment of a special master. See In re Xeller, 6 S.W.3d 618, 624 (Tex.App.-Houston [14th Dist.] 1999, orig.
proceeding) ("[J]udicial economy would have been better served if relators' [sic] had sought mandamus relief immediately after the appointment of the master."); Owens-Corning Fiberglas Corp. v. Caldwell, 830 S.W.2d 622, 625 (Tex.App.-Houston [1st Dist.] 1991, orig. proceeding) (holding that party may object to appointment of master either before participating in any proceeding before the master, or before parties, master, and trial court have acted in reliance on appointment). The record, however, shows that Harris diligently sought to enforce his rights by filing a motion to clarify the January 27 order appointing Craig Ball as special master within days after it was signed by the trial court. That motion was not heard until May 8. In the hearing, Harris argued that the trial court had improperly included him in the order requiring him and several other defendants to produce their computers and electronic storage devices, and he raised the Weekley Homes case. Harris thus objected to the appointment of the special master before he complied with the May 11, 2009 order compelling him to produce three hard drives to the special master. Therefore, he did make a timely objection. See Caldwell, 830 S.W.2d at 625-26 (holding that party timely objected to appointment of special master and did not waive its right to object when it objected to appointment several days before it participated in proceedings before special master). Furthermore, the argument of delay does not prevent Harris from asserting that the trial court erred in failing to remove the special master on the grounds that the special master has, since the production of the original three hard drives, behaved inappropriately and exceeded the scope of his authority or that he should not be compelled to produce any further electronic media to the special master. We conclude that Harris has not waived his fourth and fifth issues. B.
Trial Court's Appointment of Craig Ball

We now consider the authority of the trial court to compel Harris to submit matters to a special master. In his fourth issue, Harris argues that this was not an "exceptional case" and that there was no good cause for appointment of a master, as required by Rule 171, governing such appointments. Harris further argues that Craig Ball cannot serve as a neutral special master because he is under contract with, paid for, and indemnified by Arthur under the consulting agreement attached to the trial court's January 27 order. In his fifth issue, Harris argues that the trial court abused its discretion in appointing the special master to read attorney-client communications and to investigate and inquire into perceived discovery abuses. In the alternative, he argues that the trial court erred in failing to remove the special master for acting outside the limitations and specifications stated in the referral order by placing himself in an adversarial position, by investigating perceived discovery abuses, by exhibiting bias and lack of impartiality, and by making highly prejudicial statements. Much of the confusion on this issue stems from the trial court's conflation of the roles of a forensic examiner and a special master. Texas Rule of Civil Procedure 171 is the exclusive authority for the appointment of masters in Texas state *704 courts. Simpson, 806 S.W.2d at 810. Rule 171 provides, in part: The court may, in exceptional cases, for good cause appoint a master in chancery, who shall be a citizen of this State, and not an attorney for either party to the action, nor related to either party, who shall perform all of the duties required of him by the court, and shall be under orders of the court, and have such power as the master of chancery has in a court of equity. TEX.R. CIV. P. 171. Rule 171 also provides that "[t]he court shall award reasonable compensation to such master to be taxed as costs of suit."
Id.; TransAmerican Natural Gas Corp. v. Mancias, 877 S.W.2d 840, 844 (Tex.App.-Corpus Christi 1994, orig. proceeding). A special master "has and shall exercise the power to regulate all proceedings in every hearing before him and to do all acts and take all measures necessary or proper for the efficient performance of his duties" as specified in the trial court order. TEX.R. CIV. P. 171. "[A]ppointment of a master lies within the sound discretion of the trial court and should not be reversed except for a clear abuse of that discretion." Simpson, 806 S.W.2d at 811. However, it is "improper for ... an order [appointing a special master] to cast the master in the role of advocate rather than merely referee in the underlying proceeding." TransAmerican, 877 S.W.2d at 843 (citing Caldwell, 830 S.W.2d at 626 (noting impropriety of allowing master to require production of evidence regardless of whether opposing party has requested it)). A forensic examiner in the context of electronic discovery has a much different role. Although we have found no rule or case that specifically defines "forensic examiner," a forensic examiner as contemplated in Weekley Homes is a computer expert whose sole purpose is to create forensic images of a particular electronic storage device and then to search the images for specified documents using a predesignated list of search terms. See Weekley Homes, 295 S.W.3d at 313. A forensic expert as contemplated by Rule 196.4 and Weekley Homes is not given any authority to conduct hearings, to make recommendations regarding what evidence should be produced, or to require the production of any particular storage device or other item of evidence. In contrast to Rule 171's provision that the costs of a special master be taxed as a cost of suit, Rule 196.4 contemplates that "the requesting party pay the reasonable expenses of any extraordinary steps required to retrieve and produce the information." TEX.R. CIV. P. 196.4. 
In accordance with this rule, the requesting party in Weekley Homes clearly contemplated paying the expenses of its forensic experts if it had been permitted access to Weekley Homes' hard drives. See Weekley Homes, 295 S.W.3d at 313. Here, Arthur sought appointment of Craig Ball as an independent forensic examiner and entered a Rule 11 agreement with Ogden that Ball be the independent forensic examiner appointed. The January 27, 2009 order, however, expressly appointed "Craig Ball of Austin, Texas as a Special Master, under the terms and conditions contained in the Consulting Agreement attached to this order and incorporated herein," an agreement which provided that Ball be hired by Arthur's counsel as an independent forensic examiner. Ball's role in the litigation is in some ways similar to that of a forensic examiner as contemplated by Rule 196.4 and Weekley Homes. Ball is paid by Arthur's counsel, and his role as envisioned in the trial court's January 27, 2009 order at least partially conforms to the role of a forensic *705 expert employed to create images of particular electronic storage devices and then search for specified documents using a predesignated list of search terms at the expense of the requesting party. We have already determined, however, that the discovery order to produce the hard drives was an abuse of discretion. Therefore, the question of Ball's ability to serve as a forensic expert to examine the hard drives on Arthur's behalf is moot. However, the January 27 order also specifically conferred on Ball a number of powers extended to a special master not "related to either party" and appointed by the court "in exceptional cases" under Rule 171. See TEX.R. CIV. P. 171. The trial court referred to Ball as a special master and, from the time of his appointment, treated him as more than a forensic expert, allowing him to contact the parties and to make recommendations regarding the production of particular items. 
Thus, we next determine whether the trial court erred in appointing Ball as a special master. Rule 171 permits a trial court to appoint a special master "in exceptional cases, for good cause." TEX.R. CIV. P. 171. While the "'exceptional cases/good cause' criterion of Rule 171 is not susceptible of precise definition," the supreme court has held that "this requirement cannot be met merely by showing that a case is complicated or time-consuming, or that the court is busy." Simpson, 806 S.W.2d at 811. However, courts have found sufficient justification for the appointment of a master to supervise "discovery questions which require extensive examination of highly technical and complex documents by a person having both a technical and a legal background." TransAmerican, 877 S.W.2d at 843 (holding that "the technical nature of the present case and the potential help which may be provided to the trial court by a special master with geological training and expertise constitutes a sufficiently exceptional condition to justify the present appointment"); see also Hourani v. Katzen, 305 S.W.3d 239, 247-48 (Tex. App.-Houston [1st Dist.] 2009, pet. denied) ("The highly technical nature of the case, which involves the feasibility of constructing a driveway or bridge along the edge of a lake without damaging the lake, and the assistance which may be provided to the trial court by a special master with engineering training and expertise constitutes a sufficiently exceptional condition to justify the present appointment."). Here, the case is not of a "highly technical nature." The fact that production of some of the discovery sought by Arthur might require expert forensic examination of electronic media is not sufficient to show that this is an "exceptional case" requiring expertise in computer forensics. Electronic discovery is a common component of modern litigation, and its mere presence alone does not constitute a showing of good cause for appointing a special master.
Neither party has argued that some specialized knowledge would be necessary to interpret any of the documents produced in this case. Arthur also argues that appointment of a special master was necessary in this case because of her allegations that Harris did not produce all of the emails and other electronic documents in his possession that are responsive to her requests for production. However, Arthur made no showing to the trial court that Harris had failed to produce requested documents within the proper scope of discovery. As we have already discussed at length, Arthur did not even file a motion to compel discovery from Harris objecting to his production of documents before or after the December 11 hearing and filed no additional requests for production; nor did the trial court hear *706 or rule on Harris's objections to Arthur's requests. Furthermore, were Arthur to show that this was an exceptional case and that examination of Harris's hard drives was necessary for her to prove her case and not unduly burdensome to Harris, a forensic examination could be performed by a forensic examiner without the power and authority of a special master. We conclude that the record reflects that this case does not meet the "exceptional case/good cause criterion of Rule 171." Therefore, we hold that the trial court abused its discretion in appointing Ball as a special master. See Simpson, 806 S.W.2d at 811. To the extent that the trial court's appointment of Ball was as a forensic examiner instead of as a master, we hold that the trial court abused its discretion by failing to comply with Weekley Homes, as we have explained above. We sustain Harris's fourth and fifth issues. In his second issue, Harris argues that the trial court abused its discretion in issuing the August 28 order compelling him to respond to the special master's August 17, 2009 email. 
Because we have already determined that the court's appointment of Craig Ball as a forensic examiner and special master was an abuse of discretion, this issue is moot. We overrule Harris's second issue.

Conclusion

We conditionally grant the petition for writ of mandamus and direct the trial court to withdraw its discovery orders against Art Harris issued on January 27, 2009, May 11, 2009, and August 28, 2009. Any pending motions are dismissed as moot.

NOTES

[1] The underlying lawsuit is Virgie Arthur v. Howard K. Stern, Bonnie Stern, Lyndal Harrington, Art Harris, Nelda Turner, Teresa Stephens, Larry Birkhead, Harvy Levin, and TMZ Productions, Inc., No. 2008-24181, filed April 28, 2008 in the 280th District Court of Harris County Texas, Honorable Tony Lindsay presiding. After this petition for writ of mandamus was filed, Arthur joined CBS as a defendant.

[2] It is unclear when the trial court ruled on Arthur's October 12, 2008 motion. There was no written order immediately following the November 21, 2008 hearing, nor was there a transcript of the hearing. However, Arthur's second motion to compel discovery from Bonnie Stern states that the trial court granted the motion to compel on November 21, 2008 and that Arthur subsequently received a packet containing some documents from Turner and Harris. Harris represents that the second motion to compel discovery from Bonnie Stern was the only discovery motion pending at the time the trial court issued its January 27, 2009 order compelling discovery; however, at the December 11, 2008 hearing, Arthur's counsel represented that discovery requests were still pending against Harris and other defendants. During oral argument, counsel for Arthur acknowledged that there was no motion to compel production pending against Harris at the time of the trial court's January 27, 2009 order.

[3] The record does not contain a motion for appointment of a forensic examiner.
[4] The order stated:

On December 11, 2008, this Court heard Plaintiff's Motion to Compel Responses to Requests for Production from Defendants BONNIE STERN, ART HARRIS, AND LYNDAL HARRINGTON and determined that the motion should be granted in part and denied in part. It is therefore ORDERED, ADJUDGED and DECREED that:

(1) Defendants BONNIE STERN, ART HARRIS, AND LYNDAL HARRINGTON shall produce the documents requested by plaintiff in her Requests for Production Nos. 1 and 3 for the period of September 20, 2006 through March 14, 2008.

(2) At the present time, Defendants BONNIE STERN, ART HARRIS, AND LYNDAL HARRINGTON are not compelled to produce the documents requested by Plaintiff in her Request for Production No. 2.

(3) To facilitate production of these documents, the Court hereby appoints Craig Ball, of Austin, Texas, as a Special Master, under the terms and conditions contained in the Consulting Agreement attached to this Order and incorporated herein as if fully set forth in this Order, to conduct an independent forensic examination of the relevant computer hard drives, external hard drives, jump drives, and other such repositories of electronic communications in the possession or control of Defendants BONNIE STERN, ART HARRIS, AND LYNDAL HARRINGTON, for the purpose of locating documents responsive to Plaintiff's Request for Production. The Special Master shall have discretion to employ or to modify search terms, and he is specifically instructed to:

a. exclude from production email communications between Stern family members that are of a purely personal nature;

b. exclude from production any files or communications relating solely to Ms. Stern's accounting business, or other unrelated businesses of Ms. Stern;

c. capture electronic communications, including but not limited to e-mails, to or from DEFENDANT HOWARD K. STERN'S attorneys, which consist of the law firm of Bryan Cave/Powell Goldstein and its employees, former employees and partners, including but not limited to L. Lin Wood, Nicole J. Wade, John C. Patton, Luke Lantta, Ben Erwin, and B. Lyle, and the Law Offices of Eric Sauerberg, and its employees and partners including but not limited to M. Krista Barth, and segregate them in order for the law firm to review and assert any claim of privilege prior to production;

d. capture all remaining electronic communications, including but not limited to emails to or from the persons, entities and email addresses listed in parts 1 and 3 of Plaintiff's Requests for Production, and submit them to Defendants BONNIE STERN, ART HARRIS, AND LYNDAL HARRINGTON for privilege review prior to production;

(4) Within 14 days after receipt of the captured documents from Special Master, the law firm of Bryan Cave/Powell Goldstein, and Defendants BONNIE STERN, ART HARRIS, AND LYNDAL HARRINGTON shall produce a privilege log and submit it, along with the captured documents, to the Court for in camera inspection.

(5) To facilitate the work of the Special Master, this Court ORDERS Defendants BONNIE STERN, ART HARRIS, AND LYNDAL HARRINGTON, at their own expense, TO CONTACT THE SPECIAL MASTER AND TO DELIVER TO HIM THE RELEVANT MEDIA within 10 days of the signing of this order, under terms to be specified by him;

(6) Other than as stated in part (5) above, the costs of the Special Master shall be carried by the Plaintiff, until such time as the Court may determine otherwise.

[5] In re Weekley Homes, 295 S.W.3d 309 (Tex. 2009) (orig. proceeding).

[6] It appears that the forensic examination shows that Harris had his hard drive replaced on December 16, 2008 or that there was some other evidence that he had deleted a large number of files.

[7] Rule of Civil Procedure 196.4 governs requests for production of "data or information that exists in electronic or magnetic form." TEX.R. CIV. P. 196.4.
It is addressed in the next section.

[8] Arthur argues that Weekley Homes is distinguishable from the current case because there, the supreme court addressed a trial court's order to give the opposing party's forensic examiner direct access to the hard drives, while this case involves production to a neutral party. However, Harris argues that Ball was, in fact, not a neutral party, and there is some confusion regarding Ball's role in this litigation, which we address later in this opinion. For purposes of our review of the trial court's order compelling Harris to produce his hard drives, it appears that Ball was in fact a forensic expert hired by and paid by Arthur's counsel, which is exactly the situation addressed in Weekley Homes. See 295 S.W.3d at 313. We conclude that Weekley Homes does apply here.

[9] Harris argues that, while Arthur's requests for production did ask for emails, the requests did not specify the form in which the requesting party wanted the emails produced. This argument is not supported by the record. The instructions in the requests for production stated the form in which electronic files should be produced.
How to help kids with scary and tragic news?

On February 14, 2018, a gunman opened fire at Marjory Stoneman Douglas High School in Parkland, Florida, killing seventeen students and staff members and injuring seventeen others. On the sad one-year anniversary of this school shooting, Screenagers shares these tips for helping kids with scary and tragic news.

All of us here at Screenagers have teens and tweens. Lisa, my co-producer of Screenagers, has a son in college in Pittsburgh, so she got alerts from him right away about the shooting. When her daughter woke up, Lisa immediately told her about the incident. Lisa said, “I wanted to tell her about what was going on before she saw it on social media or got a text from one of her friends concerned for her brother.” The digital age makes it key that we get in front of these conversations quickly.

The president of the American Psychological Association (APA), Jessica Henderson Daniel, Ph.D., said in response to the Pittsburgh shootings that “Hate crimes are the most extreme expression of prejudice. Compared to other crimes, hate crimes have a more destructive impact on victims and communities because they target core aspects of our identity as human beings.”

I find the American Psychological Association’s guide to talking to your kids about difficult news to be helpful. They, as do I, encourage parents to share their feelings with their children. It is not about burdening them with one’s anxiety or sadness or other emotions. It is about naming feelings and discussing them. This approach has been shown to be highly effective in helping youth develop greater emotional intelligence. The APA says, “It is OK to acknowledge your feelings with your children. They see you are human. They also get a chance to see that even though you are upset, you can pull yourself together and continue on.”

Psychologists generally say that small children, less than 5 years old, do not need to be told about these types of events.
But, young kids now have such easy access to information on devices so we need to be mindful that they might be seeing much more than we know. For older kids, the APA recommends: “Tell the truth. Lay out the facts at a level they can understand. You do not need to give graphic details.”

I believe it is important that we all make sure kids know how rare these tragedies are. In homes where news is on a lot, or where news alerts are readily visible on screens, youth get an inaccurate perspective of the frequency with which tragedies occur. Yes, bad things happen, but the key is letting our children know that for every negative thing, there are thousands of positive things happening. And, be sure at the end of the conversations that you reassure them that they are safe and that you are there for them to talk further.

For this TTT, let’s talk about difficult news. Here are some questions you may find useful:

What feelings are coming up for all of us in this time of tragedy?

When you feel scared or concerned about news, how do you process those emotions? Talk to friends? Write posts? Write in a journal? Talk to your family?
UPDATE 9/16: Wilson Bauer was located on Sept. 15, according to police. No additional information was available.  CHICAGO — Police are searching for award-winning Chicago chef Wilson Bauer, who has gone missing. Bauer, 34, was last seen Thursday in the 800 block of North Hermitage Avenue in Ukrainian Village wearing a black T-shirt, jeans and “possibly greyish brown” boots, according to a Chicago Police community alert. Police describe Bauer as 6-foot-1, weighing 210 pounds with a full beard and tattoos on both arms and on his calf. He is bald, wears black-rimmed glasses with taped corners, carries a black backpack and “frequents” the Pilsen neighborhood, according to the alert. Wilson Bauer, 34, was last seen in Ukrainian Village. Bauer has held chef gigs at prominent restaurants Schwa, Longman & Eagle and Elizabeth. While at Schwa, Bauer won the Jean Banchet award for best chef de cuisine in 2017. Earlier this year, Bauer launched his own underground supper club called Chicago, Washington. Prior to that, he was slated to become head chef at Bar Biscay but he was ultimately replaced by another chef before the restaurant opened, according to the Tribune. Anyone with information about Bauer’s whereabouts is urged to call detectives at 312-747-8380.
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#include "table/format.h"

#include "leveldb/env.h"
#include "port/port.h"
#include "table/block.h"
#include "util/coding.h"
#include "util/crc32c.h"

namespace leveldb {

void BlockHandle::EncodeTo(std::string* dst) const {
  // Sanity check that all fields have been set
  assert(offset_ != ~static_cast<uint64_t>(0));
  assert(size_ != ~static_cast<uint64_t>(0));
  PutVarint64(dst, offset_);
  PutVarint64(dst, size_);
}

Status BlockHandle::DecodeFrom(Slice* input) {
  if (GetVarint64(input, &offset_) && GetVarint64(input, &size_)) {
    return Status::OK();
  } else {
    return Status::Corruption("bad block handle");
  }
}

void Footer::EncodeTo(std::string* dst) const {
#ifndef NDEBUG
  const size_t original_size = dst->size();
#endif
  metaindex_handle_.EncodeTo(dst);
  index_handle_.EncodeTo(dst);
  dst->resize(2 * BlockHandle::kMaxEncodedLength);  // Padding
  PutFixed32(dst, static_cast<uint32_t>(kTableMagicNumber & 0xffffffffu));
  PutFixed32(dst, static_cast<uint32_t>(kTableMagicNumber >> 32));
  assert(dst->size() == original_size + kEncodedLength);
}

Status Footer::DecodeFrom(Slice* input) {
  const char* magic_ptr = input->data() + kEncodedLength - 8;
  const uint32_t magic_lo = DecodeFixed32(magic_ptr);
  const uint32_t magic_hi = DecodeFixed32(magic_ptr + 4);
  const uint64_t magic = ((static_cast<uint64_t>(magic_hi) << 32) |
                          (static_cast<uint64_t>(magic_lo)));
  if (magic != kTableMagicNumber) {
    return Status::InvalidArgument("not an sstable (bad magic number)");
  }

  Status result = metaindex_handle_.DecodeFrom(input);
  if (result.ok()) {
    result = index_handle_.DecodeFrom(input);
  }
  if (result.ok()) {
    // We skip over any leftover data (just padding for now) in "input"
    const char* end = magic_ptr + 8;
    *input = Slice(end, input->data() + input->size() - end);
  }
  return result;
}

Status ReadBlock(RandomAccessFile* file, const ReadOptions& options,
                 const BlockHandle& handle, BlockContents* result) {
  result->data = Slice();
  result->cachable = false;
  result->heap_allocated = false;

  // Read the block contents as well as the type/crc footer.
  // See table_builder.cc for the code that built this structure.
  size_t n = static_cast<size_t>(handle.size());
  char* buf = new char[n + kBlockTrailerSize];
  Slice contents;
  Status s = file->Read(handle.offset(), n + kBlockTrailerSize, &contents, buf);
  if (!s.ok()) {
    delete[] buf;
    return s;
  }
  if (contents.size() != n + kBlockTrailerSize) {
    delete[] buf;
    return Status::Corruption("truncated block read");
  }

  // Check the crc of the type and the block contents
  const char* data = contents.data();  // Pointer to where Read put the data
  if (options.verify_checksums) {
    const uint32_t crc = crc32c::Unmask(DecodeFixed32(data + n + 1));
    const uint32_t actual = crc32c::Value(data, n + 1);
    if (actual != crc) {
      delete[] buf;
      s = Status::Corruption("block checksum mismatch");
      return s;
    }
  }

  switch (data[n]) {
    case kNoCompression:
      if (data != buf) {
        // File implementation gave us pointer to some other data.
        // Use it directly under the assumption that it will be live
        // while the file is open.
        delete[] buf;
        result->data = Slice(data, n);
        result->heap_allocated = false;
        result->cachable = false;  // Do not double-cache
      } else {
        result->data = Slice(buf, n);
        result->heap_allocated = true;
        result->cachable = true;
      }

      // Ok
      break;
    case kSnappyCompression: {
      size_t ulength = 0;
      if (!port::Snappy_GetUncompressedLength(data, n, &ulength)) {
        delete[] buf;
        return Status::Corruption("corrupted compressed block contents");
      }
      char* ubuf = new char[ulength];
      if (!port::Snappy_Uncompress(data, n, ubuf)) {
        delete[] buf;
        delete[] ubuf;
        return Status::Corruption("corrupted compressed block contents");
      }
      delete[] buf;
      result->data = Slice(ubuf, ulength);
      result->heap_allocated = true;
      result->cachable = true;
      break;
    }
    default:
      delete[] buf;
      return Status::Corruption("bad block type");
  }

  return Status::OK();
}

}  // namespace leveldb
National Bolshevism National Bolshevism may be defined as a socialist movement that grounds itself, not in the internationalist, materialist atheism of Marx, but rather in the traditional culture of the West. The call for the separation of socialism from its Marxist domination was most powerfully made by Oswald Spengler, and he remains today the most important thinker of the National Bolshevik tendency. The dominance of Marxist thinking among members of the far left, as well as the acceptance of Marxism as being synonymous with socialism on the part of rightists, has obscured the fact that the genuine interests of the workers, and thus of socialists, might not be synonymous with internationalism, atheism, and social liberalism. In brief, a National Bolshevik program may be summarized as: Dirigism, Autarky, Socialism! Down with Internationalism! 21 January, 2008 "Let's be clear: we have lost this war. We have lost because the initial, central goals of the invasion have all failed: we have not secured WMDs from terrorists because those WMDs did not exist. We have not stymied Islamist terror - at best we have finally stymied some of the terror we helped create. We have not constructed a democratic model for the Middle East - we have instead destroyed a totalitarian government and a phony country, only to create a permanently unstable, fractious, chaotic failed state, where the mere avoidance of genocide is a cause for celebration. We have, moreover, helped solder a new truth in the Arab mind: that democracy means chaos, anarchy, mass-murder, national disintegration and sectarian warfare. And we have also empowered the Iranian regime and made a wider Sunni-Shiite regional war more likely than it was in 2003. Apart from that, Mr Bush, how did you enjoy your presidency?" - Andrew Sullivan 17 January, 2008 "Usury will destroy our society, but meanwhile there is no escape from it.
We are coming near the end of its maleficent action, not through awakening to its evils but because it is reaching the end of its resources ... The modern world is organized on the principle that money of its nature breeds money. A sum of money lent has, according to our present scheme, a natural right to interest. That principle is false in economics as in morals. It ruined Rome, and it is bringing us to our end." - Hilaire Belloc, Usury, 1931
A tree view is a popular and useful graphical method for displaying on a display screen the hierarchical organization of objects, such as files, in computer memory. A tree view takes its name from an analogy to trees in nature, which have a hierarchical organization of branches and leaves. For example, a leaf belongs to a small branch, which further belongs to a large branch, and all branches of the tree have a common starting point at the root. Analogously, objects in computer memory can have a hierarchical organization, in that an object can be contained in a sub-directory, which can be further contained in another directory, and so on. Thus, all of computer memory can be divided up into sub-directories and directories that ultimately are all contained in a root directory. The structure of the displayed tree view shows both nesting of objects and where the objects belong within the nested hierarchical organization. Unfortunately, such tree views can become cumbersome when the list of objects is large. It might take several scrolling operations by the user to page through a large tree before finding a desired object, and the user can easily become lost. Thus, one of the biggest advantages of a tree display--that the user has an orientation to where the objects are located--can become confusing when there are too many objects. Prior file managers attempted to address this problem by providing the function of taking infrequently used objects at a displayed, expanded level of the tree and un-expanding (collapsing) them into a visual set, leaving the visual set in its respective position. This allowed the user to see more of the important information, without totally removing the objects, thus preserving the advantages of a tree view. In prior file managers, an entire branch of the tree is shown in its expanded form or hidden in its collapsed form. 
The branch of the tree is often a directory or subdirectory, and the name that the file manager typically associates with the collapsed form is the name of the directory or subdirectory that contains the collapsed (unexpanded) objects. Prior file managers suffered from the problem that they could expand and collapse only entire branches. Although this works well when a branch has a small number of objects, as the number of objects in a branch increases, it becomes increasingly difficult for the user to find a desired object and maintain an orientation within the tree.
Observation of a chloride-dependent intermediate during catalysis by angiotensin converting enzyme using radiationless energy transfer. Stopped-flow radiationless energy-transfer kinetics have been used to examine the effects of chloride on the hydrolysis of Dns-Lys-Phe-Ala-Arg by angiotensin converting enzyme. The kinetic constants for hydrolysis at pH 7.5 and 22 degrees C in the presence of 300 mM sodium chloride were KM = 28 microM and kcat = 110 s-1, and in its absence, KM = 240 microM and kcat = 68 s-1. The apparent binding constant for chloride was 4 mM, and the extent of chloride activation in terms of kcat/KM was 14-fold. The effects of chloride on the pre-steady-state were examined at 2 degrees C. In the presence of chloride, two distinct enzyme-substrate complexes were observed, suggesting multiple steps in substrate binding. The initial complex was formed during the mixing period (kobsd greater than 200 s-1) while the second complex was formed much more slowly (kobsd = 40 s-1 when [S] = 5 microM and [NaCl] = 150 mM). Strikingly, in the absence of chloride, only a single, rapidly formed enzyme-substrate complex was observed. These results are consistent with a nonessential activator kinetic mechanism in which the slow step reflects conversion of an initially formed complex, (E X Cl- X S)1, to a more tightly bound complex, (E X Cl- X S)2.
Improving the endoscopic endonasal transclival approach: the importance of a precise layer by layer reconstruction. BACKGROUND. The endoscopic endonasal transclival approach (EETCA) is a minimally invasive technique allowing a direct route to the base of implant of clival lesions with reduced brain and neurovascular manipulation. On the other hand, it is associated with potentially severe complications related to the difficulties in reconstructing large skull base defects, with a high risk of postoperative cerebrospinal fluid (CSF) leakage. The aim of this paper is to describe a precise layer by layer reconstruction in the EETCA, including suture of the mucosa as an additional reinforcing layer between the cranial and nasal cavities, in order to speed up the healing process and reduce the incidence of CSF leak. METHODS. This closure technique was applied to the last six cases of EETCA used for clival meningiomas (2), clival chordomas (2), clival metastasis (1), and craniopharyngioma with clival extension (1). RESULTS. After a mean follow-up of 6 months we had no cases of postoperative CSF leakage or infection. Serial outpatient endoscopic endonasal controls showed a fast healing process of the nasopharyngeal mucosa with less patient discomfort. CONCLUSIONS. Our preliminary experience confirms the importance of a precise reconstruction of all anatomical layers violated during the surgical approach, including the nasopharyngeal mucosa.
This app allows you to type fast in Telugu. It supports both Telugu and English keyboards. Its built-in dictionary engine automatically learns the words you type, so the next time you type, entering the first letter of a word displays the full word. Just tap and use. The best English-to-Telugu dictionary on the market. It has more than 31400 English words with Telugu meanings, synonyms and antonyms. It has voice-enabled pronunciation for English words and doesn't need an internet connection. Predictive searching will make your life easier. Features: + It's an offline dictionary + More than 31400 words. Free download of English To Telugu Dictionary 3.1, size 32.40 Mb. Everyday life and higher education demands across every industry have given rise to the need for English meanings in various languages along with English pronunciation. Looking to meet these basic requirements, we have introduced 10 in 1 DICTIONARY, which is very useful for anyone who requires Hindi, Gujarati, Marathi, Punjabi, Kannada, Tamil, Telugu,. Free download of 10 in 1 Dictionary 9.0, size 58.56 Mb. The intention behind preparing this application is to reach all Christians through songs, and by listening to them they will grow in spirituality. Here we provide in the first version...1) songs from different languages (Telugu, Hindi, English, Tamil, Malayalam, Gujarati, Kannada, etc.). 2) different ministry songs and different singers. Freeware download of All Christian Songs 1.0.1.1, size 1.05 Mb. E2H Character Converter is an English to Hindi conversion software that works "as you speak" the Hindi language. It is very useful for your office and for your personal work. It has very useful features and it is also simple to use. This software solves your Hindi typing problems. This is a remarkable software tool developed to help in converting Shusha and Kruti fonts to Unicode. The tool can convert fonts in various Devanagari scripts like Hindi, Marathi, Nepali, and Sanskrit with similar ease.
The tool not only converts Shusha to Mangal or Kruti to Mangal, but also converts Kruti to Shusha or Shusha to Kruti. Need for such tools. Free download of Hindi Unicode Converter 6.0.0, size 1.41 Mb. E2H is very powerful software which is used to convert English characters into Hindi. At times, it's a necessity to prepare a document in languages other than English, which becomes tough because your computer might not support other languages. So we at Multiicon recognized this limitation and have developed the E2H character. Free download of English to Hindi Character Converter 9.0, size 3.39 Mb. Aiseesoft PDF to SWF Converter is really a cool tool converting any PDF files to SWF ones which can be displayed via IE, Flash Player or many other applications. The PDF to SWF Converter is a multi-language software supporting English, Turkish, Thai, Latin, Korean, Greek, Cyrillic, Arabic, Japanese, Chinese files etc. E2M is very powerful software which is used to convert English characters into Marathi. At times, it's a necessity to prepare a document in languages other than English, which becomes tough because your computer might not support other languages. So we at Multiicon recognized this limitation and have developed the E2M character. Free download of English to Marathi Character Converter 9.0, size 8.13 Mb. E2G is very powerful software which is used to convert English characters into Gujarati. At times, it's a necessity to prepare a document in languages other than English, which becomes tough because your computer might not support other languages. So we at Multiicon recognized this limitation and have developed the E2G character. Free download of English to Gujarati Character Converter 9.0, size 3.33 Mb. Easy Date Converter is a program to perform arithmetical operations with Common Era (Gregorian) and Julian dates, Julian day numbers and ordinal dates of the form yyyy-ddd.
It is a bilingual English/German program. Suppose you run an advertisement which expires 45 days from today. What date will that be? How many weekdays are there in 2006? If you. Free download of Easy Date Converter 8.59, size 1.73 Mb.
//
//  CDILoadingView.h
//  Cheddar for iOS
//
//  Created by Sam Soffes on 5/28/12.
//  Copyright (c) 2012 Nothing Magical. All rights reserved.
//

@interface CDILoadingView : SSLoadingView
@end
Covers every aspect of Youth, Virtually. A voice that never ages... Phenomenal songs for over 3 decades... Soulful, youthful and romantic melodies... Backing a range of actors... Adding taste to ordinary characters... 5 Filmfares for Best Playback Singer... He is none other than... Udit Narayan... The voice of the era... SILVER BELLS AT CH&FC GROUNDS By Mandulee Mendis. Harmonious melodies mingled with Indian beats filled the air of the CH & FC grounds on the night of the 16th. The long-awaited concert commenced at 7 p.m. with a fascinating dance performed to some Bollywood hits, and the dancers included famous Sri Lankan stars Roshan Ranawana, Shalini Tharaka, Suraj Mapa, Akalanka Ganegama and Shashila. Glamour was sprinkled on the audience with the first song of the star of the night: Udit Narayan. It was a blast! The thrill was sustained till the end as he performed all his songs, followed by a few other stars, notably including his wife Deepa Narayan. The stage gleamed with the splendor and fascination of the local dancers, and Bollywood dances added glitter to it. The concert, which lasted till 11.30 p.m., was an obvious success, the result of a tremendous 20 days of work covering complete preparation and swift promotion. Everyone could see with a sense of awe the admirable work of the organizers, specifically Balcony 6 Entertainment, Sandaruwan Thenuwara, Carmen Thenuwara and Gayan Thenuwara; the sound suppliers, Malinda Lowe (Universal Sounds); stage lighting, MOS Pvt. Ltd; and the promoters, specially including MBC networks. On behalf of the organizers, Xtream Youth thank everyone who did their duties to the best of their ability since the morning of 15th June, when Udit Narayan arrived in Sri Lanka, through the press conference at the Taj Samudra hotel that evening, and during the climax: the magnificent concert, which was an apparent success.
Development of the headspace Family and Friends Satisfaction Scale: Findings from a pilot study. The primary aim of this pilot study was to determine the psychometric properties of the 18-item headspace Family and Friends Satisfaction Scale (hFAFSS). During August 2015, staff from 22 headspace centres approached family members and friends of young people attending headspace to complete the hFAFSS. Principal components analysis with oblique Promin rotation and polychoric correlations were used to assess the factor structure of the hFAFSS. There were 277 usable responses. Satisfaction was high, resulting in little variance. Parallel analyses suggested that the scale items tapped a single factor (68% of variance). This study is one of the first attempts to measure the satisfaction of family and friends with primary care-based youth mental health services. Satisfaction of family members and friends was shown to be high, but limited variance restricts the usability of the hFAFSS as an evaluation measure, and revision and further testing is needed.
China's home-made planes, cars, robots and submarines

A man loves Transformers so much he has established a company that builds them. Yang Junlin set up his factory, called "Legend of Iron", in Huizhou, southern China's Guangdong Province. Yang set up his factory and hired more than 10 workers in an effort to realise his dream. Over the past five years he has designed more than 1,000 different Transformers.
Q: Working with Switches in UITableView

I'm a newbie in iOS programming (I know Java) and I have trouble with a simple UISwitch. I have a tab-based application with two views. First view: Data (single view). Second view: Settings (table view). I started by creating some UILabels on my first view and some table view cells in the Settings view. Now I just want that when a switch (which is in a table view cell) is on, a label on the first view should say "YES", else "NO". Just something really simple.

My question is: how can I get access to my UISwitch in FirstView.m? I imported SecondView.h already. But how can my FirstView get access to all the stuff from the SecondView? I searched on Google and found:

SecondViewController *secondView = [self.storyboard instantiateViewControllerWithIdentifier:@"secondView"];

I set the Storyboard ID of my SecondView to secondView. But it doesn't work. Can someone help me please?

EDIT: Here is my code.

FirstViewController viewDidAppear method:

- (void)viewDidAppear:(BOOL)animated {
    BOOL onOff = secondView.mySwitch.on;
    if (onOff) {
        label.text = @"On";
    } else {
        label.text = @"Off";
    }
}

Here is the viewDidLoad of my FirstView:

- (void)viewDidLoad {
    [super viewDidLoad];
    secondView = [self.storyboard instantiateViewControllerWithIdentifier:@"secondView"];
}

In my .h file I created a property:

@property SecondViewController *secondView;

And in the .m file I @synthesize it. I'm sure that I set the Storyboard ID, because when I type in something different, the program doesn't even start. In SecondViewController.h I added the switch:

@property (weak, nonatomic) IBOutlet UISwitch *mySwitch;

I added it by dragging it into the source code with a right click. So it seems that I don't get access to the class.

EDIT 2: This is my complete project. Basically it is just the tabbed template, with a UITableView as the second view. FirstView is a single view. So here is my project:

FirstViewController.h:

#import <UIKit/UIKit.h>
#import "SecondViewController.h"

@interface FirstViewController : UIViewController
@property (weak, nonatomic) IBOutlet UILabel *lblSwitch;
@property SecondViewController *secondView;
@end

FirstViewController.m:

#import "FirstViewController.h"

@interface FirstViewController ()
@end

@implementation FirstViewController
@synthesize secondView, lblSwitch;

- (void)viewDidLoad {
    [super viewDidLoad];
    secondView = [self.storyboard instantiateViewControllerWithIdentifier:@"secondView"];
    // Do any additional setup after loading the view, typically from a nib.
}

- (void)viewDidAppear:(BOOL)animated {
    if (secondView.mySwitch.isOn) {
        NSLog(@"First View: Switch is on!");
        lblSwitch.text = @"Switch is on!";
    } else {
        NSLog(@"First View: Switch is off!");
        lblSwitch.text = @"Switch is off!";
    }
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
@end

SecondViewController.h:

#import <UIKit/UIKit.h>

@interface SecondViewController : UITableViewController
@property (weak, nonatomic) IBOutlet UISwitch *mySwitch;
@end

SecondViewController.m:

#import "SecondViewController.h"

@interface SecondViewController ()
@end

@implementation SecondViewController
@synthesize mySwitch;

- (id)initWithStyle:(UITableViewStyle)style {
    self = [super initWithStyle:style];
    if (self) {
        // Custom initialization
    }
    return self;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    // Uncomment the following line to preserve selection between presentations.
    // self.clearsSelectionOnViewWillAppear = NO;
    // Uncomment the following line to display an Edit button in the navigation bar for this view controller.
    // self.navigationItem.rightBarButtonItem = self.editButtonItem;
}

- (void)viewDidAppear:(BOOL)animated {
    if (mySwitch.isOn)
        NSLog(@"Switch is on");
    else
        NSLog(@"Switch is off");
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

#pragma mark - Table view data source

- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
    // Return the number of sections.
    return 1;
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    // Return the number of rows in the section.
    return 1;
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [super tableView:tableView cellForRowAtIndexPath:indexPath];
    // Configure the cell...
    return cell;
}

/*
// Override to support conditional editing of the table view.
- (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath {
    // Return NO if you do not want the specified item to be editable.
    return YES;
}
*/

/*
// Override to support editing the table view.
- (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath {
    if (editingStyle == UITableViewCellEditingStyleDelete) {
        // Delete the row from the data source
        [tableView deleteRowsAtIndexPaths:@[indexPath] withRowAnimation:UITableViewRowAnimationFade];
    } else if (editingStyle == UITableViewCellEditingStyleInsert) {
        // Create a new instance of the appropriate class, insert it into the array, and add a new row to the table view
    }
}
*/

/*
// Override to support rearranging the table view.
- (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath {
}
*/

/*
// Override to support conditional rearranging of the table view.
- (BOOL)tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath {
    // Return NO if you do not want the item to be re-orderable.
    return YES;
}
*/

/*
#pragma mark - Navigation

// In a story board-based application, you will often want to do a little preparation before navigation
- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender {
    // Get the new view controller using [segue destinationViewController].
    // Pass the selected object to the new view controller.
}
*/
@end

That is all the code. Here is a screenshot of the main.storyboard. Here is the Photo

A: I will post 2 classes for you here. Just copy and paste it.

SwitchViewController.h (your first view controller):

@class LabelViewController;

@interface SwitchViewController : UIViewController
@property (weak, nonatomic) IBOutlet UISwitch *mySwitch;
- (IBAction)switch:(id)sender; // connect the above from storyboard.
@property (strong, nonatomic) LabelViewController *secondView;
@end

SwitchViewController.m:

#import "SwitchViewController.h"
#import "LabelViewController.h"

@interface SwitchViewController ()
@end

@implementation SwitchViewController

- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil {
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
    }
    return self;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    self.secondView = [self.storyboard instantiateViewControllerWithIdentifier:@"secondView"];
    self.view.backgroundColor = [UIColor lightGrayColor];
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

- (IBAction)switch:(id)sender {
    UISwitch *theSwitch = (UISwitch *)sender;
    [self addChildViewController:self.secondView];
    [self.view addSubview:self.secondView.view];
    self.secondView.myLabel.text = (theSwitch.isOn ? @"On" : @"Off");
}
@end

LabelViewController.h (your second view controller):

#import <UIKit/UIKit.h>

@interface LabelViewController : UIViewController
@property (weak, nonatomic) IBOutlet UILabel *myLabel;
@property (weak, nonatomic) IBOutlet UIButton *backToFirstView;
- (IBAction)backToFirstView:(id)sender; // connect the above from storyboard.
@end

LabelViewController.m:

#import "LabelViewController.h"

@interface LabelViewController ()
@end

@implementation LabelViewController

- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil {
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
    }
    return self;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    self.view.backgroundColor = [UIColor darkGrayColor];
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

- (IBAction)backToFirstView:(id)sender {
    [self removeFromParentViewController];
    [self.view removeFromSuperview];
    NSLog(@"If it works, buy me a beer.");
}
@end

In Storyboard: create 2 UIViewControllers. While you are there, set the Storyboard ID for LabelViewController as secondView. Here is a screenshot of storyboard:

SUPER EDIT: There are two classes and a snapshot of storyboard.

Your first view (the one with the label as an IBOutlet):

In YourFirstViewController.h:

#import <UIKit/UIKit.h>

@interface YourFirstViewController : UIViewController
@property (weak, nonatomic) IBOutlet UILabel *myLabel;
@end

In YourFirstViewController.m:

#import "YourFirstViewController.h"
#import "YourTableViewController.h"

@interface YourFirstViewController ()
@property (strong, nonatomic) YourTableViewController *tableView;
@end

@implementation YourFirstViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    self.tableView = [self.tabBarController.viewControllers objectAtIndex:1];
    self.view.backgroundColor = [UIColor darkGrayColor];
}

- (void)viewWillAppear:(BOOL)animated {
    NSLog(@"view will appear in first view");
    [super viewWillAppear:animated];
    self.myLabel.text = (self.tableView.mySwitsch.isOn ? @"ON" : @"OFF");
}

In YourTableViewController.h (in which you have an IBOutlet for your switch):

#import <UIKit/UIKit.h>

@interface YourTableViewController : UITableViewController
@property (weak, nonatomic) IBOutlet UISwitch *mySwitsch;
@end

In YourTableViewController.m:

- (void)viewWillAppear:(BOOL)animated {
    NSLog(@"view will appear table view");
    [super viewWillAppear:animated];
}

- (void)viewDidLoad {
    [super viewDidLoad];
    NSLog(@"view did load in table view");
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

#pragma mark - Table view data source

- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
    // Return the number of sections.
    return 1;
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    // Return the number of rows in the section.
    return 1;
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *CellIdentifier = @"Cell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier];
        [cell.contentView addSubview:self.mySwitsch];
    }
    // Configure the cell...
    return cell;
}

***** Super Super Edit *****

If you want to keep your UINavigationController around: in your first view controller's .m, replace it with the following in your viewDidLoad:

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    self.navController = [self.tabBarController.viewControllers objectAtIndex:1];
    self.tableView = (YourTableViewController *)[self.navController.viewControllers objectAtIndex:0];
    self.view.backgroundColor = [UIColor darkGrayColor];
}

Add a property:

@property (strong, nonatomic) UINavigationController *navController;

Tested.
{ "pile_set_name": "StackExchange" }
#
# The Alluxio Open Foundation licenses this work under the Apache License, version 2.0
# (the "License"). You may not use this work except in compliance with the License, which is
# available at www.apache.org/licenses/LICENSE-2.0
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
# either express or implied, as more fully set forth in the License.
#
# See the NOTICE file distributed with this work for information regarding copyright ownership.
#

- name: start spark
  shell: /spark/sbin/start-all.sh

- name: start spark history server
  shell: /spark/sbin/start-history-server.sh

# vim: set filetype=ansible.yaml:
{ "pile_set_name": "Github" }
Negative reviews and a lesson on what not to do from BP

A recent Fast Company article talked about what a mess BP has made of its business brand through its handling of the Gulf oil disaster. The CEO of the company seems to have a knack for saying the wrong thing at the wrong time and, as one publicity expert said earlier today, “The best thing BP can do is keep the CEO as far away from any microphones, television cameras and reporters as it possibly can. The man is a walking PR disaster!” His responses to the press and media have done as much to damage the public’s view of his company as their handling of the disaster. Just today the CEO of BP apologized for his comment yesterday that he just wants this to be over so he can return to his old life. The backlash for that comment has been huge, especially from the relatives of those who lost their lives on the oil rig. BP’s stock price is down about 33% as of this writing, and people are saying BP stands for Bad Press. That brings up the subject of how your company handles negative feedback, whether it’s a complaining customer, a bad piece in the local paper, a complaint to the BBB or a negative online review. The words you and your team use to respond to negative situations are critical to your clients’ and your staff’s perceptions of you. It’s an old marketing adage that every customer criticism represents an opportunity to exceed expectations and gain loyalty. It follows that the words you use to respond to clients or colleagues who question your work can cement their perceptions of your professionalism, or fatally undermine your relationship. Think of BP’s proposed “Top Kill” solution to its environmentally, economically, commercially, and politically fatal oil spill, and you’ll get what I mean. “Top Kill”-style bumbles can happen in the workplace, too.
Good Language, Bad Language

There’s nothing wrong with acknowledging the truth, admitting your mistakes, and taking responsibility for errors or oversights you’ve made. The first rule of accepting negative feedback is, obviously, to accept it — preferably with good grace and humility. But in doing that, there are certain types of language you should avoid, for the sake of your personal or professional brand, as well as your contact’s confidence.

Insulting Language

Never use language that suggests you’re anything but a serious professional who takes pride in their work. Whether or not your client or colleague thinks you’re an idiot for making whatever mistake you’ve made is irrelevant. You shouldn’t insult yourself, or anyone else, in acknowledging responsibility for an issue. Everyone makes mistakes — including your colleagues — so don’t allow yourself to be overcome with guilt. You’re not an idiot; neither is that third party who let you down, and who you now want to blame for the problem. Replace “I’m such a moron” or “Pete at the print shop is a total hack” with “I’m sorry, I followed our standard procedure for checking the proofs, and even had a couple of other people look over it, but obviously we missed this error. It’s my mistake.”

Panicked Language

Gasping, exclaiming, “Oh my gosh, I can’t believe I didn’t see that!” and moaning “Oh no,” are all evidence of panic, and no client wants to think you can’t handle the everyday ups and downs of work life. In fact, they don’t want to get the impression that there’s anything you can’t handle. So avoid the language of panic. Even if your heart’s racing and your palms are sweaty with horror upon receiving the negative feedback, don’t panic. Just take a breath, apologize calmly, acknowledge the problem, and pledge to investigate.

Casual Language

If a colleague or client raises an issue with you, you can assume it’s a serious problem for them. So, use appropriately serious language in your response.
BP apparently weren’t thinking of this point when they called a possible solution for the Gulf spill the “Junk Shot.” In talking about a multinational, apparently uncontrollable environmental disaster that’s impacting an ocean ecosystem, countless species, thousands of miles of coastline, and millions of human lives, you’d think they’d be able to come up with a solution that sounded a little less like circus entertainment. Remember this the next time someone provides negative feedback on your performance. Giving negative feedback is never pleasant. Your contact is doing it because it’s a serious issue for them. So forget telling your contact “I’ll check it out when I get a sec.” Tell them you’re reviewing it now. If the problem is very serious, consider using more pointed terms, like “investigating” or “inquiring”. Always try to provide a timeframe in which you’ll have an explanation or researched response to their concerns, too.

Overly Personal Language

Usually, it’s not appropriate to provide details of your personal troubles as excuses or explanations of poor performance. Even saying something as generic as, “I’ve been having some personal problems” only serves to make your client or colleague feel bad for raising the issue. That’s the best outcome. At worst, it can make your more hardline contacts question your professionalism: “So your dog died. Whatever. Can we just focus on the issue here?” In most cases, your client or colleague doesn’t need to know the background against which you underperformed. They’re more likely to want to know the mechanics of what went wrong, and/or how you’ll improve matters. Keep your language on the job and the problem at hand. That said, don’t take personal responsibility for things that aren’t actually your fault. There’s a difference between owning a problem and taking undue responsibility for it.
Be honest about your role in the problem, and what you’ll do to resolve it, but also be honest about any aspects of the problem that were — or are — beyond your control. This is as much about expectation management as it is about protecting your reputation.

Take Care With Tense

In presenting your explanation, or other information, to complaining contacts, try to use the past tense to explain the issue: “We were using a process that didn’t anticipate…” rather than, “The process we use doesn’t anticipate…” Use present and future tense — and spend more time — to focus on your process for resolving the issue and how it’ll provide a good outcome. “I’m undertaking a training course that addresses these topics, and those skills will help me perform better in this area,” for example. The language you choose can boost or undermine your personal or professional brand. What tips can you give to help those fielding negative feedback at work today?
{ "pile_set_name": "Pile-CC" }
The method, apparatus, and system according to the present invention are configured to compute interpolated image data of a video signal by means of line-based motion estimation and compensation, and to detect and handle errors in the interpolated image data obtained as a result of performing the line-based motion compensation. The present invention allows efficient use of chip-internal memory and efficient interaction of the components, devices, and/or modules enabling the line-based motion estimation and compensation, and processing of the interpolated image data obtained as a result of performing the line-based motion compensation, wherein the quality of the resulting image data to be visualized is at the same time improved considerably and in an effective way. Hereinafter, the present invention and its underlying problem are described with regard to the processing of a video signal for line-based motion estimation and motion compensation within a video processing apparatus, such as a microprocessor or microcontroller having line memory devices; it should be noted, however, that the present invention is not restricted to this application, but can also be used for other video processing apparatus. The market introduction of TV sets based on a 100/120 Hz frame rate or even higher required the development of reliable Field/Frame Rate Up-conversion (FRU) techniques to remove artefacts within a picture, such as large-area flicker and line flicker. Standard FRU methods, which interpolate the missing image fields to be displayed without performing an estimation and compensation of the motion of moving objects in successive image fields, are satisfactory in many applications, especially with regard to better image quality and the reduction of the above-mentioned artefacts. However, many pictures contain moving objects, like persons, subtitles and the like, which cause so-called motion judder.
This problem is better understood by referring to FIG. 1, wherein the motion trajectory of the moving objects (white squares) in the original image fields (i.e. transmitted and received image fields) is assumed to be a straight line. If the missing fields/frames result from interpolation by means of the above-mentioned standard FRU methods (i.e. without motion estimation and compensation), the motion of the moving object in the interpolated fields (dark grey squares) is not at the position expected by the observer (dotted squares). Such artefacts are visible and induce a blurring effect, especially for fast-moving objects. These blurring effects typically reduce the quality of the displayed images significantly. In order to avoid such blurring effects and to reduce artefacts, several methods for motion estimation and motion compensation—or MEMC for short—have been proposed. MEMC provides the detection of a moving part or object within the received image fields and then the interpolation of the missing fields according to the estimated motion by incorporating the missing object or part into an estimated field. FIG. 2 schematically shows the change of position of a moving object between two successive image fields. Between two successive received image fields/frames, the moving objects will have changed their position; e.g., object MO, which is at position A in the previous field/frame T, is at position B in the current field/frame T+1. This means that motion exists from the previous field/frame T to the current field/frame T+1. This motion of an object in successive image fields/frames can be represented by a so-called motion vector. The motion vector AB represents the motion of the object MO from position A in the previous field T to position B in the current field/frame T+1. This motion vector AB typically has a horizontal and a vertical vector component.
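The motion-vector arithmetic just described lends itself to a short sketch. The following Python fragment (a simplified illustration with made-up coordinates, not the implementation claimed by the patent) places a moving object in the temporally centered missing field by translating its previous position with half the motion vector:

```python
# Illustrative sketch of motion-compensated position interpolation.
# A is the object's position in the previous field T, B its position in
# the current field T+1; the interpolated field T+1/2 lies temporally
# halfway between them, so the object is placed at A + 0.5 * AB.

def interpolate_position(a, b):
    """Return the object position in the missing field T+1/2."""
    ax, ay = a
    bx, by = b
    # The motion vector AB has a horizontal and a vertical component.
    vx, vy = bx - ax, by - ay
    # Translate A by half the motion vector.
    return (ax + 0.5 * vx, ay + 0.5 * vy)

# Example: object moves from (10, 20) to (18, 24) between two fields.
print(interpolate_position((10, 20), (18, 24)))  # -> (14.0, 22.0)
```

Note the trivial case: when A and B coincide, the motion vector is zero and the interpolated position is simply A, matching the text's observation about stationary objects.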
Starting from point A in the previous field T and applying this motion vector AB to the object MO, the object MO is translated to position B in the current field/frame T+1. The missing position I of the object MO in the missing field/frame T+½ that has to be interpolated must be calculated by interpolation of the previous field T and the current field T+1, taking into account the respective positions A, B of the moving object MO. If the object MO does not change its position between the previous field/frame and the current field/frame, e.g. if A and B are the same, position I in the missing field is obtained by the translation of A with a motion vector |AB|/2. In this manner the missing field T+½ is interpolated with the moving object in the right position, with the consequence that blurring effects are effectively avoided. Theoretically, a corresponding motion vector has to be calculated for each pixel of a field. However, this would increase the number of calculations needed, and thus the memory requirements, enormously. To reduce this enormous calculation and memory effort, there exist basically two different approaches: The first approach employs a so-called block-based MEMC. This first approach assumes that the dimension of the object in the image is always larger than that of a single pixel. Therefore, the image field is divided into several image blocks, and for MEMC only one motion vector is calculated for each block. The second approach employs a so-called line-based MEMC. In this second approach, the algorithm is based on a reduced set of video input data of a single line of a field or a part of this line. The present invention is based on this second MEMC approach. In present line-based MEMC systems, image data is usually stored in a local buffer or on-chip memory, the so-called line memory, on which rather extreme bandwidth requirements are imposed. Many present MEMC systems, like the implementations described by Gerard de Haan in EP 765 572 B1 and U.S. Pat. No.
6,034,734, apply a cache memory (e.g. a two-dimensional buffer) to reduce the bandwidth requirements and to store a sub-set of an image. The motion compensation device or module fetches video image data from this cache while applying motion vectors. Typically, in MEMC systems this cache covers the whole search range of the motion vectors. Usually, the cache consists of a great number of so-called line memories. This results in a relatively large amount of memory, e.g. 720 pixels wide and 24 lines (with an associated maximum vertical vector range of [−12, +12]). Such a cache, comprising a great number of single line memories, requires a huge amount of memory used only for MEMC data buffering. As a consequence, the memory portion within the processor covers a relatively sizable chip area. Commonly used MEMC algorithms compensate the motion in two directions, i.e. in the horizontal as well as in the vertical direction. For that operation, random memory access must be possible, which for a hardware implementation requires sufficient embedded chip memory within the video processor for the different temporally incoming data streams. The size of this embedded chip memory strongly depends on the search range (i.e. search area) for the motion of an object, as already outlined above, within which the motion estimation can match similar video patterns at two temporal positions and derive the velocity of the motion in terms of pixels per frame or per field. However, this matching process does not always work perfectly, so methods to determine the quality of the measured motion vector are required. Therefore, for the internal storage of further temporally incoming video signals, additional memory resources are required. This, however, increases the amount of embedded memory even further, which leads to an increase of the chip area, since for an integrated circuit it is the chip-internal memory which significantly determines the chip area.
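The figures quoted above give a feel for the on-chip cost. A back-of-the-envelope estimate in Python (the bytes-per-pixel value is an assumption made for illustration; the text does not state one):

```python
# Rough size estimate for the MEMC line-memory cache described above:
# 720 pixels per line, 24 lines (vertical vector range [-12, +12]).
WIDTH_PIXELS = 720
NUM_LINES = 24          # covers the whole vertical search range
BYTES_PER_PIXEL = 2     # assumption, e.g. 16-bit samples per pixel

pixels = WIDTH_PIXELS * NUM_LINES
size_bytes = pixels * BYTES_PER_PIXEL
print(pixels)             # -> 17280 pixels buffered
print(size_bytes / 1024)  # -> 33.75 KiB of embedded memory
```

Tens of kilobytes of dedicated SRAM for a single buffering function is significant on an integrated circuit, which is exactly the cost pressure the passage describes.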
Consequently, the chip becomes more and more expensive. Especially in the mainstream market segment, such as for modern plasma and LCD TVs, these additional costs typically form a limiting factor for an MEMC implementation. The present invention is therefore based on the object of providing a more efficient use of the chip-internal resources, and especially of the chip-internal memory, with regard to motion estimation and motion compensation, wherein the quality of the resulting image data is to be improved at the same time.
{ "pile_set_name": "USPTO Backgrounds" }
/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class com_badlogic_gdx_physics_box2d_joints_RopeJoint */

#ifndef _Included_com_badlogic_gdx_physics_box2d_joints_RopeJoint
#define _Included_com_badlogic_gdx_physics_box2d_joints_RopeJoint
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     com_badlogic_gdx_physics_box2d_joints_RopeJoint
 * Method:    jniGetLocalAnchorA
 * Signature: (J[F)V
 */
JNIEXPORT void JNICALL Java_com_badlogic_gdx_physics_box2d_joints_RopeJoint_jniGetLocalAnchorA
  (JNIEnv *, jobject, jlong, jfloatArray);

/*
 * Class:     com_badlogic_gdx_physics_box2d_joints_RopeJoint
 * Method:    jniGetLocalAnchorB
 * Signature: (J[F)V
 */
JNIEXPORT void JNICALL Java_com_badlogic_gdx_physics_box2d_joints_RopeJoint_jniGetLocalAnchorB
  (JNIEnv *, jobject, jlong, jfloatArray);

/*
 * Class:     com_badlogic_gdx_physics_box2d_joints_RopeJoint
 * Method:    jniGetMaxLength
 * Signature: (J)F
 */
JNIEXPORT jfloat JNICALL Java_com_badlogic_gdx_physics_box2d_joints_RopeJoint_jniGetMaxLength
  (JNIEnv *, jobject, jlong);

/*
 * Class:     com_badlogic_gdx_physics_box2d_joints_RopeJoint
 * Method:    jniSetMaxLength
 * Signature: (JF)V
 */
JNIEXPORT void JNICALL Java_com_badlogic_gdx_physics_box2d_joints_RopeJoint_jniSetMaxLength
  (JNIEnv *, jobject, jlong, jfloat);

#ifdef __cplusplus
}
#endif
#endif
{ "pile_set_name": "Github" }
Boxwood from Microsoft Research The basic idea is to create a distributed data store over a small cluster. It is similar in motivation to Bigtable and GFS, but lower-level. From the paper: The overall goal of the Boxwood project is to experiment with data abstractions as the underlying basis for storage infrastructure ... [that includes] redundancy and backup schemes to tolerate failures, expansion mechanisms for load and capacity balancing, and consistency maintenance in the presence of failures. The principal client-visible abstractions that Boxwood provides are a B-tree abstraction and a simple chunk store abstraction provided by the Chunk Manager. It is worth noting right away that Boxwood is a research project, not a deployed system. The Boxwood prototype runs on a small cluster of eight machines. GFS and Bigtable run on tens of thousands of machines and provide the backend for many of Google's products. It is also worth noting that they have different standards for failure tolerance. For one of several examples, the Boxwood paper says that "failures are assumed to be fail-stop". Contrast that with the experience of the folks at Google working on Bigtable: One lesson we learned is that large distributed systems are vulnerable to many types of failures, not just the standard network partitions and fail-stop failures assumed in many distributed protocols. For example, we have seen problems due to all of the following causes: memory and network corruption, large clock skew, hung machines, extended and asymmetric network partitions, bugs in other systems that we are using (Chubby for example), overflow of GFS quotas, and planned and unplanned hardware maintenance. In any case, the Boxwood paper is an interesting read. This is work at Microsoft that may follow a similar path to GFS and Bigtable. 
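To make the client-visible abstractions concrete, here is a toy, in-memory sketch of a chunk-store interface in Python. The method names and semantics are hypothetical simplifications invented for illustration; Boxwood's actual chunk manager is distributed and fault-tolerant, and its API differs:

```python
# Toy in-memory sketch of a Boxwood-style chunk store (hypothetical
# interface; the real system replicates chunks across machines).
import itertools

class ChunkStore:
    def __init__(self):
        self._chunks = {}
        self._ids = itertools.count(1)

    def allocate(self, data: bytes) -> int:
        """Store a new chunk and return an opaque handle for it."""
        handle = next(self._ids)
        self._chunks[handle] = data
        return handle

    def read(self, handle: int) -> bytes:
        """Return the chunk's current contents."""
        return self._chunks[handle]

    def write(self, handle: int, data: bytes) -> None:
        """Overwrite an existing chunk."""
        if handle not in self._chunks:
            raise KeyError(handle)
        self._chunks[handle] = data

store = ChunkStore()
h = store.allocate(b"hello")
store.write(h, b"world")
print(store.read(h))  # -> b'world'
```

A higher-level B-tree module, like the one Boxwood provides, would sit on top of handles like these rather than on raw disk blocks.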
Update: Mary Jo Foley mentions another Microsoft Research project called Dryad and quotes Bill Gates as saying, "[Google] did MapReduce; we have this thing called Dryad that's better." Unfortunately, there appears to be very little public information on Dryad; I can find no publications on the work.

My understanding is that Amazon S3 is not trying to be a high-performance clustered filesystem, a candidate for replacing traditional fileservers, usable for rapid access to data. As such, I do not think S3 is seeking to have the same level of performance or reliability as these other systems. I could be wrong. Do you know of anyone using S3 for data mining or some other high-performance, data-intensive operation? I'd be amazed, but please let me know if it is happening.

2. They claim to be suitable for high-availability applications. It is being used by YouOS as a fileserver (see the "Success stories" at: http://www.amazon.com/Success-Stories-AWS-home-page/b/ref=sc_fe_c_0_16427261_2/102-5087021-4901764?ie=UTF8&node=182241011&no=16427261&me=A36L942TSJ2AJA

My guess is that they are as reliable and fast as any remote-storage system can be.

I would expect the latency of S3 would be too high for many applications. Assuming you kept the amount of S3 calls small or had thousands of threads performing smaller calls on top of S3, you might be ok.... It just seems easier for some people to deploy their own DFS ... Unfortunately the OSS tools still don't exist to do this stuff right...
{ "pile_set_name": "Pile-CC" }
Powhattan, Ohio Powhattan is an unincorporated community in Champaign County, in the U.S. state of Ohio. History Powhattan was founded no later than the 1850s. The community was named after Chief Powhatan. References Category:Unincorporated communities in Champaign County, Ohio Category:Unincorporated communities in Ohio
{ "pile_set_name": "Wikipedia (en)" }
“I’m pleased to see significant transit funding identified in today’s budget,” the mayor told reporters, crediting the investment on the FCM’s and big city mayors’ lobbying efforts over the last several years. The city will officially apply for Phase 2 funding this summer, said Watson, to make it clear what Ottawa is looking for in terms of provincial and federal investments. And there are a few reasons to believe that this new fund may work to Ottawa’s advantage. First, the fund will favour projects that are financially structured as public-private partnerships, or P3s. The budget document describes the City of Edmonton’s decision to employ “a public-private partnership to design, build, finance, operate and maintain a 13.2-kilometre new light rail transit line over a 30-year contract” as innovative. Sound familiar? Phase 1 of Ottawa’s LRT, which is currently under construction, almost exactly matches the Edmonton example glowingly referred to in the budget. There are certainly some cities in Canada that don’t want to go the P3 route – critics argue that taxpayers pay too much over time for the protection offered by private partners – but Ottawa is already a P3 proponent. Although the details of the fund’s program parameters won’t be announced until later this year, there’s no reason to believe that Phase 2 won’t meet the requirements. Secondly, the budget calls for eking out the funds over 20 or 30 years instead of paying in a few lump sums upfront. That could be cause for some concern. Would payments spread out over decades cause the city to have to borrow more upfront than planned? Would the investment decrease in value over the years due to inflation, or would the funds be indexed? It’s too early to know.
{ "pile_set_name": "OpenWebText2" }
Should the U.S. Start Slaughtering Horses Again?

Today the Times tells us that the last slaughterhouse in the U.S. to kill horses for meat closed five years ago. What’s happened since? Well, people in foreign countries aren’t really eating any less horse meat (which has the very sexy name “viande chevaline” in France), so American horses are still ending up on the world’s dinner plates — the slaughter just happens outside the country. And some people think that means the U.S. is losing out on money that could be made off a practice that’s going to happen anyway. As you’d guess, animal welfare groups say that’s nonsense. Hugue Dufour, chef from the now-shuttered M. Wells, who the Times says served horse while working in Canada, adds, “It’s slightly hypocritical to allow these horses to be slaughtered anyway up in Canada or Mexico and not allow people here to get the income or serve the meat.” Plus, have you ever had horse carpaccio? Or sausage made with horse meat? It’s not like it tastes bad. Anyway, let us know what you think. [NYT]
{ "pile_set_name": "Pile-CC" }
(ANSA) - PARMA, DEC 28 - In the age of Bitcoin, a new cryptocurrency has been created in Berceto, a small town in the mountains of Parma province, and has even been chosen for drawing up the municipality's 2018 budget. It is the unusual form of protest chosen by mayor Luigi Lucchi against the state's cuts to local administrations and against the Euro, which, in the exact words of the resolution approved by the town council with only two votes against, is "a great swindle to reduce entire Peoples to poverty and increase inequality between rich and poor". The newly 'minted' currency is called Hau, in honour of the Lakota (Sioux), a people with which Berceto has a long-standing twinning arrangement. The budget in Hau has already been approved, and the resolution, in the view of the municipality of Berceto, is considered valid because "Italian municipalities are expressly permitted to issue electronic money, and the administration of Berceto therefore wishes to equip itself with this fundamental instrument in order to fulfil its institutional purposes".
{ "pile_set_name": "OpenWebText2" }
Justin Bieber Revisits His Old Hits in 'Carpool Karaoke' With James Corden!

By Philiana Ng 12:01 PM PDT, May 21, 2015

Mariah Carey started it. Jennifer Hudson crushed it. Now Justin Bieber is accompanying Late Late Show host James Corden in the latest edition of “Carpool Karaoke”! On Wednesday’s episode, the 21-year-old singer carpooled with Corden to work and the British host had some of Bieber’s biggest pop hits queued up on the car radio. Before they embarked on their urban safari though, two fans spotted Bieber inside the vehicle and even got a quick selfie! Corden and Bieber certainly made the most out of their ride, with the duo singing along to popular tunes like “Baby” (“It’s got the most dislikes on the Internet,” Bieber deadpanned), “Boyfriend” and “Where Are U Now.” At one point Corden jokingly admitted to not recognizing Bieber when he picked him up since “he had a top on.” Naturally this led Bieber to reveal that he throws out underwear after wearing it once. “That’s the life,” Corden says, almost in awe. They also engaged in deep relationship and life talk (“I want to be completely secure in myself and have a family,” Bieber says of his 10-year goal), discussed the pros and cons of fondue (Bieber is a fan, Corden isn’t) and engaged in a lesson on “swag.” Plus, did you know Bieber was a Rubik's cube master?! Oh, and they also switched clothes mid-ride – because why not? But you can’t end “Carpool Karaoke” without turning to a classic, so Corden and Bieber closed out their ride in style – with Boyz II Men. Justin Bieber and Madonna recently played "Never Have I Ever" on Ellen. Watch the hijinks below!
{ "pile_set_name": "Pile-CC" }
Developmental surface dyslexia is not associated with deficits in the transient visual system. Deficits of the transient visual system have been reported in unselected groups of dyslexics. The aim of this study was to examine whether this finding holds when subjects with a specific type of developmental reading disorder (surface dyslexia) are considered. Ten Italian children were examined. They all presented the characteristic markers of surface dyslexia: slow and laborious reading with errors in tasks which cannot be solved with a grapheme-phoneme conversion (i.e., homophones). Contrast sensitivity thresholds to phase-reversal gratings were within normal limits for most subjects both for stimuli presented centrally and in the right parafovea. This indicates that developmental surface dyslexia is not associated with a deficit in the transient system. In contrast, sensitivity to high spatial frequency stationary stimuli was reduced.
{ "pile_set_name": "PubMed Abstracts" }
EAST MEADOW, N.Y. -- The New York Islanders enter their game against the Pittsburgh Penguins at Barclays Center on Friday (7 p.m. ET; SN, TVA Sports, MSG+, ROOT, NHL.TV) in last place in the Eastern Conference, but hours before the opening faceoff, general manager Garth Snow reiterated his faith in coach Jack Capuano. "Jack, the coaching staff, our players, I have a lot of confidence in everyone in that room," Snow said during an impromptu press conference at the Islanders' practice facility. "The great part about when you face adversity, [you see] who rises to the top. Although it doesn't always feel easy for our fans, when you go through adversity, it's a great challenge and I always look forward to see who rises to that challenge, and it doesn't matter whether it's a player, coaches, staff … we're all in this together and I've got a lot of belief in everyone in that room." The Islanders (5-8-3) have scored 40 goals in 16 games, forcing Capuano to constantly shuffle his lines with the hope of finding some offense. New York could receive a boost via trade, which Snow did not rule out. "Like every other team, we're always looking to improve our team," Snow said. "All avenues, whether it's the draft, whether it's free agency, whether it's trades, waivers, we've gone every route to build this team. Like I said, I believe in the guys that we have in that room right now. It'll never prevent us from always looking to try to improve our club, but those guys in there are gonna rise to the top, and I have a lot of belief in them. "I've had the luxury to work with [co-owners] Jon Ledecky [and] Scott Malkin, and I've had their support," Snow said. "For me, I've been very fortunate to be a manager and to have had the support from ownership that I've had for the last 10 years. I know that doesn't always happen. I'm grateful and I'm appreciative about the support that I get from them." 
Ledecky and Malkin showed their support on July 1, when the Islanders signed left wing Andrew Ladd to a seven-year, $38.5 million contract. Ladd started the season on the top line with John Tavares but failed to produce and has since been shuffled around. He enters the game Friday with two goals and one assist in 16 games. "Andrew brings a lot of different things to our club," Snow said. "[He's] a very high-character guy. We knew that when we signed him. A lot of people don't know last year, I don't think he had many goals around Christmas and then he turned it on. But Andrew Ladd isn't defined as a hockey player by scoring goals; it's a lot of different things he does, whether it's [being tough] to play against, his leadership … he's a winner. Andrew's a big part of our club. It's the reason why we signed him, because of all those positive attributes that he brings to the club." Prior to the start of the regular season, Snow opted to keep three goaltenders on the 23-man roster rather than right wing PA Parenteau, who had signed a one-year contract on July 2. Parenteau, who had 120 points playing alongside Tavares from 2010-12, was claimed off waivers by the New Jersey Devils and has five goals and two assists in 16 games. Tavares is the only player on the Islanders roster with five goals. Meanwhile, No. 3 goalie Jean-Francois Berube has yet to play a game this season. Jaroslav Halak will make his seventh straight start Friday, with Thomas Greiss as his backup. "I wouldn't want to be in that situation," Snow, a former goaltender, said of Berube. "It stinks. I think he's handling it like a professional. You rewind the last year so start the season [against] Chicago home-and-home, [Halak] was injured. We picked up J-F off waivers, and thank goodness we did. There were times during the season where we had three goalies on our roster, [but] two were healthy. We've got three good goalies. 
It's one of the strengths of our organization; we've got good goaltending in Bridgeport, we have a couple of great goaltenders in the pipeline that are playing in other places. It's a strength of our organization. I get [the frustration], but it's a 23-man roster. We'll carry as many goalies as we see fit. "And by the way, the CBA says a player is an unrestricted free agent at a certain time. It is what it is." For now, Snow's main concern is getting the Islanders back on track. The game Friday is the first of 66 remaining, and the Islanders are seven points in back of the second wild card into the Stanley Cup Playoffs in the East. "I understand the frustration of our fans," Snow said. "It's the same passionate fan base that blew the roof off the Coliseum in the Pittsburgh series [in 2013], the Washington series (in 2015), the last year of the Coliseum, and they blew the roof off the Barclays Center last year. We were fortunate enough to get to the second round. "It's a passionate fan base; we're in New York. Of course, we're going to hear it when we do well and we're going to hear it when we don't do so well. It's what we signed up for. I can appreciate the passion. Quite frankly, it's one of the best parts of being a general manager of a New York-based team, is you've got the most passionate fan base in the world."
[Rare diseases and their patient organization: the Hungarian Federation of People with Rare and Congenital Diseases]. The aim of the author is to discuss special issues of rare diseases, with emphasis on circumstances in Hungary, including those that led to the foundation of the non-governmental organization, the Hungarian Federation of People with Rare and Congenital Diseases. The author briefly reviews the most important findings of current international surveys, which have been performed with or without the involvement of member associations of the Federation. At the level of medical and social services in Hungary, it is still largely "incidental" whether a patient reaches the appropriate expert or centre for diagnosis or treatment. The few existing services are difficult to find owing to the lack of suitable "pathways" and referrals, and there are long delays in obtaining a first appointment, resulting in vulnerability and inequality across regions. The overall consequence is insufficient or absent access to medical and social services. There are also difficulties with the supply of orphan medication and with long durations of hospitalization. At the level of patient organizations, financial scarcity and uncertainty are typical, combined with inadequate infrastructure and human resources. Poor organization of patient bodies and insufficient cooperation among them are characteristic as well. The author concludes that a National Plan or Strategy is needed to remedy the current fragmentation of services, enabling patients and health, social and educational professionals to provide and use the best care in practice. This would ensure that all patients with rare diseases are diagnosed in the shortest possible time and gain timely access to the care and support they need, decreasing the burden on families and society.
Poly(ADP-ribose)polymerase (PARP) is essential for facilitating DNA repair, controlling RNA transcription, mediating cell death and regulating immune response. This activity makes PARP inhibitors targets for a number of disorders. PARP inhibitors have shown utility for treating diseases such as ischemia reperfusion injury, inflammatory disease, retroviral infections, myocardial infarction, stroke and other neural trauma, organ transplantation, reperfusion of the eye, kidney, gut and skeletal muscle, arthritis, gout, inflammatory bowel disease, CNS inflammation such as MS and allergic encephalitis, sepsis, septic shock, hemorrhagic shock, pulmonary fibrosis, and uveitis, as well as diabetes and Parkinson's disease, liver toxicity following acetaminophen overdose, cardiac and kidney toxicities from doxorubicin and platinum-based antineoplastic agents, and skin damage secondary to sulfur mustards. PARP inhibitors have also been shown to potentiate radiation and chemotherapy by increasing cell death of cancer cells, limiting tumor growth, decreasing metastasis, and prolonging the survival of tumor-bearing animals. US 2002/0183325 A1 describes phthalazinone derivatives as PARP inhibitors. US 2004/0023968 A1 describes phthalazinone derivatives as PARP inhibitors. US 2005/0085476 A1 describes fused pyridazine derivatives as PARP inhibitors. US 2005/0059663 A1 describes phthalazinone derivatives as PARP inhibitors. US 2006/0063767 A1 describes phthalazinone derivatives as PARP inhibitors. US 2006/0142293 A1 describes phthalazinone derivatives as PARP inhibitors. US 2006/0149059 A1 describes phthalazinone derivatives as PARP inhibitors. US 2007/0093489 A1 describes phthalazinone derivatives as PARP inhibitors. There is therefore a need in the therapeutic arts for PARP inhibitors. Such compounds can be used to treat subjects suffering from cancer, and can further expand the range of treatment options available for such subjects.
CO oxidation on Pt(111) at near ambient pressures. The oxidation of CO on Pt(111) was investigated simultaneously by near ambient pressure X-ray photoelectron spectroscopy and online gas analysis. Different CO:O2 reaction mixtures at total pressures of up to 1 mbar were used in continuous flow mode to obtain an understanding of the surface chemistry. By temperature-programmed and by isothermal measurements, the onset temperature of the reaction was determined for the different reactant mixtures. Highest turnover frequencies were found for the stoichiometric mixture. At elevated temperatures, the reaction becomes diffusion-limited in both temperature-programmed and isothermal measurements. In the highly active regime, no adsorbates were detected on the surface; it is therefore concluded that the catalyst surface is in a metallic state, within the detection limits of the experiment, under the applied conditions. Minor bulk impurities such as silicon were observed to influence the reaction up to total inhibition by formation of non-platinum oxides.
Prince Charles’ intention to become an “activist” king could force the nation’s political class to rethink the role of the monarchy, MPs have been told. The Guardian on Wednesday revealed how Charles is set to reshape the sovereign’s role by making “heartfelt interventions” in national life if he becomes king, but the House of Commons heard that that may be increasingly problematic given emerging plans to devolve more powers to Scotland, England, Wales and Northern Ireland. The Labour MP Roger Godsiff told parliament on Thursday that “the four devolved parliaments, if that is what happens, together with the Westminster parliament, are going to have to decide what the role of the monarchy is in relation to the new constitutional settlement”. Comments by the prince’s allies that, should he replace his mother, Charles planned to continue speaking out on issues that mattered to him sparked warnings that this could precipitate a constitutional crisis because the monarch would be breaching conventions of political neutrality. “[Given] Prince Charles’ spokesman is suggesting Charles III would have a much more activist role within British politics, then maybe that would be appropriate to have such a discussion [in parliament],” Godsiff said. The prospect of parliament reviewing the monarchy’s role in such circumstances is likely to concern Buckingham Palace. Vernon Bogdanor, professor of government at King’s College, London, sought to play down how far Charles would differ from the Queen in his approach to sovereignty. He told the Times: “There is no question of him making any interventions which are not approved by the government of the day. I gather that, out of courtesy, his speeches at the moment are sent to ministers for their comments, although he is not bound by them. 
He is extremely sensitive to constitutional tradition.” He added: “It is true that the style of the monarchy would change because he is a different person to the Queen.” Anti-monarchy campaigners said Charles risked bringing down the monarchy altogether because an activist king would be intolerable in a democratic society. “The more Charles speaks out, the more transparency people will demand, and the more we’ll find out about his interference,” said Graham Smith, chief executive of Republic, a group campaigning for an elected head of state. “The time will come when the people decide they no longer want him in office, yet have no means to get rid of him. For the sake of British democracy, Charles’s reign must be brief and must end with an election for the next head of state.” Catherine Mayer, the author of a forthcoming biography of the heir, Charles: The Heart of a King, said: “I would be extremely surprised if he does it as [the Queen] did it, not only because they are very different characters, but because he is of a different generation and will be living through different times. I think that he is someone who has always been much truer to himself than people realise and it is very hard to imagine that he would cease to be true to himself as monarch.” Patrick Holden, the Prince’s friend and sustainability adviser, told the BBC: “I think he’s always been very good at boundaries so I don’t think there is an issue here. I think he’s impeccable at all times, he’s ploughed the right line and he’s expressed himself in the right way. I don’t think there’s a problem with it at all.”
As part of the reauthorization for the FAA, Congressional Republicans are proposing a big change to the way America's air traffic control system works. Currently, a federal agency, the Federal Aviation Administration, oversees the air traffic control (ATC) of the nation — but HR 4441, the Aviation Innovation, Reform, and Reauthorization (AIRR) Act of 2016 has a provision that would spin off the air traffic control system into a separate, private, non-profit entity. That's not as crazy as it sounds. Many countries have done something similar, including Canada and the United Kingdom. The benefits, say House members who sponsored it, include removing ATC from the governmental budget process — instead, the country's control towers would be funded through user fees (that is, taxes tacked on to commercial airline tickets). A few years ago, during the government shutdown and sequester, ATC budgets were cut, causing delays throughout the system. Oh, and it'll move some 30,000 employees out of the government, which is a nice way for House Republicans to say they reduced the size of government. it could provide a stable, predictable funding stream for ATC The air traffic controllers' union likes it too. NATCA President Paul Rinaldi said in a statement before Congress that the legislation, as written, addresses the union's "primary issues of concern." It ensures that air traffic controllers keep their union-negotiated contracts, that safety and efficiency remain priorities, that ATC has a "stable, predictable funding stream," and that air traffic control serves "all segments of the aviation community," from commercial carriers to general aviation, and at airports large and small. But not everyone is on board.
The National Business Aviation Association, a lobbyist group for business-focused general aviation (think private jets), says that because airlines and their employees (like pilots' unions) will hold a majority of seats on the board of the new non-profit, they will have priority to make "decisions over access to airports and airspace" in their own interest, rather than the interest of the entire public. In other words, the NBAA appears to be concerned that privately owned aircraft could get banned from large airports like JFK or Newark in favor of commercial jets. even the airlines aren't all on board Even airlines aren't all on board. Delta says privatizing air traffic control will increase costs to travelers because fees will be tacked on to flights, making them more expensive and thus discouraging some number of flyers from flying. Though the bill would save taxpayers some amount of money by shifting ATC funding off of government books, it's unlikely that any individual taxpayer would see any change at all. Instead, airline tickets would likely become slightly more expensive. And House Democrats oppose the bill because, says Rick Larsen (D-WA), ranking member of the Aviation Subcommittee, it would demote the Department of Defense to a mere advisory role to the private corporation, rather than working in partnership with the FAA, "and that's just not a role I am comfortable relegating primarily to a private corporation." Another concern raised in a hearing this week was that the control over ATC would be handed over to a private company for free, and if the company were to fail, the US Government would be obligated to step in as it would be too important to collapse. but does it matter? Every organization and individual has politics and motivations, whether money or constituents or donors, to worry about. But what about the flying public? Does it matter to us?
Ultimately, probably not — apart from the possibility of yet another small fee tacked onto an already expensive ticket. But the planes themselves will still fly. Unless you have a direct involvement with an airline or air traffic control or are a private pilot, it's likely that the entire switchover would be relatively invisible. Correction: This post originally mischaracterized Representative Larsen's concerns over the role of the Department of Defense in the proposed air traffic control plan. Rep. Larsen is concerned about DoD being demoted to an advisory role to the new corporation, rather than working together with the FAA. It originally stated that the proposal would shift primary control of air traffic control away from DoD.
imlSoft File Guard 3.4.18 Free Download
Use this tool to lock, password-protect or write-protect files, folders and disks. Nobody can access or destroy the protected data without the password.
• Hide Sensitive Data: Hide your private files, folders and drives to make them completely invisible to users and programs.
• Lock Private Data: Locked files/folders/drives are protected from access; users cannot open, read, modify, move, delete, copy or rename the protected files/folders without the password. Files and sub-folders in a locked folder are also password-protected.
• Protect Important Data: Write-protected files/folders/drives are protected from editing; users cannot modify, delete or rename the protected files/folders without the password. Files and sub-folders in a protected folder are also write-protected.
• Password-Protected Uninstall: imlSoft Folder Guard is password-locked software; there is no way to run or uninstall it without the password.
• Integration with Windows Shell: Users can protect files, folders and drives from the right-click menu.
No need to recap all the other comments - for me, this is a fabulous machine. Favorite features: easy to plumb (especially with Chris Coffee's kit), pump is quiet with easy to adjust pressure, it has gauges and thermometers, beautiful (it even makes my formica kitchen look better), great sized boilers, and best of all, the PID makes excellent repeatable espresso. The PID really works well.

Negative Product Points

Like many other machines, it would be a hassle to manually refill the tank (I don't understand why they don't have side fills). If I couldn't plumb I would still get this model, buy Chris's plumb in kit (so easy) and use the tubing as a hose for top refill. I had to adjust the pump pressure when I got the machine (taking off the cover is easy, adjusting the pressure is very easy). It would be nice to have a hole cut through the cover so it could be adjusted without removing the cover (a very minor issue as it's only done once).

Detailed Commentary

This is my first real machine so I can't compare to anything else. I got interested in home espresso when a friend gave me an old Krups that on rare occasion made up to mediocre espresso. I decided to do one upgrade and am glad I chose this. It's very easy to use, which was a requirement for my wife. It will make very very tasty espresso so long as I grind correctly, tamp consistently, and have good beans. Steaming is great although it took a while to learn (two brands of milk just didn't work, and I learned that with a 4-hole tip I just have to leave the tip in the center and let each hole churn the milk in 4 'mini' vortexes, very easy). I considered many other models and decided upon a dual boiler. The Brewtus didn't seem to have all the features this has (separate steam power switch, rotary pump, dual plumbing options...), and Chris Coffee earns its great reputation. This works on a simple appliance timer so that's a plus - no need for an expensive timer that some brands require. Since I was buying a whole setup, I got a vario, espro tamper, and a naked portafilter (and a good deal). These tools helped me learn and make very consistent, tasty drinks.

Buying Experience

I agree with everyone - Chris Coffee is the best. They offer good advice and after purchase help. I don't think I'd buy from any other company after working with them.
Q: Missing Options in r.neighbours

I tried to use the GRASS GIS tool r.neighbors from QGIS with the methods "quart1" and "quart3", but it is not possible to select either of these methods. In GRASS GIS itself it worked well. Any ideas for a workaround for this issue in QGIS?

A: You could modify the parameters stored in the text file "r.neighbors.txt" to include those which are missing. This can be found in your QGIS directory, e.g.:

/QGIS 2.18/apps/qgis-ltr/python/plugins/processing/algs/grass7/description/r.neighbors.txt

When you open this file, it will probably look something like:

r.neighbors
Makes each cell category value a function of the category values assigned to the cells around it
Raster (r.*)
ParameterRaster|input|Input raster layer|False
ParameterSelection|method|Neighborhood operation|average;median;mode;minimum;maximum;stddev;sum;variance;diversity;interspersion
...

The line beginning ParameterSelection contains the list of input methods. Just add the remaining ones, separated by semi-colons (e.g. ;quart1;quart3;perc90;quantile, which were obtained from the r.neighbors manual):

ParameterSelection|method|Neighborhood operation|average;median;mode;minimum;maximum;stddev;sum;variance;diversity;interspersion;quart1;quart3;perc90;quantile

Then save the file (note that you may need rights to save it; if an error occurs, you can save the file to the desktop, delete the original file, then move the edited version into its place). Restart QGIS and the options should be available for use.
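If you would rather script the change than hand-edit the description file, a small Python sketch like the one below can append the missing methods. This is only a sketch under the assumptions shown in the answer (a line beginning `ParameterSelection|method|`, fields separated by `|`, methods by `;`); the function name `add_methods` and its defaults are my own invention, and you should back the file up first.

```python
def add_methods(path, extra=("quart1", "quart3", "perc90", "quantile")):
    """Append any missing neighborhood methods to the ParameterSelection
    line of a Processing description file such as r.neighbors.txt."""
    with open(path) as f:
        lines = f.read().splitlines()
    for i, line in enumerate(lines):
        # Only the method-selection line is touched; everything else is kept.
        if line.startswith("ParameterSelection|method|"):
            head, _, opts = line.rpartition("|")
            methods = opts.split(";")
            # Skip methods already present so the edit is safe to re-run.
            methods += [m for m in extra if m not in methods]
            lines[i] = head + "|" + ";".join(methods)
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

Run it once against a copy of r.neighbors.txt, then restart QGIS as described above.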
PenPencilEraser, an online school management software, is available to school administrations across the globe and to schools willing to transform their administration digitally. PenPencilEraser empowers schools to manage activities and functions effectively at every level with its ERP solution, bringing a revolutionary change to educational institutions by enabling paperless school administration, and keeping development an ongoing process. It is a simple and effective online school management system that runs in the cloud, requires no installation, is quick to start with no prior experience needed, is affordable, suits schools of any size, region or board, and is highly secure with constant backups.
/*
 * Copyright (c) 2012-2014 HockeyApp, Bit Stadium GmbH.
 * All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person
 * obtaining a copy of this software and associated documentation
 * files (the "Software"), to deal in the Software without
 * restriction, including without limitation the rights to use,
 * copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following
 * conditions:
 *
 * The above copyright notice and this permission notice shall be
 * included in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
 * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
 * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 */

#import "HockeySDK.h"

#if HOCKEYSDK_FEATURE_FEEDBACK

#import "BITActivityIndicatorButton.h"

@interface BITActivityIndicatorButton ()

@property (nonatomic, strong) UIActivityIndicatorView *indicator;
@property (nonatomic) BOOL indicatorVisible;

@end

@implementation BITActivityIndicatorButton

- (void)setShowsActivityIndicator:(BOOL)showsIndicator {
  if (self.indicatorVisible == showsIndicator) {
    return;
  }
  if (!self.indicator) {
    self.indicator = [[UIActivityIndicatorView alloc] initWithFrame:self.bounds];
    [self addSubview:self.indicator];
    [self.indicator setColor:[UIColor blackColor]];
  }
  self.indicatorVisible = showsIndicator;
  if (showsIndicator) {
    [self.indicator startAnimating];
    self.indicator.alpha = 1;
    self.layer.borderWidth = 1;
    self.layer.borderColor = [UIColor lightGrayColor].CGColor;
    self.layer.cornerRadius = 5;
    self.imageView.image = nil;
  } else {
    [self.indicator stopAnimating];
    self.layer.cornerRadius = 0;
    self.indicator.alpha = 0;
    self.layer.borderWidth = 0;
  }
}

- (void)layoutSubviews {
  [super layoutSubviews];
  [self.indicator setFrame:self.bounds];
}

@end

#endif /* HOCKEYSDK_FEATURE_FEEDBACK */
Mesozooplankton are key components of coastal ecosystems, linking the microbial food web to the classic food chain. In this study, species composition and abundance of mesozooplankton is studied for the Daya Bay in April (spring) and October (fall), 2006. A total of 27 species of mesozooplankton were identified in spring and 58 species in fall. Dominant species were Oithona tenuis, Flaccisagitta enflata, Penilia avirostris and Centropages tenuiremis in spring, shifting to Microsetella norvegica, Oithona tenuis and Parvocalanus crassirostris in fall. Higher mesozooplankton abundance was found at Aotou Cove and Dapeng'ao Cove compared to other stations, indicating the influence of eutrophication on mesozooplankton community in the Daya Bay. The outbreak of Noctiluca scintillans bloom in spring reduced the species diversity and abundance of mesozooplankton. This research was supported by the National Nature Science Foundation of China (Nos. 41276159, 41130855), and the Special Fund of Basic Research for Centre Commonweal Scientific Research Institute (Nos. 2007ZD07, 2011TS06, 2013TS07).
Q: ALTER stored procedure (with no changes) causes it to run 20x slower

I've been doing some speed tests on a stored procedure that does a bulk insert. It takes some JSON as a parameter, creates a table variable, weeds out any duplicates in the table variable from what's already in the destination table, and then copies the data from the table variable to the destination table. In running these tests, I was seeing some wildly different speed results that were driving me nuts. They made no sense. I finally pinned down the issue and I'm able to reproduce it consistently. Here's the process:

1. Delete all data from the destination table.
2. Run the stored procedure and pass in a JSON record of 50,000 rows.
3. It executes in about 1.5 seconds.
4. Repeat the process. This time it has existing data it needs to parse looking for duplicates. Same results: less than 2 seconds.
5. Repeat step 4 N times, always with the same results.
6. Run an ALTER on the SP without having made ANY changes to the SP itself.
7. Repeat step 4. This time it takes 30-40 seconds!!!
8. Delete the data in the destination table, repeat all the steps, same results.

I've been reading up on parameter sniffing, trying things like converting the passed in parameters to local parameters, and adding WITH RECOMPILE, but so far, the results are all the same. If this were to happen in prod, it would be unacceptable. Does anyone have any ideas? Thanks!

A: This is a bit long for a comment. SQL Server caches the plans for queries in a stored procedure when they are first run. In your case, the first run has an empty table so the query plan is based on an empty table. That seems to be a good query plan for your problem. When you alter the stored procedure, you do have one effect: it forgets the cached query plan. So a new plan is generated, one that uses the current size of the table. For whatever reason, this second query plan is much worse than the first. I don't know why.
Usually the problem is the other way around (the query plan on the empty table is the worse one). I would suggest that you figure out how to get the query to have the right plan when there is data and to recompile the code in the stored procedure each time it is run. That might be overkill, but it adds just a little overhead.
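The caching behaviour described in the answer can be sketched with a deliberately tiny toy model (plain Python, not SQL Server internals; the class name, the "scan"/"seek" labels and the row-count threshold are all invented for illustration): a strategy is picked from the table's size at first execution and reused until the cached plan is evicted, which is roughly what ALTER PROCEDURE does.

```python
class TinyPlanner:
    """Toy model of a plan cache: an access strategy is chosen from the
    table's row count at first execution and reused until invalidated."""

    def __init__(self):
        self.cache = {}

    def plan_for(self, proc, rowcount):
        if proc not in self.cache:                 # first run: compile a plan
            self.cache[proc] = "scan" if rowcount < 1000 else "seek"
        return self.cache[proc]                    # later runs: reuse it

    def invalidate(self, proc):                    # roughly what ALTER PROC does
        self.cache.pop(proc, None)

planner = TinyPlanner()
planner.plan_for("bulk_insert", 0)        # plan compiled against an empty table
planner.plan_for("bulk_insert", 50_000)   # stale plan reused regardless of growth
planner.invalidate("bulk_insert")         # the ALTER evicts the cached plan
planner.plan_for("bulk_insert", 50_000)   # recompiled against the new table size
```

In the question's scenario the roles are reversed relative to this toy's labels (the empty-table plan happened to be the fast one), which is the point: whichever plan is compiled first wins until something forces a recompile.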
Når jeg sidder i min 3.g-klasse, bryder jeg mig sjældent om mig selv. Jeg har ingen tålmodighed med andre mennesker, mit svar er altid det rigtige, og jeg bliver nogle gang ekstremt irritabel, når nogen skal have noget forklaret to gange. Det er meget mærkeligt. For jeg kommer ud af et hjem med gamle veluddannede hippier, der har fyldt fire etager ud med kærlighed, forståelse og omsorg. Mantraerne ’behandl andre, som du selv vil behandles’, og ’den stærkeste må hjælpe den svageste’, har været med til at bygge det grundlag, hvorpå jeg tog mine første skridt. Så hvordan i alverden kan det være endt så galt? Da jeg i 2001 startede i 0. klasse, så skolen ikke ud, som den gør i dag. Det faktum, at jeg var dygtig, betød ikke, at jeg skulle rykke en klasse op eller have sværere opgaver. I stedet så lærerne bort fra, at jeg sad og læste hele skolebiblioteket under bordet, eller også blev jeg sat til at hjælpe mine klassekammerater. Og det passede mig egentlig fint. Det hele behøvede jo ikke gå så hurtigt. Karrierevalg i 6. klasse Men på et eller andet tidspunkt ændrede den mentalitet sig. At den ændrede sig hos mig er måske blot en afspejling af, at den ændrede sig i samfundet og dermed også i skolen. I 6. klasse skulle vi vælge fremmedsprog, og hvis vi ikke gjorde det, så kunne vi aldrig komme på gymnasiet, og så blev vi aldrig til noget. Derefter skulle vi lægge uddannelsesplaner. Og på uddannelsesmesse. Og til samtaler med vores UU-vejleder, der slæbte rundt på en halv regnskov af foldere om karrierevalg. I en tid, hvor vi knapt var begyndt at finde os selv, skulle vi finde ud af, hvad vi ville være. Alt for tidligt blev vi introduceret for de tårnhøje karaktergennemsnit til de uendeligt lange uddannelser, der alle sammen betød, at man kunne blive til noget. I vores sind blev det oversat til, at det ’at blive til noget’ kunne måles i tal på en skala. Vi var blevet individer, der hver især skulle sikre vores egen fremtid. 
Jeg er ikke sikker på, at min forklaring er den eneste rigtige. Men den er en del af sandheden. Vi lever i en konkurrencestat, hvor kun de bedst egnede overlever. I spurten mod toppen taber man nogle. Det oplever uddannelsesinstitutionerne også. Måske er det ikke det faglige niveau, der er for højt, men derimod det psykiske pres. I skolen oplever jeg tit, at vi skal være dygtige – bare for at være dygtige. Karaktererne har overhalet fagligheden, og det betyder, at vi alt for sjældent lærer for at lære noget, men i stedet lærer for at præstere. Det er ikke længere okay bare at være okay. Det er ikke længere okay at ligge lunt i midten af klassens faglige niveau. Derfor er der nogle, der står af. Derfor er der nogle, som prøver at følge med i ræset, men aldrig når toppen – og uden grund mister deres faglige selvværd. Og derfor eksisterer 12-talspiger og -drenge med ambitioner så høje, at intet nogensinde kan blive godt nok. Ingen vi-kultur Der findes faktisk en opposition til denne konkurrencestatsmentalitet. Der findes forældre og lærere, som med næb og klør prøver at ændre denne forestilling om, at kun det ypperste er godt nok. De, som vil mejsle det ud i granit, at 7 er en god karakter. At det er okay at være okay. Men de har det svært. For det er ikke blot skolesystemet, de er oppe at toppes med, men derimod en hel generations cementerede tankegang. Man er oppe at slås med elever som mig, der i bund og grund godt ved, at vi har ret. Det er ikke min mening at bebrejde lærerne. Jeg har været meget heldig og haft nogle meget dygtige lærere, der også var rigtig gode mennesker. De påbød os utrætteligt arbejdsgrupper og faste siddepladser, så alle fik en sidemakker, der enten kunne give hjælp eller få hjælp. Dermed blev undervisningen tilgængelig for alle. Men min generation er ikke opdraget i samme ’vi-kultur’ som vores forældre. 
Vi er en ’jeg-generation’, og derfor var gruppearbejde og faste siddepladser blot en byrde for de dygtige og selvværdsformindskende for de okay. Det handlede efterhånden ikke om at lære noget som klasse, men om at lære noget selv, så vi selv kunne blive til noget. Derudover var vi fuldt og helt klar over, at de karakterer, vores lærere gav os, ikke fortalte noget om vores egenskaber som medmennesker, men alene handlede om vores faglige dygtighed. På den måde blev karaktererne vigtigst. Forældre kommer til kort Alle mennesker jagter anerkendelse. I vores tid kan anerkendelsen næsten kun opnås ved at være perfekt. Det kan man bl.a. se i vores uddannelsessystem, hvor man skal være god til alt fra idræt til fransk for at få et godt gennemsnit. Det kan også ses i den enorme usikkerhed, min generation lider af, fordi vi aldrig kan leve op til de krav, vi stiller til os selv. Min generation har fået konkurrencestat på hjernen. Der er en modsætning mellem samfundets forestilling om det ’at blive til noget’ og vores forældres forestilling om, hvad det gode liv indebærer. For de fleste forældre kommer deres børns lykke før deres uddannelses længde. Selv om vi fejler og ikke kan leve op til alle kravene, så elsker vores forældre os stadig. Det er fordi, vores forældre tager højde for, at vi er mennesker. Men sådan tænker konkurrencestaten ikke. Den føler slet ikke noget. Hvis vi ikke kan følge med på vores arbejdsplads eller i vores klasse, så er vi tabere og alene med nederlaget. I et hjem med teenagere er der mange former for kriser. Der er sociale kriser, tøjkriser, identitetskriser, og så er der skolekriserne. Hvad der begynder som det simpleste bump på vejen i forhold til en matematikopgave, kan ende i gråd, fordi det under overfladen handler om ikke at føle sig god nok. 
My own mother's comforting words go something like this: "But you have so much else going for you; you are so socially gifted." My little sister's sharp reply goes something like this: "But I can't fucking put that on an exam certificate." And no, she can't. But my mother should keep saying it anyway. Parents should keep telling us that the most important values cannot be measured on any scale.

Parents are easily outmatched by the competition state. They are supposed to be a counterweight to its notion that only the best will do, while at the same time preparing us to be part of it. No parent can do both. I believe good parenting ultimately comes down to knowing your child well enough to know what it needs. There is no universal recipe.

Parents should not just raise us to be good people. They should teach us to hold on to the idea that it is okay to be okay. They should tell us that in the sprint to the top we risk losing sight of one another, and that success can be lonely. They should explain to us, again and again, that we risk losing something far more important than all the 12s in the world: our humanity, and with it, ourselves.

That is what I think about when I sit in my own class and feel uneasy about the person I have become. Because I do not recognise her.
Today I’ve finished another bit of Deadzone, am playing Mars Attacks in a couple of hours (when I’ve finished the new scenario) and will be leading out the Orcs tonight for a God of Battles fight or two. Or maybe even three. I’m also composing some more bits for DreadBall and Eternal Battle in between those, which just leaves a few fleeting moments for me to be a little confused.

Flitting between systems without getting lost is a skill that I’ve learned slowly over the years, but which seems to be impossible to entirely master. It is, however, extraordinarily useful to be even half-competent at, because I need to be able to do it all the time. Of course, I am trying to make things easier for myself by writing things like Eternal Battle, which will allow me to focus on a smaller number of options, but there will always be more than one.

In many ways I envy the folk who have such focus that they can concentrate on a single period and scale to game in. You come across them occasionally, usually toting beautiful armies and bags of bespoke scenery for their chosen passion. Professionally I cannot do that, not that I ever got anywhere close privately either. Still, I do sometimes gaze on in wonder.

11 Responses to Butterflying About

I have absolutely no idea – I am often confronted with the exact same question. I can’t concentrate on one thing EVER, let alone in relation to the niche of my life that is reserved for hobby. Fortunately I have budget restrictions at the moment to keep me vaguely in check, plus the lovely box of Deadzone-shaped goodness that is winging its way to me soon. Although saying that, Dreadball Ultimate looks so very, very shiny…

What I used to do is paint and practice for tournaments. I’d set out to paint an army for a tournament six months away, play with it while painting, then pick another tournament and a new army once that was done. Chewed through Warmachine, Epic and Flames of War armies and practice that way. It’s got a whole lot more disorganised since I stopped doing tournaments. Been thinking that I need to focus on particular systems, but I’m still looking at half a dozen of ’em that I want to play…
Countermeasures for radiocesium in animal products in Norway after the Chernobyl accident: techniques, effectiveness, and costs. Nine years after the reactor accident at Chernobyl, contamination by radiocesium is still a significant problem in sheep and reindeer production in Norway. To reduce the impact of the accident, effective countermeasures had to be developed and implemented. Levels of radiocesium in meat were reduced by a combination of countermeasures such as special feeding regimes, the use of cesium binders (bentonite and Prussian blue), and changes in slaughtering time. The countermeasures were labor-intensive and expensive. Costs per averted dose were calculated to range from NOK 1,000 to 100,000 per person-Sv (7 NOK = $1 U.S.), with the use of cesium binders being the least expensive and condemnation of meat the most costly. Dietary advice, which did not involve any compensation costs, had a cost of NOK 40 per person-Sv. Apart from the rejection of meat in 1986, the countermeasures were deemed justified on a cost-benefit basis (less than NOK 600,000 per person-Sv).
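The cost comparison in the abstract can be sketched in a few lines of code. Note the assumptions: assigning the range endpoints (NOK 1,000 and NOK 100,000 per person-Sv) specifically to cesium binders and meat condemnation is inferred from the "least expensive"/"most costly" statement, and the currency conversion uses the exchange rate quoted in the abstract.

```python
# Illustrative comparison of countermeasure cost-effectiveness after Chernobyl.
# Figures are taken from the abstract; the per-measure mapping of the range
# endpoints is an assumption for illustration only.
NOK_PER_USD = 7  # exchange rate quoted in the abstract (7 NOK = $1 U.S.)

# Cost per averted collective dose, in NOK per person-Sv.
costs_nok_per_person_sv = {
    "dietary advice (no compensation costs)": 40,
    "cesium binders (Prussian blue, bentonite)": 1_000,   # assumed range low end
    "condemnation of meat": 100_000,                      # assumed range high end
}

def to_usd(nok: float) -> float:
    """Convert a NOK amount to USD at the abstract's quoted rate."""
    return nok / NOK_PER_USD

# Rank the measures from most to least cost-effective per averted dose.
for measure, nok in sorted(costs_nok_per_person_sv.items(), key=lambda kv: kv[1]):
    print(f"{measure}: NOK {nok:,}/person-Sv (about ${to_usd(nok):,.0f}/person-Sv)")
```

Ranking by cost per averted person-Sv is the same comparison the abstract's cost-benefit judgment rests on: every measure except the 1986 meat rejection fell under the NOK 600,000 per person-Sv threshold.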