Q: Fragments in Android — working with Fragment. I create and add fragments programmatically: fragmenttransaction = fragmentmanager.beginTransaction(); fragmenttransaction.setCustomAnimations(R.anim.slide_up, R.anim.slide_down); if (fragmentmanager.findFragmentByTag(LoginFragment.TAG) != null) { fragmenttransaction.remove(fragment_login); } if (fragmentmanager.findFragmentByTag(RegistrationFragment.TAG) == null) { fragmenttransaction.add(R.id.layout_login_window, fragment_registration, RegistrationFragment.TAG); } fragmenttransaction.commit(); The main Activity class extends AppCompatActivity. How do I hook into onCreateView of the fragment being added, so that I know when I can initialize the Views belonging to the fragment? A: The lifecycle methods are called automatically after fragmenttransaction.commit(); all you need to do is override them in your fragment class: public class LoginFragment extends Fragment { @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { System.out.println("onCreateView called"); return inflater.inflate(R.layout.article_view, container, false); } }
{ "pile_set_name": "StackExchange" }
In mrgybe's example concerning the Tran's estate, I question why his sons sold the business when new investors could potentially be brought in. Of course, the relative value of the business/estate must be substantiated legally, with some possible portion interpreted to be salable in the end. It's not clear why a total exit was the only alternative. More importantly, it appears that the senior Mr. Tran was a bit short-sighted in his estate management, as I'm thinking that there were undoubtedly many other viable avenues available to better protect inheritable assets. Of course mrgybe's comment is a straw man as well. There are ways to shelter family businesses, such as creating partnerships along the way. Tax reform in the past has in particular made it possible for farm owners to pass their holdings along to heirs who want to stay in the business. I think those reforms all make sense. In mrgybe's example, rhetorical as it is, the heirs get $3.25 million after taxes. I should weep for them? What percentage of the US wealth spectrum does that put them at? 95th or 98th? It's not only that. If the business was a limited partnership, with his sons as partners, when Tran passed, the tab would be much smaller. If Tran had planned ahead everything would have been OK, but he was the sole owner of the asset, and that is where he erred. I think this country has always had the best incentives for making money and always will. People move here from all parts to grab a slice of the dream, and that will never change, and we've had an estate tax through many, many decades of outrageous growth. An exemption in the $2.5-4.5M range is fair. The consensus seems to be that it's OK to keep accumulated wealth provided one is sufficiently sophisticated to put proper estate planning in place........but if one does not possess that sophistication, then it's OK for the government to take the lion's share of a lifetime's effort. Gates, Buffett, etc. 
will likely not pay estate taxes........but an incredibly hardworking, unsophisticated, first-generation immigrant will.....and lose the business in the process. Compassion anyone? Incidentally, this is not a straw man..........I estimated the numbers, but otherwise this is real. Oh, and one of the sons in this delightful and very traditional family was so ashamed at "losing" the business his father had spent his lifetime building that he left the US and took his own family back to Vietnam. No tears here. He and his family could not have done what they did anywhere else on the planet. And setting up an LP is not estate planning, it's just one choice for titling a business. I think one must think long and hard about what the outcome would be without any estate taxes. 90% of the heirs of wealthy families are not ambitious, or incentivized to work hard. I'm not sure if that's because they have tons of money, which they buy good Humboldt buds with, or they would rather wait till their parents croak. Wealth redistribution has always been, and will always be, part of the Western socialist economic structure. Many, myself included, believe that lifting someone up helps to create a more fluid economy. If we pass the bulk of net worth on from family to family, maybe the USA would resemble Saudi Arabia some day. Mr. G, I'm always perplexed how those on the right vilify Gates and Buffett, two of America's greatest capitalists of ALL time.....seems whacked. And setting up an LP is not estate planning, it's just one choice for titling a business. Your lack of understanding of the topics on which you apparently advise clients is truly breathtaking. What is a Family Limited Partnership? January 18, 2010 at 3:19 pm by Anthony Medico, Estate Planning Attorney The Family Limited Partnership (FLP) is a limited partnership created to transfer ownership of assets to family members with a minimum of tax consequences. 
The FLP is designed to lower the value of your investments and assets (for estate tax purposes) while still allowing you to maintain full control of your estate inside the limited partnership. It works particularly well to transfer a family business, real estate or an investment portfolio to the next generation. It helps to reduce estate taxes — and to reduce the risk that assets would need to be sold to pay those taxes. You're missing my point. If Mr. Tran had set up a limited partnership, his children would not have inherited the asset, they would have been OWNERS of the asset. Is that hard to understand? If you want to continue to attack, attack, attack, have at it. But remember, you, Mr. G, bid us farewell last spring, and now you are back, full of bile. Take a deep breath, be happy to be in America, and enjoy paying taxes like the rest of us. Mr. G, I enjoy the debate, but you constantly seem to attack on technicalities, and me, or Mac, or anyone else on the left who is not a wordsmith. My point is the argument gets buried, and you just shove it down my throat. I have ZERO sympathy, ZERO, for someone who prospers in business but does not take the necessary steps to ensure the continuity of his/her business, simple steps. You call them sophisticated estate planning steps; I disagree. Mr. Tran's son went back to Vietnam with millions of dollars, and he made it in the USA, not Thailand, not China, not England, not Australia. mrgybe uses one of the tricks of the right wing: quoting something that you didn't say to disagree with you and ridicule your position. Buried, of course, under a surface veneer of British civility. Here he says: Quote: The consensus seems to be that it's OK to keep accumulated wealth provided one is sufficiently sophisticated to put proper estate planning in place........but if one does not possess that sophistication, then it's OK for the government to take the lion's share of a lifetime's effort. No such consensus. 
What we are saying is there is ample room in the tax code for any business to protect a substantial amount of the capital, and the viability of the business, from estate taxes. Indeed, the lion's share of the example's assets pass on to the heirs. If their work helped build the business, they would or should have an ownership interest that could readily be protected. So his "consensus" is fake, created for snarky rhetorical value. Let me be clear. I favor tax policies that would apply equally to the Bush, Clinton and Kennedy families, that would tax a reasonable amount of their inherited wealth when it is inherited. The Republicans, when given authority, didn't simply increase the amount that could be protected; they let it all pass without taxation. Kind of like peerage. If mrgybe wants to argue for a fairer taxation system, he might find me agreeing with him--but challenging him to be specific, and to be politically feasible. Snarky comments, with only a veneer of civility, irritate the hell out of me. Living trusts do not reduce inheritance taxes for subsequent beneficiaries. boggsman1 wrote: If the trust is set up correctly, the spouse can avoid a portion of the inheritance tax, under a certain amount, BUT a trust does NOT avoid the eventuality of the inheritance tax when it's passed on to the heirs. boggsman1 wrote: And setting up an LP is not estate planning, it's just one choice for titling a business. My comments are not mere wordsmithing. All of the quotes above are fundamental misstatements. Coming as they do from a professed subject matter expert, they could lead a reader of this forum, or a client, to take actions, or to fail to take them, that could cost them huge amounts in estate taxes. Mr. G, it's not likely that one of my fellow Bay Area windsurfers is going to plan his estate based on a forum on iwindsurf.com. So, if you want to browbeat me, continue to do so; if you get off on it, I'm happy to lift your spirits. 
If you want to have a discussion about the subject matter, then I welcome it...it does not appear to be the case. BTW: item #1 is still factually correct, #2 was a mistake, #3 was a misstatement; you know what I meant, you just chose to avoid the topic, which was the laziness, or ignorance, of the hard-working Vietnamese man. bye
{ "pile_set_name": "Pile-CC" }
Forgery-preventing means for products are broadly divided into means for making it impossible to copy the products themselves and means for attaching an unreproducible label to products as forgery-preventing means, so that true and correct products (authentic products) can be identified. Herein, "product" is a generic name for a produced item such as an article, a commodity or goods. The latter means is used particularly frequently, because it is more generally versatile than the former, which must instead be dealt with individually. The latter means may be further divided into two techniques. One is a technique in which anyone can always identify the existence of the forgery-preventing means; a well-known example is the hologram. The other is a technique in which the forgery-preventing means is ordinarily undetectable, and only persons who know of its existence can detect it with special means to determine whether the product is authentic or not. A technique in which authenticity is identified by observing, with a polarizing plate, a latent image formed using a phase difference medium in which an optical axis is patterned is known (see, for example, JP-A-2008-137232 ("JP-A" means unexamined published Japanese patent application) and JP-A-2008-129421). However, there are problems in that the thus-visualized latent image is monochromatic when viewed from the front, and further, authenticity cannot be identified unless the polarizing plate is rotated, which makes authentication cumbersome and complicated.
{ "pile_set_name": "USPTO Backgrounds" }
Q: What should a student do when a professor delays submitting his letter of recommendation? I'm applying to Stanford, which requires 2 letters of recommendation from teachers, due in 2 days. Both teachers have known about this since mid-September, but one of them hasn't submitted his letter. I've asked him, and he tells me that he has one prepared, but he hasn't submitted it yet. I can't add a third letter that would qualify because of the way that the Common Application works. Should I wait for the deadline to pass, hoping that he turns his letter in at the last minute? Or should I ask someone else to write a letter in his place? I'm mainly considering the first option, because I have no reason not to trust him, but I'm not sure what will happen if he doesn't pull through. Turns out that they do accept letters of recommendation after the deadline, so long as the application is in on time. A: Politely and very gently remind the professor that the letter of recommendation is due. That's really all you can do. If your professor says they will submit it on time, try to believe them. As you have found out after posting the question, to the extent that it doesn't hold up other parts of their process, people on admissions committees understand that flaky letter writers don't mean that you are flaky, and they will usually do what they can to accept or consider late letters. Often, departments will remind students or letter writers that their letters are missing at, or even after, a deadline.
{ "pile_set_name": "StackExchange" }
President Donald Trump blasted Fox News, the conservative cable network with which he is usually on good terms, claiming on Monday that the outlet "sure ain't what it used to be." The president's criticism followed what he alleged was a "softball" interview with one of his chief critics in Congress, Rep. Eric Swalwell, D-Calif. Advertisement: "Just watched Rep. Eric Swalwell be asked endless softball questions by @marthamaccallum on @FoxNews about the phony Witch Hunt. He was just forced out of the Democrat Presidential Primary because he polled at ZERO. Fox sure ain’t what it used to be. Too bad!" Trump tweeted on Tuesday night. He later added, "Oh well, we still have the great @seanhannity who I hear has a really strong show tonight. 9:00 P.M." Earlier this month, Swalwell became the second Democrat to drop out of the race for the party's 2020 presidential nomination. (Former West Virginia State Sen. Richard Ojeda briefly entered the nominating contest earlier this year.) Advertisement: "After the first Democratic presidential debate, our polling and fundraising numbers weren’t what we had hoped for, and I no longer see a path forward to the nomination. My presidential campaign ends today," Swalwell said at the time. This is not the first time in recent weeks that Trump has torn into Fox News. He recently compared the conservative news network to "low ratings Fake News @CNN." "Watching @FoxNews weekend anchors is worse than watching low ratings Fake News @CNN, or Lyin’ Brian Williams (remember when he totally fabricated a War Story trying to make himself into a hero, & got fired" Trump tweeted earlier this month. During an interview with Salon in April, MacCallum said she hoped to cover both Democrats and Republicans on her network during the course of the 2020 elections. At the time, she was hosting a town hall meeting with Sen. Bernie Sanders, I-Vt., who is among the candidates vying for the Democratic nomination. 
Advertisement: "We would very much like an opportunity to host one of their debates," MacCallum told Salon, referring to the Democratic National Committee's decision not to allow Fox News to host one of its sanctioned debates. "We have said, both Bret [Baier] and I, that we hope that they will continue to keep that door open. It sounds like it's not open at the moment, but we really hope that these forums will keep that an open question going forward. And I think that the candidates should push back on it. I think that they should want to talk to us for the same reason that Bernie Sanders has, I think, rightly decided that this is a good place for him to be." During an interview with Salon last year, Swalwell vowed to use his oversight power as a member of the judiciary and intelligence panels to aggressively investigate the president. Advertisement: "We’re going to conduct the oversight role that we are responsible for, especially where Republicans gave Donald Trump presidential immunity for two years," Swalwell told Salon. "This guy has had two years of just free passes where he has not been reined in, and so, you’re essentially . . . It’s like essentially being responsible for a child for two years who’s had no rules and no accountability. It’s going to be a wake-up call for the president. I think he saw that in real time yesterday when he met with Leader Pelosi and Leader Schumer. We’ll investigate where the Republicans didn’t, and that means filling in the gaps with the Russian investigation, that means seeing his taxes to see if his financial interests are driving foreign and domestic policy. That means looking at how people are cashing in on access that he gives them . . . how he’s cashing in on access that he gives people to the White House."
{ "pile_set_name": "OpenWebText2" }
Two species of human Fc epsilon receptor II (Fc epsilon RII/CD23): tissue-specific and IL-4-specific regulation of gene expression. The Fc epsilon receptor II (Fc epsilon RII, CD23) functions in B cell growth and differentiation and in IgE-mediated immunity. The Fc epsilon RII structure expressed on various cell types has been analyzed identifying two species, Fc epsilon RIIa and Fc epsilon RIIb. Sequence analysis of the cloned cDNAs revealed that they differ only at the N-terminal cytoplasmic region, but share the same C-terminal extracellular region. These Fc epsilon RII species appear to be generated utilizing different transcriptional initiation sites and alternative RNA splicing. Fc epsilon RIIa is constitutively expressed only in normal B cells and B cell lines, whereas Fc epsilon RIIb expression is detectable in various cell types, such as monocytes and eosinophils. Normally, Fc epsilon RIIb is undetectable in B cells and monocytes, and can be induced by interleukin-4. Moreover, Fc epsilon RIIb is expressed on peripheral blood lymphocytes in atopic individuals. These findings may explain the difference in Fc epsilon RIIa and Fc epsilon RIIb function in B cells and the effector phase of IgE-mediated immunity.
{ "pile_set_name": "PubMed Abstracts" }
1. Field of the Invention The present invention relates to a vertical actuator mechanism for the legs of a walking machine and, more particularly, to a vertical actuator mechanism for a pantograph leg mechanism for a walking machine which achieves isolation between the vertical and horizontal actuator mechanisms in a simple and efficient manner. 2. Description of the Prior Art It has long been known that it would be advantageous to develop a machine that walks rather than one driven by wheels or treads, because a machine with legs can operate in areas and on terrain where wheeled or treaded vehicles cannot go. Knowing this, numerous attempts have been made over the years to develop a walking machine. However, the problems in developing such a machine have been so formidable that, to this time, no satisfactory machine exists. These problems include coordinating the movement of the various legs, teaching the machine how to sense its environment so that each foot lands properly, and teaching the machine balance so that it does not fall over. The simple fact of the matter is that while walking is second nature to people and animals, it is extremely complex for computers and robots. The computer, with its ability to process enormous amounts of data and actuate suitable commands, promises to make the control of the legs of a walking machine a manageable problem. As a result, a number of researchers around the world have been working on the development of various different types of walking machines. It is highly desirable to form the leg of a walking machine out of a pantograph mechanism. 
A pantograph is a parallelogram structure in which one corner of the parallelogram is a fixed point and the end of one of the legs of the pantograph is the movable point, the foot. There exists within the pantograph structure what is known as the true pantograph point, a point which lies on a straight line between the fixed point and the movable point, where motion of the true pantograph point in any direction will be translated into a proportional motion of the movable point. In order to move the foot of the pantograph structure both vertically and horizontally, so that a walking machine to which the leg mechanism is attached can walk, both a vertical actuator mechanism and a horizontal actuator mechanism are required. By using a pantograph mechanism, small motions of the pantograph point can be multiplied at the foot, so that compact actuator mechanisms can be used and small movements of these mechanisms can be translated into large movements of the foot. Another highly desirable objective of a pantograph mechanism is that complete isolation be achieved between the vertical actuator mechanism and the horizontal actuator mechanism. The reason for this is that the vertical actuator mechanism supports the weight of the walking machine and it must, of necessity, be capable of exerting large forces. The horizontal actuator mechanism, on the other hand, is solely responsible for moving the foot horizontally and is not loaded by the weight of the walking machine. Thus, this actuator mechanism can be made small and fast provided that horizontal and vertical foot movements can be isolated and that the walking machine body can be kept level to gravity. In copending application Ser. No. 
476,558, filed concurrently herewith, entitled Leg Mechanism for Walking Machine, and assigned to Odetics, Inc., the assignee of the present application, there is disclosed a foldable pantograph leg mechanism for a walking machine which will allow the legs of a walking machine to fold compactly against the machine body. In copending application Ser. No. 476,566, filed concurrently herewith, entitled Horizontal Actuator Mechanism for the Legs of a Walking Machine, and assigned to Odetics, Inc., the assignee of the present application, there is disclosed a horizontal actuator mechanism for the pantograph leg mechanism of a walking machine which allows very small motors to be used in applying the horizontal actuation force. In copending application Ser. No. 476,629, filed concurrently herewith, entitled Walking Machine, and assigned to Odetics, Inc., the assignee of the present application, there is disclosed a walking machine including a body having six legs attached thereto, extending therearound, in uniform positions around the body. As discussed in such application, by arranging a walking machine with a body and six uniformly spaced legs, the machine has the ability to maneuver in areas that are as small as a human being can maneuver in. Upon review of these applications, the problem remains to drive the pantograph leg mechanism in such a manner that isolation between the horizontal and vertical actuator mechanisms is achieved. The problem with using the true pantograph point as the vertical drive point is that the vertical actuator would have to follow the horizontal movement of the leg. This would mean that the actuator itself would have to slide on rails or in some other way accommodate the horizontal motion of the pantograph point, without changing its relationship to the vertical. 
Actually, one could either have the vertical actuator slide horizontally on rails to accommodate the horizontal motion of the pantograph point, or have the horizontal actuator mechanism slide on rails to accommodate the vertical motion. Either alternative is highly inefficient because of the necessity of providing heavy, bulky mechanisms to support the sliding structure. It is the desire of the present invention to provide a simple, compact, lightweight mechanism. The ideal type of linkage to transmit large forces with a lightweight, efficient structure is a push-pull link (a strut), where the link is strictly in tension or compression, rather than sliding rails that have to carry high moments. One end of the strut would be connected to a vertical drive mechanism and the other end connected to a point on the pantograph. However, this causes a swinging action of the strut, and if connected to the true pantograph point, horizontal movement of the mechanism will cause vertical movement of the connection point, preventing the desired isolation between the horizontal and vertical actuator mechanisms.
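The proportional-motion property of the true pantograph point described above amounts to a constant linear scaling: any displacement of the pantograph point is multiplied by a fixed ratio at the foot, which is why compact actuators with small strokes suffice. A minimal numeric sketch of that relationship (the 4:1 ratio and function name are purely illustrative, not taken from the patent):

```python
def foot_displacement(point_dx: float, point_dy: float, ratio: float):
    """Displacement of the pantograph foot for a given displacement of the
    true pantograph point. The point lies on the straight line between the
    fixed point and the foot, so motion in any direction is scaled by a
    constant ratio (distance fixed-to-foot / distance fixed-to-point)."""
    return (point_dx * ratio, point_dy * ratio)

# A 5 mm horizontal actuator stroke with a 4:1 pantograph geometry
# becomes a 20 mm horizontal foot step.
print(foot_displacement(5.0, 0.0, 4.0))  # (20.0, 0.0)
```

Note how the scaling applies component-wise: a purely horizontal motion of the point produces a purely horizontal motion of the foot, which is exactly the property the isolation of vertical and horizontal actuators relies on.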
{ "pile_set_name": "USPTO Backgrounds" }
Q: How do I pronounce Emacs? I struggle to pronounce many computer terms, and to this day still mispronounce LaTeX in my head - it just sounds better rhyming with flex. How would I pronounce Emacs? A: Just think of E-max. The letter E is pronounced the way the name of the letter is pronounced. It has the same sound as in need, clean, feel, green etc. Also, I recommend this video: How to Say or Pronounce Emacs
{ "pile_set_name": "StackExchange" }
Q: How to monitor what Ubuntu One is doing? Is there a way to monitor what Ubuntu One is doing when it seems to be chewing up so much network bandwidth? I'd like to see what files it's syncing. I looked in ~/.cache/ubuntuone/log but can't find anything that shows this. A: Install Magicicada. It will allow you to see what Ubuntu One is doing with a nice interface. sudo apt-get install magicicada
{ "pile_set_name": "StackExchange" }
Background ========== It has been more than 30 years since human T-cell leukemia virus type-I (HTLV-1) was shown to be the causative agent of adult T-cell leukemia (ATL) \[[@B1],[@B2]\]. However, understanding the true nature of the multiple leukemogenic events \[[@B3]\] that are essential for this aggressive transformation remains elusive \[[@B4]-[@B9]\]. Although approximately 5% of HTLV-1-infected individuals develop ATL after a long latency period, the majority remain asymptomatic carriers (ACs) throughout their lifetimes. However, there are not enough clear determinants to distinguish between individuals who eventually develop ATL and those who remain as ACs \[[@B10],[@B11]\]. To discover the factors associated with disease development, long-term prospective studies have assessed the correlation between disease outcome and proviral load (PVL), that is, the percentage of infected cells among the total peripheral blood mononuclear cells (PBMCs) \[[@B10]-[@B12]\]. The 'Joint Study on Predisposing Factors of ATL Development' (JSPFAD) \[[@B13]\] showed that a PVL higher than 4% is one of the indications of risk for progression to ATL \[[@B10]\]. Although an elevated PVL is currently the best characterized factor associated with a high risk of ATL development, a high PVL alone is not sufficient for disease prediction, suggesting the need to discover additional predictive factors \[[@B10],[@B11]\]. Because ATL is a malignancy caused by HTLV-1 infection, both the integration of provirus into the host genome and the clonal expansion of infected cells are highly critical leukemogenic events \[[@B6],[@B7],[@B14],[@B15]\]. Although many studies have addressed these aspects, the mechanism of HTLV-1 clonal expansion has not been elucidated \[[@B15]-[@B35]\]. 
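The proviral load discussed above is simply the percentage of infected cells among total PBMCs; under the usual assumption of one integrated provirus per infected cell, it can be computed from a real-time PCR copy-number measurement. A minimal sketch of that calculation (function and variable names are illustrative, not from the JSPFAD pipeline):

```python
def proviral_load(htlv1_copies: float, pbmc_count: float) -> float:
    """Proviral load (PVL) as the percentage of infected cells among
    total PBMCs, assuming one integrated provirus per infected cell."""
    if pbmc_count <= 0:
        raise ValueError("pbmc_count must be positive")
    return 100.0 * htlv1_copies / pbmc_count

# JSPFAD reported that a PVL higher than 4% is one indication of
# risk for progression to ATL.
HIGH_RISK_THRESHOLD = 4.0

pvl = proviral_load(htlv1_copies=6_500, pbmc_count=100_000)
print(pvl, pvl > HIGH_RISK_THRESHOLD)  # 6.5 True
```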
Accurate monitoring for changes in clonality occurring before, during, and after ATL development is of great interest and of major clinical significance not only to clarify the underlying mechanisms but also to discover reliable predictive biomarkers for disease progression. A broad range of evidence strongly supports that most neoplasms are composed of clonally expanded cell populations \[[@B36]-[@B38]\]. Owing to its biological significance, the concept of clonal expansion in cancer biology has been investigated using a variety of approaches in many tumor types \[[@B36]-[@B39]\], including ATL \[[@B6],[@B15],[@B16],[@B18]-[@B20],[@B22],[@B24],[@B29]-[@B32]\]. Clonal proliferation of HTLV-1-infected cells was first detected as monoclonal-derived bands by southern blotting \[[@B33]\]. Early studies found that monoclonal integration of HTLV-1 is a hallmark of ATL cells \[[@B16]\]. Furthermore, it was suggested that detecting a monoclonal band is useful for diagnosis and is associated with a high risk of ATL development \[[@B29],[@B30]\]. Subsequent PCR-based methods included inverse PCR, linker-mediated PCR, and inverse long PCR, which enabled analysis of samples with clonality below the detection threshold of southern blotting \[[@B17],[@B25],[@B31],[@B34]\]. Based on the observed banding patterns, the clonality of the samples was described as having undergone monoclonal, oligoclonal, or polyclonal expansion. Such PCR-based analyses revealed that, in addition to a monoclonal proliferation of infected cells, a monoclonal or polyclonal proliferation occurs even in non-malignant HTLV-1 carriers \[[@B31],[@B35]\]. Moreover, considering the stability of the HTLV-1 proviral sequence, it was hypothesized that maintaining a high PVL is achieved by persistent clonal proliferation of infected cells *in vivo*\[[@B25]\]. This hypothesis was further supported by the detection of a particular HTLV-1 clone in the same carrier over the course of several years \[[@B18]\]. 
Two Miyazaki cohort studies focused on the maintenance and establishment of clonal expansion: Okayama *et al.* analyzed the maintenance of a pre-leukemic clone in an AC state several years prior to ATL onset \[[@B19]\], and Tanaka *et al.* assessed the establishment of clonal expansion by comparing the clonality status of long-term carriers with that of seroconverters. They showed that some of the clones from long-term carriers were stable and large enough to be consistently detectable by inverse long PCR; however, those from seroconverters were unstable and rarely detectable over time \[[@B20]\]. Knowledge provided by conventional studies has shed light on the next challenges worthy of further investigation. Owing to technical hurdles, however, previous studies isolated small numbers of integration sites from highly abundant clones and detected low abundant clones in a non-reproducible manner \[[@B22],[@B34]\]. Furthermore, conventional techniques could not provide adequate information regarding the number of infected cells in each clone (clone size) \[[@B22]\]. To effectively track and monitor HTLV-1 clonal composition and dynamics, we considered devising a new method that would not only enable the high-throughput isolation of integration sites but also provide an accurate measurement of clone size. PCR is a necessary step for the integration site isolation and clonality analysis. However, bias in the amplification of DNA fragments (owing to issues such as extreme fragment length and high GC content) is intrinsic to any PCR-based method \[[@B40]-[@B45]\]. Different fragment amplification efficiencies make it difficult to calculate the amount of starting DNA (the original distribution of template DNA) from PCR products. Hence, estimating HTLV-1 clonal abundance, which requires calculating the number of starting DNA fragments, is only achievable by avoiding the PCR bias. 
Recently, Bangham's research group analyzed HTLV-1 clonality and integration site preference by a high-throughput method \[[@B22]\]. In the method developed by Gillet *et al.*, clone sizes were estimated using the lengths of DNA fragments (shear sites generated by sonication) as a strategy for removing PCR bias \[[@B22]\]. Owing to the limited variation in DNA fragment size observed with shearing, the probability of generating starting fragments of the same lengths is high, leading to a nonlinear relationship between fragment length and clone size \[[@B22],[@B46]\]. Therefore, Gillet *et al.* used a calibration curve to statistically correct the shear site data \[[@B22]\]. Later, Berry *et al.* introduced a statistical approach, and further addressed the difficulties of estimating clone size from shear site data \[[@B46]\]. Their approach estimates the size of small clones with little error, but estimates for larger clones have greater error \[[@B46]\]. A parameter adopted from the Gini coefficient \[[@B47],[@B48]\] and termed the oligoclonality index was used to describe the size and distribution of HTLV-1 clones \[[@B22]\]. It has been demonstrated that the oligoclonality index differs between malignant and non-malignant HTLV-1 infections, and also that the high PVL in HTLV-1-associated myelopathy is due to cells harboring large numbers of unique integration sites \[[@B22]\]. Furthermore, genome-wide integration site profiling of clinical samples revealed that the abundance of a given clone *in vivo* correlates with the features of the flanking host genome \[[@B22],[@B24]\]; although there was not a specific hotspot, HTLV-1 more frequently integrated in transcriptionally active regions of the host genome \[[@B22],[@B24]\]. These findings further clarified the characteristics of HTLV-1 integration sites, and strongly suggested the importance of HTLV-1 clonal expansion *in vivo*. Here we introduce a method that overcomes many of the limitations of currently available methods. 
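The oligoclonality index adopted from the Gini coefficient can be sketched as follows: the value is 0 when all clones are the same size (a fully polyclonal population) and approaches 1 as a single clone dominates (monoclonal expansion). This uses a standard Gini formulation and may differ in normalization details from the index as published by Gillet *et al.*:

```python
def oligoclonality_index(clone_sizes):
    """Gini coefficient of clone abundances: 0 when all clones are
    equal in size (polyclonal), approaching 1 when one clone
    dominates (monoclonal)."""
    sizes = sorted(s for s in clone_sizes if s > 0)
    n = len(sizes)
    total = sum(sizes)
    if n == 0:
        raise ValueError("need at least one non-empty clone")
    # Standard Gini formula over sizes sorted ascending:
    # G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum(i * s for i, s in enumerate(sizes, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

print(oligoclonality_index([10, 10, 10, 10]))  # 0.0 -- polyclonal
print(oligoclonality_index([1, 1, 1, 97]))     # 0.72 -- one clone dominates
```

A single-statistic summary like this is what allows clonality to be compared across samples with very different total read counts and proviral loads.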
Taking advantage of next-generation sequencing (NGS) technology, nested-splinkerette PCR, and a tag system, we designed a new high-throughput method that enables specific isolation of HTLV-1 integration sites and, most importantly, allows for the quantification of clonality not only from the major clones and high-PVL samples but also from low-abundance clones (minor clones) and samples with low PVLs. Moreover, we conducted comprehensive internal validation experiments to assess the effectiveness and accuracy of our new methodology. A preliminary validation was conducted by analyzing DNA samples from HTLV-1-infected individuals with different PVLs and disease status. Subsequently, an internal validation was performed that included an appropriate control with known integration sites and clonality patterns. We present our methodology, which illustrates that employing the tag system is effective for improving quantification of clonal abundance. Methods ======= Our clonality analysis method included two main aspects: (1) wet experiments, and (2) *in silico* analysis (Additional file [1](#S1){ref-type="supplementary-material"}: Figure S1). A general explanation of materials and methods is provided here, and detailed protocols of the wet experiments are included in Additional file [1](#S1){ref-type="supplementary-material"}: Notes. The *in silico* analysis is further described in Results and discussion. NGS data have been deposited in the Sequence Read Archive of NCBI under accession number SRP038906. Wet experiments --------------- ### ***Biological samples: specimens and cell lines*** Specimens: In total, five clinical samples were provided by a biomaterial bank of HTLV-1 carriers, JSPFAD \[[@B13],[@B49]\]. The clinical samples were part of those collected with informed consent as part of a collaborative project of JSPFAD. The project was approved by the Institute of Medical Science, the University of Tokyo (IMSUT) Human Genome Research Ethics Committee.
Information about the disease status of samples was obtained from the JSPFAD database, in which HTLV-1-infected individuals were diagnosed based on the Shimoyama criteria \[[@B50]\]. In brief, genomic DNA from PBMCs was isolated using a QIAGEN Blood kit. PVLs were measured by real-time PCR using the ABI PRISM 7000 Sequence Detection System as described in \[[@B10]\]. Cell lines: The IL2-dependent TL-Om1 cell line \[[@B51]\] was maintained in RPMI 1640 medium supplemented with 10% heat-inactivated fetal calf serum (GIBCO), 1% penicillin-streptomycin (GIBCO), and 10 ng/mL IL2 (R&D systems). DNA extraction and PVL measurement were performed under the same conditions as for the patient samples. ### ***Illumina-specific library construction*** We employed a library preparation protocol specifically designed to isolate HTLV-1 integration sites. The final products in the library that we generated contained all the specific sequences necessary for the Illumina HiSeq 2000 platform (Additional file [1](#S1){ref-type="supplementary-material"}: Figure S2). These products included a 5′-flow cell binding sequence, a region compatible with the read-1 sequencing primer, 5-bp random nucleotides, 5-bp known barcodes for multiplexing samples, HTLV-1 long terminal repeat (LTR), human or HTLV-1 genomic DNA, a region compatible with the read-2 and read-3 sequencing primers, 8-bp random tags, and a 3′-flow cell binding sequence, arranged from 5′ to 3′ (Additional file [1](#S1){ref-type="supplementary-material"}: Figure S2B). Incorporating the 5-bp random nucleotides downstream of the region compatible with the read-1 sequencing primer was critical and resulted in high-quality sequence data. For our first samples (S-1, S-2, S-3, and S-4), we used a library designed without the first 5 bp of random nucleotides as input for the HiSeq 2000 sequencer. Because all fragments began with the same LTR sequence, clusters generated in the flow cells could not be differentiated appropriately.
These samples resulted in low-quality sequence data (see Additional file [1](#S1){ref-type="supplementary-material"}: Notes). Designing the first 5-bp randomly resulted in high-quality sequence data for the remaining samples because clusters were differentiated with no problem during the first five cycles of sequencing (data not shown). Our library construction pipeline comprised the following four steps (Additional file [1](#S1){ref-type="supplementary-material"}: Figure S2) (Additional file [1](#S1){ref-type="supplementary-material"}: Notes): ![**Estimating clone size by 'shear sites'.** Also see Additional file [2](#S2){ref-type="supplementary-material"}: Figure S2 for a simple image from an integration site and its shear sites. **(A)** Depicted is the complex population of uninfected cells (grey circles) together with infected clones (circles of different colors). A clone is shown as a group of sister cells (circles of the same color) having the same integration site (IS). Different clones are distinguishable based on differing integration sites, and thus the number of integration sites represents the number of infected clones. For example, the six different unique integration sites refer to six unique clones. **(B)** Genomic DNA fragmented by sonication generates random shear sites (fragments of different length). Fragment size, measured by an Agilent Bioanalyzer, ranged from 300 to 700 bp. This size range can theoretically provide approximately 400 variations. **(C)** The size distribution of fragments decreased following amplification by integration-site-specific PCR. From the deep sequencing data, the original number of starting fragments could be estimated by removing PCR duplicates and counting fragments with different lengths. For example, five different lengths of PCR amplicons represent five infected sister cells. 
**(D)** We analyzed four samples, including (S-1: asymptomatic carrier (AC), (8% PVL)), (S-2: smoldering (SM), (9% PVL)), (S-3: smoldering, (31% PVL)), and (S-4: acute, (33% PVL)). Using our method, the clone sizes were quantified by considering only shear sites. The first major clone (the largest clone) of each sample was mapped to (chr11:41829319(+)), (chr15:59364370(+)), (chr4:563543(-)), and (chrX:83705328(-)), respectively. The shear site variations of each major clone were 209, 119, 242, and 222, respectively. Different colors on the pie graphs indicate different integration sites, and the size of each piece represents the clone size.](gm568-1){#F1} \(1\) DNA isolation: DNA was extracted as described above, and the concentration of extracted DNA was measured with a NanoDrop 2000 spectrophotometer (Thermo Scientific). We recommend using 10 μg of DNA as the starting material. However, in practice there are some rare clinical samples with limited DNA available. To handle such samples, the method was also optimized for 5 μg and 2 μg of starting DNA. \(2\) Fragmentation: According to the protocol provided in Additional file [1](#S1){ref-type="supplementary-material"}: Notes, the starting template DNA was sheared by sonication. The resulting fragments represented a size range of 300 to 700 bp as checked by an Agilent 2100 Bioanalyzer and DNA 7500 kit (Figure [1](#F1){ref-type="fig"}B). \(3\) Pre-PCR manipulations: Four steps of end repair, A-tailing, adaptor ligation, and size selection were performed as described in Additional file [1](#S1){ref-type="supplementary-material"}: Notes. \(4\) PCR: To amplify the junction between the genome and the viral insert, we used nested-splinkerette PCR (a variant of ligation-mediated PCR \[[@B52],[@B53]\]) (Additional file [1](#S1){ref-type="supplementary-material"}: Figure S2).
We confirmed that the technique specifically amplifies HTLV-1 integration sites, since there was no non-specific amplification from either human endogenous retroviruses or an exogenous retrovirus such as HIV (see Additional file [1](#S1){ref-type="supplementary-material"}: Table S1 and Additional file [2](#S2){ref-type="supplementary-material"}: Figure S1). Information on oligonucleotides, including adaptors and primers, and the LTR and HTLV-1 reference sequences \[[@B54]\] is provided in Additional file [1](#S1){ref-type="supplementary-material"}: Table S1. The final PCR products were sequenced using the HiSeq 2000 platform. *In silico* analysis -------------------- Raw sequencing data were processed according to the workflow described in the Results and discussion section. The initial forward read (100-bp) was termed Read-1, the reverse read (100-bp) was termed Read-3, and an index read (8-bp) was termed Read-2. In brief, analysis programs were written in Perl and run on a supercomputer system provided by The University of Tokyo's Human Genome Center at The Institute of Medical Science \[[@B55]\]. The sequencing output was checked for quality using the FastQC tool \[[@B56]\]. The regions corresponding to the LTR and HTLV-1 genome were subjected to a blast search against the reference sequences described in Additional file [1](#S1){ref-type="supplementary-material"}: Table S1. Following isolation of the integration sites, the flanking human sequences were mapped to the human genome (hg19) (the UCSC genome browser \[[@B57]\]) by Bowtie 1.0.0 \[[@B58]\]. The final processed data included information about shear sites (R1R3), tags (R1R2), and a combination of tags and shear sites (R1R2R3). Fitting the data to the zero-truncated Poisson distribution for retrieving correlation coefficients was done with the R-package 'gamlss.tr' \[[@B59]\]. The Gini coefficient was calculated by StatsDirect medical statistics software \[[@B60]\].
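The Gini-based oligoclonality index used in this analysis can be illustrated with a short sketch. This is an illustrative Python version of the standard Gini formula applied to clone abundances, not the StatsDirect implementation used in the study:

```python
def gini(abundances):
    """Gini coefficient over clone abundances: 0 for a perfectly
    polyclonal sample (all clones equal in size), approaching 1 when a
    single clone dominates.  Uses the standard sorted-values formula."""
    xs = sorted(abundances)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n, with i = 1..n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n
```

For example, four equally abundant clones give a coefficient of 0, whereas a single dominant clone among four gives 0.75, the maximum for n = 4.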
Results and discussion ====================== General concepts ---------------- We originally designed our method to overcome the limitations of conventional techniques \[[@B31],[@B34]\] and to improve on the only existing high-throughput method \[[@B22]\]. In general, our method comprises two main parts: wet experiments and an *in-silico* analysis. We used genomic DNA as the starting material to prepare an appropriate library for Illumina sequencing. Subsequently, deep-sequencing data were analyzed by a supercomputer. The resulting information represents the clonality status of each sample (Additional file [1](#S1){ref-type="supplementary-material"}: Figure S1). There are complex populations of infected clones and uninfected cells in a given HTLV-1-infected individual. High-throughput clonality analysis requires monitoring two main characteristics of clones: HTLV-1 integration sites and the number of infected cells in each clone (clone size). Each HTLV-1-infected cell naturally harbors only a single integration site \[[@B23]\]. Therefore, the number of detected unique integration sites corresponds to the number of infected clones. Based on our analysis, which is consistent with the data of Gillet *et al*. \[[@B22]\], employing high-sensitivity deep sequencing allowed for the isolation of a large number of unique integration sites (UISs), including from samples with low PVLs (Figure [1](#F1){ref-type="fig"}). We analyzed four samples from HTLV-1-infected individuals with different PVLs, disease status, and expected clonality patterns. The samples include S-1: AC (8% PVL); S-2: smoldering ATL (SM) (9% PVL); S-3: SM (31% PVL); and S-4: acute ATL (33% PVL). Under the final optimized conditions, 1030, 39, 265, and 384 UISs were isolated from samples S-1 through S-4, respectively (Figure [1](#F1){ref-type="fig"}). The most challenging aspect of our clonality analysis was estimating the number of infected cells in each clone.
Although a necessary step in the analysis, PCR introduces a bias in the frequency of starting DNA material \[[@B40]-[@B45]\]. Because amplification causes significant changes in the initial frequency of starting materials, PCR products cannot be used directly to estimate the amount of the starting DNA material. To overcome this problem, we needed to manipulate DNA fragments to make them unique prior to PCR amplification. Thus, if each DNA fragment could be marked with a unique feature, it would then be possible to calculate its frequency based on the frequency of that unique feature. When a single unique stretch of DNA is amplified by PCR, the resulting product is a cluster of identical fragments termed PCR duplicates. Therefore, to estimate the frequency of starting DNA fragments, one should count the number of clusters with unique features. The remaining technical question then becomes how to mark the starting DNA prior to PCR amplification. In the following section, we compare and discuss two main strategies, namely (1) shear sites and (2) a tag system, which enable DNA fragments to be uniquely marked. Estimating the size of clones by shear sites -------------------------------------------- The first strategy, described by Gillet *et al.*, relies on shearing DNA by sonication, resulting in fragments of random length \[[@B22]\]. Sonication-derived shear sites were thus used as a distinguishing feature to make fragments unique prior to PCR. Clone sizes were then estimated by statistical approaches \[[@B22],[@B46]\]. To directly assess the effectiveness of the shear site strategy, we analyzed the clonality of the aforementioned clinical samples (S-1, S-2, S-3, and S-4). Genomic DNA was cleaved by sonication with fragments in the 300- to 700-bp range, theoretically providing approximately 400 possible variations in fragment size (Figure [1](#F1){ref-type="fig"}A and B). 
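The shear-site counting idea above can be sketched in a few lines. This is an illustrative outline (Python rather than the study's Perl pipeline; the coordinate strings are hypothetical): group reads by integration site and count the distinct fragment ends.

```python
from collections import defaultdict

def clone_sizes_by_shear_site(reads):
    """reads: iterable of (integration_site, shear_site) pairs taken from
    mapped sequencing reads.  PCR duplicates share both coordinates, so
    the number of distinct shear sites observed per integration site
    approximates the number of sister cells in that clone."""
    ends = defaultdict(set)
    for integration_site, shear_site in reads:
        ends[integration_site].add(shear_site)
    return {site: len(shears) for site, shears in ends.items()}
```

Three reads from one clone with two distinct fragment ends collapse to a clone size of 2; once a clone's true size exceeds the number of available end positions (roughly 400 here), this estimate saturates.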
Following library construction, however, the final product represented smaller size ranges, implying a relatively limited number of variations (Figure [1](#F1){ref-type="fig"}C). Finally, the number of PCR amplicons with unique shear sites was retrieved from deep-sequencing data. See Additional file [2](#S2){ref-type="supplementary-material"}: Figure S2 for a simple illustration of an integration site and its shear sites. The data obtained from the shear site experiments were not subjected to the calibration curves or statistical treatments used by Gillet *et al.* and Berry *et al.*, respectively (see Additional file [1](#S1){ref-type="supplementary-material"}: Notes) \[[@B22],[@B46]\]. For clarity, only the information relating to the major clone of each sample is provided in Figure [1](#F1){ref-type="fig"}D. The shear-site variations of the major clone were 209, 119, 242, and 222 for samples S-1 through S-4, respectively. Even in the case of control samples with 100% PVLs, the shear sites did not provide more than 225 variations (see Validation of the methodology). However, it was expected that samples with differing PVLs and disease status would harbor varying numbers of sister cells, at least in their major clones. Nevertheless, similar variations of shear sites were observed in the major clones of the AC, SM, and acute samples. These data suggest that, because the number of sister cells in each clone exceeded the shear site variations, the size of the clones was underestimated (Figure [1](#F1){ref-type="fig"}). This is most problematic in the case of large clones. Measuring the size of clones by the tag system ---------------------------------------------- We developed an alternate strategy to remove PCR bias and to estimate the amount of starting DNA. We designed a tag system in which 8-bp random nucleotides are incorporated at the ends of DNA fragments during the adaptor ligation step.
Each tag acts as a molecular barcode, which gives each DNA fragment a unique signature prior to PCR. Information on the frequency of observed tags from the deep-sequencing data can be used to remove the PCR duplicates and thereby estimate the original clonal abundance in the starting sample. Owing to their random design, the tags could theoretically provide approximately 65,536 variations. This degree of potential variation is expected to provide a unique tag for a large number of sister cells in each clone (Figure [2](#F2){ref-type="fig"}). ![**Measuring clone size using the tag system. (A)** The depiction above shows that shear site variations are not able to cover all sister cells in large clones. As the number of the sister cells in a given clone increases, the probability of DNA shearing at the same site increases. **(B)** Prior to PCR, we incorporated 8-bp random tags into each DNA fragment to uniquely mark them. Random tags could theoretically provide approximately 65,536 variations. The number of potential variations is expected to amply cover large numbers of the sister cells. **(C)** The tag information was used to remove PCR duplicates and to estimate the original number of starting fragments. If the fragments had the same shear sites but different tags, they were counted separately. For example, here five different combinations of tags and shear sites represent five infected cells. **(D)** Samples: S-1, S-2, S-3, and S-4 were analyzed by the final optimal condition (Bowtie parameters: -v 3 - - best, and filtering condition: (merging approach) JT-10). Clone size was measured by tags only or by the combination of shear sites and tags. The covered variations were (393, 142, 1751, and 2675) and (269, 119, 1192, and 2038), respectively.](gm568-2){#F2} We analyzed samples S-1, S-2, S-3, and S-4 to assess the effectiveness of our tag system for estimating clone size.
The major clone of each sample showed tag variations of 393, 142, 1751, and 2675, respectively (Figure [2](#F2){ref-type="fig"}D). Similar variations of tags and shear sites were observed in the largest clones of S-1 and S-2 ((shear sites *vs.* tags): (209 *vs.* 393) and (119 *vs.* 142)) (Figure [1](#F1){ref-type="fig"}D and Figure [2](#F2){ref-type="fig"}D). In all four samples, those variations were also similar in the minor clones, whose clone sizes did not exceed the shear site variations (approximately \<200 variations) (see Additional file [1](#S1){ref-type="supplementary-material"}: Table S3 and Additional file [2](#S2){ref-type="supplementary-material"}: Table S1 for information on the ten largest clones). However, the variations covered by tags were significantly greater than those of shear sites, especially for large clones like those observed in the major clones of S-3 and S-4 ((shear sites *vs.* tags): (242 *vs.* 1751) and (222 *vs.* 2675)). The variations covered by tags and by combinations were almost the same for all four samples ((tags *vs.* combinations): (393 *vs.* 296), (142 *vs.* 119), (1751 *vs.* 1192), and (2675 *vs.* 2038)). Upon comparison of the tag system data with the shear site data, it was clear that both strategies yield essentially the same results when the size of clones is small enough to be covered by the number of shear site variations generated. However, the tag system provides a much better estimation of clonality when the number of sister cells in each clone exceeds the shear site variations. Therefore, clone size was underestimated when considering only shear sites in samples with expanded clones, such as S-3 and S-4. Given this, our tag system should be used for samples with different clonality statuses to avoid underestimation of the size of clones. See Additional file [2](#S2){ref-type="supplementary-material"}: Figure S3 for a simple comparison of shear site and tag variations.
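Counting by the combination of tag and shear site follows the same pattern as shear-site counting. A sketch (illustrative only; the coordinate and tag values are hypothetical):

```python
from collections import defaultdict

def clone_sizes_by_tag_and_shear(reads):
    """reads: iterable of (integration_site, tag, shear_site) triples.
    Fragments sharing a shear site but carrying different 8-bp tags come
    from distinct starting molecules and are counted separately; true PCR
    duplicates share the whole (tag, shear_site) signature."""
    signatures = defaultdict(set)
    for site, tag, shear in reads:
        signatures[site].add((tag, shear))
    return {site: len(sigs) for site, sigs in signatures.items()}
```

The design choice here is that uniqueness is defined on the pair, so the estimate no longer saturates at the number of possible fragment lengths.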
Validation of the methodology ----------------------------- Our newly developed method - the tag system and the related data analysis - was successfully validated internally. As mentioned above, the initial validation was done by analyzing samples from different HTLV-1-infected individuals (Figures [1](#F1){ref-type="fig"} and [2](#F2){ref-type="fig"}). Finally, we conducted a comprehensive internal validation by using an appropriate control with known integration sites and clonality patterns to provide direct evidence for the effectiveness of our system in the clonality analysis. Because no appropriate control was available, we designed a suitable one ourselves. Using this control, we could evaluate the method and confirm its accuracy, sensitivity, and reproducibility. We selected two samples with the following special conditions as starting materials for preparing the control system. Sample one (M): DNA from an acute ATL patient with 100% PVL and a single integration site in the major clone (Figure [3](#F3){ref-type="fig"}A). The integration site of this sample was first checked with conventional splinkerette PCR, which detected a single major integration site. Subsequently, deep-sequencing data (tags only and combinations) showed that the major clone, with an integration site at chromosome 12:94976747(-), accounted for approximately 99% of the PVL. A small number of clones occupied approximately 1% of the PVL of this sample. Those clones were only detected in the second trial samples, for which the external PCR products were not diluted.
Therefore, to simplify the overall analysis, we removed those low-abundance clones (data not shown). Sample two (T): DNA was isolated from a fresh culture of TL-Om1, which is a registered monoclonal ATL cell line with 100% PVL and a single integration site at chromosome 1:121251270(-) in each cell (Figure [3](#F3){ref-type="fig"}A). Having prepared these two samples, we sonicated them and mixed them in proportions of 50:50 and 90:10 (Figure [3](#F3){ref-type="fig"}B). These known proportions were thus expected to generate specific patterns that could be verified with our subsequent analysis. We conducted two independent sets of trials. In the first trial, samples were named 'first trial control 1 \~ 4' and abbreviated as 1st T-cnt-1 \~ 4. Various amounts of DNA (μg) from samples M and T were mixed to prepare the final expected clone sizes as shown in Figure [3](#F3){ref-type="fig"}C. A 1-μL sample of a 10-fold dilution of the external PCR product was used as the starting material for nested PCR in this trial. The samples were run in separate lanes of the HiSeq 2000. We named the samples of the second trial 'second trial control-1 \~ 4' and abbreviated them as 2nd T-cnt-1 \~ 4. DNA samples were mixed similarly to the first trial, except for sample four (Figure [3](#F3){ref-type="fig"}D). In contrast to the first trial, we used 1 μL of the external PCR product without any dilution as the starting material for the nested PCR. These samples were multiplexed and run in the same lane of the HiSeq 2000. The purpose of the second trial was to test both method reproducibility and the effect that the dilutions had on the results. ![**Preparing the control system. (A)** The control system was designed by mixing sonicated genomic DNA (gDNA) of TL-Om1 with that of an ATL patient in proportions of 50:50 and 90:10. TL-Om1 is a standard ATL cell line with 100% PVL and a known single integration site at (chr1:121251270(-)).
The patient sample was from an acute type of ATL with 100% PVL and a single integration site at (chr 12:94976747(-)). **(B)** The expected clonality patterns: (50% *vs.* 50%), (90% *vs.* 10%), and (10% *vs.* 90%) were generated by mixing gDNA from an ATL sample with that from TL-Om1. **(C, D)** Full details of the first trial's and the second trial's samples including: name of samples, total amount of DNA (μg), the amount of DNA (μg) from TL-Om1 (T) *vs*. major clone (M), and expected clone size are provided. **(E)** Integration site position of TL-Om1 and the major clone of ATL sample.](gm568-3){#F3} The samples of both the first and second trials were analyzed under the same conditions, except where noted above. For each control sample, expected patterns and experimentally observed patterns were calculated for (a) raw sequence reads, (b) shear sites, (c) only tags, and (d) the combination of tags and shear sites (Figure [4](#F4){ref-type="fig"}). Figure [4](#F4){ref-type="fig"} shows the data when the optimal conditions were considered. Additional file [1](#S1){ref-type="supplementary-material"}: Figure S3 includes most of the data accumulated during optimization of the method. ![**Validation of the tag system.** For each control sample, both the expected and the experimentally observed patterns of raw sequence reads, shear sites, and the combination of tags and shear sites are represented in the bar graphs. Abbreviations: Com.: Combinations, Exp.: expected pattern, Seq.: raw sequencing data without removing PCR duplicates, Sh.: Shear sites, Tg.: Tags. **(A)** Clone size data of the first trial samples: Data were obtained considering the final optimal conditions: (Bowtie parameters: -v 3 - - best, and filtering condition: (merging approach) JT-10). **(B)** Clone size data of the second trial samples: Data were obtained considering the final optimal conditions: (Bowtie parameters: -v 3 - - best, and filtering condition: (merging approach) JT-10-1%). 
See Additional file [1](#S1){ref-type="supplementary-material"}: Figure S4 for information on the merging approach.](gm568-4){#F4} Evaluating the accuracy of the clonality analysis based on shear sites *vs.* the tag system ---------------------------------------------------------------------------------------- The 'absolute error', a technique used to evaluate system accuracy \[[@B61]\], was used to assess our method. The expected values were subtracted from the experimentally observed values (Figure [5](#F5){ref-type="fig"}A). Taking advantage of our control system (the first and second trial samples), we calculated the clone size by considering (a) sequencing reads without removing PCR duplicates, (b) only shear sites, (c) only tags, and (d) the combination of tags and shear sites (Figure [5](#F5){ref-type="fig"}B and C). The absolute errors of raw sequence reads for the first trial samples were 23.58, 6.26, 4.57, and 5.72, whereas those of the second trial samples were 44.66, 9.50, 6.88, and 60.24. The magnitude of errors in the first trial was lower than that of the second trial, probably owing to the dilution of the external PCR products in the first trial. However, because dilution reduced the number of covered integration sites, it should be used sparingly and with the purpose of the experiments in mind. The errors when considering only shear sites were 1.72, 34.33, 21.76, and 18.73 for the first trial and 0.47, 38.29, 36.72, and 40.47 for the second trial. Underestimations caused by low shear site variation did not affect the relative size of clones when the expected size of the clones was 50% *vs.* 50%. In this situation, shear sites had the smallest error: 1.72 for 1^st^ T-cnt-1 and 0.47 for 2^nd^ T-cnt-1. ![**Evaluating the accuracy of the clonality analysis. (A)** Absolute error is calculated by subtracting the expected values from the experimentally observed values.
**(B, C)** The accuracy of the method is evaluated by calculating the absolute error of the clone size estimation of the control samples (see Figure [3](#F3){ref-type="fig"}). The *y* axis represents the percentage of absolute errors in different conditions including: (1) raw sequencing reads without removing PCR duplicates, (2) only shear sites, (3) only tags, and (4) the combination of tags and shear sites. The absolute errors of the final optimal condition: the first trial: (Bowtie parameters: -v 3 - - best, and filtering condition: (merging approach) JT-10), and the second trial: (Bowtie parameters: -v 3 - - best, and filtering condition: (merging approach) JT-10-1%) are presented in this figure. Please refer to Additional file [1](#S1){ref-type="supplementary-material"}: Figure S6 for the absolute errors in all examined conditions. **(B)** The absolute errors of the first trial. **(C)** The absolute errors of the second trial. See Additional file [1](#S1){ref-type="supplementary-material"}: Figure S4 for information on the merging approach.](gm568-5){#F5} The errors were reduced in the data using the tag system: 7.27, 5.23, 14.49, and 6.50 for the first trial, and 6.67, 7.07, 10.07, and 13.16 for the second trial. In the case of the combination of tags and shear sites, the errors were 6.98, 4.06, 0.21, and 1.31 for the first trial and 3.42, 10.51, 12.26, and 5.83 for the second trial. Interestingly, 'tags only' and 'combinations' showed similar error levels. Based on these data, our system showed lower absolute errors than when considering only shear sites (Figure [5](#F5){ref-type="fig"}) (Additional file [1](#S1){ref-type="supplementary-material"}: Figure S4). Owing to differences in analyzed samples and system setups, we could not directly compare our data with published data \[[@B22],[@B46]\].
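The absolute-error comparison used above reduces to a per-clone unsigned difference; a minimal sketch (the clone labels are hypothetical):

```python
def absolute_errors(expected, observed):
    """expected, observed: dicts mapping clone label -> clone size (%).
    Returns the per-clone absolute error, the metric used to compare raw
    reads, shear sites, tags, and tag/shear-site combinations.  Clones
    missing from `observed` are treated as size 0."""
    return {clone: abs(expected[clone] - observed.get(clone, 0.0))
            for clone in expected}
```

For example, a 50:50 control observed as 48.28:51.72 yields an absolute error of 1.72 for each clone.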
Indirect evidence provided by shear site analysis of our own data, however, illustrated that our system has lower absolute errors than the shear site-based methodology. *In-silico* analysis -------------------- Processing, management, and analysis of the large amount of data generated by deep sequencing require special infrastructure and bioinformatics skills. We designed a data analysis and interpretation pipeline specific for HTLV-1 integration sites and clonality studies. The workflow is provided in Figure [6](#F6){ref-type="fig"}. First, the raw data from high-throughput sequencing were checked for quality by the FastQC tool. We then removed the first 5-bp random nucleotides from read-1 and de-multiplexed those samples that were run in the same lane of the HiSeq 2000 based on the 5-bp known barcode (Figure [6](#F6){ref-type="fig"} and Additional file [1](#S1){ref-type="supplementary-material"}: Figure S2). The downstream 23 nucleotides, which represented the LTR-specific primer, were also trimmed before further analysis. We then separated the remaining sequence of read-1 into two different datasets: (1) the LTR sequence and (2) the HTLV-1 or human sequence. The former comprises the 27-bp sequence remaining from the LTR, whereas the latter is composed of the 41-bp or 45-bp HTLV-1 or human sequence. In the case of multiplexed and non-multiplexed samples, different lengths (that is, 41 bp and 45 bp) were available for analysis. Both sets were subjected to blast analysis against the LTR and HTLV-1 reference sequences with one or two mismatches permitted, respectively. Reads whose sequence did not match HTLV-1 were presumed to be human as long as their 27-bp LTR sequences matched the LTR reference sequence. The resulting human reads were mapped to the human genome (hg19) using Bowtie 1.0.0 \[[@B58]\].
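The Read-1 dissection just described can be sketched as follows. This is an illustrative Python outline of the layout given in the text (5-bp random bases, 5-bp barcode, 23-bp LTR primer, 27-bp LTR, then the 41/45-bp flank); the function name, return convention, and simple mismatch check are assumptions here, and the real pipeline used Perl scripts and a blast search:

```python
def process_read1(seq, barcodes, ltr_ref, max_mismatch=1):
    """Dissect a Read-1 sequence into its layout fields.
    barcodes: dict mapping each 5-bp barcode to a sample name.
    ltr_ref: 27-bp LTR reference segment.
    Returns (sample, flank) for a valid LTR junction, else None."""
    barcode = seq[5:10]              # bases 6-10: sample index
    if barcode not in barcodes:
        return None                  # unknown sample index
    ltr = seq[33:60]                 # 27-bp LTR after the 23-bp primer
    mismatches = sum(a != b for a, b in zip(ltr, ltr_ref))
    if mismatches > max_mismatch:
        return None                  # not a genuine LTR junction
    return barcodes[barcode], seq[60:]   # flanking human/HTLV-1 sequence
```

Only the flank returned here would go on to the HTLV-1 blast filter and the Bowtie mapping step.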
We employed various parameters of Bowtie and different lengths of read-3 to obtain the optimal mapping yield (Additional file [1](#S1){ref-type="supplementary-material"}: Table S2). These conditions were achieved when a maximum of three mismatches was permitted (-v parameter) and when the best alignment with regard to the number of mismatches was reported (\--best parameter). In addition, using the same read length for read-1 and read-3 allowed for better mapping results. Mapping results are further discussed in Additional file [1](#S1){ref-type="supplementary-material"}: Notes. The 5′-mapped regions were considered to be the positions of integration sites and reported as (chromosome: position: (strand)), for example, (chr1:121251270: (-)). In addition, 3′-mapped regions from read-3 were reported as shear sites for each corresponding position. Information on the tags, obtained from read-2, was used to determine the size of clones as described in the subsection: Measuring the size of clones by the tag system. Final outputs of our analysis - the three main reports: R1R3, R1R2, and R1R2R3 - include information on shear sites, tags, and a combination of tags and shear sites, respectively (Figure [6](#F6){ref-type="fig"}). ![***In-silico* analysis workflow. (A)** The Illumina HiSeq 2000 platform outputs raw data of (Read-1 = 100 bp), (Read-3 = 100 bp), and (Read-2 = 8 bp). Data were analyzed according to this workflow after checking quality with the FastQC tool. In the case of Read-1, the first 5 bp were trimmed, and the next 5 bp were used to de-multiplex indexed samples. The downstream 23 bp, which correspond to the LTR primer (F2), were then removed. The next 27 bp were subjected to a blast search against the LTR reference sequence. For reads matching the LTR in this blast search, the remaining 41/45 bp were subjected to a blast search against an HTLV-1 reference sequence.
Reads confirmed to be from HTLV-1 were removed, and the sequences and IDs of the remaining reads, which were considered human, were collected. Subsequently, Read-3 entries with IDs corresponding to Read-1's IDs were collected. The first 41/45 bp of Read-3 were trimmed so that it had the same length as Read-1. The paired sequences of Read-1 and Read-3 (same lengths) were mapped against hg19 by Bowtie with -v 3 --best parameters. The 5′-mapped positions were considered to be integration sites and the 3′-mapped positions shear sites. Read-2 information was used to retrieve the clone size based on tags. Finally, the clone size was computed by combining tag and shear site information. All analyses were done by our own Perl scripts, which produced the following reports. Report R1R3: the distribution of unique shear sites per integration site. Report R1R2: the distribution of unique tags per integration site. Report R1R2R3: the distribution of unique tags and shear sites per integration site. **(B, C)** The structure of Read-1 for the non-multiplexed and multiplexed samples.](gm568-6){#F6}

Removing background noise
-------------------------

Data obtained from next-generation sequencers are not error free \[[@B40],[@B62]-[@B65]\]. There are many reports on the error rate of Illumina sequencers \[[@B66],[@B67]\]. Kivioja *et al.* recently developed a system named unique molecular identifiers (UMIs) for quantifying mRNAs and employed filtering criteria to remove false UMIs generated by sequencing errors \[[@B68]\]. In our study, consistent with the data of Kivioja *et al.* \[[@B68]\], sequencing errors produced false tags with low frequencies. A filtering system was required to remove those tags, which could otherwise affect the interpretation of our clonality data and reduce the accuracy of the clone size measurement. To minimize the effect of sequencing errors on data interpretation, we tested different filtering conditions to remove background noise.
Here, we report our proven filtering approach (Additional file [1](#S1){ref-type="supplementary-material"}: Figure S4). Because tags are designed randomly, each tag has an equal probability of being observed; hence, the distribution of tags should fit a zero-truncated Poisson distribution \[[@B59],[@B68]\]. We therefore tested the fit of the data to this distribution to determine the efficacy of each filtering condition. The distribution of tags for each sample was measured with the R package 'gamlss.tr' \[[@B59]\], and the correlation coefficient was compared before and after filtering (Additional file [1](#S1){ref-type="supplementary-material"}: Figure S6). We used a filtering system that we named the merging approach. The merging approach was conducted by clustering the tags with only one mismatch allowed, so that unique tags differing in only one nucleotide (one-mismatch permission) were merged. Subsequently, a unique tag was employed in further analysis only if the frequency of its observed reads (PCR duplicates) was greater than 10; otherwise, it was considered an artifact. We refer to this filtering approach as 'Join Tag - remove10' (JT-10) in the figure legends. To facilitate understanding, these filtering conditions are illustrated in Additional file [1](#S1){ref-type="supplementary-material"}: Figure S4.

Final discussion
----------------

The advent of NGS technologies holds promise to reveal the complex nature of neoplasms and to move past the limitations of previous methods. Using different approaches, from early cytogenetic analysis to later, more elaborate studies with NGS technologies, the clonal composition of different tumors has been analyzed \[[@B36]-[@B39]\]. Robust monitoring and tracking of clonal dynamics using provirus integration sites allow for the assessment of the clonal composition of HTLV-1-infected individuals from early infection to the final stage of ATL development.
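For concreteness, the 'Join Tag - remove10' (JT-10) filter described above could be sketched as follows. This is a simplified illustration of the idea, not the authors' actual Perl script; in particular, the greedy least-frequent-first merge order is our own assumption.

```python
def hamming1(a, b):
    """True if a and b have equal length and differ at exactly one position."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def jt10_filter(tag_reads, min_reads=10):
    """Sketch of the 'Join Tag - remove10' (JT-10) filter described in the text.

    tag_reads maps each unique tag to its number of observed reads (PCR
    duplicates). Tags differing by one nucleotide are merged into the more
    frequent tag, then tags with <= min_reads reads are discarded as artifacts.
    """
    counts = dict(tag_reads)
    # Visit tags from least to most frequent so likely errors fold into true tags.
    for tag in sorted(counts, key=counts.get):
        partners = [t for t in counts
                    if t != tag and counts[t] >= counts[tag] and hamming1(t, tag)]
        if partners:
            target = max(partners, key=counts.get)  # merge into the most frequent
            counts[target] += counts.pop(tag)
    return {t: n for t, n in counts.items() if n > min_reads}
```

With this sketch, a sequencing-error tag one mismatch away from a true tag is folded into it, and the remaining tags with 10 or fewer reads are discarded as artifacts.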
To meet the technical requirements for this type of analysis, we combined our expertise in HTLV-1 research and NGS analysis and developed the high-throughput methodology described herein. Gillet *et al.* also recently introduced a high-throughput method to extensively characterize HTLV-1 integration site preferences and quantify clonality (further discussed in Additional file [1](#S1){ref-type="supplementary-material"}: Notes) \[[@B22]\]. They statistically analyzed shear site data to estimate clone size. According to their published data \[[@B22],[@B46]\], as well as our current data, the limited variation in shear sites leads to an underestimation of the size of large clones. Considering that the incidence of large clones increases with disease progression from the healthy AC state to the malignant smoldering, chronic, or acute states \[[@B22],[@B46]\], an accurate measurement of clone size - particularly of large clones - is of great clinical significance. Our study is the first in which the size of large clones was experimentally measured without statistical estimation. We have provided details of the method design, optimized experimental protocols, and the *in-silico* data processing workflow. To validate our methodology and assess its accuracy, we analyzed eight control samples with known integration sites and clone sizes, and four clinical samples. We subjected the samples to deep sequencing so that each integration site had sufficient read coverage to ensure accurate measurement of clone size (see Additional file [1](#S1){ref-type="supplementary-material"}: Notes). We showed our methodology to be reliable for isolating large numbers of integration sites and accurate for quantifying clone size. Because the tag system provides a sufficient number of variations regardless of clone size, we were able to demonstrate that the measurements are accurate.
Preliminary experiments on the clinical samples with differing PVLs and disease status showed different clonality patterns specific to AC and the different ATL disease subtypes. S-1 was selected to represent still-healthy but infected individuals (ACs), S-2 and S-3 to represent indolent types of ATL, and S-4 to represent a typical aggressive type of ATL. Despite similar PVLs, S-1 and S-2 could be distinguished based on clonality patterns (polyclonal *vs.* a shift towards oligoclonal): S-1: AC, 8% PVL, and S-2: SM, 9% PVL. The clones of the AC showed a uniform distribution pattern with no large difference in clone size; the clones of S-2, however, were of non-uniform size. S-2 and S-3 (S-3: SM, 31% PVL) are both smoldering subtypes of ATL with differing PVLs (9% *vs.* 31%) and showed similar clonality patterns but a different number of infected cells in each clone. S-3 and S-4 had similar PVLs (S-4: acute, 33% PVL) but exhibited different clonality patterns: oligoclonal for S-3 (three or four relatively large clones at the top surrounded by other clones) *vs.* monoclonal for S-4 (a large major clone surrounded by some small clones in the background). After ranking the clones in order of descending size, we noted that the size of the largest clone in the acute sample was more than 10 times that of the next clone (tags: (chr X: 83705328 (-)) = 2675 *vs.* (chr 14: 30655896 (+)) = 209). The relative size of the major clone (chr X: 83705328 (-)) was also estimated by another method (PCR-Southern) (detailed information is provided in Additional file [2](#S2){ref-type="supplementary-material"}: Figure S3 and Additional file [2](#S2){ref-type="supplementary-material"}: Supporting experiments).
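As a simple numeric check of the ranking described above, the ratio of the two largest clones in the acute sample can be computed directly from the reported unique-tag counts; the helper below is a trivial illustration using only the two counts quoted in the text.

```python
def top_clone_ratio(clone_tags):
    """Ratio of the largest clone's unique-tag count to the second largest."""
    sizes = sorted(clone_tags.values(), reverse=True)
    return sizes[0] / sizes[1]

# Unique-tag counts for the two largest clones of the acute sample (S-4),
# as quoted in the text.
acute_top_clones = {
    "chrX:83705328(-)": 2675,
    "chr14:30655896(+)": 209,
}
```

The ratio comes out to roughly 12.8, consistent with the statement that the largest clone exceeds the next by more than an order of magnitude.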
Samples with distinct disease status (AC, SM, and acute) manifested different clone sizes (Additional file [1](#S1){ref-type="supplementary-material"}: Table S3 and Additional file [2](#S2){ref-type="supplementary-material"}: Table S1 include the number of infected cells in the top 10 clones), but S-1 *vs.* S-2 (0.60 *vs.* 0.67) and S-3 *vs.* S-4 (0.84 *vs.* 0.80) could not be discriminated based on their oligoclonality index (Additional file [1](#S1){ref-type="supplementary-material"}: Figure S7) (see Additional file [1](#S1){ref-type="supplementary-material"}: Notes for further discussion). Therefore, it can be inferred that, with an accurate measurement of clone size, the application of this method will aid in the discrimination of ATL subtypes. These results suggest a possible association between disease status, PVLs, and clonality patterns. Hence, HTLV-1-infected individuals could be classified into different groups based on their clonality patterns, which could ultimately affect their choice of therapy and estimation of prognosis. Moreover, by interpreting information from previous studies on HTLV-1 clonality \[[@B15],[@B18]-[@B20],[@B22],[@B27],[@B31],[@B32],[@B35]\] and considering the data provided in our present paper, it appears that ACs harbor a polyclonal population of HTLV-1-infected cells, whereas ATL patients show monoclonal patterns. Thus, changes in the clonality pattern and the onset of clonal expansion of HTLV-1-infected cells seem potentially applicable as a prognostic indicator of ATL onset. For these purposes, it is necessary to analyze appropriate pools of samples from ACs and different subtypes of ATL and to conduct a cohort study on the clonality patterns of sequential samples available over time.

Conclusions
===========

We took advantage of next-generation sequencing technology, a tag system, and an *in-silico* analysis pipeline to develop and internally validate a new high-throughput methodology.
The method was shown to accurately measure the size of clones by analyzing control samples with known clone sizes as well as clinical samples. We also discussed the novelty, significance, and applications of our method, and compared it with the only existing high-throughput method, devised by Gillet *et al.* \[[@B22]\]. Employing our new methodology in the analysis of an appropriate pool of samples provided by JSPFAD \[[@B13]\] will be helpful not only for diagnosis and prediction but also for an elaborated understanding of the underlying mechanism of ATL development. The methodology described here could be adapted to investigate and quantify other genome-integrating elements (such as proviruses, transposons, and vectors in gene therapy). In addition, the tag system can be used for quantifying DNA/RNA fragments in RNA expression studies \[[@B68]\] or in metagenomics for determining the size of bacterial populations.

Competing interests
===================

The authors declare that they have no competing interests.

Authors' contributions
======================

TW, TY, YS, SS, and SF conceived the project. SF designed and carried out the experiments and wrote the manuscript. YL prepared the Perl scripts. YL and SF performed the *in-silico* data analysis. SF and TY analyzed and interpreted the data. YS, SS, and SF contributed to sequencing the samples. YS and KN contributed to the *in-silico* data analysis. TY, YL, TW, and YS assisted in drafting the manuscript. TY and YS advised on the direction of the study. TW supervised the study. All authors read and approved the final manuscript.
Supplementary Material
======================

###### Additional file 1

**Supplementary data include (1) Supplementary Notes: '*Supplementary materials and method*' and '*Supplementary results and discussion*'; (2) Supplementary figures and tables: seven figures and three tables, provided in a PDF file.**

###### Click here for file

###### Additional file 2

Additional supporting data include (1) Additional supporting protocols and (2) Additional supporting experiments: four figures and one table, provided in a PDF file.

###### Click here for file

Acknowledgements
================

We gratefully appreciate: JSPFAD for providing clinical samples; M. Nakashima and T. Akashi for maintenance of JSPFAD; Sung-Joon Park, Riu Yamashita, and Kuo-ching Liang for their invaluable advice on *in-silico* analysis; K. Abe, K. Imamura, T. Horiuchi, and M. Tosaka for sequencing technical support; and Sara Firouzi and Unes Firouzi for comments on the design of the figures. SF expresses deep respect and gratitude to the NITORI scholarship foundation for supporting her during her undergraduate studies. Computational analyses were provided by the Super Computer System, Human Genome Center, Institute of Medical Science, at The University of Tokyo.

Funding
-------

This work was supported by the Japanese Society for the Promotion of Science (JSPS) - DC1 (24.6916 to SF); Third Term Comprehensive Control Research for Cancer, Ministry of Health, Labour and Welfare (H24-G-004 to TW); JSPS KAKENHI (23390250 to TW, 24591383 to TY); and MEXT KAKENHI (221S0001 to TW, 221S0002 to YS).
Q: Avoiding Nagios commands.cfg

I want to call a command directly from a Nagios service file, instead of passing arguments clumsily to commands.cfg. For example I want to do this:

    define service {
        service_description    footest
        check_command          $USER1$/check_http example.com -u http://example.com/index.html
        use                    generic-service
        host_name              example
    }

But I get a:

    Error: Service check command '/usr/share/nagios/libexec/check_http example.com -u http://example.com/index.html' specified in service 'footest' for host 'example' not defined anywhere!

A: If you absolutely insist on doing this, which is a really, really terrible idea, this snippet should work (note that a command object takes a `command_line` directive, not `check_command`):

    define command {
        command_name    check_by_arbitrary_path
        command_line    /bin/sh -c "$ARG1$"
    }

    define service {
        use                    remote-service
        host_name              example
        service_description    This is a bad idea, seriously
        check_command          check_by_arbitrary_path!$USER1$/check_http example.com -u http://example.com/index.html
    }

Seriously, though, please don't.
The ramblings of a creative mind. Follow along and perhaps you'll learn a thing or two! Wednesday, April 2, 2014 Walking a Straight Line Or maybe a diagonal one. I've seen so many cards lately with diagonal patterns so I assumed it must be a trend. I'm not one to follow trends, but I thought I'd try my hands at it. For this card, I laid (lied, lay..whatever..grammar is not my strong point) strips of red line adhesive strips on a 4 x 5 1/4 piece of ivory cardstock. I undid the strips one by one and applied Martha Stewart Glitter to each strip in a rainbow pattern. I used the Winged Wished stamp set from Newton's Nook Designs in the middle and paper pieced the "party flag". So cute, so sparkly, so graphic, so diagonal!
Professor Sherman tries to set up Felicity with her son, who is several years older. Julie suggests that it might be a good idea to date a guy without worrying about the prospect of a long-term relationship. Felicity rearranges her work schedule to avoid Ben. While looking over the course catalog with freshman advisee Ruby, Felicity laments the fact that she is no longer able to pursue her interest in art.
Apple (Malus x domestica). Apple (Malus x domestica) is one of the most consumed fruit crops in the world. The major production areas are the temperate regions, however, because of its excellent storage capacity it is transported to distant markets covering the four corners of the earth. Transformation is a key to sustaining this demand - permitting the potential enhancement of existing cultivars as well as to investigate the development of new cultivars resistant to pest, disease, and storage problems that occur in the major production areas. In this paper we describe an efficient Agrobacterium tumefaciens-mediated transformation protocol that utilizes leaf tissues from in vitro grown plants. Shoot regeneration is selected with kanamycin using the selectable kanamycin phosphotransferase (APH(3)II) gene and the resulting transformants confirmed using the scorable uidA gene encoding the bacterial beta-glucuronidase (GUS) enzyme via histochemical staining. Transformed shoots are propagated, rooted to create transgenic plants that are then introduced into soil, acclimatized and transferred to the greenhouse from where they are taken out into the orchard for field-testing.
FIRST DISTRICT COURT OF APPEAL STATE OF FLORIDA _____________________________ No. 1D19-2492 _____________________________ JOSEPH L. MCDANIELS, Appellant, v. MARGARET A. MCDANIELS, Appellee. _____________________________ On appeal from the Circuit Court for Duval County. W. Gregg McCaulie, Judge. August 30, 2019 PER CURIAM. DISMISSED. ROBERTS, WINOKUR, and M.K. THOMAS, JJ., concur. _____________________________ Not final until disposition of any timely and authorized motion under Fla. R. App. P. 9.330 or 9.331. _____________________________ Joseph L. McDaniels, pro se, Appellant. No appearance for Appellee. 2
Don’t risk a dodgy DIY repair if you suffer a punctured tyre There are times when a ‘make do and mend’ mentality is called for, like wrapping a bit of sticky tape round the arm of a pair of broken glasses or gluing the handle back on a favourite coffee mug. However, these are the last things you want to do if you suffer a punctured tyre…and if you think that is stating the obvious, take a look at this photo snapped by one of our etyres Milton Keynes and Luton tyre fitters. The customer had suffered a damaged sidewall resulting in a puncture, but instead of replacing it with the spare or pulling over to safety immediately, the driver patched it up with some glue and sellotape, so they could continue driving on it! Fortunately, the driver contacted our etyres team before the situation got any worse and led to an accident, but there is a time and a place for a dodgy DIY job and it is not when it comes to your tyres. etyres offer a mobile puncture repair service, which means we come to you so you don’t have to risk causing further damage to your punctured tyre or wheel by driving to a garage or tyre depot for a repair. All our puncture repairs are carried out in compliance with the British Standard BS AU 159 and our fitters will always remove the tyre from the wheel to fully inspect any damage and judge whether a puncture can be safely repaired or not. We are happy to report that in our experience around 60 per cent of all punctures examined can be repaired safely and spare you the cost of having to buy a brand new tyre. So if you suffer a damaged tyre, remember to stick to the experts and call etyres immediately to book our puncture repair service.
Expanded turn conformations: characterization and sequence-structure correspondence in alpha-turns with implications in helix folding. Like the beta-turns, which are characterized by a limiting distance between residues two positions apart (i, i+3), a distance criterion (involving residues at positions i and i+4) is used here to identify alpha-turns from a database of known protein structures. At least 15 classes of alpha-turns have been enumerated based on the location in phi,psi space of the three central residues (i+1 to i+3), one of the major ones being the class AAA, where the residues occupy the conventional helical backbone torsion angles. However, moving towards the C-terminal end of the turn, there is a shift in the phi,psi angles towards more negative phi, such that the electrostatic repulsion between two consecutive carbonyl oxygen atoms is reduced. Except for the last position (i+4), there is not much similarity in residue composition at different positions of hydrogen-bonded and non-hydrogen-bonded AAA turns. The presence or absence of Pro at the i+1 position of alpha- and beta-turns has a bearing on whether the turn is hydrogen-bonded or without a hydrogen bond. In the tertiary structure, alpha-turns are more likely to be found in beta-hairpin loops. The residue composition at the beginning of the hydrogen-bonded AAA alpha-turn has similarity with the type I beta-turn and the N-terminal positions of helices, but the last position matches the C-terminal capping position of helices, suggesting that the existence of a "helix cap signal" at the i+4 position prevents alpha-turns from growing into helices. Our results also provide new insights into alpha-helix nucleation and folding.
Q: Parse.com iOS, cannot fetch a pointer to PFUser

I am following the tutorials on Parse.com but I don't seem to get it working properly. Here is my issue. I have a class called Questions and a class named _User of type PFUser (it has a picture icon next to the class name). The user logs in via FB and is registered as a _User. I can see myself in the _User class. I make a question, and I have a field in the Questions class named qOwner, which is the owner of the question. This is being set when saving a new question via the iOS app like this:

    QuestionCard[@"qOwner"] = [PFUser currentUser];

I can see the objectId of myself in _User as the value inside the qOwner column, in the row for the current question made. The problem is that I cannot retrieve the values inside my app. I place the code below in the correct place:

    PFQuery *query = [PFQuery queryWithClassName:@"Questions"];
    [query addAscendingOrder:@"createdAt"];
    [query findObjectsInBackgroundWithBlock:^(NSArray *objects, NSError *error) {
        if (!error) {
            // The find succeeded.
            NSLog(@"@@@ : Successfully retrieved %lu questions ", (unsigned long)objects.count);
            // Do something with the found objects
            NSLog(@"%@", objects);
            // fetched data are PFObjects
            // we need to convert that into a YesOrNo object
            // and then add it to _cards array
            for (PFObject *object in objects) {
                NSLog(@"currentUser : %@", [PFUser currentUser]);
                PFUser *qOwnerUser = [object objectForKey:@"qOwner"];
                NSLog(@"Question Made by : %@", qOwnerUser);
            }
    .....
The following is being printed:

1) @@@ : Successfully retrieved 1 cards

2) ( "<Questions: 0x17011f5c0, objectId: zZWGsfciEU, localId: (null)> {\n CardId = 999;\n qOwner = \"<PFUser: 0x17037e000, objectId: LtdxP5K0n6>\";\n type = 0;\n }" )

3) currentUser : <PFUser: 0x174375e40, objectId: LtdxP5K0n6, localId: (null)> { facebookId = 10153480462518901; thumbnail = "<PFFile: 0x174479a80>"; username = UYRRmfbXDHqr1Ws4VwJcAy2wx; }

4) Question Made by : <PFUser: 0x17037e000, objectId: LtdxP5K0n6, localId: (null)> { }

1-2-3 seem correct; I can see the pointer relation. But why, in 4, do I see nothing inside {}? I would have expected to see the same user details as in 3. Am I missing something?

A: Parse does not fetch the pointer values by default in a query. You have to tell the query to include the pointer data using includeKey:. Do this:

    PFQuery *query = [PFQuery queryWithClassName:@"Questions"];
    [query addAscendingOrder:@"createdAt"];
    [query includeKey:@"qOwner"];
    [query findObjectsInBackgroundWithBlock:^(NSArray *objects, NSError *error) {
        ...
    }];
The Facebook of China - GBond http://www.fastcompany.com/node/1715041/print ====== smoody formatted version of the article with photos: [http://www.fastcompany.com/magazine/152/the-socialist- networ...](http://www.fastcompany.com/magazine/152/the-socialist- networks.html)
Garden Level and Atrium Wings In the original design, the central rotunda area of the garden level was a dark and often damp basement. The building restoration has transformed the area into the central welcoming place for visitors, with an interpretive exhibit, gift shop, and visitor information desk. Great Seal of the State of Idaho Original state seal painting byEmma Edwards Green Notice the Great Seal of the State of Idaho on the floor of the central rotunda. Adopted in 1891 by the state legislature, the original seal was designed by Emma Edwards Green and is the only state seal designed by a woman. The Latin motto Esto perpetua means “May it endure forever.” The miner represents the chief industry at the time the seal was created, while the woman holding scales represents justice, freedom, and equality. Atrium Wings Senate wing skylight From the central rotunda area, look east and west. You can see a full city block and take in the view of an impressive engineering achievement – the underground atrium wings. These wings were constructed to provide additional space for legislative committee hearing rooms, where the public can participate directly in the legislative process. The wings preserve the integrity of the building’s architecture and improve the functionality of the building. As you explore the new wings, look up. Glass skylights run the length of the central corridors and offer a view of the Capitol dome. These skylights – specially engineered and designed for this project – are consistent with the vision of the original architects and provide a seamless bond between the old and new. The skylights are made of fritted glass – a clear safety glass fired with a pattern of dots for the purpose of shading and lowering solar gain – making artificial light unnecessary in some corridors during the summer and some sunny winter days. Senate hearing rooms and offices are located in the west wing, and House hearing rooms and offices are in the east wing. 
A large 240-seat auditorium, shared by the Senate and House, is also located in the west wing. As you walk to the west wing, notice the original basement vault doors. These vaults were once used for record storage. All of the original vault doors remain in the building. Vaults The doors to the Visitor Welcome Room and to the lobbyists’ room are some of the original basement vault doors. The basement vaults were originally used to store paper records. There was never any money stored in them. Underground Tunnel Several buildings within the Capitol Mall are connected by an underground tunnel. This tunnel allows state employees to move between the Capitol and a handful of other buildings without having to go outside. It connects the mail room, print shop, and facility services functions as well. The tunnel runs beneath State Street. The tunnel is not open to the public in order to best maintain security.
package drds

// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// Code generated by Alibaba Cloud SDK Code Generator.
// Changes may cause incorrect behavior and will be lost if the code is regenerated.

// VersionsItem is a nested struct in drds response
type VersionsItem struct {
	DrdsVersion string `json:"DrdsVersion" xml:"DrdsVersion"`
	Latest      bool   `json:"Latest" xml:"Latest"`
}
Bone Metabolism of the Patient with a Malignant Melanoma during the Entry Examination and the Check-up of Whole-body Bone Scintigraphy. Malignant melanoma is a malignancy located predominantly in the skin, and its incidence is increasing. We compared serum markers of bone metabolism - osteocalcin (OC) and the beta-carboxyterminal cross-linked telopeptide of type I collagen (β-CrossLaps, β-CTx) - and the tumour marker human epididymis protein 4 (HE4) with the findings of whole-body bone scintigraphy at the entry examination and the check-up of a patient with a malignant melanoma. Serum concentrations of OC, β-CTx, and HE4 were determined in 1 patient (female, age 64 years) with malignant melanoma and correlated with the presence of equivocal bone metastases detected by whole-body bone scintigraphy (entry examination and check-up after 6 months). Concentrations of the bone metabolism markers decreased over the six months, and we observed progression of the bone metastases. A change in marker levels between the entry examination and the check-up, together with an equivocal finding of bone metastases on whole-body bone scintigraphy, could be a sign of possible incipient progression of malignant melanoma despite a clinically negative finding that does not prove progression of the disease.
This classic review article published in Survey of Ophthalmology is reprinted with permission from Elsevier. Copyright (2004). Shields CL, Shields JA. Tumors of the conjunctiva and cornea. Surv Ophthalmol 2004;49:3-24.

General Considerations {#sec1-1}
======================

Tumors of the conjunctiva and cornea occupy a large spectrum of conditions ranging from benign lesions such as limbal dermoid or myxoma to aggressive, life-threatening malignancies such as melanoma or Kaposi's sarcoma.\[[@ref23][@ref88]\] The clinical differentiation of these tumors is based on the patient's medical background as well as certain typical clinical features of the tumor. The recognition and proper management of such tumors requires an understanding of the anatomy of the conjunctiva and cornea and knowledge of general principles of tumor management, both of which are described below. The specific clinical and histopathologic features as well as the management of each tumor are discussed, based on the authors' personal experience with over 1,600 patients with conjunctival tumors during a 30-year period (Shields CL, submitted for publication). In this report, we review and illustrate the features of conjunctival and corneal tumors for the general ophthalmologist as well as the specialist who might occasionally examine an affected patient and want a quick reference for recognition and therapy.

Anatomy {#sec2-1}
-------

The conjunctiva is a continuous mucous membrane that covers the anterior portion of the globe. It extends from the eyelid margin onto the back surface of the eyelid (palpebral portion), into the fornix (forniceal portion), onto the surface of the globe (bulbar portion), and up to the corneoscleral limbus (limbal portion). The conjunctiva is composed of epithelium and stroma. The epithelium consists of both stratified squamous and columnar epithelium.\[[@ref89]\] The squamous pattern is found near the limbus and the columnar pattern is found near the fornix.
The stroma is composed of fibrovascular connective tissue that thickens in the fornix and thins at the limbus. Special regions of the conjunctiva include the plica semilunaris and caruncle. The plica semilunaris is a vertically oriented fold of conjunctiva, located in the medial portion of the bulbar conjunctiva. It is speculated that the plica semilunaris represents a remnant of the nictitating membrane found in certain animals. The caruncle is located in the medial canthus between the upper and lower punctum. It contains both conjunctival and cutaneous structures such as nonkeratinized stratified squamous epithelium overlying a stroma of fibroblasts, melanocytes, sebaceous glands, hair follicles, and striated muscle fibers. Neoplasms can arise in the conjunctiva from both its epithelial and stromal structures. These are similar clinically and histopathologically to tumors that arise from other mucous membranes in the body. However, unlike other mucous membranes in the body, the conjunctiva is partially exposed to sunlight, which may be a factor in the development of some tumors. Similarly, the cornea can develop epithelial tumors, but corneal stromal tumors are uncommon. The caruncle, with its unique composition of both mucous membrane and cutaneous structures, can generate tumors found both in mucosa and in skin.

Diagnostic approaches {#sec2-2}
---------------------

Unlike many other mucous membranes in the body, the conjunctiva is readily visible. Thus, tumors and related lesions that occur in the conjunctiva are generally recognized at a relatively early stage. Because many of these tumors have typical clinical features, an accurate diagnosis can often be made by external ocular examination and slit-lamp biomicroscopy, provided that the clinician is familiar with their clinical characteristics. A diagnostic biopsy is not usually necessary in cases of smaller tumors (≤4 clock hours of limbal tumor or ≤15 mm basal dimension) that appear benign.
If a smaller tumor does require a biopsy, it is often better to completely remove the lesion in one operation (excisional biopsy). In cases of larger lesions (\>4 clock hours limbal tumor or \>15 mm basal dimension), however, it may be appropriate to remove a portion of the tumor (incisional biopsy) to obtain a histopathologic diagnosis prior to embarking upon more extensive therapy, as conjunctival tumors are readily accessible to incisional biopsy. Occasionally, exfoliative cytology\[[@ref90]\] and fine-needle aspiration biopsy can provide useful information on the basis of a few cells. In addition to evaluation of the conjunctival lesion, meticulous slit-lamp examination of the cornea is essential in patients with suspected conjunctival tumors. Invasion of squamous cell carcinoma and melanoma into the peripheral cornea may appear as a subtle, gray surface opacity. It is important to completely outline such corneal involvement prior to surgery, because it is often less visible through the operating microscope than it is with slit-lamp biomicroscopy in the office.

Management {#sec2-3}
----------

Depending on the presumptive diagnosis and the size and extent of the lesion, management of a conjunctival tumor can consist of serial observation, incisional biopsy, excisional biopsy, cryotherapy, chemotherapy, radiotherapy, modified enucleation, orbital exenteration, or various combinations of these methods.\[[@ref1][@ref78][@ref79][@ref80][@ref86]\] If large areas of conjunctiva are removed, mucous membrane grafts from the conjunctiva of the opposite eye, buccal mucosa, or amniotic membrane may be necessary.\[[@ref42][@ref60]\]

### Observation {#sec3-1}

Observation is generally the management of choice for most benign, asymptomatic tumors of the conjunctiva. Selected examples of lesions that can be observed without interventional treatment include pingueculum, dermolipoma, and nevus.
External or slit-lamp photographs are advisable to document all lesions and are critical to follow-up of the more suspicious lesions. Most patients are examined every 6 to 12 months, looking for evidence of growth, malignant change, or secondary effects on normal surrounding tissues.

### Incisional biopsy {#sec3-2}

Incisional biopsy is reserved for extensive suspicious tumors that are symptomatic or suspected to be malignant. Examples include large squamous cell carcinoma, primary acquired melanosis, melanoma, and conjunctival invasion by sebaceous gland carcinoma. It should be understood that if tumors occupy 4 clock hours or less on the bulbar conjunctiva, excisional biopsy is generally preferable to incisional biopsy. However, larger lesions can be approached by incisional wedge biopsy or punch biopsy. Definitive therapy would then be planned based on the results of biopsy. Incisional biopsy is also appropriate for conditions that are ideally treated with radiotherapy, chemotherapy, or other topical medications. These lesions include lymphoid tumors, metastatic tumors, extensive papillomatosis, and some cases of squamous cell carcinoma and primary acquired melanosis. Incisional biopsy should generally be avoided for melanocytic tumors, especially melanoma, as this can increase the risk for numerous tumor recurrences.\[[@ref64]\]

### Excisional biopsy {#sec3-3}

Primary excisional biopsy is appropriate for relatively smaller tumors (≤4 clock hours limbal tumor or ≤15 mm basal dimension) that are symptomatic or suspected to be malignant. In these situations, excisional biopsy is preferred over incisional biopsy to avoid inadvertent tumor seeding. Examples of benign and malignant lesions that are ideally managed by excisional biopsy include symptomatic limbal dermoid, epibulbar osseous choristoma, steroid-resistant pyogenic granuloma, squamous cell carcinoma, and melanoma.
When such lesions are located in the conjunctival fornix, they can be completely excised and the conjunctiva reconstructed primarily with absorbable sutures, sometimes with fornix-deepening sutures or a symblepharon ring to prevent adhesions. If the defect cannot be closed primarily, then a mucous membrane graft can be inserted. Most primary malignant tumors of the conjunctiva, like squamous cell carcinoma and melanoma, arise in the interpalpebral area near the limbus, and the surgical technique for limbal tumors is different from that for forniceal tumors.\[[@ref72][@ref78][@ref79]\] Limbal neoplasms can potentially invade through the corneal epithelium and sclera into the anterior chamber and also through the soft tissues into the orbit. Thus, it is often necessary to remove a thin lamella of sclera to achieve tumor-free margins and to decrease the chance for tumor recurrence. In this regard, we employ a partial lamellar sclerokeratoconjunctivectomy with primary closure for such tumors ([Fig. 1](#F1){ref-type="fig"}). Because cells from these friable tumors can seed into adjacent tissues, a gentle technique without touching the tumor (*no touch technique*) is advised. Additionally, the surgery should be performed using microscopic technique and the operative field should be left dry so that cells adhere to the resected tissue. It is wise to avoid wetting the field with balanced salt solution until after the tumor is completely removed to minimize seeding of cells. There are no published comparative reports of the various surgical techniques for tumor excision, but discussions at the 1997 International Congress of Ocular Oncology in Jerusalem supported the above surgical principles.

![Surgical excision of conjunctival malignancy using the "no touch" technique. (a) Absolute alcohol is applied by a cotton tip applicator to the involved cornea to allow for controlled corneal epitheliectomy.
(b) The corneal epithelium is scrolled off using a controlled sweeping motion with a beaver blade. (c) The conjunctival incision is made approximately 4 mm outside the tumor margin. A beaver blade is used to create a thin lamella of tumor-free sclera underlying the limbal portion of the tumor. (d) The conjunctival malignancy is removed, along with tumor-free margins, including underlying sclera and limbal corneal epithelium. (e) Cryotherapy is applied to the conjunctiva at the site of resection. (f) Closure of the conjunctiva with absorbable sutures is performed](IJO-67-1930-g001){#F1}

The technique for resection of limbal tumors is shown in [Fig. 1](#F1){ref-type="fig"}. Using retrobulbar anesthesia and the operating microscope, the corneal epithelial component is approached first and the conjunctival component is dissected second, with the goal of excising the entire specimen completely in one piece. Absolute alcohol soaked on an applicator is gently applied to the entire corneal component. This causes epithelial cellular devitalization and allows easier release of the tumor cells from Bowman\'s layer. A beaver blade is used to microscopically outline the malignancy within the corneal epithelium using a delicate epithelial incision or epitheliorhexis technique 2 mm outside the corneal component. The beaver blade is then used to gently sweep the affected corneal epithelium from the direction of the central cornea to the limbus, into a scroll that rests at the limbus. Next, a pentagonal or circular conjunctival incision based at the limbus is made 4--6 mm outside the tumor margin. The incision is carried through the underlying Tenon\'s fascia until the sclera is exposed, so that full-thickness conjunctiva and Tenon\'s fascia are incorporated into the excisional biopsy. Cautery is applied to control bleeding.
A second incision is then outlined by a superficial scleral groove approximately 0.2 mm in depth and 2.0 mm outside the base of the overlying adherent conjunctival mass. This groove is continued anteriorly to the limbus. The area outlined by the scleral groove is removed by flat dissection of 0.2-mm thickness within the sclera in an attempt to remove a superficial lamella of sclera, overlying Tenon\'s fascia and conjunctiva with tumor, and the scrolled corneal epithelium. In this way, the entire tumor with tumor-free margins is removed in one piece without touching the tumor itself (no touch technique). The removed specimen is then placed flatly on a piece of thin cardboard from the surgical tray and then placed in fixative and submitted for histopathologic studies. This step prevents the specimen from folding and allows better assessment of the tumor margins histopathologically. The used instruments are then replaced with fresh instruments for subsequent steps, to avoid contamination of healthy tissue with possible tumor cells. After excision of the specimen, cryotherapy is applied to the margins of the remaining bulbar conjunctiva. This is performed by freezing the surrounding bulbar conjunctiva as it is lifted away from the sclera using the cryoprobe. When the ice ball reaches a size of 4--5 mm, it is allowed to thaw and the cycle is repeated once more. The cryoprobe is then moved to an adjacent area of the conjunctiva and the cycle is repeated until all of the margins have been treated by this method. It is not necessary to treat the corneal margins with cryoapplication. The tumor bed is treated with an absolute alcohol wash on a cotton-tip applicator and bipolar cautery, avoiding cryotherapy directly to the sclera. Using clean instruments, the conjunctiva is mobilized for closure of the defect by loosening the intermuscular septum with Stevens scissors spreading and creation of transpositional conjunctival flaps.
Closure is completed with interrupted absorbable 6-0 or 7-0 sutures. If the surgeon prefers, an area of bare sclera can be left near the limbus, but we prefer complete closure, as this promotes better healing and facilitates further surgery if the patient should develop recurrence. The patient is treated with topical antibiotics and corticosteroids for two weeks and then followed at 3- to 6-month intervals.

### Cryotherapy {#sec3-4}

In the management of conjunctival tumors, cryotherapy can be used as a supplemental treatment to excisional biopsy as described above. The advantages of cryotherapy include elimination of subclinical, microscopic tumor cells and prevention of recurrence of malignant tumors, including squamous cell carcinoma and melanoma.\[[@ref14][@ref64]\] It can also be used as a principal treatment for primary acquired melanosis and pagetoid invasion of sebaceous gland carcinoma. If cryotherapy can devitalize the malignant or potentially malignant cells in these instances, radical surgery like orbital exenteration can often be delayed or avoided. The disadvantages of cryotherapy include conjunctival chemosis that may last over one week; if the technique is misused and the globe is accidentally frozen, cataract, uveitis, scleral and corneal thinning, and phthisis bulbi can occur.

### Chemotherapy {#sec3-5}

Recent evidence has revealed that topical eyedrops comprised of mitomycin C, 5-fluorouracil, or interferon are effective in treating epithelial malignancies such as squamous cell carcinoma, primary acquired melanosis, and pagetoid invasion of sebaceous gland carcinoma.\[[@ref19][@ref20][@ref32][@ref38][@ref53][@ref56][@ref57][@ref96][@ref97]\] Mitomycin C or 5-fluorouracil is employed most successfully for squamous cell carcinoma, especially after tumor recurrence following previous surgery.
This medication is prescribed topically 4 times daily for a 1-week period, followed by a 1-week hiatus to allow the ocular surface to recover \[[Table 1](#T1){ref-type="table"}\]. This cycle is repeated once again, so that most patients receive a total of two weeks of topical chemotherapy. Both mitomycin C and 5-fluorouracil are most effective for squamous cell carcinoma and less effective for primary acquired melanosis and pagetoid invasion of sebaceous gland carcinoma. Caution should be used with this medication, as it is most effective for intraepithelial disease and much less effective or ineffective for deeper disease. The most common toxicities include dry eye findings, superficial punctate epitheliopathy, and punctal stenosis. Corneal melt, scleral melt, and cataract can develop if these agents are used with open conjunctival wounds or used excessively. Topical interferon can be effective for squamous epithelial malignancies and is less toxic to the surface epithelium, but this medication may require many months of use to effect a result.\[[@ref32]\]

###### Protocol for Use of Mitomycin C for Conjunctival Squamous Cell Neoplasia and Primary Acquired Melanosis

  Time     Medication and Frequency
  -------- ----------------------------------------------------
  Week 1   Slit-lamp biomicroscopy
           Place upper and lower punctal plugs
           Cycle 1: mitomycin C 0.04% qid to the affected eye
  Week 2   No medication
  Week 3   Cycle 2: mitomycin C 0.04% qid to the affected eye
  Week 4   No medication
           Slit-lamp biomicroscopy
           Prescribe more cycles if residual tumor exists
           Remove punctal plugs after all medication complete

### Radiotherapy {#sec3-6}

Two forms of radiotherapy are employed for conjunctival tumors, namely external beam radiotherapy and custom-designed plaque radiotherapy. External beam radiotherapy to a total dose of 3,000--4,000 cGy is used to treat conjunctival lymphoma and metastatic carcinoma when they are too large or diffuse to excise locally.
Side effects of dry eye, punctate epithelial abnormalities, and cataract should be anticipated. Custom-designed plaque radiotherapy\[[@ref65]\] to a dose of 3,000--4,000 cGy can be used to treat conjunctival lymphoma or metastasis. A higher dose of 6,000--8,000 cGy can be employed to treat the more radiation-resistant melanoma and squamous cell carcinoma. In general, plaque radiotherapy is reserved for those patients who have diffuse tumors that are incompletely resected and for those who display multiple recurrences. The two designs for conjunctival custom plaque radiotherapy include a conformer plaque technique with six fractionated treatment sessions as an outpatient or a reverse plaque technique with the device sutured onto the episclera as an inpatient. In unique instances, plaque radiotherapy to a low dose of 2,000 cGy is employed for benign conditions, including steroid-resistant pyogenic granuloma that shows recurrence after surgical resection.\[[@ref26]\] This treatment should be performed by experienced radiation oncologists and ocular oncologists. There is no published report on a comparison of these radiotherapy techniques.

### Modified enucleation {#sec3-7}

Modified enucleation is a treatment option for primary malignant tumors of the conjunctiva that have invaded through the limbal tissues into the globe, producing secondary glaucoma. This occurrence is quite rare but can occasionally be found with squamous cell carcinoma and melanoma. The uncommon mucoepidermoid and spindle cell variants of squamous cell carcinoma of the conjunctiva have a greater tendency for intraocular invasion.\[[@ref2][@ref7][@ref25]\] At the time of enucleation, it is necessary to remove the involved conjunctiva intact with the globe so as to avoid spreading tumor cells. Thus, the initial peritomy should begin at the limbus, but when the tumor is approached, the incision should proceed posteriorly from the limbus to surround the tumor-affected tissue by at least 3--4 mm.
The tumor will remain adherent to the globe at the limbus. Occasionally, a suture is employed through the surrounding conjunctiva into the episclera to secure the tumor to the globe so that it will not be displaced during subsequent manipulation. The remaining steps of enucleation are gently performed and the globe is removed with the tumor adherent after cutting the optic nerve from the nasal side. The margins of the remaining, presumed unaffected conjunctiva are treated with double freeze-thaw cryotherapy. Often this surgical technique leaves the patient with a limited amount of residual unaffected conjunctiva for closure. In these instances, a mucous membrane graft or amniotic membrane graft may be necessary for adequate closure and to provide fornices for a prosthesis. In some instances, a simple horizontal inferior forniceal conjunctival incision from canthus to canthus may suffice, as long as the conformer is constantly worn as a template so that the new conjunctival fornix grows deep and around this structure.

### Orbital exenteration {#sec3-8}

Orbital exenteration is probably the treatment of choice for primary malignant conjunctival tumors that have invaded the orbit or that exhibit complete involvement of the conjunctiva.\[[@ref64][@ref84][@ref86]\] Either an eyelid-removing or eyelid-sparing exenteration is employed, depending on the extent of eyelid involvement. The eyelid-sparing technique is preferred in that patients have a better cosmetic appearance and heal within 2 or 3 weeks. Specifically, if the anterior lamella of the eyelid is uninvolved with tumor, an eyelid-sparing (eyelid-splitting) exenteration may be accomplished.\[[@ref78][@ref84][@ref86]\] Alternatives to exenteration include radiotherapy using the external beam approach or the brachytherapy approach. There are too few cases in the literature to allow a scientific comparison.
### Mucous membrane graft {#sec3-9}

Mucous membrane grafts are occasionally necessary to replace vital conjunctival tissue after removal of extensive conjunctival tumors. The best donor sites include the forniceal conjunctiva of the ipsilateral or contralateral eye and buccal mucosa from the posterior aspect of the lower lip or lateral aspect of the mouth. Such grafts are usually removed by a freehand technique, fashioned to fit the defect, and secured into place with cardinal and running absorbable 6-0 or 7-0 sutures. Currently, in most instances, we employ a donor amniotic membrane graft to replace lost conjunctiva.\[[@ref42][@ref60]\] The tissue is delivered frozen and must be defrosted for 20 minutes. The fine, transparent material is carefully peeled off its cardboard surface, laid basement membrane side up, and sutured into place with absorbable sutures. Topical antibiotic and steroid ointments are applied following all conjunctival grafting procedures. It is important that the surgeon use a minimal manipulation technique for tumor resection. For graft harvest and placement, we prefer to use clean, sterile instruments at both the donor and the recipient sites to avoid transfer and implantation of tumor cells into previously uninvolved areas.

Congenital Lesions {#sec1-2}
==================

A variety of tumors and related conditions may be present at birth or become clinically apparent shortly after birth.\[[@ref10][@ref74]\] Most of the lesions to be considered here are choristomas, consisting of displaced tissue elements normally not found in these areas. A simple choristoma is composed of one tissue element, such as epithelium, whereas a complex choristoma represents variable combinations of ectopic tissues like bone, cartilage, and lacrimal gland. Despite their presence at a young age, all of the conjunctival choristomas discussed herein are sporadic, without hereditary tendency.
Dermoid {#sec2-4}
-------

Conjunctival dermoid is a congenital, well-circumscribed, yellow-white solid mass that involves the bulbar conjunctiva or the corneoscleral limbus.\[[@ref10][@ref11][@ref49][@ref72][@ref74]\] It characteristically occurs near the limbus inferotemporally, and often this tumor has fine white hairs, best seen with slit-lamp biomicroscopy ([Fig. 2](#F2){ref-type="fig"}). In rare cases, it can extend to the central cornea or be located in other quadrants on the bulbar surface. There are three types of dermoids, classified by the extent of involvement. The first type includes the small limbal dermoid, straddling the limbus and approximately 5 mm in diameter. The second type is larger, often involving the entire surface of the cornea, but not deeper than Descemet\'s membrane. The third type is most extensive; the dermoid involves the cornea, anterior chamber, and iris stroma, and its posterior aspect is lined by the iris pigment epithelium. The various types are related to the time during fetal development at which the dermoid develops, with more severe types occurring earlier.

![Epibulbar dermoid. (a) Limbal dermoid. (b) Central corneal dermoid](IJO-67-1930-g002){#F2}

Conjunctival dermoid may occur as an isolated lesion or it can be associated with Goldenhar\'s syndrome. Hence, the patient should be evaluated for ipsilateral or bilateral preauricular skin appendages, hearing loss, eyelid coloboma, orbitoconjunctival dermolipoma, and cervical vertebral anomalies that comprise this nonheritable syndrome. Histopathologically, the conjunctival dermoid is a simple choristomatous malformation that consists of dense fibrous tissue lined by conjunctival epithelium with deeper dermal elements including hair follicles and sebaceous glands. The management of an epibulbar dermoid includes simple observation if the lesion is small and visually asymptomatic.
It is possible to excise the lesion for cosmetic reasons, but the remaining corneal scar is sometimes cosmetically unacceptable. Larger or symptomatic dermoids can produce visual loss from astigmatism. These can be approached by lamellar keratosclerectomy, with primary closure of overlying tissue if the defect is superficial or closure using a corneal graft if the defect is deep or full thickness. It has been reported that the cosmetic appearance may improve, but the refractive and astigmatic error and visual acuity may not change.\[[@ref49]\] When the lesion involves the central cornea, a lamellar or penetrating keratoplasty may be necessary and long-term amblyopia can be a problem.\[[@ref72]\] Occasionally, extensive dermoids involve the lateral canthus, and carefully planned excision with lateral canthal repair is necessary.

Dermolipoma {#sec2-5}
-----------

Dermolipoma is believed to be congenital, but it typically remains asymptomatic for years and may not be detected until adulthood, when it protrudes from the orbit through the conjunctival fornix superotemporally ([Fig. 3](#F3){ref-type="fig"}). It appears as a pale-yellow, soft, fluctuant, fusiform mass below the palpebral lobe of the lacrimal gland, best visualized with the eye in inferonasal gaze. It usually extends for a variable distance into the orbital fat and onto the bulbar conjunctiva, and occasionally it can extend anteriorly to reach the limbus. Unlike herniated orbital fat, dermolipoma can contain fine white hairs on its surface and it cannot be reduced with digital pressure into the orbit.

![Dermolipoma in superotemporal conjunctival fornix](IJO-67-1930-g003){#F3}

With computed tomography (CT) or magnetic resonance imaging (MRI), dermolipoma has features similar to orbital fat, from which it may be indistinguishable.
Histopathologically, it is lined by conjunctival epithelium on its surface, and the subepithelial tissue has variable quantities of collagenous connective tissue and adipose tissue. Pilosebaceous units and lacrimal gland tissue may occasionally be present. The majority of dermolipomas require no treatment, but larger symptomatic ones or those that are cosmetically unappealing can be managed by excision of the entire orbitoconjunctival lesion through a conjunctival forniceal approach or by simply removing the anterior portion of the lesion in a manner similar to that used to remove prolapsed orbital fat.

Epibulbar osseous choristoma {#sec2-6}
----------------------------

Epibulbar osseous choristoma is a rigid deposit of bone generally located in the bulbar conjunctiva superotemporally ([Fig. 4](#F4){ref-type="fig"}).\[[@ref70]\] It is believed to be congenital and typically remains undetected until palpated by the patient, often in the preteen years. It is clinically suspected due to its rock-hard consistency on palpation, although fibrous tissue tumors can feel similar. The diagnosis can be confirmed with ultrasonography or computed tomography to illustrate the calcium component. This tumor is generally best managed by periodic observation. Occasionally patients report a foreign body sensation, and symptomatic lesions can be excised with a circumtumoral conjunctival incision followed by dissection to bare sclera for full-thickness conjunctival resection.
For those tumors that might be adherent to the sclera, a superficial sclerectomy might be warranted.\[[@ref70]\]

![Epibulbar osseous choristoma on bulbar conjunctiva superotemporally, presenting as a firm, palpable mass](IJO-67-1930-g004){#F4}

Lacrimal gland choristoma {#sec2-7}
-------------------------

Lacrimal gland choristoma is a congenital lesion, discovered in young children as an asymptomatic pink stromal mass, typically in the superotemporal or temporal portion of the conjunctiva.\[[@ref45]\] It is speculated that this lesion represents small sequestrations of the embryonic evagination of the lacrimal gland from the conjunctiva. The lacrimal gland choristoma can masquerade as a focus of inflammation due to its pink color. Rarely, a cystic appearance ensues from this secretory mass if there is no connection to the conjunctival surface. Excisional biopsy is usually performed to confirm the diagnosis.

Respiratory choristoma {#sec2-8}
----------------------

In unique instances, a cystic choristoma, appearing as congenital sclerocorneal ectasia, is found. In one report, such a lesion was shown to contain respiratory mucosa.\[[@ref98]\]

Complex choristoma {#sec2-9}
------------------

The conjunctival dermoid and epibulbar osseous choristoma are termed simple choristomas, as they contain one tissue type, such as skin or bone. A complex choristoma contains a greater variety of tissues, like dermal appendages, lacrimal gland tissue, cartilage, bone, and occasionally other elements. Complex choristoma contains tissue derived from two germ layers. It is quite variable in its clinical appearance and may cover much of the epibulbar surface, or it may form a circumferential growth pattern around the limbus ([Fig. 5](#F5){ref-type="fig"}). For example, a tumor with extensive lacrimal tissue appears as a lobular pink mass, whereas one with dermal tissue appears yellow and thick, and one with cartilage displays a smooth blue-gray hue.
The complex choristoma has a peculiar association with the linear nevus sebaceous of Jadassohn.\[[@ref58][@ref75][@ref82]\] The nevus sebaceous of Jadassohn includes cutaneous features, with sebaceous nevus in the facial region, and neurologic features, including seizures, mental retardation, arachnoid cyst, and cerebral atrophy. The ophthalmic features of this syndrome include epibulbar complex choristoma and posterior scleral cartilage.\[[@ref82]\]

![Epibulbar complex choristoma that was found histopathologically to have cartilage and ectopic lacrimal gland](IJO-67-1930-g005){#F5}

The management of the complex choristoma depends upon the extent of the lesion. Observation or wide local excision with mucous membrane graft reconstruction are options. In the rare case of a very extensive lesion, where the lesion causes dense amblyopia with no hope for visual acuity, modified enucleation with ocular surface reconstruction may be necessary.

Benign Tumors of Surface Epithelium {#sec1-3}
===================================

Several benign tumors and related conditions can arise from the squamous epithelium of the conjunctiva.

Papilloma {#sec2-10}
---------

Squamous papilloma is a benign tumor, documented to be associated with human papillomavirus (subtypes 6, 11, 16, and 18) infection of the conjunctiva.\[[@ref55][@ref88]\] This tumor can occur in both children and adults. It is speculated that the virus is acquired through transfer from the mother\'s vagina to the newborn\'s conjunctiva as the child passes through the birth canal. Papilloma appears as a pink fibrovascular frond of tissue arranged in a sessile or pedunculated configuration. The numerous fine vascular channels ramify through the stroma beneath the epithelial surface of the lesion. In children, the lesion is usually small, multiple, and located in the inferior fornix ([Fig. 6](#F6){ref-type="fig"}).
In adults, it is usually solitary, more extensive, and can often extend to cover the entire corneal surface, simulating malignant squamous cell carcinoma. Histopathologically, the lesion shows numerous vascularized papillary fronds lined by acanthotic epithelium.

![Recurrent conjunctival papilloma in a child. (a) The fibrovascular mass caused bloody tears. (b) Following 3 months of oral cimetidine, the mass resolved](IJO-67-1930-g006){#F6}

In the case of a small sessile papilloma in a child, there are several treatment options. Sometimes, periodic observation allows for slow spontaneous resolution of the viral-produced tumor. Larger or more pedunculated lesions are generally symptomatic, with foreign body sensation, chronic mucous production, hemorrhagic tears, incomplete eyelid closure, and poor cosmetic appearance. These lesions are unlikely to show a favorable response to observation or steroids and are best managed by surgical excision. Complete removal of the mass without direct manipulation of the tumor (no touch technique) is generally advisable to avoid spreading of the tumor-related virus. Double freeze-thaw cryotherapy is applied to the remaining conjunctiva around the excised lesion in order to help prevent tumor recurrence. In some instances, the pedunculated tumor is frozen alone and allowed to slough off the conjunctival surface later. For some large unwieldy pedunculated tumors, complete cryotherapy of the mass down its stalk to its base is performed, and the mass is excised while in the frozen state. This is especially important for large lesions to allow for traction on the tumor without forceps manipulation. Closure is completed with absorbable sutures.
Topical interferon and mitomycin C have been employed for conjunctival papillomas.\[[@ref27][@ref47]\] For those lesions that show recurrence, oral cimetidine for several months can resolve the papillomavirus-related tumor by boosting the patient\'s immune system and stimulating regression of the mass ([Fig. 6](#F6){ref-type="fig"}).\[[@ref55]\]

Keratoacanthoma {#sec2-11}
---------------

The conjunctiva can give rise to benign reactive inflammatory lesions that simulate carcinoma, including pseudocarcinomatous hyperplasia and its variant, keratoacanthoma.\[[@ref39]\] In some instances a distinct nodule is found. This lesion appears gelatinous or leukoplakic, similar to squamous cell carcinoma of the conjunctiva, but its onset may be more rapid. Massive acanthosis, hyperkeratosis, and parakeratosis are found histopathologically.\[[@ref39]\] Treatment is complete resection, as this lesion may be difficult to differentiate from carcinoma both clinically and histopathologically.

Hereditary benign intraepithelial dyskeratosis {#sec2-12}
----------------------------------------------

Hereditary benign intraepithelial dyskeratosis (HBID) is a peculiar condition seen in an inbred isolate of mixed white, African-American, and Native American ancestry (the Haliwa Indians). This group resided initially in North Carolina. Hereditary benign intraepithelial dyskeratosis has subsequently been detected in several other parts of the United States. It is an autosomal dominant disorder characterized by bilateral elevated fleshy plaques on the nasal or temporal perilimbal conjunctiva ([Fig. 7](#F7){ref-type="fig"}).\[[@ref63]\] Similar plaques can occur on the buccal mucosa. It can remain relatively asymptomatic or it can cause severe redness and foreign body sensation. In some instances it can extend onto the cornea. It has no known malignant potential.
It is characterized histopathologically by acanthosis, dyskeratosis on the epithelial surface and deep within the epithelium, and prominent chronic inflammatory cells.

![Hereditary benign intraepithelial dyskeratosis in a young woman who was a descendent of a Haliwa Indian. The opposite eye had a similar lesion](IJO-67-1930-g007){#F7}

Hereditary benign intraepithelial dyskeratosis is a benign condition that does not usually require aggressive treatment. Smaller, less symptomatic lesions can be treated with ocular lubricants and judicious use of topical corticosteroids. Larger symptomatic lesions can be managed by local resection, with mucous membrane grafting if necessary.

Epithelial inclusion cyst {#sec2-13}
-------------------------

Conjunctival cysts can occur spontaneously or following inflammation, surgery, or nonsurgical trauma. Histopathologically, they are lined by conjunctival epithelium and are filled with clear fluid that often contains desquamated cellular debris ([Fig. 8](#F8){ref-type="fig"}). These cysts can be simply observed or they can be excised completely with primary closure of the conjunctiva.

![Epibulbar inclusion cyst with thick mucous from conjunctival glands](IJO-67-1930-g008){#F8}

Dacryoadenoma {#sec2-14}
-------------

Dacryoadenoma is a rare conjunctival tumor, noted in patients as a pink mass. In one report, this tumor was found in the inferior bulbar region of a 48-year-old woman.\[[@ref30]\] It is uncertain if the lesion is congenital or acquired. This benign tumor appears to originate from the surface epithelium and proliferate into the stroma, forming glandular lobules similar to the lacrimal gland.

Keratotic plaque {#sec2-15}
----------------

Keratotic plaque is a white limbal or bulbar conjunctival mass, usually in the interpalpebral region.\[[@ref76]\] It is composed of acanthosis and parakeratosis with keratinization of the epithelium. It appears similar to squamous cell carcinoma with leukoplakia.
Actinic keratosis {#sec2-16} ----------------- Actinic keratosis is a frothy, white lesion usually located over a chronically inflamed pingueculum or pterygium.\[[@ref76]\] It is also referred to as dysplasia, actinic keratosis variety. Histopathologically, it is composed of a proliferation of surface epithelium with keratosis. Clinically, it resembles squamous cell carcinoma of the conjunctiva. Malignant Lesions of Surface Epithelium {#sec1-4} ======================================= Squamous cell neoplasia can occur as a localized lesion confined to the surface epithelium (conjunctival intraepithelial neoplasia or dysplasia) or as a more invasive squamous cell carcinoma that has broken through the basement membrane and invaded the underlying stroma.\[[@ref2][@ref4][@ref25][@ref29][@ref36][@ref85][@ref92]\] The former has no potential to metastasize but the latter can gain access to the conjunctival lymphatics and occasionally metastasize to regional lymph nodes. It has been found that most squamous cell neoplasia is related to human papillomavirus infection of the conjunctival epithelium and this is most certain in those patients with bilateral squamous cell neoplasia and those immunosuppressed patients who develop this disease.\[[@ref48]\] The currently accepted term for the localized variety is conjunctival intraepithelial neoplasia (CIN), but others prefer the terms dysplasia (mild, moderate, or severe) and carcinoma-in-situ. When the abnormal cellular proliferation involves only partial thickness of the epithelium it is classified as mild CIN, a condition also called mild or moderate dysplasia. When it affects full thickness epithelium it is called severe CIN, a condition also called severe dysplasia. In these cases, there may be an intact surface layer of cells. Where there are no longer normal surface cells then the process is termed carcinoma-in-situ. 
It is stressed that these are histopathologic terms and the differentiation between mild CIN and severe CIN cannot be made clinically. Conjunctival intraepithelial neoplasia (CIN) {#sec2-17} -------------------------------------------- Clinically, CIN appears as a fleshy, sessile or minimally elevated lesion usually at the limbus in the interpalpebral fissure and less commonly in the forniceal or palpebral conjunctiva ([Fig. 9](#F9){ref-type="fig"}). The limbal lesion may extend for a variable distance into the epithelium of the adjacent cornea. A white plaque (leukoplakia) may occur on the surface of the lesion due to secondary hyperkeratosis. ![Conjunctival intraepithelial neoplasia (CIN; carcinoma-in-situ) with corneal involvement, displaying leukoplakia on both the conjunctiva and cornea](IJO-67-1930-g009){#F9} Histopathologically, mild CIN (dysplasia) is characterized by a partial-thickness replacement of the surface epithelium by abnormal epithelial cells that lack normal maturation. Severe CIN (severe dysplasia) is characterized by a nearly full-thickness replacement of the epithelium by similar cells. Carcinoma-in-situ represents full-thickness replacement by abnormal epithelial cells. Squamous cell carcinoma {#sec2-18} ----------------------- Squamous cell carcinoma represents an extension of abnormal epithelial cells through the basement membrane to gain access to the conjunctival stroma. Clinically, invasive squamous cell carcinoma is generally larger and more elevated than CIN ([Fig. 10](#F10){ref-type="fig"}). Leukoplakia may be variable. Uncommonly, lesions that are untreated or incompletely excised can invade through the corneoscleral lamella into the anterior chamber of the eye or they can transgress the orbital septum and invade the soft tissues of the orbit adjacent to the globe.\[[@ref29][@ref85]\] A rare variant of squamous cell carcinoma of the conjunctiva is the mucoepidermoid carcinoma.
Clinically, this variant occurs in older patients and has a yellow globular cystic component due to the presence of abundant mucous-secreting cells within cysts. It tends to be more aggressive than the standard squamous cell carcinoma and, therefore, deserves wider excision and closer follow-up.\[[@ref2][@ref25]\] The spindle cell variant of squamous cell carcinoma is likewise aggressive.\[[@ref7]\] ![Invasive squamous cell carcinoma of the conjunctiva. (a) Gelatinous limbal squamous cell carcinoma. (b) Nodular squamous cell carcinoma. (c) Flat diffuse squamous cell carcinoma of the cornea](IJO-67-1930-g010){#F10} Histopathologically, invasive squamous cell carcinoma is characterized by malignant squamous cells that have violated the basement membrane and have grown in sheets or cords into the stromal tissue. As mentioned above, the mucoepidermoid variant contains mucous-secreting cells that often produce mucous-containing cysts within the lesion. Even though the cells of invasive squamous cell carcinoma gain access to the blood vessels and lymphatic channels, regional and distant metastases are both rather uncommon. Patients who are medically immunosuppressed for organ transplantation or those with human immunodeficiency virus are at particular risk to develop conjunctival squamous cell carcinoma. In these cases, the risk for life-threatening metastatic disease is greater.\[[@ref51]\] The management of squamous cell carcinoma of the conjunctiva varies with the extent of the lesion.
In general, the management of lesions in the limbal area involves alcohol epitheliectomy for the corneal component and partial lamellar scleroconjunctivectomy with wide margins for the conjunctival component followed by freeze-thaw cryotherapy to the remaining adjacent bulbar conjunctiva, similar to the method used for limbal conjunctival melanoma.\[[@ref78][@ref79][@ref80]\] In some cases, microscopically controlled excision (Mohs surgery) is performed at the time of surgery to ensure tumor-free margins.\[[@ref3]\] Those tumors in the forniceal region can be managed by wide local resection and cryotherapy. In cases where excessive conjunctiva is sacrificed, a mucous membrane graft or amniotic membrane graft may be employed for reconstruction. In all cases, the full conjunctival component along with the underlying Tenon\'s fascia should be excised using the no touch technique as mentioned previously. A thin lamella of underlying sclera should be removed with the tumor for those in the limbal region where the tumor is adherent to the globe. The surgical management of conjunctival squamous cell carcinoma is similar to the management of conjunctival melanoma and is discussed further in the subsequent section on melanoma. For those patients with extensive tumors or those tumors that are recurrent, especially those with extensive corneal component, treatment with topical mitomycin C, 5-fluorouracil, or interferon is advised.\[[@ref19][@ref20][@ref32][@ref38][@ref56][@ref96][@ref97]\] We generally use mitomycin C for two cycles with close monitoring of the patient \[[Table 1](#T1){ref-type="table"}\].\[[@ref56]\] Melanocytic Tumors {#sec1-5} ================== There are several lesions that arise from the melanocytes of the conjunctiva and episclera \[[Table 2](#T2){ref-type="table"}\]. The most important ones include nevus, racial melanosis, primary acquired melanosis, and malignant melanoma. 
Ocular melanocytosis should be included in this discussion as its scleral pigmentation can masquerade as conjunctival pigmentation.

###### Differential Diagnosis of Pigmented Epibulbar Lesions

| Condition | Anatomical Location | Color | Depth | Margins | Laterality | Other Features | Progression |
|---|---|---|---|---|---|---|---|
| Nevus | Interpalpebral limbus usually | Brown or yellow | Stroma | Well defined | Unilateral | Cysts | \<1% progress to conjunctival melanoma |
| Racial melanosis | Limbus, bulbar, palpebral conjunctiva | Brown | Epithelium | Ill defined | Bilateral | Flat, no cysts | Very rare progression to conjunctival melanoma |
| Ocular melanocytosis | Bulbar conjunctiva | Gray | Episclera | Ill defined | Unilateral more often than bilateral | Congenital, usually 2 mm from limbus, often with periocular skin pigmentation | \<1% progress to uveal melanoma |
| Primary acquired melanosis (PAM) | Anywhere, but usually bulbar conjunctiva | Brown | Epithelium | Ill defined | Unilateral | Flat, no cysts | Progresses to conjunctival melanoma in nearly 50% of cases that show cellular atypia |
| Malignant melanoma | Anywhere | Brown or pink | Stroma | Well defined | Unilateral | Vascular nodule, dilated feeder vessels, may be non-pigmented | 32% develop metastasis by 15 years |

See [Figure 11](#F11){ref-type="fig"}, [Figure 12](#F12){ref-type="fig"}, [Figure 13](#F13){ref-type="fig"}, [Figure 14](#F14){ref-type="fig"}, [Figure 15](#F15){ref-type="fig"} for clinical illustrations.

Nevus {#sec2-19}
-----

The circumscribed nevus is the most common melanocytic tumor of the conjunctiva.
It generally becomes clinically apparent in the first or second decade of life as a discrete, variably pigmented, slightly elevated, sessile lesion that usually contains fine clear cysts on slit-lamp biomicroscopy ([Fig. 11](#F11){ref-type="fig"}).\[[@ref21][@ref54]\] It is typically located in the interpalpebral bulbar conjunctiva near the limbus and remains relatively stationary throughout life with less than 1% risk for transformation into malignant melanoma.\[[@ref21][@ref54]\] The interpalpebral location is so classic that one should doubt the diagnosis of nevus if a patient presents with a forniceal or palpebral pigmented mass and suspect primary acquired melanosis, racial melanosis, or malignant melanoma. Over time, a nevus can become more pigmented and the previously inapparent nonpigmented portions can acquire pigment, simulating growth. ![Conjunctival nevus. (a) Pigmented conjunctival nevus. (b) Nonpigmented conjunctival nevus](IJO-67-1930-g011){#F11} Histopathologically, a conjunctival nevus is composed of nests of benign melanocytes in the stroma near the basal layers of the epithelium.\[[@ref9]\] Like a cutaneous nevus, it can be junctional, compound, or deep. The best management is usually periodic observation with photographic comparison; if growth is documented, then local excision of the lesion should be considered. In some cases, excision for cosmetic reasons is desired. At the time of excision, the entire mass is removed using the no touch technique and, if it is adherent to the globe, then a thin lamella of underlying sclera is removed intact with the tumor.\[[@ref78]\] Standard double freeze-thaw cryotherapy is applied to the remaining conjunctival margins. These precautions are employed to prevent recurrence of the nevus and also to prevent recurrence should the lesion prove to be a melanoma.
Racial melanosis {#sec2-20} ---------------- Racial melanosis is a relatively common, bilateral condition of flat conjunctival pigmentation found in darkly pigmented individuals. This pigment is generally present at the limbus, often for 360°, and a variable amount of this pigment can be noted on the limbal cornea and bulbar conjunctiva ([Fig. 12](#F12){ref-type="fig"}). Uncommonly, this pigment involves the fornix and rarely the palpebral conjunctiva. This pigmentation can occasionally be mottled with a patchy appearance. It is extremely rare for conjunctival melanoma to arise from racial melanosis. Histopathologically, the pigmented cells are benign melanocytes located in the basal layer of the epithelium. The recommended management is periodic observation. ![Racial melanosis found bilaterally in patient with dark skin complexion](IJO-67-1930-g012){#F12} Ocular melanocytosis {#sec2-21} -------------------- Ocular melanocytosis is a congenital pigmentary condition of the periocular skin, sclera, orbit, meninges, and soft palate. Typically, there is no conjunctival pigment. However, this condition is commonly confused with primary acquired melanosis because of their similar appearance. In ocular melanocytosis, flat, gray-brown pigment scattered posterior to the limbus on the sclera is visualized through the thin overlying conjunctival tissue ([Fig. 13](#F13){ref-type="fig"}). The entire uvea is also generally affected by similar increased pigment. This condition imparts a 1 in 400 risk for the development of uveal melanoma and not conjunctival melanoma.\[[@ref87]\] Affected patients should be followed once or twice yearly for the development of uveal, orbital, or meningeal melanoma.
![Ocular melanocytosis with episcleral gray pigment, heavy uveal pigment, and little conjunctival pigment](IJO-67-1930-g013){#F13} Primary acquired melanosis (PAM) {#sec2-22} -------------------------------- Primary acquired melanosis is an important benign conjunctival pigmented condition that can give rise to conjunctival melanoma. In contrast to conjunctival nevus, it is acquired in middle age and appears diffuse, patchy, flat, and noncystic \[[Fig. 14](#F14){ref-type="fig"}\]. In contrast to ocular melanocytosis, the pigment is acquired, located within the conjunctiva, and appears brown, not gray, in color. The pigmentation can wax and wane over time.\[[@ref17][@ref18][@ref53]\] In contrast to racial melanosis, PAM generally is found in fair-skinned individuals as a unilateral patchy condition.\[[@ref22]\] ![Primary acquired melanosis of the conjunctiva, showing the characteristic irregular patchy flat pigmentation](IJO-67-1930-g014){#F14} Histopathologically, PAM is characterized by the presence of abnormal melanocytes near the basal layer of the epithelium. 
Pathologists should attempt to classify the melanocytes as having atypia or no atypia based on nuclear features and growth pattern.\[[@ref17][@ref18]\] PAM with atypia carries nearly 50% risk for ultimate evolution into malignant melanoma whereas PAM without atypia carries nearly 0% risk for melanoma development \[[Table 3](#T3){ref-type="table"}\].\[[@ref17][@ref18]\]

###### Histopathologic Classification of Primary Acquired Melanosis of the Conjunctiva and Risks for Evolution into Conjunctival Melanoma

| General Classification | Risk for Development of Conjunctival Melanoma |
|---|---|
| Primary acquired melanosis without atypia | 0% |
| Primary acquired melanosis with atypia | 46% |
| If atypical melanocytes in the epithelium are located in other than the basal layer of the epithelium | 90% |
| If atypical melanocytes show epithelioid cellular features (abundant cytoplasm) | 75% |

From Folberg *et al*.\[[@ref17]\] (Folberg R, McLean IW, Zimmerman LE. Primary acquired melanosis of the conjunctiva. Hum Pathol. 1985;16:136-143.)

The management of PAM depends on the extent of involvement and the association with melanoma. If there is only a small region of PAM, occupying less than three clock hours of the conjunctiva, then periodic observation or complete excisional biopsy and cryotherapy are options.\[[@ref78]\] If the PAM occupies more than three clock hours, then incisional map biopsy of all four quadrants is warranted, followed by double freeze-thaw cryotherapy to all affected pigmented sites. If the patient has a history of previous or current conjunctival or cutaneous melanoma or if there are areas of nodularity or vascularity within the presumed PAM that are suspicious for melanoma, then a more aggressive approach is warranted with complete excisional biopsy of the suspicious areas using the no touch technique as described previously.
Additional small incisional map biopsies should be performed in the regions of flat PAM and even in the apparently uninvolved quadrants of the bulbar conjunctiva to determine if there are melanocytes with atypia. Cryotherapy should be applied to all remaining pigmented areas. We manage patients who have PAM associated with melanoma more aggressively than those with PAM alone. If there is recurrent PAM on follow-up, prompt excisional biopsy and cryotherapy in the operating room or in the outpatient clinic setting is provided. Topical mitomycin C can also be beneficial, especially if there is recurrent corneal PAM; however, mitomycin C is not as effective for PAM as it is for squamous epithelial neoplasia. Malignant melanoma {#sec2-23} ------------------ Malignant melanoma of the conjunctiva most commonly arises from PAM, but it can also arise from a pre-existing nevus or de novo.\[[@ref50][@ref64]\] It typically arises in adults at a median age of 62 years, but rare cases of conjunctival melanoma in children have been recognized.\[[@ref64][@ref91]\] Conjunctival melanoma shows considerable clinical variability. It is generally a pigmented or tan, elevated conjunctival lesion that can be located on the limbal, bulbar, forniceal, or palpebral conjunctiva ([Fig. 15](#F15){ref-type="fig"}). Occasionally, the tumor shows predominance on the cornea, despite origin from the conjunctiva.\[[@ref93]\] Often prominent feeder vessels and surrounding flat PAM are present. 
Conjunctival melanoma can show both local tumor recurrence and distant metastasis (Tables [4](#T4){ref-type="table"}--[6](#T6){ref-type="table"}).\[[@ref40][@ref41][@ref50][@ref64]\] Multiple recurrences, especially those that occur within the orbit, frequently necessitate orbital exenteration.\[[@ref40][@ref64][@ref84]\] The ipsilateral facial lymph nodes, brain, lung, and liver are the most common sites of metastasis.\[[@ref15][@ref64]\] Histopathologically, conjunctival melanoma is composed of variably pigmented malignant melanocytes within the conjunctival stroma. There may be microscopic evidence of PAM or a nevus. ![Conjunctival melanoma. (a) Pigmented melanoma that arose de novo. (b) Pigmented melanoma that arose from primary acquired melanosis (left arrow). Note the flat extension of the melanoma into the cornea. (c) Nonpigmented melanoma, recurrent following previous excisions](IJO-67-1930-g015){#F15}

###### Risks for Local Tumor Recurrence, Exenteration, Metastasis, and Death in Patients with Conjunctival Melanoma

| Outcome | 5 Years | 10 Years | 15 Years |
|---|---|---|---|
| Recurrence (%) | 26 | 51 | 65 |
| Exenteration (%) | 8 | 16 | 32 |
| Metastasis (%) | 16 | 26 | 32 |
| Death (%) | 7 | 13 | na |

Length of follow-up by Kaplan-Meier life table analysis. na = not available. From Shields CL, Shields JA, Gunduz K, *et al*. Conjunctival melanoma: risk factors for recurrence, exenteration, metastasis, and death in 150 consecutive patients. Arch Ophthalmol. 2000;118:1497-1507.\[[@ref64]\]

###### Clinical Factors Predictive of Local Tumor Recurrence Following Resection of Conjunctival Melanoma

| Factor | *P* | Relative Risk |
|---|---|---|
| Tumor location extralimbal | 0.01 | 2.3 |
| Tumor extending to surgical margin (histopathologically) | 0.02 | 2.9 |

From Shields *et al*.\[[@ref64]\]

###### Clinical Factors Predictive of Tumor Metastasis from Conjunctival Melanoma

| Factor | *P* | Relative Risk |
|---|---|---|
| Tumor extending to surgical margin (histopathologically) | 0.005 | 5.7 |
| Tumor location extralimbal | 0.03 | 3.1 |

From Shields *et al*.\[[@ref64]\]

The management of conjunctival melanoma varies with the extent of the lesion.\[[@ref52]\] This malignancy is particularly difficult to treat. Despite excellent microscopic excision of the mass, further disease can develop from associated PAM in 26% of patients by 5 years and 65% of patients by 15 years of follow-up \[[Table 4](#T4){ref-type="table"}\].\[[@ref64]\] Classic limbal tumors are removed by absolute alcohol epitheliectomy for the flat corneal component and wide no touch, partial lamellar scleroconjunctivectomy with 4 mm margins followed by double freeze-thaw cryotherapy for the conjunctival portion. Larger lesions that extend into the forniceal region or orbit may require more extensive excision, always with tumor-free margins encapsulating the tumor and with a no touch, dry technique ([Fig. 1](#F1){ref-type="fig"}). Closure is achieved by primary apposition of conjunctiva or with conjunctival rotational flaps, mucous membrane graft from the opposite eye or buccal mucosa, or amniotic membrane transplantation.\[[@ref60]\] Often, fornix-deepening sutures or a symblepharon ring is required to reform the fornix.
Lesions that extend into the globe may require a modified enucleation and those that extend into the orbit may require orbital exenteration as described above.\[[@ref18][@ref64][@ref81][@ref84]\] Paridaens and associates found that early exenteration did not improve life prognosis.\[[@ref40]\] Shields and associates found tumor related death occurred in 7% of patients at 5 years and 13% at 8 years.\[[@ref64]\] The risk factors for death using multivariate analysis included initial symptoms (lump) (*p* = 0.004) and pathology findings (de novo melanoma without primary acquired melanosis) (*p* = 0.05). The technique of initial surgery (using complete excisional biopsy with the no touch technique combined with cryotherapy to remaining tumor free margins) was shown to be an important factor in preventing eventual tumor recurrence (*p* = 0.07), metastasis (*p* = 0.03), and death (*p* = 0.006) in the univariate analysis, but did not reach significance in the multivariate analysis.\[[@ref64]\] Conditions that can simulate conjunctival melanocytic tumors {#sec2-24} ------------------------------------------------------------ There are several benign, non-neoplastic conditions that can resemble conjunctival PAM or melanoma and these include pingueculum, pterygium, Axenfeld\'s nerve loops at the site of a scleral emissarial canal, mascara deposition in the inferior fornix, silver deposition on the entire conjunctival surface in patients who have used argyrol eyedrops, gunpowder deposition in patients exposed to gunpowder explosions, adrenochrome pigment in the inferior fornix in patients using epinephrine eyedrops, hemorrhagic conjunctival cyst following previous surgery, pigmented cells trapped within a non-melanocytic tumor (fellow travelers),\[[@ref83]\] ochronosis pigmentation at the site of muscle insertion and in pingueculum in patients with alkaptonuria, and calcified Cogan\'s scleral plaque at the horizontal rectus muscle insertions in older adults.\[[@ref76]\] 
Clinicians managing patients with conjunctival malignancies should understand and recognize these pseudomelanomas. Vascular Tumors {#sec1-6} =============== Pyogenic granuloma {#sec2-25} ------------------ Pyogenic granuloma is a proliferative fibrovascular response to prior tissue insult by inflammation, surgery, or nonsurgical trauma. It is sometimes classified as a polypoid form of acquired capillary hemangioma.\[[@ref16]\] It appears clinically as an elevated red mass, often with a florid blood supply. Microscopically, it is composed of granulation tissue with chronic inflammatory cells and numerous small-caliber blood vessels ([Fig. 16](#F16){ref-type="fig"}). Because the lesion is rarely either pyogenic or granulomatous, the term "pyogenic granuloma" may be a misnomer. Pyogenic granuloma will sometimes respond to topical corticosteroids but many cases ultimately require surgical excision. In bothersome recurrent cases, low-dose plaque radiotherapy can be applied.\[[@ref26]\] ![Pyogenic granuloma](IJO-67-1930-g016){#F16} Capillary hemangioma {#sec2-26} -------------------- Capillary hemangioma of the conjunctiva generally presents in infancy, several weeks following birth, as a red stromal mass, sometimes associated with cutaneous or orbital capillary hemangioma ([Fig. 17](#F17){ref-type="fig"}). Similar to its cutaneous counterpart, the conjunctival mass might enlarge over several months and then spontaneously involute. Management most commonly is observation, but surgical resection or local or systemic prednisone can be employed. ![Capillary hemangioma of the conjunctiva in a newborn infant](IJO-67-1930-g017){#F17} Cavernous hemangioma {#sec2-27} -------------------- Cavernous hemangioma of the conjunctiva is rare.\[[@ref94]\] This benign tumor appears as a red or blue lesion usually in the deep stroma in young children ([Fig. 18](#F18){ref-type="fig"}).
It may be similar to the orbital cavernous hemangioma that is generally diagnosed in young adults. It can be managed by local resection. ![Cavernous hemangioma of the conjunctiva in a young child](IJO-67-1930-g018){#F18} Racemose hemangioma {#sec2-28} ------------------- Occasionally, a dilated arteriovenous communication without an intervening capillary bed (racemose hemangioma) is found in the conjunctiva. This appears as a loop or neatly wound monolayer of a dilated, noncrossing vessel in the stroma with no evident stimulus or planned direction. It can remain stable for years and is generally monitored conservatively. It is important to rule out Wyburn--Mason syndrome in these cases. Lymphangioma {#sec2-29} ------------ Conjunctival lymphangioma can occur as an isolated conjunctival lesion or, more often, it represents a superficial component of a deeper diffuse orbital lymphangioma.\[[@ref95]\] It usually becomes clinically apparent in the first decade of life and appears as a multiloculated mass containing variable-sized clear dilated cystic channels ([Fig. 19](#F19){ref-type="fig"}). In most instances, one sees blood in many of the cystic spaces. These have been called "chocolate cysts." The treatment of conjunctival lymphangioma is often extremely difficult because surgical resection or radiotherapy cannot completely eradicate the mass. ![Lymphangioma of the conjunctiva](IJO-67-1930-g019){#F19} Varix {#sec2-30} ----- Varix is a venous malformation that can be found in the orbit and rarely the conjunctiva. It is a mass of dilated venous channels that can enlarge with the Valsalva maneuver. Some authorities believe that this condition is in the spectrum of lymphangioma. Treatment involves cautious observation. If clotted and painful, cold compresses and aspirin may be useful.
Surgical resection should be cautiously employed due to the risk for prolonged bleeding at surgery.\[[@ref71]\] Hemangiopericytoma {#sec2-31} ------------------ Hemangiopericytoma is a tumor composed of the pericytes that surround blood vessels.\[[@ref24]\] It can show both benign and malignant cytological features. It appears as a red conjunctival mass originating from the stroma. Wide surgical resection with tumor-free margins is advised. Kaposi\'s sarcoma {#sec2-32} ----------------- Kaposi\'s sarcoma is best known as a cutaneous malignancy that occurs in elderly immunosuppressed patients. With the advent of acquired immune deficiency syndrome (AIDS), this tumor has become more common and often affects mucous membranes, including conjunctiva. Clinically, it appears as one or more reddish vascular masses that may resemble a hemorrhagic conjunctivitis ([Fig. 20](#F20){ref-type="fig"}). It is moderately responsive to chemotherapy and markedly responsive to low dose radiotherapy.\[[@ref69]\] ![Kaposi\'s sarcoma of the conjunctiva with typical surrounding hemorrhage](IJO-67-1930-g020){#F20} Fibrous Tumors {#sec1-7} ============== Fibroma {#sec2-33} ------- Fibroma is a rare conjunctival tumor that appears as a white stromal mass, either unifocal or multifocal.\[[@ref31]\] Surgical resection is advised. Fibrous histiocytoma {#sec2-34} -------------------- Fibrous histiocytoma is a rare mass of the conjunctiva and is comprised of fibroblasts and histiocytes. Clinically and histopathologically it resembles many other amelanotic stromal tumors. In the conjunctiva it can be benign, locally invasive, or malignant. Wide excision with tumor-free margins is advised. Nodular fasciitis {#sec2-35} ----------------- Nodular fasciitis is a benign proliferation of connective tissue that most commonly occurs in the skin and less commonly in the eyelid, orbit, and conjunctiva. Clinically and histopathologically it can resemble fibrosarcoma. 
The lesion appears as a solitary white mass in Tenon\'s fascia. Complete excision is advised as the lesion can recur. Neural Tumors {#sec1-8} ============= Neural tumors of the conjunctiva are rare. They tend to manifest a more yellow appearance than the fibrous tumors. Neurofibroma {#sec2-36} ------------ Neurofibroma can occur in the conjunctiva as a solitary mass or as a diffuse or plexiform variety. The former is not usually associated with systemic conditions and the latter is generally a part of von Recklinghausen\'s neurofibromatosis.\[[@ref58][@ref75]\] The solitary tumor is a slowly enlarging elevated stromal mass that is best managed by complete surgical resection. The plexiform type is more difficult to surgically excise and debulking procedures are often necessary. Neurilemoma {#sec2-37} ----------- Neurilemoma, also known as schwannoma, is a benign proliferation of the Schwann cells that surround peripheral nerves. This tumor more commonly arises in the orbit, but there are reports of similar rare tumors in the conjunctiva.\[[@ref44]\] Clinically, this lesion is a yellowish-pink, nodular mass in the stroma. Complete excision is warranted to minimize recurrence. Granular cell tumor {#sec2-38} ------------------- Granular cell tumor is a rare tumor of disputed origin; currently, most authorities speculate that it is of Schwann cell origin.\[[@ref76]\] This benign tumor clinically appears smooth, vascular, and pink, and is located in the stroma or within Tenon\'s fascia. Histopathologically, it is composed of large round cells with pronounced granularity to the cytoplasm. Complete excision is advised. Histiocytic Tumors {#sec1-9} ================== Xanthoma {#sec2-39} -------- Xanthoma most often occurs within the cutaneous dermis, near extensor surfaces, and its location on the conjunctiva is exceptionally rare. Conjunctival xanthoma appears as a yellow subepithelial smooth mass affecting one or both epibulbar surfaces.
Bilateral conjunctival involvement has been found in a condition termed xanthoma disseminatum. Histopathologically, a subepithelial infiltrate of lipidized histiocytes, eosinophils, and Touton giant cells is seen. Juvenile xanthogranuloma {#sec2-40} ------------------------ Juvenile xanthogranuloma is a relatively common cutaneous condition that presents as painless, pink skin papules with spontaneous resolution, generally in children under the age of 2 years. Rarely, conjunctival, orbital, and intraocular involvement is noted. In the conjunctiva, the mass appears as an orange-pink stromal mass, typically in young adults ([Fig. 21](#F21){ref-type="fig"}). If the classic skin lesions are noted, the diagnosis is established clinically and treatment with observation or topical steroid ointment is provided. Otherwise, biopsy is suggested and recognition of the typical histopathologic features of histiocytes admixed with Touton\'s giant cells confirms the diagnosis. ![Juvenile xanthogranuloma of the conjunctiva in a child](IJO-67-1930-g021){#F21} Reticulohistiocytoma {#sec2-41} -------------------- Reticulohistiocytoma is a rare tumor, often found as part of a systemic multicentric reticulohistiocytosis. Clinically, the tumor appears as a pink, vascular limbal mass in an adult. Histopathologically, it is composed of large histiocytes with granular cytoplasm.\[[@ref13]\] Myxoid Tumors {#sec1-10} ============= Myxoma {#sec2-42} ------ Myxoma is a rare conjunctival tumor that appears as an orange-pink mass within the stroma. The tumors are slow growing, freely movable solitary lesions located usually in the temporal bulbar conjunctiva.
Histologically, they are hypocellular and composed of stellate and spindle-shaped cells interspersed in a loose stroma.\[[@ref43][@ref68][@ref77]\]

Myogenic Tumors {#sec1-11}
===============

Rhabdomyosarcoma {#sec2-43}
----------------

Ophthalmic rhabdomyosarcoma is generally regarded as a primary orbital tumor; however, it can occur primarily in the conjunctiva and even within the globe.\[[@ref66]\] Conjunctival rhabdomyosarcoma appears as a pink, vascular mass with rapid growth, usually over 1 to 2 months. Complete excisional biopsy is advised, and adjunctive therapy with chemotherapy and possibly radiotherapy is warranted depending on many factors.\[[@ref66]\]

Lipomatous Tumors {#sec1-12}
=================

Lipoma {#sec2-44}
------

Conjunctival lipoma is quite rare and generally found in adults as a yellowish-pink stromal mass.\[[@ref68][@ref77]\] It is generally of the pleomorphic type, with large lipid vacuoles surrounded by stellate cells.

Herniated orbital fat {#sec2-45}
---------------------

Occasionally, orbital fat presents in the conjunctiva as a herniation from the superotemporal orbit. The condition is often bilateral and represents a deficiency in the ability of the orbital connective tissue to maintain the proper location of the normal orbital fat. Clinically, the mass is deep to Tenon's fascia and is most prominent on inferonasal gaze ([Fig. 22](#F22){ref-type="fig"}). Digital reposition of the fat into the orbit can be performed, but is only temporary. Management is observation, unless the condition causes symptoms of dry eye from eyelid malposition. In these cases, resection of the herniated fat and resuspension of the fat in its normal orbital position is advised. Histopathologically, the tissue comprises large lipid cells.
Daniel and coauthors recently described six patients with typical herniated orbital fat that proved on histopathology to have pleomorphic lipoma, with large pleomorphic cells within the adipose tissue arranged in a floret-like pattern.\[[@ref12]\] They noted the clinical overlap between these two conditions.

![Herniated orbital fat](IJO-67-1930-g022){#F22}

Liposarcoma {#sec2-46}
-----------

Liposarcoma of the conjunctiva has been rarely recognized and shows clinical features similar to lipoma. Histopathologically, neoplastic stellate lipid cells and signet-ring type cells have been observed.\[[@ref77]\]

Lymphoid Tumors {#sec1-13}
===============

Lymphoid tumors can occur in the conjunctiva as isolated lesions or they can be a manifestation of systemic lymphoma.\[[@ref6][@ref8][@ref34][@ref37][@ref61]\] Clinically, the lesion appears as a diffuse, slightly elevated pink mass located in the stroma or deep to Tenon's fascia, most commonly in the forniceal region ([Fig. 23](#F23){ref-type="fig"}). This appearance is similar to that of smoked salmon; hence it is termed the "salmon patch."\[[@ref61]\] It is not usually possible to differentiate clinically between a benign and a malignant lymphoid tumor. Therefore, biopsy is necessary to establish the diagnosis, and a systemic evaluation should be done in all affected patients to exclude the presence of systemic lymphoma \[[Table 7](#T7){ref-type="table"}\]. Histopathologically, sheets of lymphocytes are found and classified as reactive lymphoid hyperplasia or malignant lymphoma. Most are B cell lymphomas (non-Hodgkin's type). Rarely, T cell lymphoma is noted.\[[@ref62]\] Treatment of the conjunctival lesion should include chemotherapy if the patient has systemic lymphoma, or external beam irradiation (2,000--4,000 cGy) if the lesion is localized to the conjunctiva. Other options include excisional biopsy and cryotherapy,\[[@ref14]\] local interferon injections, or observation.

![Conjunctival lymphoma. (a) Limbal tumor. (b) Forniceal tumor](IJO-67-1930-g023){#F23}

###### Risks for the Development of Systemic Lymphoma in Patients who Present with Conjunctival Lymphoid Infiltrate and No Sign of Systemic Lymphoma

|                                               | Development of Systemic Lymphoma |    |    |
|-----------------------------------------------|----------------------------------|----|----|
| Generally, if conjunctival lymphoid tumor (%) | 7                                | 15 | 28 |
| Specifically, if conjunctival lymphoma (%)    | 12                               | 38 | 79 |

From Shields, C.L., Shields, J.A., Carvalho, C. *et al*. Conjunctival lymphoid tumors: clinical analysis of 117 cases and relationship to systemic lymphoma. Ophthalmology. 2001;108:979-984.

Leukemia {#sec1-14}
========

Leukemia generally manifests in the ocular region as hemorrhages from associated anemia and thrombocytopenia rather than as leukemic infiltration.\[[@ref46]\] However, leukemic infiltration can be found with chronic lymphocytic leukemia. In these cases, the tumor appears as a pink smooth mass within the conjunctival stroma, either at the limbus or the fornix, similar to a lymphoid tumor. Biopsy reveals sheets of large leukemic cells. Treatment of the systemic condition is advised, with secondary resolution of the conjunctival infiltration.

Metastatic Tumors {#sec1-15}
=================

Metastatic tumors rarely occur in the conjunctiva, but conjunctival metastasis can occur from breast carcinoma, cutaneous melanoma, and other primary tumors.\[[@ref33]\] Metastatic carcinoma appears as one or more fleshy, pink, vascularized conjunctival stromal tumors ([Fig. 24](#F24){ref-type="fig"}). Metastatic melanoma to the conjunctiva is usually pigmented.\[[@ref33]\]

![Metastatic breast carcinoma to the conjunctiva](IJO-67-1930-g024){#F24}

Secondary Conjunctival Involvement from Adjacent Tumors {#sec1-16}
=======================================================

The conjunctiva can be secondarily involved by tumors of adjacent structures, particularly by direct extension from tumors of the eyelids.
The most important tumor to exhibit this behavior is sebaceous gland carcinoma of the eyelid.\[[@ref5][@ref28]\] This tumor can exhibit pagetoid invasion and extend directly into the conjunctival epithelium. This can result in a clinical picture compatible with chronic unilateral blepharoconjunctivitis. Uveal melanoma in the ciliary body region can extend extrasclerally into the subconjunctival tissues, simulating a primary conjunctival tumor. Rhabdomyosarcoma of the orbit, a tumor typically found in children, occasionally presents first with its conjunctival component before the mass is discovered in the orbit.\[[@ref66][@ref73]\]

Caruncular Tumors and Cysts {#sec1-17}
===========================

The caruncle is a unique anatomic structure that contains elements of both conjunctiva and skin. The tumors and related lesions that develop in the caruncle are similar to those that occur in mucous membranes and cutaneous structures. By histopathologic analysis, 95% of caruncular tumors are benign and 5% are malignant.\[[@ref35]\] The most common lesions include papilloma and nevus \[[Table 8](#T8){ref-type="table"}\] ([Fig. 25](#F25){ref-type="fig"}).\[[@ref35][@ref67]\] Other caruncular lesions include pyogenic granuloma, inclusion cyst, sebaceous hyperplasia, sebaceous adenoma, and oncocytoma.\[[@ref59]\] Malignant tumors such as squamous cell carcinoma, melanoma, lymphoma, and sebaceous carcinoma are relatively rare in the caruncle. The oncocytoma is a benign tumor that occurs more commonly in the lacrimal or salivary glands. In the caruncle, it probably arises from accessory lacrimal gland tissue and often has a blue cystic appearance ([Fig. 25](#F25){ref-type="fig"}). The treatment of most caruncular masses is either observation or local resection, depending on the final diagnosis.
###### Types and Frequency of Tumors of the Caruncle: Comparison of Two Major Surveys

| Lesions (%)                   | Luthra *et al*.^35^ (*n*=112) | Shields *et al*.^67^ (*n*=57) |
|-------------------------------|-------------------------------|-------------------------------|
| Papilloma                     | 13                            | 32                            |
| Nevus                         | 43                            | 24                            |
| Pyogenic granuloma            | 3                             | 9                             |
| Epithelial inclusion cyst     | 4                             | 7                             |
| Chronic inflammation          | 4                             | 7                             |
| Oncocytoma                    | 4                             | 4                             |
| Normal caruncle               | 0                             | 4                             |
| Sebaceous gland hyperplasia   | 8                             | 2                             |
| Sebaceous gland adenoma       | 0                             | 2                             |
| Lipogranuloma                 | 0                             | 2                             |
| Seborrheic keratosis          | 1                             | 2                             |
| Lymphangiectasia              | 0                             | 2                             |
| Histiocytic lymphoma          | 0                             | 2                             |
| Squamous cell carcinoma       | 0                             | 2                             |
| Basal cell carcinoma          | 0                             | 2                             |
| Reactive lymphoid hyperplasia | 4                             | 0                             |
| Foreign body granuloma        | 3                             | 0                             |
| Malignant melanoma            | 2                             | 0                             |
| Capillary hemangioma          | 2                             | 0                             |
| Senile keratosis              | 1                             | 0                             |
| Freckle                       | 1                             | 0                             |
| Adrenochrome pigment          | 1                             | 0                             |
| Cavernous hemangioma          | 1                             | 0                             |
| Dermoid                       | 1                             | 0                             |
| Granular-cell myeloblastoma   | 1                             | 0                             |
| Plasmacytoma                  | 1                             | 0                             |
| Apocrine hydrocystoma         | 1                             | 0                             |
| Pilar cyst                    | 1                             | 0                             |
| Sebaceous gland carcinoma     | 1                             | 0                             |
| Ectopic lacrimal gland        | 1                             | 0                             |

From Luthra, C.L., Doxanas, M.T., and Green, W.R. Lesions of the caruncle. A clinicopathologic study. Surv Ophthalmol. 1978;23:183-195, and Shields, C.L., Shields, J.A., White, D., and Augsburger, J.J. Types and frequency of lesions of the caruncle. Am J Ophthalmol. 1986;102:771-778.

![Caruncular tumors. (a) Papilloma of the caruncle. (b) Nevus of the caruncle. (c) Oncocytoma of the caruncle](IJO-67-1930-g025){#F25}

Miscellaneous Lesions that can Simulate Conjunctival Neoplasms {#sec1-18}
==============================================================

A number of non-neoplastic conditions can simulate neoplasms. These include pingueculum, pterygium, foreign body, inflammatory granuloma, amyloidosis, and others.\[[@ref76]\] In most instances, the history and clinical findings should make the diagnosis obvious. In some instances, however, excision of the mass may be necessary in order to exclude a neoplasm.
Method of Literature Search {#sec1-19}
===========================

A comprehensive literature search over the past 30 years was derived from PubMed using the general search words *conjunctiva, cornea, caruncle, tumor, neoplasia, cancer, and malignancy*. Additional search words were input for each of the 47 specific diagnostic entities listed in the outline, from *dermoid* to *liposarcoma* to *caruncle tumor*. The search words *conjunctiva tumor* yielded 77 pages of 1,536 references. The search words *conjunctiva neoplasia* yielded 74 pages of 1,473 references, *conjunctiva melanoma* yielded 19 pages of 364 references, and *conjunctiva squamous cell carcinoma* produced 13 pages of 249 references. Additional references were gathered from published articles that provided a literature review of a topic. References used in this report included those that represented the first or second report in the literature of an entity or treatment of an entity, those that represented substantial case series of certain entities, and those that were particularly well-written, well-illustrated, or of recent publication. English-language articles were used, and non-English articles were included if they met the above criteria.

Financial support and sponsorship {#sec2-47}
---------------------------------

Nil.

Conflicts of interest {#sec2-48}
---------------------

There are no conflicts of interest.
- General Considerations
  - Anatomy
  - Diagnostic Approaches
  - Management
    - Observation
    - Incisional biopsy
    - Excisional biopsy
    - Cryotherapy
    - Chemotherapy
    - Radiotherapy
    - Modified enucleation
    - Orbital exenteration
    - Mucous membrane graft
- Congenital Tumors
  - Dermoid
  - Dermolipoma
  - Epibulbar osseous choristoma
  - Lacrimal gland choristoma
  - Respiratory choristoma
  - Complex choristoma
- Benign Tumors of Surface Epithelium
  - Papilloma
  - Keratoacanthoma
  - Hereditary benign intraepithelial dyskeratosis
  - Epithelial inclusion cyst
  - Dacryoadenoma
  - Keratotic plaque
  - Actinic Keratosis
- Malignant Tumors of Surface Epithelium
  - Conjunctival intraepithelial neoplasia (CIN)
  - Invasive squamous cell carcinoma (SCC)
- Melanocytic Tumors
  - Nevus
  - Racial melanosis
  - Ocular melanocytosis
  - Primary acquired melanosis (PAM)
  - Malignant melanoma
  - Conditions that can simulate melanocytic tumors
- Vascular Tumors
  - Pyogenic granuloma
  - Capillary hemangioma
  - Cavernous hemangioma
  - Racemose hemangioma
  - Lymphangioma
  - Varix
  - Hemangiopericytoma
  - Kaposi's sarcoma
- Fibrous Tumors
  - Fibroma
  - Fibrous histiocytoma
  - Nodular fasciitis
- Neural Tumors
  - Neurofibroma
  - Neurilemoma
  - Granular cell tumor
- Histiocytic Tumors
  - Xanthoma
  - Juvenile xanthogranuloma
  - Reticulohistiocytoma
- Myxoid Tumors
  - Myxoma
- Myogenic
  - Rhabdomyosarcoma
- Lipomatous Tumors
  - Lipoma
  - Herniated orbital fat
  - Liposarcoma
- Lymphoid Tumors
- Leukemia
- Metastatic Tumors
- Secondary Tumors
- Caruncular Tumors and Cysts
- Miscellaneous Lesions that Can Simulate Conjunctival Neoplasms
- Method of Literature Search
List of Parliamentary constituencies in the North East (region)

The region of North East England is divided into 29 parliamentary constituencies, made up of 19 borough constituencies and 10 county constituencies. Since the 2019 general election, 19 are represented by Labour MPs and 10 by Conservative MPs.

Constituencies

Proposed constituencies

As part of the Sixth Periodic Review of Westminster constituencies, the Boundary Commission for England published in 2018 the following new constituencies covering the North East for the next United Kingdom general election.

County Durham:
- Billingham and Sedgefield
- City of Durham and Easington
- Houghton and Seaham
- Stockton and Yarm (formerly in Cleveland)

North Yorkshire (formerly in Cleveland):
- Middlesbrough and Eston
- Middlesbrough South and Thornaby
- Redcar and East Cleveland

Northumberland:
- Berwick and Morpeth
- Blyth and Ashington
- Hexham and Cramlington

Tyne and Wear:
- Newcastle upon Tyne North West

See also
- List of United Kingdom Parliament constituencies
- List of Parliamentary constituencies in Cleveland
- List of Parliamentary constituencies in County Durham
- List of Parliamentary constituencies in Northumberland
- List of Parliamentary constituencies in Tyne and Wear

Notes

References

Category:Parliamentary constituencies in North East England
Q: How to Deploy "SQL Server Express + EF" Application

It's my first time deploying an application which uses a SQL Server Express database. I'm using Entity Framework Model First to connect to the database, and I created a setup wizard with InstallShield to install the app. These are the steps I've done to install the application on the destination computer:

1. Install MS SQL Server Express (DEST)
2. Install the program using the setup file (DEST)
3. Detach the database from SQL Server and copy the related .mdf and .ldf files to the destination computer
4. Attach the database files on the destination computer using SQL Server Management Studio

I know server names and SQL instance names are different, and my program can't run correctly with the old connection string. I'm a beginner at this, and I want to know what I should do on the destination computer to make the program run. Should I find a way to change the connection string at runtime? Or is there any way to modify the InstallShield project so it does this work for me? (InstallShield is the Professional edition.) In my searches I saw that WiX can do this, but I find it complicated, and I don't have enough time to learn it. I need to deploy the app ASAP. Thanks a lot.

A: A few hints for using LocalDB in your project:

1. Download SQL Express LocalDB 2014 here. You can install it silently with a single command like this: msiexec /i SqlLocalDB.msi /qn IACCEPTSQLLOCALDBLICENSETERMS=YES
2. Include your .mdf in your VS project and set it in properties to Copy if newer, so that it gets copied to your bin folder during build and is automatically included in the installer.
3. At your app startup (in app.cs), check if the database file exists in the desired location (e.g. %PUBLIC%\YourApp\Data) (WPF desktop application with the .mdf used locally by all local users). If not, copy the .mdf file from your app install dir to your data dir.
Modify app.config so that your connection string looks like:

<add name="MyContextLocalDB"
     connectionString="Server=(localdb)\MSSQLLocalDB; Integrated Security=True; AttachDBFilename=|DataDirectory|\MyDatabase.mdf; Connection Timeout = 30"
     providerName="System.Data.SqlClient" />

The connection timeout should be increased, since the LocalDB exe is launched when you first try to connect to it.

You can also use Database.CreateIfNotExists, but I have never tried it.
Plewki

Plewki may refer to the following places:
- Plewki, Ostrołęka County, in Masovian Voivodeship (east-central Poland)
- Plewki, Wyszków County, in Masovian Voivodeship (east-central Poland)
- Plewki, Podlaskie Voivodeship (north-east Poland)
- Plewki, Warmian-Masurian Voivodeship (north Poland)
Q: Is it possible to toggle different divs by clicking on different elements using the same function?

Say I have 50 divs, like this:

<div class="btn1"></div> //Toggles the first container to appear
<div class="btn2"></div> //Toggles the second container to appear
<div class="btn3"></div> //Toggles the third container to appear

And another 50 divs that contain information, like this:

<div class="container-1"><h1>This is the first container</h1></div>
<div class="container-2"><h1>This is the second container</h1></div>
<div class="container-3"><h1>This is the third container</h1></div>

Is it possible to make the corresponding div toggle when each button is clicked, with just one function? I have read a little about delegation and operating on parents/siblings, but it only seems to work with multiple buttons opening the same container, rather than each button opening its own container. Somehow I don't think writing a function for every div is the way to go.

A: Yes, it is possible. Basically you need to add a reference to the target on the clicked object, so the click event will know what to show/hide:

$(function() {
  $('.btn').on('click', function(e) {
    var $dataTarget = $($(this).data('target'));
    $dataTarget.show().siblings('.container').hide();
  });
});

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="btn btn1" data-target=".container-1"></div> //Toggles the first container to appear
<div class="btn btn2" data-target=".container-2"></div> //Toggles the second container to appear
<div class="btn btn3" data-target=".container-3"></div> //Toggles the third container to appear
<div class="container container-1">
  <h1>This is the first container</h1>
</div>
<div class="container container-2">
  <h1>This is the second container</h1>
</div>
<div class="container container-3">
  <h1>This is the third container</h1>
</div>

This will show the container targeted, then hide the other containers.
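If retrofitting a data-target attribute onto 50 existing buttons is impractical, one alternative (a sketch of my own, not part of the answer above; it relies on the btnN/container-N naming convention from the question) is to derive the container selector from the button's class name with a single helper:

```javascript
// Derive ".container-N" from a class string containing "btnN".
// Kept as a pure function so it can be tested without a DOM.
function targetSelector(className) {
  var match = /\bbtn(\d+)\b/.exec(className);
  return match ? '.container-' + match[1] : null;
}

// One delegated jQuery handler could then serve every button (illustrative):
// $(document).on('click', '[class*="btn"]', function () {
//   var sel = targetSelector(this.className);
//   if (sel) { $('[class*="container-"]').hide(); $(sel).show(); }
// });
```

The helper returns null for class strings that do not follow the convention, so unrelated elements are simply ignored.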
Q: Eslint: no-duplicates Resolve error: unable to load resolver "node"

I just updated my project (SPA with VueJS and Quasar Framework) today with npm update and now I can't run it. I am getting the error "no-duplicates Resolve error: unable to load resolver "node"" in many different modules. It is always pointing at 1:1. I have no idea what is going on, as it was all working fine before... Any suggestions?

A: Had the same issue. I think some dependencies have been removed. To fix it, I did:

npm install eslint-plugin-import --save-dev
npm install babel-preset-env --save-dev

Then I edited .eslintrc.js, modifying extends to look like this:

extends: [
  'standard',
  'plugin:import/errors',
  'plugin:import/warnings',
],

adding the import plugin like:

plugins: [
  'html',
  'import'
],

and then adding the following rules:

'import/named': 2,
'import/namespace': 2,
'import/default': 2,
'import/export': 2,

You then also have to modify where you import Quasar. For example, it looked like this before:

import Quasar from 'quasar'

Now you have to do:

import Quasar from 'quasar-framework'
ConvertKit is specifically designed with creative people in mind, and that's why we've chosen it as our email marketing software here at Copyblogger. Any member of our editorial team — no matter how technically challenged — can easily perform any task that needs to be done, including sending messages, creating automated sequences, using tags for message segmentation, reviewing analytics, and identifying personalization opportunities.

I think this email also makes quite a brilliant use of responsive design. The colors are bright, and it's not too hard to scroll and click -- notice the CTAs are large enough for me to hit with my thumbs. Also, the mobile email actually has features that make sense for recipients who are on their mobile device. Check out the CTA at the bottom of the email, for example: the "Open Stitcher Radio" button prompts the app to open on your phone.

4. Make Links Clear and Visible & Use Text Links: Make sure that all links to your product purchasing pages are clear and visible. When possible, default to blue, underlined links for easy user recognition. Though in web design it is often inadvisable to use the words "click here" in a link, in email design it typically is more effective to use them. Make sure that your links are text links and not image-based links, as images may not appear in all email clients.

Lead generation is a win-win for both the buyer and seller. Buyers can request information from several businesses that offer the product or service they are looking for, and the seller is given the opportunity to pitch to people who have given their permission. These are some of the hottest leads. Conversion rates on leads received in this way are generally much higher than on cold contacts.

These metrics give you a high-level overview of how your subscribers are interacting with your campaigns and allow you to compare the success of one campaign to another.
If you want to go deeper and see the exact people who opened and clicked your campaign, what links they clicked, etc., you can do so by choosing some of the other reports from the right-hand side menu.

Did you know that 74% of companies that weren't exceeding revenue goals didn't know their visitor, lead, MQL, or sales opportunity numbers? How about that over 70% of companies not achieving their revenue goals generate fewer than 100 leads per month, and only 5% generate more than 2,500 leads per month? These are just a few examples of what you'll find in the report.

Lead generation is not a new way of acquiring business, but business trends and time constraints have produced better ways to get new clients. Rather than sitting at a trade show table for hours on end, or setting up a display in hopes that targeted consumers will complete a form, you can have leads generated and sent to you using available technology, all while directing your time elsewhere.

Once you have a good mix of high-value content, including visual content, start promoting it on social channels. The more engagement you get, the more Google considers your content to be of high value, which in turn boosts your SEO rankings. Search engines look for natural links, so the more informative your content is, the more likely people will link to it naturally.

Visitor Tracking: Hotjar has a heatmap tool — a virtual tool which creates a color-coded representation of how a user navigates your site — that helps you understand what users want, care about, and do on your site. It records visitors and tells you where they spend the most time on your site. You can use it to gather information on your lead generation forms, feedback forms and surveys, and more.

What's the difference between them? One-off communications versus prolonged, email-based interactions. For example, email marketing tools are excellent for one-off communications.
You can use these tools for the one time you'd like to send someone an automated email response when they join a subscriber list, on their birthday, or when you promote a new product. But marketing automation tools are better suited for prolonged, email-based interactions. For example, you can use marketing automation tools whenever you want to guide someone from a subscriber list to a product purchase. Or you can send thank-you emails or new product promotions—all without having to lift a finger after the workflow is designed.

Not only is InVision's newsletter a great mix of content, but I also love the nice balance between images and text, making it really easy to read and mobile-friendly — which is especially important, because its newsletters are so long. (Below is just an excerpt, but you can read through the full email here.) We like the clever copy on the call-to-action (CTA) buttons, too.

Essentially, you can tell Office Autopilot what to do if certain things occur. For example, if a customer places an order, you can send an order to your fulfillment house to fulfill that order. Or if a customer leaves, you can send them a last-minute special offer. Just select the trigger for the action, then select what list it applies to, then select what to do when that action is triggered.

5. Use Personalization Fields: While always important in email marketing, because an auto-responder list cannot be easily segmented, be sure to use the features of your email marketing program, such as those at Comm100, that allow you to personalize fields within your auto-responder email with the subscriber's first name, handle, user name or other submitted information.

Let's begin with the definition of a lead. What does a lead mean to your company? Many companies have different definitions depending on their sales cycle, but the standard definition is a qualified potential buyer who shows some level of interest in purchasing your product or solution.
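To make the personalization-fields advice in point 5 above concrete, merge-field substitution is only a few lines of code. This sketch uses a generic {{field}} token syntax and made-up field names; it is not tied to the syntax of Comm100 or any particular email platform:

```javascript
// Replace {{field}} tokens in an auto-responder template with values from a
// subscriber record. Unknown tokens are left intact so missing fields are
// easy to spot while testing a campaign.
function personalize(template, subscriber) {
  return template.replace(/\{\{(\w+)\}\}/g, function (token, key) {
    return Object.prototype.hasOwnProperty.call(subscriber, key)
      ? subscriber[key]
      : token;
  });
}

// personalize('Hi {{first_name}}!', { first_name: 'Ana' }) -> 'Hi Ana!'
```

Leaving unknown tokens in place, rather than replacing them with an empty string, makes a broken merge obvious in test sends instead of silently producing "Hi ,".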
For the leads that fill out a form, they often do so in exchange for some relevant content or a compelling offer.

Whether you already have a list of subscribers or are starting from scratch, email marketing services can help. All of the services we cover let you add contacts manually using copy and paste or by uploading CSV or Microsoft Excel files. Some integrate with third-party software, enabling you to import Gmail and other webmail contacts, Salesforce.com and other customer relationship management (CRM) data, or other software where you might have contacts stored. Depending on the size and location of your list, third-party integration could be key. Verify whether you can export contacts as well (and how easy it is to do so) should you leave the service. Managing users who unsubscribe should also be easy so you're not accidentally contacting anyone who has opted out of your newsletters.

Your website is where the magic happens. This is the place where your audience needs to convert. Whether it is encouraging prospective buyers to sign up for your newsletter or fill out a form for a demo, the key is to optimize your website for converting browsers into actual leads. Pay attention to forms, calls-to-action (CTA), layout, design, and content.

The only way this page could be better is if it showed some real gratitude to the new lead. "Yipee" might relate to the prospect's emotion, but it doesn't convey thanks on behalf of the brand. While your "thank you" page has a number of goals to accomplish, the first thing it should do is right in the name — say "thank you," and make the lead feel like an invaluable part of the brand.

Coupon: Unlike the job application, you probably know very little about someone who has stumbled upon one of your online coupons. But if they find the coupon valuable enough, they may be willing to provide their name and email address in exchange for it. Although it's not a lot of information, it's enough for a business to know that someone has interest in their company.

I got an email today from a marketer. Subject line: "Don't Worry, I won't email you again." Huh? I was never worried in the first place, and I found it insulting to my intellect to assume that in my daily busy life I would actually take the time to worry about a lame marketer trying to get under my skin. I'm not going to open it because it simply sounds pathetic and self-serving. Maybe it's me, but I just don't like time wasters and nonsensical drivel.

A warm call is much more valuable than a cold one. Many have already declared cold calling dead and would rather focus on warm leads, because when you contact a warm lead, the person is expecting to hear from you or at least has shown some interest in your business. This means that he or she is much more willing to listen to you and consider purchasing your offering, since they have already considered you an option.
The CAN-SPAM Act of 2003 was passed by Congress as a direct response to the growing number of complaints over spam e-mails.[citation needed] Congress determined that the US government was showing an increased interest in the regulation of commercial electronic mail nationally, that those who send commercial e-mails should not mislead recipients over the source or content of them, and that all recipients of such emails have a right to decline them. The act authorizes a US $16,000 penalty per violation for spamming each individual recipient.[17] However, it does not ban spam emailing outright, but imposes laws on using deceptive marketing methods through headings which are "materially false or misleading". In addition, there are conditions which email marketers must meet in terms of their format, their content and labeling. As a result, many commercial email marketers within the United States utilize a service or special software to ensure compliance with the act. A variety of older systems exist that do not ensure compliance with the act. To comply with the act's regulation of commercial email, services also typically require users to authenticate their return address and include a valid physical address, provide a one-click unsubscribe feature, and prohibit importing lists of purchased addresses that may not have given valid permission.[citation needed]

Whether you are hosting a small private function, a large-scale international tradeshow, or an executive-level webinar, event marketing needs to be an integral part of the lead generation mix. After all, events are a critical component of an outbound marketing strategy. Essentially, events offer you the chance to define your brand, clarify the solutions you provide, and establish personal connections with participants. And while they provide you with an invaluable opportunity to engage with prospects and customers, events also give attendees the chance to interact with each other.
As every marketer knows, there is no better advertising than the direct words of a satisfied customer. Events also provide a venue to deliver speeches and content that convey your company’s thought leadership and raise your perception in the eyes of buyers. Compared to other marketing tactics, events are more likely to quickly turn a prospect into a strong lead. As a lively, interactive, educational forum, events position your business as a trusted leader in a field of many.
22 comments:

I selected The North Face.
a) http://clifbar.ehclients.com/uploads/blog/free_solo_castleton_north_face.jpg
b) Douglas Tompkins and Kenneth "Hap" Klopp
c) Equipment retail store
d) The North Face: this name was chosen because the north face of a mountain in the northern hemisphere is generally the most difficult face to climb. By the 1980s, skiwear was added to the line of products, and eventually camping equipment was added as well. The North Face is now a wholly owned subsidiary of the VF Corporation.

Founded: 1 May 1947 as Malayan Airlines
Commenced Operations: 1 October 1972
Changes it made that impacted the world:
- First to offer free headsets, a choice of meals and free drinks in Economy Class, in the 1970s
- First to introduce satellite-based inflight telephones, in 1991
- First to involve a comprehensive panel of world-renowned chefs, the International Culinary Panel, in developing inflight meals, in 1998
- First to offer audio and video on demand (AVOD) capabilities on KrisWorld in all classes, in October 2001
- First to operate the world's longest non-stop commercial flight, between Singapore and Los Angeles, in February 2004 on the A340-500, and then surpassing the record (in terms of distance) later that year with the non-stop service to New York (Newark), in June 2004
- First to fly the A380, from Singapore to Sydney, on 25 October 2007
Singapore Airlines became the first airline to operate a B747-400 on a commercial flight across the Pacific. In May 2008, Singapore Airlines created history again by being the first carrier to operate an all-Business Class service between Asia and the USA with its launch of all-Business Class non-stop flights from Singapore to New York (Newark).

How has my selected brand evolved its product to understand its customers?
Who: BreadTalk
What: BreadTalk was incorporated in Singapore on 6 March 2003 as an investment holding public company. The company's Chairman, Dr. George Quek, first conceptualised "BreadTalk" in April 2000 when he saw an opportunity for starting a bakery selling freshly baked breads and buns which are visually creative and attractive.
How the brand has evolved: BreadTalk's first retail outlet commenced business on 1 July 2000 at Parco Bugis Junction. They opened their second retail outlet within the next five months, in December 2000, at Novena Square. Their first retail outlet in the HDB heartlands also opened in December 2000, at Junction 8 Shopping Centre. In 2001, they expanded their operations by opening another five retail outlets. When they first commenced operations, the whole baking process, from the preparation of the dough to the final topping of the bakery items, was done at each individual retail outlet. As their operations continued to grow, and in preparation for their franchising plans, they set up their central kitchen and shifted their corporate headquarters to their present premises at KA FoodLink, Kampong Ampat, in September 2001. To expand their production capacity, they acquired more space, machinery and equipment in 2002. The expansion of their central kitchen was completed by November 2002. Over the last two years, BreadTalk has introduced many new varieties of breads, buns, cakes and pastries, and they have presently launched more than 100 bakery items. Their specialty is their "see through" kitchen, which allows customers to see their bread being made. BreadTalk also has their own "restaurant" now, called 'Food Republic'. Besides food and bread, BreadTalk has also expanded its bread business: they have a small cafe called 'ToastBox' and a place where you can design your own cakes, 'Icing House'.

I selected Converse.
a) http://newhouseprssa.org/dev/wp-content/uploads/2012/10/Converse-converse-1554611-2045-1333.jpg
b) Marquis Mills Converse
c) It's a shoe company.
d) When the U.S. entered World War II in 1941, Converse shifted production to manufacturing rubberized footwear, outerwear, and protective suits for the military. Widely popular during the 1950s and 1960s, Converse promoted a distinctly American image with its Converse Yearbook. Artist Charles Kerins created cover art that celebrated Converse's role in the lives of high school and college athletes. Converse has become a fashionable shoe of choice for many celebrities, including Demi Lovato and Kristen Stewart, who wore them on the red carpet.

B) Who? Blizzard Entertainment was founded by three people: Michael "Mike" Morhaime, Allen Adham and Frank Pearce.
C) What? An American game developing company famous for many of its award-winning products, based in Irvine, California, USA, with its parent company, Activision Blizzard, based in Santa Monica, California, USA.
D) How has it evolved? It was first created under the name 'Silicon & Synapse' for mainly game ports until 1993, when it started its own software production. In 1994, they were acquired by 'Davidson & Associates' and briefly changed the name to Chaos Studio before renaming as Blizzard Entertainment after finding out that another company with the name Chaos already existed. Since then, it was sold, acquired and absorbed into various companies such as CUC International, Cendant, Havas and Vivendi (under Vivendi Games), and finally into the Activision Blizzard it now resides in. Its main production is its award-winning franchises such as World of Warcraft, Diablo and Starcraft.

I selected Gibson Les Paul.
a) http://gibsonlespaulstudio.org/wp-content/uploads/2011/11/Gibson_Les_Paul_by_ToastMan85.jpg
b) Lester William Polsfuss, also known as 'Les Paul'
c) The No. 1 guitar brand in the market.
d) Over the years, Gibson Guitar Corporation has evolved, from Orville Gibson innovating and inventing archtop guitars, to Les Paul inventing the Les Paul guitar, which literally made Rock n Roll possible, thus changing the face of rock music.
A) Nintendo was founded by Fusajiro Yamauchi.
B) Nintendo has created many video games that are considered among the best in the world.
C) In the past, it sold a playing card game called hanafuda. It was a success, and later they went into other ventures such as a taxi company, a love hotel chain, a TV network and a food company. All these ventures eventually failed. Later, they went into the video game industry and created many famous games, and to this day Nintendo still creates many best-selling titles like Mario, The Legend of Zelda and Kirby.

a) http://images.wikia.com/alienswarm/images/archive/2/28/20101024142338!800px-Valve_logo_svg.png
b) Valve was founded by former Microsoft employees, Gabe Newell and Mike Harrington.
c) An American game development and digital distribution company based in Kirkland, Washington.
d) Valve evolved from developing games to making their own game engine and also a gaming social network of their own.

c) Nike is an American multinational corporation that is engaged in the design, development and worldwide marketing and selling of footwear, apparel, equipment, accessories and services.
d) The company started out as a sports shop named 'Blue Ribbon Sports' and has now evolved into a $10.7 billion brand named 'Nike'. Nike now sponsors many high-profile athletes and sports teams around the world, with the highly recognized trademarks of "Just Do It" and the Swoosh logo.

What: As a student at the University of Texas at Austin in 1984, Michael Dell founded the company as PCs Limited with capital of $1000. Operating from Michael Dell's off-campus dormitory room at Dobie Center, the startup aimed to sell IBM PC-compatible computers built from stock components. Dell provides technology solutions, services and support.
How it has evolved: Dell has since gone through significant priority shifts and internal company changes, and revamped their social media policy to the point where they actually have some of the highest customer service rankings of any company. This ensures that their customers connect with the experts who can address their unique issues and ultimately help them do and achieve more. Dell now has a full-blown command center for listening to customer feedback and improving their company based on the opinions of their consumers.

My brand is Levi's, and its promise, to be far better than necessary, is one it has kept. It has improved its products for its customers, such as by creating the annual 'trade in your old jeans' event after realising that customers had old jeans that were too small for them. This event allows Levi's to collect fabric and the customers to clear space in their wardrobe; it is a win-win situation.

I selected Coca-Cola.
a) http://www.freshnetworks.com/blog/wp-content/uploads/2011/03/Coca-cola.jpg
b) John Pemberton
c) A store that sells drinks, specifically Coke.
d) In 1886, the recipe was developed, and it then passed through three businesses. It was also sold for relieving nausea and mild stomach aches. Then, cans of Coke were made. Two distinct versions of Coke came out, New Coke and Coca-Cola Classic, but New Coke was quickly discontinued because its taste was very different from the original versions of Coke. It was then renamed back to Coca-Cola.

What: Hewlett-Packard Company, or HP, is an American multinational information technology corporation headquartered in Palo Alto, California, United States. It provides products, technologies, software, solutions and services to consumers, small- and medium-sized businesses and large enterprises, including customers in the government, health and education sectors.
How it has evolved: HP has evolved a lot from when it was founded, from their very first financially successful product, a precision audio oscillator, to having successful lines of printers, scanners, digital cameras, calculators, PDAs, servers, workstation computers, and computers for home and small business use. There is also an HP IdeaLab to further provide a web forum on early-stage innovations, to encourage open feedback from consumers and the development community.

What: Mojang is a Swedish independent video game developer founded in May 2009 under the name Mojang Specifications by Markus Persson, and best known for creating the popular indie game Minecraft. It is currently developing the games Scrolls and 0x10c, while continuing to update Minecraft. Mojang's company headquarters is in Stockholm.
How it has evolved: Following a paid trip and employment offer from Valve Corporation in early September 2010, Markus Persson, Jakob Porsér, and Carl Manneh founded Mojang, as Persson desired to run a self-made independent studio for the continued development of Minecraft. Within a year, the company grew to a size of twelve employees, with their second video game, Scrolls, in development, as well as serving as the publisher of Cobalt. In 2011, Napster founder and former Facebook president Sean Parker offered to invest in Mojang, but was declined. By March 2012, the company had accumulated a net income of over $80 million.

What: An American global aerospace, defense, security, and advanced technology company with worldwide interests.
How it has developed: Merger talks between Lockheed Corporation and Martin Marietta began in March 1994, with the companies announcing their $10 billion planned merger on August 30, 1994. The deal was finalized on March 15, 1995 when the two companies' shareholders approved the merger. The segments of the two companies not retained by the new company formed the basis for the present L-3 Communications, a mid-size defense contractor in its own right. Lockheed Martin later spun off the materials company Martin Marietta Materials. Both companies contributed important products to the new portfolio. Lockheed products included the Trident missile, P-3 Orion, F-16 Fighting Falcon, F-22 Raptor, C-130 Hercules, A-4AR Fightinghawk and the DSCS-3 satellite. Martin Marietta products included Titan rockets, Sandia National Laboratories (management contract acquired in 1993), the Space Shuttle External Tank, the Viking 1 and Viking 2 landers, the Transfer Orbit Stage (under subcontract to Orbital Sciences Corporation) and various satellite models. Lockheed Martin has completed several successful projects since the merger and is now working on the F-35 Lightning II.

What: IBM is a computer manufacturer which people have trusted from the late 1880s until now. They manufacture not only computers, but computer hardware and software. They offer services from mainframe computers to nanotechnology.
How it evolved: At first, their employees and computers were used to help NASA with its space tracking systems. Later on, they came out with their first computer system family, the IBM System/360, which covers a wide range of applications, from small to big, commercial and scientific. And with the new system, companies could rewrite applications while upgrading their computer capabilities. Then they started coming out with personal computers, and sold the PC business to Lenovo after a period of time. IBM has now surpassed Microsoft in the closing value of the company.

How has my selected brand evolved its product to understand its customers?
Who: Valve
What: Founded in 1996 by former Microsoft employees Gabe Newell and Mike Harrington, Valve became famous from its critically acclaimed Half-Life series (the first game released in November 1998).
How the brand has evolved: After the success of Half-Life, the team worked on mods, spin-offs, and sequels, including Half-Life 2. All current Valve games are built on its Source engine, which owes much of its success to mods and sequels. The company has developed six game series: Half-Life, Team Fortress, Portal, Counter-Strike, Left 4 Dead and Day of Defeat. Valve is noted for its support of its games' modding community, most prominently Counter-Strike, Team Fortress, and Day of Defeat. Valve has branched out with this tradition to continue developing Dota 2 as the stand-alone sequel to the Warcraft III mod. Each of these games began as a third-party mod that Valve purchased and developed into a full game. They also distribute community mods on Steam. Since Valve Corporation's debut, it has expanded both in scope and commercial value. On January 10, 2008, Valve Corporation announced the acquisition of Turtle Rock Studios. On April 8, 2010, Valve won The Escapist Magazine's March Mayhem tournament for the best developer of 2010, beating out Zynga in the semi-final and BioWare in the finale. On August 1, 2012, Valve Corporation announced revisions to the Steam Subscriber Agreement (SSA) to prohibit class action lawsuits by users against the service provider. Alongside these changes to the SSA, the company also declared publicly the incorporation of Valve S.a.r.l., a subsidiary based in Luxembourg. In 2012, the company acquired Star Filled Studios, a small video-game development company.
Info taken from: http://en.wikipedia.org/wiki/Valve_Corporation

Apple has expanded from the small corporation that Steve Jobs set up into a revolutionary worldwide company, its name known throughout the world. It was among the first to create a personal computer, which has evolved into what computers are now. It was the first to create an iPhone, which merged a computer with a phone along with a touchscreen, and of which hundreds of thousands were sold. It always continues to update its software, adding new functions and adapting to people's needs.
t the prime factors of b(q). 2, 3 Suppose 147 = 46*t - 45*t. Suppose -8*g + t = 3*m - 5*g, 3*m + g = 149. List the prime factors of m. 2, 5 Let z = 393 - 233. Suppose 20*t = 24*t - z. Let l = t - -10. What are the prime factors of l? 2, 5 Suppose -5*o = -10*o - 2*y - 9631, 1919 = -o + 2*y. Let q = o + 3720. List the prime factors of q. 5, 359 Suppose -50*f = -44*f - 312. Let q = -3 - f. Let p = 105 - q. List the prime factors of p. 2, 5 List the prime factors of ((-1371)/5)/(27/(-58590)*21). 2, 31, 457 List the prime factors of (-7 - 26)/(-11) - 211*(-82 + -2). 3, 19, 311 Let n = 121 + -142. Let v(y) = 2*y**2 + 5*y - 9. What are the prime factors of v(n)? 2, 3 Let a(j) = 44*j**2 + 5. Let c be a(-4). Let p = 20448 - 20943. Let q = c + p. What are the prime factors of q? 2, 107 List the prime factors of 1066 + -8*(-195)/120. 13, 83 Suppose 16*l = -5*d + 165529, 0*d - 132392 = -4*d - 5*l. List the prime factors of d. 3, 3677 Let w(f) = f**2 - 22*f - 19. Suppose l - 90 = -5*l. Let s be w(l). Let v = 241 + s. List the prime factors of v. 3, 13 Let n = -567 - -801. Let o be (390 - 400)*((-81)/2 - -1). Let w = o - n. What are the prime factors of w? 7, 23 Let w(t) = 29*t - 232. Let f be w(8). List the prime factors of -295*(-1 - f/6). 5, 59 Let c(y) = 16*y**2 - 9*y**2 - y**2 - 56 - 9*y - 5*y**2. List the prime factors of c(15). 2, 17 Suppose 0 = -218*r + 221*r - 1428. Let w = 772 - r. What are the prime factors of w? 2, 37 Suppose 0 = -115*h + 3861315 - 301030. List the prime factors of h. 83, 373 Let x = 63 + -57. What are the prime factors of ((-11800)/150)/((-2)/x)? 2, 59 Let z = 4737 + 55403. What are the prime factors of z? 2, 5, 31, 97 Suppose -38*t + l + 27333 = -37*t, 3*l - 82011 = -3*t. List the prime factors of t. 5, 7, 11, 71 Let x be 1 + -3 - (-5 + 3). Let v(u) = u**3 - 3*u**2 + 13*u - 29. Let d be v(3). Suppose -1580 = -x*g - d*g. What are the prime factors of g? 2, 79 Let t(j) = 47 + 86 - 28 + 32 - 21*j + 19*j. Suppose 2*g = 3*g. 
List the prime factors of t(g). 137 Suppose -2*w - 304 = -392. Suppose -10392 = -w*s + 4216. List the prime factors of s. 2, 83 Suppose 0*z - 2*z + 106 = 0. Suppose -z + 47 = -3*c. Suppose -3*k = -2*i - 562, -5*k + 910 = -0*k + c*i. List the prime factors of k. 2, 23 Suppose -14*z = 38*z - 17*z. Suppose z = t - 2*n - 476, -5*t = -6*t + n + 479. List the prime factors of t. 2, 241 Let j = 117 + -129. Let d(g) = -15*g + 18. What are the prime factors of d(j)? 2, 3, 11 Suppose -9*j - 5*j + 94078 = 3*j. List the prime factors of j. 2, 2767 Let l(s) = 133*s**3 - 11*s**2 + 52*s + 44. What are the prime factors of l(5)? 2, 11, 757 Suppose -4*j = -31*j + 21816. Let t = -484 + j. What are the prime factors of t? 2, 3 Let m = -60 + 65. Suppose f = 4*y - 4, -m*f + 10*f + 2 = 2*y. Suppose f = 31*q - 30*q - 220. What are the prime factors of q? 2, 5, 11 Let m = 4 + -2. Suppose -5*t + m*c + 143 = -2, -t + 37 = -2*c. Let l = t - -11. What are the prime factors of l? 2, 19 Suppose t = 3*k + 2*k - 93, -5*k + 97 = t. Suppose -k*l + 32 = -11*l. Suppose -2*u = l*d - 1012, 3*d + 2*u + 1020 = 7*d. What are the prime factors of d? 2, 127 Let p = 31313 - 16348. List the prime factors of p. 5, 41, 73 Let c(s) = -2*s**2 + 4*s + 11. Let l be c(5). Let b = l + 26. Suppose 0 = 3*q + 9, b*q - 5*q - 294 = -2*d. List the prime factors of d. 2, 3, 5 Suppose -3*d + 7*d - 5*p - 7717 = 0, 0 = -5*d - 2*p + 9638. Suppose 5*r - c - 3*c - d = 0, -c = 3*r - 1150. What are the prime factors of r? 2, 3 Suppose 0 = 5*l - 4*n - 77, 2*n - 7 = -l - 0. Suppose -896 = l*k + k. List the prime factors of 260 + -2 + k/16. 2, 127 Let c(t) = -t**3 - 11. Let h be c(0). Let a = 426 + h. What are the prime factors of a? 5, 83 Let j(g) = -g**3 + 4*g**2 - 117*g + 193. What are the prime factors of j(-25)? 3, 73, 97 Suppose 18*w - 450990 = -3*a, -2*a - 17 = -5. What are the prime factors of w? 2, 3, 29 Let z = -327 - -523. Let y = z - 16. Suppose y = -4*b + 14*b. What are the prime factors of b? 
2, 3 Let v be ((-149)/3)/((-2)/(-6)). Let b be (-2 - -350)*(-20)/(-30). Let m = v + b. What are the prime factors of m? 83 Let o = -33 + 34. Let d be (33/6 - 4)/(o/(-14)). Let i = d + 30. What are the prime factors of i? 3 Suppose -7279 = -2*n - 0*n + 3*w, 0 = w - 5. Suppose d = 8*d - n. Suppose -3*l - 4*v + 2*v + d = 0, 0 = 2*v + 10. List the prime factors of l. 3, 59 Let d(m) = -4*m**2 - 4*m + 15. Let g(c) = -9*c**2 - 9*c + 31. Let p(a) = -7*d(a) + 3*g(a). List the prime factors of p(-22). 2, 3, 5 Let t = -1252 + 7175. List the prime factors of t. 5923 Suppose 0 = 4*n - 2*g - 39120 - 16102, -55197 = -4*n - 3*g. What are the prime factors of n? 3, 43, 107 Let c = 22359 - 17370. What are the prime factors of c? 3, 1663 Suppose 0*p = -3*j + p - 315, 0 = 3*p. Let f = j - -301. Let y = f - 51. List the prime factors of y. 5, 29 Let b(d) = -d**2 - d - 36. Let p be b(0). Let h = p - -40. Suppose h*s - 20 = -s, -5*s + 230 = 3*g. What are the prime factors of g? 2, 5, 7 Let g(u) = u**2 - 5*u. Let s be g(3). Let x(y) = 21*y**2 + 34*y + 7. What are the prime factors of x(s)? 13, 43 Let n = -19 - -32. What are the prime factors of -1*n/2*(-1014)/13? 3, 13 Let d(g) = 3*g**3 - 7*g**2 + 3*g + 2. Let m be d(2). Let j be m/(-6)*(15 + -12). List the prime factors of ((-149 - -5)/6)/j. 2, 3 Let s be 9/(-15) - 10/75*-27. Suppose 0 = 10*r - 6*r + c - 2537, -618 = -r + s*c. List the prime factors of r. 3, 211 Let y = -26 + 30. What are the prime factors of ((-1312)/12)/y*-15? 2, 5, 41 Let w be -1*28/1*(-1770)/60. Let p = -270 + w. List the prime factors of p. 2, 139 Suppose -11400 = -9*p + 5*p. Suppose 0 = v + 2*z - 567, 0 = v - 6*v + 5*z + p. What are the prime factors of v? 569 Let p(r) = 287*r**2 + 36*r - 8. What are the prime factors of p(-5)? 3, 17, 137 Let l(d) = d**3 + 2*d**2 - d - 4. Let m be l(3). Suppose 66 = 4*i - m. Suppose -i = -n + g + g, 20 = 4*g. List the prime factors of n. 2, 3 Suppose -o - 4*o = 5*d - 5, 4*o = -d + 4. Let a = 180 + -161. Suppose d = 3*q - 5 - a. 
What are the prime factors of q? 2 List the prime factors of 4/8*-2 - 95300/(-6 - -1). 3, 6353 Suppose 0 = -4*j - 4*a + 19570 + 11586, 4*j - a = 31166. What are the prime factors of j? 3, 7, 53 Suppose -2*v + 47 = -5*v - n, 5*v + 75 = -5*n. Let t be (v/(-10))/((-2)/(-15)). Suppose 0 = 6*z - t*z + 150. What are the prime factors of z? 5 Let y(z) = 11*z**2 - 5*z - 1. Let w(f) = -f**2 - 25*f - 12. Let t be w(-22). Let q = t + -52. What are the prime factors of y(q)? 3, 11 Let b(f) = -244*f**3 + 2*f**2 + 35*f + 230. What are the prime factors of b(-6)? 2, 67, 197 Suppose -5*r - 65 = -100. What are the prime factors of 4*(320/r + (-22)/(-77))? 2, 23 Let h(s) = 3*s**3 - 5*s**2 - s - 8. Let f(i) = 127*i**2 + i. Let y be f(1). Let m = y - 124. List the prime factors of h(m). 2, 5 Suppose 0 = 2*g + 27 - 33. Let u(j) = 259*j + g - 276*j - 8. List the prime factors of u(-7). 2, 3, 19 Suppose -38*p + 44*p = 3558. Suppose 8*m = 10*m + 4*b - 1226, 0 = -m + 2*b + p. List the prime factors of m. 3, 67 Suppose -65*s - 91*s = -4142506 - 406142. List the prime factors of s. 2, 61, 239 Let o be (5/(-20))/(2/8). Suppose -d - 3*z - 11 = 2, 5*d + 4*z = -32. What are the prime factors of (65 - (d + 8))/(o/(-2))? 2, 61 Let f = -38207 + 56015. List the prime factors of f. 2, 3, 7, 53 Suppose -42*u - 8197 + 30169 = -51276. What are the prime factors of u? 2, 109 Let g = 62 + 48. What are the prime factors of g/(-88) - (-1729)/4? 431 Let o(t) = t**3 - 31*t**2 + 32*t - 38. Let u be o(30). Suppose u*l = 17*l - 10. List the prime factors of (l*210/(-25))/(2/25). 2, 3, 5, 7 Suppose -j + 2*n + 579 = 0, -10*j + 13*j + 3*n - 1764 = 0. Suppose -2*w + 279 = -j. List the prime factors of w. 2, 3 Let b(j) = -j**3 - 5*j**2 - 3*j - 11. Let l be b(-5). Suppose 0 = l*h + 8, y - 4*h - 20 = -2*y. List the prime factors of 45 - (-3 + 8/y). 2, 23 Suppose 0 = 5*u + 5*s + 10, 3*s = -6*u + u. Suppose -w + 2896 - 159 = u*o, -5*o = w - 4563. List the prime factors of o. 11, 83 Let x(t) = -t**3 - 7*t**2 - 39*t - 1. 
Suppose 7*s - 2*s + c + 27 = 0, -18 = 4*s + 2*c. List the prime factors of x(s). 197 Let d(h) = -h**2 - 6*h + 21. Let k be d(-9). Let u(r) = -76*r - 32. Let m be u(k). Suppose 6*v - m + 82 = 0. What are the prime factors of v? 3, 19 Suppose -182454 = -3*y -
Generation of histo-blood group B transferase by replacing the N-acetyl-D-galactosamine recognition domain of human A transferase with the galactose-recognition domain of evolutionarily related murine alpha1,3-galactosyltransferase.

The alpha1,3-galactosyl epitope (alpha1-3Gal epitope), a major xenotransplant antigen, is synthesized by alpha1,3-galactosyltransferase (alpha1-3Gal transferase), which is evolutionarily related to the histo-blood group A/B transferases. We constructed structural chimeras between the human type A and murine alpha1-3Gal transferases and examined their activity and specificity. In many instances, a total loss of transferase activity was observed. Certain areas could be exchanged, with a potential diminishing of activity. With a few constructs, changes in acceptor substrate specificity were suspected. Unexpectedly, a functional conversion from A to B transferase activity was observed after replacing the short sequence of human A transferase with the corresponding sequence from murine alpha1-3Gal transferase. Because these two paralogous enzymes differ in 16 positions of the 38 amino acid residues in the replaced region, our finding may suggest that despite separate evolution and diversified acceptors, these glycosyltransferases still share the three-dimensional domain structure that is responsible for their sugar specificity, arguing against the functional requirement of a strong purifying selection playing a role in the evolution of the ABO family of genes.
when (-105042)/(-112) - (-4 + 124/32) is divided by 157? 153 Suppose 0 = -24*v + 132353 + 200143. What is the remainder when v is divided by 163? 162 Let j(b) = 10*b**3 + b**2 - 7*b - 55. What is the remainder when j(6) is divided by 75? 74 Suppose 7 = c + j - 2*j, 26 = 5*c - 2*j. Calculate the remainder when 150 is divided by c. 2 Let q = 156 - 144. Suppose -16 = f - 5*f, 5*f = -3*x + 2. What is the remainder when 44 is divided by (x/q)/(2/(-36))? 8 Let v(a) = a**3 + 11*a**2 - 13*a - 11. Let c be v(-12). Suppose 2*p + p = 2*y + c, 5*p - 2*y + 5 = 0. Calculate the remainder when p*1*(-52)/4 is divided by 10. 9 Suppose -2*i = 14, 199*h - 196*h + i - 4334 = 0. Calculate the remainder when h is divided by 9. 7 What is the remainder when (4/(-5))/(8/(-1860)) - -2 is divided by 4? 0 Suppose -13*g - 41*g + 73337 = 5*g. What is the remainder when g is divided by 30? 13 Suppose 195*a + 197*a + 10584 = 400*a. What is the remainder when a is divided by 6? 3 Let l be ((28/(-3))/(-7))/((-2)/6). Let w be -8 - l - 279/3. Let m = w - -107. What is the remainder when 88 is divided by m? 8 What is the remainder when 230 is divided by (-6)/(-5)*(-5 - (-4500)/54)? 42 Let u be 18/(-15) - 2/30*-3. Calculate the remainder when 259 is divided by ((-21)/(-14)*-4)/(u/11). 61 Let a = -10 - -102. Let p = a + -79. What is the remainder when 49 is divided by p? 10 Let a(x) = 247*x + 108. What is the remainder when a(1) is divided by 177? 1 Suppose 4*r - 105 = w, 3*r - w = 53 + 26. Suppose -29*s = 84 - 12032. Let m = -206 + s. What is the remainder when m is divided by r? 24 Suppose 5*i = i - 4, -4*i - 175 = 3*w. Let g = 20 - w. What is the remainder when g is divided by (88/(-198))/((-6)/135)? 7 Calculate the remainder when 38 - (-77027)/170 - 2/20 is divided by 5. 1 Calculate the remainder when 3272 is divided by (30 - 6)/2 + 207/23. 17 Suppose w = 4*m - 14, 5*w + 16*m = 19*m + 49. Calculate the remainder when 2974 is divided by w. 6 Suppose -3*p + 186 = 3*u, 2*p + 275 = -10*u + 15*u. 
What is the remainder when 332 is divided by u? 47 Let n = -50 + 78. What is the remainder when 384/(-6)*(-4 + 3) is divided by n? 8 What is the remainder when 2040 is divided by 1090/654 - ((-111)/9 - 2)? 8 Calculate the remainder when 1043 is divided by (-190)/25*(3756/(-136) - (-10)/85). 207 Let l(x) = 4*x**2 - 5*x - 78. Let j be l(-10). Let f = j - 238. What is the remainder when f is divided by 35? 29 Calculate the remainder when 1/((-7)/10)*(1301 + -1574) is divided by 113. 51 Let k = 34 + -54. Let u = 43 + k. Suppose -b = 4*t - u, 3*t - 6 - 9 = 0. Calculate the remainder when 10 is divided by b. 1 Suppose -3*w + 227 = -h, 3*w + 3*h + 219 = 6*w. Suppose -5*f + 33 = -w. What is the remainder when f is divided by 4? 2 Calculate the remainder when 418 is divided by (-35)/(-56)*14*4/1. 33 Suppose 1202 = 11*r - 8137. Suppose 0 = -d - r + 903. Calculate the remainder when 155 is divided by d. 47 Let d = 6717 - 6608. Calculate the remainder when 530 is divided by d. 94 Suppose 792 = -10*o + 34*o. What is the remainder when 5442 is divided by o? 30 Let w(i) = -i + 24. Let b be (1 - (-627)/(-3))*-1. Suppose 9*x = 7*x + b. What is the remainder when x is divided by w(9)? 14 Suppose 0 = 5*c - 3*c + 4. Let p(z) = 6*z - 6. Calculate the remainder when 65*(1/(-5) - c/2) is divided by p(4). 16 Suppose -589 = -4*c + 3*i, -4*c - 79*i = -78*i - 609. Calculate the remainder when c is divided by 38. 37 Suppose 89*b - 404*b = -82215. What is the remainder when 1321 is divided by b? 16 Let u = 2996 - 2949. Calculate the remainder when 245 is divided by u. 10 Let w be 1/13*13*(555 + 2). Let k = w - 138. What is the remainder when k is divided by 105? 104 Suppose 3*d + 2*d = 15. Suppose d*f + 198 = 9*f. What is the remainder when 84 is divided by f? 18 Suppose -4*t + 4*d + 1516 - 332 = 0, -4*t + 1149 = -9*d. What is the remainder when t is divided by (-2)/(20/(-9) - -2)? 6 Let j = -4 - -7. Let a(i) be the third derivative of i**6/20 - i**5/15 + i**4/24 - i**3 - 8*i**2 + 1. 
Calculate the remainder when a(j) is divided by 31. 30 Suppose -71*q + 255*q = 14*q + 3060. What is the remainder when 1817 is divided by q? 17 Let z be (208/(-24))/(2/12). Let x = 120 - 49. Let d = z + x. What is the remainder when 75 is divided by d? 18 Let q(f) = 8*f + 2. Let p(y) = -y**2 - 13*y - 10. Let u(r) = -r**3 - 7*r**2 + 7*r + 39. Let j be u(-7). Calculate the remainder when q(7) is divided by p(j). 18 Let x = -211 + 216. Suppose 0 = -2*i - x*q + 64, 5*i - 3*q + 0*q - 160 = 0. Let m = 11 + -6. Calculate the remainder when i is divided by m. 2 Suppose -2*i - i + 2*k = -896, 2*k = i - 296. Suppose 12*t + i = 15*t. What is the remainder when t is divided by 36? 28 Let s(o) = -9*o + 1. Suppose -3*y - 5*d = -7*d - 160, -175 = -3*y + 5*d. Calculate the remainder when y is divided by s(-2). 12 Suppose 10*h - 6*h - 668 = 0. Suppose -4*a = -76*f + 75*f + 42, 5*f + 5*a = 160. Calculate the remainder when h is divided by f. 31 Calculate the remainder when 6390 is divided by -38 + 21 + (290 - 27). 240 Let b(r) = r**2 - 148*r + 4898. What is the remainder when 4377 is divided by b(101)? 149 Let v(n) = -3*n - 31. Let c be v(-12). Suppose -4*s = c*o - 338, -s - 10 = -6*s. Calculate the remainder when o is divided by (1 - (-3)/(-5))*35. 10 Let v = -25 - -81. Let u(p) = -p**2 - 24*p - 42. Let r be u(-22). Let z(j) = 25*j - 31. What is the remainder when v is divided by z(r)? 18 Let i(g) = -165*g**2 + 8*g - 10. Let t be i(8). What is the remainder when 271 is divided by t/(-663) - 4/(-26)? 15 Let z(b) = 6*b**2 + 4*b - 2. Let q(o) = -o**3 - 13*o**2 - 2. Let w be q(-13). Calculate the remainder when 40 is divided by z(w). 12 Let h be ((-2)/(-4))/(6 + 6362/(-1060)). Let s = 539 + h. What is the remainder when s is divided by 25? 24 Let r be 2660/285*(-6)/(-4). Suppose -r*y - 923 = -27*y. What is the remainder when y is divided by 44? 27 Suppose 0 = 5*c - 424 - 456. Suppose -5*u - 4*r + 1240 = -7*r, 0 = -u - 4*r + 248. Let g = u - c. 
Calculate the remainder when g is divided by 15. 12 What is the remainder when 246 is divided by 2/(2*1 - ((-210)/66 - -5))? 4 Let m = -2922 + 2939. Calculate the remainder when 1609 is divided by m. 11 Suppose 66*b - 240 = 26*b. Calculate the remainder when 179 is divided by b. 5 Let t(o) = 117*o - 2. Let k be t(6). Suppose 3*f - k = -91. Suppose -115*y - 8160 = -355*y. What is the remainder when f is divided by y? 33 Suppose -n + 59 = -2*b, 216 = 2*n + 2*b + 152. Calculate the remainder when 395 is divided by n. 26 Let c(r) be the second derivative of 5*r**4/12 + r**3/2 + 17*r. What is the remainder when 48 is divided by c(2)? 22 Suppose 153 + 84 = -k - 4*z, 2*k = 3*z - 474. Let p = 316 + k. What is the remainder when 552 is divided by p? 78 Let x = 117 - 152. Let z = 107 + x. Calculate the remainder when z is divided by 5. 2 Let h be 7/9 + -1 + (-184)/(-18). Suppose h*l = 49 + 41. What is the remainder when l is divided by 2? 1 What is the remainder when 527142/10269*(0 - -1)/((-1)/(-6)) is divided by 32? 20 Calculate the remainder when 1397 is divided by (588/(-35) + 17)/(1/50). 7 Suppose -c + 4*w - 22 = 257, 5*w - 1149 = 4*c. Let y = -274 - c. Let m = -10 + 18. Calculate the remainder when y is divided by m. 1 Suppose -52 = -3*m - m. Suppose -163*n = -15506 - 10248. Suppose -232 = -m*u + n. Calculate the remainder when 117 is divided by u. 27 Suppose -122 = -2*j + 190. Suppose 0 = 19*k - 20*k - 3*d + 31, 0 = k - 5*d - 39. Calculate the remainder when j is divided by k. 20 Suppose 12 + 6 = -6*t. Let f = 59 + t. Calculate the remainder when 222 is divided by f. 54 Suppose 4*z = 2*w - 182, 3*z - 4 = z. Suppose 7*v + 22 = f + 4*v, 0 = -f - v + 6. Calculate the remainder when w is divided by f. 5 Suppose 9*q - 12*q + 268 = 2*s, 0 = s + 1. What is the remainder when 1527 is divided by q? 87 Let k(h) = 3*h**3 - 5*h**2 - 3*h + 23. What is the remainder when 471 is divided by k(4)? 102 Let c(q) = q**2 - 11*q - 23. Let p be c(13). Suppose 16 = -p*f + 5*f. 
Suppose -x = -24 + f. What is the remainder when 63 is divided by x? 15 Suppose -44*a + 275 = -19*a. Let j = -28 - -115. Calculate the remainder when j is divided by a. 10 Suppose 0*w + 4*w = 3*t + 1, -w - t + 9 = 0. What is the remainder when 0 + 1 - -20 - 16/4 is divi
Q: RecyclerView adapter data shows wrong when scrolling too fast

I have a custom RecyclerView adapter that lists my items. For each item I check the database and draw some circles with colors. When I scroll the list very fast, all the drawn data (not the titles and texts) shows up wrong! How can I manage dynamic view creation without showing wrong data?

    @Override
    public void onBindViewHolder(final ItemViewHolder itemViewHolder, int i) {
        itemViewHolder.date.setText(items.get(i).getData()); // set the title
        // had to generate a RelativeLayout here
        itemViewHolder.relative_layout_tag_place.addView(generateTagImages(items.get(i).getServerId()));
    }

And this is generateTagImages():

    private RelativeLayout generateTagImages(String serverId) {
        List<String> color_list = new ArrayList<>();
        RelativeLayout result = new RelativeLayout(context);
        List<String> list = db.getCardTags(serverId);
        int i = 0;
        for (String string : list) {
            RelativeLayout rl = new RelativeLayout(context);
            color_list.add(get_the_proper_color);
            Drawable drawable = context.getResources().getDrawable(R.drawable.color_shape);
            drawable.setColorFilter(Color.parseColor(dao.getTagColor(string)), PorterDuff.Mode.SRC_ATOP);
            RelativeLayout.LayoutParams lparams = new RelativeLayout.LayoutParams(LinearLayout.LayoutParams.WRAP_CONTENT, LinearLayout.LayoutParams.WRAP_CONTENT);
            lparams.addRule(RelativeLayout.ALIGN_PARENT_START);
            lparams.setMargins(i, 0, 0, 0);
            lparams.width = 35;
            lparams.height = 35;
            rl.setLayoutParams(lparams);
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN) {
                rl.setBackground(drawable);
            } else {
                rl.setBackgroundDrawable(drawable);
            }
            result.addView(rl);
            i = i + 25;
        }
        return result;
    }

I also had the same problem in a simple custom adapter; there it was solved by moving my function out of if (convertView == null) {. This is the link.

A: From your code, your relative layout must be showing some extra data while scrolling, and that's because of the recycling of views. Here:

    public void onBindViewHolder(final ItemViewHolder itemViewHolder, int i) {
        itemViewHolder.date.setText(items.get(i).getData()); // set the title
        itemViewHolder.relative_layout_tag_place.addView(generateTagImages(items.get(i).getServerId()));
        // The problem is here. Suppose you added some child views to this holder's
        // relative layout; when the ViewHolder is recycled and provided to another
        // item view, it still contains all the previously added views, and you are
        // adding new child views alongside them.
    }

Hope you understand the problem.

Solution: a very easy one. Remove all previously added views in onBindViewHolder:

    public void onBindViewHolder(final ItemViewHolder itemViewHolder, int i) {
        itemViewHolder.date.setText(items.get(i).getData()); // set the title
        itemViewHolder.relative_layout_tag_place.removeAllViews();
        itemViewHolder.relative_layout_tag_place.addView(generateTagImages(items.get(i).getServerId()));
    }
Q: Create cartesian product expansion of two variadic, non-type template parameter packs

Let's say I have:

- two lists of non-type template parameters (which might have different types)
- a template foo that takes one value from each of those lists as a parameter

How can I create a variadic parameter pack of foos, parameterized with the cartesian product of the two lists' elements? Here is what I mean:

template<int ...> struct u_list {};
template<char ...> struct c_list {};

template<int, char> struct foo {};
template<class ...> struct bar {};

using int_vals = u_list<1, 5, 7>;
using char_vals = c_list<-3, 3>;

using result_t = /* magic happens */
using ref_t = bar<
    foo<1, -3>, foo<1, 3>,
    foo<5, -3>, foo<5, 3>,
    foo<7, -3>, foo<7, 3>
>;
static_assert(std::is_same<result_t, ref_t>::value, "");

I'm looking for a solution that works in C++11 and doesn't use any libraries except the C++11 standard library. I also have my hand-rolled version of C++14's index_sequence / make_index_sequence and can provide the non-type parameter lists as arrays if that simplifies the code.

The closest I've found so far is this: How to create the Cartesian product of a type list?. So in principle (I haven't tested it) it should be possible to turn the non-type parameter packs into type parameter packs and then apply the solution in the linked post, but I was hoping that there is a simpler / shorter solution along the lines of this:

template<int... Ints, char ... Chars>
auto magic(u_list<Ints...>, c_list<Chars...>) {
    // Doesn't work, as it tries to expand the parameter packs in lock step
    return bar<foo<Ints, Chars>...>{};
}

using result_t = decltype(magic(int_vals{}, char_vals{}));

A: You may do something like the following:

template <int... Is> using u_list = std::integer_sequence<int, Is...>;
template <char... Cs> using c_list = std::integer_sequence<char, Cs...>;

template<int, char> struct foo {};
template<class ...> struct bar {};

template <std::size_t I, typename T, template <typename, T...> class C, T ... Is>
constexpr T get(C<T, Is...> c)
{
    constexpr T values[] = {Is...};
    return values[I];
}

template <std::size_t I, typename T>
constexpr auto get_v = get<I>(T{});

template<int... Ints, char ... Chars, std::size_t ... Is>
auto cartesian_product(u_list<Ints...>, c_list<Chars...>, std::index_sequence<Is...>)
-> bar<foo< get_v<Is / sizeof...(Chars), u_list<Ints...> >,
            get_v<Is % sizeof...(Chars), c_list<Chars...> > >... >;

template<int... Ints, char ... Chars>
auto cartesian_product(u_list<Ints...> u, c_list<Chars...> c)
-> decltype(cartesian_product(u, c, std::make_index_sequence<sizeof...(Ints) * sizeof...(Chars)>()));

using int_vals = u_list<1, 5, 7>;
using char_vals = c_list<-3, 3>;

using result_t = decltype(cartesian_product(int_vals{}, char_vals{}));

Demo

Possible implementation of the std part:

template <typename T, T ... Is> struct integer_sequence {};

template <std::size_t ... Is>
using index_sequence = integer_sequence<std::size_t, Is...>;

template <std::size_t N, std::size_t... Is>
struct make_index_sequence : make_index_sequence<N - 1, N - 1, Is...> {};

template <std::size_t... Is>
struct make_index_sequence<0u, Is...> : index_sequence<Is...> {};

And the changes in the answer:

template <std::size_t I, typename T, template <typename, T...> class C, T ... Is>
constexpr T get(C<T, Is...> c)
{
    using array = T[];
    return array{Is...}[I];
}

template<int... Ints, char ... Chars, std::size_t ... Is>
auto cartesian_product(u_list<Ints...>, c_list<Chars...>, index_sequence<Is...>)
-> bar<foo< get<Is / sizeof...(Chars)>(u_list<Ints...>{}),
            get<Is % sizeof...(Chars)>(c_list<Chars...>{}) >... >;

Demo C++11
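The heart of the answer is a purely arithmetic trick: element I of the flattened, row-major product of an N-element list and an M-element list corresponds to the pair of source indices (I / M, I % M). A minimal standalone sketch of that index mapping (the struct and function names here are illustrative, not part of the answer above):

```cpp
#include <cstddef>

// Position of element I in the row-major cartesian product of an
// N-element list with an M-element list: row I / M, column I % M.
struct Index2 {
    std::size_t row;
    std::size_t col;
};

constexpr Index2 product_index(std::size_t i, std::size_t m) {
    return Index2{i / m, i % m};
}

// Compile-time checks against the 3 x 2 example from the question,
// i.e. the ordering foo<1,-3>, foo<1,3>, foo<5,-3>, foo<5,3>, foo<7,-3>, foo<7,3>:
static_assert(product_index(0, 2).row == 0 && product_index(0, 2).col == 0, "");
static_assert(product_index(1, 2).row == 0 && product_index(1, 2).col == 1, "");
static_assert(product_index(4, 2).row == 2 && product_index(4, 2).col == 0, "");
static_assert(product_index(5, 2).row == 2 && product_index(5, 2).col == 1, "");
```

This is the same mapping the answer performs at the type level with get_v<Is / sizeof...(Chars), ...> and get_v<Is % sizeof...(Chars), ...> for each index in the generated index sequence.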
Welcome! This is a community for people interested in stories, icons, discussions, fanart and fiction involving the Supernatural TV show characters Dean Winchester and/or Layla Rourke. Submissions do not have to include both characters, but they do need to at least stem from the idea of some sort of interaction between them, or from the episode "Faith." Other canon characters are acceptable as long as they conform to this idea. Remembering, of course, that the primary emphasis should stem from the interaction between Dean and Layla or from the episode "Faith," all types of entries are welcome at this time. Please post using the standard warnings for spoilers, as well as the following genre labels: het for heterosexual interaction or relationship; gen for general stories without emphasis on a relationship; RPF-gen or RPF-het for fiction involving any of the actors on the show (including Jensen and Julie; not sure RPS belongs here, but if you're so motivated, fire away); slash for a story with homosexual interaction or relationship; and wincest for you-know-what interaction between Dean and Sam Winchester (or their other family members). Also, please rate your submission using the following standard ratings: G, PG, NC-17, R, and M (for Mature, strongest rating). BE CERTAIN TO F-LOCK YOUR POSTS TO THE COMMUNITY IF YOUR RATING IS NC-17 OR HIGHER. Be as specific as you can about anything your story contains, even if it is not the main emphasis, that others may find disturbing, including violence, sexual situations, strong language, death, and incest. If you have questions, feel free to email susannah at susannaheanes dot com and we'll get back to you swift as a Reaper. And have fun! Dean and Layla are glad you are here.
This application seeks support for Pediatric Oncology Group activities by Southwestern Medical School. Since 1981 our institution has grown to become the Group's third or fourth largest member, among 40 participating institutions, with regard to patients enrolled on therapeutic studies. Moreover, investigators from our center have risen to positions of leadership on major Group committees, including the Executive Committee, New Agents and Pharmacology Committee, and Lymphoid Diseases Committee (Relapsed ALL Subcommittee). Investigators from Southwestern Medical School also have served and are serving as study coordinators of key Pediatric Oncology Group protocols, including the SIMAL-3 study for relapsed patients with ALL, the Group's second largest study in patient accrual. We are now proposing a major commitment of resources from our center to continue Pediatric Oncology Group research studies. This grant proposal describes the personnel and facilities in our center and its affiliate institution, Cook Children's Hospital in Fort Worth, with which we aim to conduct Group activities. The past history of active participation in clinical cancer research within and separate from the Pediatric Oncology Group and our initial contributions to the Group during the past 3 years provide evidence of this commitment. 
Specifically, we aim to participate in Pediatric Oncology Group research activities by: 1) actively enrolling as many patients as possible on Pediatric Oncology Group treatment and ancillary protocols, among the 130-140 new children with cancer seen each year at Southwestern Medical School and its affiliate institution; 2) collecting, recording, and submitting all necessary data in a timely and organized fashion in order that our protocol entries continue to receive a high rate of evaluability; 3) making scientific contributions to the Pediatric Oncology Group by actively serving on key committees, as protocol coordinators, and as consultants to the Group's leaders in a variety of areas, particularly regarding new therapeutic approaches to ALL and development of Phase I and Phase II drug trials.
Hueneosauria The Hueneosauria are a group of Ichthyosauria, living during the Mesozoic. In 2000, Michael Werner Maisch and Andreas Matzke defined a node clade Hueneosauria as the group consisting of the last common ancestor of Mixosaurus cornalianus and Ophthalmosaurus icenicus; and all of its descendants. The clade is named after Friedrich von Huene, a German paleontologist who was a leading ichthyosaur expert in the early twentieth century. The Hueneosauria contain the more derived ichthyosaurs, which have the morphology of a fish. The group originated in the early Triassic and became extinct during the Cretaceous. References Category:Ichthyosaurs Category:Triassic ichthyosaurs Category:Cretaceous ichthyosaurs Category:Early Triassic first appearances Category:Cretaceous extinctions
Searchers find live worms in shuttle wreckage CAPE CANAVERAL, Florida (AP) -- Hundreds of worms from a science experiment aboard the space shuttle Columbia have been found alive in the wreckage, NASA said Wednesday. The worms, known as C. elegans, were found in debris in Texas several weeks ago. Technicians sorting through the debris at Kennedy Space Center in Florida didn't open the containers of worms and dead moss cells until this week. All seven astronauts were killed when the shuttle disintegrated over Texas on February 1. Columbia contained almost 60 scientific investigations. "To my knowledge, these are the only live experiments that have been located and identified," said Bruce Buckingham, a NASA spokesman at the Kennedy Space Center. The worms and moss were in the same nine-pound locker located in the mid-deck of the space shuttle. The worms were placed in six canisters, each holding eight petri dishes. The worms, which are about the size of the tip of a pencil, were part of an experiment testing a new synthetic nutrient solution. The worms, which have a life cycle of between seven and 10 days, were four or five generations removed from the original worms placed on Columbia in January. The C. elegans are primitive organisms that share many biological characteristics of humans. In 1999, C. elegans became the first multicellular organism to have the sequencing of its genome completed. C. elegans have two sexes: males and hermaphrodites, which are females that produce sperm. A hermaphrodite worm can self-fertilize for the first 300 or so eggs but later usually prefers to accept sperm from males to produce a larger number of offspring. The experiment was put together by researchers at the NASA Ames Research Center in California. The moss, known as Ceratodon, was used to study how gravity affects cell organization. During Columbia's flight, shuttle commander Rick Husband sprayed the moss with a chemical that destroyed protein fiber. 
He also sprayed the moss with formaldehyde to preserve it. Seven of the eight aluminum canisters holding the moss were recovered. Why worms? The C. elegans are primitive organisms that share many biological characteristics of humans. The experiment was put together by an Ames Research Center researcher and Dr. Fred Sack at Ohio State University. "The cells were surprisingly well-preserved, but we're analyzing how useful it's going to be," Sack said. NASA officials said they don't know if the worms will still have any scientific value since they were supposed to have been examined and unloaded from Columbia within hours of landing. "It's pretty astonishing to get the possibility of data after all that has happened," Sack said. "We never expected it. We expected a molten mass."
LONDON (Reuters) - British manufacturers selling goods ranging from machinery to toys into the European Union would need to have their products retested by EU safety regulators in the event of a “no deal” Brexit, the British government said on Thursday. Currently British manufacturers of many goods are subject to the EU’s “New Approach” rules that ensure products sold across the bloc meet its safety and environmental standards. But if Britain and the EU fail to agree the terms of their divorce ahead of the March 29 leaving date, procedures would change markedly for British producers, the British government said in a paper detailing the consequences of a no-deal Brexit. The goods covered by the paper include construction products, lifts, toys and machinery. Road vehicles, aerospace and pharmaceuticals have been covered in separate papers. “Products which were tested by a UK-based notified body will need to be retested by an EU-recognised conformity assessment body before placing on the EU internal market,” the paper said of a no-deal scenario. “Alternatively, manufacturers can seek to arrange for their files to be transferred to an EU-recognised notified body to allow for certificates of conformity issued by a UK-based notified body to continue to be valid.” Conversely, Britain would allow goods that meet EU requirements to be sold in the British market without being retested, for a time-limited period, the government said.
I love camping but one aspect I don't love is the pesky mosquitoes. My poor middle daughter seems to get eaten alive no matter how much bug spray we put on her. So when I was contacted about trying the Thermacell Mosquito Repellent Lantern, I was happy to accept to see how well it worked on our recent camping trip to Kelley's Island. When we scheduled our camping trip, I have to admit I was a bit worried about how my daughter would fare in the evenings around the campfire. So in addition to liberal amounts of bug spray, I was happy to put the Thermacell Mosquito Repellent Lantern to the test to see if it would provide adequate protection for my daughter.

Thermacell makes quite a few different mosquito repellents. We received the Bristol lantern to test out. This model comes with the lantern, 3 mosquito repellent mats, and 1 butane cartridge. The lantern needs 3 AAA batteries, which are not included.

Benefits of the Thermacell Mosquito Repellent Lantern

The lantern uses allethrin to repel mosquitoes. Allethrin is a synthetic version of the natural repellent found in chrysanthemums.

Set up is easy. It only takes a few minutes to get the lantern ready to start providing protection.

The lantern provides odorless protection, which is a vast improvement over mosquito repellent candles and citronella torches.

There is no open flame, so you don't have to worry about using it around little kids.

The area of protection is large. One lantern will repel mosquitoes in a 15' x 15' space, which is more than adequate for most decks, patios, and campsites.

It works really well. We used it for several evenings on our camping trip and did not once have problems with mosquitoes. I was impressed!

A couple of things to note: You can't use the lantern as your only source of light. Light attracts bugs at night time and this lantern is no exception. We tried to set it on the picnic table one evening and had to move it because we were getting bombarded with insects. The solution?
Set it 5 feet away or so from where you are sitting.

If you use this lantern regularly you will have to buy refills for the butane cartridge and the mosquito repellent mats. The lantern comes with one butane cartridge, which will last 12 hours. That will probably get us through the summer. However, the mosquito repellent mats don't last as long: on average only about 4 hours. So if you entertain or camp frequently you will probably want to have some refills on hand.

Other uses for the Thermacell Mosquito Repellent Lantern

In addition to camping, this lantern would be wonderful for any evening outdoor events, like watching the fireworks, attending an outdoor concert, or spending an evening at the lake. It would also be great for providing protection for your summertime parties that extend into the evening. It would provide excellent protection for your patio or backyard to keep your guests from getting eaten alive. You could also use it for protection while you do a little evening gardening. I always liked to garden as the sun was going down and things cooled down a bit, but the mosquitoes made it unbearable. Now I can set the lantern nearby and garden in peace!

Want to try the Thermacell Mosquito Repellent Lantern for yourself? Order it directly from Thermacell. Use code FamilyGuide2016 at checkout on Thermacell.com to receive 20% off your order of $40 or more. Offer expires Dec. 31st, 2016. Buy it locally. You can check out their store locator to find a store near you. We saw it in Bed, Bath, and Beyond last night while we were there.

I was really glad to have gotten to try the Thermacell mosquito repellent lantern. It definitely made evenings around our campsite a lot more pleasant. We have added it to our camping gear and will be taking it along on any future camping trips to ensure mosquitoes don't ruin our fun.

A big thank you to Thermacell for sending us a patio shield mosquito protection lantern to try on our recent camping trip.
As always, all opinions are my own. No additional compensation was received.
Q: Hibernate validator doesn't work

I'm trying to use Hibernate Validator in my Spring MVC project with an HTML form. It compiles correctly, but when I put the wrong text in the form input, Hibernate doesn't detect it; the BindingResult has no errors. I'm using Java 8 with Tomcat 9.0. Here is my code:

The class that I want to validate:

private int id;
private String jugador;

@NotNull(message = "No puede estar vacio")
@Size(min = 1, max = 16, message = "...")
private String contra;

@NotNull(message = "No puede estar vacio")
@Size(min = 1, max = 16, message = "...")
@Email
private String email;

The controller:

@RequestMapping("/procesarRegistro")
public String procesarRegistro(@Valid @ModelAttribute("Cuenta") Cuenta cuenta, BindingResult bindingResult){
    // This is always false (I don't know why)
    if(bindingResult.hasErrors()){
        return "registrarse";
    }
}

The HTML form:

<form:form action="procesarRegistro" name="reg" modelAttribute="Cuenta" cssClass="login" cssStyle="margin-top: 4%; margin-left: 40%; margin-right: 40%; border-radius: 3%; background: #fff;">
    <div class = "contenedorLogin">
        <div class = "input-contenedor">
            <form:input path="jugador" placeHolder = "Nombre de minecraft" value = "" cssClass="textLog" cssStyle="text-align: left;"></form:input><br>
            <form:input path= "id" placeHolder = "Numero de cuenta" cssClass="textLog" cssStyle="text-align: left;"></form:input><br>
            <form:input path="email" placeHolder = "email" cssClass="textLog" cssStyle="text-align: left;"></form:input><br>
            <form:errors path = "email"></form:errors>
            <form:password path="contra" placeHolder = "Tu contraseña" value = "" cssClass="textLog" cssStyle="text-align: left;"></form:password><br>
            <form:errors path = "contra"></form:errors>
            <input type="text" value="confirmar">
            <input type="password" name="contra2" placeholder="Confirmar" value = "" class = "textLog" style="text-align: left;"><br><br>
        </div>
        <input type="submit" value = "Registrarse" class = "botonLog">
        <p style="text-align: center; color: gray;">Si tienes alguna duda entra al servidor de minecraft y pon /cuenta</p>
    </div>
</form:form>

I'm using version 5.4.3 of Hibernate Validator (I have tried 7.0, 6.1 and 6.0). Here is a photo of my jar files: Thanks for reading; I would be so pleased if someone could help me :)

A: Try using the @NotBlank validator. You are probably receiving an empty value "" instead of null. For more control, use both validators together: @NotNull and @NotBlank. I hope this helps.
The effect of oral contraceptives on reproductive function during semichronic exposure to ethanol by the female rat. 1. Female rats were placed on water, 5% ethanol (ET), or 20% ET drinking solutions for 8 weeks. The last 2 weeks, the rats received orally either ethinyl estradiol (EE), norethindrone acetate (NED), or a combination of both. 2. Luteinizing hormone decreased due to ET drinking and was undetectable subsequent to the steroidal treatment. 3. Prolactin increased after steroid treatment and alcohol drinking in the controls. 4. Ethanol (5%) plus EE increased prolactin as did the steroidal combination, whereas ET (20%) likewise increased prolactin in conjunction with NED over water controls. 5. Hepatic alcohol dehydrogenase was inhibited due to EE when compared to water controls in the 5% ET drinking animal, whereas aldehyde dehydrogenase was induced in combination with NED in both the 5% and 20% ET drinking rats.
Senate Republicans are still trying to keep their distance from Roy Moore, creating a fresh break with President Donald Trump and the Republican National Committee, which have re-embraced Moore less than a week before a key special Senate election despite accusations of child molestation against the Alabama Republican. Both the National Republican Senatorial Committee and the Senate Leadership Fund, a super PAC controlled by allies of Senate Majority Leader Mitch McConnell, said they plan on staying out of the contest. Several Republican senators furiously protested the RNC’s decision on Tuesday. But there’s a clear sense of resignation among GOP senators who have tried to block Moore from winning the race, acknowledging that the explicit seal of approval from Trump has left them no good options in the Dec. 12 contest. McConnell has acknowledged that he can’t force Moore out of the race. “That’s up to them,” Sen. Ron Johnson (R-Wis.) said of the RNC’s renewed involvement in the race, throwing up his hands. “I can’t blame them. Let’s face it, they represent the Republican Party,” Sen. Orrin Hatch (R-Utah) added, speaking about the RNC. “Frankly, I think if he gets elected, that ought to be — that ought to settle an awful lot of the questions.” The RNC sent $50,000 to the Alabama Republican Party to help Moore in the final week of the campaign. Moore campaigned with former White House strategist Steve Bannon on Tuesday night in Fairhope, an affluent suburb in a county Trump won with more than 70 percent of the vote. Sen. Susan Collins (R-Maine) said flatly that she does not think the RNC should be supporting Moore. On his Twitter account Tuesday, Sen.
Jeff Flake of Arizona, one of Moore’s loudest critics in the Senate, showed off a $100 check he made to the campaign of Moore’s Democratic foe, Doug Jones. “I don’t understand that move,” South Dakota Sen. John Thune, the third-ranking Senate Republican, said of the RNC’s decision. “I guess that’s consistent with what the president wants to see happen, but it’s not consistent with what I’ve been saying. I just think, again, we’re putting ourselves in a situation where we’re going to have a cloud of uncertainty and a cloud of distraction come January.” McConnell and allies of the majority leader say his position hasn’t shifted since The Washington Post last month printed the first allegations that Moore preyed on teenage girls when he was in his 30s. More than a half-dozen women have accused Moore of sexual misconduct, including some who were teenagers at the time. “Look, I’ve made my position perfectly clear,” McConnell said. “I had hoped that Judge Moore would resign — in other words, withdraw from the race. That obviously is not going to happen. If he were to be elected, I think he would immediately have an issue with the Ethics Committee, which they would take up.” “I don’t think he’s wavered at all in his opinion of Roy Moore,” said Josh Holmes, a former McConnell chief of staff. “I think what’s changed is the fact that Moore is inevitably going to [remain] the nominee.” Moore didn’t seem eager to win McConnell’s backing. Toward the beginning of his speech in Fairhope on Tuesday night, Moore ticked off a list of Trump’s unfulfilled promises, from building the wall to rolling back NAFTA. Why weren’t they getting done? “Mitch doesn’t want it,” Moore said. Many Republicans, once it became clear there was no way to replace Moore, have simply tried to erase the race from their thoughts. “It doesn’t even come up at conference lunches,” Holmes said. 
McConnell has repeatedly stressed that he believes the accusers and tried to force Moore off the ballot, without success. But a viable Republican write-in campaign never materialized, and a potential idea to shift the Dec. 12 election date — by having current Sen. Luther Strange resign — was quickly determined to be unworkable. “I don’t think anybody’s surprised here,” said Richard Shelby (R-Ala.), the state’s senior senator, who backed Strange in the primary. “The president’s interested in keeping 52 votes up here, and I’d like to, too. But a lot of us have different views on it, you know.” Shelby cast his vote for a write-in candidate, but he declined to say for whom he voted. And Shelby added that he believes Moore will “probably win” next Tuesday. North Carolina Sen. Thom Tillis, the NRSC’s vice chairman, said: “The president has voiced his support for him, so I’m sure that was instructive in [the RNC’s] decision. … That’s a call they have to make; it’s not something I’ve personally done or will do.” Other top Senate Republicans have kept up their anti-Moore posture. Colorado Sen. Cory Gardner, the chairman of the National Republican Senatorial Committee who has called for Moore’s expulsion should he be elected, said Tuesday that the campaign arm’s position has not changed. McConnell also said the NRSC will not get re-involved. There’s nothing that Senate Republicans can do to prevent Moore from being seated as a member of the Senate, multiple GOP senators stressed Tuesday. A 1969 Supreme Court decision held that a duly elected member of Congress couldn’t be excluded as long as he or she met the age, citizenship and residency requirements laid out in the Constitution.
Expelling a victorious Moore after he was seated would be another matter, though it hasn’t been done since the Civil War era. But despite the Senate Ethics Committee’s broad jurisdiction to examine senators’ conduct, there have been some questions about whether the committee could investigate instances that occurred before the individual was a sitting member of the Senate. “The last two senators expelled from the Senate were the two Missouri senators in 1862,” said Sen. Roy Blunt (R-Mo.), though he said he agrees with Gardner’s decision to have the NRSC stay out of the race. “So expulsion is not something that the Senate has generally thought was the business of the Senate.” Jones’ campaign has sought to capitalize on the deep GOP divisions, highlighting Shelby’s declaration that he had voted for a write-in candidate instead of Moore. “His conduct is so disturbing, Sen. Shelby will not vote for him,” a narrator said in Jones’ latest campaign ad released on Tuesday. The ad then shows a clip of Shelby declaring: “I will not be voting for Judge Moore.”
5 simple ideas to reboot the Toys R Us brand

As my better half and I drove by a vacant, dusty former Toys R Us, I uttered: "That's so sad". My love looked at me and asked: "Is it, though"? Thinking for a moment, I concluded that the loss of the Toys R Us and Babies R Us brands is a complete and utter travesty. I know full well this comes from a very emotional place deep in my core. Toys are amazing. There are scores of action figures, Vinyl Pops and DC Comics statues littering every desk I use. As a child, Toys R Us was the place to get my action figure fix. From Teenage Mutant Ninja Turtles to every weird iteration of the Batman figures, Geoffrey the Giraffe had me covered. Those toys helped inform my creative thinking process as I crafted imaginary worlds where Panthro raced to rescue Junkyard Dog from the wrath of Juggernaut. Knowing that my daughter may not be able to run the aisles of a Toys R Us and have that same joy I did is pretty sad.

There's hope, my fellow '80s kids. Toys R Us may be able to return, as they stop their bankruptcy auction and aim to get the brand back up on its feet. My mind started racing thinking of all the ways I would help elevate the joyful brand to not only be competitive but also a must-visit destination.

Keep the logos and the equity they hold

The Toys R Us word mark holds a lot of brand equity. Many of the former Toys R Us kids are now adults with kids of their own. Seeing that logo is a reminder of simpler times, back when you'd beg your parents for the newest plastic figure on the market.

Be online always

A major downfall of Toys R Us was its inability to embrace emerging technology, mainly digital shopping. It simply jumped on the bandwagon too late. By the time they entered the online game, they could not compete with the digital retail giants. Amazon, Target, and Walmart all dipped into the Toys R Us toy pool with better online experiences than TRU. The toy giant did little to stay relevant online.
This time around, the new toy giant will need to put a heavy emphasis on its digital offerings, whether that's better pre-order bonuses for games (helping it compete with Amazon as Prime members lose pre-order bonuses), or offering exclusive online deals on the hottest piece of formed plastic on the market. The sky is the limit for the brand; it needs to embrace the online space. Perhaps VR store tours for kids? I'm spit-balling here.

Be like Nintendo. Create a 1st party toy line

Nintendo is a brilliant company. Its gaming consoles are supported by a dedicated, hardcore base of life-long fans. Whenever Nintendo announces a new Mario, Zelda, or (god help us) Metroid game, it prints money before the game is even released. Sure, the Wii and Switch have 3rd party and independent games on them, but the money-makers are more often than not a Mario game. Nintendo owns Mario, making that a closed loop of endless revenue. Toys R Us needs a Mario.

Embrace Sideshow Collectibles and other luxury toy brands

I love a good Sideshow Collectible. Finding one in the wild is always a treat. Stores like Midtown Comics or Forbidden Planet in New York cater to folks like me: the adults with disposable income who enjoy the finer toys in life. Toys R Us can get in on this, using high-quality collectible figures as a means to get guys and gals like me back into their stores. The answer could be an offshoot brand called Toys R Us Gold, or Toys R Us Old. Again, I am spitballing.

Hold Toy Cons for independent toy makers

Conventions are big business. Every brand with a sizable audience invests in a meetup at some point or another. Comic-Con, Designer-Con, Digital Thinkers Conference. The sky is the limit.

Just bring Toys R Us back already

Whatever the proverbial "they" decide to do, having Toys R Us once more is a net positive for humanity. We can all use a little childish joy in our lives. That's what Toys R Us can generate in people.
The sight of Geoffrey welcoming people into the world of toys is sorely lacking. Do you have any ideas on how to help Toys R Us return and flourish? Drop me a line on Twitter and let’s talk about toys.
Introduction
============

Stem cells stay in their undifferentiated stage until they receive appropriate activation signals and begin the differentiation process into specific lineages, according to the type of received stimuli. Signals such as growth factors and physical cues are provided by the surrounding cellular microenvironment ([@B1],[@B6]). Cell density and cell-cell interactions play major roles in the differentiation process ([@B7]). Some chemical compounds such as dexamethasone are osteogenic supplements and, like growth factors, they play essential roles in osteogenic differentiation of mesenchymal stem cells (MSCs) ([@B8]).

In this study, we have used a pulsed electromagnetic field (PEMF) as the biophysical guide to create waves with constant properties. Such waves are non-ionizing and create non-thermal fields with high rates of amplitude changes ([@B9]). The applied frequency of the extremely low frequency electromagnetic field is under 300 Hz and the amplitude ranges from 0.2 to 20 millitesla (mT) ([@B10]). PEMF is clinically used to treat osteoporosis by increasing bone mass in women with menopause and snapback in patients with osteotomies. This field also acts to reduce the bone resorption activity of osteoclasts ([@B11]) as well as increase calcium content and other bone minerals ([@B12]). As a biophysical factor, PEMF motivates the release of Ca^2+^ ions from the smooth endoplasmic reticulum as the starting point of signaling pathways that activate osteogenic differentiation. The increase in intracellular Ca^2+^ level also triggers enzymatic cascades, resulting in the secretion of growth factors such as bone morphogenetic proteins (BMPs), expression of osteoblast-specific genes, and cell proliferation ([@B13]). The interaction between an electromagnetic field and biological tissue is related to the amplitude, frequency, and form of the wave in addition to the time duration of the exposure ([@B9]).
Until now, modern medicine has used extremely low frequency PEMFs to treat non-union bone fractures, pseudarthrosis, osteoporosis, and periodontal disease ([@B9]). Interaction of electromagnetic fields with the extracellular matrix can increase cytosolic Ca^2+^ and then promote the proliferation of osteoblastic cells ([@B14]). It has been proven that the expression of osteoblastic marker genes is upregulated in response to a combination of specific PEMFs and chemical compounds such as BMPs or other inductive factors ([@B9], [@B15]). In this study, we researched the effect of PEMF on MSC proliferation and differentiation toward osteoblasts along with the expression levels of osteoblastic marker genes such as osteocalcin (*Ocn*) and runt-related transcription factor 2 (*Runx2*). Our objective was to analyze the effects of an electromagnetic field on osteogenic differentiation of stem cells. In addition, we assessed the influence of chemical factors when combined with PEMF.

Materials and Methods
=====================

This was an experimental animal study conducted on rat bone marrow-derived MSCs.

Mesenchymal stem cell isolation and culture
-------------------------------------------

All animal experiments were performed according to approved guidelines of the Ethics Committee at Pasteur Institute of Iran. A total of 9 male, 4-week-old Wistar rats (weights: 230-250 g) were anesthetized in order to obtain bone marrow aspirates from their iliac crests under sterile conditions. After isolation of bone marrow stem cells according to the Ficoll-Paque technique, we cultured these cells in Minimum Essential Medium Eagle Alpha Modification (α-MEM medium, Sigma, NY, USA) supplemented with 15% fetal bovine serum (FBS), 1% penicillin/streptomycin \[100 U/ml of penicillin and 100 µg/ml of streptomycin (Sigma, NY, USA)\] and 1% L-glutamine (Gibco, NY, USA). The medium was changed every 3 days ([@B8]). Cells at passage 3 (P3) were used for the experiments.
The rat osteosarcoma cell line (UMR106), provided by the National Cell Bank of Iran (C586), served as the positive control group and untreated stem cells comprised the negative control group.

Multipotential assay
--------------------

We performed chondrogenic, osteogenic, and adipogenic differentiation experiments to examine the multipotential differentiation ability of the isolated cells. For osteogenic differentiation, P3 cells were exposed to osteogenic medium that contained Dulbecco's Modified Eagle's Medium (DMEM), 10% FBS, 100 nM dexamethasone, 10 mM β-glycerol phosphate, and L-ascorbic acid 2-phosphate for 21 days. The medium was changed every 3 days. Thereafter, the cells were fixed and stained with Alizarin red S. In order to induce adipogenesis, the cells were subjected to DMEM that contained 10% FBS, 0.5 μM of 3-isobutyl-1-methylxanthine (IBMX), 1 μM dexamethasone, 10 μg/ml insulin, and 100 μM indomethacin for 15 days. Subsequently, the cells were fixed with 4% paraformaldehyde and stained with Oil red O. For directing cells toward chondrogenic differentiation, cell pellets were prepared and incubated with DMEM that contained 50 mM ascorbic acid-2 phosphate, 10 ng/mL transforming growth factor β1 (TGF-β1, R&D Systems, USA), 100 nM dexamethasone, 1% ITS-Premix (BD Biosciences, USA), and 1 mM sodium pyruvate (Gibco, NY, USA) for 28 days. Chondrogenic differentiation was examined by fixing the cells with 10% formalin, followed by sectioning the pellets and staining them with Alcian blue. All chemicals, unless otherwise indicated, were purchased from Sigma, USA ([@B16]).

Immunophenotyping
-----------------

Bone marrow-derived MSCs were studied for the expression of CD45 as the hematopoietic marker, along with CD73 and CD90 as the MSC surface markers. A PE-conjugated antibody against CD73 (BD Biosciences, CA, USA) and FITC-conjugated goat anti-mouse IgG antibodies for CD45 and CD90 were used.
Mouse IgG1 K isotype control (eBiosciences, CA, USA), mouse IgG2a K isotype control FITC (eBiosciences, CA, USA), and donkey anti-mouse IgG (H+L) PE (eBiosciences, CA, USA) were used as secondary antibodies for detection of the selected markers. Unstained cells were used for gating in the flow cytometric analysis. We counted 15000 events for each antibody. Data were analyzed by FlowJo software version 7.6.4 ([@B17]).

Pulsed electromagnetic field exposure
-------------------------------------

PEMF stimulation was performed using Helmholtz coils of copper wire ([@B18]). A pair of 12.7 cm-diameter circular coils was placed opposite to each other within the incubator and the cell flask was located in the uniform field area at the coil center. Proper shielding and the use of Plexiglass guaranteed the prevention of any disturbance to the applied stimulating magnetic field. The electromagnetic field generator, the Helmholtz coil, consists of two solenoid electromagnets on the same axis. The non-sinusoidal magnetic field is generated by an electric current through the coils. This device is used to produce uniform electromagnetic waves in order to create a uniform magnetic field. These coils cancel the interference of external magnetic fields generated by nearby electrical devices or the Earth's magnetic field. The employed device had three parts: a stimulator, the coils, and a control box. Intensity of the created field was regulated by changing the voltage of the stimulator. The apparatus was ordered by the National Cell Bank of Iran and the Behi Afzar Saz Pooya Company of Iran fabricated the entire system ([@B10]). Intensity of the field was 0.2 mT with a 15 Hz frequency. We used a pulse-on time of 40 milliseconds and a pulse-off time of 27 milliseconds. A tesla meter (Lutron) measured the magnetic flux at the center of the coil.
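As a consistency check, the stated pulse timing reproduces the stated repetition frequency: a 40 ms on-phase plus a 27 ms off-phase gives a 67 ms period, i.e. roughly 15 Hz. A short illustrative sketch (the variable names are ours, not part of the apparatus specification):

```go
package main

import "fmt"

func main() {
	const (
		pulseOn  = 0.040 // pulse-on time in seconds (40 ms)
		pulseOff = 0.027 // pulse-off time in seconds (27 ms)
	)
	period := pulseOn + pulseOff // 0.067 s per full cycle
	freq := 1.0 / period         // repetition frequency in Hz
	duty := pulseOn / period     // fraction of each cycle the field is on
	fmt.Printf("frequency: %.1f Hz, duty cycle: %.0f%%\n", freq, duty*100)
	// prints: frequency: 14.9 Hz, duty cycle: 60%
}
```

The computed 14.9 Hz agrees with the 15 Hz stimulation frequency reported above.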
At first, we analyzed three daily exposure durations in order to determine the most effective duration of exposure. PEMF was used to stimulate the cells at 0.2 mT and 15 Hz for 10 consecutive days with 2, 4, or 6 hours of exposure per day. The cells from all the groups were then exposed to PEMF at 0.2 mT intensity and 15 Hz frequency for 10 consecutive days with 6 hours of exposure per day. This study had three experimental groups: i. Cells incubated with regular culture medium and exposed to the field, ii. Cells stimulated with simultaneous application of the electromagnetic field and chemical differentiation medium (50 μM ascorbate-2 phosphate, 10 mM β-glycerophosphate, and 0.1 μM dexamethasone) for 7 consecutive days, and iii. Cells subjected to a combination of the mentioned electromagnetic field and chemical differentiation medium for 10 consecutive days. Upon completion of the tests, we performed real-time polymerase chain reaction (PCR) analysis to quantify the expressions of the marker genes ([@B15], [@B19]). Untreated MSCs were utilized as the negative control and UMR-106 was the positive control.

MTT assay
---------

The tetrazolium salt 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT), as a histomorphological stain, was used to study the effect of PEMF on MSC proliferation. MTT is reduced to purple formazan by viable cells. Hence, the number of living cells can be determined based on the absorbance of the formazan solution ([@B20]). We performed the MTT assay on cells subjected to the 0.2 mT electromagnetic field (6 hours of exposure per day) on the 5^th^, 10^th^, and 14^th^ days.

Immunocytochemistry
-------------------

An immunocytochemistry assay was used to examine the influence of electromagnetic field exposure. Antibodies were used against two osteogenic markers, anti-*Runx2* and anti-osteocalcin.
Immediately after exposure to the field, the cells were washed twice with phosphate-buffered saline (PBS) and fixed with 4% paraformaldehyde (Sigma, NY, USA) for 20 minutes at 4˚C. Next, they were permeabilized with 0.5% Triton X100 (Merck, NJ, USA), after which 0.5% goat serum was used to block nonspecific antibody binding. Cells were incubated overnight at 4˚C with mouse monoclonal antibodies against *Runx2* and *Ocn* (both from Abcam, Cambridge, UK). Thereafter, they were incubated with a FITC-conjugated secondary antibody at a 1:100 dilution (Abcam, Cambridge, UK) at room temperature in the dark for 2 hours. Finally, the presence of the mentioned proteins was examined under a Zeiss fluorescence microscope (×630) ([@B21]).

Real-time reverse transcriptional polymerase chain reaction assay
-----------------------------------------------------------------

We used real-time reverse transcriptional polymerase chain reaction (RT-PCR) to examine the expressions of the *Ocn* and *Runx2* genes in the stimulated cells. Total RNA was extracted using the RNeasy Plus Mini Kit (Qiagen, MD, USA) according to the manufacturer's instructions. The purity of extracted RNA was evaluated by means of a NanoDrop spectrophotometer (Implen, Germany). High quality samples with concentrations \>400 ng/μl and A260/A280 ≥1.8 were chosen for analysis. The QuantiTect Reverse Transcription Kit (Qiagen, MD, USA) was used to synthesize complementary DNA (cDNA) from the extracted RNA. Gel electrophoresis was carried out to verify the integrity of the cDNA. TaqMan real-time PCR was performed for quantitative analysis of *Ocn* and *Runx2* expressions. Reactions were carried out using an ABI StepOne system with StepOne v2.1 software (Applied Biosystems, CA, USA). All primers and probes were designed using the Primer Express software (version 3.0). The sequences recommended by this software were analyzed using Gene Runner software.
Ribosomal protein large subunit 13a (*RPL13A*) was selected as the housekeeping gene for normalization of the obtained data that corresponded to *Runx2* and *Ocn* mRNA level quantification. Primer sequences were as follows:

*Runx2*
F: 5ʹ-GCCAGGTTCAACGATCTGAGA-3ʹ
R: 5ʹ-GGAGGATTTGTGAAGACCGTTATG-3ʹ
probe: 5ʹ-TGAAACTCTTGCCTCGTCCGCTCC-3ʹ

*Ocn*
F: 5ʹ-GCAGACCTAGCAGACACCATGA-3ʹ
R: 5ʹ-CCAGGTCAGAGAGGCAGAATG-3ʹ
probe: 5ʹ-TCTCTGCTCACTCTGCTGGCCCTG-3ʹ

*RPL13*
F: 5ʹ-TGAACACCAACCCGTCTCG-3ʹ
R: 5ʹ-GCAGCCTGGCCTCTTTTG-3ʹ
probe: 5ʹ-CCCCTACCACTTCCGAGCCCCA-3ʹ

PCR products were checked by gel electrophoresis according to the product size (data not shown). Each reaction was performed in triplicate with a total volume of 20 μl that contained 5 μl of cDNA sample, 10 µl of TaqMan Universal PCR Master Mix (Applied Biosystems, USA), 10 pmol of each primer, and a 5ʹ-FAM-/3ʹ-TAMRA-labeled probe. The thermal cycling profile involved an initial activation for 10 minutes at 95˚C, followed by 15 seconds at 95˚C and 1 minute at 60˚C, running for 40 cycles. The melting curve stage was set at 95˚C (15 seconds), 60˚C (1 minute), and 95˚C (15 seconds) ([@B22]). Gene expression values were calculated using the following formula:

ΔΔCT = \[minimum CT Targets - minimum CT RPL-13A\]Test samples - \[minimum CT Targets - minimum CT RPL-13A\]Stem cells

Real-time PCR was performed to compare the effects of 0.1 and 0.2 mT fields on the expressions of the osteogenic markers. In another part of this study, the effects of three daily exposure durations (2, 4 and 6 hours) of electromagnetic field application were studied after 10 days of stimulation. Real-time PCR was used to compare gene expression levels among the above-mentioned groups.

Surgical procedures
-------------------

Each animal was anesthetized and a small incision was made on the left side of the cranium. The periosteum and soft tissues were removed to access the cranial bone.
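The ΔΔCT normalization above is conventionally converted to a relative expression value (fold change) as 2^-ΔΔCT^, the Livak method. A minimal sketch of the arithmetic; the CT values below are illustrative only, not data from this study:

```go
package main

import (
	"fmt"
	"math"
)

// deltaDeltaCT implements the normalization described in the text:
// (CT of the target gene minus CT of the reference gene) in the test
// sample, minus the same difference in the untreated stem cells, which
// serve as the calibrator.
func deltaDeltaCT(ctTargetTest, ctRefTest, ctTargetCalib, ctRefCalib float64) float64 {
	return (ctTargetTest - ctRefTest) - (ctTargetCalib - ctRefCalib)
}

func main() {
	// Illustrative CT values only (not from the study).
	ddct := deltaDeltaCT(22.0, 18.0, 26.0, 18.5)
	fold := math.Pow(2, -ddct) // relative expression vs. untreated cells
	fmt.Printf("ddCT = %.2f, fold change = %.2f\n", ddct, fold)
	// prints: ddCT = -3.50, fold change = 11.31
}
```

A negative ΔΔCT thus corresponds to higher expression in the treated sample than in the untreated calibrator.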
The collagen-based scaffolds ([@B23]) with dimensions of 5×5×1 mm^3^ were implanted after predrilling with a dental drill. The following three groups were defined and studied in triplicate: i. Bone sockets without any scaffolds; ii. Defects filled with scaffolds seeded with untreated MSCs, and iii. Defects filled with scaffolds seeded with electromagnetically and chemically stimulated MSCs. Vicryl 3-0 suture was used to close the incisions. We performed autologous transplantation and each animal received its own stem cells. The scaffolds were retrieved after 10 weeks of implantation and fixed with 4% paraformaldehyde for 12 hours. Thereafter, decalcification of bones was carried out in 10% EDTA for two weeks followed by embedding in paraffin. Tissue blocks were cut into 5 µm thick sections and stained with hematoxylin and eosin (H&E) to assess bone healing ([@B24]).

Statistical analysis
--------------------

All data that corresponded to the three separate experiments were expressed as means ± SD. Statistical analyses were performed using one-way ANOVA and the student's t test via SPSS software version 17.0. P values lower than 0.05 were considered statistically significant.

Results
=======

Differentiation potential assays
--------------------------------

The results of multi-lineage differentiation experiments confirmed the potential of the isolated cells to differentiate into adipocytes, osteoblasts, and chondrocytes. After Oil red O staining, we observed the presence of lipid vacuoles. Alizarin red S staining revealed the presence of calcified nodules and Alcian blue staining demonstrated sulfated glycosaminoglycans and chondrogenesis (data not shown). These observations indicated the multipotent identity of the isolated cells.

Characterization of mesenchymal stem cells
------------------------------------------

We used flow cytometry to characterize the mesenchymal identity of the isolated cells.
According to the results, the cells were negative for the hematopoietic marker, CD45. These cells highly expressed CD73 and CD90 as MSC-associated surface proteins. The obtained results ([Fig.1](#F1){ref-type="fig"}) confirmed the mesenchymal identity of the isolated cells.

![Flow cytometric analysis of isolated rat bone marrow-derived mesenchymal stem cells (MSCs). Cells were analyzed for expression of MSC specific surface markers. A. CD45 (negative marker), B. CD73 (positive marker), and C. CD90 (positive marker) as well as cell size (forward-angle light scatter, FAS). The positive mean value of each marker is shown in the corresponding graph. Graphs confirm the mesenchymal identity of the isolated cells.](Cell-J-19-34-g01){#F1}

Electromagnetic field and proliferation
---------------------------------------

We used the MTT assay to determine the influence of a low frequency electromagnetic field on stem cell proliferation after 5, 10, and 14 days of cell exposure to PEMF (0.2 mT, 15 Hz, 6 hours of exposure/day). Unstimulated MSCs were the negative control. As illustrated in Figure 2, cells stimulated with the electromagnetic field had a higher proliferation rate compared to unstimulated MSCs. Thus, PEMF treatment for 14 days did not have any negative effect on MSC proliferation; rather, it enhanced the proliferative activity of these cells.

![Results of the MTT assay on mesenchymal stem cells (MSCs) exposed to an electromagnetic field (0.2 mT, 15 Hz, 6 hours/day) to estimate the number of cells after 5, 10 and 14 days. Unstimulated cells cultured for 5, 10, and 14 days were used as the control groups (P\<0.05).\
PEMF; Pulsed electromagnetic field.](Cell-J-19-34-g02){#F2}

Effects of electromagnetic field intensity
------------------------------------------

We conducted real-time PCR analysis of the effects of field intensity on gene expression.
MSCs from two separate groups were exposed to 0.1 mT or 0.2 mT intensity fields with otherwise identical field parameters of 15 Hz frequency and 6 hours of PEMF application per day for 10 consecutive days. As shown in Figure 3, the 0.2 mT intensity field resulted in a greater increase in expression of osteoblastic genes compared to the 0.1 mT field.

![The effects of two different electromagnetic field intensity levels (0.1 mT and 0.2 mT) at 15 Hz, 6 hours/day for 10 consecutive days on the expressions of *Runx2* and *Ocn* according to real-time polymerase chain reaction (PCR). UMR-106 was the positive control. As shown, 0.2 mT intensity was more influential in stimulating mesenchymal stem cells (MSCs) to express osteogenic markers (P\<0.05).](Cell-J-19-34-g03){#F3}

Effects of electromagnetic field exposure duration
--------------------------------------------------

We tested three different durations of daily exposure in order to find the most influential duration. Stem cells were stimulated with PEMF (0.2 mT and 15 Hz) for 10 consecutive days with daily exposure durations of 2, 4, or 6 hours. We observed the highest expression levels of *Runx2* and *Ocn* in the group that received 6 hours of daily exposure to PEMF ([Fig.4](#F4){ref-type="fig"}).

![The effect of exposure duration (2, 4 or 6 hours/day) of the electromagnetic field (0.2 mT, 15 Hz, for 10 days) on osteoblastic gene expressions. UMR-106 and untreated mesenchymal stem cells (MSCs) were the positive and negative controls, respectively. The 6 hours of exposure per day was the most effective time duration (P\<0.05 in † and P\<0.001 in other columns).](Cell-J-19-34-g04){#F4}

Combination of electromagnetic field and chemical induction
-----------------------------------------------------------

Simultaneous application of chemical supplements and the electromagnetic field was carried out to assess the effects of combined treatment on expressions of the osteogenic genes.
Real-time PCR was performed after electromagnetic field exposure of 6 hours daily for a 10-day period, along with concurrent incubation with chemical factors, in order to quantify mRNA levels of the osteogenic markers. MSCs were incubated for 7 and 10 days in induction medium. We compared the results with cells stimulated only with PEMF. The results showed that *Runx2* and *Ocn* had the highest expression levels 10 days after cells were subjected to the combination of induction medium and PEMF waves ([Fig.5A, B](#F5){ref-type="fig"}).

Immunocytochemistry for pulsed electromagnetic field stimulation
----------------------------------------------------------------

Immunocytochemistry results demonstrated a slight expression of *Runx2* protein in stem cells ([Fig.6A](#F6){ref-type="fig"}) and presence of higher amounts of *Runx2* in cells stimulated only with the electromagnetic field ([Fig.6B](#F6){ref-type="fig"}). We observed no osteocalcin expression in unstimulated stem cells ([Fig.6C](#F6){ref-type="fig"}) and large amounts of osteocalcin in cells stimulated only with the electromagnetic field ([Fig.6D](#F6){ref-type="fig"}).

In vivo studies
---------------

Histological analysis was performed to assess bone and tissue ingrowth by differentiated MSCs stimulated by the electromagnetic field. After 10 weeks of implantation, we observed no signs of any inflammatory cells such as macrophages, lymphocytes, or giant cells in the different experimental groups. In all test groups, new osteoid areas formed adjacent to the pre-existing bones ([Fig.7](#F7){ref-type="fig"}). As a result of osteoblast activity, osteoids were produced on the surface of the new bone. Implanted scaffolds underwent degradation and no signs of scaffold residues could be observed. In general, after 10 weeks the created defects had evidence of new bone in all three test groups.
![Osteoblastic gene expression levels by cells simultaneously subjected to electromagnetic field and induction medium for 7 or 10 days, or only exposed to the electromagnetic field (magnetic group) for 10 days. An electromagnetic field (0.2 mT, 15 Hz) was applied for 6 hours per day. UMR-106 and untreated mesenchymal stem cells (MSCs) were used as positive and negative controls, respectively. A. *Runx2* and B. *Ocn*. Combined application of induction medium and pulsed electromagnetic field (PEMF) for 10 days was the most effective treatment (P\<0.001). Mag; Electromagnetic stimulation and Chem; Chemical induction.](Cell-J-19-34-g05){#F5}

![Immunocytochemistry to localize. A. *Runx2* in unstimulated stem cells, B. *Runx2* in cells exposed to electromagnetic field, C. Osteocalcin in unstimulated mesenchymal stem cells (MSCs), and D. Osteocalcin in cells only exposed to the electromagnetic field (0.2 mT, 15 Hz, 6 hours/day for 10 consecutive days). The electromagnetic field alone is able to promote the expression of osteogenic genes and osteogenic differentiation. Fluorescence visualization was performed using a Carl Zeiss fluorescent microscope (×630).](Cell-J-19-34-g06){#F6}

![Histological analysis of *in vivo* bone formation using hematoxylin and eosin (H&E) staining. A, B. Bone sockets in the absence of scaffolds as the negative control, C, D. Defects filled by undifferentiated mesenchymal stem cell (MSC)-seeded scaffolds, E, and F. Defects filled by electromagnetically differentiated MSC-seeded scaffolds. Arrows show osteoblast cells. Newly formed bones (rectangular area) are located adjacent to the pre-existing bones (\*).](Cell-J-19-34-g07){#F7}

Discussion
==========

The present study evaluated the effects of electromagnetic field application and biochemical stimulation on MSCs and their gene expression patterns.
Electromagnetic field parameters were selected such that the effect of PEMF on the expression of osteogenic markers and osteogenic differentiation could be assessed. We used the MTT assay, immunocytochemistry, TaqMan real-time PCR, and histological analysis to study the behavior of stem cells in response to this exposure. Flow cytometry analysis confirmed the identity of the cells. The MTT assay was carried out to investigate the effects of the low-frequency electromagnetic field on MSCs. The results indicated a progressive increase in the proliferation rate of MSCs due to the application of the extremely low frequency PEMF. A 20-60% increase in cell density due to exposure to the field was previously reported ([@B15], [@B19]). It has been suggested that the healing effects of the electromagnetic field on fractures may be related to its effects on promoting proliferation and growth acceleration in stem cells, preparing more progenitor cells for differentiation toward osteoblasts ([@B19]). Previous studies have suggested that PEMF may activate free ions on the cell surface. K^+^ and Ca^2+^ currents affect the activated K^+^ channels during progression from the G~1~- to S-phase, and this mechanism may promote the proliferation of undifferentiated stem cells ([@B14]). In another study, electromagnetic fields have been shown to alter membrane functions by opening or closing ion channels and changing ligand binding as well as the numbers and distribution of receptors ([@B11]). Thus, PEMFs affect the molecular currents and cause a specific transmembrane signaling which can promote osteogenic differentiation. There are contradictions among different studies in terms of the duration of daily exposure and its consequences. Although Matsumoto et al. ([@B25]) reported that longer stimulation durations per day resulted in higher bone contents, they did not observe any significant difference in bone formation between two groups stimulated for 4 or 8 hours per day.
However, the results of the present experiment revealed that 6 hours of stimulation per day showed greater benefit in enhancing the mRNA level of *Ocn*, a late osteogenesis marker. Previous studies have demonstrated that, compared to higher intensity values such as 0.8 mT, low electromagnetic intensities (0.2 and 0.3 mT) are more effective in promoting bone formation ([@B25]). In the present study, we compared low electromagnetic intensities in order to determine which one more effectively promoted osteogenesis. Between the 0.1 and 0.2 mT intensities, we determined that the latter intensity level led to higher expression levels of early and late osteoblastic genes. These findings supported the results of previous reports ([@B26], [@B27]). *Runx2* has an inconsistent expression pattern during differentiation. This disharmony in mRNA levels was first reported by Tsai et al. ([@B15]). However, an overall up-regulation of this gene during osteogenic culture has been observed. Jansen et al. ([@B28]) previously reported that bone marker genes reached their highest expression levels between days 5 and 9 of exposure to PEMF. In other words, these genes reached their maximum expression levels just before and around the onset of cell mineralization. This result was also observed in the current study, in which we documented the highest expressions of *Runx2* and *Ocn* on day 10 of exposure to the electromagnetic field. *Runx2* and *Ocn* expressions were downregulated between days 10 and 14, which indicated a transition to the mineralization stage. According to this finding and in agreement with previous reports, we concluded that PEMF treatment affected osteogenic differentiation of stem cells and stimulated mineralization at a time period just prior to the mineralization stage. Downregulation of osteogenic genes after an initial upregulation has been reported in previous works ([@B15], [@B28]).
Multiple signaling pathways promote osteogenic differentiation of stem cells, some of which, such as the canonical Wnt signaling pathway, are triggered by PEMF application. The canonical Wnt pathway results in β-catenin stability; β-catenin translocates to the nucleus and leads to the expression of target genes, subsequently resulting in osteogenic differentiation and bone formation ([@B20], [@B29]). The chemical induction medium that contained ascorbic acid, β-glycerophosphate, and dexamethasone promoted mineralization of the extracellular matrix through activation of different signal transduction pathways ([@B8]). Thus, the PEMF waves and the utilized biochemical factors reinforced the effects of each other. Implantation of differentiated cells on prefabricated scaffolds into defective areas of bone, with subsequent monitoring of the changes in the tissue, has not been previously considered. According to the *in vivo* results of this study, differentiated osteoblasts seeded on scaffolds promoted filling of the incision and healing of the defects. After H&E staining of the sections related to the different implant types, we observed the formation of new bone tissue throughout the scaffold structures. There was no fibrous tissue formation or inflammatory response observed in the different groups. The new osteoblasts produced osteoids on the surface of the pre-existing bone. This research intended to find the optimized parameters of the electromagnetic field in order to achieve an osseous tissue that could be implanted into the stem cell donor. In this process, certain defects or malformations would be treated; therefore, PEMF could be used to treat some osteogenic disorders via promoting osteogenic differentiation. In similar studies, no *in vivo* analysis was used to estimate the efficiency of the new osteoblasts and their life-time. Some of the field parameters utilized in those studies were slightly different.
Conclusion
==========

The electric currents induced by electromagnetic fields have the potential to induce osteogenesis in MSCs. Therefore, PEMF has modulating effects on stem cell proliferation and promotes osteogenic differentiation. PEMF is a potentially low cost tool for tissue engineering which can construct new bone. This tool can be applied for the fabrication of autografts in orthopedic surgeries as well as for the treatment of maxillofacial disorders.

This work was financially supported by the Iran National Science Foundation (INSF), grant number 89004147. There is no conflict of interest in this study.
HRC Global Fellow on Women’s History Month

HRC Global Engagement Fellow, Jane “TJay” Wothaya Thirikwa of Kenya, spoke to a packed audience earlier this week at the Peace Corps’ headquarters in Washington, DC. TJay spoke as part of an ongoing discussion for Women’s History Month on the realities for lesbian, gay, bisexual and transgender advocates on the ground in Kenya and throughout various African nations. As an experienced LGBT leader in Kenya, TJay spoke passionately about the rising tide of violence against LGBT individuals in places that have been prominent in the news due to recently passed anti-LGBT bills, like Uganda and Nigeria, but also more quietly in places like Cameroon and the Gambia. Kenya, according to TJay, may face anti-LGBT legislation similar to the laws passed in Nigeria and Uganda that further criminalize same-sex relations.

When speaking about various African governments’ refusals to acknowledge the existence of LGBT people within their countries, TJay said, “When the government asks, ‘Where is the data that proves there are a large contingency of LGBT people in this country? Where are the numbers?’ My response is that even if there are five of us, we are citizens with rights just like every other person in this country! As a Kenyan citizen, the government has a responsibility to protect me from discrimination—not let vigilante groups attack me for being the person I am.”

In places like Nigeria, groups of marauding citizens are using anti-LGBT laws as pretext for hunting down people, primarily men, who are suspected of being gay and then torturing them in the streets. This is a gross violation and misconception of the Same Sex Marriage Prohibition Act, which, in addition to outlawing same-sex marriage, further criminalizes same-sex relations with up to 14 years in prison, and up to 10 years in prison for anyone who supports LGBT people or participates in LGBT organizations.
Though horrific and a violation of human rights, this law does not provide legal recourse for citizens to attack someone simply because they are suspected of being LGBT. And yet the government has not prosecuted any of the men who instigate these witch hunts and beat others in the street.

We congratulate TJay on her eye-opening talk at the Peace Corps and for her continued advocacy for Kenyan and pan-African LGBT rights, both at home and in her fellowship with HRC. For more information on how to become an HRC Global Fellow, click here.
package solver

import (
	"context"
	"io"
	"time"

	"github.com/moby/buildkit/client"
	"github.com/moby/buildkit/util/progress"
	digest "github.com/opencontainers/go-digest"
	"github.com/sirupsen/logrus"
)

// Status streams the progress of job j to ch as client.SolveStatus updates
// until the underlying progress reader is exhausted or ctx is cancelled.
// The channel is closed on return, after flushing any still-open vertexes.
func (j *Job) Status(ctx context.Context, ch chan *client.SolveStatus) error {
	vs := &vertexStream{cache: map[digest.Digest]*client.Vertex{}}
	pr := j.pr.Reader(ctx)
	defer func() {
		// Close out any vertexes that never completed before shutting the channel.
		if enc := vs.encore(); len(enc) > 0 {
			ch <- &client.SolveStatus{Vertexes: enc}
		}
		close(ch)
	}()
	for {
		p, err := pr.Read(ctx)
		if err != nil {
			if err == io.EOF {
				return nil
			}
			return err
		}
		ss := &client.SolveStatus{}
		for _, p := range p {
			switch v := p.Sys.(type) {
			case client.Vertex:
				ss.Vertexes = append(ss.Vertexes, vs.append(v)...)
			case progress.Status:
				vtx, ok := p.Meta("vertex")
				if !ok {
					logrus.Warnf("progress %s status without vertex info", p.ID)
					continue
				}
				vs := &client.VertexStatus{
					ID:        p.ID,
					Vertex:    vtx.(digest.Digest),
					Name:      v.Action,
					Total:     int64(v.Total),
					Current:   int64(v.Current),
					Timestamp: p.Timestamp,
					Started:   v.Started,
					Completed: v.Completed,
				}
				ss.Statuses = append(ss.Statuses, vs)
			case client.VertexLog:
				vtx, ok := p.Meta("vertex")
				if !ok {
					logrus.Warnf("progress %s log without vertex info", p.ID)
					continue
				}
				v.Vertex = vtx.(digest.Digest)
				v.Timestamp = p.Timestamp
				ss.Logs = append(ss.Logs, &v)
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case ch <- ss:
		}
	}
}

// vertexStream caches seen vertexes so that, when a vertex starts, its
// still-incomplete inputs can be retroactively reported as cache hits.
type vertexStream struct {
	cache map[digest.Digest]*client.Vertex
}

// append records v and returns the vertex updates to emit, recursively
// marking uncompleted inputs of a started vertex as cached.
func (vs *vertexStream) append(v client.Vertex) []*client.Vertex {
	var out []*client.Vertex
	vs.cache[v.Digest] = &v
	if v.Started != nil {
		for _, inp := range v.Inputs {
			if inpv, ok := vs.cache[inp]; ok {
				if !inpv.Cached && inpv.Completed == nil {
					inpv.Cached = true
					inpv.Started = v.Started
					inpv.Completed = v.Started
					out = append(out, vs.append(*inpv)...)
					delete(vs.cache, inp)
				}
			}
		}
	}
	vcopy := v
	return append(out, &vcopy)
}

// encore marks any vertex that started but never completed as cancelled,
// so consumers see a terminal state when the stream ends early.
func (vs *vertexStream) encore() []*client.Vertex {
	var out []*client.Vertex
	for _, v := range vs.cache {
		if v.Started != nil && v.Completed == nil {
			now := time.Now()
			v.Completed = &now
			v.Error = context.Canceled.Error()
			out = append(out, v)
		}
	}
	return out
}
Mike Ragga Mike is the Editor-in-Chief and co-founder of the DNB Vault. Notably, he was a long-time writer for the now-defunct KMAG, covering music and events featuring Warped Tour, NOFX, Dirtyphonics and Benny Page.
A Simple Guide To Making Rosin Thanks to an emerging solventless technique called “rosin” technology, concentrate enthusiasts and patients alike are now able to enjoy the effects of concentrates without having to be concerned over the solvents used for extraction. Rosin utilizes extreme heat and pressure to extract the resin from the plant, thus eliminating tedious purge and evaporation processes that could take days to complete. When comparing rosin to BHO, the two are indistinguishable on an aesthetic level. Rosin, when extracted properly, retains just as many of the valuable terpenes that account for the pungent and aromatic flavours present in other cannabis concentrates. Extraction Method Perhaps the reason why rosin has been widely embraced by the cannabis community is that the sheer simplicity of the technology allows cannabis users with no prior chemistry or extraction experience to try it for themselves safely and with no risk. Rosin can be easily extracted at home with the following items: 5. Quality cannabis (ensure that your flower is resinous as this will produce better yields) Steps To Make Rosin 1. Turn your hair straightener to the lowest setting (280-300°F; anything higher and you will lose valuable terpenes), and cut yourself a square of parchment paper. Fold the paper in half and place your starting material in between the folded parchment paper. 2. Carefully line up the buds inside of the parchment paper with your hair straightener and apply very firm pressure (some prefer to stand on their hair straightener to apply the firmest possible pressure – be careful you don’t break your heating tool!) for about 5-7 seconds. You will want to hear a sizzle before you remove the heat and pressure – this indicates that the resin has melted from your plant material. 3. Remove the parchment from your hair straightener and unfold the paper. You can toss away the flattened nug, and use your collection tool to capture the fresh rosin you’ve pressed.
For larger batches you should press on a new square of parchment each time and then collect all of the rosin at the end. You could also place the parchment in the freezer for a few seconds to stabilize the rosin if it’s quite sappy. This will help to collect the extract. 4. Temperature is everything when it comes to rosin pressing! Lower temperatures (200’s) give a lower yield but the end product is very flavourful and stable (hard, with a shatter-like consistency). Higher temperatures (300’s) give a larger yield but the end product is less flavourful and less stable (sappy).
The aim of this meta‐review is to aggregate and evaluate the top‐tier evidence for the efficacy and safety of nutrient supplements in the treatment of mental disorders, and to explore the conditions under which they may be effective. To do this, we identified, synthesized and appraised all available data from meta‐analyses of randomized controlled trials (RCTs) examining health outcomes and quality of evidence for all nutrient supplements across various mental disorders. Along with providing a clear overview of the efficacy of specific nutrient supplements across different disorders, we also aimed to explore which dosages and symptomatic targets are most appropriate, while additionally reporting on the safety and tolerability for all supplements examined. Alongside the theoretical potential for nutrient supplements to target certain aspects of mental disorders, there is also a vast amount of clinical trials and meta‐analyses examining their use in psychiatric treatment, and some data in prevention 47 , 48 . However, there remains considerable contention around their role in clinical care. This likely stems from the lack of clear and up‐to‐date guidance for clinicians and researchers regarding their: a) relative effectiveness for improving clinical outcomes in people with mental illness, and b) safety for use, particularly in conjunction with psychiatric medications. Third, there is nascent (but growing) evidence that mental disorders may be linked to dysfunction of the gut microbiome 41 , 42 . As gut bacteria can be modified through micronutrients and pre/probiotics 43 , 44 , this suggests that some pre/probiotic supplements may serve as potentially useful novel therapeutic options worthy of further investigation 45 , 46 . Second, there are now extensive data from large‐scale studies showing that psychotic and mood disorders are associated with significantly reduced serum levels of essential nutrients, including zinc 34 , 35 , folate 36 , 37 and vitamin D 38 , 39 . 
Since these deficits appear to be related to treatment response and clinical outcomes in these populations 11 , 34 , 40 , there is a possibility that nutrient supplementation could improve outcomes. First, recent clinical research has found that many mental disorders are associated with heightened levels of central and peripheral markers of oxidative stress and inflammation 26 - 29 , and an association has been reported between the efficacy of both pharmacological and lifestyle interventions for mental illness and changes in these biomarkers 30 , 31 . Thus, the antioxidant and anti‐inflammatory properties of certain nutrient supplements (such as N‐acetylcysteine 32 and omega‐3 fish oils 33 ) indicates that these could be beneficial in the treatment of psychiatric conditions caused or exacerbated by heightened inflammation and oxidative stress. Currently, there is an increased academic and clinical interest in the role of nutrient supplements for the treatment of various mental disorders 14 - 16 . This growth of research is partly attributable to our evolving understanding of the neurobiological underpinnings of mental illness, which implicates certain nutrients as a potential adjunctive treatment for a variety of reasons 25 . Nutrient supplements are widely used across the population. For instance, in the US, over half of adults take some form of nutrient supplements 17 . There is a lack of evidence that this wide‐scale usage reduces the incidence of diseases or premature mortality (indeed, many of the best quality trials – e.g., of vitamins D 18 and E 19,20 – were negative). 
However, some specific nutrient supplements are linked to health benefits for specific populations or clinical conditions (for instance, women in pregnancy are advised to supplement with folic acid to reduce the risk of neural tube deficits in offspring 21 ; individuals with pernicious anaemia are treated with vitamin B12 22 ; oral supplementation with zinc is a first‐line treatment for Wilson's disease 23 ; and national medical associations have recommended omega‐3 fatty acids for patients with myocardial infarction 24 ). The importance of diet for maintaining physical health is widely accepted, due to the clear impact of dietary risk factors on cardiometabolic diseases, cancer and premature mortality 12 , 13 . In parallel, the potential impact of diet on mental disorders is increasingly acknowledged 14 , 15 . However, along with regular food intake, nutrients can also be consumed in supplement form 16 . Supplements are typically used in attempts to: a) complement an inadequate diet (or low measured plasma levels of a nutrient) to achieve recommended nutrient intakes/levels; b) administer specific nutrients at greater doses than those found in a typical diet, for putative physiological benefits; c) provide nutrients in more bioavailable forms for individuals with genetic differences, or relevant health issues, which may result in poor nutrient absorption. Supplements can be synthetically manufactured or directly food‐derived, typically including substances such as vitamins (e.g., folic acid, vitamin D), dietary minerals (e.g., zinc, magnesium), pre/probiotics (from specific strains of gut bacteria), polyunsaturated fatty acids, PUFAs (typically as omega‐3 fish oils), or amino acids (e.g., N‐acetylcysteine, glycine). Furthermore, although the metabolic and hormonal side effects of psychotropic medications can affect food intake 7 , 8 , inadequate nutrition appears to be present even prior to psychiatric diagnoses. 
For instance, in depression, it seems that poor diet precedes and acts as a risk factor for illness onset 6 , 9 , 10 . Similarly, in psychotic disorders, various nutritional deficits are evident even prior to antipsychotic treatment 11 . Abundant evidence now suggests that people with mental disorders typically have an excess consumption of high‐fat and high‐sugar foods, alongside inadequate intake of nutrient‐dense foods, compared to the general population 1 - 5 . The relationship between poor diet and mental illness appears to persist even when controlling for other factors which could explain the association, such as social deprivation or obesity, and is not explained by reverse causation 1 , 6 . The potential impact of publication bias was assessed wherever there were sufficient data for appropriate analyses, and the adjusted effect sizes (when controlling for small study bias) are presented alongside the main findings. Where reported, all relevant study characteristics were also extracted, specifically with regards to the nutritional supplement used (including type, dose and co‐factors), the sample and the diagnostic details, and any relevant subgroup analyses implemented (e.g., separating high/low quality trials, specific patient subsamples, or dosage levels). For both primary and secondary analyses, we also extracted the number of participants (N), along with the number of trials/comparisons (n) from which the pooled effect size was derived. Additionally, heterogeneity was quantified using the I 2 statistic, and categorized as low (I 2 <25%), moderate (I 2 =25‐50%) or high (I 2 >50%). The results of secondary analyses, focusing on safety and tolerability, were typically reported as categorical outcomes (relative rates of adverse events or discontinuation in active vs. placebo conditions). These were extracted as either odds ratios (ORs) or risk ratios (RRs), in line with the originally reported outcomes. 
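The extraction and pooling steps described in this Methods section (converting summary statistics to Hedges' g, pooling with a random-effects model, quantifying heterogeneity with I², and expressing categorical safety outcomes as odds ratios) can be sketched in a few lines. This is an illustrative re-implementation using standard textbook formulas, not the authors' actual analysis code (they used Comprehensive Meta-Analysis 3.0); all function names here are my own.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                       # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # correction factor J
    g = j * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))  # approx. variance of g
    return g, var

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled SMD, 95% CI and I^2 (%)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

def classify_smd(g):
    """Conventional magnitude labels used in the text."""
    a = abs(g)
    return ("negligible" if a < 0.2 else "small" if a < 0.4
            else "moderate" if a < 0.8 else "large")

def odds_ratio(a, b, c, d):
    """OR with Woolf 95% CI from a 2x2 table:
    a/b = events/non-events (active), c/d = events/non-events (placebo)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```

For example, pooling three identical trial effects returns that effect unchanged with I² = 0%, matching how zero heterogeneity is reported throughout the Results.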
In line with conventional interpretations, SMDs were classified as negligible (<0.2), small (0.2‐0.4), moderate (0.4‐0.8), or large (>0.8). In cases where meta‐analyses had provided effect sizes corrected for publication bias, these were reported alongside the main effects observed, and interpreted as the primary findings from the analysis. In cases where continuous outcomes were reported as weighted mean differences or raw mean differences, these were recalculated into an SMD (Hedges' g) using Comprehensive Meta‐Analysis 3.0. Where original meta‐analyses had reported beneficial effects of nutrient supplementation as negative value effect sizes (to represent a reduction in symptoms), these were re‐coded to positive – such that all effect sizes presented here are positive values when indicating benefit from nutrient supplementation compared to placebo, or negative values when placebo was associated with better outcomes than nutrient supplementation. Where meta‐analyses had applied fixed‐effects models to calculate the effect sizes of nutritional supplementation compared to placebo, these were also recalculated using a random‐effects model, such that SMDs across supplements/disorders could be meaningfully compared. Primary analyses focused on the effects of nutrient supplementation on measures of physical or mental health outcomes from eligible meta‐analyses. For each nutritional supplement used for each disorder, we manually extracted effect size data as standardized mean differences (SMDs) with 95% confidence intervals (CIs) compared to placebo conditions, along with the reported probability of the compared effects being due to chance (p value). Data were initially extracted by five authors (KA, ST, WM, MS, DS), and then cross‐checked for quality with duplicate data extraction by four independent authors (JF, BS, JC, FS). AMSTAR‐2 assesses 16 constructs, which all indicate the quality of a systematic review/meta‐analysis. 
Seven of these were identified as “critical domains”, which can be used to determine the overall confidence in review findings 50 . For the purposes of our meta‐review, the included meta‐analyses were scored on all the 16 AMSTAR‐2 items, but also received a separate score for the number of “critical domains” they adhered to. The quality of eligible meta‐analyses was assessed using “A Measurement Tool to Assess Systematic Reviews” Version 2 (AMSTAR‐2) 50 , an updated version of the original AMSTAR designed to better capture review quality and confidence in findings. Where overlapping meta‐analyses of a given nutritional supplement for a specific outcome/disorder existed, the most recently updated meta‐analysis was used, as long as it captured more than 75% of the trials in the earlier version. Where older meta‐analyses presented unique findings, through inclusion of a greater number of studies or use of particular subgroup analyses, these data were used as secondary analyses for our meta‐review. All data on physical and/or mental health outcomes (including changes in clinical measures, response rates, and adverse effects) from meta‐analyses of RCTs examining nutritional supplements for any eligible disorder were included in this meta‐review. A meta‐analysis was classified as eligible if: a) it had clearly stated inclusion, intervention and comparison criteria aligned with the participant, intervention and comparison criteria listed above; b) it reported a systematic search with a screening procedure; c) it had used systematic data extraction and reported pooled continuous or categorical outcome data from more than one study. All nutrient supplements were considered for this meta‐review, used either as adjunctive treatment or monotherapy. Nutrient supplements were defined as vitamins, minerals, macronutrients, fatty acids or amino acids (including oral supplement forms of precursors to these) commonly found in the human diet. 
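The AMSTAR-2 scoring described earlier (16 items, seven of which are critical domains) deliberately avoids a summary score; the published guidance instead maps critical and non-critical weaknesses onto an overall confidence rating. A minimal sketch of that mapping follows; the function name is my own, and the thresholds reflect the published AMSTAR-2 guidance to the best of my knowledge, not code from this meta-review:

```python
def amstar2_confidence(critical_flaws: int, noncritical_weaknesses: int) -> str:
    """Map counts of domain-level weaknesses to the AMSTAR-2 overall
    confidence rating (high / moderate / low / critically low)."""
    if critical_flaws > 1:
        return "critically low"   # more than one critical flaw
    if critical_flaws == 1:
        return "low"              # one critical flaw, regardless of the rest
    # no critical flaws: rating turns on non-critical weaknesses
    return "high" if noncritical_weaknesses <= 1 else "moderate"
```

This is why the meta-review reports both the full 16-item score and a separate count of critical-domain adherence: the latter alone drives the confidence category.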
Meta‐analyses of dietary modification interventions and herbal supplements were not included. All studies of the above conditions were eligible provided that at least 75% of the sample had a confirmed mental illness or at‐risk state, ascertained by either clinical diagnostic history or reaching established thresholds on validated screening measures. Studies examining mental health outcomes of nutrient supplementation in the general population were only included if data from a mental illness subgroup (with 75% of the sample meeting the above criteria) were available. Studies examining nutrient supplements only for ameliorating the malnutrition associated with eating disorders or substance abuse disorders were excluded. Studies examining neurodevelopmental disorders (e.g., autism, intellectual disability) or neurodegenerative disorders (e.g., dementia) were also not included. We included studies of individuals with common and severe mental disorders, i.e., depressive disorders, bipolar disorder (type I and II), schizophrenia and other psychotic disorders, anxiety and stress‐related disorders, dissociative disorders, personality disorders, and attention‐deficit/hyperactivity disorder (ADHD). Studies of individuals who met criteria for being at “ultra‐high risk” or “clinical high risk” for developing a psychotic disorder were also included. The title and keyword search algorithm is presented in Table 1 . The systematic search was conducted using Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Health Technology Assessment Database, Allied and Complementary Medicine (AMED), PsycINFO and Ovid MEDLINE(R), from inception until February 1, 2019. No meta‐analyses on the effects of prebiotics or probiotics in mental disorders were identified in our search. 
However, in groups of individuals with mild to moderate depression (as determined by thresholds on clinically validated scales), probiotic treatments of varying strains and doses reduced depressive symptoms significantly more than placebo (N=163, n=3, SMD= 0.684, 95% CI: 0.0712‐1.296, p=0.029) 71 . As an adjunctive to clozapine treatment (N=58, n=3) 58 , glycine was ineffective for positive (SMD=0.63, 95% CI: –0.21 to 1.48, I 2 not reported), negative (SMD=0.03, 95% CI: –0.51 to 0.57, I 2 not reported) and total symptoms scores (SMD=0.32, 95% CI: –0.2 to 0.84, I 2 not reported). No eligible data were available for effects of sarcosine as an adjunctive to clozapine. The effects on negative symptoms fell short of statistical significance (sarcosine: N=132, n=4, SMD=0.32, 95% CI: –0.03 to 0.66, p=0.07; glycine: N=268, n=7, SMD=0.39, 95% CI: –0.11 to 0.9, p=0.13) 57 . However, significant benefits for negative symptoms were observed in individuals treated with non‐clozapine antipsychotics (sarcosine: N=112, n=3, SMD=0.39, p=0.04; glycine: N=219, n=5, SMD=0.60, p=0.05; CIs and I 2 not provided) 57 . The amino acids sarcosine and glycine (which occur naturally in meat, dairy and legumes) have also been assessed as adjunctive treatments for schizophrenia, due to their potential action as N‐methyl‐D‐aspartate (NMDA) receptor modulators 57 . Neither sarcosine (at 2 g/day) or glycine (at 2.8‐60 g/day) had any effect on positive symptoms, although both did significantly reduce total psychopathology as an adjunctive to antipsychotic treatment (sarcosine: N=132, n=4, SMD=0.41, 95% CI: 0.06‐0.76, p=0.02, I 2 not reported; glycine: N=159, n=6, SMD=0.66, 95% CI: 0.04‐1.28, p=0.04, I 2 not reported) 57 . Across all the above disorders, the rates of discontinuation and severe adverse events from N‐acetylcysteine supplementation did not differ significantly from the placebo conditions 56 , 72 , 74 . 
There was no significant difference in rates of mild adverse events (particularly with regards to gastrointestinal upset) in people with schizophrenia (N=186, n=2, OR=1.56, 95% CI: 0.87‐2.80, p=0.14, I 2 =0) 56 , but N‐acetylcysteine supplementation was associated with higher rates of mild adverse events in mood disorders (N=574, n=5, OR=1.61, 95% CI: 1.01‐2.59, p=0.049) 72 . In 155 individuals with OCD taking concomitant medications (mostly SSRIs), 2‐3 g/day N‐acetylcysteine produced a trend‐level effect towards reduction in obsessive‐compulsive symptoms (n=4, SMD=0.295, 95% CI: –0.018 to 0.608, p=0.064, I 2 =65%) 74 . N‐acetylcysteine (2‐2.4 g/day) also had no significant effects on symptoms of anxiety in a pooled mixed psychiatric sample (N=319, n=2, SMD=0.03, 95% CI: –0.21 to 0.28, p=0.80, I 2 =0%) 72 . As an adjunctive treatment for individuals with bipolar disorder (N=224, n=2), 2 g/day N‐acetylcysteine did not differ from placebo in its impact on overall illness severity (Clinical Global Impression ‐ Severity, CGI‐S: SMD=0.11, 95% CI: –0.15 to 0.37, p=0.42, I 2 =90%, and Clinical Global Impression ‐ Improvement, CGI‐I: SMD=0.16, 95% CI: –0.09 to 0.42, p=0.22, I 2 =0%) or mania ratings (N=224, n=2, SMD=0.05, 95% CI: –0.2 to 0.31, p=0.68, I 2 =0.01%) 72 . N‐acetylcysteine was also found to be ineffective on depressive symptoms in people with bipolar disorder (N=124, n=2, SMD=0.59, 95% CI: –0.3 to 1.48, p=0.19, I 2 =83%) 56 . Across three RCTs in people with schizophrenia (N=247), adjunctive treatment with N‐acetylcysteine significantly reduced total symptom scores (SMD=0.74, 95% CI: 0.06‐1.43, p=0.03). Although included trials were rated as high‐quality, the overall strength of evidence was weak due to high risk of publication bias and significant heterogeneity in existing data (I 2 =84%) 56 . 
Regarding symptom subgroups, there was a non‐significant trend indication of beneficial effects on negative symptoms (SMD=0.59, 95% CI: –0.10 to 2.00, p=0.08, I 2 =93%), but no effects beyond placebo for positive symptoms (SMD=0.16, 95% CI: –0.29 to 0.62, p=0.48, I 2 =66%) or general symptomatology (SMD=0.2, 95% CI: –0.21 to 0.62, p=0.34, I 2 =59%) 56 . In people with mood disorders (including bipolar disorder and MDD; N=493, n=3), N‐acetylcysteine at 2‐3 g/day had small but significant effects compared to placebo on global functioning (SMD=0.19, 95% CI: 0.01‐0.39, p=0.04, I 2 =64%) and social functioning (SMD=0.22, 95% CI: 0.03‐0.41, p=0.02, I 2 =67%). It also significantly improved other measures of functional impairment (SMD=0.31, 95% CI: 0.12‐0.50, p=0.002, I 2 =86%) 72 . It has been the most commonly assessed amino acid supplement across mental disorders. In a mixed sample of 574 psychiatric patients with high levels of depression (comorbid or primary), adjunctive treatment (2‐3 g/day) significantly reduced depressive symptoms (n=5, SMD=0.37, 95% CI: 0.19‐0.55, p=0.001, I 2 =92.64%), but had no effects on perceived quality of life (N=543, n=4, SMD=0.14, 95% CI: –0.04 to 0.32, p=0.14, I 2 =68%) 72 . There was high heterogeneity between studies, but no evidence of publication bias. N‐acetylcysteine is the nutraceutical form of the amino acid cysteine, found in abundance in high protein foods, and acts as a precursor to glutathione, which has antioxidant activity throughout the body. 
Omega‐3 conferred no benefits in tasks of forward memory (N=224, n=2, SMD=0.06, 95% CI: –0.21 to 0.34, p=0.66, I 2 =0%) and information processing (N=309, n=4, SMD=0.46, 95% CI: –0.29 to 1.21, p=0.23, I 2 =89%) 81 , and did not produce any improvements in composite cognitive scores for overall IQ (N=247, n=3, SMD=0.05, 95% CI: –0.21 to 0.32, p=0.71, I 2 =0%), inhibition (N=274, n=5, SMD=–0.12, 95% CI: –0.44 to 0.2, p=0.47, I 2 =42.8%), attention (N=267, n=5, SMD=–0.12, 95% CI: –0.33 to 0.1, p=0.28, I 2 =0%), short‐term memory (N=567, n=4, SMD=0.03, 95% CI: –0.10 to 0.16, p=0.64, I 2 =0%), reading (N=622, n=4, SMD=0.01, 95% CI: –0.09 to 0.12, p=0.79, I 2 =0%), spelling (N=260, n=3, SMD=0.03, 95% CI: –0.34 to 0.40, p=0.89, I 2 =48.9%), or reaction time (N=260, n=5, SMD=0.09, 95% CI: –0.13 to 0.3, p=0.44, I 2 =0%) 82 . As to cognitive dysfunction, the only positive effects of omega‐3 in young people with ADHD were observed in individual task scores for errors of omission (N=214, n=3, SMD=1.09, 95% CI: 0.43‐1.75, p=0.001, I 2 =75%) and errors of commission (N=85, n=2, SMD=2.14, 95% CI: 1.24‐3.03, p<0.001, I 2 =63%) 81 . A positive trend was detected for composite scores of working memory (N=506, n=3, SMD=0.23, 95% CI: –0.001 to 0.46, p=0.05, I 2 =33.9%) 82 and individual task scores for backward memory (N=224, n=2, SMD=0.37, 95% CI: –0.05 to 0.79, p=0.08, I 2 =55%). With regards to behavioural comorbidities, there was no indication of effects of omega‐3 on emotional lability, conduct problems or aggression in young people with ADHD 80 . Only effects on parent‐rated oppositional behaviour approached significance in primary analyses (SMD=0.2, 95% CI: 0.03‐0.38, p=0.02, I 2 =0.2%). A trend for a positive effect on parent‐rated oppositional behaviour was also observed when applying strict inclusion criteria (SMD=0.15, 95% CI: –0.006 to 0.31, p=0.06, I 2 =8%), and when examining only high‐quality trials (SMD=0.2, 95% CI: 0.03‐0.38, p=0.02, I 2 =0.2%). 
Omega‐3 supplements (120‐2,513 mg/day; mean: 616 mg/day) reduced composite symptom scores in ADHD significantly more than placebo (N=1,408, n=16, SMD=0.26, 95% CI: 0.15‐0.37, p<0.001, I 2 =25%) 79 . Although still statistically significant, the magnitude of benefit was negligible when applying a trim and fill analysis to adjust for publication bias (SMD=0.16, 95% CI: 0.03‐0.28). Similar small effects were observed for both symptom domains of hyperactivity‐impulsivity (SMD=0.26, 95% CI: 0.13‐0.39, p<0.001) and inattention (SMD=0.22, 95% CI: 0.1‐0.34, p<0.001). Subsequent analyses (although including fewer trials) replicated these findings of small but significant effects of omega‐3 supplements on composite scores, hyperactivity‐impulsivity and inattention symptoms 80 . Across the 16 RCTs reporting on ADHD symptom domains, significant benefits were observed for both hyperactivity/impulsivity (SMD=0.209, 95% CI: 0.059‐0.358, p=0.006) and inattention (SMD=0.162, 95% CI: 0.047‐0.276, p=0.006) 77 . Subgroup analyses revealed that significant benefits from PUFAs were only observed on parent‐rated measures, with no effects on teacher/clinician rated measures of overall symptoms, hyperactivity/impulsivity or inattention 77 . A subsequent analysis using stricter inclusion criteria of RCTs (and excluding data from trials with less than 50 participants) found no benefits of PUFA supplementation on teacher‐rated measures of ADHD symptoms (N=287, n=3, SMD=0.08, 95% CI: –0.32 to 0.47, p=0.56, I 2 =0%), and the benefits for parent‐rated measures also fell short of statistical significance (N=411, n=4, SMD=0.32, 95% CI: –0.15 to 0.8, p=0.098, I 2 =52.4%). In young people and children with ADHD, overall analyses of any PUFA supplementation (including any omega‐3 and omega‐6 supplements, at varying doses) showed significant effects beyond placebo for composite ADHD symptom scores (N=1,689, n=18, SMD=0.192, 95% CI: 0.086‐0.297, p<0.001, I 2 =19.3%) 77 . 
However, after adjusting for publication bias, the effects of PUFAs on composite symptom scores fell short of significance (SMD=0.118, 95% CI: –0.014 to 0.250, p=0.08). Examination of safety profiles found that EPA was well tolerated in psychotic disorders and did not cause adverse effects other than mild gastrointestinal upset 55 . In the at‐risk groups, trial attrition in omega‐3 treatment conditions was no different to the placebo control conditions 60 . In youth at risk of psychosis, PUFA supplements were also ineffective for reducing attenuated psychotic symptoms (N=347, n=3, SMD=0.31, 95% CI: –0.26 to 0.88, I 2 =80%) 61 , negative symptoms (N=347, n=3, SMD=0.06, 95% CI: –0.35 to 0.46, I 2 =63%) 62 , and functional disability (N=252, n=2, SMD=‐0.08, 95% CI: –0.33 to 0.17) 63 over 52 weeks. Similar null effects were also observed over shorter (i.e., 12 and 26 week) time frames 61 - 63 . Three trials (N=512) examining the impact of omega‐3 (1,200‐1,400 mg/day) as a monotherapy to prevent transition to psychosis in young people meeting “at risk” criteria showed no indication of benefit (all p>0.1) compared to placebo over 26 weeks (OR=0.64, 95% CI: 0.15‐2.68) or 52 weeks (OR=0.64, 95% CI: 0.18‐2.26) 60 . As an adjunctive treatment for people with schizophrenia, the effect of omega‐3 (2‐3 g/day of EPA) fell short of statistical significance for total symptom scores (N=335, n=7, SMD=0.242, 95% CI: –0.028 to 0.512, p=0.08, I 2 =33.8%) 55 . Omega‐3 supplements revealed no significant effects on depressive symptoms in people with schizophrenia (N=264, n=4, SMD=0.14, 95% CI: –0.11 to 0.39, p=0.28, I 2 =8%) 59 . Across all placebo‐controlled trials of omega‐3 PUFAs in people with bipolar disorder, effects on mania were not significant (N=242, n=6, SMD=0.198, 95% CI: –0.037 to 0.433, p=0.10, I 2 =0%) although there were small positive effects on depression (N=305, n=6, SMD=0.338, 95% CI: 0.035‐0.641, p=0.029, I 2 =30%) 75 . 
An analysis including only double‐blind trials found similar positive effects for bipolar depression, although falling just short of statistical significance (N=150, n=4, SMD=0.36, 95% CI: –0.01 to 0.73, p=0.051, I 2 =8%) 76 . The majority of studies were identified as low risk of bias, and showed no indication that omega‐3 increased rates of adverse events or mania/hypomania in bipolar disorder 76 . An analysis in people aged ≥65 years with clinical depression (either diagnosed or meeting thresholds on validated self‐report measures) found that omega‐3 (averaging 1.3 g/day of EPA/DHA) had large, significant effects on depressive symptoms compared to placebo (SMD=0.94, 95% CI: 0.5‐1.37, p<0.001, I 2 =32.7%), although with only a limited number of small studies (N=187, n=4). Further subgroup analyses of EPA formulas indicated slightly larger effects on depressive symptoms in studies using >12 week treatment periods (N=274, n=4, SMD=1.07, p<0.01) compared to those using ≤12 week periods (N=695, n=19, SMD=0.55, p<0.001), and for those using omega‐3 as an adjunctive treatment (N=535, n=15, SMD=0.72, p<0.001) rather than as a monotherapy for depression (N=434, n=8, SMD=0.44, p=0.017) 51 . In analyses examining different formulations of omega‐3 for individuals with any clinical depression, omega‐3 supplements containing ≥50% DHA had no benefits beyond placebo (N=469, n=6, SMD=–0.028, 95% CI: –0.21 to 0.16, p>0.1) 51 . However, omega‐3 supplements containing >50% EPA had moderately large positive effects on depressive symptoms (N=969, n=23, SMD=0.61, 95% CI: 0.38‐0.85, p<0.001). Again, publication bias was evident, and the estimated positive effects of high‐EPA omega‐3 was reduced, but still significant, after adjusting for this (SMD=0.42, 95% CI: 0.18‐0.65, p<0.001). 
Subgroup analyses found that omega‐3 supplements were only effective as an adjunctive treatment for MDD in cohorts with no reported comorbidities (N=201, n=6, SMD=0.74, 95% CI: 0.34‐1.13, p<0.01, I 2 =42%), whereas there was no indication of efficacy in samples where MDD occurred in comorbidity with cardiometabolic or neurological diseases (N=201, n=4, SMD=0.05, 95% CI: –0.4 to 0.5, p=0.82, I 2 =45%) 65 . Furthermore, omega‐3 was ineffective for the treatment of MDD in pregnant women (N=121, n=3, SMD=0.24, 95% CI: –0.73 to 1.21, p=0.63, I 2 =85%) 59 . A further subgroup analysis of individuals with indicated depression (but no diagnosis of MDD) found small positive effects of omega‐3 for depressive symptoms (N=759, n=12, SMD=0.22, 95% CI: 0.01‐0.43, p<0.05, I 2 =46%). Across 13 independent RCTs in 1,233 people with MDD, omega‐3 supplements (mean: 1,422 mg/day of EPA) reduced depressive symptoms (SMD=0.398, 95% CI: 0.114‐0.682, p=0.006, I 2 not available), with no evidence of publication bias 64 . When used specifically as an adjunctive to antidepressants in MDD, omega‐3 supplements (930‐4,400 mg/day of EPA) also produced moderate effects on depressive symptoms (N=448, n=11, SMD=0.608, 95% CI: 0.154‐1.062, p=0.009, I 2 =82%), although there was some indication of publication bias 75 . A subsequent analysis of omega‐3 as an adjunctive to antidepressants in MDD produced similar results (N=402, n=10, SMD=0.48, 95% CI: 0.11‐0.84, p=0.01, I 2 =64%), although again showing evidence of significant publication bias 65 . Adjusting for publication bias produced smaller (but still significant) estimates of effects of omega‐3 as an adjunctive treatment for MDD (SMD=0.19, 95% CI: 0.00‐0.38, p=0.049). PUFAs have been the most widely assessed nutritional supplement across the various psychiatric conditions, administered as omega‐3 fatty acids, including eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), and omega‐6 fatty acids, such as linoleic acid (LA). 
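Several of the omega-3 analyses above report effect sizes re-estimated after adjusting for publication bias (e.g., via trim-and-fill). Trim-and-fill itself is iterative; a simpler, related diagnostic for the small-study asymmetry it corrects is Egger's regression test, sketched below purely as an illustration. This is not the procedure the cited meta-analyses used, and the function name is my own:

```python
import math

def egger_test(effects, standard_errors):
    """Egger's test: regress standardized effect (y/se) on precision (1/se).
    An intercept far from zero suggests small-study (publication) bias.
    Requires at least 3 studies. Returns (intercept, SE of intercept)."""
    y = [e / s for e, s in zip(effects, standard_errors)]
    x = [1 / s for s in standard_errors]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # residual-based standard error of the intercept
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)
    se_int = math.sqrt(s2 * (1 / n + mx ** 2 / sxx))
    return intercept, se_int
```

When small studies systematically show larger effects than precise ones, the intercept drifts away from zero, which is the same pattern trim-and-fill imputes "missing" studies to offset.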
As a therapeutic option for managing side effects of antipsychotics, vitamin E showed no difference from placebo on levels of improvement in tardive dyskinesia 52 . Nevertheless, it did significantly reduce the risk of tardive dyskinesia "worsening" over 1 year (N=85, n=5, RR=0.23, 95% CI: 0.07‐0.76), although this result was based on low‐quality trials 52 . No significant effects on total symptom scores in schizophrenia were observed from pooled analyses of antioxidant vitamins (vitamin C and vitamin E: N=340, n=6, SMD=0.296, 95% CI: –0.39 to 0.98, p=0.40, I 2 =40.6%); mineral supplements (zinc and chromium: N=129, n=2, SMD=0.324, 95% CI: –0.48 to 1.13, p=0.43, I 2 =0%); or vitamin B6 (N=75, n=3, SMD=0.682, 95% CI: –0.09 to 1.45, p=0.08, I 2 =58.4%) 53 .

Eleven RCTs examined the efficacy of mineral supplementation for depression, using either zinc or magnesium. Zinc was administered at 25 mg/day (elemental) as an adjunctive treatment for MDD, and had moderate significant effects on depressive symptoms (N=104, n=4, SMD=0.66, 95% CI: 0.26‐1.06, p<0.01) 65 . Although there was no evidence of heterogeneity (I 2 =0%), all included RCTs were identified as having high risk of attrition bias, due to lack of intent‐to‐treat analyses 65 . In individuals with depression identified using self‐report measures, magnesium supplementation at 225‐4,000 mg/day had no effects beyond placebo (N=538, n=8, SMD=0.22, 95% CI: –0.17 to 0.48, I 2 =30.9%) 70 . No data on magnesium as an adjunctive treatment in diagnosed MDD are available.

Vitamin D was found to significantly reduce depressive symptoms in patients with clinical depression (N=948, n=4, SMD=0.58, 95% CI: 0.45‐0.72, p<0.01, I 2 =0%). This estimate included data from non‐blinded trials using intramuscular injections 69 .
Nevertheless, in our re‐analysis of data using only double‐blind RCTs of oral supplements, similar positive effects were observed at doses of 1,500‐7,143 IU/day (N=828, n=3, SMD=0.57, 95% CI: 0.43‐0.71, p<0.001, I 2 =0%).

Discontinuation did not differ between inositol and placebo groups 68 . However, inositol supplementation was associated with a trend towards a higher rate of gastrointestinal upset than placebo (N=183, n=6, SMD=3.26, 95% CI: 0.94‐11.34, p=0.06, I 2 =0%). In schizophrenia, inositol supplementation (6‐12 g/day) was not superior to placebo for total symptom scores (N=66, n=3, SMD=0.155, 95% CI: –0.35 to 0.58, p=0.63, I 2 =87.2%) 53 . Among individuals with bipolar disorder, inositol (5.7‐19 g/day) had no effect on depressive symptoms (N=42, n=2, SMD=–0.11, 95% CI: –0.75 to 0.52, p=0.72, I 2 =0%) or response rates (RR=0.63, 95% CI: 0.35‐1.12, p=0.12, I 2 =22%) 68 . In anxiety disorders, inositol (12‐18 g/day) had no effects on Hamilton Anxiety Rating Scale scores (N=52, n=2, SMD=0.04, 95% CI: –0.58 to 0.51, p=0.89) and symptom scores in OCD samples (N=46, n=2, SMD=0.15, 95% CI: –0.43 to 0.73, p=0.60) 68 . In an overall analysis of the effects of inositol (3.6‐19 g/day, median: 12 g/day) on depressive symptoms across bipolar disorder, unipolar depression and premenstrual dysphoric disorder, no significant difference from placebo was found (N=188, n=7, SMD=0.35, 95% CI: –0.2 to 0.89, p=0.22, I 2 =70%) 68 . Inositol was also ineffective when examined as adjunctive to SSRIs in MDD (N=78, n=2, SMD=–0.17, 95% CI: –0.66 to 0.33, p=0.50, I 2 =0%) and for depressive symptoms in premenstrual dysphoric disorder (N=58, n=2, SMD=1.15, 95% CI: –0.08 to 2.39, p=0.07, I 2 =78%) 68 .

Folate‐based supplements had no significant effects on positive symptoms, general psychopathology or depressive symptoms in patients with schizophrenia 54 . However, they reduced negative symptoms more than placebo (N=281, n=5, SMD=0.25, 95% CI: 0.01‐0.49, p=0.04, I 2 =0%).
The effect persisted in high‐quality RCTs (N=190, n=2, SMD=0.30, 95% CI: 0.00‐0.60, p=0.05, I 2 =0%), but became non‐significant when excluding the RCT using 15 mg/day methylfolate (N=226, n=4, SMD=0.23, 95% CI: –0.04 to 0.50, p=0.10, I 2 =0%) 54 .

Seven RCTs (N=340) examined folate‐based supplements as an adjunctive treatment for schizophrenia 54 . Vitamin B9 was administered as methylfolate (n=2) or folic acid (n=5), and also in combination with B6 and B12 (n=3). In overall analyses, the small effects of vitamin B9 on total symptoms were not statistically significant (SMD=0.20, 95% CI: –0.02 to 0.41, p=0.08, I 2 =0%), and subgroup analyses of high‐quality studies confirmed the absence of overall effects (N=231, n=3, SMD=0.15, 95% CI: –0.11 to 0.42, p=0.26, I 2 =0%). The folate‐based supplements were ineffective on total symptom scores when administered as folic acid (N=268, n=5, SMD=0.13, 95% CI: –0.12 to 0.37, p=0.32, I 2 =0%), even in combination with other homocysteine‐reducing B vitamins (i.e., B6 and B12) (N=219, n=3, SMD=0.18, 95% CI: –0.13 to 0.5, p=0.24, I 2 =16%). However, effects on total symptom scores in two trials of high‐dose methylfolate (15 mg/day) approached statistical significance (N=72, n=2, SMD=0.45, 95% CI: 0.02‐0.92, p=0.06, I 2 =0%).

Two RCTs examining a high dose (15 mg/day) of methylfolate (the most bioactive metabolite of folic acid) as an adjunctive treatment for MDD found moderate‐to‐large benefits for depressive symptoms (N=99, n=2, SMD=0.73, 95% CI: 0.28‐1.19, p=0.002, I 2 =3%) 67 . There was no evidence of adverse effects or statistical heterogeneity. However, when including the lower‐dose trials of methylfolate (7.5 mg/day), no significant effects on depression were observed (N=249, n=3, SMD=0.34, 95% CI: –0.4 to 1.08, p=0.37, I 2 =81%). When administering vitamin B9 as folic acid (0.5‐10 mg/day), no significant effects on depressive symptoms were observed (N=657, n=4, SMD=0.4, 95% CI: –0.08 to 0.88, p=0.1, I 2 =83%).
Significant effects were observed in the two trials using low dose (<5 mg/day) folic acid (N=190, SMD=0.57, 95% CI: 0.23‐0.91, p<0.001, I 2 =25%), while no significant benefits were observed from doses of ≥5 mg/day (N=467, n=2, SMD=0.24, 95% CI: –0.56 to 1.03, p=0.56, I 2 =76%) 67 . As an adjunctive to SSRIs in 904 individuals with unipolar depression (mostly MDD), folate‐based supplements (including folic acid and methylfolate, administered at varying doses) were associated with significantly greater reductions in depressive symptoms compared to placebo, although there was large heterogeneity between trials (n=7, SMD=0.37, 95% CI: 0.01‐0.72, p=0.04, I 2 =79%) 67 .

The most widely assessed vitamin supplement for mental disorders was vitamin B9, which is also referred to as "folate" when in dietary form. It can be administered in supplement form as folic acid, folinic acid or methylfolate (which is also known as l‐methylfolate, levomefolic acid, or 5‐methyltetrahydrofolate).

Figures 2-7 show the efficacy of nutrient supplementation (as determined by meta‐analyses) for all clinical outcomes reported across different psychiatric conditions, including depressive disorders (Figure 2), anxiety disorders (Figure 3), schizophrenia (Figure 4), states at risk for psychosis (Figure 5), bipolar disorder (Figure 6), and ADHD (Figure 7). The overall quality of meta‐analyses is also displayed in these figures. Nutrient supplements with sufficient data (i.e., from meta‐analyses with >400 participants) are highlighted in Table 2 . For all nutrients assessed, the specifics of these findings, along with data on safety and tolerability, are detailed below.

Effects of nutrient supplements in attention‐deficit/hyperactivity disorder (ADHD), shown as standardized mean difference with 95% CI. Circles represent no significant difference from placebo; diamonds represent p≤0.05 compared to placebo; * represents trim‐and‐fill estimate adjusted for publication bias.
A2 – AMSTAR‐2 total score, A2‐CA – AMSTAR‐2 "critical domains" adhered to, PUFAs – polyunsaturated fatty acids, NA – not available, RCTs – randomized controlled trials.

Effects of nutrient supplements in depressive disorders, shown as standardized mean difference with 95% CI. Circles represent no significant difference from placebo; diamonds represent p≤0.05 compared to placebo; * represents trim‐and‐fill estimate adjusted for publication bias. A2 – AMSTAR‐2 total score, A2‐CA – AMSTAR‐2 "critical domains" adhered to, MDD – major depressive disorder, EPA – eicosapentaenoic acid, DHA – docosahexaenoic acid, SSRIs – selective serotonin reuptake inhibitors, NA – not available.

The quality assessment of the meta‐analyses is provided alongside the respective outcomes in Figures 2-7. Individual meta‐analyses fulfilled between 4 and 16 of the AMSTAR‐2 criteria (median: 12, mean: 12). The majority of the meta‐analyses (25 out of 33) adhered to five or more of the seven "critical domains", but only five of them adhered to all the domains 52 , 58 , 64 , 78 , 80 . Twenty‐six of the 33 included meta‐analyses were published in 2016‐2019.
Specific psychiatric conditions (and reported outcomes) considered in this meta‐review included: schizophrenia (examining total symptoms along with positive, negative, general and depressive symptoms, and tardive dyskinesia) 52 - 59 ; states at risk for psychosis (examining attenuated psychosis symptoms, negative symptoms, transition to psychosis, and functioning) 60 - 63 ; depressive disorders (including any clinical depression, diagnosed major depressive disorder (MDD), depression in pregnancy, in old age, or as a comorbidity to chronic health conditions) 51 , 59 , 64 - 73 ; anxiety and stress‐related conditions (including generalized anxiety disorder, obsessive‐compulsive disorder (OCD) and trichotillomania) 68 , 72 , 74 ; bipolar disorder type I and II (examining overall symptoms, bipolar mania, bipolar depression, functional impairments, and quality of life) 56 , 68 , 72 , 75 , 76 ; and ADHD (including composite symptoms, hyperactivity‐impulsivity, inattention, behavioural comorbidities such as aggression, and cognitive functioning) 77 - 82 . Meta‐analyses examined RCTs of PUFAs, vitamins, minerals, amino acid supplements and pre/probiotics, with primary analyses including outcome data from a total of 10,951 individuals. All meta‐analyses were based on nutrient supplementation administered in conjunction with “usual care” (without specifying treatment regimens) or as an adjunctive treatment to a specific class of psychotropics (e.g., selective serotonin reuptake inhibitors (SSRIs) in depression, or antipsychotics in schizophrenia). Only one of the meta‐analyses reported on a nutrient supplement as monotherapy for a mental disorder (i.e., omega‐3 fatty acids for depression 51 ), whereas no others specifically excluded patients taking medications. No meta‐analyses directly compared nutrient supplementation to psychotropic medications. All studies 51 - 82 were placebo‐controlled. 
The search returned 1,194 results, which were reduced to 737 after duplicates were removed. One further potentially eligible article was retrieved from the additional search of Google Scholar. Title and abstract screening removed 597 articles, while 141 articles were retrieved and reviewed in full. Of these, 108 were ineligible. Thus, in total, eligible data from 33 independent meta‐analyses of RCTs of nutrient supplementation in mental disorders were included for this meta‐review (see Figure 1).

DISCUSSION

This meta‐review aggregated and evaluated all the recent top‐tier evidence from meta‐analyses of RCTs examining the efficacy and safety of nutritional supplements in mental disorders. We identified 33 eligible meta‐analyses published from 2012 onwards (26 since 2016), with primary analyses including 10,951 individuals with psychiatric conditions (specifically depressive disorders, anxiety and stress‐related disorders, schizophrenia, states at risk for psychosis, bipolar disorder and ADHD), randomized to either nutritional supplementation (including omega‐3 fatty acids, vitamins, minerals, N‐acetylcysteine and other amino acids) or placebo control conditions. Although the majority of nutritional supplements assessed did not significantly improve mental health outcomes beyond control conditions (see Figures 2-7), some of them did provide efficacious adjunctive treatment for specific mental disorders under certain conditions. The nutritional intervention with the strongest evidentiary support is omega‐3, in particular EPA. Multiple meta‐analyses have demonstrated that it has significant effects in people with depression, including high‐quality meta‐analyses with good confidence in findings as determined by AMSTAR‐2 64 . Meta‐analytic data have shown that omega‐3 is effective when given adjunctively to antidepressants 51, 64 .
As a monotherapy intervention, the data are less compelling for omega‐3, while DHA or DHA‐predominant formulas do not appear to show any obvious benefit in MDD 51, 64 . Omega‐3 supplementation appears to be of greatest benefit when administered as high‐EPA formulas, as significant relationships between EPA dosage and effect sizes are also observed in high‐quality meta‐analyses of RCTs 59, 64 . Emergent data from RCTs further indicate that omega‐3 may be most beneficial for patients presenting with raised inflammatory markers 83 . The available meta‐analyses suggest that omega‐3 supplementation is not effective in patients with depression as a comorbidity to chronic physical conditions 65 , including cardiometabolic diseases, a finding which has been replicated in subsequent trials 84 . In light of current adverse event data, omega‐3 seems to represent a safe adjunctive treatment. More research is needed concerning the efficacy of omega‐3 supplements in other mental health conditions. For instance, omega‐3 was indicated as potentially beneficial for children with ADHD, again with high‐EPA formulas conferring the largest effects 79 . However, the negligible effect sizes after controlling for publication bias, along with the low review quality identified by AMSTAR‐2, reduce confidence in findings. Additionally, whereas the existing meta‐analytic data have found a lack of significant benefits in people with schizophrenia 55, 59 , subsequent trials in young people with first‐episode psychosis have reported more positive, though mixed, results 85, 86 , putatively ascribed to neuroprotective effects 87, 88 . Adjunctive treatment with folate‐based supplements was found to significantly reduce symptoms of MDD and negative symptoms in schizophrenia 54, 67 . However, in both cases, AMSTAR‐2 ratings indicated low confidence in review findings, and positive overall effects in these meta‐analyses were driven largely by RCTs of high‐dose (15 mg/day) methylfolate.
Methylfolate is readily absorbed, overcoming any genetic predispositions towards folic acid malabsorption, and successfully crossing the blood‐brain barrier 89, 90 . Indeed, a placebo‐controlled trial of methylfolate in schizophrenia reported significant increases in white matter within just 12 weeks, co‐occurring with a reduction in negative symptoms 91 . RCTs not captured in our meta‐review 92 and retrospective chart analyses 93 have further indicated benefits of methylfolate supplementation in other mental disorders. Considering this, alongside the lack of detrimental side effects (in fact, significantly fewer adverse events in samples receiving treatment compared to placebo 54 ), further research on methylfolate as an adjunctive treatment for mental disorders is warranted.

Regarding other vitamins (such as vitamin E, C or D), minerals (zinc and magnesium) or inositol, there is currently a lack of compelling evidence supporting their efficacy for any mental disorder, although the emerging evidence concerning positive effects of vitamin D supplementation in major depression has to be mentioned.

Beyond vitamins, minerals and omega‐3 fatty acids, certain amino acids are now emerging as promising adjunctive treatments in mental disorders. Although the evidence is still nascent, N‐acetylcysteine in particular (at doses of 2,000 mg/day or higher) was indicated as potentially effective for reducing depressive symptoms and improving functional recovery in mixed psychiatric samples 72 . Furthermore, significant reductions in total symptoms of schizophrenia have been observed when using N‐acetylcysteine as an adjunctive treatment, although with substantial heterogeneity between studies, especially in study length (in fact, N‐acetylcysteine has a very delayed onset of action of about 6 months 56, 94 ). N‐acetylcysteine acts as a precursor to glutathione, the primary endogenous antioxidant, neutralizing cellular reactive oxygen and nitrogen species 95 .
Glutathione production in astrocytes is rate‐limited by cysteine. Oral glutathione and L‐cysteine are broken down by first‐pass metabolism, and do not increase brain glutathione levels, unlike oral N‐acetylcysteine, which is more easily absorbed, and has been shown to increase brain glutathione in animal models 96 . Additionally, N‐acetylcysteine has been shown to increase dopamine release in animal models 96 . N‐acetylcysteine may assist in the treatment of schizophrenia, bipolar disorder and depression through decreasing oxidative stress and reducing glutamatergic dysfunction 96 , but has wider preclinical effects on mitochondria, apoptosis, neurogenesis and telomere lengthening of uncertain clinical significance.

NMDA receptors are activated by binding D‐serine or glycine 97 . Sarcosine is a naturally occurring glycine transport inhibitor and can act as a co‐agonist of NMDA 98 . As such, D‐serine, glycine and sarcosine may improve psychotic symptoms through NMDA modulation 99 . We found reductions in total psychotic symptoms, but not negative symptoms, with glycine and sarcosine. Additionally, we found that glycine was not effective in combination with clozapine. This may be because clozapine already acts as an NMDA receptor glycine site agonist 97 .

The role of the gut microbiome in mental health is also a rapidly emerging field of research 99 . Gut microbiota differs significantly between people with mental disorders and healthy controls, and recent faecal transplant studies using germ‐free mice indicate that these differences could play a causal role in symptoms of mental illness 41, 100, 101 . Intervention trials that aim to investigate the effect of probiotic formulations on clinical outcomes in mental disorders are now beginning to emerge 71 .
We included one recent meta‐analysis that evaluated the pooled effect of probiotic interventions on depressive symptoms: while the primary analysis reported no significant effect, the moderately large effect in the three included studies suggests that probiotics may be beneficial for those with a clinical diagnosis of depression rather than subclinical symptoms 71 . However, additional trials are required to replicate these results, to evaluate the long‐term safety of probiotic interventions, and to elucidate the optimal dosing regimen and the most effective prebiotic and probiotic strains 102 .

While this meta‐review has highlighted potential roles for the use of nutrient supplements, this should not be intended to replace dietary improvement. The poor physical health of people with mental illness is well documented 103 , and excessive and unhealthy dietary intake appears to be a key factor involved 4, 5 . Improved diet quality is associated with reduced all‐cause mortality 104 , whereas multivitamin and multimineral supplements may not improve life expectancy 18 - 20 . A meta‐analysis of dietary interventions in people with severe mental illness found benefits on a number of physical health aspects 105 . It is unlikely that standard nutrient supplementation will be able to cover all beneficial aspects of improved dietary intake. In addition, whole foods may contain vitamins and minerals in several different forms, whereas nutrient supplements may provide only one: vitamin E, for example, occurs naturally in eight forms. Dietary interventions also reduce dietary elements consumed in excess, such as salt, which is a key driver of premature mortality 13 . While improving dietary intake appears to have a clear role in increasing life expectancy and preventing chronic disease, there is currently a lack of studies evaluating this in people with mental disorders.
Additionally, although recent meta‐analyses of RCTs have demonstrated that dietary improvement reduces symptoms of depression in the general population 106 , more well‐designed studies are needed to confirm the mental health benefits of dietary interventions for people with diagnosed psychiatric conditions 25 .

Our data should be considered in the light of some limitations. First, although meta‐analyses of RCTs typically constitute the top‐tier of evidence, it is important to acknowledge that many of the outcomes included in this meta‐review had significant amounts of heterogeneity between the included studies, or were based on a small number of studies. A next step within this field of research is to move from study‐level to patient‐level meta‐analyses, as this would provide a more personalized picture of the effects of nutrient supplements derived from adequately powered moderator, mediator and subgroup analyses. Additionally, comparing nutrient supplements in the same trial would be desirable. It is recognized that people with mental disorders commonly take nutritional supplements in combinations. In some instances, research has supported this approach, most commonly in the form of multivitamin/mineral combinations 107 . However, recent research in the area of depression has revealed that "more is not necessarily better" when it comes to complex formulations 108 . Of note, recent large mood disorder clinical trials have revealed that nutrient combinations may not have a more potent effect, and in some cases placebo has been more effective 47, 108, 109 .

In conclusion, there is now a vast body of research examining the efficacy of nutrient supplementation in people with mental disorders, with some nutrients now having demonstrated efficacy under specific conditions, and others with increasingly indicated potential.
There is a great need to determine the mechanisms involved, along with examining the effects in specific populations such as young people and those in early stages of illness. A targeted approach is clearly warranted, which may manifest as biomarker‐guided treatment, based on key nutrient levels, inflammatory markers, and pharmacogenomics 83, 91, 110.
Zarrin Khul

Zarrin Khul (also Romanized as Zarrīn Khūl and Zarrin Khool; also known as Zarī Khūl-e Pā’īn, Zarīn Khūl-e Pā’īn, and Zarrīn Khūl-e Pā’īn) is a village in Saghder Rural District, Jebalbarez District, Jiroft County, Kerman Province, Iran. At the 2006 census, its population was 19, in 4 families.
Q: Mechanical energy of a body is relative? Since the potential energy of a body is relative and depends on the point we choose as having zero potential, does this mean that the mechanical energy (potential energy + kinetic energy) of the body is also relative?

A: As Vladimir said, the answer is yes, although you are not talking about the same thing as he is. Kinetic energy is relative, because it depends on velocity, which is not the same for every frame of reference. The easiest example is this: say you and your friend are sitting on a train. From your perspective, your friend has zero kinetic energy, since he is not moving relative to you. But if I stand on the ground and see the train passing by, I will see your friend moving forward with a certain velocity $v$ (the same as the train), and his (classical) kinetic energy will be $\frac{1}{2}mv^2$. Thus we each measure a different kinetic energy. Potential energy is also "relative", but for another reason. It does not depend on the reference frame like kinetic energy does. Instead, it depends on a reference point chosen arbitrarily. John Taylor (Classical Mechanics) defines potential energy $U(\overrightarrow{r})$ as (minus) the work done by a conservative force $\overrightarrow{F}$ that acts on a body that goes from a reference point $\overrightarrow{r_0}$ to the point $\overrightarrow{r}$ (where the potential energy is calculated). Mathematically, we write: $$ U(\overrightarrow{r})=-W(\overrightarrow{r_0}\rightarrow\overrightarrow{r})= -\int_\overrightarrow{r_0}^\overrightarrow{r}\overrightarrow{F}\cdot d\overrightarrow{r} $$ As you might have noted, since this reference point is arbitrary, different values could be calculated for the potential energy of the same object in the same situation. However, the value of this potential is of no interest. What really matters is its variation, i.e. its gradient.
If you know a bit about calculus, you might know that, for a conservative force $\overrightarrow{F}$: $$ \overrightarrow{F}=-\vec{\nabla} U $$ which means that minus the gradient of the potential energy equals the applied force. But you can add any constant $C$ to this potential $U$ and define a new potential energy $U'(\overrightarrow{r})=U(\overrightarrow{r})+C$ and get the same result, since the derivative of the constant is zero. For example, the gravitational potential energy near the surface of the Earth is approximately $U(\overrightarrow{r})=U(h)=mgh$. Common sense invites us to use the ground as a reference point for $h=0$, but you could use the top of a building without any problem: if the object were to go below that point, its potential energy would be negative, which has little significance since its value is arbitrary. Whichever point of reference you choose, if, say, the object drops 10 m from its initial position, you will always measure a loss of potential energy $\Delta U = mg\Delta h=mg\times(-10 \;\mathrm{m})$. So, really, we don't care how much mechanical energy an object has. What really matters is that, as long as you are in an inertial reference frame and all forces are conservative, this value is constant, whatever it is. This is exactly what the law of conservation of energy is about! In that conservative case, the change in potential energy is minus the change in kinetic energy; if non-conservative forces are present, the change in mechanical energy instead equals the work they do (again, you see that we don't really care about the value of the PE, only how it changes). Also note that this doesn't apply to the "rest mass energy" introduced in special relativity ($E=mc^2$), which is characteristic of a given particle and is always measured in the same reference frame as the object (at rest, no velocity).
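Both points can be checked numerically. A small sketch (the mass, speeds and heights are arbitrary): kinetic energy differs between the train and ground frames, while the change in potential energy over a given drop is the same no matter where we put $h=0$.

```python
g = 9.81   # m/s^2
m = 70.0   # kg, arbitrary mass of the friend on the train

# 1) Kinetic energy is frame-dependent: zero in the train frame,
#    nonzero for an observer standing on the ground.
v_train, v_ground = 0.0, 30.0          # friend's speed in each frame (m/s)
ke_train = 0.5 * m * v_train ** 2      # 0 J
ke_ground = 0.5 * m * v_ground ** 2    # 31500 J

# 2) Potential energy depends on an arbitrary zero point, but its
#    change over a 10 m drop does not.
def U(h, h_zero):
    """Gravitational PE with h = h_zero chosen as the reference point."""
    return m * g * (h - h_zero)

drop = 10.0
changes = [U(100.0 - drop, h0) - U(100.0, h0) for h0 in (0.0, 50.0, -3.0)]
# Every choice of reference gives the same delta-U = -m*g*drop.
```

Whatever offset is chosen, each entry of `changes` equals $-mg\,\Delta h$, exactly as argued above.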
Q: android error: Could not read input channel file descriptors from parcel

I've made an application for Android that works more or less like this:

The application communicates with a Web service and transfers information (not files).
I can navigate to a different screen using Intent and startActivity.

Unfortunately, sometimes the application crashes in a different activity with the following error:

java.lang.RuntimeException: Could not read input channel file descriptors from parcel.
at android.view.InputChannel.nativeReadFromParcel(Native Method)
at android.view.InputChannel.readFromParcel(InputChannel.java:135)
at android.view.IWindowSession$Stub$Proxy.add(IWindowSession.java:523)
at android.view.ViewRootImpl.setView(ViewRootImpl.java:481)
at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:301)
at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:215)
at android.view.WindowManagerImpl$CompatModeWrapper.addView(WindowManagerImpl.java:140)
at android.view.Window$LocalWindowManager.addView(Window.java:537)
at android.app.ActivityThread.handleResumeActivity(ActivityThread.java:2507)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1986)
at android.app.ActivityThread.access$600(ActivityThread.java:123)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1147)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:137)
at android.app.ActivityThread.main(ActivityThread.java:4424)
at java.lang.reflect.Method.invokeNative(Native Method)
at java.lang.reflect.Method.invoke(Method.java:511)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551)
at dalvik.system.NativeStart.main(Native Method)

But I don't know what this error means, because I don't work with files. Any idea?
A: This question appears to be the same as Could not read input channel file descriptors from parcel, which was (incorrectly) closed as off-topic. It is also more or less the same as Could not read input channel file descriptors from parcel crash report. Unfortunately, those questions haven't gotten a satisfactory (and sufficiently general) answer, so I am going to try anyway.

File descriptors are used in multiple places in Android:

Sockets (yes, open network connections are "files" too);
Actual files (not necessarily files on disk; they may just as well be android.os.MemoryFile instances);
Pipes, which Linux uses everywhere. For example, the attempt to open a pipe that resulted in your exception was likely required to send input events between the IME (keyboard) process and your application.

All descriptors are subject to a shared maximum limit; when that count is exceeded, your app begins to experience serious problems. Having the process die is the best scenario in such a case, because otherwise the kernel would run out of memory (file descriptors are stored in kernel memory).

You may have issues with descriptors (files, network connections) not being closed; you must close them as soon as possible. You may also have issues with memory leaks, i.e. objects not being garbage-collected when they should be (and some of the leaked objects may in turn hold onto file descriptors). Your own code does not have to be the guilty party: libraries you use, and even some system components, may have bugs leading to memory leaks and file descriptor leaks.

I recommend using Square's Leak Canary, a simple, easy-to-use library for automatic detection of leaks (well, at least memory leaks, which are the most frequent).
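As a plain-Java illustration of the "close them as soon as possible" advice (not Android-specific, and the helper name is made up for this example): try-with-resources guarantees that the reader, and with it the underlying file descriptor, is returned to the system even if an exception is thrown mid-read.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.io.PrintWriter;

public class CloseDescriptors {
    // Hypothetical helper: the reader (and its file descriptor) is closed
    // automatically when the try block exits, even on an exception.
    static String readFirstLine(File f) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(f))) {
            return reader.readLine(); // null if the file is empty
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("fd-demo", ".txt");
        tmp.deleteOnExit();
        try (PrintWriter writer = new PrintWriter(tmp)) {
            writer.println("hello");
        } // writer.close() runs here, releasing that descriptor too
        System.out.println(readFirstLine(tmp));
    }
}
```

A descriptor closed this way is released deterministically, instead of waiting for the garbage collector to finalize the stream at some unknown later time.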
Livestock

You will not start out your farm life with a barn to hold livestock animals. To start raising livestock you'll need to hire Gannon to build a Barn, which costs 200 Lumber and 12,000 G (27,000 G combined). There is only 1 animal Barn available, but you can increase its size. The original small size can hold up to 4 Feeder bins, the medium can hold 8, and the large barn can fit up to 12 Feeders. Each Feeder bin can accommodate one cow or sheep. Cows and sheep share the same barn, so you can have any combination of the two types of animals as long as you have enough Feeder bins.

When you receive a baby cow or baby sheep, it will only grow if you feed it every day. If you don't feed it, then it will take longer to mature. Normally, a cow will take 18 days to mature and a sheep will take 16 days. If you forget to feed it for a day, or can't due to a storm, that adds an extra day to its growing time.

Cows and Jersey Cows

After you build your livestock barn, Mirabelle will visit and give you a free baby normal cow. If you want more normal cows, you can purchase them from her shop for 5000 G each. To collect milk from a cow, you need to buy a Milker from Mirabelle's shop. The Milker's price varies with the number of Wonderful slots it has (1 slot is 1000 G, 2 slots is 2000 G, etc.). Every day you can milk your normal, black-spotted adult cow to receive a bottle of Milk. The more hearts your cow has, the higher the quality of milk you'll receive.

You will unlock the Jersey Cow once you have shipped an S-Rank Milk from a normal cow. The day after you ship the S-Rank Milk, Mirabelle will have baby Jersey Cows for 15,000 G each. Jersey Cows take just as long as normal cows to mature, but they produce milk every 3 days instead of every day. Jersey Milk sells for more profit than normal Milk.

The milk you receive from cows can be converted into several different items via the maker machines:

Normal cow products:
- Milk (+4 STA, +1 FUL): avg. 192 G to ship, S-Rank max 270 G to ship, 60 G to Chen
- Yogurt (+4 STA, +4 FUL): avg. 288 G to ship, S-Rank max 405 G to ship, 90 G to Chen
- Butter (+1 STA, +1 FUL): avg. 386 G to ship, S-Rank max 540 G to ship, 120 G to Chen
- Cheese (+6 STA, +4 FUL): avg. 578 G to ship, S-Rank max 810 G to ship, 180 G to Chen

Jersey cow products:
- Jersey Milk (+4 STA, +1 FUL): avg. 675 G to ship, S-Rank max 945 G to ship, 60 G to Chen
- Superb Yogurt (+4 STA, +4 FUL): avg. 1013 G to ship, S-Rank max 1417 G to ship, 90 G to Chen
- Superb Butter (+1 STA, +1 FUL): avg. 1350 G to ship, S-Rank max 1890 G to ship, 120 G to Chen
- Superb Cheese (+6 STA, +4 FUL): avg. 2026 G to ship, S-Rank max 2835 G to ship, 180 G to Chen

If you want to birth cows on your farm, buy the Cow Miracle Potion from Mirabelle for 3500 G. The same potion works on a normal or Jersey Cow. You also need an available Birthing Pen, and the cow must be an adult. Use the Cow Miracle Potion on the cow you want to impregnate, and then remember to put Fodder in the Birthing Pen's Feeder every day inside the barn. After 21 days, the baby cow will be born. The new cow will have half the number of hearts its mother had, and it will take 18 days to mature into an adult cow. Cows have a life span of about 6 years.

Sheep and Suffolk Sheep

You don't need to do anything special to unlock normal Sheep. As soon as Gannon finishes building your Barn, Mirabelle will have Sheep for 4000 G each. You will also need to buy the Clippers from her store for 1000 G per available Wonderful slot. Normal adult Sheep can have their Wool clipped every 3 days. The more hearts the Sheep has, the better the Wool it produces.

Mirabelle will also have the Suffolk Sheep for sale after you ship your first S-Rank Wool. Each Suffolk Sheep costs 12,000 G! The pink Suffolk Sheep can have their wool clipped only every 6 days, but it sells for much more than normal Wool. You can convert wool into yarn with the Yarn Maker Machine:

Normal sheep products:
- Wool: avg. 772 G to ship, S-Rank max 1080 G to ship, 240 G to Chen
- Yarn: avg. 2412 G to ship, S-Rank max 3375 G to ship, 1000 G to Chen

Suffolk sheep products:
- Suffolk Wool: avg. 1930 G to ship, S-Rank max 2700 G to ship, 240 G to Chen
- Superb Yarn: avg. 6032 G to ship, S-Rank max 8437 G to ship, 1000 G to Chen

Having Sheep born on your farm is similar to the method used for Cows, but the adult Sheep must have all of its wool intact before the Sheep Miracle Potion can be used on it. The Sheep potion is at Mirabelle's shop for 3000 G each. After 16 days, the baby Sheep will be born with half of the hearts that the mother Sheep had. Another 16 days later, the child will be an adult Sheep.
{ "pile_set_name": "Pile-CC" }
proc testAnonRanges(type lowT, type countT) {
  var zero = 0:countT;

  // Applying #0 to a 0.. uint range results in wraparound leading to
  // an error when trying to iterate over it when bounds checks are
  // on.
  for i in 0:lowT..#(0:countT) do write(i, ' ');
  writeln();

  for i in 0:lowT..#(zero) do write(i, ' ');
  writeln();

  for i in 0:lowT..#(1:countT) do write(i, ' ');
  writeln();

  for i in 0:lowT..#(10:countT) by 2:lowT do write(i, ' ');
  writeln();

  for i in (0:lowT.. by 2:lowT) #(10:countT) do write(i, ' ');
  writeln();

  for i in 10:lowT..#10:countT do write(i, ' ');
  writeln();
}

testAnonRanges(uint(64), int(64));
{ "pile_set_name": "Github" }
2018-2019 Waiver
May 24, 2018 02:50 PM

Waiver & Release: 2018-2019 Release of Liability

I understand and recognize the risk of physical injury inherent in dance and dance performances and I am willing to assume those risks. I agree that I will not hold Silhouette Dance Company, its directors or employees liable for injuries sustained while in attendance and/or participating in any dance activity at Silhouette Dance Company or any activity involving Silhouette Dance Company, i.e. recital dress rehearsals, recital performances, community performances, conventions and competitions.

Enrollment

2018-2019 SEASON ENROLLMENT: A non-refundable registration fee is due at the time of enrollment for every student. The fee is $40 per student and $20 for each additional sibling. It is understood that enrollment is from August 2018 to June 2019 (whenever the recital takes place; TBA). The first month's tuition will also be due at time of enrollment for classes. There is a two-month minimum for all dance lessons.

Tuition

Tuition is automatically debited from your checking account, debit or credit card on the 1st of each month. A $25 late fee will be assessed after the 3rd of the month. MasterCard and Visa debit and credit cards are accepted. All accounts must have a card on file. Silhouette Dance Company is authorized to charge your account the $25 NSF fee should there be insufficient funds. There is a 10% tuition discount when tuition is paid by the semester (August to December and January to June). Delinquent accounts will be turned over to an outside collection agency if the account holder fails to make satisfactory payment arrangements with SDC. An account is considered delinquent once it is two months past due. No student will be allowed to continue classes if the account is two months past due or prior payment arrangements haven't been made. There are NO REFUNDS for any reason, including Registration Fees, Tuition, Costume Fees or Recital Fees.

Withdrawal

There is a TWO-month minimum for all dance lessons. 30 days notice BEFORE the first of the month is required to discontinue classes. To withdraw from classes, the parent must email the Director at ashley@silhouettedancecompany.com and receive a confirmation in order to ensure automatic debit will discontinue. Withdrawal MUST be submitted in writing; phone calls, leaving a voicemail or saying it in person are NOT proper withdrawal notices. If you decide to take a leave of absence during the dance year, you will need to notify the Director via email (ashley@silhouettedancecompany.com). If you would like to keep your place in class, you can freeze your account by paying 60% of your tuition cost for each month you are out, OR you may unenroll completely, in which case all registration fees will apply when you return to re-enroll. If you wish to unenroll, you must be sure to give the 30 days notice before the 1st of the month or else the next month will be charged via auto draft.

Non-Refundable, Auto-Drafted Performance Fees

It is assumed that ALL STUDENTS will perform in all performances when enrolled, and all non-refundable fees associated with the performances are auto drafted. There will be 2 performances throughout the year: a Christmas show and an end-of-year annual recital. We must receive email notification by September 1 (Christmas show) and October 1st, 2018 (Annual Recital) that your dancer is not participating in order to ensure costume and recital fees are not charged. Dancers who enroll after October 1st must notify us at time of enrollment. There are two non-refundable fees associated with the CHRISTMAS SHOW & ANNUAL RECITAL: Costumes and the Recital Fee. The Costume & Christmas Performance Fee will be auto drafted on September 15th. Costumes will be minimal for this performance to keep cost low. The Annual Recital costume fees will be auto drafted in 2 installments: October 15 & November 15. Costumes range from $65-80 each and include all accessories and a brand new pair of tights. The Recital Fee will be auto drafted February 15th, 2019. The Recital Fee is $80 per family and helps cover the cost of the production (theatre rental, backdrop, keepsake programs, etc.) so the show will be free for your family and friends to attend.

Attendance

Regular attendance is the student's and parent's responsibility and is extremely important for dancers to learn and review steps and combinations with their class. Excessive absences may hinder their progress and necessitate a change in class level. In the spring, once students have begun learning their recital dances, no more than 4 absences will be allowed; otherwise the student may be required to attend and pay for private lessons in order to perform in the routine. Students are always welcome to take a make-up lesson within a similar class level. Simply check the schedule online, find the class that works best for you and inform the studio via email when your dancer plans to attend the make-up class.

Dress Code

See the attached Dress Code document for dress code guidelines. All students must dress according to the required dress code for that particular class. Failure to do so will result in possible loss of class time or removal from class.

Studio Policies

1. Please inform the office if your child must enter class late, leave early or will be picked up by someone other than yourself. We would appreciate a phone call or email if your child will be absent from class.
2. No gum, food, drinks or street shoes are allowed in the studio.
3. Appropriate and courteous behavior is expected from students, parents, visitors and staff.
4. Students are to remain inside the building while waiting to be picked up after lessons.
5. We reserve the right to dismiss, suspend or expel any person who is in violation of studio policies or any other reason deemed reasonable by the owner.
6. No children (students not already in class, siblings or visitors) may be left unattended at any time.

Photo Release

I understand that photos of my child(ren) taken in dance class, dance camp, picture day and/or performances may be used in promotional advertising, including printed brochures, newspaper advertisements, social media such as Facebook and Instagram, and the Silhouette Dance Company website.

Signature Text

Whether I registered online, over the phone or in person, I have read, understand and accept the policies and procedures of Silhouette Dance Company, LLC.
{ "pile_set_name": "Pile-CC" }
Q: CodeIgniter production and development server on the same domain (no subdomain)

I googled this many times but now I have to ask it here. I want to set up a development/production workflow for a website. My constraint is that I use Facebook Connect (Facebook Graph now), so I need to have dev and prod on the same domain and server (to be able to log in and test the features). I thought I would edit the CodeIgniter index.php to switch environments when a specific user agent is detected (I can edit the one in my Firefox). Do you think that's a good idea, or do you have a better one?

And now comes the eternal question: how can I deploy this the easy way? Should I use Capistrano or Phing? Or simply a script with SVN? Please help me, I'm totally new to this deployment thing. I used to work directly in production for my little websites, or on other domains, but now that's not possible anymore.

A: For me, I'll have something like two application folders, one called "production" and one called "development". Then in your index.php file, where you set your application folder, you can use PHP to determine which one to use. Just set your $application_folder variable to whichever one you need. (You could do this based on anything: a cookie, an IP address, or something else.)
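As a sketch of that idea (assuming a stock CodeIgniter-2-style index.php where $application_folder is defined; the dev_mode cookie name is made up for illustration):

```php
<?php
// index.php (front controller) -- sketch, not a drop-in file.
// Pick the application folder based on a cookie; an IP-address or
// user-agent check would work the same way.
if (isset($_COOKIE['dev_mode']) && $_COOKIE['dev_mode'] === '1') {
    $application_folder = 'development';
} else {
    $application_folder = 'production';
}
// ...the rest of the stock index.php follows unchanged...
```

One caveat with a cookie or user-agent switch: anyone who guesses it can reach your development code, so prefer an IP whitelist if the dev copy contains anything sensitive.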
{ "pile_set_name": "StackExchange" }
Ćazim Suljić Ćazim Suljić (; born 29 October 1996) is a French-born Bosnian footballer who plays as a midfielder in Italy for Alessandria. Club career Suljić is a youth product of Saint-Étienne. He made his Coupe de la Ligue debut on 16 December 2015 against Paris Saint-Germain. He played the full game. On 11 July 2019, he signed with Alessandria. References Category:1996 births Category:Living people Category:French people of Bosnia and Herzegovina descent Category:French footballers Category:Bosnia and Herzegovina footballers Category:Association football midfielders Category:Bosnia and Herzegovina expatriate footballers Category:AS Saint-Étienne players Category:Thonon Évian F.C. players Category:F.C. Crotone players Category:Expatriate footballers in Italy Category:Expatriate footballers in Slovenia Category:NK Ankaran players Category:A.C. Cuneo 1905 players Category:U.S. Alessandria Calcio 1912 players Category:Serie A players Category:Slovenian PrvaLiga players Category:Serie C players Category:Bosnia and Herzegovina expatriate sportspeople in Italy Category:Bosnia and Herzegovina expatriate sportspeople in Slovenia
{ "pile_set_name": "Wikipedia (en)" }
Start Date: 2/28/01; HourAhead hour: 5; No ancillary schedules awarded. No variances detected. LOG MESSAGES: PARSING FILE -->> O:\Portland\WestDesk\California Scheduling\ISO Final Schedules\2001022805.txt
{ "pile_set_name": "Enron Emails" }
The Dinosaur and Cavemen Expo promises to take visitors back in time: 65 million years back, to be precise. Now in its fourth year, the event is sponsored by the National Science Foundation and hosted by the University of Missouri's integrative anatomy and paleobiology groups. There will be dinosaur models, fossils, meteorites, asteroids and moon dust on display, and the NASA-produced film "Earth's Wild Ride," which offers a dynamic look at the planet's natural history, will be shown in the Columbia Public School Planetarium every 45 minutes. Raptor Rehab and the MU School of Natural Resources also will be there to introduce visitors to large birds of prey, modern relatives of dinosaurs. The Dinosaur and Cavemen Expo will run 11:30 a.m. to 4:30 p.m. Saturday at Rock Bridge High School, 4303 S. Providence Road. Admission is free.
{ "pile_set_name": "OpenWebText2" }
Genotype and allele frequencies of C3435T polymorphism of the MDR1 gene in various Jewish populations of Israel. The human multidrug-resistant gene (MDR1) encodes for P-glycoprotein (P-gp), which is a membrane-bound efflux-transporter conferring resistance to a number of natural cytotoxic drugs and potentially toxic xenobiotics. The wobble C3435T polymorphism at exon 26 was associated with different expression levels of the MDR1 gene and substrate uptake. Differences in allele frequencies of the C3435T polymorphism have previously been demonstrated between racial groups. In this study, 500 individuals from 5 Jewish populations of Israel (Ashkenazi, Yemenite, North African, Mediterranean, Near-Eastern) were examined for C3435T polymorphism using a PCR-RFLP-based technique to calculate genotype and allele frequencies. Frequencies of the C allele were quite similar among the Ashkenazi (0.65), Yemenite (0.645), and North-African (0.615) Jewish populations. However, the frequency of this allele was slightly lower among Mediterranean Jews (0.58) and significantly lower among Near-Eastern Jews (0.445). The frequency of the C allele among Near-Eastern Jews is, therefore, significantly different from those of all other tested Jewish populations. In comparison to previously studied non-Jewish populations, the frequency of this allele among Near-Eastern Jews is different from that in West Africans (0.91) but is similar to that in whites (0.497). However, the C allele frequencies among the other 4 Jewish populations are significantly lower than that found among West Africans and significantly higher than among non-Jewish whites. These data may have important therapeutic and prognostic implication for P-gp-related drug dosage recommendation in Jewish populations.
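The allele-frequency arithmetic behind figures like these is simple allele counting: each of the N individuals carries two alleles, so the C-allele frequency is (2·n_CC + n_CT) / 2N. A minimal sketch (the genotype counts below are invented for illustration, not taken from the study):

```python
def allele_freq_c(n_cc: int, n_ct: int, n_tt: int) -> float:
    """C-allele frequency from C3435T genotype counts.

    Each CC homozygote contributes two C alleles, each CT
    heterozygote contributes one, out of 2N alleles total.
    """
    n = n_cc + n_ct + n_tt            # number of individuals
    return (2 * n_cc + n_ct) / (2 * n)

# Hypothetical sample of 100 genotyped individuals:
freq = allele_freq_c(n_cc=42, n_ct=46, n_tt=12)
print(round(freq, 3))  # → 0.65, the frequency reported for Ashkenazi Jews
```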
{ "pile_set_name": "PubMed Abstracts" }
Q: Calling .NET web services asynchronously from Java

I need to make asynchronous calls to .NET web services from Java, since synchronous calls are too slow. I know that in .NET this is easily done, since the stub (proxy) class created by wsdl.exe also generates methods for asynchronous calls (BeginMethod()/EndMethod()). I created the service stub using Eclipse Ganymede, but no asynchronous methods were generated. How do you do this in Java? Thanks in advance.

A: Since you are using Eclipse, you are probably using Axis2 to generate the web-services client. Axis2 is capable of generating an asynchronous client. Have a look at the instructions here: you need to select the "Generate async" or "Generate both sync and async" option. There is also an article on asynchronous web services with Axis2; it refers mainly to the service (not the client), but the client code isn't much different. All Java web-service frameworks support asynchronous operations; you just need to configure the generator properly.
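If regenerating the stub is not an option, a generic fallback (not Axis2-specific; the callService method below is a stand-in for your generated synchronous stub call, not a real Axis2 API) is to run the blocking call on a background thread yourself:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncStubDemo {
    // Stand-in for the generated synchronous web-service stub call.
    static String callService(String input) {
        return "echo:" + input;
    }

    public static void main(String[] args) throws Exception {
        // Run the blocking stub call on a background thread so the
        // caller can keep working while the request is in flight.
        CompletableFuture<String> future =
                CompletableFuture.supplyAsync(() -> callService("hello"));

        // ...do other work here...

        String result = future.get(); // block only when the result is needed
        System.out.println(result);   // prints "echo:hello"
    }
}
```

The generated async client is still preferable when you can get it, since it frees the transport thread too, but the wrapper above is often enough to stop slow services from serializing your application.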
{ "pile_set_name": "StackExchange" }
Unfortunately, this project was not successful.

We as individuals can help many homeless people get back on their feet. Come and join the effort by offering help. I am sure you must have seen many people sleeping rough on the streets. Let's help them with a small contribution, and let's fill their Christmas and 2017 with smiles. I would like you to help me raise awareness; even sharing this page makes a big difference. I would like to start this charity project in the Birmingham area, but with your support I would like to take it to the national and international level. According to National Statistics, 17% were found to be homeless but not in priority need, and 8% were found to be intentionally homeless and in priority need.

Helping homeless people: a small contribution makes a big difference. Sharing is caring.
{ "pile_set_name": "Pile-CC" }
Customer Reviews

I love this chair. It is just as pictured, which means it matches the other furniture in my living room. Also, it was easy to assemble. And the price is amazing! I recommend this chair and Overstock.

Very Happy: This chair was perfect for my small living room, and although it's not as comfortable as I wanted, it still was enough for me! I love the red color of my chair. I definitely recommend this chair for everyone. The quality was great also, but I know it won't last as long as I want; it will do great for now!

Modern looking but yet inviting: Informal entry area needed some chairs in front of the fireplace, and these work and look great because it is a room between the living room and kitchen and oversized chairs would get in the way. These are just the right size.

Wow!: I ordered this chair in isolation from other pieces in the room, praying it would 'go' with the scheme and colors in the room and be comfortable also. Bingo! It was a home run, and at a savings! Thanks!!

Great accent chair: Perfect accent chair for my living room - chair looks exactly as it does in the picture.

JTO: I do love my rocking chair. We have grandbabies. It is comfy and looks good in my living room.

Dava: This chair is lovely. Breaks in nicely; we use it lightly in our apartment (morning coffee, drinks) but not as a living room chair. I have two of them and am happy with my purchase.

Stylish Cortesi Chairs: Love, love, love my chairs. For the price you can't go wrong. Easy to put together and really simply beautiful. This was my very first purchase from O and I must tell you I was thrilled. Arrived in 2 days to boot!! Comfy cute Cortesi!

Good Looking Chairs!: I bought two of these chairs for my living room. They did not look good in my living room. I put them in my bedroom and they are perfect! They look very stylish and are comfortable to sit in! I'm very happy with them!
{ "pile_set_name": "Pile-CC" }
--- abstract: 'This paper deals with the impact of fault prediction techniques on checkpointing strategies. We extend the classical analysis of Young and Daly in the presence of a fault prediction system, which is characterized by its recall and its precision, and which provides either exact or window-based time predictions. We succeed in deriving the optimal value of the checkpointing period (thereby minimizing the waste of resource usage due to checkpoint overhead) in all scenarios. These results allow to analytically assess the key parameters that impact the performance of fault predictors at very large scale. In addition, the results of this analytical evaluation are nicely corroborated by a comprehensive set of simulations, thereby demonstrating the validity of the model and the accuracy of the results.' author: - | Guillaume Aupy$^{1}$,Yves Robert$^{1,2}$, Frédéric Vivien$^{1}$ and Dounia Zaidouni$^{1}$\ $1.$ Ecole Normale Supérieure de Lyon & INRIA, France\ [{Guillaume.Aupy | Yves.Robert | Frederic.Vivien | Dounia.Zaidouni}@ens-lyon.fr]({Guillaume.Aupy | Yves.Robert | Frederic.Vivien | Dounia.Zaidouni}@ens-lyon.fr)\ $2.$ University of Tennessee Knoxville, USA bibliography: - 'biblio.bib' title: Impact of fault prediction on checkpointing strategies --- Introduction {#sec.intro} ============ In this paper, we assess the impact of fault prediction techniques on checkpointing strategies. We assume to have jobs executing on a platform subject to faults, and we let $\mu$ be the mean time between faults (MTBF) of the platform. In the absence of fault prediction, the standard approach is to take periodic checkpoints, each of length [C]{}, every period of duration [$T$]{}. 
In steady-state utilization of the platform, the value [$T_{\text{opt}}$]{}of [$T$]{}that minimizes the (expectation of the) waste of resource usage due to checkpointing is easily approximated as ${\ensuremath{T_{\text{opt}}}\xspace}= \sqrt{2 \mu{C\xspace}}$, or ${\ensuremath{T_{\text{opt}}}\xspace}= \sqrt{2 (\mu +{R\xspace}){C\xspace}}$ (where [R]{}is the duration of the recovery). The former expression is the well-known Young’s formula [@young74], while the latter is due to Daly [@daly04]. Now, when some fault prediction mechanism is available, can we compute a better checkpointing period to decrease the expected waste? and to what extent? Critical parameters that characterize a fault prediction system are its recall [$r$]{}, which is the fraction of faults that are indeed predicted, and its precision [$p$]{}, which is the fraction of predictions that are correct (i.e., correspond to actual faults). The major objective of this paper is to refine the expression of the expected waste as a function of these new parameters, and to design efficient checkpointing policies that take predictions into account. We deal with two problem instances, one where the predictor system provides exact dates for predicted events, and another where it only provides time windows during which events take place. 
The key contributions of this paper are the following: (i) The design of several checkpointing policies, their analysis, and a new formula for the checkpointing period that extends Young’s and Daly’s to take predictions into account; (ii) The analytical characterization of the best policy for each set of parameters; (iii) The validation of the theoretical results via extensive simulations, for both Exponential and Weibull failure distributions; (iv) The demonstration that even a poor predictor can lead to a significant reduction of application execution time; and (v) The demonstration that recall is far more important than precision, hence giving insight into the design of future predictors. The rest of the paper is organized as follows. We first detail the framework in Section \[sec.framework\]. We deal with exact date predictions in Section \[sec.no.intervals\], and with time-window based predictions in Section \[sec.intervals\]. Section \[sec.simulations\] is devoted to simulations. Finally, we provide concluding remarks in Section \[sec.conclusion\]. Framework {#sec.framework} ========= Checkpointing strategy ---------------------- We consider a *platform* subject to faults. Our work is agnostic of the granularity of the platform, which may consist either of a single processor, or of several processors that work concurrently and use coordinated checkpointing. The key parameter is $\mu$, the mean time between faults (MTBF) of the platform. If the platform is made of $N$ components whose individual MTBF is $\mu_{ind}$, then $\mu = \frac{\mu_{ind}}{N}$. Checkpoints are taken at regular intervals, or periods, of length [$T$]{}. We use [C]{}, [D]{}, and [R]{}for the duration of the checkpoint, downtime and recovery (respectively). We must enforce that ${C\xspace}\leq {\ensuremath{T}\xspace}$, and useful work is done only during ${\ensuremath{T}\xspace}-{C\xspace}$ units of time for every period of length [$T$]{}, if no fault occurs. 
The *waste* due to checkpointing in a fault-free execution is ${\ensuremath{\textsc{Waste}}\xspace}= \frac{{C\xspace}}{{\ensuremath{T}\xspace}}$. In the following, the *waste* always denote the fraction of time that the platform is not doing useful work. Fault predictor --------------- A fault predictor is a mechanism that is able to predict that some faults will take place, either at a certain point in time, or within some time-interval window. The accuracy of the fault predictor is characterized by two quantities, the *recall* and the *precision*. The recall [$r$]{}is the fraction of faults that are predicted while the precision [$p$]{}is the fraction of fault predictions that are correct. Traditionally, one defines three types of *events*: (i) *True positive* events are faults that the predictor has been able to predict (let $\textit{True}_P$ be their number); (ii) *False positive* events are fault predictions that did not materialize as actual faults (let $\textit{False}_P$ be their number); and (iii) *False negative* events are faults that were not predicted (let $\textit{False}_N$ be their number). With these definitions, we have ${\ensuremath{r}\xspace}= \frac{\textit{True}_P}{\textit{True}_P+\textit{False}_N}$ and ${p\xspace}= \frac{\textit{True}_P}{\textit{True}_P+ \textit{False}_P}$. Fault rates ----------- In addition to $\mu$, the platform MTBF, let ${\ensuremath{\mu_{P}}\xspace}$ be the mean time between predicted events (both true positive and false positive), and let ${\ensuremath{\mu_{NP}}\xspace}$ be the mean time between unpredicted faults (false negative). Finally, we define the mean time between events as ${\ensuremath{\mu_e}\xspace}$ (including all three event types). 
The relationships between $\mu$, ${\ensuremath{\mu_{P}}\xspace}$, ${\ensuremath{\mu_{NP}}\xspace}$, and ${\ensuremath{\mu_e}\xspace}$ are the following: - $\frac{{1-{\ensuremath{r}\xspace}}}{\mu} = \frac{1}{{\ensuremath{\mu_{NP}}\xspace}}$ (here, $1-{\ensuremath{r}\xspace}$ is the fraction of faults that are unpredicted); - $ \frac{{\ensuremath{r}\xspace}}{\mu} = \frac{{\ensuremath{p}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}$ (here, ${\ensuremath{r}\xspace}$ is the fraction of faults that are predicted, and ${\ensuremath{p}\xspace}$ is the fraction of fault predictions that are correct); - $\frac{1}{{\ensuremath{\mu_e}\xspace}}=\frac{1}{{\ensuremath{\mu_{P}}\xspace}}+\frac{1}{{\ensuremath{\mu_{NP}}\xspace}}$ (here, events are either predicted (true or false), or not). Predictor with exact event dates {#sec.no.intervals} ================================ In this section, we present an analytical model to assess the impact of prediction on periodic checkpointing strategies. We consider the case where the predictor is able to provide exact prediction dates, and to generate such predictions at least ${C\xspace}$ seconds in advance, so that a checkpoint can indeed be taken before the event (otherwise the prediction cannot be used, because there is not enough time to take proactive actions). We consider the following algorithm:\ (1) While no fault prediction is available, checkpoints are taken periodically with period ${\ensuremath{T}\xspace}$;\ (2) When a fault is predicted, we decide whether to take the prediction into account or not. This decision is randomly taken: with probability [$q$]{}, we trust the predictor and take the prediction into account, and, with probability $1-{\ensuremath{q}\xspace}$, we ignore the prediction. If we take the prediction into account, there are two cases. If we have enough time before the prediction date, we take a checkpoint as late as possible, i.e., so that it completes right at the time where the fault is predicted to happen. 
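For intuition, here is a small worked instance of these relationships (the values $\mu = 100$ s, ${\ensuremath{r}\xspace}= 0.8$, and ${\ensuremath{p}\xspace}= 0.4$ are ours, chosen only for illustration):

```latex
% Illustrative instance: \mu = 100 s, r = 0.8, p = 0.4.
\[
  \mu_{NP} = \frac{\mu}{1-r} = \frac{100}{0.2} = 500 \text{ s}, \qquad
  \mu_{P}  = \frac{p\,\mu}{r} = \frac{0.4 \times 100}{0.8} = 50 \text{ s},
\]
\[
  \frac{1}{\mu_e} = \frac{1}{\mu_P} + \frac{1}{\mu_{NP}}
                  = \frac{1}{50} + \frac{1}{500}
  \;\Longrightarrow\;
  \mu_e = \frac{500}{11} \approx 45.5 \text{ s}.
\]
```

Note how a mediocre precision (${\ensuremath{p}\xspace}= 0.4$) makes predicted events far more frequent than actual faults, which is exactly why the analysis must charge a cost for false positives.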
After the checkpoint, we then complete the execution of the period (see Figure \[fig.enoughtime\](a)). Otherwise, if we do not have enough time to take an extra checkpoint (because we are already checkpointing), then we do some extra work during $\varepsilon$ seconds (see Figure \[fig.no\_enoughtime\](b)). We account for this work as idle time in the expression of the waste, to ease the analysis. Our expression of the waste is thus an upper bound. The rationale for not always trusting the predictor is to avoid taking useless checkpoints too frequently. Intuitively, the precision ${\ensuremath{p}\xspace}$ of the predictor must be above a given threshold for its usage to be worthwhile. In other words, if we decide to checkpoint just before a predicted event, either we will save time by avoiding a costly re-execution if the event does correspond to an actual fault, or we will lose time by unduly performing an extra checkpoint. We need a larger proportion of the former cases, i.e., a good precision, for the predictor to be really useful. The following analysis will determine the optimal value of ${\ensuremath{q}\xspace}$ as a function of the parameters ${C\xspace}$, $\mu$, ${\ensuremath{r}\xspace}$, and ${\ensuremath{p}\xspace}$. \[fig.no\_enoughtime\] Computing the waste {#sec.nointalg} ------------------- Our goal in this section is to compute a formula for the expected waste. Recall that the waste is the fraction of time that the processors do not perform useful computations, either because they are checkpointing, or because a failure has struck. There are four different sources of waste (see Figure \[fig.waste-exact\]):\ (1) **Checkpoints:** During a fault-free execution, the fraction of resources used in checkpointing is $ \frac{{C\xspace}}{{T}}$.\ (2) **Unpredicted faults:** This overhead occurs each time a unpredicted fault strikes, that is, on average, once every ${\ensuremath{\mu_{NP}}\xspace}$ seconds. 
The time wasted because of the unpredicted fault is then the time elapsed between the last checkpoint and the fault, plus the downtime and the time needed for the recovery. The expectation of the time elapsed between the last checkpoint and the fault is equal to half the period of checkpoints, because the time where the fault hits the system is independent of the checkpointing algorithm. Finally, the waste due to unpredicted faults is: $ \frac{1}{{\ensuremath{\mu_{NP}}\xspace}} \left[ \frac{{T}}{2} + {D\xspace}+ {R\xspace}\right]$.\ (3) **Predictions taken into account:** Now we have to compute the execution overhead due to a prediction which we trust (hence we checkpoint just before its date). This overhead occurs each time a prediction is made by the predictor, that is, on average, once every ${\ensuremath{\mu_{P}}\xspace}$ seconds, and that we decide to trust it, with probability ${\ensuremath{q}\xspace}$. If the predicted event is an actual fault, we waste ${C\xspace}+{D\xspace}+{R\xspace}$ seconds: we waste ${D\xspace}+ {R\xspace}$ seconds because the predicted event corresponds to an actual fault, and if we have enough time before the prediction date, we waste ${C\xspace}$ seconds because we take an extra checkpoint as late as possible before the prediction date (see Figure \[fig.enoughtime\](a)). Note that if we do not have enough time to take an extra checkpoint (see Figure \[fig.no\_enoughtime\](b)), we overestimate the waste as ${C\xspace}$ seconds. If the predicted event is not an actual fault, we waste ${C\xspace}$ seconds. An actual fault occurs with probability ${\ensuremath{p}\xspace}$, and a false prediction is made with probability $(1-{\ensuremath{p}\xspace})$. Averaging with these probabilities, we waste an expected amount of $\left [ {p\xspace}({C\xspace}+ {D\xspace}+ {R\xspace}) + (1-{p\xspace}) {C\xspace}\right] $ seconds. 
Finally, the corresponding overhead is $\frac{1}{{\ensuremath{\mu_{P}}\xspace}} {\ensuremath{q}\xspace}\left [ {p\xspace}({C\xspace}+ {D\xspace}+ {R\xspace}) + (1-{p\xspace}) {C\xspace}\right]$.\ (4) **Ignored predictions:** The final source of waste is for predictions that we do not trust. This overhead occurs each time a prediction is made by the predictor, that is, on average, once every ${\ensuremath{\mu_{P}}\xspace}$ seconds, and that we decide not to trust it, with probability $1-{\ensuremath{q}\xspace}$. If the predicted event corresponds to an actual fault, we waste $(\frac{{T}}{2} +{D\xspace}+ {R\xspace})$ seconds (as for an unpredicted fault). Otherwise there is no fault and we took no extra checkpoint, and thus we lose nothing. An actual fault occurs with a probability [$p$]{}. The corresponding overhead is $\frac{1}{{\ensuremath{\mu_{P}}\xspace}} (1-{\ensuremath{q}\xspace}) \left [ {p\xspace}(\frac{{T}}{2} + {D\xspace}+ {R\xspace}) + (1-{p\xspace}) 0 \right] $.\ Summing up the overhead over the four different sources, and after simplification, we obtain the following equation for the waste: $${\ensuremath{\textsc{Waste}}\xspace}= \frac{{C\xspace}}{{T}} + \frac{1}{\mu} \left[ (1- {\ensuremath{r}\xspace}{\ensuremath{q}\xspace}) \frac{{T}}{2} + {D\xspace}+ {R\xspace}+ \frac{{\ensuremath{q}\xspace}{\ensuremath{r}\xspace}}{{p\xspace}} {C\xspace}\right] \label{eq.waste}$$ Validity of the analysis {#sec.validity} ------------------------ Equation (\[eq.waste\]) is accurate only when two events (an event being a prediction (true or false) or an unpredicted fault) do not take place within the same period. To ensure that this condition is met with a high probability, we bound the length of the period: without predictions, or when predictions are not taken into account, we enforce the condition ${T}< \alpha \mu$; otherwise, with predictions, we enforce the condition ${T}< \alpha {\ensuremath{\mu_e}\xspace}$. Here, $\alpha$ is some tuning parameter chosen as follows. 
The number of events during a period of length ${T}$ can be modeled as a Poisson process of parameter $\beta = \frac{{T}}{\mu}$ (without prediction) or $\beta = \frac{{T}}{{\ensuremath{\mu_e}\xspace}}$ (with prediction). The probability of having $k \geq 0$ faults is $P(X=k) = \frac{\beta^{k}}{k!} e^{-\beta}$, where $X$ is the number of faults. Hence the probability of having two or more faults is $\pi = P(X\geq2) = 1 -( P(X=0) + P(X=1)) = 1 - (1+\beta) e^{-\beta}$. If we choose $\alpha=0.27$, then $\pi \leq 0.03$, hence a valid approximation when bounding the period range accordingly. Indeed, with such a conservative value for $\alpha$, overlapping faults occur in only $3\%$ of the checkpointing segments on average, so that the model is quite reliable. In addition to the previous constraint, we must always enforce the condition ${C\xspace}\leq {T}$, by construction of the periodic checkpointing policy. Finally, the optimal waste may never exceed $1$; when the waste is equal to $1$, the application no longer makes any progress. Waste minimization {#sec.minwaste} ------------------ We differentiate Equation (\[eq.waste\]) twice with respect to [T]{}: $${\ensuremath{\textsc{Waste}}\xspace}'({T}) = \frac{-{C\xspace}}{{T}^{2}} + \frac{1}{\mu} \left[ (1- {\ensuremath{r}\xspace}{\ensuremath{q}\xspace}) \frac{1}{2}\right]$$ $${\ensuremath{\textsc{Waste}}\xspace}''({T}) = \frac{2 {C\xspace}}{{T}^{3} } > 0$$ We obtain that ${\ensuremath{\textsc{Waste}}\xspace}''({T}) $ is strictly positive, hence ${\ensuremath{\textsc{Waste}}\xspace}({T}) $ is a convex function of ${T}$ and admits a unique minimum on its domain. We also compute ${\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{q\}}}\xspace}$, the extremum value of ${T}$ that is the unique zero of the function ${\ensuremath{\textsc{Waste}}\xspace}'({T})$, as ${\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{q\}}}\xspace}=\sqrt{ \frac{2 \mu {C\xspace}}{1-{\ensuremath{r}\xspace}{\ensuremath{q}\xspace}}}$.
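As a quick numerical sanity check, the closed form for ${\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{q\}}}\xspace}$ can be compared with a direct evaluation of Equation (\[eq.waste\]). The Python sketch below uses purely illustrative parameter values (the numbers chosen for $C$, $D$, $R$, $\mu$, $p$, $r$ and $q$ are placeholders, not measurements):

```python
import math

# Illustrative values only (seconds); not taken from any experiment.
C, D, R = 600.0, 60.0, 300.0   # checkpoint, downtime, recovery durations
mu = 86_400.0                  # platform MTBF
p, r, q = 0.8, 0.7, 1.0        # precision, recall, trust probability

def waste(T):
    """Evaluate the waste of Equation (eq.waste) for a period T."""
    return C / T + ((1 - r * q) * T / 2 + D + R + (q * r / p) * C) / mu

# Zero of the derivative: T_extr = sqrt(2*mu*C / (1 - r*q)).
T_extr = math.sqrt(2 * mu * C / (1 - r * q))

# By convexity, the extremum must dominate any neighboring period.
assert waste(T_extr) < waste(0.9 * T_extr)
assert waste(T_extr) < waste(1.1 * T_extr)
```

By convexity, the analytical extremum must beat any nearby period, which the assertions verify.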
Note that this equation still makes sense when $1-{\ensuremath{r}\xspace}{\ensuremath{q}\xspace}=0$. Indeed, this would mean that both ${\ensuremath{r}\xspace}=1$ and ${\ensuremath{q}\xspace}=1$: the predictor predicts every fault, and we take proactive action for each one of them, so there should never be any periodic checkpointing! Finally, note that ${\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{q\}}}\xspace}$ may well not belong to the admissible domain $[{C\xspace}, \alpha {\ensuremath{\mu_e}\xspace}]$. The optimal waste ${\ensuremath{\textsc{Waste}_{\text{opt}}}\xspace}$ is determined via the following case analysis. We rewrite the waste as an affine function of ${\ensuremath{q}\xspace}$: $${\ensuremath{\textsc{Waste}}\xspace}({\ensuremath{q}\xspace}) = \frac{{\ensuremath{r}\xspace}{\ensuremath{q}\xspace}}{\mu}\left (\frac{{C\xspace}}{{p\xspace}}-\frac{{T}}{2}\right )+\left ( \frac{{C\xspace}}{{T}}+\frac{{T}}{2 \mu}+\frac{{D\xspace}+ {R\xspace}}{\mu} \right )$$ For any value of [T]{}, we deduce that ${\ensuremath{\textsc{Waste}}\xspace}({\ensuremath{q}\xspace})$ is minimized either for ${\ensuremath{q}\xspace}=0$ or for ${\ensuremath{q}\xspace}=1$. The (somewhat unexpected) conclusion is that, depending on the parameters, the predictor should either always be trusted or always be ignored; no in-between value for ${\ensuremath{q}\xspace}$ will do a better job. Thus we need to minimize the two functions ${\ensuremath{\textsc{Waste}}\xspace}^{\{0\}}$ and ${\ensuremath{\textsc{Waste}}\xspace}^{\{1\}}$ over the domain of admissible values for [T]{}, and to retain the best result. We have ${\ensuremath{\textsc{Waste}}\xspace}^{\{0\}}(T)= \frac{{C\xspace}}{{T}} + \frac{1}{\mu} \left[ \frac{{T}}{2} + {D\xspace}+ {R\xspace}\right]$. We recognize here the waste function of Young [@young74] and write ${\ensuremath{\textsc{Waste}_Y}\xspace}= \frac{{C\xspace}}{{T}} + \frac{1}{\mu} \left[ \frac{{T}}{2} + {D\xspace}+ {R\xspace}\right]$.
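This endpoint property is easy to visualize numerically: for any fixed period, sweeping ${\ensuremath{q}\xspace}$ over $[0,1]$ puts the minimum at one of the two endpoints, the winner being decided by the sign of the slope $\frac{r}{\mu}\left(\frac{C}{p}-\frac{T}{2}\right)$. All numerical values in the sketch below are illustrative placeholders:

```python
# Illustrative parameter values only.
C, D, R, mu = 600.0, 60.0, 300.0, 86_400.0
p, r = 0.8, 0.7
T = 15_000.0                   # an arbitrary fixed checkpointing period

def waste(q):
    """Waste of Equation (eq.waste), viewed as an affine function of q."""
    return r * q / mu * (C / p - T / 2) + (C / T + T / (2 * mu) + (D + R) / mu)

# Sweep q over a grid: the minimum always lies at an endpoint.
qs = [k / 10 for k in range(11)]
best_q = min(qs, key=waste)
assert best_q in (0.0, 1.0)
# The sign of the slope decides which endpoint wins.
assert (best_q == 1.0) == (C / p < T / 2)
```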
The function ${\ensuremath{\textsc{Waste}_Y}\xspace}(T)$ is a convex function and reaches its minimum for ${\ensuremath{T_{\text{Y}}}\xspace}$ in the interval $[{C\xspace},\alpha \mu]$: - If (${C\xspace}<{\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{0\}}}\xspace}<\alpha \mu$): ${\ensuremath{T_{\text{Y}}}\xspace}={\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{0\}}}\xspace}=\sqrt{2 \mu {C\xspace}}$ - If (${\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{0\}}}\xspace}<{C\xspace}$): ${\ensuremath{T_{\text{Y}}}\xspace}={C\xspace}$ - If (${\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{0\}}}\xspace} \geq \alpha \mu$): ${\ensuremath{T_{\text{Y}}}\xspace}=\alpha \mu$ Thus, [$\textsc{Waste}_Y$]{}($ = {\ensuremath{\textsc{Waste}}\xspace}^{\{0\}}$) is minimized for: $${\ensuremath{T_{\text{Y}}}\xspace}=\min \left ( \alpha \mu,\max(\sqrt{2 \mu {C\xspace}}, {C\xspace}) \right )$$ Similarly, we have: ${\ensuremath{\textsc{Waste}}\xspace}^{\{1\}}({T})=\frac{{C\xspace}}{{T}} + \frac{1}{\mu} \left[ (1- {\ensuremath{r}\xspace}) \frac{{T}}{2} + {D\xspace}+ {R\xspace}+ \frac{{\ensuremath{r}\xspace}}{{p\xspace}} {C\xspace}\right] $. The function ${\ensuremath{\textsc{Waste}}\xspace}^{\{1\}}(T)$ is a convex function and reaches its minimum for [$T_{1} $]{} in the interval $[{C\xspace},\alpha {\ensuremath{\mu_e}\xspace}]$. 
- If (${C\xspace}<{\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{1\}}}\xspace}<\alpha {\ensuremath{\mu_e}\xspace}$): ${\ensuremath{T_{1} }\xspace}={\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{1\}}}\xspace}=\sqrt{ \frac{2 \mu {C\xspace}}{1-{\ensuremath{r}\xspace}}}$ - If (${\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{1\}}}\xspace}<{C\xspace}$): ${\ensuremath{T_{1} }\xspace}={C\xspace}$ - If (${\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{1\}}}\xspace} \geq \alpha {\ensuremath{\mu_e}\xspace}$): ${\ensuremath{T_{1} }\xspace}=\alpha {\ensuremath{\mu_e}\xspace}$ Thus, ${\ensuremath{\textsc{Waste}}\xspace}^{\{1\}}$ is minimized for: $${\ensuremath{T_{1} }\xspace}=\min \left ( \alpha {\ensuremath{\mu_e}\xspace},\max(\sqrt{ \frac{2 \mu {C\xspace}}{1-{\ensuremath{r}\xspace}} }, {C\xspace}) \right )$$ Finally, the optimal waste is: $${\ensuremath{\textsc{Waste}_{\text{opt}}}\xspace}= \min \left ({\ensuremath{\textsc{Waste}_Y}\xspace}({\ensuremath{T_{\text{Y}}}\xspace}),{\ensuremath{\textsc{Waste}}\xspace}^{\{1\}}({\ensuremath{T_{1} }\xspace}) \right )$$ Prediction and preventive migration {#sec.migration} ----------------------------------- In this section, we make a short digression and briefly present an analytical model to assess the impact of prediction and preventive migration on periodic checkpointing strategies. As before, we consider a predictor that is able to predict exactly when faults happen, and to generate these predictions at least ${C\xspace}$ seconds before the event dates. The idea of migration consists in moving a task for execution on another node, when a fault is predicted to happen on the current node in the near future. Note that the faulty node can later be replaced, in case of a hardware fault, or software rejuvenation can be used in case of a software fault. We consider the following algorithm, which is very similar to that used in Section \[sec.nointalg\]: 1. 
When no fault prediction is available, checkpoints are taken periodically with period ${\ensuremath{T}\xspace}$. 2. When a fault is predicted, we decide whether to execute the migration or not. The decision is a random one: with probability [$q$]{}we trust the predictor and do the migration and, with probability 1-[$q$]{}, we ignore the prediction. If we take the prediction into account, we execute the migration as late as possible, so that it completes right at the time when the fault is predicted to happen. As before, we have four different sources of waste. Summing the overheads of these different sources, we obtain the following equation for the waste (where ${M}$ is the duration of a migration): $$\begin{aligned} {\ensuremath{\textsc{Waste}}\xspace}&= \frac{{C\xspace}}{{T}} \\&+ \frac{1}{{\ensuremath{\mu_{NP}}\xspace}} \left[ \frac{{T}}{2} + {D\xspace}+ {R\xspace}\right] \\&+ \frac{1}{{\ensuremath{\mu_{P}}\xspace}} {\ensuremath{q}\xspace}\left [ {p\xspace}{M}+ (1-{p\xspace}) {M}\right] \\&+ \frac{1}{{\ensuremath{\mu_{P}}\xspace}} (1-{\ensuremath{q}\xspace}) \left [ {p\xspace}(\frac{{T}}{2} + {D\xspace}+ {R\xspace}) + (1-{p\xspace}) 0 \right]\end{aligned}$$ After simplification, we get: $${\ensuremath{\textsc{Waste}}\xspace}= \frac{{C\xspace}}{{T}} + \frac{1}{\mu} \left[ (1- {\ensuremath{r}\xspace}{\ensuremath{q}\xspace}) \left (\frac{{T}}{2}+{D\xspace}+ {R\xspace}\right ) + \frac{{\ensuremath{q}\xspace}{\ensuremath{r}\xspace}}{{p\xspace}} {M}\right] \label{eq.wasteM}$$ Equation (\[eq.wasteM\]) is very similar to Equation (\[eq.waste\]), and the minimization of the waste proceeds exactly as in Section \[sec.minwaste\].
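To compare proactive migration with proactive checkpointing, Equations (\[eq.waste\]) and (\[eq.wasteM\]) can be evaluated side by side. The sketch below uses illustrative placeholder values only; it merely checks that the gap between the two wastes matches the algebra, namely that migration replaces the cost $\frac{qr}{p}C$ by $\frac{qr}{p}M$ and additionally avoids the downtime and recovery on trusted predictions:

```python
# Illustrative values only.
C, M, D, R, mu = 600.0, 200.0, 60.0, 300.0, 86_400.0
p, r, q = 0.8, 0.7, 1.0

def waste_ckpt(T):
    """Equation (eq.waste): proactive checkpoint before each trusted prediction."""
    return C / T + ((1 - r * q) * T / 2 + D + R + (q * r / p) * C) / mu

def waste_migr(T):
    """Equation (eq.wasteM): proactive migration instead of a checkpoint."""
    return C / T + ((1 - r * q) * (T / 2 + D + R) + (q * r / p) * M) / mu

T = 15_000.0
# The gap is (q*r/p)*(C - M)/mu plus the avoided q*r*(D + R)/mu.
gap = waste_ckpt(T) - waste_migr(T)
expected = ((q * r / p) * (C - M) + q * r * (D + R)) / mu
assert abs(gap - expected) < 1e-12
```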
In a nutshell, ${\ensuremath{\textsc{Waste}}\xspace}(T) $ is again a convex function and admits a unique minimum over its domain $[{C\xspace}, \alpha {\ensuremath{\mu_e}\xspace}]$; the unique zero of the derivative has the same value ${\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{q\}}}\xspace}=\sqrt{ \frac{2 \mu {C\xspace}}{1-{\ensuremath{r}\xspace}{\ensuremath{q}\xspace}}}$, and for any value of $T$, the waste is minimized for either ${\ensuremath{q}\xspace}=0$ or ${\ensuremath{q}\xspace}=1$. We conduct the very same case analysis as in Section \[sec.minwaste\]. Predictor with a prediction window {#sec.intervals} ================================== In the previous section, we supposed that the predictor was able to predict exactly when faults will strike. Here, we suppose (maybe more realistically) that the predictor gives a *prediction window*, that is, an interval of time of length [$I$]{}during which the predicted fault is likely to happen. As before in Section \[sec.no.intervals\]: (i) We suppose that we have enough time to checkpoint before the beginning of the prediction window; and (ii) When a prediction is made, we enforce that the scheduling algorithm has the choice either to take or not to take this prediction into account, with probability [$q$]{}. We start with a description of the strategies that can be used, depending upon the (relative) length [$I$]{}of the prediction window. Let us define two *modes* for the scheduling algorithm:\ **Regular**: This is the mode used when no fault prediction is available, or when a prediction is available but we decide to ignore it (with probability $1-{\ensuremath{q}\xspace}$). In regular mode, we use periodic checkpointing with period [$T_{\text{R}}$]{}. Intuitively, [$T_{\text{R}}$]{}corresponds to the checkpointing period $T$ of Section \[sec.no.intervals\].\ **Proactive**: This is the mode used when a fault prediction is available and we decide to trust it, a decision taken with probability [$q$]{}.
Consider such a trusted prediction made with the prediction window $[t_0,t_0+{\ensuremath{I}\xspace}]$. Several strategies can be envisioned:\ (1) [<span style="font-variant:small-caps;">Instant</span>]{}, for *Instantaneous–* The first strategy is to ignore the time-window and to execute the same algorithm as if the predictor had given an exact date prediction at time $t_{0}$. Just as described in Section \[sec.no.intervals\], the algorithm interrupts the current period (of scheduled length [$T_{\text{R}}$]{}), checkpoints during the interval $[t_{0}-C,t_{0}]$, and then returns to regular mode: at time $t_{0}$, it resumes the work needed to complete the interrupted period of the regular mode.\ (2) [<span style="font-variant:small-caps;">NoCkptI</span>]{}, for *No checkpoint during prediction window–* The second strategy is intended for a short prediction window: instead of ignoring it, we acknowledge it, but make the decision not to checkpoint during it. As in the first strategy, the algorithm interrupts the current period (of scheduled length [$T_{\text{R}}$]{}), and checkpoints during the interval $[t_{0}-C,t_{0}]$. But here, we return to regular mode only at time $t_0+{\ensuremath{I}\xspace}$, where we resume the work needed to complete the interrupted period of the regular mode. During the whole length of the time-window, we execute work without checkpointing, at the risk of losing work if a fault indeed strikes. 
But for a small value of [$I$]{}, it may not be worthwhile to checkpoint during the prediction window (if at all possible, since there is no choice if ${\ensuremath{I}\xspace}< C$).\ (3) [<span style="font-variant:small-caps;">WithCkptI</span>]{}, for *With checkpoints during prediction window–* The third strategy is intended for a longer prediction window and assumes that ${C\xspace}\leq {\ensuremath{I}\xspace}$: the algorithm interrupts the current period (of scheduled length [$T_{\text{R}}$]{}), and checkpoints during the interval $[t_{0}-C,t_{0}]$, but now decides to take several checkpoints during the prediction window. The period [$T_{\text{P}}$]{}of these checkpoints in proactive mode will presumably be shorter than [$T_{\text{R}}$]{}, to take into account the higher fault probability. To simplify the presentation, we use an integer number of periods of length [$T_{\text{P}}$]{} within the prediction window. In the following, we analytically compute the optimal number of such periods. But we take at least one period here, hence one checkpoint, which implies $C \leq I$. We return to regular mode either right after the fault strikes within the time window $[t_0,t_0+{\ensuremath{I}\xspace}]$, or at time $t_0+{\ensuremath{I}\xspace}$ if no actual fault happens within this window. Then, we resume the work needed to complete the interrupted period of the regular mode. The third strategy is the most complex to describe, and the complete behavior of the scheduling algorithm is shown in Algorithm \[algo.proactive\]. Note that for all strategies, exactly as in Section \[sec.no.intervals\], we insert some additional work for the particular case where there is not enough time to take a checkpoint before entering proactive mode (because a checkpoint for the regular mode is currently on-going, see Figure \[fig.no\_enoughtime\](b)). We account for this work as idle time in the expression of the waste, to ease the analysis. 
Our expression of the waste is thus an upper bound. Waste for strategy [<span style="font-variant:small-caps;">WithCkptI</span>]{} {#sec-waste-int} ------------------------------------------------------------------------------ In this section we focus on computing the waste of [<span style="font-variant:small-caps;">WithCkptI</span>]{}, the most complex strategy. We first compute the fraction of time spent in the *regular* mode (checkpointing with period [$T_{\text{R}}$]{}) and the fraction of time spent in the *proactive* mode (checkpointing with period [$T_{\text{P}}$]{}). Let [$I'$]{}be the average time spent in the *proactive* mode. When a prediction is made, we may choose to ignore it, which happens with probability $1-{\ensuremath{q}\xspace}$. In this case, the algorithm stays in regular mode and does not spend any time in the proactive mode. With probability [$q$]{}, we may decide to take the prediction into account. In this case, if the prediction is a false positive event (no actual fault strikes), which happens with probability $1-{\ensuremath{p}\xspace}$, then the algorithm spends [$I$]{}units of time in the proactive mode. Otherwise, if the prediction is a true positive event (an actual fault hits the system), which happens with probability ${\ensuremath{p}\xspace}$, then the algorithm spends an average of ${\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}$ in the proactive mode. Here ${\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}$ is the expectation of the time elapsed between the beginning of the prediction window and the time when a fault happens, knowing that a fault happens in the prediction window. Note that if faults are uniformly distributed across the prediction window, then ${\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}= \frac{{\ensuremath{I}\xspace}}{2}$. 
Altogether, we obtain $ {\ensuremath{I'}\xspace}= {\ensuremath{q}\xspace}\left((1-{\ensuremath{p}\xspace}){\ensuremath{I}\xspace}+ {\ensuremath{p}\xspace}{\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}\right)$. Each time there is a prediction, that is, on the average, every ${\ensuremath{\mu_{P}}\xspace}$ seconds, the algorithm spends a time ${\ensuremath{I'}\xspace}$ in the proactive mode. Therefore, Algorithm \[algo.proactive\] spends a fraction of time $\frac{{\ensuremath{I'}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}$ in the proactive mode, and a fraction of time $1-\frac{{\ensuremath{I'}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}$ in the regular mode. As in Section \[sec.no.intervals\], we assume that there is a single event of any type (either a prediction (true or false), or an unpredicted failure) within each interval under study. The condition $T \leq \alpha {\ensuremath{\mu_e}\xspace}$ then becomes ${\ensuremath{T_{\text{R}}}\xspace}+ {\ensuremath{I}\xspace}\leq \alpha {\ensuremath{\mu_e}\xspace}$, since ${\ensuremath{T_{\text{R}}}\xspace}+{\ensuremath{I}\xspace}$ is the longest time interval considered in the analysis of Algorithm \[algo.proactive\]. We now identify the four different sources of waste, and we analyze their respective costs:\ (1) **Waste due to periodic checkpointing.** There are two cases, depending upon the mode of Algorithm \[algo.proactive\]:\ (a) **Regular mode.** In this mode, we take periodic checkpoints. We take a checkpoint of size [C]{}each time the algorithm has processed work for a time ${\ensuremath{T_{\text{R}}}\xspace}-{C\xspace}$ in the regular mode. This remains true if, after spending some time in the regular mode, the algorithm switches to the proactive mode, and later switches back to the regular mode. 
This behavior is enforced by recording the amount of work performed under the regular mode (variable [$W_{\mathit{reg}}$]{}, at line \[algo.proactive.wreg\] of Algorithm \[algo.proactive\]), and by taking this value into account at line \[algo.proactive.completion\]. Given the fraction of time that Algorithm \[algo.proactive\] spends in the regular mode, this source of waste has a total cost of $\left(1 -\frac{{\ensuremath{I'}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}\right)\frac{{C\xspace}}{{\ensuremath{T_{\text{R}}}\xspace}}$.\ (b) **Proactive mode.** In this mode, we take a checkpoint of size [C]{}each time the algorithm has processed work for a time ${\ensuremath{T_{\text{P}}}\xspace}-{C\xspace}$. If no fault happens while the algorithm is in the proactive mode, then the algorithm stays exactly a time [$I$]{}in this mode (thanks to the condition at line \[algo.proactive.Ilimit\]). The waste due to the periodic checkpointing is then exactly $\frac{{C\xspace}}{{\ensuremath{T_{\text{P}}}\xspace}}$ (because [$T_{\text{P}}$]{}divides [$I$]{}). If a fault happens while the algorithm is in proactive mode, then, the expectation of the waste due to the periodic checkpointing is upper-bounded by the same quantity $ \frac{{C\xspace}}{{\ensuremath{T_{\text{P}}}\xspace}}$ (this is an over-approximation of the waste in that case). Overall, taking into account the fraction of time Algorithm \[algo.proactive\] is in the proactive mode, the cost of this source of waste is $\frac{{\ensuremath{I'}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}\frac{{C\xspace}}{{\ensuremath{T_{\text{P}}}\xspace}}$.\ (2) **Waste incurred when switching to the proactive mode.** Each time we take into account a prediction (which happens with probability [$q$]{}on average every [$\mu_{P}$]{}units of time), we start by doing one preliminary checkpoint if we have the time to do so (line \[algo.proactive.addC\]). 
If we do not have the time to take an additional checkpoint, the algorithm does not perform any processing for a duration of at most [C]{} (line \[algo.proactive.wait\]). In both cases, the wasted time is at most [C]{}and this happens once every $\frac{{\ensuremath{\mu_{P}}\xspace}}{{\ensuremath{q}\xspace}}$ seconds on average. Hence, switching from the regular mode to the proactive one induces a waste of at most $\frac{{\ensuremath{q}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}C$.\ (3) **Waste due to predicted faults.** Predicted faults happen with frequency $\frac{{\ensuremath{p}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}$. As we may choose to ignore a prediction, there are still two cases depending on the mode of the algorithm at the time of the fault:\ (a) **Regular mode.** If the algorithm is in regular mode when a predicted fault hits, this means that we have chosen to ignore the prediction, a decision taken with probability $(1-{\ensuremath{q}\xspace})$. The time wasted because of the predicted fault is then the time elapsed between the last checkpoint and the fault, plus the downtime and the time needed for the recovery. The expectation of the time elapsed between the last checkpoint and the fault is equal to half the period of checkpoints, because the time at which the fault hits the system is independent of the checkpointing algorithm. Therefore, the waste due to predicted faults hitting the system in regular mode is $\frac{{\ensuremath{p}\xspace}(1-{\ensuremath{q}\xspace})}{{\ensuremath{\mu_{P}}\xspace}}\left(\frac{{\ensuremath{T_{\text{R}}}\xspace}}{2}+{D\xspace}+{R\xspace}\right)$.\ (b) **Proactive mode.** If the algorithm is in proactive mode when a fault hits, then we have chosen to take the prediction into account, a decision that is taken with probability ${\ensuremath{q}\xspace}$.
The time wasted because of the predicted fault is then, in addition to the downtime and the time needed for the recovery, the time elapsed between the last checkpoint and the fault or, if no checkpoint had already been taken in the proactive mode, the time elapsed between the start of the proactive mode and the fault. Here, we can no longer assume that the time the fault hits the system is independent of the checkpointing date. This is because the proactive mode starts exactly at the beginning of the prediction window. Let [$T_{\text{lost}}$]{}denote the computation time elapsed between the latest of the beginning of the proactive mode and the last checkpoint, and the fault date. Then the expectation of [$T_{\text{lost}}$]{}depends on the distribution of the fault date in the prediction window. However, we know that whatever the distribution, ${\ensuremath{T_{\text{lost}}}\xspace}\leq {\ensuremath{T_{\text{P}}}\xspace}$. Therefore we over-approximate the waste in that case by $\frac{{\ensuremath{q}\xspace}{\ensuremath{p}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}\left({\ensuremath{T_{\text{P}}}\xspace}+{D\xspace}+{R\xspace}\right)$.\ (4) **Waste due to unpredicted faults.** There are again two cases, depending upon the mode of the algorithm at the time the fault hits the system:\ (a) **Regular mode.** In this mode the work done is periodically checkpointed with period [$T_{\text{R}}$]{}. The time wasted because of an unpredicted fault is then the time elapsed between the last checkpoint and the fault, plus the downtime and the time needed for the recovery. As before, the expectation of this value is ${\ensuremath{T_{\text{lost}}}\xspace}= \frac{{\ensuremath{T_{\text{R}}}\xspace}}{2}$. An unpredicted fault hits the system once every ${\ensuremath{\mu_{NP}}\xspace}$ seconds on average.
Taking into account the fraction of the time the algorithm is in regular mode, the waste due to unpredicted faults hitting the system in regular mode is $\left(1-\frac{{\ensuremath{I'}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}\right)\frac{1}{{\ensuremath{\mu_{NP}}\xspace}}\left(\frac{{\ensuremath{T_{\text{R}}}\xspace}}{2}+{D\xspace}+{R\xspace}\right)$.\ (b) **Proactive mode.** Because of the assumption that a single event takes place within a time-interval, we do not consider the very unlikely case where an unpredicted fault strikes during a prediction window. This amounts to assuming that $\frac{{\ensuremath{I'}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}\frac{1}{{\ensuremath{\mu_{NP}}\xspace}}({\ensuremath{T_{\text{P}}}\xspace}+{D\xspace}+{R\xspace})$ is negligible. We gather the expressions of the six different types of waste and simplify to obtain the formula of the overall waste: $$\begin{aligned} {\ensuremath{\textsc{Waste}}\xspace}_{{\textsc{WithCkptI}\xspace}}&= \quad \left(\left (1 -\frac{{\ensuremath{I'}\xspace}}{{\ensuremath{\mu_{P}}\xspace}} \right )\frac{1}{{\ensuremath{T_{\text{R}}}\xspace}} + \frac{{\ensuremath{I'}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}\frac{1}{{\ensuremath{T_{\text{P}}}\xspace}} + \frac{{\ensuremath{q}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}\right){C\xspace}+ \frac{{\ensuremath{p}\xspace}(1-{\ensuremath{q}\xspace})}{{\ensuremath{\mu_{P}}\xspace}}\frac{{\ensuremath{T_{\text{R}}}\xspace}}{2} \nonumber \\ & + \frac{{\ensuremath{p}\xspace}{\ensuremath{q}\xspace}}{{\ensuremath{\mu_{P}}\xspace}} {\ensuremath{T_{\text{P}}}\xspace}+\left (1 -\frac{{\ensuremath{I'}\xspace}}{{\ensuremath{\mu_{P}}\xspace}} \right ) \frac{1}{{\ensuremath{\mu_{NP}}\xspace}} \frac{{\ensuremath{T_{\text{R}}}\xspace}}{2} \nonumber \\ & + \left(\frac{{\ensuremath{p}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}+\left(1-\frac{{\ensuremath{I'}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}\right)\frac{1}{{\ensuremath{\mu_{NP}}\xspace}}\right)\left({D\xspace}+{R\xspace}\right)
\label{eq.proa.waste}\end{aligned}$$ Waste of the other strategies {#sec-waste-other} ----------------------------- The waste of the first strategy (*Instantaneous*) is very close to the one given in Equation (\[eq.waste\]). The difference lies in [$T_{\text{lost}}$]{}, the expectation of the work lost when a fault is predicted and the prediction is taken into account. When a prediction is taken into account and the predicted event is an actual fault, the waste in Equation (\[eq.waste\]) was $\frac{{\ensuremath{q}\xspace}{p\xspace}}{{\ensuremath{\mu_{P}}\xspace}}({C\xspace}+ {D\xspace}+ {R\xspace})$. Because the prediction was exact, [$T_{\text{lost}}$]{}was equal to 0. However, in our new equation, the waste for this part is now $\frac{{\ensuremath{q}\xspace}{p\xspace}}{{\ensuremath{\mu_{P}}\xspace}}({C\xspace}+ {\ensuremath{T_{\text{lost}}}\xspace}+ {D\xspace}+ {R\xspace})$. On average, the fault occurs a time [$\mathbb{E}_{I}^{(f)}$]{}after the beginning of the prediction window; however, since we do not know how [$\mathbb{E}_{I}^{(f)}$]{}compares with [$T_{\text{R}}$]{}, we bound the expectation of [$T_{\text{lost}}$]{}by $\min \left ( {\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}, \frac{{\ensuremath{T_{\text{R}}}\xspace}}{2} \right)$. The new waste is then: $$\begin{aligned} {\ensuremath{\textsc{Waste}}\xspace}_{{\textsc{Instant}\xspace}} = \frac{{C\xspace}}{{\ensuremath{T_{\text{R}}}\xspace}} + \frac{1}{\mu} \left[ (1- {\ensuremath{r}\xspace}{\ensuremath{q}\xspace}) \frac{{\ensuremath{T_{\text{R}}}\xspace}}{2} + {D\xspace}+ {R\xspace}\right. \left.
+ \frac{{\ensuremath{q}\xspace}{\ensuremath{r}\xspace}}{{p\xspace}} {C\xspace}+{\ensuremath{q}\xspace}{\ensuremath{r}\xspace}\min \left ( {\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}, \frac{{\ensuremath{T_{\text{R}}}\xspace}}{2} \right) \right] \label{eq.waste-instant}\end{aligned}$$ As for the second strategy (*No checkpoint during prediction window*), we no longer incur the waste due to checkpointing in proactive mode, since no checkpoint is taken there. Furthermore, the value of [$T_{\text{lost}}$]{}in proactive mode becomes [$\mathbb{E}_{I}^{(f)}$]{}instead of [$T_{\text{P}}$]{}. Consequently, the total waste when there is no checkpoint during the proactive mode is: $$\begin{aligned} {\ensuremath{\textsc{Waste}}\xspace}_{\text{noCkpt}} &=\left (1 -\frac{{\ensuremath{I'}\xspace}}{{\ensuremath{\mu_{P}}\xspace}} \right )\frac{{C\xspace}}{{\ensuremath{T_{\text{R}}}\xspace}} + \frac{{\ensuremath{q}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}{C\xspace}+ \frac{{\ensuremath{p}\xspace}(1-{\ensuremath{q}\xspace})}{{\ensuremath{\mu_{P}}\xspace}}\left (\frac{{\ensuremath{T_{\text{R}}}\xspace}}{2} + {D\xspace}+{R\xspace}\right) \\ & + \frac{{\ensuremath{p}\xspace}{\ensuremath{q}\xspace}}{{\ensuremath{\mu_{P}}\xspace}} \left ({\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}+ {D\xspace}+{R\xspace}\right) +\left (1 -\frac{{\ensuremath{I'}\xspace}}{{\ensuremath{\mu_{P}}\xspace}} \right ) \frac{1}{{\ensuremath{\mu_{NP}}\xspace}} \left( \frac{{\ensuremath{T_{\text{R}}}\xspace}}{2} + {D\xspace}+{R\xspace}\right) \nonumber \\\end{aligned}$$ which we rewrite as $$\begin{aligned} {\ensuremath{\textsc{Waste}}\xspace}_{{\textsc{NoCkptI}\xspace}} &=\left(\left (1 -\frac{{\ensuremath{I'}\xspace}}{{\ensuremath{\mu_{P}}\xspace}} \right )\frac{1}{{\ensuremath{T_{\text{R}}}\xspace}} + \frac{{\ensuremath{q}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}\right){C\xspace}+
\frac{{\ensuremath{p}\xspace}(1-{\ensuremath{q}\xspace})}{{\ensuremath{\mu_{P}}\xspace}}\frac{{\ensuremath{T_{\text{R}}}\xspace}}{2} \nonumber \\ & + \frac{{\ensuremath{p}\xspace}{\ensuremath{q}\xspace}}{{\ensuremath{\mu_{P}}\xspace}} {\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}+\left (1 -\frac{{\ensuremath{I'}\xspace}}{{\ensuremath{\mu_{P}}\xspace}} \right ) \frac{1}{{\ensuremath{\mu_{NP}}\xspace}} \frac{{\ensuremath{T_{\text{R}}}\xspace}}{2} \nonumber \\ & + \left(\frac{{\ensuremath{p}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}+\left(1-\frac{{\ensuremath{I'}\xspace}}{{\ensuremath{\mu_{P}}\xspace}}\right)\frac{1}{{\ensuremath{\mu_{NP}}\xspace}}\right)\left({D\xspace}+{R\xspace}\right) \label{eq.proa.noCkpt.waste}\end{aligned}$$ Note that when ${\ensuremath{I}\xspace}=0$, [<span style="font-variant:small-caps;">Instant</span>]{}and [<span style="font-variant:small-caps;">NoCkptI</span>]{}are identical. Indeed, we have ${\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}=0$ if ${\ensuremath{I}\xspace}=0$, and we check that Equations (\[eq.waste-instant\]) and (\[eq.proa.noCkpt.waste\]) are identical in that case. Waste minimization {#sec-opt-int} ------------------ In this section we aim at minimizing the waste of the three strategies, and then we find conditions to characterize which one is the best.
Recall that: $${\ensuremath{I'}\xspace}= {\ensuremath{q}\xspace}\left ( (1 - {\ensuremath{p}\xspace}) {\ensuremath{I}\xspace}+ {\ensuremath{p}\xspace}{\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}\right )$$ **[<span style="font-variant:small-caps;">WithCkptI</span>]{}.** In order to compute the optimal value for [$T_{\text{P}}$]{}, let us find the portion of the waste that depends on [$T_{\text{P}}$]{}: $${\ensuremath{\textsc{Waste}}\xspace}_{{\ensuremath{T_{\text{P}}}\xspace}} = \frac{{\ensuremath{r}\xspace}{\ensuremath{q}\xspace}}{ \mu}\left ( \frac{ (1 - {\ensuremath{p}\xspace}) {\ensuremath{I}\xspace}+ {\ensuremath{p}\xspace}{\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}}{{\ensuremath{p}\xspace}} \frac{ {C\xspace}}{{\ensuremath{T_{\text{P}}}\xspace}} + {\ensuremath{T_{\text{P}}}\xspace}\right )$$ As we can see, the optimal value for [$T_{\text{P}}$]{}is independent of [$q$]{}, and also of $\mu$. The optimal value for [$T_{\text{P}}$]{}is thus: $$\label{tp.opt.int} {\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{extr}}}}=\sqrt{ \dfrac{(1 - {\ensuremath{p}\xspace}) {\ensuremath{I}\xspace}+ {\ensuremath{p}\xspace}{\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}}{p} {C\xspace}}$$ However, for our algorithm to be correct, we want $\frac{{\ensuremath{I}\xspace}}{{\ensuremath{T_{\text{P}}}\xspace}} \in \mathbb{N}$ (the interval [$I$]{} is partitioned into $k$ intervals of length [$T_{\text{P}}$]{}, for some integer $k$). We choose ${\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{opt}}}}$ equal to either $\frac{{\ensuremath{I}\xspace}}{\left \lfloor \frac{{\ensuremath{I}\xspace}}{{\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{extr}}}}}\right \rfloor}$ or $\frac{{\ensuremath{I}\xspace}}{\left \lfloor \frac{{\ensuremath{I}\xspace}}{{\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{extr}}}}}\right \rfloor +1}$, depending on the value that minimizes ${\ensuremath{\textsc{Waste}}\xspace}_{{\ensuremath{T_{\text{P}}}\xspace}}$.
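The rounding step can be sketched as follows, with placeholder values for ${\ensuremath{p}\xspace}$, ${\ensuremath{I}\xspace}$, ${\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}$ and ${C\xspace}$ (we assume a uniform fault distribution in the window, so that ${\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}={\ensuremath{I}\xspace}/2$):

```python
import math

# Placeholder values for illustration only.
p = 0.8          # predictor precision
I = 3_000.0      # length of the prediction window (seconds)
E_f = I / 2      # E_I^(f) under a uniform fault distribution
C = 600.0        # checkpoint duration

def waste_tp(Tp):
    """Part of the waste depending on T_P (up to the constant factor r*q/mu)."""
    a = ((1 - p) * I + p * E_f) / p
    return a * C / Tp + Tp

# Unconstrained extremum, Equation (tp.opt.int) in the text.
Tp_extr = math.sqrt(((1 - p) * I + p * E_f) / p * C)

# Round to a period that divides I: compare floor and floor+1 segment counts.
k = max(1, math.floor(I / Tp_extr))
Tp_opt = min(I / k, I / (k + 1), key=waste_tp)
assert round(I / Tp_opt) in (k, k + 1)   # an integer number of periods fits in I
```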
Note that we also have the constraint ${\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{opt}}}} \geq {C\xspace}$, hence if both values are lower than [C]{}, then ${\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{opt}}}}={C\xspace}$. Now that we know that ${\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{opt}}}}$ is independent of both [$q$]{}and [$T_{\text{R}}$]{}, we can see the waste in Equation (\[eq.proa.waste\]) as a function of two variables. One can see from Equation (\[eq.proa.waste\]) that the waste is an affine function of [$q$]{}. This means that the minimum is always reached for either ${\ensuremath{q}\xspace}=0$ or ${\ensuremath{q}\xspace}=1$. We now consider the two functions ${\ensuremath{\textsc{Waste}}\xspace}_{\text{withCkpt}\{{\ensuremath{q}\xspace}=0\}}$ and ${\ensuremath{\textsc{Waste}}\xspace}_{\text{withCkpt}\{{\ensuremath{q}\xspace}=1\}}$ in order to minimize them with respect to [$T_{\text{R}}$]{}. First we have: $$\label{waste.int.q0} {\ensuremath{\textsc{Waste}}\xspace}_{\text{withCkpt}\{{\ensuremath{q}\xspace}=0\}} =\frac{{C\xspace}}{{\ensuremath{T_{\text{R}}}\xspace}} + \frac{1}{\mu}\left ( \frac{{\ensuremath{T_{\text{R}}}\xspace}}{2} + {D\xspace}+{R\xspace}\right )$$ As expected, this is exactly the equation without prediction; the study of the optimal solution was done in Section \[sec.no.intervals\], and it is minimized when ${\ensuremath{T_{\text{R}}}\xspace}^{{\ensuremath{\text{opt}}}_0} =\min \left( \alpha {\ensuremath{\mu_e}\xspace}- {\ensuremath{I}\xspace}, \max \left ( \sqrt{2 {C\xspace}\mu}, {C\xspace}\right )\right )$.
Next we have: $$\begin{aligned} \label{waste.int.q1} {\ensuremath{\textsc{Waste}}\xspace}_{\text{withCkpt}\{{\ensuremath{q}\xspace}=1\}} &= \left (1 -\frac{{\ensuremath{r}\xspace}\left ( (1 - {\ensuremath{p}\xspace}) {\ensuremath{I}\xspace}+ {\ensuremath{p}\xspace}{\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}\right )}{{\ensuremath{p}\xspace}\mu} \right ) \left ( \frac{{C\xspace}}{{\ensuremath{T_{\text{R}}}\xspace}} + \frac{1-{\ensuremath{r}\xspace}}{\mu}\frac{{\ensuremath{T_{\text{R}}}\xspace}}{2} \right )\nonumber \\ & +\frac{{\ensuremath{r}\xspace}}{ \mu}\left ( \frac{\left ( (1 - {\ensuremath{p}\xspace}) {\ensuremath{I}\xspace}+ {\ensuremath{p}\xspace}{\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}\right )}{{\ensuremath{p}\xspace}} \frac{{C\xspace}}{{\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{opt}}}}} + {\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{opt}}}} \right ) + \frac{{\ensuremath{r}\xspace}}{{\ensuremath{p}\xspace}\mu}{C\xspace}\nonumber \\ & + \left(\frac{{\ensuremath{r}\xspace}}{\mu}+\left (1 -\frac{{\ensuremath{r}\xspace}\left ( (1 - {\ensuremath{p}\xspace}) {\ensuremath{I}\xspace}+ {\ensuremath{p}\xspace}{\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}\right )}{{\ensuremath{p}\xspace}\mu} \right )\frac{1-{\ensuremath{r}\xspace}}{\mu}\right)\left({D\xspace}+{R\xspace}\right)\end{aligned}$$ This equation is minimized when $${\ensuremath{T_{\text{R}}}\xspace}^{{\ensuremath{\text{opt}}}_1} = \sqrt{ \dfrac{2 \mu{C\xspace}}{(1-{\ensuremath{r}\xspace})}}$$ One can remark that this value is equal to the result without intervals (Section \[sec.no.intervals\]). Actually, the only impact of the prediction interval [$I$]{}is the moment when we should take a pre-emptive action. Note that when ${\ensuremath{r}\xspace}=0$ (this means that there is no prediction), we have ${\ensuremath{T_{\text{R}}}\xspace}^{{\ensuremath{\text{opt}}}_1} = {\ensuremath{T_{\text{R}}}\xspace}^{{\ensuremath{\text{opt}}}_0} $, and we retrieve Young’s formula [@young74]. 
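As a quick sanity check, the unified regular-mode period can be evaluated numerically; this small Python helper (name and values are illustrative) makes the $r=0$ degeneracy to Young's formula explicit:

```python
import math

def regular_period(mu, C, r=0.0):
    """Uncapped regular-mode period sqrt(2 * mu * C / (1 - r)).

    With r = 0 (no prediction) this is exactly Young's formula sqrt(2 * mu * C);
    a higher recall r lengthens the period, since predicted faults are handled
    proactively rather than by the periodic checkpoints.
    """
    return math.sqrt(2 * mu * C / (1 - r))
```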
Finally, we know that the waste is defined for ${C\xspace}\leq {\ensuremath{T_{\text{R}}}\xspace}\leq \alpha {\ensuremath{\mu_e}\xspace}- {\ensuremath{I}\xspace}$. Hence, if ${\ensuremath{T_{\text{R}}}\xspace}^{{\ensuremath{\text{opt}}}_1} \notin [{C\xspace},\alpha {\ensuremath{\mu_e}\xspace}- {\ensuremath{I}\xspace}]$, this solution is not admissible. However, Equation  is convex, so the optimal solution is [C]{}if ${\ensuremath{T_{\text{R}}}\xspace}^{{\ensuremath{\text{opt}}}_1} < {C\xspace}$, and $\alpha {\ensuremath{\mu_e}\xspace}- {\ensuremath{I}\xspace}$ if ${\ensuremath{T_{\text{R}}}\xspace}^{{\ensuremath{\text{opt}}}_1} > \alpha {\ensuremath{\mu_e}\xspace}- {\ensuremath{I}\xspace}$. Hence, when ${\ensuremath{q}\xspace}=1$, the optimal solution is $$\label{tnp.opt.int} \min \left (\alpha {\ensuremath{\mu_e}\xspace}- {\ensuremath{I}\xspace},\max \left (\sqrt{ \dfrac{2 \mu{C\xspace}}{(1-{\ensuremath{r}\xspace})}},{C\xspace}\right )\right).$$ **[<span style="font-variant:small-caps;">Instant</span>]{}**. The derivation is similar. The optimal value for [$q$]{}is either $0$ or $1$, thus we consider ${\ensuremath{\textsc{Waste}}\xspace}_{{\textsc{Instant}\xspace}}^{\{0\}} = {\ensuremath{\textsc{Waste}_Y}\xspace}$ and ${\ensuremath{\textsc{Waste}}\xspace}_{{\textsc{Instant}\xspace}}^{\{1\}}$. If ${\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}>\frac{{\ensuremath{T_{\text{R}}}\xspace}}{2}$, then ${\ensuremath{\textsc{Waste}}\xspace}_{{\textsc{Instant}\xspace}}^{\{0\}} < {\ensuremath{\textsc{Waste}}\xspace}_{{\textsc{Instant}\xspace}}^{\{1\}}$, so we may restrict the study to the case where $\min({\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}, \frac{{\ensuremath{T_{\text{R}}}\xspace}}{2}) = {\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}$. Then we derive that ${\ensuremath{\textsc{Waste}}\xspace}_{{\textsc{Instant}\xspace}}^{\{1\}}$ is minimized for ${\ensuremath{T_{\text{R}}}\xspace}^{{\ensuremath{\text{opt}}}_1}$ as before.\ **[<span style="font-variant:small-caps;">NoCkptI</span>]{}**.
One can see that Equation  and Equation  only differ by the quantity: $$\frac{{\ensuremath{q}\xspace}{\ensuremath{r}\xspace}}{\mu}\left ( \frac{(1 - {\ensuremath{p}\xspace}) {\ensuremath{I}\xspace}+ {\ensuremath{p}\xspace}{\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}}{{\ensuremath{p}\xspace}} \frac{{C\xspace}}{{\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{opt}}}}} + {\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{opt}}}} - {\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}\right )$$ This quantity is linear in [$q$]{}and constant with respect to [$T_{\text{R}}$]{}. Hence the minimization is almost the same. Once again, the optimal value for [$q$]{}is either 0 or 1. We can consider the two functions ${\ensuremath{\textsc{Waste}}\xspace}_{\text{noCkpt}\{{\ensuremath{q}\xspace}=0\}}$ and ${\ensuremath{\textsc{Waste}}\xspace}_{\text{noCkpt}\{{\ensuremath{q}\xspace}=1\}}$. We remark that ${\ensuremath{\textsc{Waste}}\xspace}_{\text{noCkpt}\{{\ensuremath{q}\xspace}=0\}} = {\ensuremath{\textsc{Waste}}\xspace}_{\text{withCkpt}\{{\ensuremath{q}\xspace}=0\}}$, so this case has already been studied. As for ${\ensuremath{\textsc{Waste}}\xspace}_{\text{noCkpt}\{{\ensuremath{q}\xspace}=1\}}$, it is also minimized when ${\ensuremath{T_{\text{R}}}\xspace}^{{\ensuremath{\text{opt}}}} = \sqrt{ \dfrac{2 \mu{C\xspace}}{(1-{\ensuremath{r}\xspace})}}$. Finally, the last step of this study is identical to the previous minimization, and the optimal solution when ${\ensuremath{q}\xspace}=1$ is given by: $${\ensuremath{T_{\text{R}}}\xspace}^{{\ensuremath{\text{opt}}}_1}=\min \left (\alpha {\ensuremath{\mu_e}\xspace}- {\ensuremath{I}\xspace},\max \left (\sqrt{ \dfrac{2 \mu{C\xspace}}{(1-{\ensuremath{r}\xspace})}},{C\xspace}\right )\right)$$ **Summary**.
Finally, in this section, we consider the waste for the two algorithms that take the prediction window into account (the one that does not checkpoint during the prediction window, and the one that checkpoints during the prediction window), and we look for conditions under which one strategy dominates the other. Since the equation of the waste is identical when ${\ensuremath{q}\xspace}=0$, let us consider the case when ${\ensuremath{q}\xspace}=1$. We have seen that: $$\begin{aligned} \label{diff.waste.algo} ({\ensuremath{\textsc{Waste}}\xspace}_{\text{withCkpt}\{{\ensuremath{q}\xspace}=1\}} - {\ensuremath{\textsc{Waste}}\xspace}_{\text{noCkpt}\{{\ensuremath{q}\xspace}=1\}}) & = \frac{{\ensuremath{r}\xspace}\left ( (1 - {\ensuremath{p}\xspace}) {\ensuremath{I}\xspace}+ {\ensuremath{p}\xspace}{\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}\right )}{{\ensuremath{p}\xspace}\mu}\frac{{C\xspace}}{{\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{opt}}}}} \nonumber \\ & + \frac{{\ensuremath{r}\xspace}}{\mu} \left ( {\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{opt}}}} - {\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}\right ) \end{aligned}$$ We want to know when Equation  is nonnegative (meaning that it is beneficial not to take any checkpoints during proactive mode). We know that this quantity is minimized when ${\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{opt}}}} = {\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{extr}}}}=\sqrt{ \dfrac{ (1 - {\ensuremath{p}\xspace}) {\ensuremath{I}\xspace}+ {\ensuremath{p}\xspace}{\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}}{p}{C\xspace}}$ (Equation ); hence a sufficient condition is obtained by studying the inequality $${\ensuremath{\textsc{Waste}}\xspace}_{\text{withCkpt}\{{\ensuremath{q}\xspace}=1\}} - {\ensuremath{\textsc{Waste}}\xspace}_{\text{noCkpt}\{{\ensuremath{q}\xspace}=1\}} \geq 0$$ with ${\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{extr}}}}$ instead of ${\ensuremath{T_{\text{P}}}\xspace}^{{\ensuremath{\text{opt}}}}$.
That is: $$\begin{aligned} &\frac{{\ensuremath{r}\xspace}\left ( (1 - {\ensuremath{p}\xspace}) {\ensuremath{I}\xspace}+ {\ensuremath{p}\xspace}{\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}\right )}{{\ensuremath{p}\xspace}\mu}\frac{{C\xspace}}{\sqrt{ \dfrac{ (1 - {\ensuremath{p}\xspace}) {\ensuremath{I}\xspace}+ {\ensuremath{p}\xspace}{\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}}{p} {C\xspace}}} + \frac{{\ensuremath{r}\xspace}}{\mu} \left ( \sqrt{ \dfrac{ (1 - {\ensuremath{p}\xspace}) {\ensuremath{I}\xspace}+ {\ensuremath{p}\xspace}{\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}}{p} {C\xspace}} - {\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}\right ) &\geq 0 \nonumber\\ & \Leftrightarrow 2\sqrt{ \dfrac{ (1 - {\ensuremath{p}\xspace}) {\ensuremath{I}\xspace}+ {\ensuremath{p}\xspace}{\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}}{p} {C\xspace}} \geq {\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace} \label{cond.noCkpt}\end{aligned}$$ (the first term simplifies to $\frac{{\ensuremath{r}\xspace}}{\mu}\sqrt{ \dfrac{ (1 - {\ensuremath{p}\xspace}) {\ensuremath{I}\xspace}+ {\ensuremath{p}\xspace}{\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}}{p} {C\xspace}}$, hence the factor $2$). Consequently, we can say that if Equation  is satisfied, then ${\ensuremath{\textsc{Waste}}\xspace}_{\text{noCkpt}} \leq {\ensuremath{\textsc{Waste}}\xspace}_{\text{withCkpt}}$: the algorithm that does not checkpoint during the proactive mode has a better solution than Algorithm \[algo.proactive\].
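The sufficient condition can be checked numerically. This sketch (function name and sample values are mine) uses the form $2\sqrt{XC} \geq \mathbb{E}_I^{(f)}$ with $X = ((1-p)I + p\,\mathbb{E}_I^{(f)})/p$, the form from which the uniform-fault specialization $I \leq 16\,\frac{1-p/2}{p}\,C$ below follows:

```python
import math

def no_ckpt_dominates(I, p, E_f, C):
    """Sufficient test (evaluated at T_P^extr) for the no-checkpoint variant
    to dominate: 2 * sqrt(X * C) >= E_f, i.e. 4 * X * C >= E_f ** 2."""
    X = ((1 - p) * I + p * E_f) / p
    return 2 * math.sqrt(X * C) >= E_f
```

With uniform faults ($\mathbb{E}_I^{(f)} = I/2$), $p=0.5$ and $C=10$, the bound is $I \leq 240$: the test holds for $I=200$ and fails for $I=300$.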
For example, if we assume that faults strike uniformly during the prediction window $[t_{0}, t_{0}+{\ensuremath{I}\xspace}]$ (that is, for $0 \leq x \leq {\ensuremath{I}\xspace}$, the probability that the fault occurs in the interval $[t_{0}, t_{0}+x]$ is $\frac{x}{{\ensuremath{I}\xspace}}$), then ${\ensuremath{\mathbb{E}_{I}^{(f)}}\xspace}=\frac{I}{2}$, and our condition becomes $${\ensuremath{I}\xspace}\leq 16 \frac{1 - \sfrac{{\ensuremath{p}\xspace}}{2}}{{\ensuremath{p}\xspace}}{C\xspace}.$$ We can now conclude this study: in order to find the optimal solution, one should compute both optimal solutions for ${\ensuremath{q}\xspace}=0$ and ${\ensuremath{q}\xspace}= 1$, for both algorithms, and choose the one that minimizes the waste, as was done in Section \[sec.no.intervals\]; except that when Equation  holds, we can focus on the waste of the algorithm that does not checkpoint during the proactive mode.

Simulation results {#sec.simulations}
==================

In order to validate our model, we have instantiated it with several scenarios. The experiments use parameters that are representative of current and forthcoming large-scale platforms [@j116; @Ferreira2011]. We set $C=R=10mn$ and $D=1mn$. The individual (processor) MTBF is $\mu_{ind} = 125$ years, and the total number of processors $N$ varies from $N=16,384$ to $N=524,288$, so that the platform MTBF $\mu$ varies from $\mu=4,000mn$ (about $2.8$ days) down to $\mu=125mn$ (about $2$ hours). For instance, the Jaguar platform, with $N=45,208$ processors, is reported to experience about one failure per day [@6264677], which leads to $\mu_{ind} = \frac{45,208}{365}\approx 125$ years. We have analytically computed the optimal value of the waste for each strategy (using the formulas of Section \[sec-opt-int\]) with a computer algebra software. In order to check the accuracy of our model, we have compared the results with those from simulations using a fault generator.
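The platform MTBF used throughout these experiments follows the rule $\mu = \mu_{ind}/N$; a one-line helper (illustrative) reproduces the quoted values:

```python
def platform_mtbf_minutes(mu_ind_years, N):
    """Platform MTBF mu = mu_ind / N, with mu_ind converted from years to minutes."""
    minutes_per_year = 365 * 24 * 60
    return mu_ind_years * minutes_per_year / N
```

With $\mu_{ind}=125$ years, $N=16{,}384$ gives about $4{,}000mn$ and $N=524{,}288$ gives about $125mn$, as stated above.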
Our simulation engine generates a random trace of failures, parameterized either by an Exponential failure distribution or by a Weibull distribution with shape parameter $0.5$ or $0.7$; Exponential failures are widely used for theoretical studies, while Weibull failures are representative of the behavior of real-world platforms [@Weibull1; @Weibull2; @Heien:2011:MTH:2063384.2063444]. In both cases, the distribution is scaled so that its expectation corresponds to the platform MTBF $\mu$. Each failure is flagged as predicted with probability [$r$]{}. Then the simulation engine generates another random trace of false predictions (whose distribution is either identical to that of the first trace, or uniform). This second distribution is scaled so that its expectation is $\frac{{\ensuremath{p}\xspace}\mu}{{\ensuremath{r}\xspace}(1-{\ensuremath{p}\xspace})}$, the inter-arrival time of false predictions. Finally, both traces are merged to derive the final trace of events. Each value reported for the simulations is the average over $100$ randomly generated experiments.

In the simulations, we compare up to ten checkpointing strategies. Here is the list:\
$\bullet$ [<span style="font-variant:small-caps;">Young</span>]{}is the periodic checkpointing strategy of period ${\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{0\}}}\xspace} = \sqrt{2 \mu {C\xspace}}$ given in [@young74]. Note that Daly’s formula [@daly04] leads to the same results.\
$\bullet$ [<span style="font-variant:small-caps;">ExactPrediction</span>]{}is derived from the strategy of Section \[sec.no.intervals\] (with exact prediction dates).
However, in the simulations, we always take prediction into account and use an uncapped period ${\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{1\}}}\xspace} = \sqrt{ \dfrac{2 \mu{C\xspace}}{1-{\ensuremath{r}\xspace}}}$ instead of ${\ensuremath{T_{1} }\xspace} = \min(\alpha {\ensuremath{\mu_e}\xspace}- {\ensuremath{I}\xspace}, \max({C\xspace}, {\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{1\}}}\xspace}))$.\ $\bullet$ Similarly, [<span style="font-variant:small-caps;">Instant</span>]{}, [<span style="font-variant:small-caps;">NoCkptI</span>]{}and [<span style="font-variant:small-caps;">WithCkptI</span>]{}are the three strategies described in Section \[sec.intervals\], with the same modification: we always take prediction into account and use an uncapped period ${\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{1\}}}\xspace} $ instead of ${\ensuremath{T_{1} }\xspace}$ in regular mode.\ $\bullet$ To assess the quality of each strategy, we compare it with its [<span style="font-variant:small-caps;">BestPeriod</span>]{}counterpart, defined as the same strategy but using the best possible period ${\ensuremath{T_{\text{R}}}\xspace}$. This latter period is computed via a brute-force numerical search for the optimal period. The rationale for modifying the strategies described in the previous sections is of course to better assess the impact of prediction. 
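The trace-generation procedure described above can be sketched as follows (the helper names and the event tags are my own; a real experiment would follow the exact setup of the simulation engine):

```python
import math
import random

def event_trace(horizon, mean, shape=0.7, seed=0):
    """Event dates with Weibull inter-arrival times rescaled to the given mean."""
    rng = random.Random(seed)
    # E[Weibull(scale, k)] = scale * Gamma(1 + 1/k), so rescale accordingly
    scale = mean / math.gamma(1 + 1 / shape)
    t, events = 0.0, []
    while True:
        t += rng.weibullvariate(scale, shape)
        if t >= horizon:
            return events
        events.append(t)

def merged_trace(horizon, mu, p, r, shape=0.7, seed=0):
    """Failures (each predicted with probability r) merged with false predictions."""
    rng = random.Random(seed)
    faults = [(t, "predicted" if rng.random() < r else "unpredicted")
              for t in event_trace(horizon, mu, shape, seed + 1)]
    false_mean = p * mu / (r * (1 - p))  # inter-arrival time of false predictions
    false_preds = [(t, "false") for t in event_trace(horizon, false_mean, shape, seed + 2)]
    return sorted(faults + false_preds)
```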
For the computer algebra plots, in addition to the waste with the *capped periods* given in Section \[sec-opt-int\], i.e., with ${\ensuremath{T_{0} }\xspace}= {\ensuremath{T_{\text{Y}}}\xspace}= \min(\alpha \mu, \max({C\xspace}, {\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{0\}}}\xspace}))$, and ${\ensuremath{T_{1} }\xspace} = \min(\alpha {\ensuremath{\mu_e}\xspace}- {\ensuremath{I}\xspace}, \max({C\xspace}, {\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{1\}}}\xspace}))$, we also report the waste obtained for the *uncapped periods*, i.e., using ${\ensuremath{T_{0} }\xspace} = {\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{0\}}}\xspace}$ without prediction and ${\ensuremath{T_{1} }\xspace} = {\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{1\}}}\xspace} = \sqrt{ \dfrac{2 \mu{C\xspace}}{1-{\ensuremath{r}\xspace}}}$ with prediction. The objective is twofold: (i) Assess whether the validity of the model can be extended; and (ii) Provide an exact match with the simulations, which mimic a real-life execution and do allow for an arbitrary number of faults per period. Predictors from the literature ------------------------------ We first experiment with two predictors from the literature: one accurate predictor with high recall and precision [@5958823], namely with ${\ensuremath{p}\xspace}=0.82$ and ${\ensuremath{r}\xspace}=0.85$, and another predictor with more limited recall and precision [@5542627], namely with ${\ensuremath{p}\xspace}=0.4$ and ${\ensuremath{r}\xspace}=0.7$. In both cases, we use two different time-windows, ${\ensuremath{I}\xspace}=300s$ and ${\ensuremath{I}\xspace}=3,000s$. The former value does not allow for checkpointing within the prediction window, while the latter values allow for several checkpoints. Note that we always compare the results with [<span style="font-variant:small-caps;">ExactPrediction</span>]{}, the strategy that assumes exact prediction dates. 
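The [<span style="font-variant:small-caps;">BestPeriod</span>]{}baselines mentioned above rely on a brute-force numerical search for the period minimizing the waste; a minimal sketch of such a search (the grid resolution is an arbitrary choice of mine):

```python
def best_period(waste, lo, hi, steps=10000):
    """Brute-force search for the period T in [lo, hi] minimizing waste(T)."""
    grid = (lo + (hi - lo) * i / steps for i in range(steps + 1))
    return min(grid, key=waste)
```

On the no-prediction waste $C/T + T/(2\mu)$, the search lands on Young's period $\sqrt{2\mu C}$, up to the grid resolution.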
Figures \[fig.082.085\] and \[fig.04.07\] show the average waste degradation of the ten heuristics for both predictors, as a function of the number of processors $N$. We draw the plots as a function of $N$ rather than of the platform MTBF $\mu = \mu_{ind}/N$, because it is more natural to see the waste increase with larger platforms; however, this work is agnostic of the granularity of the processors and intrinsically focuses on the impact of the MTBF on the waste. The first observation is that prediction is always useful for the whole set of parameters under study! The second observation is the good correspondence between analytical results and simulations in Figures \[fig.082.085\] and \[fig.04.07\] (compare subfigures (a) and (b) with (c), (d) and (e), and subfigures (f) and (g) with (h), (i) and (j)). This shows the validity of the model for the whole range of distributions (Exponential and both Weibull shapes). More precisely: (i) the capped model overestimates the waste for large platforms (or small MTBFs), in particular for large values of ${\ensuremath{I}\xspace}$ (see Figures \[fig.082.085\](f) and \[fig.04.07\](f)), but this was the price to pay for mathematical rigor; (ii) the uncapped model is accurate over the whole range of the study. Another striking result is that all strategies taking prediction into account have the same waste as their [<span style="font-variant:small-caps;">BestPeriod</span>]{}counterpart, which demonstrates that our formula ${\ensuremath{T_{{\ensuremath{\text{extr}}}}^{\{1\}}}\xspace} = \sqrt{\frac{2 \mu {C\xspace}}{1-{\ensuremath{r}\xspace}}}$ is indeed the best possible checkpointing period in regular mode. Unsurprisingly, [<span style="font-variant:small-caps;">ExactPrediction</span>]{}is better than the heuristics that use a time window instead of exact prediction dates, especially with a high number of processors.
However, interval-based heuristics achieve close results when ${\ensuremath{I}\xspace}=300s$, or when ${\ensuremath{I}\xspace}=3,000s$ and the number of processors is small ($N<2^{16}$).

In order to compare the heuristics without prediction to those with prediction, we report job execution times in Table \[makespan.300.tab\]. For the strategies with prediction, we compute the gain (expressed in percentage) over [<span style="font-variant:small-caps;">Young</span>]{}, the reference strategy without prediction. For ${\ensuremath{I}\xspace}=300s$, the three strategies are identical. But for ${\ensuremath{I}\xspace}=3,000s$, [<span style="font-variant:small-caps;">WithCkptI</span>]{}often has better results. First, with ${\ensuremath{p}\xspace}=0.82$ and ${\ensuremath{r}\xspace}=0.85$ and ${\ensuremath{I}\xspace}=3,000s$, we save $25\%$ of the total time with $N=2^{19}$, and $14\%$ with $N=2^{16}$, using strategy [<span style="font-variant:small-caps;">WithCkptI</span>]{}. With ${\ensuremath{I}\xspace}=300s$, we save up to $44\%$ with $N=2^{19}$, and $18\%$ with $N=2^{16}$, using any strategy (though [<span style="font-variant:small-caps;">NoCkptI</span>]{}is slightly better than [<span style="font-variant:small-caps;">Instant</span>]{}). Then, with ${\ensuremath{p}\xspace}=0.4$ and ${\ensuremath{r}\xspace}= 0.7$, we still save $32\%$ of the execution time when ${\ensuremath{I}\xspace}=300s$ and $N=2^{19}$, and $13\%$ with $N=2^{16}$. The gain gets smaller with ${\ensuremath{I}\xspace}=3,000s$, but remains non-negligible since we can save up to $9.7\%$ with $N=2^{19}$, and $7.6\%$ with $N=2^{16}$. Unexpectedly, in this last case, the most efficient strategy is [<span style="font-variant:small-caps;">Instant</span>]{}and not [<span style="font-variant:small-caps;">WithCkptI</span>]{}.
We observe that the size of the prediction window [$I$]{}plays an important role too: we obtain better results for ${\ensuremath{I}\xspace}=300$ and $({\ensuremath{p}\xspace},{\ensuremath{r}\xspace})=(0.4,0.7)$ than for ${\ensuremath{I}\xspace}=3000$ and $({\ensuremath{p}\xspace},{\ensuremath{r}\xspace})=(0.82,0.85)$. In Table \[makespan.300.tab\], we also report the job execution times for Weibull distributions with shape parameter $k=0.5$. For ${\ensuremath{I}\xspace}=300s$, the three strategies are identical. But for ${\ensuremath{I}\xspace}=3,000s$, [<span style="font-variant:small-caps;">WithCkptI</span>]{}often has better results. First, with ${\ensuremath{p}\xspace}=0.82$ and ${\ensuremath{r}\xspace}=0.85$ and ${\ensuremath{I}\xspace}=3,000s$, we save $61\%$ of the total time with $N=2^{19}$, and $30\%$ with $N=2^{16}$, using strategy [<span style="font-variant:small-caps;">WithCkptI</span>]{}. With ${\ensuremath{I}\xspace}=300s$, we save up to $74\%$ with $N=2^{19}$, and $38\%$ with $N=2^{16}$, using any strategy (though [<span style="font-variant:small-caps;">NoCkptI</span>]{}is slightly better than [<span style="font-variant:small-caps;">Instant</span>]{}). Then, with ${\ensuremath{p}\xspace}=0.4$ and ${\ensuremath{r}\xspace}= 0.7$, we still save $66\%$ of the execution time when ${\ensuremath{I}\xspace}=300s$ and $N=2^{19}$, and $33\%$ with $N=2^{16}$. The gain gets smaller with ${\ensuremath{I}\xspace}=3,000s$, but we can still save up to $52\%$ with $N=2^{19}$, and $22\%$ with $N=2^{16}$. With a Weibull failure distribution of shape parameter $0.5$, we observe that the gain due to prediction is twice as large as the gain computed with a Weibull failure distribution of shape parameter $0.7$. The same conclusion can be drawn from Figures \[fig.082.085\](e), \[fig.082.085\](j), \[fig.04.07\](e) and \[fig.04.07\](j).
We also performed simulations with a trace of false predictions parameterized by a uniform distribution, and we observe that the results (Figures \[fig.082.085.UNIF\] and \[fig.04.07.UNIF\]) are similar to the results (Figures \[fig.082.085\] and \[fig.04.07\]) obtained when the trace of false predictions follows the same distribution as the trace of failures.

| ${\ensuremath{I}\xspace}=300$ | $2^{16}$ procs | $2^{19}$ procs | $2^{16}$ procs | $2^{19}$ procs |
|:---|:---|:---|:---|:---|
| [<span style="font-variant:small-caps;">Young</span>]{} | 81.3 | 30.1 | 81.2 | 30.1 |
| [<span style="font-variant:small-caps;">ExactPrediction</span>]{} | 65.9 (19%) | 15.9 (47%) | 69.7 (14%) | 19.3 (36%) |
| [<span style="font-variant:small-caps;">NoCkptI</span>]{} | 66.5 (18%) | 16.9 (44%) | 70.3 (13%) | 20.5 (32%) |
| [<span style="font-variant:small-caps;">Instant</span>]{} | 66.5 (18%) | 17.0 (44%) | 70.3 (13%) | 20.7 (31%) |

: Comparing job execution times for a Weibull distribution ($k=0.7$), and reporting the gain when comparing to [<span style="font-variant:small-caps;">Young</span>]{}.[]{data-label="makespan.300.tab"}

| ${\ensuremath{I}\xspace}=3,000$ | $2^{16}$ procs | $2^{19}$ procs | $2^{16}$ procs | $2^{19}$ procs |
|:---|:---|:---|:---|:---|
| [<span style="font-variant:small-caps;">Young</span>]{} | 81.2 | 30.1 | 81.2 | 30.1 |
| [<span style="font-variant:small-caps;">ExactPrediction</span>]{} | 66.0 (19%) | 15.9 (47%) | 69.8 (14%) | 19.3 (36%) |
| [<span style="font-variant:small-caps;">NoCkptI</span>]{} | 71.1 (12%) | 24.6 (18%) | 75.2 (7.3%) | 28.9 (4.0%) |
| [<span style="font-variant:small-caps;">WithCkptI</span>]{} | 70.0 (14%) | 22.6 (25%) | 75.4 (7.1%) | 27.2 (9.7%) |
| [<span style="font-variant:small-caps;">Instant</span>]{} | 71.2 (12%) | 24.2 (20%) | 75.0 (7.6%) | 28.3 (6.0%) |

: Comparing job execution times for a Weibull distribution ($k=0.7$), and reporting the gain when comparing to [<span style="font-variant:small-caps;">Young</span>]{}.[]{data-label="makespan.300.tab"}

| ${\ensuremath{I}\xspace}=300$ | $2^{16}$ procs | $2^{19}$ procs | $2^{16}$ procs | $2^{19}$ procs |
|:---|:---|:---|:---|:---|
| [<span style="font-variant:small-caps;">Young</span>]{} | 125.4 | 171.8 | 125.5 | 171.7 |
| [<span style="font-variant:small-caps;">ExactPrediction</span>]{} | 75.8 (40%) | 39.4 (77%) | 82.9 (34%) | 51.8 (70%) |
| [<span style="font-variant:small-caps;">NoCkptI</span>]{} | 77.3 (38%) | 44.8 (74%) | 84.6 (33%) | 58.2 (66%) |
| [<span style="font-variant:small-caps;">Instant</span>]{} | 77.4 (38%) | 45.1 (74%) | 84.7 (33%) | 59.1 (66%) |

: Comparing job execution times for a Weibull distribution ($k=0.5$), and reporting the gain when comparing to [<span style="font-variant:small-caps;">Young</span>]{}.[]{data-label="makespan.300.tab"}

| ${\ensuremath{I}\xspace}=3,000$ | $2^{16}$ procs | $2^{19}$ procs | $2^{16}$ procs | $2^{19}$ procs |
|:---|:---|:---|:---|:---|
| [<span style="font-variant:small-caps;">Young</span>]{} | 125.4 | 171.9 | 125.4 | 172.0 |
| [<span style="font-variant:small-caps;">ExactPrediction</span>]{} | 76.1 (39%) | 39.4 (77%) | 83.0 (34%) | 51.7 (70%) |
| [<span style="font-variant:small-caps;">NoCkptI</span>]{} | 90.0 (28%) | 71.8 (58%) | 98.3 (22%) | 84.5 (51%) |
| [<span style="font-variant:small-caps;">WithCkptI</span>]{} | 87.8 (30%) | 66.6 (61%) | 98.0 (22%) | 82.2 (52%) |
| [<span style="font-variant:small-caps;">Instant</span>]{} | 89.8 (28%) | 70.9 (59%) | 98.2 (22%) | 83.2 (52%) |

: Comparing job execution times for a Weibull distribution ($k=0.5$), and reporting the gain when comparing to [<span style="font-variant:small-caps;">Young</span>]{}.[]{data-label="makespan.300.tab"}

Recall vs. precision {#sec.impact}
--------------------

In this section, we assess the impact of the two key parameters of the predictor, its recall [$r$]{}and its precision ${\ensuremath{p}\xspace}$. To this purpose, we conduct simulations where one parameter is fixed while the other varies. We choose two platforms, a smaller one with $N=2^{16}$ processors (with MTBF $\mu=1,000mn$) and a larger one with $N=2^{19}$ processors (with MTBF $\mu=125mn$). In both cases, we use a prediction window of size ${\ensuremath{I}\xspace}=300s$, and a Weibull failure distribution with shape parameter $k=0.7$ (we obtain similar results for $k=0.5$, see Figures \[fig.recall.19.05\] and \[fig.precision.19.05\]). In Figure \[fig.recall.19\], we fix the value of [$r$]{}(either ${\ensuremath{r}\xspace}=0.4$ or ${\ensuremath{r}\xspace}=0.8$) and we let ${\ensuremath{p}\xspace}$ vary from $0.3$ to $0.99$. In the four plots, we observe that the precision has a minor impact on the waste. In Figure \[fig.precision.19\], we conduct the opposite experiment and fix the value of [$p$]{}(either $ {\ensuremath{p}\xspace}=0.4$ or ${\ensuremath{p}\xspace}=0.8$), letting ${\ensuremath{r}\xspace}$ vary from $0.3$ to $0.99$. Here we observe that increasing the recall can significantly improve the performance. Altogether we conclude that it is more important (for the design of future predictors) to focus on improving the recall [$r$]{}rather than the precision [$p$]{}, and our results help quantify this statement.
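The recall-versus-precision asymmetry can be illustrated with a deliberately simplified, first-order waste estimate (this is my own back-of-the-envelope model, not the paper's exact expressions: it charges one checkpoint $C$ per prediction, true or false, hence the rate $r/(p\mu)$, and $T/2 + D + R$ per unpredicted failure):

```python
import math

def waste_estimate(mu, C, D, R, p, r):
    """First-order waste: periodic checkpoints, unpredicted faults,
    and proactive checkpoints triggered by (true or false) predictions."""
    T = math.sqrt(2 * mu * C / (1 - r))            # regular-mode period
    periodic = C / T
    unpredicted = (1 - r) / mu * (T / 2 + D + R)   # re-execution + downtime
    predictions = (r / mu) / p * C                 # one checkpoint per prediction
    return periodic + unpredicted + predictions
```

With the parameters of this section ($\mu=125mn$, $C=R=10mn$, $D=1mn$), raising the recall from $0.4$ to $0.8$ reduces this estimate by more than raising the precision by the same amount, in line with the simulations.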
We provide an intuitive explanation as follows: unpredicted failures prove very harmful and heavily increase the waste, while unduly checkpointing because of a false prediction induces a much smaller overhead.

Related work {#sec.related}
============

Considerable research has been conducted on fault prediction, using different approaches (system log analysis [@5958823], event-driven methods [@GainaruIPDPS12; @5958823; @5542627], support vector machines [@LiangZXS07; @Fulp:2008:PCS:1855886.1855891], nearest neighbors [@LiangZXS07], etc.). In this section we give a brief overview of the results obtained by predictors. We focus on their results rather than on their methods of prediction. The authors of [@5542627] introduce the *lead time*, that is, the time between the prediction and the actual fault. This time should be sufficient to take proactive actions. They are also able to give the location of the fault. While this has a negative impact on the precision (see the low value of [$p$]{}in Table \[rel.work.tab\]), they state that it has a positive impact on the checkpointing time (which drops from 1,500 seconds to 120 seconds). The authors of [@5958823] also consider a lead time, and introduce a *prediction window* during which the predicted fault should happen. The authors of [@LiangZXS07] study the impact of different prediction techniques with different prediction-window sizes. They also consider a lead time, but do not state its value. These two latter studies motivate the work of Section \[sec.intervals\], even though [@5958823] does not provide the size of their prediction window. Unfortunately, much of the work done on prediction does not provide information that would be truly useful for the design of efficient algorithms.
This information includes the quantities stated above, namely the lead time and the size of the prediction window; other useful pieces of information would be: (i) the distribution of the faults within the prediction window; (ii) the precision as a function of the recall (see our analysis); and (iii) the precision and recall as functions of the prediction-window size (what happens with a larger prediction window). While many studies on fault prediction focus on the design of the predictor, most of them consider that the proactive action should simply be a checkpoint or a migration taken just in time before the fault. However, in their paper [@Fu:2007:EEC:1362622.1362678], Li et al. consider the mathematical problem of determining when and how to migrate. In order to be able to use migration, they assume that, at any time, 2% of the resources are available. This allowed them to design a knapsack-based heuristic. Thanks to their algorithm, they were able to save 30% of the execution time compared to a heuristic that does not take reliability into account, with a precision and recall of 70%, and with a maximum load of 0.7. Finally, to the best of our knowledge, this work is the first to focus on the mathematical aspects of fault prediction, and to provide a model and a detailed analysis of the waste due to all three types of events (true and false predictions, and unpredicted failures).

Conclusion {#sec.conclusion}
==========

In this work, we have studied the impact of prediction, either with exact dates or window-based, on checkpointing strategies. We have designed several algorithms that decide when to trust these predictions, and when it is worth taking preventive checkpoints. We have introduced an analytical model to capture the waste incurred by each strategy, and provided the optimal solution to the corresponding optimization problems.
We have been able to derive some striking conclusions:\
$\bullet$ The model is quite accurate, and its validity goes beyond the conservative assumption that requires capping checkpointing periods to diminish the probability of having several faults within the same period;\
$\bullet$ A unified formula for the optimal checkpointing period is $\sqrt{ \dfrac{2 \mu{C\xspace}}{1-{\ensuremath{r}\xspace}{\ensuremath{q}\xspace}}}$, which covers both cases, with and without prediction, and nicely extends the work of Young and Daly to account for prediction;\
$\bullet$ The simulations fully validate the model, and show that: (i) a significant gain is induced by using predictions, even for mid-range values of recall and precision; and (ii) the best period (found by brute-force search) is always very close to the one predicted by the model and given by the previous unified formula; this holds true both for Exponential and Weibull failure distributions;\
$\bullet$ The recall has more impact on the waste than the precision: *better safe than sorry*, or better prepare for a false event than miss an actual failure!

Altogether, the analytical model and the comprehensive results provided in this work make it possible to fully assess the impact of fault prediction on optimal checkpointing strategies. Future work will be devoted to refining the assessment of the usefulness of prediction with trace-based failure and prediction logs from current large-scale supercomputers.

[*Acknowledgments.*]{} The authors are with Université de Lyon, France. Y. Robert is with the Institut Universitaire de France. This work was supported in part by the ANR [*RESCUE*]{} project.
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Infrastructure;
using Microsoft.EntityFrameworkCore.Metadata;
using Microsoft.EntityFrameworkCore.Migrations;
using Migrations.Context;

namespace ComputedColumns.EF.Migrations
{
    [DbContext(typeof(StoreContext))]
    partial class StoreContextModelSnapshot : ModelSnapshot
    {
        protected override void BuildModel(ModelBuilder modelBuilder)
        {
            modelBuilder
                .HasAnnotation("ProductVersion", "1.1.0-rtm-22752")
                .HasAnnotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn);

            modelBuilder.Entity("ComputedColumns.Models.Order", b =>
                {
                    b.Property<int>("Id")
                        .ValueGeneratedOnAdd()
                        .HasAnnotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn);

                    b.Property<int>("CustomerId");

                    b.Property<DateTime>("OrderDate")
                        .ValueGeneratedOnAdd()
                        .HasColumnType("datetime")
                        .HasDefaultValueSql("getdate()");

                    b.Property<decimal?>("OrderTotal")
                        .ValueGeneratedOnAddOrUpdate()
                        .HasColumnType("money")
                        .HasComputedColumnSql("Store.GetOrderTotal([Id])");

                    b.Property<DateTime>("ShipDate")
                        .ValueGeneratedOnAdd()
                        .HasColumnType("datetime")
                        .HasDefaultValueSql("getdate()");

                    b.Property<byte[]>("TimeStamp")
                        .IsConcurrencyToken()
                        .ValueGeneratedOnAddOrUpdate();

                    b.HasKey("Id");

                    b.ToTable("Orders", "Store");
                });

            modelBuilder.Entity("ComputedColumns.Models.OrderDetail", b =>
                {
                    b.Property<int>("Id")
                        .ValueGeneratedOnAdd()
                        .HasAnnotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn);

                    b.Property<decimal?>("LineItemTotal")
                        .ValueGeneratedOnAddOrUpdate()
                        .HasColumnType("money")
                        .HasComputedColumnSql("[Quantity]*[UnitCost]");

                    b.Property<int>("OrderId");

                    b.Property<int>("Quantity");

                    b.Property<byte[]>("TimeStamp")
                        .IsConcurrencyToken()
                        .ValueGeneratedOnAddOrUpdate();

                    b.Property<decimal>("UnitCost")
                        .HasColumnType("money");

                    b.HasKey("Id");

                    b.HasIndex("OrderId");

                    b.ToTable("OrderDetails", "Store");
                });
            modelBuilder.Entity("ComputedColumns.Models.OrderDetail", b =>
                {
                    b.HasOne("ComputedColumns.Models.Order", "Order")
                        .WithMany("OrderDetails")
                        .HasForeignKey("OrderId")
                        .OnDelete(DeleteBehavior.Cascade);
                });
        }
    }
}
Forest Service seeks $6.3M from man for wildfire

CHEYENNE, Wyo.—The U.S. Forest Service wants to collect $6.3 million from a 77-year-old man the agency blames for causing a 2012 forest fire that threatened to burn into the town of Jackson.

The Forest Service alleges James G. Anderson Jr. sparked the wildfire on Sept. 8, 2012, by burning twigs and paper in a rusted-out barrel at his son's home and allowing the flames to get out of control.

The Forest Service sent Anderson a bill in November for the firefighting costs. The amount was due to the agency's Albuquerque, N.M., service center on Dec. 13, according to a copy of the bill obtained by The Associated Press through the Freedom of Information Act.

No criminal charges have been filed while the civil matter remains unresolved, said John Powell, a spokesman for the U.S. attorney's office in Wyoming.

The Horsethief Canyon Fire burned 5 square miles of Bridger-Teton National Forest. At the height of the fire, officials urged some residents of nearby Jackson to be prepared to evacuate at a moment's notice. The firefighters succeeded in halting the flames a couple miles outside town. Firefighting costs for several agencies totaled about $9 million.

Anderson's share, according to a Nov. 13 certified letter the Forest Service sent him, includes about $3.8 million incurred by the Forest Service and some $2 million by the U.S. Bureau of Land Management. He also owes about $64,000 to the U.S. Fish and Wildlife Service, $154,000 to the National Park Service and $252,000 to the state of Wyoming and Teton County.

A phone number listed for Anderson didn't work. His attorney, Richard Mulligan, declined comment and Anderson's son, James Anderson III, did not return a phone message Tuesday. Mary Cernicek, a spokeswoman for Bridger-Teton National Forest, declined to comment.

Lightning causes the vast majority of wildfires in Wyoming. Relatively few are human-caused.
The amount authorities are seeking from Anderson is on the high end of firefighting costs that agencies have attempted to recoup from individuals in Wyoming, State Forester Bill Crapser said. "When the attempts have been made, it's fairly successful," Crapser said. "The problem is, when you're talking about a $6 million or $9 million fire cost, you're probably going to end up with whatever the insurance policy is on it." According to a Forest Service report, Anderson told investigators he had burned twigs, shrub branches and papers in the barrel at his son's home at 6 a.m. Later, after watching football and getting a sandwich, he said he saw smoke outside through a garage window. He called 911, according to the investigation report obtained by AP through a separate FOIA request. The wildfire began when burning material got out of the barrel's rusted-out bottom and flames spread across the property, investigators determined. Firefighters arrived at the home around 2:45 p.m. to find the fire burning toward the national forest land beyond the home, the investigation report said.
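As a sanity check on the figures reported above, the per-agency amounts can be summed directly (values as quoted in the article, rounded):

```python
# Per-agency firefighting costs as reported in the article (approximate).
costs = {
    "U.S. Forest Service": 3_800_000,
    "Bureau of Land Management": 2_000_000,
    "Fish and Wildlife Service": 64_000,
    "National Park Service": 154_000,
    "Wyoming / Teton County": 252_000,
}

total = sum(costs.values())
print(f"${total:,}")  # sums to $6,270,000, reported as roughly $6.3 million
```

The itemized amounts come to $6.27 million, consistent with the $6.3 million bill the article describes.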
Barbara Low, who was among a core of female scientists whose research in the 1940s unleashed a bonanza of lifesaving antibiotics, and whose gumption gained her followers a foothold in a male-dominated field, died on Jan. 10 at her home in the Riverdale section of the Bronx. She was 98. Her death was confirmed by Lucky Tran, a spokesman for the Irving Medical Center of Columbia University, where Dr. Low taught for nearly 60 years and was professor emeritus of biochemistry and molecular biophysics. Her death was announced belatedly because it took time for the university to gather biographical details, Dr. Tran said. Dr. Low’s role in identifying the structure of penicillin was something of a fluke. As a student at Oxford University in England, she was a protégée of the future Nobel laureate Dorothy Crowfoot Hodgkin, who, having been barred from teaching men, taught at Oxford’s Somerville College, a women’s school at the time.
Dental implants are widely used as artificial substitutes for the root portion of missing teeth. A dental implant allows a dental restoration, such as a dental prosthesis, to be securely anchored to the jaw via an abutment mounted to the implant. An endosseous implant may have an externally threaded body. The threaded body can be configured for self-tapping into the bone tissues of the jaw. An endosseous implant can have an internal passage that is configured, such as internally threaded, for receiving and securing the anchoring stem of a permanent abutment therein. Following implantation of an implant in the intraoral cavity and healing of the surrounding tissues, a physical model of the intraoral cavity is produced for facilitating design and manufacture of the permanent abutment and prosthesis that are to be mounted onto the implant. In one procedure, an analog is placed in the physical model that is similar to the patient's intraoral cavity. The analog can be configured with an internal passage similar to the internal passage of the implant for receiving and securing the permanent abutment. The dental technician can then use the physical model to design and/or build a dental prosthesis for the patient. The dental technician mounts an abutment to the physical model via the internal passage of the analog. The dental technician then proceeds to build a dental prosthesis to fit onto the abutment and match surrounding teeth in the intraoral cavity of the patient. The methods and apparatus for constructing dental models can be less than ideal in at least some instances. Accurate placement of the analog in the physical model can be important for correct design and manufacture of the permanent abutment and prosthesis, and also for the outcome of the dental procedure. Accurate placement of an analog into a physical dental model, however, can be difficult. 
For example, manual positioning and orientation of an analog can be less than ideal with respect to accuracy, outcome and user convenience. In some dental models, which may employ a separate implant analog that is separately coupled to the dental model, inaccuracies in the placement of such implant analogs may compromise the accurate positioning of the abutment, and therefore degrade the accuracy of the prosthesis subsequently fabricated on the abutment and model. Thus, there is a need for improved dental models for dental procedures involving a dental implant. Ideally, such improved models would be simple to use, provide improved outcomes, include relatively few discrete parts, and provide accurate positioning and orienting of the permanent abutment.
Acute and chronic toxicity of azinphos-methyl to two estuarine species, Mysidopsis bahia and Cyprinodon variegatus. The acute and chronic toxicity of azinphos-methyl (Guthion) was evaluated for two estuarine species in the laboratory. Mysids (Mysidopsis bahia) and sheepshead minnows (Cyprinodon variegatus) were selected as the representative invertebrate and vertebrate estuarine test species, respectively. The toxicological endpoints determined for each species included the 96-h LC50, the no-observed-effect concentration (NOEC), the maximum acceptable toxicant concentration (MATC), and the acute-to-chronic ratio. The 96-h LC50 value derived for sheepshead minnows (2.0 microg/L) was seven times higher than the 96-h LC50 value (0.29 microg/L) derived for mysids. The MATCs were 0.024 microg/L and 0.24 microg/L for the mysid and the sheepshead minnow, respectively. The estimated acute-to-chronic ratios were 12 for mysids and 8.3 for sheepshead minnows.
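Assuming the acute-to-chronic ratio is computed as the 96-h LC50 divided by the MATC (an assumption, though it matches the reported figures), the numbers in the abstract can be checked in a few lines:

```python
def acute_to_chronic_ratio(lc50_ug_per_l, matc_ug_per_l):
    """ACR = acute endpoint (96-h LC50) / chronic endpoint (MATC)."""
    return lc50_ug_per_l / matc_ug_per_l

# Values reported in the abstract, in micrograms per liter.
mysid_acr = acute_to_chronic_ratio(0.29, 0.024)      # about 12, as reported
sheepshead_acr = acute_to_chronic_ratio(2.0, 0.24)   # about 8.3, as reported
```

Both computed ratios agree with the values of 12 and 8.3 stated in the abstract.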
Karen Jespersen

Karen Moustgaard Jespersen (born 17 January 1947 in Copenhagen) is a Danish journalist and former politician representing the party Venstre.

Career

Jespersen served as the editor of the now-defunct Politisk Revy magazine from 1974 to 1977. She was a member of the Left Socialists, represented the Social Democrats and was Social Minister from 25 January 1993 to 28 January 1994 in the Cabinet of Poul Nyrup Rasmussen I, and from 27 September 1994 to 23 February 2000 in the Cabinets of Poul Nyrup Rasmussen II, III and IV. She was Interior Minister from 23 February 2000 to 27 November 2001 in the Cabinet of Poul Nyrup Rasmussen IV. She was a member of the Folketing from 12 December 1990.

On 12 September 2007 she replaced Eva Kjer Hansen as Social Minister and Minister for Equal Rights in the Cabinet of Anders Fogh Rasmussen II, thus becoming the first Danish politician to have represented both political wings as a government minister. On 23 November 2007, the Ministry of Interior Affairs, Ministry of Family and Consumer Affairs and the Social Ministry were merged into a Ministry of Welfare, and Jespersen became Minister of Welfare in the Cabinet of Anders Fogh Rasmussen III. She held this post until April 2009, when Denmark changed prime ministers to Lars Løkke Rasmussen.

Eisenhower Fellowships selected Karen Jespersen in 1987 to represent Denmark. In the book Islamister og Naivister: et anklageskrift (Islamists and Naivists: a bill of indictment), which she wrote together with her husband, political commentator Ralf Pittelkow, she warns of an underestimation of the Islamist threat.

On 14 January 2007, she declared that she was no longer a member of the Social Democrats. In a press release on 1 February 2007, the Liberal Party Venstre announced that Karen Jespersen had joined the party and would be a candidate for parliament (Folketinget) in the next election. 18 June 2015 was her last day in the Folketing.
References

Category:1947 births
Category:Living people
Category:Danish women journalists
Category:Danish Interior Ministers
Category:Politicians from Copenhagen
Category:Members of the Folketing
Category:Government ministers of Denmark
Category:Venstre (Denmark) politicians
Category:Social Democrats (Denmark) politicians
Category:Danish journalists
Category:21st-century Danish politicians
Category:21st-century Danish women politicians
Category:Women government ministers of Denmark
Category:Women members of the Folketing
Category:Female interior ministers
The 'child of a prostitute' story is about far more than Duterte's foul mouth

Philippine President Rodrigo Duterte is sorry he offended President Obama — kind of. He now "laments" that calling Obama a "child of a prostitute" stirred up so much controversy.

On Monday, Duterte lashed out at the United States for raising concerns about a drug war that has killed 2,400 Filipinos. The United States responded by canceling a meeting. Today, Duterte tried to walk the remark back. "Not personal," he said.

The incident, of course, is making headlines. Though Duterte is fond of swearing — the man swore at the pope — it's not every day that you hear a sitting president insult a counterpart. It's certainly not every day that you hear a sitting president say the words "child of a prostitute."

But the story is about far more than swear words — for two key reasons.

[Nearly 2,000 have died in Duterte's 'war on drugs' in the Philippines — one was a 5-year-old]

First, this is about the drug war, not Duterte's language. When Duterte was running for office, he promised an all-out war on drugs. What he has delivered is a war on suspected drug users, dealers and their families. An estimated 2,400 people have been killed in two months. The wave of extrajudicial and vigilante killings is ravaging the Philippines. Recent victims include a 4-year-old girl out to get popcorn with her father, and a 5-year-old shot to death in her family's store.

Duterte is not especially keen on discussing human rights — he has said as much. Now, because he cursed Obama's mother, he doesn't have to; instead of apologizing for orchestrating killings, he can apologize for his foul mouth.

Second, U.S.-Philippine ties are no sideshow. Duterte may find it funny to use an anti-gay slur to refer to the U.S.
ambassador and to insult Obama's mother, but his comments play to a strong strain of anti-U.S. sentiment, sentiment that could shift the balance of power in the South China Sea.

The Philippines is a former U.S. colony. About 25 years ago, Filipino politicians fought to expel U.S. forces, vowing to free the country from foreign control. Now, with China pressing its claims to most of the South China Sea, some, though not all, want U.S. forces back.

An agreement backed by former president Benigno Aquino III, the Enhanced Defense Cooperation Agreement, or EDCA, would put more U.S. ships at Philippine ports. China, of course, is not eager for a U.S. return. Beijing was furious that Aquino took the South China Sea dispute to the Permanent Court of Arbitration in The Hague, and the country is pushing hard for the Philippines to set aside the ruling and settle the matter in bilateral China-Philippine talks. Duterte has shown some willingness to work with Beijing.

Duterte's insults may seem trivial. But when they prompt the cancellation of high-level U.S.-Philippine meetings, to China's delight, they are anything but.

Reviewed by Maathavan on 05:56
The people have spoken—and they are feelin’ the Bern. At midnight on Sunday, Time magazine cut off voting for its annual Person of the Year poll. In the number one spot, with 10.2 percent of the overall vote, was Democratic presidential candidate Bernie Sanders. It’s the first time in history that a presidential candidate has won the poll before being elected, and only three U.S. presidents have made the number one spot: Franklin D. Roosevelt, Ronald Reagan, and Barack Obama. Though the magazine’s editors will make the final selection for the official Person of the Year honor, it’s clear that Sanders’ skyrocketing popularity is only on the rise. Though Hillary Clinton currently holds an average 24.5 percent lead in the national polls, the former secretary of state placed low on the Time poll list, at number 29—just below Triple Crown-winning horse American Pharoah. Sanders, an Independent senator from Vermont, beat some of the world’s most beloved figures in the poll, with twice as many votes as second-place winner Malala Yousafzai. Pope Francis trailed slightly behind, with 3.7 percent of the votes. Even the vague nomination of “refugees” garnered 3 percent of the overall votes. The lowest ranking members of the poll were largely representative of the conservative right: GOP candidates Carly Fiorina, Jeb Bush, and Ted Cruz, as well as anti-gay-marriage Kentucky county clerk Kim Davis were all in the bottom 10. The partisan disconnect between the poll’s high and low contenders may say more about the demographic of voters than the worthiness of those being ranked. According to Time, Person of the Year isn’t necessarily a popularity contest but a choice of “the person Time believes most influenced the news this year, for better or worse.” That means Kim Davis, with her impact on 2015’s same-sex marriage news, could still be in the running for the magazine’s cover. 
The winner of Time’s Person of the Year award will be announced this Wednesday morning on the Today show. As of press time, Sanders has not responded to his poll victory with a statement. Illustration by Max Fleishman
For black North Carolinians, the Great Recession never ended. From 2009 to 2014, black residents’ incomes plummeted 21 percent in Winston-Salem, 17 percent in Raleigh, and 15 percent in Charlotte and Greensboro. North Carolinians of other races aren’t faring much better. In 2015, the average North Carolina household earned $400 less than it did in 2009, after adjusting for inflation.

Wages are stagnant or declining because blue-collar North Carolinians face excessive competition from foreign labor. By striking fairer trade deals and lowering immigration, elected leaders could tighten the labor market. Employers would have to boost wages to attract workers. Many companies have shipped jobs overseas to take advantage of cheaper labor. From 2001 to 2011, America lost 3.2 million jobs to China alone. North Carolina – a former powerhouse in textile manufacturing – has suffered more from outsourcing than any other state, according to the Labor Department. To add insult to injury, many employers pass over local workers in favor of cheaper immigrant laborers.

Two dozen senior U.S. intelligence and military officers – who tried to warn George W. Bush before the Iraq war that those pushing war were lying – write: Our U.S. Army contacts in the area have told us this is not what happened. There was no Syrian “chemical weapons attack.” Instead, a Syrian aircraft bombed an al-Qaeda-in-Syria ammunition depot that turned out to be full of noxious chemicals, and a strong wind blew the chemical-laden cloud over a nearby village where many consequently died.

****************************

Former U.N. weapons inspector Scott Ritter warned before the start of the Iraq war that claims that Saddam Hussein possessed weapons of mass destruction were false. Sunday, Ritter wrote that current claims that the leader of Syria launched a chemical weapons attack were likewise false: “those who hated him still hate him while those who supported him now also hate him”.
The latest US cruise missile attack on the Syrian airbase is an extremely important event in so many ways that it is important to examine it in some detail. I will try to do this today in the hope of shedding some light on a rather bizarre attack which will nevertheless have profound consequences. But first, let’s begin by looking at what actually happened.

One hundred and fifty-two years ago, April 9, 1865 was a Palm Sunday just as today, and in the central part of war-torn Virginia, a major turning point occurred in American history. General Robert E. Lee, that “chevalier sans peur” — that knight without fear — surrendered the tattered remnants of the proud Army of Northern Virginia to General Ulysses S. Grant, setting in motion the end phase of the War for Southern Independence.

That war was in reality not a “civil war,” that is, it was not a war between two aggrieved parties within the American nation. Rather, it was a war between two ideas of government, and, in reality, two ideas of history and progress. For the North, which now controlled the Federal government, it was a war to suppress what was seen as a rebellion against constituted national authority. For the states of the Southern Confederacy, it was a defense of their inherited and inherent rights under the old Constitution of 1787, rights that had never been ceded to the Federal government. And, more, it became for them a Second War for Independence against an arbitrary and overreaching government that had gravely violated that Constitution.

China has flexed its military muscle on state television as tensions escalate between the US and North Korea. China Central Television (CCTV) yesterday revealed footage of the country's various missiles in a daily military programme. According to media, one of the weapons featured in the programme was the DF-21 missile. The anti-ship ballistic missile boasts a firing distance of up to 1,926 miles (3,100km) and has been dubbed 'the killer of aircraft carriers'.
Trey Gowdy’s office had no official statement, but his top aide, Mandy Gonsales, said that the committee isn’t happy with the circumstances of McGill’s death or the fact that his own doctor was summoned to pronounce him dead. Abedin says she called the private doctor because she hoped he could save her friend. There will be no autopsy on the body, which will be cremated before sunset in accordance with his religion. McGill is the only son of deceased parents and has no family. No services are planned.

Just days after being summoned to appear before Trey Gowdy’s congressional committee to testify about Hillary Clinton’s email server, one of her aides was found dead at home of “natural causes.” Johnston Wilson McGill, 34, was pronounced dead on his couch by a private doctor after suffering an apparent heart attack. A spokesman for Clinton’s former campaign said that Huma Abedin, Clinton’s deputy campaign coordinator, found McGill when she stopped by for coffee to discuss Clinton’s plans for a run for Mayor of New York. Abedin told police in a statement that McGill suffered from an abnormal heart arrhythmia and that his doctor had always said a sudden and massive cardiac event was a possibility. She also told her publicist to make sure that the whole world knew that he was planning to cooperate with the congressional panel.

President Trump signed into law the first major national pro-life bill in more than a decade, freeing states to withhold federal family planning money from Planned Parenthood and other clinics that also perform abortions. The Obama administration had issued a last-minute rule trying to prevent GOP-led states from discriminating against abortion providers, insisting that all women’s health clinics be treated the same. But Republicans, aided by Mr. Trump and Vice President Mike Pence, have now erased that rule.

A federal complaint was unsealed today charging Candace Marie Claiborne, 60, of Washington, D.C., and an employee of the U.S.
Department of State, with obstructing an official proceeding and making false statements to the FBI, both felony offenses, for allegedly concealing numerous contacts that she had over a period of years with foreign intelligence agents.

The charges were announced by Acting Assistant Attorney General Mary B. McCord for National Security, U.S. Attorney Channing D. Phillips of the District of Columbia and Assistant Director in Charge Andrew W. Vale of the FBI’s Washington Field Office.

“Candace Marie Claiborne is a U.S. State Department employee who possesses a Top Secret security clearance and allegedly failed to report her contacts with Chinese foreign intelligence agents who provided her with thousands of dollars of gifts and benefits,” said Acting Assistant Attorney General McCord. “Claiborne used her position and her access to sensitive diplomatic data for personal profit. Pursuing those who imperil our national security for personal gain will remain a key priority of the National Security Division.”

The House and Senate Intelligence Committees are expanding their investigations into former National Security Adviser Susan Rice's alleged "unmasking" of U.S. persons who were incidentally collected in surveillance of foreign officials. An unnamed member of the House Intelligence Committee confirmed that Rice is now under "a full-blown investigation," Fox News reported on Wednesday. "We will be performing an accounting of all unmasking for political purposes focused on the previous White House administration. This is now a full-blown investigation," the committee member said.

A Huffington Post blogger proposed Thursday to deny white men the right to vote for 20 years to correct the wrongs they afflicted on the world. Shelley Garland, an MA philosophy student, pondered taking away white men’s voting powers as punishment for their “toxic white masculinity” in an op-ed.
Garland argues that if it hadn’t been for white men, President Donald Trump wouldn’t have been elected and the U.K. would still be a part of the E.U. The only way to fix this, Garland reasons, is to take away white men’s voting rights for about twenty years.

The new commander of South Korea's Marine Corps took office on Thursday calling on his 30,000 troops to be ready to "mercilessly retaliate" against North Korea's provocations. In a change-of-commander ceremony, Lt. Gen. Jun Jin-goo pointed out that the elite forces have played a key role in the front-line defense of the country. "The Marine Corps has protected places most difficult to defend but should be done so at all costs" from Baengnyeong Island near the western sea border with North Korea to Pohang, Ulleung Island and Jeju Island, he said in his speech at the event held at the headquarters of the Marine Corps in Hwaseong, Gyeonggi Province.
The Russian Defense Ministry said the day after the initial chemical weapons release that an airstrike near Khan Shaykhun was carried out by Syrian aircraft, struck a terrorist warehouse that stored chemical weapons slated for delivery to Iraq. Of course, the propaganda battle is not over so what we need now is some YouTube clip to 'prove' what the US coalition did. Abandoning his tough talk on China and reversing himself on several other campaign themes, President Trump’s 12th week in office could go down as the moment he showed himself to be another establishment Republican, not the unconventional crockery-smashing raging bull he played on the stump. In the past week, Mr. Trump ordered missile strikes against the Syrian military after telling voters during the campaign that the U.S. was involved in too many military operations overseas.Mr. Trump proclaimed Wednesday that NATO is no longer obsolete, after questioning the U.S. commitment to the alliance on the campaign trail. Also this week, Mr. Trump declared that he will not label China as a currency manipulator, after campaigning relentlessly on a promise to do just that. ` Now, some 30 days after Napolitano broke the story, CNN seems to have just confirmed it: British and other European intelligence agencies intercepted communications between associates of Donald Trump and Russian officials and other Russian individuals during the campaign and passed on those communications to their US counterparts, US congressional and law enforcement and US and European intelligence sources tell CNN. The communications were captured during routine surveillance of Russian officials and other Russians known to western intelligence. 
British and European intelligence agencies, including GCHQ, the British intelligence agency responsible for communications surveillance, were not proactively targeting members of the Trump team but rather picked up these communications during what's known as "incidental collection," these sources tell CNN. The European intelligence agencies detected multiple communications over several months between the Trump associates and Russian individuals -- and passed on that intelligence to the US. The US and Britain are part of the so-called "Five Eyes" agreement (along with Canada, Australia and New Zealand), which calls for open sharing among member nations of a broad range of intelligence.

Mr. Graham had predicted Mr. Trump’s about-face since last year, when as president-elect he intervened to save hundreds of manufacturing jobs at the Carrier air conditioning plant in Indiana.

The bank said it sends any surplus from the interest and fees it assesses back to the Treasury, resulting in a $5.6 billion profit for taxpayers since fiscal 2007. That appeared to help sway the president.

President Trump’s embrace of the Export-Import Bank is a major blow to conservatives, who had been on the verge of nixing what they — and Mr. Trump, until now — called a sop to wealthy corporations. During the election campaign, Mr. Trump dismissed the obscure lending agency as “featherbedding” for politicians and huge companies that don’t need it, enthusing opponents who had squeezed its lending powers and said it should die off. But the president now says he is convinced that the corporate welfare produces jobs — an about-face that irked conservatives who have been fighting for years to end the loan program.
“Unless the Trump administration has reforms that can get Ex-Im out of the business of picking winners and losers with taxpayer dollars, then it will be difficult to see any upside in this,” said Doug Sachtleben, a spokesman for the Club for Growth, a conservative group that urged Congress to let the bank expire.

Three weeks before voters head to the polls to choose the next president of France, all 11 first-round candidates lined up in a large semicircle for a nationally televised debate that quickly dissolved into a cacophony of insults and shouts that spun rapidly out of control. For an American viewer, it was a Gallic clone of our own Republican primary debates a year before.

A week after this televised contest, France's leading daily newspaper, Le Monde, asked, "What would the first months of an Emmanuel Macron presidency look like?" -- effectively baptizing one of the two leading candidates the winner even before the first ballot is cast. It appeared to be a form of wishful thinking, not unlike the broad assumptions of most American newspapers in October that Hillary Clinton was the presumptive heir to the presidency of the United States -- not Donald Trump.

Remembrance

To die for one’s country is not only an act of bravery, it is THE act of bravery. For soldiers, it is just an extension of their military career, a part of their duty. As leaders have asked their soldiers to sacrifice themselves for the good of the society, it is only right for leaders to go through the same motion. They should practice what they have preached. As war is seen as a noble act, tu sat (suicide) serves as redemption in case of defeat. It is also a way to tell the enemy: “You might have won the battle/war but you don’t deserve to win because you don’t have the chinh nghia (just cause).” And it is not only just cause: it is the moral belief that the cause they are fighting for deserves their total sacrifice.
Core Creek Militia
==============================
My sixth great grandfather, his wife, and five of his six children were killed in battle with the Tuscarora Indians at Core Creek, NC.

The Seven Blackbirds
==============================
My third great grandfather was an Ensign in the Revolutionary War, and saved his unit's flag after being wounded at the Battle of Brandywine. He was also at Kingston (Kinston), Wilmington, Charleston, Two Sisters and Augusta. He was at the defeat at Brier Creek and also Bee Creek.

Requiem Aeternam - Eternal Rest Grant unto Them
==============================
My second great grandfather was killed in action on May 3, 1863 at the Battle of Chancellorsville.
=============================
My great grandfather and great uncle knew all the men in the "Civil War Requiem" video as they were part of the 53rd NC which was the sole unit defending Fort Mahone. (Fort Mahone was named "Fort Damnation" by the Yankees)

*Handpicked men of the 53rd (My great grandfather was one of these) made the final, night assault at Petersburg in an attempt to break Grant's line. This was against Fort Stedman which was a few miles to the slight northeast. They initially succeeded, but reinforcements drove them back. This video is made from photographs which were taken the day after the 53rd evacuated the lines the night before to begin the retreat to Appomattox. I have many more pictures taken by the same photographer, one of these shows a 14 year old boy and the other is the famous picture of the blond, handsome soldier with his musket.
===========================
*General Gordon promised the men a gold medal and 30 days leave if they accomplished their task and many years after the War my great grandfather wrote General Gordon, who was then governor of Georgia about this incident. They exchanged several letters which I have framed. See first link below.
===========================
*The Attack On Fort Stedman
============================
"His Colored Friends"
============================
Lee's Surrender
=============================
My Black NC Kinfolks
============================
Punished For Being Caught!

Great Grandfather Koonce
He was a drummer boy in the WBTS, survived the War only to die a few years later. He was caught in an ice storm on his way home, but instead of seeking shelter, continued on his horse until the end. His clothes had to be cut off and he died a few days later.
{ "pile_set_name": "Pile-CC" }
Tissue microarray technology for high-throughput molecular profiling of cancer. Tissue microarray (TMA) technology allows rapid visualization of molecular targets in thousands of tissue specimens at a time, either at the DNA, RNA or protein level. The technique facilitates rapid translation of molecular discoveries to clinical applications. By revealing the cellular localization, prevalence and clinical significance of candidate genes, TMAs are ideally suitable for genomics-based diagnostic and drug target discovery. TMAs have a number of advantages compared with conventional techniques. The speed of molecular analyses is increased by more than 100-fold, precious tissues are not destroyed and a very large number of molecular targets can be analyzed from consecutive TMA sections. The ability to study archival tissue specimens is an important advantage as such specimens are usually not applicable in other high-throughput genomic and proteomic surveys. Construction and analysis of TMAs can be automated, increasing the throughput even further. Most of the applications of the TMA technology have come from the field of cancer research. Examples include analysis of the frequency of molecular alterations in large tumor materials, exploration of tumor progression, identification of predictive or prognostic factors and validation of newly discovered genes as diagnostic and therapeutic targets.
{ "pile_set_name": "PubMed Abstracts" }
Max Georg von Twickel

Max Georg von Twickel (22 August 1926 – 28 November 2013) was a German Roman Catholic bishop. Born at Havixbeck, von Twickel was ordained in 1952 for the Roman Catholic Diocese of Münster. He was also named titular bishop of Lugura and auxiliary bishop of the Münster Diocese in 1973; he retired in 2001.

References

Category:1926 births
Category:2013 deaths
Category:German Roman Catholic titular bishops
Category:Place of death missing
Category:Auxiliary bishops
{ "pile_set_name": "Wikipedia (en)" }
The underlying causes of glaucoma are not fully understood. However, it is known that elevated intraocular pressure is one of the symptoms associated with the development of glaucoma. Elevations of intraocular pressure can ultimately lead to impairment or loss of normal visual function due to damage to the optic nerve. It is also known that the elevated intraocular pressure is caused by an excess of fluid (i.e., aqueous humor) within the eye. The excess intraocular fluid is believed to result from blockage or impairment of the normal drainage of fluid from the eye via the trabecular meshwork. The current drug therapies for treating glaucoma attempt to control intraocular pressure by means of increasing the drainage or "outflow" of aqueous humor from the eye or decreasing the production or "inflow" of aqueous humor by the ciliary processes of the eye. In some cases, patients become refractory to drug therapy. In other cases, the use of drug therapy alone is not sufficient to adequately control intraocular pressure, particularly if there is a severe blockage of the normal passages for the outflow of aqueous humor. Thus, some patients require surgical intervention to correct the impaired outflow of aqueous humor and thereby normalize or at least control their intraocular pressure. The outflow of aqueous humor can be improved by means of intraocular surgical procedures known to those skilled in the art as trabeculectomy procedures. These procedures are collectively referred to herein as "glaucoma filtration surgery." The procedures utilized in glaucoma filtration surgery generally involve the creation of a fistula to promote the drainage of aqueous humor into a surgically prepared filtration bleb. Alternatively, filtration devices have been used to shunt aqueous humor via a cannula from the anterior chamber into a dispersing device implanted beneath a surgically created bleb. A number of designs for filtration implants are known. 
See, for example, Prata et al., Ophthalmol. 102:894-904 (1995) which reviews a variety of available filtration implants made from polypropylene, polymethylmethacrylate or silicone materials. See also, Hoskins et al., Ophthalmic Surgery 23:702-707 (1992). Wound fibroplasia is a common cause of failure for glaucoma filtration devices. The fibroplasia results in encapsulation of the device, limiting aqueous humor outflow. There is a need for an improved glaucoma filtration device material which exhibits flexibility, is resistant to bioerosion and tissue adhesion, and does not elicit a significant immune response.
{ "pile_set_name": "USPTO Backgrounds" }
/* This source file must have a .cpp extension so that all C++ compilers
   recognize the extension without flags.  Borland does not know .cxx for
   example.  */
#ifndef __cplusplus
# error "A C compiler has been selected for C++."
#endif

/* Version number components: V=Version, R=Revision, P=Patch
   Version date components:   YYYY=Year, MM=Month,   DD=Day  */

#if defined(__COMO__)
# define COMPILER_ID "Comeau"
  /* __COMO_VERSION__ = VRR */
# define COMPILER_VERSION_MAJOR DEC(__COMO_VERSION__ / 100)
# define COMPILER_VERSION_MINOR DEC(__COMO_VERSION__ % 100)

#elif defined(__INTEL_COMPILER) || defined(__ICC)
# define COMPILER_ID "Intel"
  /* __INTEL_COMPILER = VRP */
# define COMPILER_VERSION_MAJOR DEC(__INTEL_COMPILER/100)
# define COMPILER_VERSION_MINOR DEC(__INTEL_COMPILER/10 % 10)
# define COMPILER_VERSION_PATCH DEC(__INTEL_COMPILER   % 10)
# if defined(__INTEL_COMPILER_BUILD_DATE)
  /* __INTEL_COMPILER_BUILD_DATE = YYYYMMDD */
#  define COMPILER_VERSION_TWEAK DEC(__INTEL_COMPILER_BUILD_DATE)
# endif

#elif defined(__PATHCC__)
# define COMPILER_ID "PathScale"
# define COMPILER_VERSION_MAJOR DEC(__PATHCC__)
# define COMPILER_VERSION_MINOR DEC(__PATHCC_MINOR__)
# if defined(__PATHCC_PATCHLEVEL__)
#  define COMPILER_VERSION_PATCH DEC(__PATHCC_PATCHLEVEL__)
# endif

#elif defined(__clang__)
# define COMPILER_ID "Clang"
# define COMPILER_VERSION_MAJOR DEC(__clang_major__)
# define COMPILER_VERSION_MINOR DEC(__clang_minor__)
# define COMPILER_VERSION_PATCH DEC(__clang_patchlevel__)

#elif defined(__BORLANDC__) && defined(__CODEGEARC_VERSION__)
# define COMPILER_ID "Embarcadero"
# define COMPILER_VERSION_MAJOR HEX(__CODEGEARC_VERSION__>>24 & 0x00FF)
# define COMPILER_VERSION_MINOR HEX(__CODEGEARC_VERSION__>>16 & 0x00FF)
# define COMPILER_VERSION_PATCH HEX(__CODEGEARC_VERSION__     & 0xFFFF)

#elif defined(__BORLANDC__)
# define COMPILER_ID "Borland"
  /* __BORLANDC__ = 0xVRR */
# define COMPILER_VERSION_MAJOR HEX(__BORLANDC__>>8)
# define COMPILER_VERSION_MINOR HEX(__BORLANDC__ & 0xFF)

#elif defined(__WATCOMC__)
# define COMPILER_ID "Watcom"
  /* __WATCOMC__ = VVRR */
# define COMPILER_VERSION_MAJOR DEC(__WATCOMC__ / 100)
# define COMPILER_VERSION_MINOR DEC(__WATCOMC__ % 100)

#elif defined(__SUNPRO_CC)
# define COMPILER_ID "SunPro"
# if __SUNPRO_CC >= 0x5100
   /* __SUNPRO_CC = 0xVRRP */
#  define COMPILER_VERSION_MAJOR HEX(__SUNPRO_CC>>12)
#  define COMPILER_VERSION_MINOR HEX(__SUNPRO_CC>>4 & 0xFF)
#  define COMPILER_VERSION_PATCH HEX(__SUNPRO_CC    & 0xF)
# else
   /* __SUNPRO_CC = 0xVRP */
#  define COMPILER_VERSION_MAJOR HEX(__SUNPRO_CC>>8)
#  define COMPILER_VERSION_MINOR HEX(__SUNPRO_CC>>4 & 0xF)
#  define COMPILER_VERSION_PATCH HEX(__SUNPRO_CC    & 0xF)
# endif

#elif defined(__HP_aCC)
# define COMPILER_ID "HP"
  /* __HP_aCC = VVRRPP */
# define COMPILER_VERSION_MAJOR DEC(__HP_aCC/10000)
# define COMPILER_VERSION_MINOR DEC(__HP_aCC/100 % 100)
# define COMPILER_VERSION_PATCH DEC(__HP_aCC     % 100)

#elif defined(__DECCXX)
# define COMPILER_ID "Compaq"
  /* __DECCXX_VER = VVRRTPPPP */
# define COMPILER_VERSION_MAJOR DEC(__DECCXX_VER/10000000)
# define COMPILER_VERSION_MINOR DEC(__DECCXX_VER/100000  % 100)
# define COMPILER_VERSION_PATCH DEC(__DECCXX_VER         % 10000)

#elif defined(__IBMCPP__)
# if defined(__COMPILER_VER__)
#  define COMPILER_ID "zOS"
# else
#  if __IBMCPP__ >= 800
#   define COMPILER_ID "XL"
#  else
#   define COMPILER_ID "VisualAge"
#  endif
   /* __IBMCPP__ = VRP */
#  define COMPILER_VERSION_MAJOR DEC(__IBMCPP__/100)
#  define COMPILER_VERSION_MINOR DEC(__IBMCPP__/10 % 10)
#  define COMPILER_VERSION_PATCH DEC(__IBMCPP__    % 10)
# endif

#elif defined(__PGI)
# define COMPILER_ID "PGI"
# define COMPILER_VERSION_MAJOR DEC(__PGIC__)
# define COMPILER_VERSION_MINOR DEC(__PGIC_MINOR__)
# if defined(__PGIC_PATCHLEVEL__)
#  define COMPILER_VERSION_PATCH DEC(__PGIC_PATCHLEVEL__)
# endif

#elif defined(_CRAYC)
# define COMPILER_ID "Cray"
# define COMPILER_VERSION_MAJOR DEC(_RELEASE)
# define COMPILER_VERSION_MINOR DEC(_RELEASE_MINOR)

#elif defined(__TI_COMPILER_VERSION__)
# define COMPILER_ID "TI"
  /* __TI_COMPILER_VERSION__ = VVVRRRPPP */
# define COMPILER_VERSION_MAJOR DEC(__TI_COMPILER_VERSION__/1000000)
# define COMPILER_VERSION_MINOR DEC(__TI_COMPILER_VERSION__/1000   % 1000)
# define COMPILER_VERSION_PATCH DEC(__TI_COMPILER_VERSION__        % 1000)

#elif defined(__SCO_VERSION__)
# define COMPILER_ID "SCO"

#elif defined(__GNUC__)
# define COMPILER_ID "GNU"
# define COMPILER_VERSION_MAJOR DEC(__GNUC__)
# define COMPILER_VERSION_MINOR DEC(__GNUC_MINOR__)
# if defined(__GNUC_PATCHLEVEL__)
#  define COMPILER_VERSION_PATCH DEC(__GNUC_PATCHLEVEL__)
# endif

#elif defined(_MSC_VER)
# define COMPILER_ID "MSVC"
  /* _MSC_VER = VVRR */
# define COMPILER_VERSION_MAJOR DEC(_MSC_VER / 100)
# define COMPILER_VERSION_MINOR DEC(_MSC_VER % 100)
# if defined(_MSC_FULL_VER)
#  if _MSC_VER >= 1400
    /* _MSC_FULL_VER = VVRRPPPPP */
#   define COMPILER_VERSION_PATCH DEC(_MSC_FULL_VER % 100000)
#  else
    /* _MSC_FULL_VER = VVRRPPPP */
#   define COMPILER_VERSION_PATCH DEC(_MSC_FULL_VER % 10000)
#  endif
# endif
# if defined(_MSC_BUILD)
#  define COMPILER_VERSION_TWEAK DEC(_MSC_BUILD)
# endif

/* Analog VisualDSP++ >= 4.5.6 */
#elif defined(__VISUALDSPVERSION__)
# define COMPILER_ID "ADSP"
  /* __VISUALDSPVERSION__ = 0xVVRRPP00 */
# define COMPILER_VERSION_MAJOR HEX(__VISUALDSPVERSION__>>24)
# define COMPILER_VERSION_MINOR HEX(__VISUALDSPVERSION__>>16 & 0xFF)
# define COMPILER_VERSION_PATCH HEX(__VISUALDSPVERSION__>>8  & 0xFF)

/* Analog VisualDSP++ < 4.5.6 */
#elif defined(__ADSPBLACKFIN__) || defined(__ADSPTS__) || defined(__ADSP21000__)
# define COMPILER_ID "ADSP"

/* IAR Systems compiler for embedded systems.
   http://www.iar.com */
#elif defined(__IAR_SYSTEMS_ICC__ ) || defined(__IAR_SYSTEMS_ICC)
# define COMPILER_ID "IAR"

#elif defined(_SGI_COMPILER_VERSION) || defined(_COMPILER_VERSION)
# define COMPILER_ID "MIPSpro"
# if defined(_SGI_COMPILER_VERSION)
   /* _SGI_COMPILER_VERSION = VRP */
#  define COMPILER_VERSION_MAJOR DEC(_SGI_COMPILER_VERSION/100)
#  define COMPILER_VERSION_MINOR DEC(_SGI_COMPILER_VERSION/10 % 10)
#  define COMPILER_VERSION_PATCH DEC(_SGI_COMPILER_VERSION    % 10)
# else
   /* _COMPILER_VERSION = VRP */
#  define COMPILER_VERSION_MAJOR DEC(_COMPILER_VERSION/100)
#  define COMPILER_VERSION_MINOR DEC(_COMPILER_VERSION/10 % 10)
#  define COMPILER_VERSION_PATCH DEC(_COMPILER_VERSION    % 10)
# endif

/* This compiler is either not known or is too old to define an
   identification macro.  Try to identify the platform and guess that
   it is the native compiler.  */
#elif defined(__sgi)
# define COMPILER_ID "MIPSpro"

#elif defined(__hpux) || defined(__hpua)
# define COMPILER_ID "HP"

#else /* unknown compiler */
# define COMPILER_ID ""
#endif

/* Construct the string literal in pieces to prevent the source from
   getting matched.  Store it in a pointer rather than an array
   because some compilers will just produce instructions to fill the
   array rather than assigning a pointer to a static array.  */
char const* info_compiler = "INFO" ":" "compiler[" COMPILER_ID "]";

/* Identify known platforms by name.  */
#if defined(__linux) || defined(__linux__) || defined(linux)
# define PLATFORM_ID "Linux"

#elif defined(__CYGWIN__)
# define PLATFORM_ID "Cygwin"

#elif defined(__MINGW32__)
# define PLATFORM_ID "MinGW"

#elif defined(__APPLE__)
# define PLATFORM_ID "Darwin"

#elif defined(_WIN32) || defined(__WIN32__) || defined(WIN32)
# define PLATFORM_ID "Windows"

#elif defined(__FreeBSD__) || defined(__FreeBSD)
# define PLATFORM_ID "FreeBSD"

#elif defined(__NetBSD__) || defined(__NetBSD)
# define PLATFORM_ID "NetBSD"

#elif defined(__OpenBSD__) || defined(__OPENBSD)
# define PLATFORM_ID "OpenBSD"

#elif defined(__sun) || defined(sun)
# define PLATFORM_ID "SunOS"

#elif defined(_AIX) || defined(__AIX) || defined(__AIX__) || defined(__aix) || defined(__aix__)
# define PLATFORM_ID "AIX"

#elif defined(__sgi) || defined(__sgi__) || defined(_SGI)
# define PLATFORM_ID "IRIX"

#elif defined(__hpux) || defined(__hpux__)
# define PLATFORM_ID "HP-UX"

#elif defined(__HAIKU__)
# define PLATFORM_ID "Haiku"

#elif defined(__BeOS) || defined(__BEOS__) || defined(_BEOS)
# define PLATFORM_ID "BeOS"

#elif defined(__QNX__) || defined(__QNXNTO__)
# define PLATFORM_ID "QNX"

#elif defined(__tru64) || defined(_tru64) || defined(__TRU64__)
# define PLATFORM_ID "Tru64"

#elif defined(__riscos) || defined(__riscos__)
# define PLATFORM_ID "RISCos"

#elif defined(__sinix) || defined(__sinix__) || defined(__SINIX__)
# define PLATFORM_ID "SINIX"

#elif defined(__UNIX_SV__)
# define PLATFORM_ID "UNIX_SV"

#elif defined(__bsdos__)
# define PLATFORM_ID "BSDOS"

#elif defined(_MPRAS) || defined(MPRAS)
# define PLATFORM_ID "MP-RAS"

#elif defined(__osf) || defined(__osf__)
# define PLATFORM_ID "OSF1"

#elif defined(_SCO_SV) || defined(SCO_SV) || defined(sco_sv)
# define PLATFORM_ID "SCO_SV"

#elif defined(__ultrix) || defined(__ultrix__) || defined(_ULTRIX)
# define PLATFORM_ID "ULTRIX"

#elif defined(__XENIX__) || defined(_XENIX) || defined(XENIX)
# define PLATFORM_ID "Xenix"

#else /* unknown platform */
# define PLATFORM_ID ""
#endif

/* For windows compilers MSVC and Intel we can determine
   the architecture of the compiler being used.  This is because
   the compilers do not have flags that can change the architecture,
   but rather depend on which compiler is being used
*/
#if defined(_WIN32) && defined(_MSC_VER)
# if defined(_M_IA64)
#  define ARCHITECTURE_ID "IA64"
# elif defined(_M_X64) || defined(_M_AMD64)
#  define ARCHITECTURE_ID "x64"
# elif defined(_M_IX86)
#  define ARCHITECTURE_ID "X86"
# elif defined(_M_ARM)
#  define ARCHITECTURE_ID "ARM"
# elif defined(_M_MIPS)
#  define ARCHITECTURE_ID "MIPS"
# elif defined(_M_SH)
#  define ARCHITECTURE_ID "SHx"
# else /* unknown architecture */
#  define ARCHITECTURE_ID ""
# endif
#else
# define ARCHITECTURE_ID ""
#endif

/* Convert integer to decimal digit literals.  */
#define DEC(n)                   \
  ('0' + (((n) / 10000000)%10)), \
  ('0' + (((n) / 1000000)%10)),  \
  ('0' + (((n) / 100000)%10)),   \
  ('0' + (((n) / 10000)%10)),    \
  ('0' + (((n) / 1000)%10)),     \
  ('0' + (((n) / 100)%10)),      \
  ('0' + (((n) / 10)%10)),       \
  ('0' +  ((n) % 10))

/* Convert integer to hex digit literals.  */
#define HEX(n)             \
  ('0' + ((n)>>28 & 0xF)), \
  ('0' + ((n)>>24 & 0xF)), \
  ('0' + ((n)>>20 & 0xF)), \
  ('0' + ((n)>>16 & 0xF)), \
  ('0' + ((n)>>12 & 0xF)), \
  ('0' + ((n)>>8 & 0xF)),  \
  ('0' + ((n)>>4 & 0xF)),  \
  ('0' + ((n) & 0xF))

/* Construct a string literal encoding the version number components. */
#ifdef COMPILER_VERSION_MAJOR
char const info_version[] = {
  'I', 'N', 'F', 'O', ':',
  'c','o','m','p','i','l','e','r','_','v','e','r','s','i','o','n','[',
  COMPILER_VERSION_MAJOR,
# ifdef COMPILER_VERSION_MINOR
  '.', COMPILER_VERSION_MINOR,
#  ifdef COMPILER_VERSION_PATCH
   '.', COMPILER_VERSION_PATCH,
#   ifdef COMPILER_VERSION_TWEAK
    '.', COMPILER_VERSION_TWEAK,
#   endif
#  endif
# endif
  ']','\0'};
#endif

/* Construct the string literal in pieces to prevent the source from
   getting matched.  Store it in a pointer rather than an array
   because some compilers will just produce instructions to fill the
   array rather than assigning a pointer to a static array.  */
char const* info_platform = "INFO" ":" "platform[" PLATFORM_ID "]";
char const* info_arch = "INFO" ":" "arch[" ARCHITECTURE_ID "]";

/*--------------------------------------------------------------------------*/

int main(int argc, char* argv[])
{
  int require = 0;
  require += info_compiler[argc];
  require += info_platform[argc];
#ifdef COMPILER_VERSION_MAJOR
  require += info_version[argc];
#endif
  (void)argv;
  return require;
}
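Nothing in this translation unit ever runs in a meaningful way; the point is that the "INFO:key[value]" literals end up embedded in the compiled binary, where the build system can recover them by scanning the raw bytes. A rough sketch of that extraction step in Python (the regex and function name here are my own illustration, not the actual scanner that consumes this file):

```python
import re

# The compiled binary embeds literals such as b"INFO:compiler[GNU]".
# This pattern mirrors the key[value] shape used in the source above;
# the exact regex is an assumption for illustration only.
INFO_RE = re.compile(rb"INFO:([a-z_]+)\[([^\]\x00]*)\]")

def extract_info(data: bytes) -> dict:
    """Scan raw binary data and collect the INFO key/value pairs."""
    return {k.decode(): v.decode() for k, v in INFO_RE.findall(data)}
```

Run against the bytes of the compiled object or executable, this yields a mapping like {"compiler": "GNU", "platform": "Linux"} regardless of object format, which is why the strings are assembled in pieces in the source: so the scanner never matches the source file itself.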
{ "pile_set_name": "Github" }
A Buffer Status Report (BSR) is of great importance in an uplink system because it provides the User Equipment (UE) side information required for scheduling at a base station (evolved NodeB or eNB). In Long Term Evolution-Advanced (LTE-A) Release 8/9, a buffer status report is relatively simple because there is only one Media Access Control Protocol Data Unit (MAC PDU) in each Transmission Time Interval (TTI). However, in LTE-A Release 10, a plurality of MAC PDUs can be transmitted in one TTI due to the introduction of Carrier Aggregation (CA), i.e., a plurality of Component Carriers (CCs), and thus it is desirable to address some new issues arising with a buffer status report.
{ "pile_set_name": "USPTO Backgrounds" }
1. Technical Field

The present disclosure relates generally to a power supply apparatus and a method of operating the same, and more particularly to a power supply apparatus with input voltage detection and a method of operating the same.

2. Description of Related Art

With the development and progress of science and technology, electronic products with a wide range of different functions have gradually been developed. These electronic products not only meet different demands of users, but also provide a more convenient life for the users. Each electronic product includes various electronic components, and the voltages required by different electronic components are usually not the same. In order to provide appropriate voltage levels for normally operating each of the electronic components, a power supply or power conversion unit is used to convert the AC voltage or DC voltage into the appropriate voltage levels. In addition, in order to protect the power supply or the power conversion unit from malfunction and damage caused by abnormal power from the AC power source, a detection circuit is usually installed at the input side of the power supply or the power conversion unit.

Reference is made to FIG. 1, which is a schematic circuit block diagram of a related art input detection circuit for a power supply. The power supply is powered by an external AC power source VS, and an AC detection circuit 10A is installed at the input side of the power supply to directly detect whether the external AC power source VS is normal or abnormal. Reference is made to FIG. 2, which is a schematic view showing a failed detection of an AC detection circuit of a related art power supply apparatus. When the live wire VSL and the neutral wire VSN are both abnormal, the AC detection circuit 10A generates a signal to a protection circuit of the power supply to provide protection for the power supply.
In particular, the output system 20A, the power supply, and the AC detection circuit 10A are commonly grounded. Once either the live wire or the neutral wire is abnormal, a loop Ls is formed via the grounding of the AC detection circuit 10A and the output system 20A. Accordingly, the AC detection circuit 10A fails to detect whether the live wire VSL or the neutral wire VSN of the external AC power source VS is abnormal. Also, the AC detection circuit 10A does not generate the signal to the protection circuit of the power supply and thus fails to provide protection for the power supply. Accordingly, it is desirable to provide a power supply apparatus with input voltage detection, and a method of operating the same, that use an input detection module without any grounding, or an input detection module with an independent grounding different from that of the power supply, for detecting whether the AC power source is abnormal.
{ "pile_set_name": "USPTO Backgrounds" }
Ribavirin disposition in high-risk patients for acquired immunodeficiency syndrome. Ribavirin is a broad-spectrum antiviral drug that has in vitro activity against human immunodeficiency virus. To determine the kinetics of ribavirin, 17 symptom-free homosexual men with lymphadenopathy were studied. Single doses of ribavirin, 600, 1200, or 2400 mg, were given orally or intravenously. The plasma ribavirin concentration-time profiles were well fitted by a three-compartment open model. Ribavirin followed linear kinetics over the dose range studied. The mean 1-hour postinfusion concentrations after intravenous ribavirin, 600, 1200, and 2400 mg, were 8.0, 19.7, and 37.1 mumol/L, respectively. The mean +/- SD plasma beta-phase half-life, terminal-phase (gamma) half-life, and volume of distribution at steady state were 2.0 +/- 1.1 hours, 35.5 +/- 14.0 hours, and 647 +/- 258 L, respectively. The mean ribavirin renal clearance and total body clearance were 99 +/- 30 and 283 +/- 37 ml/min, respectively. After an oral dose of 600, 1200, and 2400 mg, the mean peak plasma ribavirin concentrations (which occurred 1.5 hours after administration) were 5.1, 9.9, and 12.6 mumol/L, respectively. The mean absorption half-life and bioavailability of ribavirin were 0.5 hour and 45%. Ribavirin had no plasma protein binding and the drug accumulated within red blood cells. In conclusion, ribavirin is incompletely absorbed from the gastrointestinal tract, its renal excretion accounts for approximately one third of the drug's elimination, and drug accumulation (greater than threefold) will result with repetitive dosing at the 6- to 8-hour dosing interval currently used.
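The greater-than-threefold accumulation quoted in the last sentence can be sanity-checked with the standard one-compartment accumulation ratio R = 1 / (1 − e^(−kτ)), where k = ln 2 / t½. This is a deliberate simplification of the paper's three-compartment model, so the numbers below are only a rough cross-check using the reported terminal-phase half-life, not the authors' own calculation:

```python
import math

def accumulation_ratio(half_life_h: float, interval_h: float) -> float:
    """One-compartment accumulation ratio R = 1 / (1 - exp(-k * tau))."""
    k = math.log(2) / half_life_h          # elimination rate constant (1/h)
    return 1.0 / (1.0 - math.exp(-k * interval_h))

# Terminal-phase half-life from the study: 35.5 h
for tau in (6, 8):
    r = accumulation_ratio(35.5, tau)
    print(f"tau = {tau} h: R = {r:.1f}")   # comfortably above 3-fold
```

Both dosing intervals give ratios well above 3, consistent with the abstract's claim; the true steady-state behavior depends on the multicompartment disposition and red-cell trapping the study describes.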
{ "pile_set_name": "PubMed Abstracts" }
Patrognathus

Patrognathus is an extinct genus of conodonts.

Use in stratigraphy

The Tournaisian, the oldest age of the Mississippian (also known as Lower Carboniferous), contains eight conodont biozones, four of which contain Patrognathus species:

the zone of Siphonodella quadruplicata and Patrognathus andersoni (upper zone of Patrognathus andersoni)
the lower zone of Patrognathus andersoni
the zone of Patrognathus variabilis
the zone of Patrognathus crassus

References

External links

Category:Ozarkodinida genera
Category:Mississippian conodonts
Category:Tournaisian life
Category:Fossil taxa described in 1969
Category:Mississippian first appearances
Category:Mississippian extinctions
{ "pile_set_name": "Wikipedia (en)" }
Q: Using mock objects without tying down unit tests

I'm currently writing a set of unit tests for a Python microblogging library, and following advice received here have begun to use mock objects to return data as if from the service (identi.ca in this case). However, surely by mocking httplib2 - the module I am using to request data - I am tying the unit tests to a specific implementation of my library, and removing the ability for them to function after refactoring (which is obviously one primary benefit of unit testing in the first place). Is there a best-of-both-worlds scenario? The only one I can think of is to set up a microblogging server to use only for testing, but this would clearly be a large amount of work.

A: You are right that if you refactor your library to use something other than httplib2, then your unit tests will break. That isn't such a horrible dependency, since when that time comes it will be a simple matter to change your tests to mock out the new library. If you want to avoid that, then write a very minimal wrapper around httplib2, and your tests can mock that. Then if you ever shift away from httplib2, you only have to change your wrapper. But notice the number of lines you have to change is the same either way, all that changes is whether they are in "test code" or "non-test code".
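One way to build the minimal wrapper the answer describes is to route all HTTP traffic through a single seam that tests can replace. The class and method names below are made up for illustration; the only real API assumed is httplib2's `Http().request()`, which returns a (response, content) pair:

```python
from unittest import mock

class HttpTransport:
    """Thin seam over the HTTP library. Only this class knows that
    httplib2 is the implementation underneath."""
    def fetch(self, url):
        import httplib2  # imported lazily, so tests never need it installed
        _response, content = httplib2.Http().request(url)
        return content

class MicroblogClient:
    """Hypothetical library class; it depends on the seam, not on httplib2."""
    def __init__(self, transport=None):
        self.transport = transport or HttpTransport()

    def raw_timeline(self, url):
        # Refactoring away from httplib2 later only touches HttpTransport.
        return self.transport.fetch(url)

# In a test, mock the seam instead of httplib2 itself:
fake = mock.Mock()
fake.fetch.return_value = b'[{"text": "hello"}]'
client = MicroblogClient(transport=fake)
assert client.raw_timeline("https://example.invalid/timeline") == b'[{"text": "hello"}]'
```

The trade-off the answer notes still holds: the seam is one more layer, but swapping HTTP libraries now changes one class instead of every test.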
{ "pile_set_name": "StackExchange" }
Q: NSString allocation and initializing

What is the difference between:

NSString *string1 = @"This is string 1.";

and

NSString *string2 = [[NSString alloc] initWithString:@"This is string 2."];

Why am I not allocating and initializing the first string, yet it still works? I thought I was supposed to allocate NSString since it is an object? In Cocoa Touch,

-(IBAction) clicked: (id)sender{
    NSString *titleOfButton = [sender titleForState:UIControlStateNormal];
    NSString *newLabelText = [[NSString alloc] initWithFormat:@"%@", titleOfButton];
    labelsText.text = newLabelText;
    [newLabelText release];
}

Why do I not allocate and initialize for the titleOfButton string? Does the method I call do that for me? Also, I'm using XCode 4, but I dislike iOS 5, and such, so I do not use ARC if that matters. Please don't say I should, I am just here to find out why this is so. Thanks!

A: The variable string1 is an NSString string literal. The compiler allocates space for it in your executable file. It is loaded into memory and initialized when your program is run. It lives as long as the app runs. You don't need to retain or release it. The lifespan of variable string2 is as long as you dictate, up to the point when you release its last reference. You allocate space for it, so you're responsible for cleaning up after it. The lifespan of variable titleOfButton is the life of the method -clicked:. That's because the method -titleForState: returns an autoreleased NSString. That string will be released automatically, once you leave the scope of the method. You don't need to create newLabelText. That step is redundant and messy. Simply set the labelsText.text property to titleOfButton:

labelsText.text = titleOfButton;

Why use properties? Because setting this retain property will increase the reference count of titleOfButton by one (that's why it's called a retain property), and so the string that is pointed to by titleOfButton will live past the end of -clicked:.
Another way to think about the use of retain in this example is that labelsText.text is "taking ownership" of the string pointed to by titleOfButton. That string will now last as long as labelsText lives (unless some other variable also takes ownership of the string).
{ "pile_set_name": "StackExchange" }
Q: How can I insert no more than 5 sentences and no more than 50 letters per sentence in C?

Before each sentence it needs to print the number of the sentence I'm writing, starting the count from one. What I mean is:

How many sentences you want to enter [1-5]? 2
sentence 1: write what you want
sentence 2: here as long as its less then 50 letters

My problem is that I don't know how to limit the number of letters without needing to insert them all. If I write for(i=0; i<50; i++) I will need to enter all 50 letters, but I need to be able to write even only 1. This is what I have done so far (note that I don't need to ask the user how many letters he wants to enter):

char text[5][50]={0};
int x=0, i=0, n=0, m=0;
printf("How many sentences you want to enter [1-5]?\n");
scanf("%d", &n);
printf("how many letters [1-50]?\n");
scanf("%d", &m);
for (x=1; x<=n; x++)// will print max of 5 sentences
{
    printf("Sentence %d: \n", x);
    for(i=0; i<m; i++)// will print max of 50 letters
    {
        scanf(" %c", &text[x][i]);
    }
}

thanks a lot for the help!

A: for(i=0; i<n; i++)
{
    if(fgets(text[i], 50, stdin) != NULL) /* reads at most 49 characters plus the null terminator */
    {
        // Do your stuff
    }
}

PS: fgets() keeps the trailing newline character in the buffer
{ "pile_set_name": "StackExchange" }
Gary Dauberman has moved up in James Wan's crew. He's written the three Annabelle movies and makes his directorial debut on Annabelle Comes Home. He also produced The Curse of La Llorona and wrote and produced The Nun. He's also co-created DC Universe's new Swamp Thing streaming series, which Wan produced via his Atomic Monster production company. Dauberman spoke with /Film by phone about Swamp Thing this week. He's about to make the press rounds again next month for Annabelle Comes Home and he also gave us updates on Salem's Lot, It: Chapter 2, the Train to Busan remake, Annabelle Comes Home and Are You Afraid Of the Dark. Swamp Thing premieres today on DC Universe.

Did you come to Swamp Thing more through James Wan than through the comics?

No, I came to Swamp Thing through Alan Moore and Bernie Wrightson. Bernie Wrightson has long been one of my heroes. I knew his artwork and I knew Swamp Thing from him and then Alan Moore's iconic run on that comic back in the '80s that I did discover later. I wasn't reading it as it was coming out but it was just an eye-opening experience reading that. It felt so different from all the other comics I was reading at the time. Being able to revisit those comics for the show, they still feel so different and fresh from anything they put out there still. It was great that they still hold up and still invoke the proper creepiness and atmosphere as when I read it for the first time. For the show, when I found out that James was doing Swamp Thing, I did whatever I could to be involved.

Were you privy to DC Universe's series Titans and Doom Patrol while you were developing Swamp Thing?

No. Titans had not come out yet while we were writing the pilot so it came out while we were making the show. Those, although I couldn't wait to see them, they didn't really inform our process in any way making the show.

Had you already researched the swamp for your movie Swamp Devil?

[Laughs] Oh man, yeah. I wrote that in one week way back when. I have not seen any of those movies but I enjoyed working on those scripts. That's funny you bring that up.

To be fair, I haven't seen it either. I just saw that you wrote another swamp movie.

It was one of those made for Syfy movies very early on when they were doing it weekly. I think it was every Saturday night or something. Those movies taught me a lot. They would say, "We have a total, we have a location, we have a cast. We need a script. Can you write something very quickly?" They really sharpened my skills in terms of working towards a deadline and working towards knowing that we're going into production. That really helped me a lot later on when we were working on TV and they need a script quickly because they're working towards production. It helped to build that muscle.

How much real science is in Swamp Thing?

Look, I'm not a scientist so you could make up anything, I'd be like, "Yeah, that sounds correct." I know through production we work very closely with consultants trying to make the events that happen in the series to be as plausible as possible and ground it. I think that's been achieved.

Does anything go on DC Universe streaming as far as language and violence? I don't know if sexuality would be an issue with Swamp Thing.

Yeah, that was one of our first questions was, "What can we get away with?" And they said, "Whatever you want, really." which was super freeing. It allowed us not to have to write around anything that normally you'd have to on network. We could just barrel straight through and if we wanted to get graphic, we could get graphic. We have a lot of body horror and gore and violence in this TV series so we were able to embrace it all. It was a very "anything goes" mentality.

So are the roots ripping people in half only the beginning?

That is only the beginning, literally and figuratively. It's one of the first things, especially when you read the comics and all that stuff. Those things are a lot of fun to write. That's one of the cool things about the swamp, right? There's a very kitchen sink aspect to the swamp. Because of its origins, we're able to explore and hit on other subgenres of horror. If we wanted to tell a haunted house supernatural beat, we felt we could get away with it. If we want to do something a little bit more body horror, we could do that. We certainly have psychological horror aspects of the show. Whatever dark corner we wanted to shine a light on we were able to do that just by the very nature of the nature of the swamp.

Is Swamp Thing a beauty and the beast story?

It's definitely something we lean into, we played around with. The cool thing about the Swamp Thing is it's an exploration of identity. There's an existential crisis happening for Alec Holland, for Abby Arcane, for a lot of the characters in the town. Beauty and the beast certainly is an aspect but it's also Alec Holland, exploring who he's become, who he is and what's the swamp becoming. So while we have the beauty and the beast component, it's only one part of the whole.

We haven't seen the last of Andy Bean, right?

Oh man, I hope not but I'll save that for some reveals. Having worked with him on It: Chapter 2, and I'd seen his early audition for Stanley, this guy brings such an energy to everything he does. He's so watchable we were also really cheerleaders for him to play Alec Holland because he brings such a level of engagement to all his performances. It's really a treat watching him.

You lost three episodes at the end of the season. Did that change the arc of the first season?

No, we knew what we were building towards and we were getting there. We felt we could accomplish that in 10 episodes so it kind of worked out. We were able to land where we wanted to land.

Are there any things from those back three you could bring back in a second season?

Yes but I won't speak specifically to those because it would spoil this season.

As a fan of Moore and Wrightson, what were the elements you just had to get into Swamp Thing?

We really like the southern Gothic feel. Mark and I talked a lot about that. We talked about the panels, Bernie and Alan, really dripped with this melodrama and really this atmosphere. His sense of dread we really liked. We knew we wanted to get some of the headier ideas in there as well because that's what's so great about the comic. It explored and asked big questions of things. We were able to do that with the show as well, ask big questions. What makes a man? What makes a monster? It was those kind of elements we wanted to bring to the show. The spirit of the comic was so different from a lot of the stuff that was out there that first and foremost with the show, we wanted to do that as well. We wanted to make this different from a lot of the stuff out there. I think we really succeeded in that regard.

Was North Carolina southern enough to capture the gothic vibe?

Oh yeah. The sets down there were amazing, and the crew and cast really drilled down on that. It was great to shoot there.

They just announced you're doing Salem's Lot. They've done that as a TV miniseries twice but never as a film before. Did you have a unique way into Salem's Lot as a feature film?

I did have a unique way into it but again, I think the book in itself is unique. Certainly now, I haven't seen a scary vampire movie in a long, long time and I'd really love to tackle that. It's one of my favorite books. It's one of my favorite Stephen King books. We felt it should have the cinematic treatment that we gave It. It was a miniseries as well. The experience of bringing that to the big screen was such a joy that I was so happy we will have the opportunity to do that for Salem's Lot.

James Wan has said he won't produce Train to Busan unless they find a really good reason to do an American remake. Have you gotten close to finding a good reason?

Yes. I won't get more into that but that's one of those movies that's so f***ing great, it's so well done, you don't want to do anything that's going to be less than. I think we're certainly getting there. It feels like there's a reason to make the American version without ruining the experience of the original.

Now we've seen the trailer to It: Chapter 2 and we're happy the kids are still involved. Could you estimate what proportion of the movie the kids are in? Maybe 10% of it?

No, I don't want to say that. I don't want to put a number on it.

Everyone, when they saw Sophia Lillis, said Jessica Chastain should play adult Bev. How big a coup was it to get her?

I always had my fingers crossed because I knew the Muschiettis had their relationship with Jessica. It was whispered about but I'm such a fan of hers. While they know her, I only knew her seeing her on the big screen so you're like really? Could this really happen? I'm sitting there as a fan and then when it really happens you're just like "Holy sh*t." It's elevated immediately with her signing on.

Do you have any other Stephen King books on your wish list?

Yeah, I have so many favorites. Salem's Lot is the only thing that's in front of me right now that I want to work on. It's been fun exploring the dark corners of that town. I'm kind of a one track mind. I don't plot too far ahead and I'm overjoyed I get to work on Salem's Lot so right now that's all there is for me. There's a number of stuff in the works that I'm not a part of that I'm very excited to see when they eventually come out.

Has Are You Afraid of the Dark shot yet?

No, that actually hasn't shot yet. That's one of those things I'm no longer a part of. I just had a different vision to make it and thought it best to part ways. That happens.

It's good to get that on record.

For sure. I don't think that's out there. It's unfortunate but as you said, that's sometimes how it goes.

Annabelle Comes Home is the first Annabelle movie that's not a prequel. Was that a different thing?

Yes, it is the first Annabelle movie that's not a prequel and it was a different thing. It presented its own sort of challenges because you're locked into everything post Warrens, right? You have The Conjuring 1, Conjuring 2, all those things that have happened. You can't and don't want to mess with any of that. You want to make sure you don't step on any toes mythology-wise. I was excited to dig in further on the Judy Warren character.

Are Ed and Lorraine more than just cameos in it?

Yeah, they're definitely part of the story. They definitely influence the story.

Any talks of more Nun movies?

That I can't speak to. It's always an ongoing conversation just because we love talking about the universe. Wouldn't it be cool if we did this? Wouldn't it be cool if we did that? There's a lot of talk about everything, evolving this universe.
Q: Python Bokeh dependencies not found

This question has been asked but not answered. The only difference is that I am on Arch Linux 64-bit. I am using Python 2.7, and the version of bokeh that got installed is 0.10.0. I followed the conda install bokeh instructions from here and did conda update conda and conda update anaconda. Still it does not work. Not only is bokeh.plotting not working, but neither is bokeh.sampledata, which leads me to believe none of it is working. Has anyone else had this problem with this or any other package and successfully solved it?

I do not know if this helps, but there are three versions of bokeh in my pkgs folder. Two of them are bokeh 0.9.0 and one of them is bokeh 0.10.0, which is the one that comes up when I call conda. In the site-packages/bokeh folder there is a plotting.py.

I tried to install it in Python 3.4 and this is what the terminal returned:

    (py34)[bob@bob anaconda]$ conda install bokeh
    Fetching package metadata: ....
    Solving package specifications: .
    Package plan for installation in environment /home/bob/anaconda/envs/py34:

    The following packages will be downloaded:

        package                    |            build
        ---------------------------|-----------------
        numpy-1.9.3                |           py34_0         5.7 MB
        pytz-2015.6                |           py34_0         173 KB
        setuptools-18.3.2          |           py34_0         346 KB
        tornado-4.2.1              |           py34_0         557 KB
        wheel-0.26.0               |           py34_1          77 KB
        jinja2-2.8                 |           py34_0         301 KB
        bokeh-0.10.0               |           py34_0         3.9 MB
        ------------------------------------------------------------
                                               Total:        10.9 MB

    The following NEW packages will be INSTALLED:

        libgfortran: 1.0-0
        openblas:    0.2.14-3
        wheel:       0.26.0-py34_1

    The following packages will be UPDATED:

        bokeh:      0.9.0-np19py34_0 --> 0.10.0-py34_0
        jinja2:     2.7.3-py34_1     --> 2.8-py34_0
        numpy:      1.9.2-py34_0     --> 1.9.3-py34_0
        pip:        7.0.3-py34_0     --> 7.1.2-py34_0
        pytz:       2015.4-py34_0    --> 2015.6-py34_0
        setuptools: 17.1.1-py34_0    --> 18.3.2-py34_0
        tornado:    4.2-py34_0       --> 4.2.1-py34_0

    Proceed ([y]/n)? y

    Fetching packages ...
    numpy-1.9.3-py 100% |##########################| Time: 0:00:00   6.21 MB/s
    pytz-2015.6-py 100% |##########################| Time: 0:00:00   1.44 MB/s
    setuptools-18. 100% |##########################| Time: 0:00:00   2.63 MB/s
    tornado-4.2.1- 100% |##########################| Time: 0:00:00   3.57 MB/s
    wheel-0.26.0-p 100% |##########################| Time: 0:00:00   1.28 MB/s
    jinja2-2.8-py3 100% |##########################| Time: 0:00:00   2.19 MB/s
    bokeh-0.10.0-p 100% |##########################| Time: 0:00:00   5.74 MB/s
    Extracting packages ...
    [      COMPLETE      ]|#############################################| 100%
    Unlinking packages ...
    [      COMPLETE      ]|#############################################| 100%
    Linking packages ...
    [      COMPLETE      ]|#############################################| 100%
    (py34)[bob@bob anaconda]$ python bokeh.py
    Traceback (most recent call last):
      File "bokeh.py", line 1, in <module>
        from bokeh import plotting
      File "/home/bob/anaconda/bokeh.py", line 1, in <module>
        from bokeh import plotting
    ImportError: cannot import name 'plotting'

A: You have a file /home/bob/anaconda/bokeh.py in your current directory, which is being imported instead of bokeh. You might look at what that file is, and whether it is really needed. If it's a file you made, it's not recommended to put things in the anaconda directory (some subdirectory of your Documents directory is a better place). It's also not really a good idea to have anaconda be your current directory.
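The failure mode in the accepted answer (a local file shadowing an installed package, so the wrong module satisfies the import) can be reproduced and diagnosed in a few lines. This sketch uses a made-up module name, shadowdemo, rather than bokeh itself:

```python
import importlib
import os
import sys
import tempfile

def diagnose_shadowing(module_name):
    """Import a module and return the file that actually satisfied the import.

    If the returned path is your working directory rather than site-packages,
    a local file is shadowing the installed package -- exactly the situation
    with /home/bob/anaconda/bokeh.py above.
    """
    module = importlib.import_module(module_name)
    return getattr(module, "__file__", None)

# Demonstration with a throwaway "shadowing" file placed on sys.path.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "shadowdemo.py"), "w") as f:
    f.write("value = 'I am the local file, not the real package'\n")

sys.path.insert(0, workdir)                  # mimics Python putting the
resolved = diagnose_shadowing("shadowdemo")  # script's directory first
print(resolved.startswith(workdir))          # True: the local file won
```

Checking `module.__file__` right after an import is usually the fastest way to confirm which file Python actually loaded.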
Q: Upload document: Default value not applied to choice column if baseFieldControl.ControlMode = SPControlMode.Display

I created a new site column, a choice column with five values to pick from in a drop-down list: 1, 2, 3, 4 (default value), 5. I then created a document content type using this column.

The problem I am currently having is that when uploading a new document, I programmatically set the ControlMode of the BaseFieldControl to SPControlMode.Display for certain users who should not be able to modify this column's value but should still be able to upload new documents to the library. On the edit form, the value that is displayed is "4", which is normal as it is the default value, but once you save, if you go have a look at the item's properties, the saved value is "1". If you run the same test using radio buttons, it doesn't even save a value, not even the first one.

So basically all I want is to be able to make a field read-only on the edit form when adding a new document, while still having the default value saved properly. Thanks for any help you can provide on this issue. Alex

A: If you want to use your BaseFieldControl in Display mode and have a value saved, you must set the value before setting the ControlMode. You may find a different workaround here.
Q: WordPress: Change main width based on active sidebar

I'm writing a WP theme, and I want the main content area to change width based on whether there's an active sidebar or not. To make this easier, I'm using Bootstrap. The problem is that the output is blank. Here's the code I'm trying to use to do the calculations (note the first branch originally assigned "9" rather than "span9", which would also have broken the layout even once the variable was populated):

    <!-- Calculate content width based on sidebars -->
    <?php if ( is_active_sidebar( 'left_sidebar' ) && !is_active_sidebar( 'right_sidebar' ) ) { $mainspan = "span9"; } ?>
    <?php if ( !is_active_sidebar( 'left_sidebar' ) && is_active_sidebar( 'right_sidebar' ) ) { $mainspan = "span9"; } ?>
    <?php if ( is_active_sidebar( 'left_sidebar' ) && is_active_sidebar( 'right_sidebar' ) ) { $mainspan = "span6"; } ?>
    <?php if ( !is_active_sidebar( 'left_sidebar' ) && !is_active_sidebar( 'right_sidebar' ) ) { $mainspan = "span12"; } ?>

The result should be loaded here, but I get a blank value:

    <main class="<?php echo $mainspan; ?>">

A: Andrewsi asked if there were any includes in the file. That led me to realize that the code itself was functional but was not working because one file was being included into the other. The IF statements were written in Header.php, which was included into Index.php, where the variable call was located. Placing the IF statements into Index.php fixed the issue.
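For what it's worth, the four branches above reduce to a lookup from two booleans to a class name. Here is the same decision table sketched in Python rather than PHP, purely to illustrate the mapping (the function name is hypothetical):

```python
def main_span(left_active, right_active):
    """Map sidebar activity to a Bootstrap 2 span class.

    Mirrors the four PHP branches above: exactly one sidebar -> span9,
    both sidebars -> span6, no sidebars -> span12.
    """
    table = {
        (True,  False): "span9",
        (False, True):  "span9",
        (True,  True):  "span6",
        (False, False): "span12",
    }
    return table[(left_active, right_active)]

print(main_span(False, False))  # span12
print(main_span(True, True))    # span6
```

A table-driven version like this also makes it harder to introduce inconsistencies such as the "9" / "span9" mismatch in the first branch.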
Michael Farrell (powerlifter) Michael Farrell (born 2 October 1962) is an Australian Paralympic powerlifter. He was born in the South Australian town of Elliston. He won a bronze medal at the 1988 Seoul Games in the Men's Up To 100 kg event. He finished eighth in the Men's Over 100 kg at the 1996 Atlanta Games. References Category:Paralympic powerlifters of Australia Category:Powerlifters at the 1988 Summer Paralympics Category:Powerlifters at the 1996 Summer Paralympics Category:Paralympic bronze medalists for Australia Category:Medalists at the 1988 Summer Paralympics Category:Living people Category:1962 births
Thymic tumors are a rare class of disease, with an incidence of only 0.17 per 100,000; the rate in Asian populations is slightly higher than in Caucasians, at roughly 0.3-0.4 per 100,000. Research on and investment in thymoma are both limited, and diagnosis and treatment lack unified guidelines; practice remains unstandardized on a whole series of issues, including preoperative qualitative diagnosis, histological subtyping, preoperative therapy, surgical approach, postoperative therapy, and the treatment of postoperative recurrence. Many physicians also have limited experience treating this disease, and research progress is slow, but for patients and their families these are not acceptable excuses. In 2003 the American patient Barbara Neibauer was diagnosed with a thymic tumor; despite aggressive treatment, she died two years later. After her death, to promote thymoma research, her family funded the establishment in 2005 of the world's first foundation devoted to thymic tumors, the FTCR (Foundation for Thymic Cancer Research). On May 5, 2010, led by the US NIH (National Institutes of Health) and building on the FTCR, a professional academic organization for thymic malignancies, ITMIG (International Thymic Malignancy Interest Group), was founded in New York. Japan subsequently established a corresponding thymic tumor organization, JART (Japanese Association for Research on the Thymus). In June 2012, on the initiative of Professors Fang Wentao, Chen Keneng, and others, the Chinese Alliance for Research in Thymomas (ChART) was founded in Shanghai and, together with JART, joined ITMIG.

In-depth study of thymoma clearly requires effective ways to pool data from around the world, and this is also a principal goal of ITMIG. Collaboration requires that everyone use the same terminology for the disease, yet surprisingly, current thymoma practice contains numerous vague definitions and many differing interpretations of the same concepts, making datasets impossible to compare. ITMIG therefore brought together thymic clinicians worldwide to discuss key definitions and methods and reach consensus, with the aim of advancing thymoma research; the results of this work were published in the Journal of Thoracic Oncology. In building this broadly accepted consensus, ITMIG first had focused groups draft initial recommendations, which were then revised by an expanded working group of about 80 members. The material was then consolidated at a two-day meeting held at Yale University on November 15-16, 2010, where the recommended definitions and methods were discussed; the results were distributed to every ITMIG member for further discussion and feedback, and the final revised version was approved and adopted by ITMIG. The hope is that future studies, whether single-center or ITMIG collaborations, will all adopt these standards. In particular, because thymoma differs from other malignancies in several respects (beyond its low incidence, it also shows indolent biological behavior, which affects the clinical assessment of prognosis in many ways), ITMIG specifically prepared "Standard Outcomes for Thymic Malignancies" for reference. Most of the ITMIG consensus concerns the clinical management of patients, including imaging, biopsy, surgery, radiotherapy, and chemotherapy, and it lays out detailed specifications for the most widely used Masaoka-Koga staging system. Uniform definitions will advance ITMIG's two major projects: building a global collaborative database, and a joint project with the IASLC to develop a formally validated thymoma staging system for the 8th edition of the AJCC cancer staging manual. It must be emphasized that although uniform definitions and methods are required, this does not mean they are set in stone; the hope is that future research will improve both clinical practice and the shared terminology, and questioning and refining these principles is encouraged.

In the short span of less than a year since its founding, ChART has contributed nearly 900 well-documented thymoma cases to ITMIG, about a quarter of all cases ITMIG has collected to date, earning ITMIG's full recognition. At the third thymic tumor conference held in Fukuoka, Japan, in November 2012, four ChART papers were presented as oral and poster communications, and ChART received the conference's Barbara Neibauer Award. ChART is also actively involved in two prospective multicenter international collaborative studies that ITMIG is currently planning: one a prospective observational study, the other a global multicenter randomized controlled trial in stage III thymoma. As with ITMIG, however, China's thymic tumor community has long faced the same problems of nonstandard definitions and inconsistent terminology. After deliberation, ChART concluded that in-depth discussion was needed on standards for imaging diagnosis of thymic tumors; handling of biopsy and resection specimens; clinical and pathological staging; criteria for prognostic assessment; standard terminology and procedures for minimally invasive surgery; and definitions and reporting standards for radiotherapy and chemotherapy, with the aim of further standardizing the diagnosis and treatment of thymic tumors. The inaugural forum, hosted by Peking University Cancer Hospital, invited imaging and pathology experts to discuss diagnostic issues, along with surgical, radiation oncology, and medical oncology experts to discuss treatment, and the relevant ITMIG provisions were compiled into a booklet for discussion and use.

March 2013
/*
 * This file is part of Telegram Server
 * Copyright (C) 2015 Aykut Alparslan KOÇ
 *
 * Telegram Server is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * Telegram Server is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */

package org.telegram.tl.auth;

import org.telegram.mtproto.ProtocolBuffer;
import org.telegram.tl.*;

public class BindTempAuthKey extends TLObject {

    public static final int ID = -841733627;

    public long perm_auth_key_id;
    public long nonce;
    public int expires_at;
    public byte[] encrypted_message;

    public BindTempAuthKey() {
    }

    public BindTempAuthKey(long perm_auth_key_id, long nonce, int expires_at, byte[] encrypted_message) {
        this.perm_auth_key_id = perm_auth_key_id;
        this.nonce = nonce;
        this.expires_at = expires_at;
        this.encrypted_message = encrypted_message;
    }

    @Override
    public void deserialize(ProtocolBuffer buffer) {
        perm_auth_key_id = buffer.readLong();
        nonce = buffer.readLong();
        expires_at = buffer.readInt();
        encrypted_message = buffer.readBytes();
    }

    @Override
    public ProtocolBuffer serialize() {
        ProtocolBuffer buffer = new ProtocolBuffer(32);
        serializeTo(buffer);
        return buffer;
    }

    @Override
    public void serializeTo(ProtocolBuffer buff) {
        buff.writeInt(getConstructor());
        buff.writeLong(perm_auth_key_id);
        buff.writeLong(nonce);
        buff.writeInt(expires_at);
        buff.writeBytes(encrypted_message);
    }

    public int getConstructor() {
        return ID;
    }
}
Q: What's the standard way to set up CasperJS for Travis CI testing?

Surprisingly enough, given JavaScript's tremendous popularity in GitHub repos, there is no "official" guide to testing frontend JS with Travis CI (only Node.js, a very specific subset). From my research I found that a lot of big JS projects don't have Travis CI integration (e.g. jQuery) or have a very minimal Travis setup (see Backbone) which uses the default npm test. I know Travis CI runs npm test by default and runs the "test" script from package.json, and I found a few examples running PhantomJS for headless testing (which the docs don't give any details about setting up), but I couldn't find canonical examples of how to set up casper.js integration tests with Travis CI. I'd be grateful for help and guidance with this.

A: It seems the canonical way is hiding the tests behind the default npm test, which usually triggers a script (or a grunt task) running the test suite. You can look at the .travis.yml in a small project I coded to see how to install casperjs for the testing.
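A minimal setup in that spirit might look like the following. This is an illustrative sketch, not an official recipe: the Node version and global-install approach reflect how phantomjs/casperjs were commonly wired up on Travis at the time, not a guaranteed current method.

```yaml
# .travis.yml -- illustrative sketch, assumptions noted inline
language: node_js
node_js:
  - "0.10"                              # whichever Node version the project targets
before_script:
  - npm install -g phantomjs casperjs   # headless browser + CasperJS test runner
script:
  - npm test                            # Travis's default; package.json's
                                        # "scripts.test" entry would point at
                                        # something like: casperjs test tests/
```

The key idea from the answer is that Travis never needs to know about CasperJS directly; `npm test` is the single entry point, and package.json decides what that actually runs.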
Green Lantern: Rebirth (New Edition) Collects the six-issue miniseries that restored Hal Jordan as Green Lantern along with the Wizard Magazine preview story and a new cover by Ethan Van Sciver. The road to Blackest Night begins here, so don’t miss this new printing of the smash-hit re-launch of the decade! Written by Geoff Johns, with art by Ethan Van Sciver (who also provides the cover artwork) and Prentis Rollins.
Wanessa Cooper Bio: Birth Place: Czech Republic Skinny brunette MILF Wanessa Cooper is a Czech born babe with a sex drive that can only be measured in light years! Currently a girl-girl lesbian only performer, Wanessa brings a sensuality to her scenes that has to be seen to be believed. Very tactile in her approach to lovemaking, Wanessa loves to cup the curve of perky tits and to run her body over the shape of a round ass. While she can be smooth to the touch, Wanessa also likes to get rough. The brunette babe loves to participate in bondage play with other ladies, and the more dominant she can be the better! With a surprisingly powerful body for such a small frame, Wanessa knows what she likes, and knows exactly how to get it! All models appearing on this website are 18 years or older.Click here for records required pursuant to 18 U.S.C. 2257 Record Keeping Requirements Compliance Statement. By entering this site you swear that you are of legal age in your area to view adult material and that you wish to view such material.
Necrotizing fasciitis following venomous snakebites in a tertiary hospital of southwest Taiwan. Necrotizing fasciitis following venomous snakebites is uncommon. The purpose of this study was to describe the initial clinical features of necrotizing fasciitis after snakebites, and to identify the risk factors for patients with cellulitis who later developed necrotizing fasciitis. Sixteen patients with surgically confirmed necrotizing fasciitis and 25 patients diagnosed with cellulitis following snakebites were retrospectively reviewed over a 6-year period. Differences in patient characteristics, clinical presentations, snake species and laboratory data were compared between the necrotizing fasciitis and the cellulitis groups. None of the 41 patients died after being bitten by a snake. Twenty-nine patients (70.7%) were bitten by a cobra. Enterococcus species and Morganella morganii were the most common pathogens identified in wound cultures. Relative to the cellulitis group, the necrotizing fasciitis group had significantly higher rates of hemorrhagic bullae (p=0.000), patients with underlying chronic disease (p=0.019), white blood cell counts (p=0.035), segmented white cell counts (p=0.02), and days of hospitalization (p=0.001). Victims of venomous snakebites should be admitted for close monitoring of secondary wound infections. The risk factors of developing necrotizing fasciitis from cellulitis following snakebites were associated with chronic underlying diseases and leukocytosis (total white blood-cell counts ≥10000cells/mm3 and ≥80% of segmented leukocyte forms). Physicians should be alert to a worsening wound condition after a snakebite, and surgical interventions should be performed for established necrotizing fasciitis with the empirical use of third-generation cephalosporins plus other regimens.
> >This one's mainly for the guys....but it's still pretty damn funny :) > > > For the Men > The Perfect Breakfast > > >You're sitting at the table and your son is on the cover of the box of >Wheaties. >Your mistress is on the cover of Playboy. >And your wife is on the back of the milk carton. > > > >
VAL-D'OR, QC, Feb. 13, 2019 /CNW/ - Orbit Garant Drilling Inc. (TSX: OGD) ("Orbit Garant" or the "Company") today announced its financial results for the three and six-month periods ended December 31, 2018. All dollar amounts are in Canadian dollars unless otherwise stated. Percentage calculations are based on numbers in the financial statements and may not correspond to rounded figures presented in this news release. 2 EBITDA is a non-IFRS financial measure and is defined as earnings before interest, taxes, depreciation, and amortization. See "Reconciliation of Non-IFRS financial measures". "Our decline in revenue and metres drilled for the quarter reflects lower drilling activity in both Canada and Chile compared to the same quarter last year, which was a record second quarter in both revenue and metres drilled. This past quarter still represents our second highest revenue total for this three-month period in company history. With the recent rapid expansion of our global operations, our margins are more impacted by the slowdown in drilling activity. Our profitability for the quarter was also impacted by costs related to our acquisition in Burkina Faso," said Eric Alexandre, President & CEO of Orbit Garant. "We're pleased with the early results of the Burkina Faso acquisition, with drill deployment up from seven at the time of acquisition to 12 at quarter-end. Looking more broadly at our business, with global gold and base metal reserves of mining companies continuing to be depleted, we expect mineral exploration spending to increase in order to address this decline. With our expanded international presence and our strong position in Canada, we are increasingly well positioned to capitalize on market opportunities as they arise," added Mr. Alexandre. Second Quarter Results Revenue for the three-month period ended December 31, 2018 ("Q2 FY2019") totalled $33.7 million, compared to $43.0 million for the three-month period ended December 31, 2017 ("Q2 FY2018"). 
Drilling Canada revenue was $23.6 million, compared to $28.3 million in Q2 FY2018, reflecting a decline in metres drilled during the quarter, partially offset by an increase in average revenue per metre drilled. Drilling International revenue was $10.1 million, compared to $14.7 million in Q2 FY2018. The decline in international revenue is primarily attributable to the conclusion of a large drilling contract in Chile, partially offset by increased drilling activity in Burkina Faso. Orbit Garant's fleet drilled a total of 311,318 metres in the quarter, compared to 371,161 metres drilled in the very strong Q2 FY2018 period. Consolidated average revenue per metre drilled decreased 6.7% to $107.85 compared to $115.64 in Q2 FY2018. The decline in consolidated average revenue per metre drilled is primarily attributable to a lower proportion of specialized drilling in international markets in the quarter, partially offset by higher prices on drilling contracts in Canada. Gross profit and gross margin for Q2 FY2019 were $2.9 million and 8.6%, respectively, compared to $5.1 million and 11.7%, respectively, in Q2 FY2018. Depreciation expenses totalling $2.2 million are included in cost of contract revenue for Q2 FY2019 compared $2.0 million in Q2 FY2018. Adjusted gross margin, excluding depreciation expenses, was 15.2% in Q2 FY2019, compared to 16.3% in Q2 FY2018. The decrease in gross profit and margins was primarily attributable to lower drilling volumes in Canada, partially offset by higher gross profit and margins on international contracts. General and administrative (G&A) expenses were $4.8 million (representing 14.4% of revenue) in Q2 FY2019, compared to $4.3 million (representing 10.0% of revenue) in Q2 FY2018. Increased G&A expenses are primarily attributable to $0.7 million of acquisition and integration costs related to the Company's acquisition of the drilling business of Projet Production International BF S.A. ("PPI") in Burkina Faso during Q2 FY2019. 
Earnings before interest, taxes, depreciation and amortization ("EBITDA")¹ totalled $0.9 million in Q2 FY2019, compared to $3.3 million in Q2 FY2018. The Company's net loss for Q2 FY2019 was $1.7 million, or $0.04 per share, compared to net earnings of $0.8 million, or $0.02 per share, in Q2 FY2018. Lower gross profit and margins, as discussed above, and costs related to the acquisition of PPI contributed to the Company's net loss for Q2 FY2019. During Q2 FY2019, Orbit Garant generated $7.6 million from financing activities compared to $5.5 million in Q2 FY2018. The Company drew down a net amount of $7.1 million during Q2 FY2019 on its secured, three-year revolving credit facility (the "Credit Facility"), compared to $1.4 million in Q2 FY2018, largely due to the financing impact of the PPI acquisition. As at December 31, 2018, the Company had $27.1 million drawn under the Credit Facility, compared to $18.1 million as at June 30, 2018. As at December 31, 2018, Orbit Garant had working capital of $57.1 million ($53.3 million as at June 30, 2018), and 37,008,756 common shares issued and outstanding. Orbit Garant's unaudited interim condensed consolidated financial statements and management's discussion and analysis for the three and six-month periods ended December 31, 2018 are available on the Company's website at www.orbitgarant.com or SEDAR at www.sedar.com. Conference call Eric Alexandre, President and CEO, and Alain Laplante, Vice President and CFO, will host a conference call for analysts and investors on Thursday, February 14, 2019 at 10:00 a.m. (ET). The dial-in numbers for the conference call are 416-764-8609 or 1-888-390-0605. A live webcast of the call will be available on Orbit Garant's website at: http://www.orbitgarant.com/en/sites/fog/investors.aspx. The webcast will be archived following conclusion of the call. To access a replay of the conference call dial 416-764-8677 or 1-888-390-0541, passcode: 490114 #. 
The replay will be available until February 21, 2019.

RECONCILIATION OF NON-IFRS FINANCIAL MEASURES

Financial data has been prepared in conformity with IFRS. However, certain measures used in this discussion and analysis do not have any standardized meaning under IFRS and could be calculated differently by other companies. The Company believes that certain non-IFRS financial measures, when presented in conjunction with comparable IFRS financial measures, are useful to investors and other readers because the information is an appropriate measure to evaluate the Company's operating performance. Internally, the Company uses this non-IFRS financial information as an indicator of business performance. These measures are provided for information purposes, in addition to, and not as a substitute for, measures of financial performance prepared in accordance with IFRS.

Management believes that EBITDA is an important measure when analyzing its operating profitability, as it removes the impact of financing costs, certain non-cash items and income taxes. As a result, Management considers it a useful and comparable benchmark for evaluating the Company's performance, as companies rarely have the same capital and financing structure.

Reconciliation of EBITDA (unaudited)
(in millions of dollars)

                                            3 months ended   3 months ended   6 months ended   6 months ended
                                            Dec. 31, 2018    Dec. 31, 2017    Dec. 31, 2018    Dec. 31, 2017
    Net earnings (net loss) for the period       (1.7)            0.8             (1.3)            2.5
    Add:
      Finance costs                               0.5             0.5              0.9             0.9
      Income tax expense                         (0.4)           (0.3)              -              0.6
      Depreciation and amortization               2.5             2.3              4.7             4.4
    EBITDA                                        0.9             3.3              4.3             8.4

Adjusted Gross Margin

Although adjusted gross margin and margin are not recognized financial measures defined by IFRS, Management considers them to be important measures as they represent the Company's core profitability, without the impact of depreciation expense. As a result, Management believes they provide a useful and comparable benchmark for evaluating the Company's performance.

Reconciliation of Adjusted Gross Margin (unaudited)
(in millions of dollars)

                                                       3 months ended   3 months ended   6 months ended   6 months ended
                                                       Dec. 31, 2018    Dec. 31, 2017    Dec. 31, 2018    Dec. 31, 2017
    Contract revenue                                        33.7             43.0             71.0             85.5
    Cost of contract revenue (including depreciation)       30.8             38.0             62.5             73.7
    Less depreciation                                       (2.2)            (2.0)            (4.2)            (3.9)
    Direct costs                                            28.6             36.0             58.3             69.8
    Adjusted gross profit                                    5.1              7.0             12.7             15.7
    Adjusted gross margin (%) (1)                           15.2             16.3             17.9             18.4

    (1) Adjusted gross profit, divided by contract revenue x 100

About Orbit Garant

Headquartered in Val-d'Or, Quebec, Orbit Garant is one of the largest Canadian-based mineral drilling companies, providing both underground and surface drilling services in Canada and internationally through its 236 drill rigs and more than 1,300 employees. Orbit Garant provides services to major, intermediate and junior mining companies, through each stage of mining exploration, development and production. The Company also provides geotechnical drilling services to mining or mineral exploration companies, engineering and environmental consultant firms, and government agencies. For more information, please visit the Company's website at www.orbitgarant.com.

Forward-looking information

This news release may contain forward-looking statements (within the meaning of applicable securities laws) relating to business of Orbit Garant Drilling Inc. (the "Company") and the environment in which it operates. Forward-looking statements are identified by words such as "believe", "anticipate", "expect", "intend", "plan", "will", "may" and other similar expressions. These statements are based on the Company's expectations, estimates, forecasts and projections. They are not guarantees of future performance and involve risks and uncertainties that are difficult to control or predict.
These risks and uncertainties are discussed in the Company's regulatory filings available at www.sedar.com. There can be no assurance that forward-looking statements will prove to be accurate as actual outcomes and results may differ materially from those expressed in these forward-looking statements. Readers, therefore, should not place undue reliance on any such forward-looking statements. Further, a forward-looking statement speaks only as of the date on which such statement is made. The Company undertakes no obligation to publicly update any such statement or to reflect new information or the occurrence of future events or circumstances.
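The EBITDA reconciliation above is plain arithmetic and can be checked directly. A quick verification of the two quarterly columns (figures in $ millions, taken from the table):

```python
def ebitda(net_earnings, finance_costs, income_tax, depreciation_amortization):
    """EBITDA as defined in the release: net earnings with finance costs,
    income taxes, and depreciation/amortization added back."""
    return net_earnings + finance_costs + income_tax + depreciation_amortization

# Quarterly columns of the reconciliation table ($ millions)
q2_fy2019 = ebitda(-1.7, 0.5, -0.4, 2.5)   # 3 months ended Dec. 31, 2018
q2_fy2018 = ebitda(0.8, 0.5, -0.3, 2.3)    # 3 months ended Dec. 31, 2017

print(round(q2_fy2019, 1))  # 0.9, matching the reported EBITDA
print(round(q2_fy2018, 1))  # 3.3
```

Note that the Q2 FY2019 income tax line is negative (a recovery), which is why it is entered as -0.4 even though it sits under "Add:" in the table.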
Midsole thickness affects running patterns in habitual rearfoot strikers during a sustained run. The purpose of this study was to: (1) investigate how kinematic patterns are adjusted while running in footwear with THIN, MEDIUM, and THICK midsole thicknesses and (2) determine if these patterns are adjusted over time during a sustained run in footwear of different thicknesses. Ten male heel-toe runners performed treadmill runs in specially constructed footwear (THIN, MEDIUM, and THICK midsoles) on separate days. Standard lower extremity kinematics and acceleration at the tibia and head were captured. Time epochs were created using data from every 5 minutes of the run. Repeated-measures ANOVA was used (P < .05) to determine differences across footwear and time. At touchdown, kinematics were similar for the THIN and MEDIUM conditions distal to the knee, whereas only the THIN condition was isolated above the knee. No runners displayed midfoot or forefoot strike patterns in any condition. Peak accelerations were slightly increased with THIN and MEDIUM footwear as was eversion, as well as tibial and thigh internal rotation. It appears that participants may have been anticipating, very early in their run, a suitable kinematic pattern based on both the length of the run and the footwear condition.