NOT PRECEDENTIAL
UNITED STATES COURT OF APPEALS FOR THE THIRD CIRCUIT
No. 10-2394
UNITED STATES OF AMERICA v. JOSE CRUZ-ALEMAN, a/k/a Jose Conrad, a/k/a Jose Armando Cruz, a/k/a El Tigre
JOSE CRUZ-ALEMAN, Appellant
On Appeal from the United States District Court for the Eastern District of Pennsylvania (D.C. Criminal No. 09-cr-00761-1)
District Judge: Honorable Paul S. Diamond
Submitted Pursuant to Third Circuit L.A.R. 34.1(a) March 21, 2011
Before: FUENTES, SMITH and VAN ANTWERPEN, Circuit Judges
(Filed: April 1, 2011)
OPINION OF THE COURT
VAN ANTWERPEN, Circuit Judge.

Jose Cruz-Aleman pleaded guilty to illegal reentry in violation of 8 U.S.C. § 1326, and the District Court sentenced him to 60 months' imprisonment. Cruz-Aleman appeals his sentence, arguing that the District Court committed procedural error by not considering a variance argument based on a pending – but not yet effective – Guidelines amendment. For the reasons that follow, we will affirm the District Court's sentence.

I.

Because we write only for the parties, we include only those facts necessary to our analysis. Cruz-Aleman is a native and citizen of El Salvador. In October 2007, Cruz-Aleman pleaded guilty to first-degree assault in Maryland and was deported. On July 1, 2009, police arrested Cruz-Aleman in Philadelphia. On December 3, 2009, a federal grand jury returned an indictment charging Cruz-Aleman with one count of illegal reentry after deportation in violation of 8 U.S.C. § 1326. On January 20, 2010, Cruz-Aleman pleaded guilty to the indictment. The Probation Department prepared a pre-sentence report which set Cruz-Aleman's base offense level at 8, added 16 levels for his prior 2007 assault conviction under U.S.S.G. § 2L1.2(b)(1)(A)(ii), and subtracted 3 levels for acceptance of responsibility under U.S.S.G. § 3E1.1(a), resulting in a total offense level of 21.
The pre-sentence report set Cruz-Aleman's criminal history category at IV because he had accumulated 7 criminal history points: 3 points for the prior 2007 assault conviction under U.S.S.G. § 4A1.1(a), 2 points for a 2003 assault conviction under U.S.S.G. § 4A1.1(b), and 2 final points under U.S.S.G. § 4A1.1(e) because Cruz-Aleman's conviction in this case occurred less than two years following his release from prison on the 2007 assault conviction. Offense level 21 at criminal history category IV resulted in an advisory Guidelines range of 57 to 71 months.

A sentencing hearing was scheduled for May 3, 2010. Prior to the hearing, both Cruz-Aleman and the Government submitted sentencing memoranda. In his memorandum, Cruz-Aleman first objected to the pre-sentence report's 16-level enhancement based on his 2007 Maryland assault conviction. Cruz-Aleman claimed this conviction was the result of a constitutionally flawed guilty plea proceeding and could not be used to enhance his sentence. Additionally, Cruz-Aleman argued for a downward variance, contending that: (1) his personal history and characteristics weighed in favor of a lower sentence; (2) an illegal reentry offense did not require a lengthy sentence; (3) he was innocent of any prior crimes; (4) he committed the illegal reentry offense to have a better life; (5) the 16-level enhancement was unnecessarily severe and resulted in "double-counting"; (6) the 2007 assault was an act of self-defense; and (7) his prior conviction was not proven to a jury beyond a reasonable doubt.

Finally, Cruz-Aleman argued in his sentencing memorandum that the District Court "should consider a downward variance based on the U.S. Sentencing Commission's recent decision to amend the Guidelines by deleting U.S.S.G. § 4A1.1(e)."1 App. 60-61. This Guideline added "recency points" to a defendant's criminal history category "if the defendant committed the instant offense less than two years after release from imprisonment . . . ." U.S.S.G. § 4A1.1(e) (2009). The proposed Guidelines amendment eliminated "recency points." Sentencing Guidelines for United States Courts, 75 Fed. Reg. 27,388, 27,393 (May 14, 2010). The Sentencing Commission submitted the amendment on April 29, 2010, prior to Cruz-Aleman's May 3, 2010 sentencing hearing. Id. at 27,388. Due to a mandatory waiting period during which Congress could overrule the amendment, the amendment would not take effect until November 1, 2010. Id.; see 28 U.S.C. § 994(p). Cruz-Aleman asked the District Court to consider applying the amendment prospectively, thereby reducing his criminal history category from IV to III.

1 U.S.S.G. § 4A1.1(e) (2009) provided, in relevant part: "(e) Add 2 points if the defendant committed the instant offense less than two years after release from imprisonment on a sentence counted under (a) or (b) . . . ." The amendment passed without Congressional action and is reflected in the 2010 Guidelines. See U.S.S.G. § 4A1.1 (2010).

At the May 3, 2010 sentencing hearing, the District Court stated on the record that it had received the pre-sentence investigation report and sentencing memoranda from Cruz-Aleman and the Government. The District Court then asked both parties whether they had submitted or wanted to submit additional materials, and both parties declined. Finally, at the District Court's request, defense counsel and a Spanish-speaking interpreter reviewed the pre-sentence report with Cruz-Aleman. App. 83. The District Court denied Cruz-Aleman's objection to the 16-level enhancement for a prior conviction, calculated the advisory Guidelines range at 57 to 71 months, and then heard arguments on "why this court should vary below the advisory guideline range," App. 92.
Cruz-Aleman's counsel argued for a downward variance based on: (1) Cruz-Aleman's personal history and lack of education; (2) the "back story" of Cruz-Aleman's prior convictions; (3) the "double-counting" resulting from the 16-level enhancement; and (4) sentencing disparities resulting from the lack of a "fast track" program in the Eastern District of Pennsylvania. Cruz-Aleman's counsel also addressed arguments raised in the Government's sentencing memorandum. Finally, Cruz-Aleman's counsel concluded, "And, Your Honor, if I've missed anything that I set forward in my sentencing memorandum, I would incorporate everything that is in there." App. 98. The District Court responded, "Very well." App. 98. Notably, although Cruz-Aleman raised the argument for a variance based on the pending "recency points" amendment in his sentencing memorandum, Cruz-Aleman's counsel did not explicitly mention this argument at the sentencing hearing.

After the Government responded, the District Court explained the sentence it intended to impose. The District Court stated that it had considered the advisory Guidelines range, as well as the "nature and circumstances of the offense and the defendant's history and characteristics." App. 101. The District Court specifically discussed Cruz-Aleman's prior convictions, lack of employment, lack of education, and family circumstances. The District Court also addressed the need to avoid sentencing disparities, the lack of a "fast track" program in the Eastern District of Pennsylvania, and the need to provide restitution to victims. Finally, the District Court stated, "I have also considered the other arguments made by [Cruz-Aleman's counsel] respecting her request for a downward variance." App. 104. After stating the reasons for the proposed sentence, but before imposing sentence, the District Court asked if either party knew of "any legal reason" why the proposed sentence could not be imposed. App. 105.
Cruz-Aleman's counsel responded, "Your Honor, I don't. I'd be reiterating the variance arguments from before." App. 105. The District Court then sentenced Cruz-Aleman to 60 months' imprisonment, near the low end of the 57 to 71 month Guidelines range. On May 12, 2010, Cruz-Aleman timely appealed.

II.

We have jurisdiction to review the sentence pursuant to 18 U.S.C. § 3742 and 28 U.S.C. § 1291. We review the procedural reasonableness of the sentence under an abuse of discretion standard. Gall v. United States, 552 U.S. 38, 46 (2007); United States v. Tomko, 562 F.3d 558, 568 (3d Cir. 2009) (en banc).

III.

On appeal, Cruz-Aleman argues that the District Court committed procedural error by failing to consider his request to apply the pending "recency points" amendment that had been approved by the Sentencing Commission but had not yet taken effect. We reject this argument because the record shows that the District Court did consider Cruz-Aleman's request for a variance on this ground.

A sentence must be both procedurally and substantively reasonable, and we must first "ensure that the district court committed no 'significant procedural error.'" United States v. Merced, 603 F.3d 203, 214 (3d Cir. 2010) (citation omitted). In United States v. Gunter, we instructed district courts to follow a three-step sentencing process. 462 F.3d 237, 247 (3d Cir. 2006). To be "procedurally reasonable," the District Court must: (1) calculate the correct Guidelines range; (2) rule on departure motions; and (3) exercise its discretion by considering the relevant § 3553(a) factors. See id.

Cruz-Aleman's appeal centers on the third step of this process. We have said that to comply with step three, a district court must give "meaningful consideration" to the § 3553(a) factors. United States v. Sevilla, 541 F.3d 226, 232 (3d Cir. 2008) (quoting United States v. Cooper, 437 F.3d 324, 329 (3d Cir. 2006)).
However, a district court "need not discuss every argument made by a litigant if an argument is clearly without merit." Cooper, 437 F.3d at 329. After arriving at its proposed sentence, the district court "must adequately explain the chosen sentence to allow for meaningful appellate review." Gall, 552 U.S. at 50.

Here, the District Court complied with the three-step procedure set forth in Gunter and gave "meaningful consideration" to Cruz-Aleman's variance arguments. First, after overruling Cruz-Aleman's objection to the pre-sentence report's 16-level enhancement for the 2007 conviction,2 the District Court calculated the Guidelines range at 57 to 71 months. There were no departure motions made at the second step of the process. At step three, the District Court gave Cruz-Aleman the opportunity to argue for a variance based on the § 3553(a) factors. In her argument, Cruz-Aleman's counsel raised a plenitude of grounds for variance, but did not explicitly mention the pending "recency points" amendment. Cruz-Aleman's counsel did ask the District Court to "incorporate" everything set forth in her sentencing memorandum, and the District Court acceded to this request. App. 98.

2 Cruz-Aleman does not appeal the District Court's application of the 16-level enhancement.

After hearing argument from the Government, the District Court thoroughly discussed the § 3553(a) factors and the arguments Cruz-Aleman's counsel had raised at the sentencing hearing. While the District Court did not explicitly address the pending "recency points" amendment, the District Court clearly stated: "I have also considered the other arguments made by [Cruz-Aleman's counsel] respecting her request for a downward variance." App. 104. The District Court's failure to explicitly address the "recency points" amendment was not procedural error.3

In Rita v. United States, the Supreme Court noted that "sometimes a judicial opinion responds to every argument; sometimes it does not . . . ." 551 U.S. 338, 356 (2007). There, the Court ultimately held that "the sentencing judge should set forth enough to satisfy the appellate court that he has considered the parties' arguments and has a reasoned basis for exercising his own legal authority." Id.; see Cooper, 437 F.3d at 332 ("There are no magic words that a district judge must invoke when sentencing, but the record should demonstrate that the court considered the § 3553(a) factors and any sentencing grounds properly raised by the parties which have recognized legal merit and factual support in the record.").

3 In United States v. Merced, we concluded that a sentence was procedurally unreasonable where the district court failed to adequately explain how its sentence avoided unwarranted sentencing disparities in violation of 18 U.S.C. § 3553(a)(6). 603 F.3d 203 (3d Cir. 2010). Here, Cruz-Aleman argues that the District Court procedurally erred by failing to adequately explain its refusal to apply the pending "recency points" amendment in violation of 18 U.S.C. § 3553(a)(5), which requires a district court to consider "any pertinent policy statement (A) issued by the Sentencing Commission . . . ." But, as subsection (B) makes clear, this statute applies only to policy statements "in effect on the date the defendant is sentenced." 18 U.S.C. § 3553(a)(5)(B).

Here, the District Court considered the parties' sentencing memoranda and oral arguments, analyzed the § 3553(a) factors, and responded to arguments made by Cruz-Aleman's counsel at the sentencing hearing. Most importantly, while the District Court did not explicitly address the "recency points" argument, the District Court "considered the other arguments made by [Cruz-Aleman's counsel] respecting her request for a downward variance." App. 104.
This statement, coupled with the District Court's thorough review of Cruz-Aleman's other variance arguments and the § 3553(a) factors, demonstrates that the District Court gave Cruz-Aleman's sentence "meaningful consideration."

Cruz-Aleman relies on United States v. Ausburn, in which we stated: "the court must acknowledge and respond to any properly presented sentencing argument which has colorable legal merit and a factual basis." 502 F.3d 313, 329 (3d Cir. 2007). This reliance is misplaced. The District Court adequately responded to Cruz-Aleman's properly presented "recency points" argument when it said that it had "considered the other arguments made by [Cruz-Aleman's] counsel . . . ." App. 104.

Additionally, even Cruz-Aleman concedes that the District Court was not required to apply a pending – but not yet effective – sentencing amendment. Appellant's Br. 16 ("The sentence is procedurally unreasonable because the district court failed to consider Mr. Cruz-Aleman's request that the court exercise its discretion [to apply the pending amendment].") (emphasis added). Indeed, Cruz-Aleman merely requested that the District Court consider applying the pending amendment. Cruz-Aleman's sentencing memorandum states: "The Court should also consider a downward variance based on the U.S. Sentencing Commission's recent decision to amend the Guidelines by deleting U.S.S.G. § 4A1.1(e)." App. 60 (emphasis added). The record indicates that the District Court did just what Cruz-Aleman asked: it considered the "recency points" argument. See App. 104.

Finally, the District Court sentenced Cruz-Aleman to 60 months' incarceration, near the bottom of the advisory Guidelines range of 57 to 71 months. When a district court's sentence falls within a properly-calculated Guidelines range, it will be upheld even with a "less extensive" explanation. See United States v. Levinson, 543 F.3d 190, 197 (3d Cir. 2008). The District Court's explanation here was sufficient.
In conclusion, the record indicates that Cruz-Aleman asked the District Court to consider applying the "recency points" Guidelines amendment prospectively. The District Court considered this request but declined to act upon it. The District Court gave meaningful consideration to Cruz-Aleman's arguments and did not commit procedural error.4

IV.

For the reasons set forth, we will affirm the District Court's sentence.

4 Cruz-Aleman also argues that his Fifth and Sixth Amendment rights under the United States Constitution were violated when his maximum sentence was increased based on a prior conviction that was "neither charged in the indictment nor proved to a jury beyond a reasonable doubt." Appellant's Br. 24. Cruz-Aleman concedes that the Supreme Court rejected this argument in Almendarez-Torres v. United States, 523 U.S. 224 (1998). This Court has also rejected such an argument in United States v. Ordaz, 398 F.3d 236, 241 (3d Cir. 2005), and United States v. Coleman, 451 F.3d 154, 159-60 (3d Cir. 2006), and we do so here as well.
###############################################################################
# $Id$
###############################################################################

#' calculates Standard Deviation for univariate and multivariate series, also
#' calculates component contribution to standard deviation of a portfolio
#'
#' calculates Standard Deviation for univariate and multivariate series, also
#' calculates component contribution to standard deviation of a portfolio
#'
#' TODO add more details
#'
#' This wrapper function provides fast matrix calculations for univariate,
#' multivariate, and component contributions to Standard Deviation.
#'
#' It is likely that the only one that requires much description is the
#' component decomposition. This provides a weighted decomposition of the
#' contribution each portfolio element makes to the univariate standard
#' deviation of the whole portfolio.
#'
#' Formally, this is the partial derivative of each univariate standard
#' deviation with respect to the weights.
#'
#' As with \code{\link{VaR}}, this contribution is presented in two forms, both
#' a scalar form that adds up to the univariate standard deviation of the
#' portfolio, and a percentage contribution, which adds up to 100%. Note that
#' as with any contribution calculation, contribution can be negative. This
#' indicates that the asset in question is a diversifier of the overall
#' standard deviation of the portfolio, and increasing its weight in relation
#' to the rest of the portfolio would decrease the overall portfolio standard
#' deviation.
#'
#' @param R a vector, matrix, data frame, timeSeries or zoo object of asset
#' returns
#' @param \dots any other passthru parameters
#' @param clean method for data cleaning through \code{\link{Return.clean}}.
#' Current options are "none", "boudt", "geltner", or "locScaleRob".
#' @param portfolio_method one of "single","component" defining whether to do
#' univariate/multivariate or component calc, see Details.
#' @param weights portfolio weighting vector, default NULL, see Details
#' @param mu if univariate, mu is the mean of the series; otherwise mu is the
#' vector of means of the return series, default NULL, see Details
#' @param sigma if univariate, sigma is the variance of the series; otherwise
#' sigma is the covariance matrix of the return series, default NULL, see
#' Details
#' @param use an optional character string giving a method for computing
#' covariances in the presence of missing values. This must be (an
#' abbreviation of) one of the strings \code{"everything"}, \code{"all.obs"},
#' \code{"complete.obs"}, \code{"na.or.complete"}, or
#' \code{"pairwise.complete.obs"}.
#' @param method a character string indicating which correlation coefficient
#' (or covariance) is to be computed. One of \code{"pearson"} (default),
#' \code{"kendall"}, or \code{"spearman"}; can be abbreviated.
#' @param SE TRUE/FALSE whether to output the standard errors of the estimates
#' of the risk measures, default FALSE.
#' @param SE.control control parameters for the computation of standard
#' errors. Should be done using the \code{\link{RPESE.control}} function.
#' @author Brian G. Peterson and Kris Boudt
#' @seealso \code{\link{Return.clean}} \code{sd}
###keywords ts multivariate distribution models
#' @examples
#'
#' if(!( Sys.info()[['sysname']]=="Windows") ){
#' # if on Windows, cut and paste this example
#'
#' data(edhec)
#'
#' # first do normal StdDev calc
#' StdDev(edhec)
#' # or the equivalent
#' StdDev(edhec, portfolio_method="single")
#'
#' # now with outliers squished
#' StdDev(edhec, clean="boudt")
#'
#' # add Component StdDev for the equal weighted portfolio
#' StdDev(edhec, clean="boudt", portfolio_method="component")
#'
#' } # end CRAN Windows check
#'
#' @export
StdDev <- function (R, ...,
                    clean=c("none","boudt","geltner","locScaleRob"),
                    portfolio_method=c("single","component"),
                    weights=NULL, mu=NULL, sigma=NULL,
                    use="everything",
                    method=c("pearson", "kendall", "spearman"),
                    SE=FALSE, SE.control=NULL)
{ # @author Brian G. Peterson

    # Description:
    # wrapper for univariate and multivariate standard deviation functions.

    # Fix parameters if SE=TRUE
    if(SE){
        # Setting the control parameters
        if(is.null(SE.control))
            SE.control <- RPESE.control(estimator="SD")
        # Fix the method
        portfolio_method="single"
        if(SE.control$cleanOutliers=="locScaleRob") clean="locScaleRob" else clean="none"
    }

    # Setup:
    portfolio_method = portfolio_method[1]
    clean = clean[1]

    R <- checkData(R, method="xts", ...)
    columns=colnames(R)

    if (is.null(weights) & portfolio_method != "single"){
        message("no weights passed in, assuming equal weighted portfolio")
        weights=t(rep(1/dim(R)[[2]], dim(R)[[2]]))
    }

    # check weights options
    if (!is.null(weights)) {
        if (is.vector(weights)){
            # message("weights are a vector, will use same weights for entire time series")
            # remove this warning if you call function recursively
            if (length(weights)!=ncol(R)) {
                stop("number of items in weighting vector not equal to number of columns in R")
            }
        } else {
            weights = checkData(weights, method="matrix", ...)
            if (ncol(weights) != ncol(R)) {
                stop("number of columns in weighting timeseries not equal to number of columns in R")
            }
            #@todo: check for date overlap with R and weights
        }
    } # end weight checks

    if(clean!="none"){
        R = as.matrix(Return.clean(R, method=clean))
    }

    # Check that RPESE is installed if SE=TRUE
    if(isTRUE(SE)){
        if(!requireNamespace("RPESE", quietly = TRUE)){
            stop("Package \"RPESE\" needed for standard errors computation. Please install it.",
                 call. = FALSE)
        }
        # Checking all parameters
        if(portfolio_method!="single")
            stop("For SE computation, the portfolio method must be \"single\" to return an output.")
        # Computation of SE (optional)
        ses=list()
        # For each of the methods specified in se.method, compute the standard error
        for(mymethod in SE.control$se.method){
            ses[[mymethod]]=RPESE::EstimatorSE(R, estimator.fun = "SD", se.method = mymethod,
                                               cleanOutliers=SE.control$cleanOutliers,
                                               fitting.method=SE.control$fitting.method,
                                               freq.include=SE.control$freq.include,
                                               freq.par=SE.control$freq.par,
                                               a=SE.control$a, b=SE.control$b,
                                               ...)
        }
        ses <- t(data.frame(ses))
    }

    switch(portfolio_method,
        single = {
            if (is.null(weights)) {
                tsd=t(sd.xts(R, na.rm=TRUE))
                rownames(tsd)<-"StdDev"
            } else {
                # do the multivariate calc with weights
                if(!hasArg(sigma)|is.null(sigma))
                    sigma=cov(R, use=use, method=method[1])
                tsd <- StdDev.MM(w=weights, sigma=sigma)
            }
            if(SE) # Check if computation of SE
                return(rbind(tsd, ses))
            else
                return(tsd)
        }, # end single portfolio switch
        component = {
            # @TODO: need to add another loop here for subsetting, I think, when weights is a timeseries
            #if (mu=NULL or sigma=NULL) {
            #    pfolioret = Return.portfolio(R, weights, wealth.index = FALSE, contribution=FALSE, method = c("simple"))
            #}
            # for now, use as.vector
            weights=as.vector(weights)
            names(weights)<-colnames(R)
            if (is.null(sigma)) {
                sigma = cov(R, use=use, method=method[1])
            }
            return(Portsd(w=weights, sigma))
        } # end component portfolio switch
    )
} # end StdDev wrapper function

###############################################################################
# R (http://r-project.org/) Econometrics for Performance and Risk Analysis
#
# Copyright (c) 2004-2020 Peter Carl and Brian G. Peterson
#
# This R package is distributed under the terms of the GNU Public License (GPL)
# for full details see the file COPYING
#
# $Id$
#
###############################################################################
Q: 2d game physics, doing it right

I have a sneaking suspicion I'm doing this wrong. It works now, to the extent that gravity pulls the object down toward the ground, but I'm having trouble manipulating the speed of the object. What this is, is a ball jumping and falling towards the ground. I have another function called "jump" that just adds jSpeed to its yVel. I can increase gravity, and it falls faster. I can increase the jSpeed speed, and it'll rise up longer, but not faster. But I can't get it to do everything faster. It just looks painfully slow, which may or may not be because of my emulator running at 11 fps, on average. Is it just my emulator, or is it something on my end?

float time = elapsedTime/1000F;

if (speed < maxSpeed){
    speed = speed + accel;
}
if(mY + mVelY < Panel.mHeight){ //0,0 is top-left
    mVelY += (speed);
}
if (!(mY + height >= Panel.mHeight)){
    mVelY = mVelY + gravity;
}
mX = (float) (mX + (mVelX * time));
mY = (float) (mY + (mVelY * time));

A: I think you have the right general ideas here, but a lot about your code is confusing. My issues are mostly about the variable speed: it seems to me that your ball is being accelerated up by the variables speed and accel until reaching a maximum speed. Opposing this is gravity pulling (accelerating) the ball down. Now typically this isn't how a 'jump' the way you describe it works. So for me, when the player hits 'jump' you should set the yVel to jSpeed and just let the gravity part of the equation bring it back down - that is, if you deleted the code:

if(mY + mVelY < Panel.mHeight){ //0,0 is top-left
    mVelY += (speed);
}

Then it would go up for a bit and then lose momentum to the gravity and come back down, whereas this code above keeps pushing it upwards until it hits the top, and as soon as it starts descending pushes it back up. I wonder if the code around speed and maxSpeed is supposed to act on xVel and not yVel as you have coded - that would make more sense.
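The jump-then-gravity model the answer recommends (set the vertical velocity once on jump, then let a constant gravitational acceleration bring the ball back down, scaling every update by the frame's delta time so 11 fps and 60 fps trace the same arc) can be sketched as follows. This is a minimal Python illustration; the class name, tuning constants, and ground coordinate are hypothetical, not from the question's code.

```python
GRAVITY = 2000.0      # px/s^2, pulls down (0,0 is top-left, so +y is down)
JUMP_SPEED = -800.0   # px/s, negative = upward
GROUND_Y = 480.0      # y coordinate of the ground

class Ball:
    def __init__(self, y, height):
        self.y = y
        self.height = height
        self.vel_y = 0.0

    def jump(self):
        # Set the velocity outright instead of adding to it each frame;
        # gravity alone will slow the rise and bring the ball back down.
        self.vel_y = JUMP_SPEED

    def update(self, dt):
        # dt is the frame time in seconds. Because both the acceleration and
        # the position update scale with dt, a slow emulator just produces a
        # chunkier version of the same trajectory, not a slower one.
        self.vel_y += GRAVITY * dt
        self.y += self.vel_y * dt
        if self.y + self.height >= GROUND_Y:   # landed: clamp and stop
            self.y = GROUND_Y - self.height
            self.vel_y = 0.0
```

To make the whole motion "faster", you then scale GRAVITY and JUMP_SPEED together: multiplying both by k makes the jump k times quicker while keeping the same jump height shape (height goes as JUMP_SPEED² / (2·GRAVITY)).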
Show Notes — Cancer Doesn’t Have Me: Shauna had breast cancer, had the tumors removed, and is now undergoing weekly chemotherapy — Elective Treatment: Chemda discusses her reasons for choosing not to receive radiation treatments after her tumor was removed. Shauna and Chemda discuss their new leases on life while Justy has a panic attack. — What Guns Are You Looking At?: Shauna tutors children. She gives her take on the recent school shootings and explains how she hides her illness from the children. — Crashing: A tourist helicopter crashed into NYC’s East River killing all 5 passengers. Shauna tries to convince Keith that she can teach him how to swim. — Bigger Than A Football: Kevin Daly forced doctors to look at a problem with his stomach who, after having to insist on a CAT scan, found a 30-pound tumor that took 4 hours to cut out — Drinking And Dying: Arif Hoosein jumped out of a moving car because his girlfriend, Savittrie Beria-Lackhan, was driving drunk. Once out of the car, he got struck by another vehicle and died. — A Dick A Day: A Canadian psychiatrist who practiced gay-conversion therapy was found guilty of having sex with his male patients, saying it was to cure them of their disease
Scientists at the University of York have shown that a sperm tail utilises interconnected elastic springs to transmit mechanical information to distant parts of the tail, helping it to bend and ultimately swim towards an egg.

Previous studies, from approximately 50 years ago, showed that the sperm tail, or flagellum, was made up of a complex system of filaments, connected by elastic springs resembling a cylinder-like structure. For many years scientists believed that this system provided the sperm tail with a scaffold, allowing it to swim in a hostile environment towards an egg.

New research at the University of York, however, has shown through a mathematical model that this system is not only needed to maintain the structure of the tail, but it is also vital to how it transmits information to very distant parts of the tail, allowing it to bend and move in its own unique way.

Dr Hermes Gadêlha, mathematical biologist at the University's Department of Mathematics, said: "Sperm flagella with this sort of internal structure can be seen in almost all forms of life. Interestingly, although the sperm tail has an internal structure that is conserved across most species - animal and human - they all create slightly different movements in order to reach an egg.

"This suggests that the tail's structure is not the whole story to how they make their distinct tail-bending motion."

Dr Gadêlha and collaborators had previously developed a mathematical formula for the way in which sperm move rhythmically through fluid, creating distinct fluid patterns, but scientists now needed to understand what was going on inside the sperm tail that allowed them to move in this way.

To understand the structure of the tail, scientists examined how different parts of the tail bent by moving the tail of a dead sperm.
Surprisingly, a movement that started near the head of the sperm resulted in an opposite-direction bend at the tip of the tail, called the 'counterbend phenomenon', suggesting that mechanical information is transmitted along the interconnected elastic bands in order to create movement along the full length of the tail. Dr Gadêlha calculated these bending movements to form a mathematical model that would help hypothesise the triggers needed within the tail to make these distinct movements.

Dr Gadêlha said: "If we imagine that the communication to distant parts of the tail is a bit like the communication between blindfolded rowers in a canoe boat. Blindfolded rowers can't see each other's motion to communicate what movement to make, and in the absence of shouting to each other, they must instead feel the mechanics of the boat and the movement that each rower is making in order to synchronise their motion.

"It seems that the molecular motors - the 'rowers' inside the sperm tail - are doing a similar thing, but in a much more complex 'boat'.

"The mechanism of a sperm tail first creates a sliding motion between filaments, inside this cylindrically arranged structure, finally resulting in a tail bending, a bit like the piston that converts back and forth motion into rotation of the wheel on a train. Any one movement in this complex sequence appears to be able to trigger motion right through to the distant parts of the tail.

"The big question now is, how does the tail transmit specific biomechanical information to allow these 'rowers' to self-organise?"

The research is published in Journal of the Royal Society Interface.
#!/bin/sh # Copyright 1998-2019 Lawrence Livermore National Security, LLC and other # HYPRE Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) #============================================================================= # struct: Run PFMG base "true" 2d case #============================================================================= mpirun -np 1 ./struct -n 12 12 1 -d 2 -solver 1 -relax 1 \ > pfmgbase2d.out.0 #============================================================================= # struct: Run PFMG parallel and blocking #============================================================================= mpirun -np 3 ./struct -n 4 12 1 -P 3 1 1 -d 2 -solver 1 -relax 1 \ > pfmgbase2d.out.1 mpirun -np 3 ./struct -n 4 4 1 -P 1 3 1 -b 3 1 1 -d 2 -solver 1 -relax 1 \ > pfmgbase2d.out.2 #============================================================================= # struct: Run PFMG 2d run as 3d #============================================================================= mpirun -np 4 ./struct -n 3 1 12 -P 4 1 1 -c 1 0 1 -solver 1 -relax 1 \ > pfmgbase2d.out.3 mpirun -np 2 ./struct -n 1 12 6 -P 1 1 2 -c 0 1 1 -solver 1 -relax 1 \ > pfmgbase2d.out.4 mpirun -np 3 ./struct -n 12 4 1 -P 1 3 1 -c 1 1 0 -solver 1 -relax 1 \ > pfmgbase2d.out.5
2023-08-03T01:27:17.568517
https://example.com/article/2253
Report Back from a Families Belong Together Event

Broletariat

The event that I went to was a smaller event stuck between two major cities in my state. Despite this, however, I think the overall tone and atmosphere at this event is characteristic of what went down at other such events. The structure of the rally allowed for about 30 minutes prior to the first speaker for mingling purposes. There were several tables set up by various local organizations which were more or less connected to the event at hand. After the short period of mingling, the rest of the rally was a series of speeches given by representatives of various organizations. Breaking it down numerically, there were four religious representatives, three political representatives (two of whom were running for office and one of whom was the mayor), and two Latinx organization representatives. It is worth mentioning as well that at the end an individual who didn’t represent any organization beyond himself was also allowed some time to give a speech.

The themes on which the speakers spoke can be summarized as follows: the separation of children from parents is not a political issue, it is a human rights issue; religion in general condemns the separation of children from parents; there were some first-hand accounts from people who had been to the southern border; rallies like this are not enough, we also need to turn up to vote; we need to speak truth to power, and if they don’t listen then vote for leaders who do listen.

I, of course, did not attend this rally to speak truth to power, but rather to begin seizing power, even if only in very modest ways. During the mingling period I spoke with a number of different individuals and organizations about their potential interest in ride sharing for undocumented immigrants. Consistently, when I have attended meetings or rallies related to immigration issues, the main pipeline pointed out for deportation in NC has been traffic stops.
Undocumented immigrants are not permitted to obtain driver’s licenses and as such can potentially be deported from a simple traffic stop. The natural solution to my mind is simply to ride share such that someone with a license can carpool with someone who lacks a license. The model for this can be two-fold. One member of Siembra suggested using a hotline that people could call to obtain a ride if they need one on short notice which I think is a great idea. I also think it would be worthwhile to co-ordinate the schedules of those offering and receiving rides such that a more stable and permanent ride sharing situation could be arranged. This stable and permanent ride sharing could form the basis of welding together two separate sections of the working class, those with documentation/citizenship status and those without. In the former model, it would simply be a case of compiling the availabilities and locations of willing drivers to match with incoming callers. In the latter model, it would be a case of matching schedules and destinations between drivers and riders. Essential to the success of this ride share program, Red Rides, is the attitude that no thanks is offered from rider to driver. This is no charity. It is a political act meant to simultaneously aid and empower the working class by building a transportation network which could be utilized for a plethora of other applications. Riders should never be made to feel indebted to any given driver. It is our obligation as members of the working class to help our most vulnerable sections, an obligation which needs no thanks.
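The two models described above amount to a simple matching problem: pair each requested trip with a driver whose availability covers it, and fall back to the hotline when none does. A toy sketch of the schedule-matching variant (all names and data here are hypothetical, invented for illustration):

```python
def match_rides(drivers, requests):
    """drivers: {name: set of (day, hour) slots}; requests: list of (rider, day, hour)."""
    matches = []
    for rider, day, hour in requests:
        for driver, slots in drivers.items():
            if (day, hour) in slots:
                matches.append((rider, driver))
                break
        else:
            # No scheduled driver covers this slot; fall back to the hotline model.
            matches.append((rider, None))
    return matches

# Example: one driver available Monday at 8:00 and 17:00
# match_rides({"D1": {("Mon", 8), ("Mon", 17)}},
#             [("R1", "Mon", 8), ("R2", "Tue", 9)])
```

A real roster would also need locations and destinations, but the core bookkeeping is just this availability lookup.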
2024-03-24T01:27:17.568517
https://example.com/article/3428
Some guy with a cell phone camera recorded this incident at a recent press conference showing off Honda’s Asimo humanoid robot. If you fast-forward the video clip to about the 0:59 mark, you can witness the pratfall: watch in horror as the brave little robot attempts to climb the stairs and instead does his best Chevy Chase impression. The best part of the clip is when the stage crew attempts to conceal the lifeless, limp body of the poor little guy. For a moment there, I had flashbacks to the famous Derek Smalls pod sequence in This is Spinal Tap. [via Digital World Tokyo]
2024-05-29T01:27:17.568517
https://example.com/article/5525
1. Introduction
===============

Among the diterpenoids isolated from octocorals, the briarane-type metabolites (3,8-cyclized cembranes) are a major group of compounds \[[@B1-marinedrugs-10-01156],[@B2-marinedrugs-10-01156],[@B3-marinedrugs-10-01156]\]. Compounds of this type were suggested to be of marine origin, and the octocorals belonging to the genus *Briareum* have proven to be the most important source of briarane-type compounds \[[@B4-marinedrugs-10-01156],[@B5-marinedrugs-10-01156],[@B6-marinedrugs-10-01156],[@B7-marinedrugs-10-01156]\]. In previous studies, a series of interesting terpenoid derivatives, including briaranes \[[@B8-marinedrugs-10-01156],[@B9-marinedrugs-10-01156],[@B10-marinedrugs-10-01156],[@B11-marinedrugs-10-01156],[@B12-marinedrugs-10-01156],[@B13-marinedrugs-10-01156],[@B14-marinedrugs-10-01156],[@B15-marinedrugs-10-01156],[@B16-marinedrugs-10-01156],[@B17-marinedrugs-10-01156],[@B18-marinedrugs-10-01156],[@B19-marinedrugs-10-01156],[@B20-marinedrugs-10-01156],[@B21-marinedrugs-10-01156],[@B22-marinedrugs-10-01156],[@B23-marinedrugs-10-01156],[@B24-marinedrugs-10-01156],[@B25-marinedrugs-10-01156],[@B26-marinedrugs-10-01156],[@B27-marinedrugs-10-01156],[@B28-marinedrugs-10-01156],[@B29-marinedrugs-10-01156],[@B30-marinedrugs-10-01156],[@B31-marinedrugs-10-01156],[@B32-marinedrugs-10-01156],[@B33-marinedrugs-10-01156],[@B34-marinedrugs-10-01156],[@B35-marinedrugs-10-01156]\], cembranes \[[@B36-marinedrugs-10-01156]\] and carotenoids \[[@B37-marinedrugs-10-01156]\], had been isolated from octocorals belonging to the genus *Briareum* distributed in the waters off Taiwan, at the intersection of the Kuroshio Current and the South China Sea surface current. In a continuation of our search for new substances from Formosan marine invertebrates, the chemical constituents of an octocoral specimen identified as *Briareum* sp. (Briareidae) were studied.
A fraction of its organic extract (fraction H, see Experimental Section) displayed inhibitory effects on the generation of superoxide anion (inhibition rate 36.8%) and the release of elastase (inhibition rate 90.3%) at a concentration of 10 μg/mL. We further isolated two new briarane-type diterpenoids, briarenolides F (**1**) and G (**2**) ([Figure 1](#marinedrugs-10-01156-f001){ref-type="fig"}), from the octocoral *Briareum* sp. In this paper, we report the isolation, structure determination and bioactivity of briaranes **1** and **2**.

![The structures of briarenolides F (**1**) and G (**2**).](marinedrugs-10-01156-g001){#marinedrugs-10-01156-f001}

2. Results and Discussion
=========================

Briarenolide F (**1**) was isolated as a white powder. The molecular formula of **1** was established as C~28~H~40~O~12~ (nine degrees of unsaturation) from a sodium adduct at *m/z* 591 in the ESIMS spectrum and further supported by HRESIMS (C~28~H~40~O~12~Na, *m/z* 591.2420, calculated 591.2417). The IR spectrum of **1** showed bands at 3498, 1789 and 1743 cm^−1^, consistent with the presence of hydroxy, γ-lactone and ester carbonyl groups. The ^13^C NMR and DEPT spectra of **1** showed that this compound had 28 carbons ([Table 1](#marinedrugs-10-01156-t001){ref-type="table"}), including seven methyls, four sp^3^ methylenes, eight sp^3^ methines, three sp^3^ quaternary carbons, an sp^2^ methine and five sp^2^ quaternary carbons. From the ^1^H and ^13^C NMR spectra ([Table 1](#marinedrugs-10-01156-t001){ref-type="table"}), **1** was found to possess two acetoxy groups (δ~H~ 1.99, 2.01, each 3H × s; δ~C~ 170.6, 2 × qC; 21.3, 2 × CH~3~), an *n*-butyrate group (δ~H~ 0.94, 3H, t, *J* = 7.2 Hz; 1.63, 2H, sext, *J* = 7.2 Hz; 2.27, 2H, t, *J* = 7.2 Hz; δ~C~ 13.7, CH~3~; 18.4, CH~2~; 36.3, CH~2~; 173.1, qC), a γ-lactone moiety (δ~C~ 171.0, qC-19) and a trisubstituted olefin (δ~H~ 5.65, 1H, br d, *J* = 13.6 Hz, H-4; δ~C~ 130.3, CH-4; 128.8, qC-5).
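The nine degrees of unsaturation quoted for the molecular formula above follow from simple arithmetic on the formula itself; the helper below is an illustrative check, not part of the paper's methods:

```python
def degrees_of_unsaturation(c, h, n=0):
    # Degree-of-unsaturation (DBE) rule: C - H/2 + N/2 + 1.
    # Divalent oxygen atoms do not change the count.
    return c - h / 2 + n / 2 + 1

print(degrees_of_unsaturation(28, 40))  # C28H40O12 (compound 1) -> 9.0
print(degrees_of_unsaturation(22, 30))  # C22H30O5 (compound 2) -> 8.0
```

For compound **1**, the text below accounts for five of these nine degrees (two ester carbonyls, a lactone carbonyl, an olefin and an epoxide are part of that tally), leaving four rings.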
The presence of a tetrasubstituted epoxide containing a methyl substituent was established from the signals of two quaternary oxygenated carbons at δ~C~ 68.8 (qC-8) and 58.4 (qC-17) and further confirmed by the proton signal of a methyl singlet at δ~H~ 1.49 (3H, s, H~3~-18). Thus, from the above NMR data, five degrees of unsaturation were accounted for and **1** was identified as a tetracyclic compound.

marinedrugs-10-01156-t001_Table 1

###### ^1^H (400 MHz, CDCl~3~) and ^13^C (100 MHz, CDCl~3~) NMR data, ^1^H--^1^H COSY and HMBC correlations for briarane **1**.

| C/H | δ~H~ (*J* in Hz) | δ~C~, Mult. | ^1^H--^1^H COSY | HMBC (H→C) |
|------------|---------------------------|------------------------|---------------------------|-------------------------------------|
| 1 | | 44.9, qC | | |
| 2 | 5.22 d (8.0) | 75.4, CH | H~2~-3 | C-1, -4, -10, -15, acetate carbonyl |
| 3α | 1.89 m | 33.3, CH~2~ | H-2, H-3β, H-4 | N.O. |
| β | 3.81 dd (17.2, 13.6) | | H-2, H-3α, H-4 | C-4, -5 |
| 4 | 5.65 br d (13.6) | 130.3, CH | H~2~-3, H~3~-16 | N.O. |
| 5 | | 128.8, qC | | |
| 6 | 4.65 d (2.4) | 84.9, CH | H-7 | C-4, -5, -7, -8, -16 |
| 7 | 5.33 d (2.4) | 77.2, CH | H-6 | C-5, -6 |
| 8 | | 68.8, qC | | |
| 9 | 4.59 d (4.8) | 75.1, CH | OH-9 | C-1, -8, -10, -11, -17 |
| 10 | 2.06 d (4.0) | 39.4, CH | H-11 | C-1, -8, -9, -15, -20 |
| 11 | 2.28 m | 40.2, CH | H-10, H-12, H~3~-20 | C-1, -10, -12 |
| 12 | 4.98 ddd (12.4, 5.2, 5.2) | 70.7, CH | H-11, H~2~-13 | C-20, *n*-butyrate carbonyl |
| 13 | 1.87--1.97 m | 26.5, CH~2~ | H-12, H-14 | C-12 |
| 14 | 4.88 dd (3.2, 2.4) | 74.9, CH | H~2~-13 | N.O. |
| 15 | 1.37 s | 15.9, CH~3~ | | C-1, -2, -10, -14 |
| 16 | 1.77 br s | 25.4, CH~3~ | H-4 | C-4, -5, -6 |
| 17 | | 58.4, qC | | |
| 18 | 1.49 s | 8.8, CH~3~ | | C-8, -17, -19 |
| 19 | | 171.0, qC | | |
| 20 | 1.22 d (7.2) | 10.6, CH~3~ | H-11 | C-10, -11, -12 |
| 2-OAc | 1.99 s | 170.6, qC; 21.3, CH~3~ | | Acetate carbonyl |
| 6-OOH | 8.71 br s | | | N.O. |
| 9-OH | 2.95 d (4.8) | | H-9 | N.O. |
| 12-OC(O)Pr | | 173.1, qC | | |
| | 2.27 t (7.2) | 36.3, CH~2~ | H~2~-3′ | C-1′, -3′, -4′ |
| | 1.63 sext (7.2) | 18.4, CH~2~ | H~2~-2′, H~3~-4′ | C-1′, -2′, -4′ |
| | 0.94 t (7.2) | 13.7, CH~3~ | H~2~-3′ | C-2′, -3′ |
| 14-OAc | | 170.6, qC | | |
| | 2.01 s | 21.3, CH~3~ | | Acetate carbonyl |

N.O. = Not observed.

^1^H--^1^H couplings in the COSY spectrum of **1** enabled identification of the C-2/-3/-4, C-6/-7, C-10/-11/-12/-13/-14, C-4/-16 (by allylic coupling) and C-11/-20 units ([Table 1](#marinedrugs-10-01156-t001){ref-type="table"}), which were assembled with the assistance of an HMBC experiment. The HMBC correlations between protons and quaternary carbons of **1**, such as H-2, H-9, H-10, H-11, H~3~-15/C-1; H-3β, H-6, H-7, H~3~-16/C-5; H-6, H-9, H-10, H~3~-18/C-8; H-9, H~3~-18/C-17; and H~3~-18/C-19, permitted the elucidation of the carbon skeleton ([Table 1](#marinedrugs-10-01156-t001){ref-type="table"}). The vinyl methyl at C-5 was confirmed by the allylic coupling between H-4/H~3~-16 in the ^1^H--^1^H COSY spectrum and by the HMBC correlations between H~3~-16/C-4, -5, -6 and H-6/C-16. The ring junction C-15 methyl group was positioned at C-1 from the HMBC correlations between H~3~-15/C-1, -2, -10, -14; H-2/C-15; and H-10/C-15. In addition, the carbon signal at δ~C~ 173.1 (qC) was correlated with the signal of the methylene protons at δ~H~ 2.27 in the HMBC spectrum and was consequently assigned as the carbon atom of the *n*-butyrate carbonyl. Additionally, the *n*-butyrate positioned at C-12 was confirmed by the connectivity between H-12 (δ~H~ 4.98) and the carbonyl carbon (δ~C~ 173.1, qC) of the *n*-butyrate. Furthermore, an acetate ester at C-2 was established by a correlation between H-2 (δ~H~ 5.22) and the acetate carbonyl (δ~C~ 170.6, qC) observed in the HMBC spectrum of **1**. The presence of a hydroxy group at C-9 was deduced from the ^1^H--^1^H COSY correlation between a hydroxy proton (δ~H~ 2.95) and H-9 (δ~H~ 4.59). The presence of a hydroperoxy group in **1** was supported by a hydroperoxy proton signal at δ~H~ 8.71 as a broad singlet \[[@B22-marinedrugs-10-01156],[@B32-marinedrugs-10-01156],[@B38-marinedrugs-10-01156]\].
Due to the absence of HMBC correlations for H-14 (δ~H~ 4.88) and the hydroperoxy proton (δ~H~ 8.71), the positions of the remaining acetoxy and hydroperoxy groups could not be determined by this method. By comparison of the ^1^H and ^13^C NMR data of the C-14 oxymethine of **1** (δ~H~ 4.88; δ~C~ 74.9) with those of a known briarane analogue, excavatolide F (**3**) (δ~H~ 4.94; δ~C~ 74.1) ([Figure 2](#marinedrugs-10-01156-f002){ref-type="fig"}) \[[@B10-marinedrugs-10-01156]\], which possesses a cyclohexane moiety similar to that of **1**, the remaining acetoxy group in **1** was placed at C-14. Thus, the hydroperoxy group was positioned at C-6, an oxymethine at δ~C~ 84.9 (CH), by analysis of the ^1^H--^1^H COSY correlations and of characteristic NMR signals.

![The structures of briarenolide F (**1**) and excavatolide F (**3**).](marinedrugs-10-01156-g002){#marinedrugs-10-01156-f002}

In all naturally-occurring briaranes, H-10 is *trans* to the C-15 methyl group, and these two groups are assigned as α- and β-oriented in most briarane derivatives \[[@B4-marinedrugs-10-01156],[@B5-marinedrugs-10-01156],[@B6-marinedrugs-10-01156],[@B7-marinedrugs-10-01156]\]. The relative configuration of **1** was elucidated from the interactions observed in a NOESY experiment and was found to be compatible with that of **1** offered by computer modeling ([Figure 3](#marinedrugs-10-01156-f003){ref-type="fig"}) \[[@B39-marinedrugs-10-01156]\] and with that obtained from vicinal proton coupling constant analysis. In the NOESY experiment of **1**, the correlations of H-10 with H-2, H-9, H-11 and H-12, but not with H~3~-15 and H~3~-20, indicated that these protons (H-2, H-9, H-10, H-11 and H-12) were situated on the same face, and these were assigned as α protons, since the C-15 and C-20 methyls are β-substituents at C-1 and C-11, respectively. H-14 was found to exhibit an interaction with H~3~-15, but not with H-10, revealing the β-orientation of this proton.
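The vicinal proton coupling constant analysis mentioned above typically rests on the Karplus relation, which maps a ^3^J(H,H) value onto the H-C-C-H dihedral angle. A generic sketch with textbook coefficients (illustrative only; the specific parameterization used by the authors is not stated in the paper):

```python
import math

def karplus_3j(theta_deg, a=7.76, b=-1.10, c=1.40):
    """Estimate a vicinal 3J(H,H) coupling (Hz) from the H-C-C-H dihedral angle."""
    t = math.radians(theta_deg)
    return a * math.cos(t) ** 2 + b * math.cos(t) + c

# Dihedrals near 90 degrees give small couplings; near 180 degrees, large ones:
print(round(karplus_3j(90), 1))   # 1.4 Hz
print(round(karplus_3j(180), 1))  # 10.3 Hz
```

This is why a small coupling such as the 2.4 Hz observed between H-6 and H-7 of **1** points to a dihedral angle near 90°, consistent with a *cis* arrangement on the ring.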
The configuration at C-9 is worthy of comment. H-9 was found to exhibit correlations with H-10, H-11, H~3~-18 and H~3~-20. From a consideration of molecular models, H-9 was found to be reasonably close to H-10, H-11, H~3~-18 and H~3~-20 when placed on the α face in **1**. The C-16 vinyl methyl showed correlations with H-4 and H-6, demonstrating the *Z* configuration of the Δ^4,5^ double bond and indicating that the hydroperoxy group at C-6 was α-oriented. The *cis* relationship between H-6 and H-7 was established by a correlation between H-6 and H-7 and by the small coupling constant (*J* = 2.4 Hz) between these two protons. Moreover, an acetyl methyl (δ~H~ 2.01) exhibited correlations with H-12 and H-2, further supporting that the acetoxy group at C-14 in **1** was α-oriented. Based on the above findings, the configurations of all chiral carbons of **1** were assigned as 1*S*\*, 2*S*\*, 6*S*\*, 7*S*\*, 8*R*\*, 9*S*\*, 10*S*\*, 11*R*\*, 12*S*\*, 14*S*\*, 17*R*\*, and the structure of **1** was established unambiguously. To the best of our knowledge, briarane derivatives possessing a hydroperoxy group are rarely found \[[@B22-marinedrugs-10-01156],[@B32-marinedrugs-10-01156],[@B38-marinedrugs-10-01156]\], and briarenolide F (**1**) is the first briarane derivative possessing a 6-hydroperoxy group. A double bond positioned at C-4(5) in briarane-type metabolites is also rarely found \[[@B31-marinedrugs-10-01156],[@B40-marinedrugs-10-01156],[@B41-marinedrugs-10-01156],[@B42-marinedrugs-10-01156]\].

![The stereoview of **1** (generated from computer modeling) and the calculated distances (Å) between selected protons with key NOESY correlations.](marinedrugs-10-01156-g003){#marinedrugs-10-01156-f003}

Briarenolide G (**2**) was isolated as a white powder whose HRESIMS showed a molecular ion at *m/z* 397.1989, implying that **2** had the molecular formula C~22~H~30~O~5~ (C~22~H~30~O~5~Na, calculated 397.1991).
The IR spectrum revealed absorptions for hydroxy (3397 cm^−1^) and ester carbonyl (1757 and 1734 cm^−1^) groups. The ^1^H NMR data ([Table 2](#marinedrugs-10-01156-t002){ref-type="table"}) showed resonances due to an acetyl methyl (δ~H~ 2.03, 3H, s), three vinyl methyls (δ~H~ 1.78, 3H, br s, H~3~-16; 1.87, 3H, d, *J* = 1.6 Hz, H~3~-18; 1.63, 3H, d, *J* = 0.8 Hz, H~3~-20), a quaternary methyl (δ~H~ 0.77, 3H, s, H~3~-15), two olefinic protons (δ~H~ 5.29, 1H, br s, H-6; 5.12, 1H, m, H-12) and an oxymethine signal (δ~H~ 4.78, 1H, br s, H-14). The ^13^C NMR and DEPT spectra of **2** ([Table 2](#marinedrugs-10-01156-t002){ref-type="table"}) revealed the presence of a tetrasubstituted (δ~C~ 160.8, qC-8; 125.1, qC-17) and two trisubstituted (δ~C~ 144.4, qC-5; 124.6, CH-6; 136.3, qC-11; 117.8, CH-12) carbon-carbon double bonds, a hemiketal carbon (δ~C~ 106.7, qC-7), an acetate carbonyl (δ~C~ 170.9, qC), an α,β-unsaturated-γ-lactone carbonyl (δ~C~ 171.1, qC-19), a tetrasubstituted carbon atom bearing a carbon substituent (δ~C~ 39.1, qC-1) and an oxymethine (δ~C~ 77.2, CH-14).

marinedrugs-10-01156-t002_Table 2

###### ^1^H (400 MHz, CDCl~3~) and ^13^C (100 MHz, CDCl~3~) NMR data, ^1^H--^1^H COSY and HMBC correlations for briarane **2**.

| C/H | δ~Η~ (*J* in Hz) | δ~C~, Mult. | ^1^H--^1^H COSY | HMBC (H→C) |
|--------|----------------------|-------------|------------------|------------------------|
| 1 | | 39.1, qC | | |
| 2α | 1.69 m | 35.4, CH~2~ | H-2β, H~2~-3 | C-3, -14 |
| β | 1.29 m | | H-2α, H~2~-3 | C-1, -3, -10, -14 |
| 3 | 1.72 m | 23.7, CH~2~ | H~2~-2, H~2~-4 | C-2 |
| 4α | 1.90 m | 29.6, CH~2~ | H~2~-3, H-4β | N.O. |
| β | 3.72 m | | H~2~-3, H-4α | C-3, -16 |
| 5 | | 144.4, qC | | |
| 6 | 5.29 br s | 124.6, CH | H~3~-16 | C-4, -7, -16 |
| 7 | | 106.7, qC | | |
| 8 | | 160.8, qC | | |
| 9α | 2.54 br d (15.2) | 25.3, CH~2~ | H-9β, H-10 | C-8, -10, -11, -17 |
| β | 2.37 dd (15.2, 10.8) | | H-9α, H-10 | C-7, -8, -10, -11, -17 |
| 10 | 3.76 d (10.8) | 35.9, CH | H~2~-9 | N.O. |
| 11 | | 136.3, qC | | |
| 12 | 5.12 m | 117.8, CH | H~2~-13, H~3~-20 | N.O. |
| 13α | 2.06 m | 29.3, CH~2~ | H-12, H-13β | C-11, -12, -14 |
| β | 2.33 m | | H-12, H-13α | C-11 |
| 14 | 4.78 br s | 77.2, CH | H~2~-13 | C-12 |
| 15 | 0.77 s | 21.9, CH~3~ | | C-1, -2, -10, -14 |
| 16 | 1.78 br s | 23.9, CH~3~ | H-6 | C-4, -5, -6 |
| 17 | | 125.1, qC | | |
| 18 | 1.87 d (1.6) | 9.1, CH~3~ | | C-8, -17, -19 |
| 19 | | 171.7, qC | | |
| 20 | 1.63 d (0.8) | 21.9, CH~3~ | H-12 | C-10, -11, -12 |
| 7-OH | 3.35 s | | | C-6, -7, -8 |
| 14-OAc | | 170.9, qC | | |
| | 2.03 s | 21.7, CH~3~ | | Acetate carbonyl |

N.O. = Not observed.

From the ^1^H--^1^H COSY experiment of **2** ([Table 2](#marinedrugs-10-01156-t002){ref-type="table"}), it was possible to establish the separate spin systems that map out the proton sequences from H~2~-2/H~2~-3/H~2~-4 and H~2~-9/H-10. These data, together with the HMBC correlations between H~2~-2/C-1, -3, -10; H~2~-3/C-2; H-4β/C-3; H-6/C-4, -7; and H~2~-9/C-7, -8, -10, established the connectivity from C-1 to C-10 in the ten-membered ring ([Table 2](#marinedrugs-10-01156-t002){ref-type="table"}). The vinyl methyl at C-5 was confirmed by the HMBC correlations between H~3~-16/C-4, -5, -6; H-4β/C-16; and H-6/C-16, and further supported by the allylic coupling between H-6 and H~3~-16. The methylcyclohexene ring, which is fused to the ten-membered ring at C-1 and C-10, was elucidated by the ^1^H--^1^H COSY correlations between H-12/H~2~-13/H-14 and H-12/H~3~-20 (by allylic coupling) and by the HMBC correlations between H~2~-2/C-14, H~2~-9/C-11 and H~3~-20/C-10, -11, -12. The ring junction C-15 methyl group was positioned at C-1 from the HMBC correlations between H~3~-15/C-1, -2, -10, -14. In addition, the acetate ester at C-14 was established by a correlation between H-14 (δ~H~ 4.78) and the acetate carbonyl observed in the HMBC spectrum of **2**. The presence of a hydroxy group at C-7 was deduced from the HMBC correlations between the hydroxy proton (δ~H~ 3.35, 1H, s, OH-7) and C-6, C-7, and C-8.
The C-7 hydroxy group was concluded to be part of a hemiketal functionality on the basis of a characteristic carbon signal at δ~C~ 106.7 (a quaternary hemiketal carbon, qC-7). These data, together with the HMBC correlations between H~3~-18/C-8, -17, -19, were used to establish the molecular framework of **2**. NOESY measurements were carried out in order to deduce the relative stereochemical features of **2** ([Figure 4](#marinedrugs-10-01156-f004){ref-type="fig"}). Thus, H~3~-15 gave a correlation with H-14, but not with H-10, indicating that H~3~-15 and H-14 are located on the same face (assigned as the β-face) and that H-10 lies on the opposite α-face. The NOESY spectrum showed correlations between H-6/H~3~-16 and H-12/H~3~-20, revealing the *Z* geometry of the C-5/6 and C-11/12 double bonds in **2**. Due to the absence of NOESY correlations for the C-7 hydroxy group, the configuration at that chiral center could not be determined by this method. By comparison of the ^13^C NMR chemical shifts of C-6 (δ~C~ 124.6), C-7 (δ~C~ 106.7) and C-8 (δ~C~ 160.8) for **2** with those of an unnamed known 7β-hydroxybriarane analogue **4** (δ~C~ 124.8, C-6; 106.2, C-7; 160.1, C-8), which was obtained from the Caribbean octocoral *Briareum polyanthes* \[[@B43-marinedrugs-10-01156]\] ([Figure 5](#marinedrugs-10-01156-f005){ref-type="fig"}), we deduced that the C-7 hydroxy group was β-oriented, and the configurations of all the chiral carbons in **2** were assigned as 1*R*\*, 7*S*\*, 10*S*\*, 14*S*\*.

![The stereoview of **2** (generated from computer modeling) and the calculated distances (Å) between selected protons with key NOESY correlations.](marinedrugs-10-01156-g004){#marinedrugs-10-01156-f004}

![The structures of briarenolide G (**2**) and briarane (**4**).](marinedrugs-10-01156-g005){#marinedrugs-10-01156-f005}

The *in vitro* anti-inflammatory effects of briaranes **1** and **2** were tested.
Briarenolide F (**1**) was found to display a significant inhibitory effect on the generation of superoxide anion by human neutrophils ([Table 3](#marinedrugs-10-01156-t003){ref-type="table"}).

marinedrugs-10-01156-t003_Table 3

###### Inhibitory effects of briaranes **1** and **2** on the generation of superoxide anion and the release of elastase by human neutrophils in response to FMLP/CB.

| Compound | Superoxide anion IC~50~ (μg/mL) | Superoxide anion Inh% *^a^* | Elastase release IC~50~ (μg/mL) | Elastase release Inh% *^a^* |
|-------------------|------------------|------------------|--------------|--------------|
| **1** | 3.82 ± 0.45 | 76.65 ± 4.21 | \>10.0 | 27.48 ± 6.60 |
| **2** | \>10.0 | 22.04 ± 3.43 | \>10.0 | 12.98 ± 4.68 |
| DPI *^b^* | 0.82 ± 0.31 | | | |
| Elastatinal *^b^* | | | 31.82 ± 5.92 | |

*^a^* Percentage of inhibition (Inh%) at a concentration of 10 µg/mL; *^b^* DPI (diphenyleneiodonium) and elastatinal were used as reference compounds.

3. Experimental Section
=======================

3.1. General Experimental Procedures
------------------------------------

Optical rotations were measured on a Jasco P-1010 digital polarimeter. Infrared spectra were recorded on a Varian Digilab FTS 1000 FT-IR spectrometer; peaks are reported in cm^−1^. The NMR spectra were recorded on a Varian Mercury Plus 400 NMR spectrometer. Coupling constants (*J*) are given in Hz. ^1^H and ^13^C NMR assignments were supported by ^1^H--^1^H COSY, HMQC, HMBC and NOESY experiments. ESIMS and HRESIMS were recorded on a Bruker APEX II mass spectrometer. Column chromatography was performed on silica gel (230--400 mesh, Merck, Darmstadt, Germany). TLC was carried out on precoated Kieselgel 60 F~254~ (0.25 mm, Merck), and spots were visualized by spraying with 10% H~2~SO~4~ solution followed by heating. HPLC was performed using a system comprised of a Hitachi L-7100 pump and a Rheodyne injection port. A normal phase column (Hibar 250 × 10 mm, Merck, silica gel 60, 5 μm) was used for HPLC.

3.2. Animal Material
--------------------

Specimens of the octocorals *Briareum* sp.
were collected by hand using scuba equipment off the coast of southern Taiwan in July 2011 and stored in a freezer until extraction. A voucher specimen (NMMBA-TW-SC-2011-77) was deposited in the National Museum of Marine Biology and Aquarium. This organism was identified by comparison with previous descriptions \[[@B44-marinedrugs-10-01156],[@B45-marinedrugs-10-01156],[@B46-marinedrugs-10-01156],[@B47-marinedrugs-10-01156]\].

3.3. Extraction and Isolation
-----------------------------

Sliced bodies of *Briareum* sp. (wet weight 6.32 kg, dry weight 2.78 kg) were extracted with a mixture of methanol (MeOH) and dichloromethane (DCM) (1:1). The extract was partitioned between ethyl acetate (EtOAc) and H~2~O. The EtOAc layer was separated on silica gel and eluted using *n-*hexane/EtOAc (stepwise, 100:1--pure EtOAc) to yield 18 fractions A--R. Fraction H was chromatographed on silica gel and eluted using *n*-hexane/acetone (stepwise, 40:1--pure acetone) to afford 45 fractions H1--H45. Fraction H11 was separated by normal-phase HPLC (NP-HPLC) using a mixture of *n*-hexane and EtOAc (5:2) as the mobile phase to afford compound **2** (0.4 mg). Fraction H16 was further purified by normal-phase HPLC using a mixture of *n*-hexane and acetone as the mobile phase (7:2) to afford compound **1** (2.3 mg).

Briarenolide F (**1**): white powder; mp 141--142 °C; \[α\]^25^~D~ +32 (*c* 0.1, CHCl~3~); IR (neat) ν~max~ 3498, 1789, 1743 cm^--1^; ^1^H (CDCl~3~, 400 MHz) and ^13^C (CDCl~3~, 100 MHz) NMR data, see [Table 1](#marinedrugs-10-01156-t001){ref-type="table"}; ESIMS: *m/z* 591 \[M + Na\]^+^; HRESIMS: *m/z* 591.2420 (calcd for C~28~H~40~O~12~Na, 591.2417).
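The HRESIMS calcd values quoted in the characterization data can be reproduced from monoisotopic atomic masses. A quick illustrative check (not part of the paper's methods; electron mass neglected, as is common for calcd values):

```python
# Monoisotopic atomic masses (IUPAC values)
MASS = {"C": 12.0, "H": 1.00782503207, "O": 15.9949146196, "Na": 22.9897692809}

def sodium_adduct_mz(c, h, o):
    """Calculated m/z for an [M + Na]+ adduct of a CcHhOo molecule."""
    return c * MASS["C"] + h * MASS["H"] + o * MASS["O"] + MASS["Na"]

print(round(sodium_adduct_mz(28, 40, 12), 4))  # briarenolide F: 591.2417
print(round(sodium_adduct_mz(22, 30, 5), 4))   # briarenolide G: 397.1991
```

Both results agree with the calcd values reported for C~28~H~40~O~12~Na and C~22~H~30~O~5~Na to four decimal places.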
Briarenolide G (**2**): white powder; mp 78--80 °C; \[α\]^25^~D~ −97 (*c* 0.02, CHCl~3~); IR (neat) ν~max~ 3397, 1757, 1734 cm^--1^; ^1^H (CDCl~3~, 400 MHz) and ^13^C (CDCl~3~, 100 MHz) NMR data, see [Table 2](#marinedrugs-10-01156-t002){ref-type="table"}; ESIMS: *m/z* 397 \[M + Na\]^+^; HRESIMS: *m/z* 397.1989 (calcd for C~22~H~30~O~5~Na, 397.1991).

3.4. Molecular Mechanics Calculations
-------------------------------------

Implementation of the MM2 force field \[[@B39-marinedrugs-10-01156]\] in CHEM3D PRO software from CambridgeSoft Corporation (version 9.0, Cambridge, MA, USA; 2005) was used to calculate the molecular models.

3.5. Superoxide Anion Generation and Elastase Release by Human Neutrophils
--------------------------------------------------------------------------

Human neutrophils were obtained by means of dextran sedimentation and Ficoll centrifugation. Measurements of superoxide anion generation and elastase release were carried out according to previously described procedures \[[@B48-marinedrugs-10-01156],[@B49-marinedrugs-10-01156]\]. Briefly, superoxide anion production was assayed by monitoring the superoxide dismutase-inhibitable reduction of ferricytochrome *c*. Elastase release experiments were performed using MeO-Suc-Ala-Ala-Pro-Val-*p*-nitroanilide as the elastase substrate.

4. Conclusions
==============

Briarane-type natural products (3,8-cyclized cembranoids) are found in various marine organisms, particularly in the octocorals belonging to the genus *Briareum* (family Briareidae) \[[@B4-marinedrugs-10-01156],[@B5-marinedrugs-10-01156],[@B6-marinedrugs-10-01156],[@B7-marinedrugs-10-01156]\]. It is interesting to note that briarane-type natural products are major constituents of the extracts of the octocorals *Briareum* spp. distributed in the tropical and subtropical Indo-Pacific Ocean.
In the past 35 years, over 500 briarane analogues have been obtained, and, owing to their structural complexity and interesting bioactivities, the number is still increasing. It is worth noting that only three hydroperoxybriarane analogues have been isolated to date \[[@B22-marinedrugs-10-01156],[@B32-marinedrugs-10-01156],[@B38-marinedrugs-10-01156]\] and that briarenolide F (**1**) is the first 6-hydroperoxybriarane. 7-Hydroxybriarane derivatives are also rarely found \[[@B43-marinedrugs-10-01156],[@B50-marinedrugs-10-01156],[@B51-marinedrugs-10-01156],[@B52-marinedrugs-10-01156]\]; the new briarane, briarenolide G (**2**), is the first 7-hydroxybriarane derivative isolated from octocorals collected in the waters off Taiwan. The study material *Briareum* sp. has begun to be transplanted in tanks for the extraction of natural products, in order to establish a stable supply of bioactive material.

This research was supported by grants from the National Museum of Marine Biology and Aquarium (Grant No. 100100101 and No. 100200311); the National Dong Hwa University; the Division of Marine Biotechnology, Asia-Pacific Ocean Research Center, National Sun Yat-sen University (Grant No. 00C-0302-05); the Department of Health Clinical Trial and Research Center of Excellence (Grant No. DOH101-TD-C-111-004); and the National Research Program for Biopharmaceuticals, National Science Council (Grant No. NSC 101-2325-B-291-001, 100-2325-B-291-001 and 98-2320-B-291-001-MY3), Taiwan, awarded to Y.-H.K. and P.-J.S.

*Samples Availability*: Not available.
2023-09-05T01:27:17.568517
https://example.com/article/2140
Another striking provision in the bill, tucked halfway into the text, calls for "consumption lounges." The proposal goes beyond what is offered by most other states where marijuana is legal. Patrons would be free to purchase cannabis products in a dispensary and then walk to a separate area to imbibe. And they would also be allowed to Bring Their Own. Weed, that is. Think breweries or bars with a twist.
2023-11-16T01:27:17.568517
https://example.com/article/9739
Ex-Paterson schools employee draws 5-year prison term for scheme to overbill district A 76-year-old former employee of the Paterson School District was sentenced Thursday to five years in prison without parole for hiring her own company to do work for the district and overbilling the district by more than $190,000. Anna Taliaferro also was ordered to pay $191,000 in restitution, must forfeit her entire pension and is banned for life from seeking public employment in the state. Her attorney, Dwayne Warren, said at Taliaferro’s sentencing in Superior Court in Paterson that his client deserved some leniency because she was a prominent figure in her community for decades when she worked in the school district as a coordinator who educated parents on how to become better advocates for their children. But Veronica Allende, a deputy state attorney general, said that is precisely why Taliaferro deserves a stiff sentence. “She took advantage of a position of trust,” Allende said. “This is a classic case of official misconduct.” Taliaferro, formerly of Paterson but now living in Virginia Beach, Va., was convicted in December of official misconduct, pattern of official misconduct, forgery, theft by deception, tampering with public records and misconduct by a corporate official. She organized programs and taxpayer-funded conferences for parents in her capacity as a district employee and coordinator of the Paterson Resource Center. She was, at the same time, the president of the New Jersey Association of Parent Coordinators, a non-profit organization. Prosecutors said during Taliaferro’s three-month-long trial that she “outsourced” the district’s conference organizing work to her non-profit organization, while stating in disclosure statements that she had no financial interest in any organization that was doing business with the school. 
In addition to overbilling the district for various services, Taliaferro ran her nonprofit on district time using district employees and resources, prosecutors said. “She, in effect, charged the district through [the non-profit] for doing what the district already was paying her to do as coordinator of the Paterson Resource Center,” the state Attorney General’s Office said in a press release Thursday. Taliaferro’s convictions carry a sentence of five to 10 years in prison, but Superior Court Judge Raymond Reddin imposed a sentence at the lower end of that range, saying Taliaferro deserved some leniency because of her age and her lack of prior criminal history. But Reddin said the law does not allow him to afford Taliaferro any more leniency because it requires a mandatory minimum term of five years without parole for a conviction on official misconduct. Reddin ordered Taliaferro to surrender herself to the state Department of Corrections on June 6.
2023-11-23T01:27:17.568517
https://example.com/article/6203
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
/* Copyright (c) 2020 Marvell International Ltd. */

#include <linux/dma-mapping.h>
#include <linux/qed/qed_chain.h>
#include <linux/vmalloc.h>

#include "qed_dev_api.h"

static void qed_chain_init(struct qed_chain *chain,
			   const struct qed_chain_init_params *params,
			   u32 page_cnt)
{
	memset(chain, 0, sizeof(*chain));

	chain->elem_size = params->elem_size;
	chain->intended_use = params->intended_use;
	chain->mode = params->mode;
	chain->cnt_type = params->cnt_type;

	chain->elem_per_page = ELEMS_PER_PAGE(params->elem_size,
					      params->page_size);
	chain->usable_per_page = USABLE_ELEMS_PER_PAGE(params->elem_size,
						       params->page_size,
						       params->mode);
	chain->elem_unusable = UNUSABLE_ELEMS_PER_PAGE(params->elem_size,
						       params->mode);

	chain->elem_per_page_mask = chain->elem_per_page - 1;
	chain->next_page_mask = chain->usable_per_page &
				chain->elem_per_page_mask;

	chain->page_size = params->page_size;
	chain->page_cnt = page_cnt;
	chain->capacity = chain->usable_per_page * page_cnt;
	chain->size = chain->elem_per_page * page_cnt;

	if (params->ext_pbl_virt) {
		chain->pbl_sp.table_virt = params->ext_pbl_virt;
		chain->pbl_sp.table_phys = params->ext_pbl_phys;

		chain->b_external_pbl = true;
	}
}

static void qed_chain_init_next_ptr_elem(const struct qed_chain *chain,
					 void *virt_curr, void *virt_next,
					 dma_addr_t phys_next)
{
	struct qed_chain_next *next;
	u32 size;

	size = chain->elem_size * chain->usable_per_page;
	next = virt_curr + size;

	DMA_REGPAIR_LE(next->next_phys, phys_next);
	next->next_virt = virt_next;
}

static void qed_chain_init_mem(struct qed_chain *chain, void *virt_addr,
			       dma_addr_t phys_addr)
{
	chain->p_virt_addr = virt_addr;
	chain->p_phys_addr = phys_addr;
}

static void qed_chain_free_next_ptr(struct qed_dev *cdev,
				    struct qed_chain *chain)
{
	struct device *dev = &cdev->pdev->dev;
	struct qed_chain_next *next;
	dma_addr_t phys, phys_next;
	void *virt, *virt_next;
	u32 size, i;

	size = chain->elem_size * chain->usable_per_page;
	virt = chain->p_virt_addr;
	phys = chain->p_phys_addr;

	for (i = 0; i < chain->page_cnt; i++) {
		if (!virt)
			break;

		next = virt + size;
		virt_next = next->next_virt;
		phys_next = HILO_DMA_REGPAIR(next->next_phys);

		dma_free_coherent(dev, chain->page_size, virt, phys);

		virt = virt_next;
		phys = phys_next;
	}
}

static void qed_chain_free_single(struct qed_dev *cdev,
				  struct qed_chain *chain)
{
	if (!chain->p_virt_addr)
		return;

	dma_free_coherent(&cdev->pdev->dev, chain->page_size,
			  chain->p_virt_addr, chain->p_phys_addr);
}

static void qed_chain_free_pbl(struct qed_dev *cdev, struct qed_chain *chain)
{
	struct device *dev = &cdev->pdev->dev;
	struct addr_tbl_entry *entry;
	u32 i;

	if (!chain->pbl.pp_addr_tbl)
		return;

	for (i = 0; i < chain->page_cnt; i++) {
		entry = chain->pbl.pp_addr_tbl + i;
		if (!entry->virt_addr)
			break;

		dma_free_coherent(dev, chain->page_size, entry->virt_addr,
				  entry->dma_map);
	}

	if (!chain->b_external_pbl)
		dma_free_coherent(dev, chain->pbl_sp.table_size,
				  chain->pbl_sp.table_virt,
				  chain->pbl_sp.table_phys);

	vfree(chain->pbl.pp_addr_tbl);
	chain->pbl.pp_addr_tbl = NULL;
}

/**
 * qed_chain_free() - Free chain DMA memory.
 *
 * @cdev: Main device structure.
 * @chain: Chain to free.
 */
void qed_chain_free(struct qed_dev *cdev, struct qed_chain *chain)
{
	switch (chain->mode) {
	case QED_CHAIN_MODE_NEXT_PTR:
		qed_chain_free_next_ptr(cdev, chain);
		break;
	case QED_CHAIN_MODE_SINGLE:
		qed_chain_free_single(cdev, chain);
		break;
	case QED_CHAIN_MODE_PBL:
		qed_chain_free_pbl(cdev, chain);
		break;
	default:
		return;
	}

	qed_chain_init_mem(chain, NULL, 0);
}

static int
qed_chain_alloc_sanity_check(struct qed_dev *cdev,
			     const struct qed_chain_init_params *params,
			     u32 page_cnt)
{
	u64 chain_size;

	chain_size = ELEMS_PER_PAGE(params->elem_size, params->page_size);
	chain_size *= page_cnt;

	if (!chain_size)
		return -EINVAL;

	/* The actual chain size can be larger than the maximal possible value
	 * after rounding up the requested elements number to pages, and after
	 * taking into account the unusable elements (next-ptr elements).
	 * The size of a "u16" chain can be (U16_MAX + 1) since the chain
	 * size/capacity fields are of u32 type.
	 */
	switch (params->cnt_type) {
	case QED_CHAIN_CNT_TYPE_U16:
		if (chain_size > U16_MAX + 1)
			break;

		return 0;
	case QED_CHAIN_CNT_TYPE_U32:
		if (chain_size > U32_MAX)
			break;

		return 0;
	default:
		return -EINVAL;
	}

	DP_NOTICE(cdev,
		  "The actual chain size (0x%llx) is larger than the maximal possible value\n",
		  chain_size);

	return -EINVAL;
}

static int qed_chain_alloc_next_ptr(struct qed_dev *cdev,
				    struct qed_chain *chain)
{
	struct device *dev = &cdev->pdev->dev;
	void *virt, *virt_prev = NULL;
	dma_addr_t phys;
	u32 i;

	for (i = 0; i < chain->page_cnt; i++) {
		virt = dma_alloc_coherent(dev, chain->page_size, &phys,
					  GFP_KERNEL);
		if (!virt)
			return -ENOMEM;

		if (i == 0) {
			qed_chain_init_mem(chain, virt, phys);
			qed_chain_reset(chain);
		} else {
			qed_chain_init_next_ptr_elem(chain, virt_prev, virt,
						     phys);
		}

		virt_prev = virt;
	}

	/* Last page's next element should point to the beginning of the
	 * chain.
	 */
	qed_chain_init_next_ptr_elem(chain, virt_prev, chain->p_virt_addr,
				     chain->p_phys_addr);

	return 0;
}

static int qed_chain_alloc_single(struct qed_dev *cdev,
				  struct qed_chain *chain)
{
	dma_addr_t phys;
	void *virt;

	virt = dma_alloc_coherent(&cdev->pdev->dev, chain->page_size,
				  &phys, GFP_KERNEL);
	if (!virt)
		return -ENOMEM;

	qed_chain_init_mem(chain, virt, phys);
	qed_chain_reset(chain);

	return 0;
}

static int qed_chain_alloc_pbl(struct qed_dev *cdev, struct qed_chain *chain)
{
	struct device *dev = &cdev->pdev->dev;
	struct addr_tbl_entry *addr_tbl;
	dma_addr_t phys, pbl_phys;
	__le64 *pbl_virt;
	u32 page_cnt, i;
	size_t size;
	void *virt;

	page_cnt = chain->page_cnt;

	size = array_size(page_cnt, sizeof(*addr_tbl));
	if (unlikely(size == SIZE_MAX))
		return -EOVERFLOW;

	addr_tbl = vzalloc(size);
	if (!addr_tbl)
		return -ENOMEM;

	chain->pbl.pp_addr_tbl = addr_tbl;

	if (chain->b_external_pbl) {
		pbl_virt = chain->pbl_sp.table_virt;
		goto alloc_pages;
	}

	size = array_size(page_cnt, sizeof(*pbl_virt));
	if (unlikely(size == SIZE_MAX))
		return -EOVERFLOW;

	pbl_virt = dma_alloc_coherent(dev, size, &pbl_phys, GFP_KERNEL);
	if (!pbl_virt)
		return -ENOMEM;

	chain->pbl_sp.table_virt = pbl_virt;
	chain->pbl_sp.table_phys = pbl_phys;
	chain->pbl_sp.table_size = size;

alloc_pages:
	for (i = 0; i < page_cnt; i++) {
		virt = dma_alloc_coherent(dev, chain->page_size, &phys,
					  GFP_KERNEL);
		if (!virt)
			return -ENOMEM;

		if (i == 0) {
			qed_chain_init_mem(chain, virt, phys);
			qed_chain_reset(chain);
		}

		/* Fill the PBL table with the physical address of the page */
		pbl_virt[i] = cpu_to_le64(phys);

		/* Keep the virtual address of the page */
		addr_tbl[i].virt_addr = virt;
		addr_tbl[i].dma_map = phys;
	}

	return 0;
}

/**
 * qed_chain_alloc() - Allocate and initialize a chain.
 *
 * @cdev: Main device structure.
 * @chain: Chain to be processed.
 * @params: Chain initialization parameters.
 *
 * Return: 0 on success, negative errno otherwise.
 */
int qed_chain_alloc(struct qed_dev *cdev, struct qed_chain *chain,
		    struct qed_chain_init_params *params)
{
	u32 page_cnt;
	int rc;

	if (!params->page_size)
		params->page_size = QED_CHAIN_PAGE_SIZE;

	if (params->mode == QED_CHAIN_MODE_SINGLE)
		page_cnt = 1;
	else
		page_cnt = QED_CHAIN_PAGE_CNT(params->num_elems,
					      params->elem_size,
					      params->page_size,
					      params->mode);

	rc = qed_chain_alloc_sanity_check(cdev, params, page_cnt);
	if (rc) {
		DP_NOTICE(cdev,
			  "Cannot allocate a chain with the given arguments:\n");
		DP_NOTICE(cdev,
			  "[use_mode %d, mode %d, cnt_type %d, num_elems %d, elem_size %zu, page_size %u]\n",
			  params->intended_use, params->mode, params->cnt_type,
			  params->num_elems, params->elem_size,
			  params->page_size);
		return rc;
	}

	qed_chain_init(chain, params, page_cnt);

	switch (params->mode) {
	case QED_CHAIN_MODE_NEXT_PTR:
		rc = qed_chain_alloc_next_ptr(cdev, chain);
		break;
	case QED_CHAIN_MODE_SINGLE:
		rc = qed_chain_alloc_single(cdev, chain);
		break;
	case QED_CHAIN_MODE_PBL:
		rc = qed_chain_alloc_pbl(cdev, chain);
		break;
	default:
		return -EINVAL;
	}

	if (!rc)
		return 0;

	qed_chain_free(cdev, chain);

	return rc;
}
Life is not easy. Especially when you are in a family of invisible illnesses and disabilities. It can be serious, funny and downright hard! But we make it. Just like everyone else. We just do it in a different style.

Saturday, March 28, 2009

After the last post, I really hate to change the direction and emotional energy of the blog, but this is supposed to be a real account of the kinds of difficulties the Unique Family goes through. So, today, I am going to cover an area of my life that is unfolding even as I write this. A while back, I posted an audioblog that attempted to reveal some of the turmoil that has surrounded my personal life for the last several years. I am talking about my husband and the multiple areas of his life that we are finding out are stunted and deformed. When I wrote his Update, Update, Part 1, I focused on the epilepsy and sleep apnea, and only lightly touched upon the depression into which he slips every once in a while. Today, I am going to tell you what I think is going on. Now, I am not a doctor. We haven't seen any doctors to confirm anything yet, but if there are any parents of children with autism, ADHD or learning disabilities out there reading this, you know a problem when you see it. For years before I met him, my husband put up a very good front for his family and friends. He projected himself as a friendly, outgoing, funny computer entrepreneur. He had loads of friends, loved to eat out and go to the movies. And yet, certain things didn't line up. Little stories would sneak out now and then. Like his fear of needles. No one loves them, but we would not jeopardize our health to avoid them. He would. He seemed to be indecisive. Any decision took so long to make, and then he second-guessed himself two, three or more times. The people closest to him didn't seem to be going anywhere. For all his "out-goingness" and entrepreneurial drive, he picked people who weren't progressing or growing to be his closest friends.
Still, he seemed to be looking positively into the future when he met me, and he wanted my children and me to be a part of it. He won me over with his sense of humor, his loyalty and patience, especially since he walked into my life when I was on a downward spiral with my health. But, the truth is, the whole thing was a facade. He is not a successful businessman. He is not a person who can lead a business or a family. He cannot handle financial responsibility. He does not have many friends and frequently offends the ones he has. He is not respected at his job. I could go on, but, one, I think you get the idea, and two, seeing it in print is depressing. But it is the truth. The past several months have been a period of major enlightenment for me as the house of cards slowly fell apart. Financially, physically and emotionally. When I did the audio-blog and even the post about going on a Faith Walk, it was in the midst of understanding that the task before me was not small. It was a monumental undertaking that might drain everything I had in me in order to see it through. I faced the fact that my marriage was not going to be what I hoped it would be, followed by a little trickle of fear that hissed, "Run for your life!" Today, almost a month since the beginning of the Faith Walk, I'm ready to state some truths and affirm the mission I intend to embark on as long as I can.

The Truths:
1. My husband is a survivor. He has survived verbal, physical and sexual abuse.
2. My husband has epilepsy and sleep apnea. This may have affected his cognitive functioning.
3. My husband may have a learning disability.
4. My husband suffers with emotional disorders, specifically depression and anxiety disorder.

My affirmations: I believe that my husband is not a malicious, mean-spirited person, but that after so many years of not getting the help he needed, as an act of survival, he uses deception, lies and fantasy to cope with his deficiencies. He lies not only to me, but to himself.
I believe despite all of his difficulties, he can feel love and does love me and my children. I believe we are living in the best of times to get help for the ailments/disorders he has. I believe intellectual and mental disabilities need to be treated with dignity, respect and kindness. Negativity only reinforces the need to deceive and hide. I believe that the future will be a rough one. I cannot guarantee that I am cut out for this. I respect my right to say I can't. Ultimately, I believe that love is the key to solving these problems. Without love, I turn off the ability to seek information to alleviate them. Without love, I cannot act with compassion. Without love, I manipulate and abuse, making me no different from his other abusers. I don't know what else to say, except to ask those who pray, do so. Those who believe in positive energy, speak words of affirmation about us. Those with similar knowledge or life experiences, speak freely. I appreciate everything and anything you do. Photos were taken from Google Image searches. Final painting is the work of d.Lawrence Coyle.

Tuesday, March 24, 2009

I am going to have to start a section in my blog called "People I Met Through Twitter." Maybe it will be on the new blog site I am currently slaving away on day and night. If you haven't read Momentous Decision, click here to read it. It talks about my idea to start a new blog. The end result is not exactly like I posted, but it outlines the spark that ignited my dream of new bloggie directions. But this post is about a wonderful company I befriended on Twitter called Smart Knit Kids. Actually, the company is called Therawear and one of their products is called Smart Knit Kids. Their Twitter name is @smartknitkids. Twitter is a great invention and businesses are getting on every day. I think that is wonderful, but if you are not willing to do what @smartknitkids did, come to the front of the classroom, listen and learn.
First, even though there was a logo (everyone wants to tell you to put a face; not necessary), the person tweeting came across as real. They laughed (LOL!) email-style, they conversed and then presented a product I might be interested in. They explained the benefits to me; how it was a product specifically made for my children's sensory needs. I was expecting a pair of socks. In less than a week, I got five. Two for each child and one pink pair for me. (Now, pink is not my favorite color, but I was intrigued.) The package came by UPS and included a handwritten note. Okay, that blew me away. But, what was better, was that their product lived up to what they said. Instant integrity. My oldest, who has Asperger's, immediately put them on. Lately, he has a thing for socks and I am always looking for good ones. My youngest son, who has all manner of podiatric problems and is always complaining of ill-fitting socks and hurting feet, tried on the Large, but really needed the Xtra Large. That fit perfectly. Well, you know I have to have customer feedback. The exchange went something like this:

"I think the large is too small." - young son
"Well, more for me, then." - oldest son
"Hey!!" - young son
"Here is the xtra large for you. How is that?" - ever-vigilant Mom
"Oh, that is much better. Perfect." - young son
"So...how do they feel?" - worried Mom, who is accustomed to rejection.
"They are fine." - young son
"Yea, just fine." - oldest son

Now, that may not sound like music to anyone's ears, but in my house, that is a rave review! Neither one of them gave them back or took them off. In fact, three hours later as I write this, they still have them on. Next, I visited their site. I was a little shocked. It was your corporate-looking website with an online store that I have probably Google-searched past many times. But, I knew someone in here. Her name is Rose. She sent me a note with her name on it, so I was comfortable.
I found the socks, priced them and prepared to place our first order. And, their socks are value-priced compared to other specialty sockwear! Now, some may say, well, Judi, of course, you write a good review because they sent you FIVE pairs of FREE socks. Okay, you are missing it. They didn't have to do that, I didn't have to even talk to them online, and this whole thing could not have happened. I could be blogging about something else, but I am not. I am blogging about a company that wasn't afraid to get to know me, the customer, before they sold me a product. Then, their product lived up to their word. In other words, there was a connection. One of honesty and friendship, and a sale. In my new blog, I want to write and discuss all about these kinds of connections. Big and small. Online and off. And, Therawear/Smart Knit Kids will be in there. Thanks for not spamming me and thanks for making a quality product. And, this is the only time you will see a posted picture of me in pink socks! Only you could get me to do it!

Saturday, March 21, 2009

The kids have to get haircuts today and we need to do grocery shopping. I am starting the third week of new classes and the work will begin to pile up from here on out. I love how teachers lull you in the beginning and then BAM!, read 8 chapters and give me 2000 words on this inane subject written in APA style. But I digress. I just wanted to let everyone know...WE HAVE HEAT! Yes, our homeowners' insurance office sent someone over to look at our poor old furnace. It turned out to be a busted thermostat (!!) and a connection that needed to be reset on the furnace. Another person told us that the gas valve had died and the whole furnace needed to be replaced...to the tune of $1800. The guy from the insurance company told us that even though it is the original furnace: "you know, they don't make them like this anymore and you have more life left to it." In other words, the furnace is fine.
*Sigh* All I can say is paying the $335 was better than $1800...and my Popsicle Toes have thawed out.

Wednesday, March 18, 2009

I don't often post two times in one day. Even though I may often have more things to say, I try to keep the posts down to one a day. And, they have gotten shorter and more to the point (I hope!). But, today, I had two things to say. One was announced in the post below this one. The other is to finally finish a series of posts regarding my family. This post makes Updates, Updates, Part 5. I finally have to talk about myself. Sometimes, this is very easy. I feel I could talk about myself all day. Not sure you want to hear it, but some days I am so in touch with who and what I am, I could go on and on. Other days, I question my reasons for being, my motives, my ever-changing and unenlightening emotions, etc. Basically, some days, I don't know who the heck I am or what the heck is going on. But in Updates, Updates, I usually focus on the reason each member is a part of this Unique Family. When it comes to me, I am a member because, well, I was the one who heard the phrase in my head, that fateful morning, when I didn't think I could go on anymore. I heard "You have a Unique Family." And, it was I who had to place myself in the Unique Family first and then bring the rest. Without sounding too psychological, I had to see myself as part of something unique and wonderful, even when, on the surface, it didn't look that way. I certainly didn't feel like I wanted to be included. Certainly didn't feel like I would write about it, either. And yet, both of those things happened. So, I will try to write about myself and what I face every day. I have struggled with some limitations from early on, and others surfaced as I got older. For all I can find out, I was born visually and partially hearing impaired. I didn't find out until I was 37 that I have degenerative myopia. That is a fancy way of saying my vision never stops changing.
I usually have to change my glasses every year or so. It is at the point now that I have to use assistive technology to get things done - CCTV, large 22” monitor on my computer, large text and icons, text readers and speech-to-text synthesizers, etc. There are other items I would like to purchase, like an Optelec Farview. This would help me outside of the house. I still have a driver's license, but I don't drive much, and never on the highway. My life and the lives of others are too precious to me. I have also learned that my hearing loss is not just about being partially deaf. I have permanent nerve damage in my inner ear, which has caused vertigo to be a permanent part of my life. I had a few episodes of it as a child, but after 2006, it decided to show up every day. There is therapy, and I may try it, but for now, three different meds keep the world from shimmying, and the nausea and migraines at bay. I have mobility impairments (herniated discs - 5 in all) as well, but most days this is hardly noticeable. After being practically bed-bound for a year, I had back surgery in 2005 that returned me to the land of the walking, but no hikes, long walks or drives for me. I can drive 20 minutes one way, but I will pay for the return trip. Anything longer, someone else is doing it, and even then, I will stiffly get out of the car. I have an allergy to dust mites that I didn't know about until last year. Due to this allergy being untreated all of my life, I also suffer with multiple chemical sensitivities and a weakened immune system. I am getting stronger (I made it through this whole winter without one cold, flu-like episode or bout of sinusitis! Not even a runny nose!), but I still can have violent reactions to fragrances, everyday cleaning supplies and chemicals, and dust that can halt my activities in a second. I don't need an epipen, but I will always have Benadryl, Zyrtec or Allegra handy.
I continue to battle diabetes through diet (vegetarian/vegan) and herbal supplements, but have had to start taking meds for hypertension. Since I have been treating my allergy and the inner ear problem, I have not suffered much with bruxism, TMJ or trigeminal neuralgia (TN). I have survived and overcome Bell’s Palsy, RSD/CRPS, optic neuritis, and a 40% disability in my lower left leg (due to a fractured tibia that went undiagnosed for 3 weeks). I have had seizures, multiple faints and chronic fatigue syndrome-like symptoms since my late 20s. To look at me, you wouldn’t be able to tell all of that. In fact, if I go back to wearing contact lenses, you won’t even be able to tell I have a vision issue. I am truly a person with invisible disabilities. If you read the post on "Learning to Be Less Than Perfect," you know that during my childhood, most of this went untreated. As an adult, I didn't really acknowledge my weak state of health even as I was staring disability in the face. I continued to work and ignore my health needs, trying to be stoic like my parents. Even after receiving disability, I continued to try to work, at home and in temp jobs. I only stopped in 2006, after the bout with Bell's Palsy left me shaken, broken and scared. I had to pay attention, or there was no telling what would happen next. I thank God, today, for that life-altering experience. I would not be writing today, starting an online business, making the friends I have, if I had not faced one of the most difficult periods of my life. Here I go again, making a really long post. Sorry. So, now you know a little about the final member of the Unique Family. The one God chose to bring these stories to you. Perhaps this is one of my purposes in life. I hope it makes a difference in yours. P.S. While looking for a pic for this post, I came across this quote from The Jungle of Life: There it is. That's the big decision. Kind of a let-down, maybe? Well, I took all night to decide this.
Last night, after thinking long and hard about it, I decided to start a blog related to my business. It will be self-hosted through my hubby's hosting company and it will chronicle An Extra Hand Services' rebirth; its twists and turns, stops and starts. I also want to have another platform for announcements, giveaways, business and financial tips and motivational, encouraging lines for businesses, big or small. What about this blog, you ask? It is going to remain exactly what it started out to be: the events in the life of the Unique Family. I don't want to water down the message of how we live with chronic illnesses and disabilities with "Hey, click here for this or that!" I know, you are thinking, most people wouldn't separate out their business from their personal. Well, for me, this blog has always been about making friends, sharing stories, baring our souls, laughing at pictures and comments of encouragement. And, I want to keep it that way. It is dear and precious to me. It has helped me get to the place that I can even think about working again on a business idea. The new blog will be about business; running one, setting up one, getting customers, hitting roadblocks and overcoming them. I hope to have guest bloggers, who will give tips and inspiration. I will upload videos (not of me! - if you listened to my audioblog, you know I have no stage presence!) and have more of a community spirit to it. Of course, friends will be there, too, because many of you have businesses, so there will be an overlap. Think of the new blog as the home office (where home may intrude a little!) and this blog as the living room, where it is all about the home front. I hope everyone here will also come into the office every once in a while. You are always welcome!

Monday, March 16, 2009

Now, why do I look at that A- and wish it were an A? I mean, I came so close. I was at 94.73, but my final project fell short of the instructor's rubric, so I didn't make it.
I admit it is my mother talking. I had a seriously Type-A Mom, back before they had terms like that. If you brought home a 95, she'd say, "Why couldn't it be a 96 or a 97?" I once received a final grade of 99 in a biology class. A final grade! And, yes, she asked me (and the teacher, mind you!) why I didn't get a 100. No one had ever got a 99, and she kept pushing for that perfect grade. The pressure used to be ridiculous. It didn't help that my older brother graduated high school at 15 1/2. Being the middle child, I was expected to be as good, if not better. I wasn't. I was the artsy, dreamy, talk-to-myself-under-the-kitchen-table type kid. I sat in mimosa trees, smelling the blossoms, and deciphered shapes in the clouds. I started out badly in school, nearly flunking 3rd grade. Was I too rambunctious (old term for ADHD) or didn't turn in my work? Nope. I just talked too much! LOL!! I laugh at that now. I just couldn't stop getting involved with everyone in the class and finding out how they were doing. Well, my mother had a real good talking-to with me (in those days, that meant spanking) and I realized that I wasn't going to master anything talking all the time. So, I buckled down. Real hard. And went on to be salutatorian of my middle school and graduate in the top 2% of my high school class. And, still, she kept pushing for more. Little did she know that physically, I was pushing myself to utter sickness and exhaustion. I never missed a day, until one day, in utter pain, I just walked out of the school. Top grades and all, I needed to rest. College proved to be disastrous. My eyes couldn't take it and my body seemed to be constantly racked with some virus or flu. To my darling mother's utter consternation, I never finished a degree. Five colleges and no degree. She was mortified. I really think I was relieved. Now, I am the parent and I have two lovely boys, who are far from stellar in grades. My older Aspie son is average, not your savant Aspie in any way.
My youngest son probably has permanent memory damage and has a speech/language deficit. I learned early on that I could not have the same attitude as my mother. I had to cut them some slack. And, today, I realized that I have to cut myself some slack, too. I deserve to be okay with less than perfect. I deserve to turn the record player of parental disapproval off and enjoy my return to school.

"Some days I face #autism head on. Some days I hide from it and I don't know exactly why. Answers?"

I answered with this:

"But isn't that like life? Look at the hiding days as reflective. No one takes life on head on everyday. Even God rested. #autism"

Boy, I am sure my mother is rolling over in the grave. But, when I look back on that tweet, I realize I have learned to take it easy. Perfect grades don't make perfect lives. And, all of us have something to offer, even if it is less than perfect. Some days we are gung-ho, and other days, we need to hide.

Saturday, March 14, 2009

Today is my young son's birthday and I invited several friends over. He never gets to see most of them since becoming homebound, so having guests over always puts a smile on his face. He walked by just now and said, "It's not that bad, Mom. It's almost as good as going out." Bless his growing heart! With the little I scraped together, I bought a huge picture cake from Giant and three Ultimate Meat Pizzas from Walmart. The first pizza was inhaled between Kirby, Kingdom Hearts and Super Smash Bros. Brawl. I got two slices from the second one (I still eat meat, even though most of my regular meals are veggie). Now, there are calls for the cake. I will have to make this quick, so I can get back to lighting candles. Just to hear the sound of laughter and good innocent young teen ribbing felt good. The house wasn't too cold today and everyone pitched in and cleaned three rooms in half an hour. Sometimes, good things happen in the midst of it all. Thank goodness for those times.
I use them like finger holds, clinging to each meager one as I continue to climb. Have you ever had something difficult happen and you couldn't even write it down? I am going through some difficult times and I ended up using an audioblog for this post rather than writing it. I wrote somewhere that writing things down makes them permanent. So does blogging, audio or not, but it was just easier. The Unique Family just got a little more unique. Unique, hard to handle and difficult to talk about. Listen if you want. If I feel comfortable with this, I will make more.

Friday, March 13, 2009

Okay, I have been tooting the business horn for a little while here. It is important in the Unique Family, due to our current precarious financial situation. But today, I am going to talk about the young son. If you have read Updates, Updates, Part 2, you know my youngest son has a rare disorder of the autonomic nervous system. In simple words, it doesn't work properly. My explanation for this is usually: think about whatever you don't think about in your body; heart rate, blood pressure, etc. Okay, now imagine if all those things didn't work the way they are supposed to. That's Dysautonomia. If you want more information, click on the title of this blogpost and you will go to the only support group for children with this disorder. Well, my son's birthday was Thursday and he is now 13. I have been commanded to not refer to him as a "boy" anymore, but a young man. I will try to comply without giggling. I can still remember chasing a soaking wet, naked, shrieking body down a hallway! Back to the point I was trying to make. I took a little bit of money and bought a Wii Fit. Yes, we have the Wii. It actually was bought six months before Christmas 2008 and stored away for the end of the year. Now, we have the Wii Fit. Boy, was he excited. Everyone in the house did their balance testing (don't ask me about mine. That board and I are not on speaking terms right now.)
and then he went on to try everything: step aerobics, yoga, walking in place. Whatever that board came with, he tried it before the night was over. Enter the next day. First, he couldn't get out of bed. Then he spent the entire morning moaning and groaning about his back, sides and legs. Sounds no different than an out-of-shape person, right? Except that along with the stiffness came the crushing fatigue. He was not able to do anything past opening a food packet for the dog. When we went out on an errand, he sat in his wheelchair the whole time and only lasted one store. Such is the life of someone with dysautonomia. While I am grateful he can even stand up without fainting now, and he managed to do two sessions on the board, it will be days before he is back up to anywhere near normal. This was supposed to be an alternative to the mind-numbing physical therapy sessions we were trying for the last two months. Sessions that were boring, time-consuming (we have to drive to the hospital) and produced no results. And, doubling his main medication has not seemed to make any difference. *Sigh* At least, the neighbors will have something to do when they come over.

Wednesday, March 11, 2009

Two or three blogposts ago, I announced that I would be resurrecting my old business, An Extra Hand Services. Poor thing, she doesn't even have a website yet! Not even an email. For now, the little company that could is just trying to breathe its first breath. But, as God would have it, yes, I am mentioning God, good things have already begun to happen the very day after that post. You may call it the Law of Attraction or a superior power; it doesn't matter to me. What matters is the real changes and miracles that happened within a week. Within this one week, people have just been walking up to me and blessing me with money. I don't know if they know our plight, but they come up with some silly reason, like, "Oh, thank you for driving my daughter to school the other day. Here's $20.00."
Now, you and I both know a car ride of less than 3 minutes would never cost that, but she pressed it into my hand and walked away. Here's another one. My brother, who borrowed some money from us last year and never paid it back, sent us a check. For the whole thing. $1500. I know I don't have to put numbers out there, but numbers speak volumes. I want you to know this is real. And, just yesterday, I discovered a service that I would love to offer through my newly re-birthed company. It is called Send Out Cards. This, to me, is just the kind of thing I like. Something that can make a big difference with just a little time. Send Out Cards allows you to pick a "real" card from over 10,000 cards online, personalize it, even upload a picture, and then somewhere in Utah, they print a real card with your words and send it to whoever you want. A real card, everyone. Not an e-card with an expiration date on it. A real card stock greeting card with a REAL stamp on the envelope. Of course, I am knocking my head and going, "Why didn't I think of that?" In these days of Twitterific tweets, e-chain mail and suspicious links, wouldn't it be nice to get a real card in the mail? I sure think so. I think it is such a great idea that I am offering to send a free card to 10 people who email me with their addresses. Or better yet, I will send it to someone for you. And then I will blog about the results and your comments. Send your addresses to: agapepantry@yahoo.com (this is a 2nd email address I have. Must tell you the idea behind that one day!) For the first ten full mailing addresses I receive, I will send a card. It could be to you or to someone you know who needs to get a card right now. Tell me what kind of card you want sent. And what you want it to say on your behalf. I just want you to see how easy this is. And if I could do it, you could do it, right from your computer. Also, send me your birthdays.
Once I get them into their easy-to-navigate contact manager, I will never forget them again (I usually forget EVERY year!) If there are any businesses out there that would like more information, please send an email to that address as well. I am setting up commercial accounts and am happy to be your back office and help you reach out to your customers, clients, prospects, friends and family. In these recessionary times, it will be appreciated so much more. A real card with real sentiment. There is even a way to send in a sample of your own handwriting so that you can "type" in your own handwriting. An Extra Hand Services is proud to be an independent distributor of Send Out Cards. View the video below to see a heartwarming true story related to Send Out Cards:

Saturday, March 7, 2009

From a child who rocked and flapped and disappeared into his own world to a junior in high school, who is facing the stiffest tests in any young person's life. Don't ask me how I got here. One day at a time doesn't seem to do it justice. And I haven't gone crazy, and neither has he. Don't say there aren't miracles. I don't know any numbers and I am too tired to do the research, but how many children with Asperger's take their SATs or the ACTs? I am not talking about gifted Aspies. I am the mother of a run-of-the-mill child with average intelligence according to all the tests he's taken, but with definite below-average verbal skills. I won't parade the numbers in front of you, but he has been diagnosed with Receptive/Expressive Speech Disorder since he was in elementary school. His recent testing put his reading comprehension age around 8 years old. And yet, his word recognition age is around 21 years old. Should he even attempt the SAT with vacillating scores like that? He is pulling a C in Math, and we have struggled to keep those grades in the high C range. It just seemed too far out there.
I know this is a side topic, but he is always talking outside the box, creating new words and positively amazing us all with his quirky insights. I have talked/tweeted with other mothers who tell me great stories. I still remember a child grunting, whistling and humming. But now, he loves to create words that aren't in the dictionary (he knows this, because he loves to read the dictionary!). One word that family and friends have adopted is "linner." Linner is the counterpart of brunch. Brunch = breakfast and lunch. Linner = lunch and dinner. Linner is like a very late brunch or early supper. I told him people used to use the word supper, but he just replied, "No, Mom, that word is used just like dinner now, so we need a new word." Can't argue with that one, so we use linner. Well, back to the topic at hand; here we are in his junior year, and everyone is talking college. "College!?! What!!?!!," I gasp and sputter. Yes, the school and this crazy program I signed him up for (since we don't get therapy at all, I sign him up for every free program I can get my hands on. He has been in AVID, Education Talent Search, etc.) are sending home reams of paper and thick, glossy books entitled "The 411 on College." I signed him up for the SAT. Then I took a look at the SAT. Kinda backwards, I know, but a lot is going on in our house lately. It hit me real hard: there is an essay requirement on the SAT. ESSAY. 8-year-old comprehension. Okay, that's not good. I did make a half-hearted attempt to search the library and online for help, but quickly realized that this test was just not going to happen. The study guides were thick, newsprint-looking monstrosities. The tapes had suspiciously vanished. Dead end. Then, I headed online. YouTube (which, by the way, my son loves at the moment) had videos, but I couldn't get into any of them. Maybe I am wrong, and if someone finds a great one, let me know. Nothing moved me at all. It didn't look like this was going to happen.
At least, not by May 13th (remember, like a dope, I scheduled before the due diligence). My son has become very resilient over the years. We have no more meltdowns; we have no more stiff-as-a-board "honey, are you there?" episodes. But remembering my own SAT almost threw me into a panic, and I remember scoring very high. I just couldn't do this to him. So, I decided that he would take the ACT. What is the difference? The SAT tests critical thinking, logic and reasoning, where the ACT focuses more on what you have learned scholastically. I don't think I need to tell you that they really don't want my son to draw conclusions or make critical thinking analyses. They would never believe their eyes. The kind of leaps and connections he makes here at home are out of this world. But still, the problem was preparation. Even with an all-multiple-choice test with no essay, he needed prep. And then, I found it. E-Prep. I fell in love. Here was a site that looked like it was MADE for us. Video-run instruction. The ability to stop videos at any time. An entire prep course in video, showing, not just telling. I am in love. I don't do reviews very often, and this is not really one either. Check the site out, but this is the answer for BOTH my children. For the oldest, who has a fantastic memory, he will quickly remember the video instructions. For my young son, who has a damaged memory system, the moving visuals that can be repeated are perfect for increasing retention and recall. Unlike static words or audio, videos always seem to ease learning and remembering for him. The downside? Yes, there is one. The course is not free. But they do give options that run from $69.00 to $249.00. Somehow, some way, I will scrounge up the money for one of the courses in the middle. Hopefully, it will be enough to give him a good score. And then we can start discussing what he would like to study and what he would like to be. Now that, folks, is a WHOLE other post.
Must tell you about the fun we are having getting him to volunteer and find a job.

Thursday, March 5, 2009

Yesterday's post was a very hard one to write. I don't often let on that I feel overwhelmed and scared. But that post brought some quick assurances and good vibes from so many people. Thanks to all of you. Well, what do you do when you get up the next day and nothing is any different? Did you read the last post to the end? There was a list of things to do, and I am doing them. Today, I printed out two flyers that I put near my computer to gaze at whenever I take those 15-minute breaks each hour to rest my eyes. One says, "Opportunity is missed by most people because it is dressed in overalls and looks like work." The quote is by Thomas Edison and was tweeted to me this morning by @Outlaw_Marketer. For the other one, I neglected to keep the tweeter's handle (in the future, I will be linking all tweets to web pages), but the quote went like this: "Vision without action is merely a dream. Action without vision just passes the time. Vision with action can change the world." Those are the new quotes near my computer to keep me going when I feel so overwhelmed. They join "Obstacles are placed before you to test your resolve and commitment towards obtaining your goals" and "If God brings you to it... He will bring you through it." All these words mean a lot to me. They are reminders that others are suffering, others are in a worse position than I am, and that I can give in for a moment, but then must get back to the business of doing what I do best: fighting my way out of a challenge. So, today, I am up early (5:30am) with my head brimming with ideas. If you haven't noticed the new banner on the right, it is for a program I am involved in through a great online herbal store called iHerb.com. I am not a great salesperson, so I will just say that occasionally you will see announcements about what they are doing in the side bar.
This is a company that I have personally been involved with for nearly 13 years. I posted a note on my Facebook about my involvement. Also, I am following a bunch of coaches on Twitter, and three of my favorite ones right now are Pat Weber (@patweber), who is a great coach for someone like me - introverted and shy (did you know those are two separate types?), and J.Sewell Perkins (@thesciccoach), who appeared on a great blogtalkradio show that I will follow as Tracey Tarrant parades experts in front of me every week. The third great coach, who is just starting out but is SO sincere I just love her to death, is Joanne Julius Hunold, CPC (@intandem). She is another coach for us introverts who is very real and down to earth. I mention these women because in Tip #1, I mention getting rid of the negative talk and getting positive people in front of you. Well, if you are mostly housebound like I am, that is not easy. So these women are my neighbors (along with all of you who comment or send emails or call!). I have made new friends who challenge me to go beyond my current (cold) circumstances and keep looking for those opportunities. Even if they seem like work. Yesterday, I explained to my two boys the circumstances in our home. Things are not well, and I let them know it. But I assured them, just like I did when I became disabled: Mommy will take care of it. Trust me. My oldest son, who always seems to know the right thing to say, exclaimed, "Well, we can just pray to God about it. But if you say you will take care of it, I know you always do." Who says Asperger's needs to be cured? He warms my heart! Well, I hit the wall... I am okay... and I am back in the business of taking care of my family.

Wednesday, March 4, 2009

Below is a list of tweets I sent this morning. Not so much to my followers, but to me. You see, I am feeling like I could hit a depressive wall soon. You know that dissonant wall, when your dreams and goals smack into reality really hard?
Well, I see it coming. Here is the problem. I want to do so much. It's the reason I am back in school. I want to provide a safe home and environment for my children, with a little land around it. I want to have a place for my sister to retire to when she is ready. I want to live comfortably, with enough to eat and read. And right now, as I start my fourth day in a house without heat, I wonder if I will even live long enough to see my children grow up. My mother died when my sister was only 19, and I was 29. I have always felt we never had enough time. Maybe that is what is wrong. I feel like I am starting late. I mean, I am 43 and trying to complete my associate's degree. I am so slow reading and comprehending. I wonder what I am doing. So, this morning, in the bitter frigid air of my dining room, I decided that before I hit that wall, I would put on my inspirational music. You know what I mean. We all have those songs that keep us going when there is nothing tangible left to go on for. Of course, I cried. Listening to these songs always makes me feel like I have been running 100 miles an hour in the wrong direction. I have been focusing on goals and accomplishments, tangible ones like grades, when some days I need to focus on internal goals. The ability to look inward, take stock of our lives and think about the good stuff. Like the friends I have made through this blog, Facebook and Twitter. Like the difference people tell me I am making in their lives. How they appreciate me. Yes, little ole me. Folks, I can tell you right now I am about to enter a Faith Walk. It is a term I came up with over 10 years ago, after my parents died within seven weeks of each other, while I was pregnant with my 2nd son and after my then first husband decided he didn't want to be married anymore. A Faith Walk is a period of time you go through when you don't see ANY positivity around. I mean it. Nothing looks good at all, and the disasters seem to pile up.
How do you keep going through one of these? Everyone is different, but this is how I survived then and now.

1. Get rid of any conversation that is not positive. Really. Even though you don't see anything good, surround yourself with good sounds and words from other people.

2. Go to the library (I had to walk to it one time) and come home with something funny, something miraculous and something spiritual. I don't care if you don't believe in any religion. Now is a time of negativity. Reach out for something beyond yourself and your situation.

3. Get the music that will keep you going playing. Put it on constant repeat if you have to. Sing at the top of your lungs. Crying while singing is permitted. Getting angry at your situation is encouraged. This leads to letting the problem have its space and gets you ready to move on to solutions.

4. Take control of whatever you can. Control makes one feel a sense of power. The feeling of powerlessness has to be avoided at all costs. If all you can do is straighten the edges of the covers of the sickbed you lie in, by God, straighten them! Sit back and admire. Say to yourself, "I did that!"

5. Look forward to change. I mean this. I know you don't know where it is coming from, but get in the frame of mind that change eventually comes. I don't care if you think "this will never change." I was a single parent for 10 years. During that time, I became disabled and hit serious poverty. But that same situation is not here today. Back then, every day I got up and looked for change. Expect it. It is coming.

6. Repeat this process for as long as it takes.

So, to get back to the beginning. Here are the tweets I sent out this morning. They are some of the lyrics to one of my favorite uplifting songs by Bill Gaither called "Thanks for Sunshine." If you click on the little music player under the picture, you should be able to hear it. Well, I have to get going.
This faith walk looks like it is going to be a long one.

judielise: Listening to my favorite "Thank You" song and thought to tweet some of the lines. ThankU lines.

Tuesday, March 3, 2009

Due to some really bizarre happenings in my life over the last 3 years (that I will have to blog about later), I am opening my company up again. For those of you who don't know, I started a virtual assistant company back in 1996, back when hardly anyone knew about virtual assistants. I performed clerical, customer service, and light graphic design work. I had a fabulous run for 10 years, working for everyone from non-profits to sales executives and business directors who needed extra staff. My greatest accomplishment was a mail program that I designed from scratch for a client that boosted his sales 532% in one year. That is still my crowning glory, though it was tough designing the mail piece, coordinating with the mail company and keeping the back-end database updated twice a month. I closed it down in 2006 due to illness. Now that I have been getting my health under control and I am back in school, several people have asked me (not including old clients, who have begged me) to open it up again. I have hesitated, and I will tell you why. Sitting at a computer for 12-16 hours a day is no fun. It was hard on my back, and after back surgery in 2005, I tried to limit my time sitting. Also, it is extremely hard on my eyes. I don't talk about it much, but I value my eyesight more than anything besides my children and try my best to protect what sight I have. But, because of life circumstances and the fact that I now sit here anyway, going to school, blogging, Twittering, etc., I have decided to start it up again. Plus, to tell you the truth, I miss it!
Stay tuned for further posts about progress, projects, the trials and tribs of business ownership (again!), who I am working with and whether I survive this time around. If you go way back to the beginning of this blog, I brought over some links and articles regarding sleep issues in children. This is always a very big concern in our home, because of my young son's inability to have good sleep experiences. I realize I am jumping around a bit in this blog (yesterday, new apps; today, sleep!), but I have yet to figure out how to put blogposts into groups and categories. Once I do that, this jumping from subject to subject will be better organized. In my daily perusal of medical journals (yes, I have strange hobbies!), I came across these two article titles: I know reading medical jargon and study results is not everyone's cup of tea, but when it affects your everyday life, you get smart and interested really fast. Basically, the sleep world finally woke up and realized that a lot of the issues with cognitive function and performance can be linked to faulty sleep habits, patterns and brain wiring/firing. In our case, my young son barely seems to have a circadian clock. He was diagnosed with Moderate Obstructive Sleep Apnea and Delayed Sleep Phase Syndrome. It looked like narcolepsy (before he got his CPAP machine) and insomnia, which is kind of impossible to have together. The big issue is that this has DEFINITELY affected his cognitive functions, executive functioning and memory. His short-term memory is shot, and his long-term memory is sketchy. He cannot draw inferences, and his brain stubbornly refuses to make leaps of connection from one subject (or even word!) to another related subject (or phrase!). Teaching him is very difficult (on bad days, I say it is non-existent) and I struggle to keep information flowing and relevant. Okay, I am getting off subject. This is not about my son, per se, but is about awareness. Parents, please listen to me.
My son went through the usual diagnosis of ADHD for years. However, though the behavior is similar, it is NOT the same disorder. My son remains unresponsive to ADHD meds. In fact, he is on Concerta (he also took Ritalin for a while, but it made the sleep phase problem worse!), but it does not increase focus or attention. It actually makes him more hyper. The point I want you to take away today is that sleep disorders are very real. They have very real symptoms that mimic other disorders. And most doctors are not thinking "sleep disorder" immediately. This is changing. I had the chance to attend a sleep conference last year in D.C. sponsored by the Sleep Foundation. There I heard the latest research into sleep disorders. Out of that meeting came the recommendations for doctors to begin asking questions regarding sleep for children as young as 2 years old, especially if there were hyperactivity symptoms. Sleep issues are real. Here in the Unique Family, we live through it every day. Three of us (out of 5) wear CPAPs (hubby fights wearing his. Grrr!). All three have memory and cognitive issues. Pass this along to anyone you think it might benefit. Also, let me know: is there anyone in your family or friends that you are concerned about?

Monday, March 2, 2009

Today, I have come to the conclusion that I am a geek. How have I come to this conclusion? Because even though I spent 12 hours typing up my final project APA-styled paper for my IT class, I still found time to follow Twitter and find new apps online. I will talk about one I played around with today. I actually got this from a blog I follow in my Google Reader (don't ask me how many blogs; I have totally lost count). It is from Jose Picardo, who blogs at Box of Tricks. He is a wonderful person to listen to (great video content) and learn from. And one of the apps that he has written about is Animoto. Animoto takes your pictures and creates a wonderful dynamic slide show out of them with music. I just had to try it.
Doesn't matter that the furnace died yesterday and I am sitting here as cold as ice in Baltimore's first big snow in nearly 6 years. Doesn't matter that I should be tired or that my eyes should rest. It is amazing how you can make your body wake up when you are excited about something. So, below is my 1st attempt at Animoto, and I love it! I used my rose flower pictures because I already have them batched at Flickr. A couple of steps later, and this is what I got. I had stopped taking pictures because I hate putting them in albums where they languish. This may actually spur me to purchase a digital SLR. After all, my major in high school was photography. Check Animoto out and give it a try. Just found a very interesting Twitter app that I really like. TweetSheep.com generates a tag word cloud from the profiles of the people you follow. It is one of those little apps that seem to be popping up all over the place. My TweetSheep cloud serves as an introspective look into who I tend to be drawn to on Twitter. It also lets me know whether I am not searching for people who cover or represent certain subjects that are dear to me. I'll give you some examples. I have posted a pic of my current cloud. You can see that autism practically jumps off the page. I have already said that the autism community is represented very heavily on Twitter. Then mom comes up quite big, and so does technology. But as an IT tech major, that tells me my connection to the tech world is not so strong on Twitter. Now, at first glance, that may not look good. But I already know enough about me to know that even though I love technology, I love it in context to people and learning. What is minuscule in my cloud is the disability tag. So small I could barely read it. Not good. I want to know more about disability advocacy, assistive technology and the issues that surround that community.
Now, the thought comes to my mind: Am I not searching/connecting to Tweeters involved in these areas or are they not online? Here is an open call. Anyone who has followers or follows Tweeters involved in disability issues, AT and the like, please post some people to follow in the comments.
Mona Singh is a popular Indian model-turned-television-actress who appeared in Ekta Kapoor's Indian drama soap, Kya Huaa Tera Vaada, as the female lead protagonist, Mona Pradeep Singh. She is the ex-wife of Pradeep Singh (played by Pawan Shankar), and the story revolves around their life as a young couple residing in Mumbai, along with their three children: Bulbul (played by Sargun Mehta), Rano (played by Ananya Agarwal) and Rajbir (played by Yatin Mehta). It is a family story that portrays the struggles of Mona after her husband has an affair with another woman. She also played a pivotal role in the comedy drama film 3 Idiots, under the direction of Rajkumar Hirani, in 2009. She played the role of Mona Sahasrabuddhe, the elder sister of Pia. Later, she acted as Koel Datta in the Bollywood comedy thriller film Utt Pataang. Mona Singh was born on October 8, 1981, in Pune, India. Besides modeling and acting, she is also a TV presenter and a good dancer. She was once a winner of the dance reality show Jhalak Dikhhla Jaa on Colors TV. Her television debut was in the popular series Jassi Jaissi Koi Nahin, the Indian version of Yo Soy Betty, La Fea, or Ugly Betty. She played the lead female protagonist role of Jasmeet Walia, the wife of Armaan (played by Apurva Agnihotri). This is the serial for which Mona won Best Actress in a Drama Series and Outstanding Debut at the 2003 Apsara Awards. She also got Television Personality of the Year and Best Female Actress at the 2004 Indian Telly Awards. Mona also received Best Popular Actress at the Indian Television Academy Awards in 2004, Best Female Actress at the Indian Telly Awards, and Best Actress for Drama at the Indian Television Academy Awards in 2005. The show itself also won many awards over the years that it was aired. Mona was also part of an AIDS event in November 2006. Subsequently, she signed on with Sony Entertainment Television to be the brand ambassador of the channel for a span of 13 months.
Mona was also part of the shows Extreme Makeover and Extra Shots. She became an ambassador for several brands as well. Mona Singh was caught in a controversy when an MMS video leaked and went viral on the Internet. But the actress said it was not her, and she was saddened that someone would morph her image onto someone else's body just to make an overly sensational video. She filed a complaint with the Cybercrime Cell, and she hopes that they can easily track down the culprit responsible for this controversy.

Another Version of this Bio...

Mona Singh was born on 8th October, 1981, in Chandigarh, India. She made her debut on the small screen with the television drama named "Jassi Jaisi Koi Nahi". Her character, Jassi, earned her lots of fame and applause. Jassi Jaisi Koi Nahi aired on Sony Entertainment Television, accumulated high TRPs, and became one of the most popular television dramas of that era. Later in her career, she stepped onto the dance floor of Jhalak Dikhhla Jaa season 1 on Colors, and she grabbed the winner's trophy by defeating Sweta Shalve. Owing to her extreme popularity as a television actress, she got the opportunity to become the brand ambassador of Sony Entertainment Television for 13 months. Later in her career, she hosted Jhalak Dikhla Jaa season 4 and also Extra Shots on Set Max. She made her maiden appearance on the silver screen with a movie called "3 Idiots", in 2009, as the elder sister of the character played by lead actress Kareena Kapoor Khan. She also endorsed various brands. In 2011, she got the opportunity to host Entertainment Ke Liye Kuch Bhi Karega, which aired on Sony Entertainment Television. She went on to host 5 seasons of that show. She bagged the "Apsara Award" for Outstanding Debut in 2003. She is currently working on a new television drama called "Kya Hua Tera Wada," as Mona, the female protagonist. On top of this, she is also shooting for a movie called 'Z Plus' that satirises the security provided to politicians.
Q: Unit Testing Example with OCUnit

I'm really struggling to understand unit testing. I do understand the importance of TDD, but all the examples of unit testing I read about seem to be extremely simple and trivial. For example, testing to make sure a property is set, or that memory is allocated to an array. Why? If I code out [[… alloc] init], do I really need to make sure it works? I'm new to development, so I'm sure I'm missing something here, especially with all the craze surrounding TDD. I think my main issue is I can't find any practical examples. Here is a method, setReminderId, that seems to be a good candidate for testing. What would a useful unit test look like to make sure this is working? (using OCUnit)

- (NSNumber *)setReminderId:(NSDictionary *)reminderData
{
    NSNumber *currentReminderId = [[NSUserDefaults standardUserDefaults] objectForKey:@"currentReminderId"];
    if (currentReminderId) {
        // Increment the last reminderId
        currentReminderId = @(currentReminderId.intValue + 1);
    } else {
        // Set to 0 if it doesn't already exist
        currentReminderId = @0;
    }
    // Update currentReminderId to model
    [[NSUserDefaults standardUserDefaults] setObject:currentReminderId forKey:@"currentReminderId"];
    return currentReminderId;
}

A: Update: I've improved on this answer in two ways: it's now a screencast, and I switched from property injection to constructor injection. See How to Get Started with Objective-C TDD.

The tricky part is that the method has a dependency on an external object, NSUserDefaults. We don't want to use NSUserDefaults directly. Instead, we need to inject this dependency somehow, so that we can substitute a fake user defaults for testing. There are a few different ways of doing this. One is by passing it in as an extra argument to the method. Another is to make it an instance variable of the class. And there are different ways of setting up this ivar. There's "constructor injection," where it's specified in the initializer arguments. Or there's "property injection."
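To see the injection idea apart from the Objective-C syntax, here is a rough sketch of the same pattern in Python (all names here are hypothetical stand-ins, not the original code): the user-defaults collaborator is passed in through the constructor, so a test can hand in a fake instead of touching real stored settings.

```python
class ReminderStore:
    """Hypothetical Python analog of the question's class.

    The store collaborator is injected; in the real Objective-C code this
    slot would hold NSUserDefaults, but a test can pass a plain dict.
    """

    def __init__(self, defaults=None):
        # Constructor injection with a default: callers that pass nothing
        # get an in-memory dict (standing in for the "real" store here).
        self.defaults = {} if defaults is None else defaults

    def next_reminder_id(self):
        # Same behavior the question's setReminderId implements:
        # start at 0 when no id is stored, otherwise increment, then save.
        current = self.defaults.get("currentReminderId")
        next_id = 0 if current is None else current + 1
        self.defaults["currentReminderId"] = next_id
        return next_id
```

A test then never needs the real store: `ReminderStore({}).next_reminder_id()` returns 0, while a store seeded with `{"currentReminderId": 41}` returns 42.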
For standard objects from the iOS SDK, my preference is to make it a property, with a default value. So let's start with a test that the property is, by default, NSUserDefaults. My toolset, by the way, is Xcode's built-in OCUnit, plus OCHamcrest for assertions and OCMockito for mock objects. There are other choices, but that's what I use.

First Test: User Defaults

For lack of a better name, the class will be named Example. The instance will be named sut for "system under test." The property will be named userDefaults. Here's a first test to establish what its default value should be, in ExampleTests.m:

#import <SenTestingKit/SenTestingKit.h>

#define HC_SHORTHAND
#import <OCHamcrestIOS/OCHamcrestIOS.h>

@interface ExampleTests : SenTestCase
@end

@implementation ExampleTests

- (void)testDefaultUserDefaultsShouldBeSet
{
    Example *sut = [[Example alloc] init];

    assertThat([sut userDefaults], is(instanceOf([NSUserDefaults class])));
}

@end

At this stage, this doesn't compile, which counts as the test failing. Look it over. If you can get your eyes to skip over the brackets and parentheses, the test should be pretty clear. Let's write the simplest code we can to get that test to compile and run, and fail. Here's Example.h:

#import <Foundation/Foundation.h>

@interface Example : NSObject
@property (strong, nonatomic) NSUserDefaults *userDefaults;
@end

And the awe-inspiring Example.m:

#import "Example.h"

@implementation Example
@end

We need to add a line to the very beginning of ExampleTests.m:

#import "Example.h"

The test runs, and fails with the message, "Expected an instance of NSUserDefaults, but was nil". Exactly what we wanted. We have reached step 1 of our first test. Step 2 is to write the simplest code we can to pass that test. How about this:

- (id)init
{
    self = [super init];
    if (self)
        _userDefaults = [NSUserDefaults standardUserDefaults];
    return self;
}

It passes! Step 2 is complete.
Step 3 is to refactor code to incorporate all changes, in both production code and test code. But there's really nothing to clean up yet. We are done with our first test. What do we have so far? The beginnings of a class that can access NSUserDefaults, but also have it overridden for testing.

Second Test: With no matching key, return 0

Now let's write a test for the method. What do we want it to do? If the user defaults has no matching key, we want it to return 0. When first starting with mock objects, I recommend making them by hand at first, so that you get an idea of what they're for. Then start using a mock object framework. But I'm going to jump ahead and use OCMockito to make things faster. We add these lines to ExampleTests.m:

#define MOCKITO_SHORTHAND
#import <OCMockitoIOS/OCMockitoIOS.h>

By default, an OCMockito-based mock object will return nil for any method. But I'll write extra code to make the expectation explicit by saying, "given that it's asked for objectForKey:@"currentReminderId", it will return nil." And given all that, we want the method to return the NSNumber 0. (I'm not going to pass an argument, because I don't know what it's for. And I'm going to name the method nextReminderId.)

- (void)testNextReminderIdWithNoCurrentReminderIdInUserDefaultsShouldReturnZero
{
    Example *sut = [[Example alloc] init];
    NSUserDefaults *mockUserDefaults = mock([NSUserDefaults class]);
    [sut setUserDefaults:mockUserDefaults];
    [given([mockUserDefaults objectForKey:@"currentReminderId"]) willReturn:nil];

    assertThat([sut nextReminderId], is(equalTo(@0)));
}

This doesn't compile yet. Let's declare the nextReminderId method in Example.h:

- (NSNumber *)nextReminderId;

And here's the first implementation in Example.m. I want the test to fail, so I'm going to return a bogus number:

- (NSNumber *)nextReminderId
{
    return @-1;
}

The test fails with the message, "Expected <0>, but was <-1>".
It's important that the test fail, because it's our way of testing the test, and ensuring that the code we write flips it from a failing state to a passing state. Step 1 is complete.

Step 2: Let's get the test to pass. But remember, we want the simplest code that passes the test. It's going to look awfully silly.

    - (NSNumber *)nextReminderId
    {
        return @0;
    }

Amazing, it passes! But we're not done with this test yet. Now we come to Step 3: refactor. There's duplicate code in the tests. Let's pull sut, the system under test, into an ivar. We'll use the -setUp method to set it up, and -tearDown to clean it up (destroying it).

    @interface ExampleTests : SenTestCase
    {
        Example *sut;
    }
    @end

    @implementation ExampleTests

    - (void)setUp
    {
        [super setUp];
        sut = [[Example alloc] init];
    }

    - (void)tearDown
    {
        sut = nil;
        [super tearDown];
    }

    - (void)testDefaultUserDefaultsShouldBeSet
    {
        assertThat([sut userDefaults], is(instanceOf([NSUserDefaults class])));
    }

    - (void)testNextReminderIdWithNoCurrentReminderIdInUserDefaultsShouldReturnZero
    {
        NSUserDefaults *mockUserDefaults = mock([NSUserDefaults class]);
        [sut setUserDefaults:mockUserDefaults];
        [given([mockUserDefaults objectForKey:@"currentReminderId"]) willReturn:nil];

        assertThat([sut nextReminderId], is(equalTo(@0)));
    }

    @end

We run the tests again, to make sure they still pass, and they do. Refactoring should only be done in a "green," or passing, state. All tests should continue to pass, whether refactoring is done in the test code or the production code.

Third Test: With no matching key, store 0 in user defaults

Now let's test another requirement: the user defaults should be saved. We'll use the same conditions as the previous test. But we create a new test, instead of adding more assertions to the existing test. Ideally, each test should verify one thing, and have a good name to match.
    - (void)testNextReminderIdWithNoCurrentReminderIdInUserDefaultsShouldSaveZeroInUserDefaults
    {
        // given
        NSUserDefaults *mockUserDefaults = mock([NSUserDefaults class]);
        [sut setUserDefaults:mockUserDefaults];
        [given([mockUserDefaults objectForKey:@"currentReminderId"]) willReturn:nil];

        // when
        [sut nextReminderId];

        // then
        [verify(mockUserDefaults) setObject:@0 forKey:@"currentReminderId"];
    }

The verify statement is the OCMockito way of saying, "This mock object should have been called this way one time." We run the tests and get a failure, "Expected 1 matching invocation, but received 0". Step 1 is complete.

Step 2: the simplest code that passes. Ready? Here goes:

    - (NSNumber *)nextReminderId
    {
        [_userDefaults setObject:@0 forKey:@"currentReminderId"];
        return @0;
    }

"But why are you saving @0 in user defaults, instead of a variable with that value?" you ask. Because that's as far as we've tested. Hang on, we'll get there.

Step 3: refactor. Again, we have duplicate code in the tests. Let's pull out mockUserDefaults as an ivar.

    @interface ExampleTests : SenTestCase
    {
        Example *sut;
        NSUserDefaults *mockUserDefaults;
    }
    @end

The test code shows warnings, "Local declaration of 'mockUserDefaults' hides instance variable". Fix them to use the ivar. Then let's extract a helper method to establish the condition of the user defaults at the start of each test. Let's pull that nil out to a separate variable to help us with the refactoring:

    NSNumber *current = nil;
    mockUserDefaults = mock([NSUserDefaults class]);
    [sut setUserDefaults:mockUserDefaults];
    [given([mockUserDefaults objectForKey:@"currentReminderId"]) willReturn:current];

Now select the last 3 lines, context-click, and select Refactor ▶ Extract.
We'll make a new method called setUpUserDefaultsWithCurrentReminderId:

    - (void)setUpUserDefaultsWithCurrentReminderId:(NSNumber *)current
    {
        mockUserDefaults = mock([NSUserDefaults class]);
        [sut setUserDefaults:mockUserDefaults];
        [given([mockUserDefaults objectForKey:@"currentReminderId"]) willReturn:current];
    }

The test code that invokes this now looks like:

    NSNumber *current = nil;
    [self setUpUserDefaultsWithCurrentReminderId:current];

The only reason for that variable was to help us with the automated refactoring. Let's inline it away:

    [self setUpUserDefaultsWithCurrentReminderId:nil];

Tests still pass. Since Xcode's automated refactoring didn't replace all instances of that code with a call to the new helper method, we need to do that ourselves. So now the tests look like this:

    - (void)testNextReminderIdWithNoCurrentReminderIdInUserDefaultsShouldReturnZero
    {
        [self setUpUserDefaultsWithCurrentReminderId:nil];

        assertThat([sut nextReminderId], is(equalTo(@0)));
    }

    - (void)testNextReminderIdWithNoCurrentReminderIdInUserDefaultsShouldSaveZeroInUserDefaults
    {
        // given
        [self setUpUserDefaultsWithCurrentReminderId:nil];

        // when
        [sut nextReminderId];

        // then
        [verify(mockUserDefaults) setObject:@0 forKey:@"currentReminderId"];
    }

See how we continually clean as we go? The tests have actually become easier to read!

Fourth Test: With matching key, return incremented value

Now we want to test that if the user defaults has some value, we return one greater. I'm going to copy and alter the "should return zero" test, using an arbitrary value of 3.

    - (void)testNextReminderIdWithCurrentReminderIdInUserDefaultsShouldReturnOneGreater
    {
        [self setUpUserDefaultsWithCurrentReminderId:@3];

        assertThat([sut nextReminderId], is(equalTo(@4)));
    }

That fails, as desired: "Expected <4>, but was <0>".
Here's simple code to pass the test:

    - (NSNumber *)nextReminderId
    {
        NSNumber *reminderId = [_userDefaults objectForKey:@"currentReminderId"];
        if (reminderId)
            reminderId = @([reminderId integerValue] + 1);
        else
            reminderId = @0;
        [_userDefaults setObject:@0 forKey:@"currentReminderId"];
        return reminderId;
    }

Except for that setObject:@0, this is starting to look like your example. I don't see anything to refactor, yet. (There actually is, but I didn't notice until later. Let's keep going.)

Fifth Test: With matching key, store incremented value

Now we can establish one more test: given those same conditions, it should save the new reminder ID in user defaults. This is quickly done by copying the earlier test, altering it, and giving it a good name:

    - (void)testNextReminderIdWithCurrentReminderIdInUserDefaultsShouldSaveOneGreaterInUserDefaults
    {
        // given
        [self setUpUserDefaultsWithCurrentReminderId:@3];

        // when
        [sut nextReminderId];

        // then
        [verify(mockUserDefaults) setObject:@4 forKey:@"currentReminderId"];
    }

That test fails, with "Expected 1 matching invocation, but received 0". To get it passing, of course, we simply change the setObject:@0 to setObject:reminderId. Everything passes. We're done!

Wait, we're not done. Step 3: Is there anything to refactor? When I first wrote this, I said, "Not really." But looking it over after watching Clean Code episode 3, I can hear Uncle Bob telling me, "How big should a function be? 4 lines is OK, maybe 5. 6 is… OK. 10 is way too big." That method is at 7 lines. What did I miss? It must be violating the rule of functions by doing more than one thing. Again, Uncle Bob: "The only way to really be sure that a function does one thing is to extract 'til you drop."

Those first 4 lines work together; they calculate the actual value. Let's select them, and Refactor ▶ Extract. Following Uncle Bob's scoping rule from episode 2, we'll give it a nice, long descriptive name, since its scope of use is very limited.
Here's what the automated refactoring gives us:

    - (NSNumber *)determineNextReminderIdFromUserDefaults
    {
        NSNumber *reminderId = [_userDefaults objectForKey:@"currentReminderId"];
        if (reminderId)
            reminderId = @([reminderId integerValue] + 1);
        else
            reminderId = @0;
        return reminderId;
    }

    - (NSNumber *)nextReminderId
    {
        NSNumber *reminderId;
        reminderId = [self determineNextReminderIdFromUserDefaults];
        [_userDefaults setObject:reminderId forKey:@"currentReminderId"];
        return reminderId;
    }

Let's clean that up to make it tighter:

    - (NSNumber *)determineNextReminderIdFromUserDefaults
    {
        NSNumber *reminderId = [_userDefaults objectForKey:@"currentReminderId"];
        if (reminderId)
            return @([reminderId integerValue] + 1);
        else
            return @0;
    }

    - (NSNumber *)nextReminderId
    {
        NSNumber *reminderId = [self determineNextReminderIdFromUserDefaults];
        [_userDefaults setObject:reminderId forKey:@"currentReminderId"];
        return reminderId;
    }

Now each method is really tight, and it's easy for anyone to read the 3 lines of the main method to see what it does. But I'm uncomfortable having that user defaults key spread across two methods. Let's extract it into a constant at the head of Example.m:

    static NSString *const currentReminderIdKey = @"currentReminderId";

I'll use that constant wherever the key appears in the production code. But the test code continues to use the literals. This guards us from someone accidentally changing that constant's key.

Conclusion

So there you have it. In five tests, I have TDD'd my way to the code you asked for. Hopefully it gives you a clearer idea of how to TDD, and why it's worth it. By following the 3-step waltz:

1. Add one failing test.
2. Write the simplest code that passes, even if it looks dumb.
3. Refactor (both production code and test code).

you don't just end up at the same place.
You end up with:

- well-isolated code that supports dependency injection,
- minimalist code that only implements what has been tested,
- tests for each case (with the tests themselves verified),
- squeaky-clean code with small, easy-to-read methods.

All these benefits will save more time than the time invested in TDD — and not just in the long term, but immediately.

For an example involving a full app, get the book Test-Driven iOS Development. Here's my review of the book.
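For readers who don't work in Objective-C, here is a rough Python sketch of where the five tests land. All names here are hypothetical translations, not from the original post; the point is the same design: the key-value store is injected through the constructor, so a test can hand in a plain dict instead of mocking NSUserDefaults.

```python
# Hypothetical Python translation of the final nextReminderId design.
# The "user defaults" collaborator is injected, so tests can substitute
# a plain dict for the real store.

CURRENT_REMINDER_ID_KEY = "currentReminderId"

class Example:
    def __init__(self, defaults=None):
        # Default collaborator; a test injects its own dict-like fake.
        self.user_defaults = defaults if defaults is not None else {}

    def _determine_next_reminder_id(self):
        # Mirrors determineNextReminderIdFromUserDefaults: 0 when no key,
        # otherwise one greater than the stored value.
        current = self.user_defaults.get(CURRENT_REMINDER_ID_KEY)
        return 0 if current is None else current + 1

    def next_reminder_id(self):
        # Mirrors nextReminderId: compute, save back, return.
        reminder_id = self._determine_next_reminder_id()
        self.user_defaults[CURRENT_REMINDER_ID_KEY] = reminder_id
        return reminder_id
```

Because the store is injected, each of the five tests reduces to seeding a dict, calling next_reminder_id(), and asserting on the return value and the dict's contents.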
Little Esop – Organic clothing for little ones! Checks and stripes and fabulous vintage styles have received a makeover with modern functionality and scrumptious organic fabrics in Little Esop’s lineup of clothing for babies and young toddlers! Suitable for both boys and girls and with quality workmanship, Little Esop is guaranteed to be a staple in the home…and passed on from brother to sister and back again! We’re loving the ‘Building Blocks Battleship Romper’, perfect for boys and girls and with reinforced knees for little crawlers. For girls, the ‘Helicopter Jumper dress’ in denim and red, and the ‘Tricycle dress’ in blue checks with yellow inserts, are just too sweet. For little boys we adore the ‘Pictionary pants’ and a comfy cotton tee. Just perfect as we move into spring and summer! Available in sizes from newborn to 2T…keep your little one comfy in stylish organic clothing!

About the author
Belinda: I am a mum, wife, friend, and student. A lover of books and walks, sewing, knitting, museums, art galleries and playing games. I whole-heartedly support breastfeeding, babywearing and cloth-diapering and know that there are many different parenting paths to walk!
In baseball, as with many other sports, including golf, over-the-line softball, and any other sport wherein a ball is struck by a person who is substantially stationary, it is important that the feet be properly positioned. Golfers for example have numerous devices to properly position the feet, align the hips, orient the head, hold the shoulders, etc., for the optimal golf stroke. It is in this field of devices that the instant trainer falls. Specifically, baseball players, especially young little leaguers and the like, need guidance in properly positioning their feet when swinging the bat. There is an optimal positioning of the feet at the beginning of the straddling movement preceding the swing, together with the proper positioning at the end of it. This requires a positioner for the rear foot, which remains more or less stationary, and a straddling positioner for the forward foot, which of necessity moves forward a step as the swinger straddles into the batting position. There is need for some way of gauging the ideal starting position of the foot which will later move, and the finishing position of this foot. This would require some moving structure, indicating where the forward foot should start at the beginning of the straddle, and after following the foot out would indicate where the foot should terminate its movement.
The U.S. vs. Libya: On the horns of a dilemma This article is based on a talk given March 4 at a meeting of the New York branch of Workers World Party. The U.S. imperialist ruling class is on the horns of a dilemma over what to do about Libya. In modern terms, it finds itself in what could be called a lose-lose situation. Ever since a movement of junior officers deposed Libya’s monarchy in 1969, and especially since its leader, Moammar Gadhafi, nationalized Libya’s oil, the imperialists in the U.S. and in Europe have wanted to get rid of him. They tried to weaken his regime with economic sanctions, decades of CIA training and financing of opponents in exile, and in 1986 a direct air assault on Tripoli and Benghazi in which 60 people were killed by U.S. bombs — one of them Gadhafi’s infant daughter. The pressures on Libya were so great that in 2003, after the U.S. carried out its “shock and awe” assault on Iraq, Gadhafi made political and economic concessions to imperialism, opening up areas of the Libyan economy and ending state subsidies on many needed items. But while imperialist heads of state then congratulated Gadhafi and seemed to accept his regime, none of this was enough, especially for the U.S. When the protests against the U.S.-backed dictatorships in Tunisia and Egypt began at the end of 2010, and grew into such huge mass demonstrations that even Washington was forced to call on Hosni Mubarak to step down, the idea grew in Western circles that now was the time to dislodge Gadhafi. This seems to have struck a chord with some elements in Libya, especially in the eastern city of Benghazi, which is situated near Libya’s major oil fields, pipelines, refineries and ports. Protests began. However, they very soon morphed into a well-armed rebellion against the Libyan government aimed at seizing control of the country. While the U.S. 
and other imperialist powers have been involved in brokering a change of faces in Egypt and Tunisia in order to retain the same basic power structures — which are unacceptable to millions of people — they have cheered on the armed opposition in Libya since the beginning. What is their dilemma? It is this: After several weeks of fighting, Gadhafi has not been overthrown and has strong support in Tripoli, the capital city where one-third of Libya’s population lives. The rebel forces appear to be in retreat — and may not all have the same aims. The Western media cites those who have been calling for intervention. If the imperialists openly intervene to secure the military overthrow of Gadhafi, this would undermine their carefully orchestrated efforts to appear to side with the people of the region while urging nonviolence. This problem has been openly discussed, although in more veiled language, in the U.S. capitalist media. Biggest U.S. stakes are in the Gulf So which is more important to them, Libya — or Egypt, Tunisia, Algeria, Bahrain, Yemen, Kuwait, Oman — and possibly even Saudi Arabia, if the revolts spread? We ourselves have pointed out that U.S. oil corporations are salivating over the prospect of gaining control over the 47 billion barrels of oil under the desert sands of Libya. At the present time, the U.S. imports no oil from Libya. (Nevertheless, prices are being opportunistically hiked here at the gas pumps, supposedly because of the Libyan crisis.) Even more important to the billionaire class, U.S. oil companies like ConocoPhillips, Marathon, Hess and Occidental Petroleum, while profiting from the exploration, drilling, pumping, refining and exporting of Libya’s oil, have much larger interests elsewhere. Libya’s proven oil reserves, the largest in Africa, pale in comparison to those in the U.S.-aligned and -armed Gulf states — some 700 billion barrels, not counting Iran. 
Mass uprisings are shaking many of these states despite heavy repression — which gets very little attention in the Western media compared to Libya. The social gulf in these countries between rich and poor, haves and have-nots, is immense compared to Libya, where oil income has been used to attain the highest human development index in Africa. Certainly, the governments of these top-heavy oil states, like the absolute monarchy of King Abdullah Bin Abdul Aziz of Saudi Arabia, or the emirate of Kuwait run by the al-Sabah dynasty, are inherently unstable. They would have been overthrown long ago were it not for their powerful protector — the billionaire-dominated U.S. government, with its far-flung navy and web of bases around the world. However, with all its powerful weapons and hundreds of thousands of invading troops, the U.S. has not even been able to crush a resistance movement in impoverished Afghanistan or set up a stable comprador regime in Iraq. And these two aggressions, along with U.S. backing for Israel’s brutal occupation of Palestinian land, have turned public opinion in the region sharply against U.S. intervention. When Barack Obama was elected president, the strategists for imperialism hoped they could reverse this erosion of U.S. influence in the Arab world. They went on a charm offensive that in style was very different from the anti-Muslim agitation of the Bush period. Perhaps the masses saw this as an opening to rise up against dictators like Mubarak without triggering an automatic U.S. intervention. So which will it be? Will U.S. imperialism show its fangs again and, perhaps with the support of Britain, France, Germany and Italy, declare a “no-fly” zone over Libya in order to paralyze Gadhafi’s air force while rebels try to advance and take the capital? It’s a possibility, but one fraught with dangers for imperialism. First of all, the rebels may not be able to do it. 
Then the question of sending imperialist ground troops would be on the table, which could embroil the U.S. and its allies in another quagmire. On March 2, U.S. Secretary of Defense Robert Gates, a former head of the CIA, testified to Congress. He rather sharply answered the “loose talk” of those clamoring for a no-fly zone, saying that would require massive air strikes against Libya’s air-defense system as well as against its air force. Gates, Obama and others are hoping that U.S. and U.N. sanctions, clandestine operations, a simmering civil war, gunboat diplomacy and a hostile imperialist media will put enough pressure on the Libyan people that the imperialists can achieve their objectives. However, they will not rule out military intervention. Britain was just caught sending a team of MI6 intelligence officers and Special Forces soldiers into eastern Libya, reportedly for a meeting with rebels. But farmers in the area caught the British agents after their helicopter landed in the middle of the night and handed them over to the rebels, who then released them. (Guardian [Britain], March 7) It was an embarrassment for the British government — and undoubtedly also for those rebels who had been in secret negotiations with them. The imperialists have tried to use the mass popular rebellions in the region as a cover for carrying out their own operation against Libya — but it is fear of pushing these rebellions even further in an anti-imperialist direction that has so far restrained them from open intervention. E-mail: [email protected] Articles copyright 1995-2012 Workers World. Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved.
The present invention relates to driving devices, and particularly to a treading-type vehicle driving device wherein, when the pedal is treaded forwards and backwards, the vehicle moves forwards rapidly with less power. Referring to FIG. 1, a preferred vehicle treadle structure disclosed in U.S. Pat. No. 6,325,400, “Treadle-type vehicle body forward drive structure”, is shown. The treadle structure is mainly formed by a base plate 6, a transmission assembly 7 and a treadle structure 8 assembled to the base plate 6, a gear assembly 9 positioned between and exactly engaged with the transmission assembly 7 and the treadle structure 8, and a position limiting plate (not shown) for confining the transmission structure to the base plate 6. With the follower shaft 71 of the transmission assembly 7 assembled to the hub of the rear wheel E to drive the vehicle forwards, when the treadle rod 81 moves downwards slightly, the slant-cut fan gear 82 of the treadle rod drives the gear assembly 9 engaged therewith to move along the tooth edge of the slant-cut gear C, so as to bring the straight-cut gear of the gear assembly into engagement with the straight-cut gear B of the transmission assembly 7. When the treadle rod 81 moves downwards continuously, the slant-cut fan gear 82 of the treadle structure 8 drives the slant-cut gear C of the gear assembly 9 to rotate synchronously with the straight-cut gear A, so that the straight-cut gear A is engaged with the straight-cut gear B of the transmission assembly 7, driving the follower gear and thus the rear wheel. Thus, the vehicle moves forwards. However, the prior art structure only provides for one leg treading the pedal. More power and time are necessary, and the speed is halved.
Accordingly, the primary object of the present invention is to provide a treading-type vehicle driving device including a substrate, a first transmission assembly, a second transmission assembly, a treadle structure, a gear assembly and a position limiting plate. The follower shaft of the first transmission assembly is engaged with a straight-cut gear B1 and a straight-cut gear B2, and the follower shaft drives the rear wheel to rotate synchronously. The driving shaft of the second transmission assembly is engaged with a straight-cut gear D. The straight-cut gear D is engaged with the straight-cut gear B1 of the first transmission assembly. The slant-cut fan gear of the treadle structure is exactly engaged with the slant-cut gears C1 and C2 of the gear assembly for driving the gear assembly to move. The straight-cut gear A can move reciprocally along the trenches of the slant-cut gears C1 and C2 by means of the slant-cut fan gear, so as to engage the straight-cut gear A with the straight-cut gear B2 of the first transmission assembly (treading forwards), or with the straight-cut gear D of the second transmission assembly 2A (treading backwards). When the pedal is treaded forwards and backwards, the vehicle moves forwards rapidly with less power. Another object of the present invention is to provide a treading-type vehicle driving device wherein the straight-cut gear A disengages from the straight-cut gear B2 or the straight-cut gear D of the respective transmission assembly as the tooth trench of the slant-cut gears (C1 when treading forwards, C2 when treading backwards) moves on the slant-cut fan gear; thereby, the wheels rotate continuously.
Q: Disposing an ImageList

What's the appropriate way to dispose an ImageList object? Suppose I have some class with a private ImageList imageList member. Now, at some moment I perform the following code:

    // Basically, lazy initialization.
    if (imageList == null)
    {
        imageList = new ImageList();
        Image[] images = Provider.CreateImages(...);
        foreach (var image in images)
        {
            // Does the 'ImageList' perform implicit copying here
            // or does it aggregate a reference?
            imageList.Images.Add(image);

            // Do I need to do this?
            //image.Dispose();
        }
    }
    return imageList;

In the same class I have the Dispose method implementation, which is performed the following way:

    public void Dispose()
    {
        if (!disposed)
        {
            // Is this enough?
            if (imageList != null)
                imageList.Dispose();
            disposed = true;
        }
    }

I'm sure there are some potential issues with this code, so could you please help me to make it correct.

A: Yes, it makes a copy. Note the CreateBitmap call below. Therefore, to keep your resource use as low as possible, you should uncomment your dispose line.
    private int Add(ImageList.Original original, ImageList.ImageCollection.ImageInfo imageInfo)
    {
        if (original == null || original.image == null)
            throw new ArgumentNullException("value");
        int num = -1;
        if (original.image is Bitmap)
        {
            if (this.owner.originals != null)
                num = this.owner.originals.Add((object) original);
            if (this.owner.HandleCreated)
            {
                bool ownsBitmap = false;
                Bitmap bitmap = this.owner.CreateBitmap(original, out ownsBitmap);
                num = this.owner.AddToHandle(original, bitmap);
                if (ownsBitmap)
                    bitmap.Dispose();
            }
        }
        else
        {
            if (!(original.image is Icon))
                throw new ArgumentException(System.Windows.Forms.SR.GetString("ImageListBitmap"));
            if (this.owner.originals != null)
                num = this.owner.originals.Add((object) original);
            if (this.owner.HandleCreated)
                num = this.owner.AddIconToHandle(original, (Icon) original.image);
        }
        if ((original.options & ImageList.OriginalOptions.ImageStrip) != ImageList.OriginalOptions.Default)
        {
            for (int index = 0; index < original.nImages; ++index)
                this.imageInfoCollection.Add((object) new ImageList.ImageCollection.ImageInfo());
        }
        else
        {
            if (imageInfo == null)
                imageInfo = new ImageList.ImageCollection.ImageInfo();
            this.imageInfoCollection.Add((object) imageInfo);
        }
        if (!this.owner.inAddRange)
            this.owner.OnChangeHandle(new EventArgs());
        return num;
    }

When ImageList disposes, it disposes its copies of all the images. So again, yes, disposing of it when the form closes, as you are, is the right thing, in addition to uncommenting your other dispose line.

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            if (this.originals != null)
            {
                foreach (ImageList.Original original in (IEnumerable) this.originals)
                {
                    if ((original.options & ImageList.OriginalOptions.OwnsImage) != ImageList.OriginalOptions.Default)
                        ((IDisposable) original.image).Dispose();
                }
            }
            this.DestroyHandle();
        }
        base.Dispose(disposing);
    }
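The ownership rule the decompiled code demonstrates is worth stating abstractly: the list keeps its own copies, so the caller may dispose each original right after adding it, and disposing the list later frees only the list's copies. A toy Python sketch of that contract (all class and method names here are hypothetical, purely for illustration):

```python
# Toy model of copy-on-add ownership, the contract ImageList follows:
# add() stores a private copy, so the caller can dispose the original
# immediately; dispose() on the container frees only its own copies.

class FakeImage:
    def __init__(self, data):
        self.data = data
        self.disposed = False

    def copy(self):
        # A fresh, independently owned copy of the pixel data.
        return FakeImage(self.data)

    def dispose(self):
        self.disposed = True

class FakeImageList:
    def __init__(self):
        self._copies = []

    def add(self, image):
        # Like ImageList.Images.Add: keep a copy, not the caller's reference.
        self._copies.append(image.copy())

    def dispose(self):
        # Like ImageList.Dispose: release only the copies this list owns.
        for img in self._copies:
            img.dispose()
        self._copies.clear()
```

With this contract, disposing the original immediately after add() never invalidates the container, which is exactly why uncommenting the image.Dispose() line in the question is safe.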
Caught: The Prison State and the Lockdown of American Politics, Marie Gottschalk, Princeton University Press, 2014 A Costly American Hatred, Joe Dole, Midnight Express Books, 2015 Literature on mass incarceration comes in waves. Around the turn of the century, with the issue still largely off the radar, academics like David Garland, Christian Parenti and Marc Mauer led the way with books that aimed to put the expanding prison system into the public eye. Then came a second wave of important works, which highlighted the structural and racial dimensions of the "prison-industrial complex." Among the most prominent of these were Angela Davis' Are Prisons Obsolete?, Ruth Wilson Gilmore's Golden Gulag, various books by Loic Wacquant - and, of course, the most famous of them all, The New Jim Crow by Michelle Alexander. Since the publication of Alexander's book, the critique of mass incarceration coming from progressive think tanks and many activists has developed a certain orthodoxy. This approach typically focuses intensively on the war on drugs, tends to cast the growth of prisons as a project largely driven by Republicans and places African-American men almost alone at the center of the process. This orthodoxy has helped make major inroads into the general apathy toward the subject and has made vital contributions to understanding the racial dimensions of prison expansion. At the same time, scholars and activists who have been engaged on these issues, whether they be advocates of prison reform or abolition, have begun to unpack the complexity of the process and look at the systemic nature of the process in new ways. One result of these reflections has been the generation of a new literature that reframes current critiques. 
The four books from this body of work considered here constitute a distinct alternative to any temptation to define mass incarceration as simply a set of policy errors that can be reversed through legalistic changes or unholy alliances between mainstream liberals and fiscal conservatives. Naomi Murakawa's The First Civil Right: How Liberals Built Prison America directly attacks our received understanding of the history of mass incarceration. She targets liberals for at least as much culpability in this punitive project as their conservative counterparts. With scrupulous attention to historical detail, Murakawa tracks the growth of the prison industrial complex from its LBJ roots in the 1960s to today. While she chronicles the support of Democratic Congress members (including the Black Caucus) for a host of repressive legislation, her most powerful points stress the hypocrisy of liberal ideology. She argues that "racially liberal Democrats who supported the notion of 'old' civil rights conceived of black crime as black sickness," and were often influenced by Daniel Patrick Moynihan and other conservative researchers. She calls the resultant program a "project of pity and administrative order." She reserves special disdain for the "I feel your pain" empathy of Bill Clinton and others who provided a humanistic justification for a law and order program. In the end, she maintains that Democrats advanced race-neutral policies that had extremely racialized outcomes. Perhaps the best example was the federal sentencing guidelines of 1984. The guidelines, backed by both Ted Kennedy and Strom Thurmond, provided a race-neutral formula intended to eliminate racial bias in the judiciary. In practice, the guidelines lengthened sentences dramatically and shifted the power of discretion into the hands of prosecutors who simply charged Black people more heavily than White people. 
Judges implemented the guidelines, but by the time the cases reached the sentencing stage, the prosecutors had already stacked the deck against Black defendants. While at times Murakawa may overemphasize the negative role of Democrats, her material makes a powerful case for rethinking this history and for viewing bipartisan solutions to present dilemmas with extreme skepticism. Historian Dan Berger offers us a similar, if more narrowly focused, reworking of the historical roots of mass incarceration. Rather than looking in the electoral sphere, Berger examines the enormous expanse of political activism of those in prison during the 1970s. Moving well beyond the framework of the "civil rights movement," Berger links prisoner activism with the social movements of the era, especially Black Power. He shows how high-profile political prisoners like George Jackson, Imari Obadele and others developed a deep intellectual project inside prisons, and a culture of self-education, which enabled them to engage in political debates at the highest level. These individuals didn't focus on legal issues for litigation purposes, but offered up their own rich analysis of the political economy of incarceration. The clearest expression of this was contained in the two books written by Jackson, Soledad Brother and Blood in My Eye. Berger demonstrates how in framing their own struggles, Jackson and others connected to the liberation struggles of the time in southern Africa, Southeast Asia and the socialist project in Cuba. Berger also points out how many of these men often adopted a guerrilla warfare focused strategy. In Jackson's case, theory turned into actions in the events of August 21, 1971, when he died in a hail of bullets fired by prison guards in an apparent escape attempt at San Quentin prison. While Berger clearly holds the spirit and principle of his subjects in great esteem, he also opens the door to some critique of their actions, especially in terms of gender politics. 
In a paragraph that begs for a far richer analysis, Berger notes that George Jackson's "masculinist appeals" revealed "his ... allegiance to a conservative patriarchal notion of respectability." For Berger, Jackson was a product of the "patriarchal culture" of his era as well as the "sex-segregated institution in which he came of age." (p. 113) Perhaps the most ambitious of these volumes is Marie Gottschalk's somewhat strangely titled book, Caught. Gottschalk's analysis offers a strong counternarrative to existing quick-fix solutions to mass incarceration. First, from both a statistical and theoretical perspective, she critiques the notion that releasing the "non-non-nons" (those with nonviolent, non-serious and non-sexual convictions) holds the key to substantive change. This view centers on the notion that the primary victims of mass incarceration are those who are relatively innocent - convicted either of low-level drug offenses or nonviolent crimes. Gottschalk points out that statistically even releasing all of the non-non-nons would leave the United States with the highest per capita incarceration rates in the industrialized world. But she also adds that such a process runs the risk of demonizing those with violent convictions, providing an avenue to increasingly harsh treatment for them as well as neglecting the reality that they are products of the same social forces as those captured for nonviolent crimes. Gottschalk also strongly attacks what she labels the "three-R solution" - recidivism, reentry and justice reinvestment. She argues that this package relies on creating "DIY social policies that stress individual solutions and personal responsibilities" (p. 79) while downgrading the requisite role of the state in welfare provision. Gottschalk totally rejects using recidivism as the barometer of criminal legal success. 
She points out that recidivism depends at least as much on policing and criminal legal practice as it does on behavior or broader social change. She also highlights the shortcomings of justice reinvestment. While the initial idea was to redirect savings from decarceration into community-based development initiatives, in fact justice reinvestment funding from the federal government has largely been captured by law enforcement, essentially serving to "reallocate resources within the criminal justice system" (p. 100) rather than contribute to change in the broader community. A final point of attack for Caught is the reliance on "evidence-based" practice. In Gottschalk's view, this represents an extremely conservative approach; change in the criminal legal system requires inspiration, not just data. As she puts it, "appeals to science are incapable of articulating a 'public idea' around which reform can be mobilized." (p. 261) Surprisingly, Gottschalk rejects attacking the roots of the problem - economic inequality and lack of social welfare provision - as a solution to mass incarceration. She argues that this will take decades and instead presses for sentencing reform, reducing barriers for people with felony convictions and investing in communities. However, her rejection of structural changes seems to contradict her notion that decarceration cannot be achieved without spending considerable resources on building communities. Achieving that reallocation is far more than a mere policy shift; it amounts to a major structural reform and a paradigm shift. The final work considered here is Joe Dole's A Costly American Hatred. Unlike the others, Dole's work is not remarkable for breaking new theoretical ground. Much of what he writes about, apart from his very useful information about criminal justice in the state of Illinois, where he is currently 15 years into a life term, has appeared elsewhere.
But the spirit and breadth of his work astound. During my own years of incarceration, I found the prospect of writing nonfiction daunting. I could not conceive of writing without access to the internet and/or a massive library. Somehow, Dole, based for much of his sentence in one of the worst hellholes in the US prison system, the now-shuttered supermax in Tamms, Illinois, has managed to produce a work with a sharp and broad narrative supported by hundreds of references. When I picked up this book, I expected an autobiography. I even feared a diatribe without much supporting evidence. But Dole's command of the subject matter and the range of his sources are awe-inspiring. He embarks on a journey similar in breadth to that of Gottschalk, tackling the war on drugs, the school-to-prison pipeline, the rise of debtors' prisons and many more of the essential elements of the prison industrial complex. In a highly readable style, peppered with lots of stunning data, Dole has created a wonderful primer on mass incarceration, which has the added feature of the voice of someone who is still living the nightmare of the system. Indeed, his chilling tales of his "neighbors'" habits of "cutting" (one even went so far as to cut off a testicle) lend an authenticity that reminds us that the voices of people inside the prison system play a crucial role in building a social movement against mass incarceration. Moreover, Dole spent several years soliciting support for the publication of this work from inside prison, a testament to his determination to make an intellectual contribution to our understanding of mass incarceration. The four books covered here clearly don't represent the entire menu of new writings. But they offer us much-needed fresh perspectives on the framing of mass incarceration put forward by mainstream politicians and formulaic activists heavily invested in relief for "nonviolent offenders" and the gospel of re-entry.
Those who prioritize change that gets at the economic, racial and gender underpinnings of mass incarceration would do well to spend time studying Murakawa, Berger, Gottschalk and Dole. They remind us that we still have much to learn about the political economy of mass incarceration as well as the building of a social movement that poses both opposition and alternatives. Hopefully, other scholars will follow the example of these authors and take on the challenges of unpacking the big issues and shedding fresh light on current understandings. A further hope is that future authors will add a more serious gender lens to intersect with the rich analysis of race and class that these four books offer. Economic and Social Justice Calls The team explores the concept, economic theories and realities of achieving Full Employment in the current economy. Guests include Conor Williams, secretary of the Transitional Jobs Collaborative in Milwaukee, and Michael Darner, Executive Director of the Congressional Progressive Caucus. Listen to this month's call, led by Jim Carpenter, as we discuss the state of our current economy, the impact of poor economic choices, and the other factors behind the declining situation around the country and the world in this open, guided conversation. PDAction Board Member Donald Whitehead, former Executive Director of the Coalition for the Homeless, leads the discussion on homelessness, with input from Joel Segal, PDAmerica founding member and National Director of the Justice Action Mobilization Network. Special focus is given to the housing crisis, the role of the banks, programs used by other countries to alleviate the problem, and the fact that women are the most adversely affected by this issue. H Con Res 98 - Resolve to Eliminate Homelessness - has been introduced in Congress by Rep. Alma Adams (NC-12) and is endorsed on this call.
2024-03-18T01:27:17.568517
https://example.com/article/7165
/*
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
 */

#ifndef __BC_ANIMATION_EXPORTER_H__
#define __BC_ANIMATION_EXPORTER_H__

#include <stdlib.h>
#include <stdio.h>
#include <math.h>

#include "BCAnimationCurve.h"

extern "C" {
#include "DNA_scene_types.h"
#include "DNA_object_types.h"
#include "DNA_anim_types.h"
#include "DNA_action_types.h"
#include "DNA_curve_types.h"
#include "DNA_light_types.h"
#include "DNA_camera_types.h"
#include "DNA_armature_types.h"
#include "DNA_material_types.h"
#include "DNA_constraint_types.h"

#include "BLI_math.h"
#include "BLI_string.h"
#include "BLI_listbase.h"
#include "BLI_utildefines.h"

#include "BKE_fcurve.h"
#include "BKE_animsys.h"
#include "BKE_scene.h"
#include "BKE_action.h"  // pose functions
#include "BKE_armature.h"
#include "BKE_object.h"
#include "BKE_constraint.h"

#include "BIK_api.h"
#include "ED_object.h"
}

#include "MEM_guardedalloc.h"
#include "RNA_access.h"

#include "COLLADASWSource.h"
#include "COLLADASWInstanceGeometry.h"
#include "COLLADASWInputList.h"
#include "COLLADASWPrimitves.h"
#include "COLLADASWVertices.h"
#include "COLLADASWLibraryAnimations.h"
#include "COLLADASWParamTemplate.h"
#include "COLLADASWParamBase.h"
#include "COLLADASWSampler.h"
#include "COLLADASWConstants.h"
#include "COLLADASWBaseInputElement.h"

#include "EffectExporter.h"
#include "BCAnimationSampler.h"
#include "collada_internal.h"
#include "IK_solver.h"

#include <vector>
#include <map>
#include <algorithm>  // std::find

typedef enum BC_animation_source_type {
  BC_SOURCE_TYPE_VALUE,
  BC_SOURCE_TYPE_ANGLE,
  BC_SOURCE_TYPE_TIMEFRAME,
} BC_animation_source_type;

typedef enum BC_global_rotation_type {
  BC_NO_ROTATION,
  BC_OBJECT_ROTATION,
  BC_DATA_ROTATION
} BC_global_rotation_type;

class AnimationExporter : COLLADASW::LibraryAnimations {
 private:
  COLLADASW::StreamWriter *sw;
  BCExportSettings &export_settings;

  BC_global_rotation_type get_global_rotation_type(Object *ob);

 public:
  AnimationExporter(COLLADASW::StreamWriter *sw, BCExportSettings &export_settings)
      : COLLADASW::LibraryAnimations(sw), sw(sw), export_settings(export_settings)
  {
  }

  bool exportAnimations();

  // called for each exported object
  void operator()(Object *ob);

 protected:
  void export_object_constraint_animation(Object *ob);
  void export_morph_animation(Object *ob);
  void write_bone_animation_matrix(Object *ob_arm, Bone *bone);
  void write_bone_animation(Object *ob_arm, Bone *bone);
  void sample_and_write_bone_animation(Object *ob_arm, Bone *bone, int transform_type);
  void sample_and_write_bone_animation_matrix(Object *ob_arm, Bone *bone);
  void sample_animation(
      float *v, std::vector<float> &frames, int type, Bone *bone, Object *ob_arm, bPoseChannel *pChan);
  void sample_animation(std::vector<float[4][4]> &mats,
                        std::vector<float> &frames,
                        Bone *bone,
                        Object *ob_arm,
                        bPoseChannel *pChan);

  // dae_bone_animation -> add_bone_animation
  // (blend this into dae_bone_animation)
  void dae_bone_animation(
      std::vector<float> &fra, float *v, int tm_type, int axis, std::string ob_name, std::string bone_name);
  void dae_baked_animation(std::vector<float> &fra, Object *ob_arm, Bone *bone);
  void dae_baked_object_animation(std::vector<float> &fra, Object *ob);

  float convert_time(float frame);
  float convert_angle(float angle);

  std::vector<std::vector<std::string>> anim_meta;

  /* Main entry point into Animation export (called for each exported object) */
  void exportAnimation(Object *ob, BCAnimationSampler &sampler);

  /* export animation as separate trans/rot/scale curves */
  void export_curve_animation_set(Object *ob, BCAnimationSampler &sampler, bool export_tm_curves);

  /* export one single curve */
  void export_curve_animation(Object *ob, BCAnimationCurve &curve);

  /* export animation as matrix data */
  void export_matrix_animation(Object *ob, BCAnimationSampler &sampler);

  /* step through the bone hierarchy */
  void export_bone_animations_recursive(Object *ob_arm, Bone *bone, BCAnimationSampler &sampler);

  /* Export for one bone */
  void export_bone_animation(Object *ob, Bone *bone, BCFrames &frames, BCMatrixSampleMap &outmats);

  /* call to the low level collada exporter */
  void export_collada_curve_animation(std::string id,
                                      std::string name,
                                      std::string target,
                                      std::string axis,
                                      BCAnimationCurve &curve,
                                      BC_global_rotation_type global_rotation_type);

  /* call to the low level collada exporter */
  void export_collada_matrix_animation(std::string id,
                                       std::string name,
                                       std::string target,
                                       BCFrames &frames,
                                       BCMatrixSampleMap &outmats,
                                       BC_global_rotation_type global_rotation_type,
                                       Matrix &parentinv);

  BCAnimationCurve *get_modified_export_curve(Object *ob,
                                              BCAnimationCurve &curve,
                                              BCAnimationCurveMap &curves);

  /* Helper functions */
  void openAnimationWithClip(std::string id, std::string name);
  bool open_animation_container(bool has_container, Object *ob);
  void close_animation_container(bool has_container);

  /* Input and Output sources (single valued) */
  std::string collada_source_from_values(BC_animation_source_type tm_channel,
                                         COLLADASW::InputSemantic::Semantics semantic,
                                         std::vector<float> &values,
                                         const std::string &anim_id,
                                         const std::string axis_name);

  /* Output sources (matrix data) */
  std::string collada_source_from_values(BCMatrixSampleMap &samples,
                                         const std::string &anim_id,
                                         BC_global_rotation_type global_rotation_type,
                                         Matrix &parentinv);

  /* Interpolation sources */
  std::string collada_linear_interpolation_source(int tot, const std::string &anim_id);

  /* source ID = animation_name + semantic_suffix */
  std::string get_semantic_suffix(COLLADASW::InputSemantic::Semantics semantic);

  void add_source_parameters(COLLADASW::SourceBase::ParameterNameList &param,
                             COLLADASW::InputSemantic::Semantics semantic,
                             bool is_rot,
                             const std::string axis,
                             bool transform);

  int get_point_in_curve(BCBezTriple &bezt,
                         COLLADASW::InputSemantic::Semantics semantic,
                         bool is_angle,
                         float *values);
  int get_point_in_curve(const BCAnimationCurve &curve,
                         float sample_frame,
                         COLLADASW::InputSemantic::Semantics semantic,
                         bool is_angle,
                         float *values);

  std::string collada_tangent_from_curve(COLLADASW::InputSemantic::Semantics semantic,
                                         BCAnimationCurve &curve,
                                         const std::string &anim_id,
                                         const std::string axis_name);

  std::string collada_interpolation_source(const BCAnimationCurve &curve,
                                           const std::string &anim_id,
                                           std::string axis_name,
                                           bool *has_tangents);

  std::string get_axis_name(std::string channel, int id);
  const std::string get_collada_name(std::string channel_target) const;
  std::string get_collada_sid(const BCAnimationCurve &curve, const std::string axis_name);

  /* ===================================== */
  /* Currently unused or not (yet?) needed */
  /* ===================================== */

  bool is_bone_deform_group(Bone *bone);

#if 0
  BC_animation_transform_type _get_transform_type(const std::string path);
  void get_eul_source_for_quat(std::vector<float> &cache, Object *ob);
#endif

#ifdef WITH_MORPH_ANIMATION
  void export_morph_animation(Object *ob, BCAnimationSampler &sampler);
#endif
};

#endif
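The class above is designed as a functor: it is constructed once with a stream writer and export settings, and `operator()(Object *ob)` is then invoked once per exported object, accumulating state (such as `anim_meta`) across calls. A minimal, self-contained sketch of that pattern follows; `SceneObject` and `MockExporter` are illustrative stand-ins, not Blender or OpenCOLLADA API:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Illustrative stand-in for Blender's Object (hypothetical, not DNA_object_types.h).
struct SceneObject {
  std::string name;
};

// Mirrors AnimationExporter's shape: construct once, then call per exported object.
class MockExporter {
 public:
  explicit MockExporter(bool sample_all) : sample_all_(sample_all) {}

  // Called for each exported object, like AnimationExporter::operator().
  void operator()(SceneObject *ob) { exported_.push_back(ob->name); }

  const std::vector<std::string> &exported() const { return exported_; }

 private:
  bool sample_all_;                     // stand-in for BCExportSettings
  std::vector<std::string> exported_;   // state accumulated across calls
};
```

Note that `std::for_each` copies a functor passed by value, so callers that need the accumulated state afterward either forward through a lambda to a local exporter or use the functor returned by `std::for_each`.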
2024-06-01T01:27:17.568517
https://example.com/article/5900
Feds: Monroe company violated some labor laws Saturday Apr 6, 2013 at 2:00 AM James Walsh KIRYAS JOEL — The U.S. Department of Labor filed a federal complaint this week against a Monroe company, accusing it of violating minimum wage and overtime laws, as well as failing to keep records of employees' hours, pay rates and earnings. The Department of Labor seeks a court order finding the Hatzlucha Meat Market Inc. and its officers, Abraham Schlesinger and Yitzchok Tyrnauer, liable for the unpaid compensation, according to documents filed Monday in Federal Court in White Plains. A call to the company Friday afternoon was not returned. The Labor Department found Hatzlucha's operators "willfully and repeatedly" violated labor laws, in that company records failed to accurately reflect "the hours worked each workday, the total hours worked each work week, the regular rate of pay, the total earnings for each work week, and/or the total compensation for each work week with respect to many of their employees."
2023-10-13T01:27:17.568517
https://example.com/article/4867
Found 1 course by Alex Genadinik, Joey Zanca Learn to become good at any sport with: muscle training; speed, quickness and agility training; and sports psychology. Normal price: $100. Now: $12. Courses submitted to CouponTrump must be complete and ready for sale and should either include a discount code or be priced at $20 or less. By submitting courses for inclusion on CouponTrump, you verify that they meet these conditions. CouponTrump is an affiliate site and takes a percentage of sales revenue; if you do not want to market your courses in this way, please do not add them to this site.
2024-06-13T01:27:17.568517
https://example.com/article/1700
SHARE THIS ARTICLE Share Tweet Post Email While a chorus of market experts is telling investors to look outside of the U.S. for big returns, at least one loud voice is singing a very different tune -- Vanguard Group founder Jack Bogle. The 88-year-old investor, who started the first index fund in 1976, said that he’s fully invested in U.S. securities, with stocks and bonds having an equal share of his portfolio. And of course, it’s all indexed. “I believe the U.S. is the best place to invest,” Bogle said in a telephone interview. “We probably have the most technology oriented economy in the world. I would bet that the U.S. will do better than the rest of the world. It is a simple bet on which economy is going to be the strongest in the long run.” This advice flies in the face of the market’s recent conventional wisdom. Numerous equities analysts and strategists at firms like BlackRock Inc., Morgan Stanley and Deutsche Bank AG have encouraged investors to overweight European stocks because of robust growth expectations and relatively high valuations in U.S. equities. “Every single person I think I have ever talked to tells me I am wrong in this,” Bogle said. “If you believe in the majority, you can just throw my opinion in the waste basket. But on the other hand, I was brought up in this business and I am saying ‘the crowd is always wrong.’” ‘I Was Right’ Bogle’s been promoting the value of investing in U.S. assets since 1993, and returns back him up. The S&P 500 Index has climbed more than 421 percent since then, more than four times the performance of MSCI’s index of world equities excluding the U.S. European shares have gained about 180 percent, while Asia has added about 40 percent. “So I was right, really right,” Bogle said. If he were to adjust his portfolio, he’d add emerging-market securities rather than those from other developed markets because he believes they have greater potential. Still, he would limit his exposure to 5 percent. 
MSCI's emerging-market index has risen 17 percent this year, double the gain of the S&P 500. Emerging markets attracted the biggest share of flows into equities this year, taking in $29.4 billion. That compares with $6.6 billion for U.S. equities and $18.3 billion for European stocks, according to Bank of America Merrill Lynch, citing EPFR data. "I don't think in the long run [emerging markets] will do as well as the U.S.," he said. "They are more risky and more sensitive to interest rates, more sensitive to Federal Reserve statements and actions. They don't have the diversity we have in the U.S."
2024-01-29T01:27:17.568517
https://example.com/article/9898
Prosecutors will throw out murder charges against former FBI agent Lindley DeVecchio today because testimony by their chief witness about all four murders in the case is at odds with tape-recorded accounts she gave about the slayings to two newspaper reporters 10 years ago. The move became clear after Voice reporter Tom Robbins realized that the prosecution's star witness had told him very different things in a 1997 interview. The Brooklyn DA's office had pinned its hopes on Linda Schiro, mistress of mobster Gregory "The Grim Reaper" Scarpa Sr. Scarpa was an FBI informant and worked with DeVecchio; prosecutors alleged that DeVecchio would give Scarpa tips about other mobsters-turned-FBI-informants, whom Scarpa would then kill. Robbins, who interviewed Schiro in 1997, wrote on Tuesday in his article, Tall Tales of a Mafia Mistress, that her story had changed over the past 10 years. For instance, she told Robbins and fellow reporter Jerry Capeci (they had been talking to Schiro about writing a book about her life) that DeVecchio had nothing to do with the murder of Colombo mobster Larry Lampisi, but on the stand, she said that the then-FBI agent gave Scarpa Lampisi's address and the time when Lampisi would leave his house. The reporters had recorded many of the 1997 interviews, and though some were missing or had audio issues, their tapes of Schiro recounting the Lampisi murder and the murder of Joe Brewster survived. The prosecutors and defense attorneys listened to the tapes yesterday afternoon, and the prosecutors decided to drop their case. Robbins told the NY Times that he "had chosen to disclose the tapes after hearing the testimony this week and its role in the case": "I did not know what else to do." And he told the Daily News, his former employer, "Tell me what else I could have done? If you sit silent, then someone could go to jail for life. I chose not to live with that."
After the tapes were disclosed, the judge ordered Schiro to get a lawyer because she may have perjured herself. Schiro had been paid $2,200/month (for food and rent) by the prosecution since March 2006. Her daughter told the Post, "She wasn't under oath at the time [that she spoke to the reporter]. She was under no obligation to tell the truth."
2024-04-30T01:27:17.568517
https://example.com/article/8491
Postoperative Delirium in Individuals Undergoing Transcatheter Aortic Valve Replacement: A Systematic Review and Meta-Analysis. OBJECTIVE: To evaluate the incidence of in-hospital postoperative delirium (IHPOD) after transcatheter aortic valve replacement (TAVR). DESIGN: Systematic review and meta-analysis. SETTING: Elective procedures. PARTICIPANTS: Individuals undergoing TAVR. MEASUREMENTS: A literature search was conducted in PubMed, Embase, BioMedCentral, Google Scholar, and the Cochrane Central Register of Controlled Trials (up to December 2017). All observational studies reporting the incidence of IHPOD after TAVR (sample size > 25) were included in our meta-analysis. The reported incidence rates were weighted to obtain a pooled estimate rate with 95% confidence interval (CI). RESULTS: Of 96 potentially relevant articles, 31 with a total of 32,389 individuals who underwent TAVR were included in the meta-analysis. The crude incidence of IHPOD after TAVR ranged from 0% to 44.6% in included studies, with a pooled estimate rate of 8.1% (95% CI = 6.7-9.4%); heterogeneity was high (Q = 449; I² = 93%; p for heterogeneity < .001). The pooled estimate rate of IHPOD was 7.2% (95% CI = 5.4-9.1%) after transfemoral (TF) TAVR and 21.4% (95% CI = 10.3-32.5%) after non-TF TAVR. CONCLUSION: Delirium occurs frequently after TAVR and is more common after non-TF than TF procedures. Recommendations are made with the aim of standardizing future research to reduce heterogeneity between studies on this important healthcare problem. J Am Geriatr Soc 66:2417-2424, 2018.
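The pooling step described in the abstract can be illustrated numerically. The sketch below uses a simple fixed-effect, sample-size-weighted pooled proportion with a Wald 95% CI; the abstract does not state its exact weighting model, and with I² = 93% a random-effects model would normally be preferred, so treat this only as a toy illustration (the `Study` type and function names are hypothetical):

```cpp
#include <cmath>
#include <vector>

// One study's result: delirium events out of n TAVR patients (illustrative).
struct Study {
  int events;
  int n;
};

// Sample-size-weighted pooled incidence; equivalent to pooling the raw counts.
double pooled_incidence(const std::vector<Study> &studies) {
  double events = 0.0, total = 0.0;
  for (const Study &s : studies) {
    events += s.events;
    total += s.n;
  }
  return events / total;
}

// Half-width of a Wald 95% CI around pooled proportion p with n_total patients.
double wald_ci_half_width(double p, double n_total) {
  return 1.96 * std::sqrt(p * (1.0 - p) / n_total);
}
```

For example, pooling two hypothetical studies of 100 patients each with 8 and 16 delirium cases gives 24/200 = 12%, with a Wald CI of roughly ±4.5 percentage points.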
2024-05-26T01:27:17.568517
https://example.com/article/6311
Prostate-specific antigen versus prostate-specific antigen density as predictor of tumor volume, margin status, pathologic stage, and biochemical recurrence of prostate cancer. To compare prostate-specific antigen (PSA) and PSA density (PSAD) calculated by transrectal ultrasound (TRUS) volume (TRUS PSAD), pathologic volume (Path PSAD), and weight (Weight PSAD) for their ability to predict pathologic characteristics and biochemical recurrence of prostate cancer. We also compared all PSAD derivatives to determine consistency. Between 1993 and 2002, 306 patients were retrospectively identified who had had PSAD determined preoperatively by TRUS and subsequently underwent radical prostatectomy with whole mounting and close step sectioning. The determination of stage, margin status, tumor number, individual tumor volume, and total tumor volume was obtained from the pathologic evaluation. Clinical follow-up was available for 265 patients. The mean patient age was 62 years, the median Gleason score was 7, the median PSA level was 5.80 ng/mL, and the median TRUS PSAD was 0.16. The percentages of concordance for PSA, TRUS PSAD, Path PSAD, and Weight PSAD were similar in predicting margin status and extracapsular extension. Using linear regression analysis, PSA was more efficacious than TRUS PSAD, Path PSAD, or Weight PSAD in predicting the total tumor volume (R² = 0.11, 0.08, 0.04, and 0.06, respectively). A significant positive correlation was found among TRUS PSAD, Path PSAD, and Weight PSAD. PSA was significantly better in predicting biochemical recurrence than TRUS, Path, or Weight PSAD (concordance 75.5%, 66.6%, 66.5%, and 70.4%, respectively). PSA and TRUS PSAD are significant and equivalent predictors of margin status and extracapsular extension. A marked difference may exist between PSA and TRUS PSAD in predicting the total tumor volume and biochemical recurrence.
2024-07-05T01:27:17.568517
https://example.com/article/2815
Hello, I have a problem with your tutorial: the Switch just won't boot up after launch. Does this tutorial work on 9.2.0? Is it really byte 29484? Also, hactoolnet says every time "Failed to match key device_key_4x, rsa_oaep_kek_generation_source, and rsa_private_kek_generation_source" but still correctly encrypts/decrypts the save. Is this a known bug?
2023-08-20T01:27:17.568517
https://example.com/article/8646
What Is Apologetics and Why Should You Care? This weekend, my in-laws had our three kids for an overnight, giving us a much-needed break. They picked the kids up around noon on Saturday. My husband and I proceeded to spend the entire rest of the day on the couch, alternating between napping and reading. Napping and reading! We were so exhausted that it was all we could fathom doing on a cherished day free from parenting responsibility. If you read my blog, you're probably a parent. If you're a parent, you're probably exhausted like we are. I do not take it lightly that I'm going to suggest in this post you need to add more to your job description. But if you care about your kids' spiritual development (as I know you do!), you have to hear me out. You need to learn apologetics and be ready to train your children with that knowledge. What is Apologetics? An apologetic is a reasoned defense for a belief. Christian apologetics is the defense of why we as Christians believe what we do. The biblical basis for this is 1 Peter 3:15: "But sanctify Christ as Lord in your hearts, always being ready to make a defense to everyone who asks you to give an account for the hope that is in you, yet with gentleness and reverence." Apologetics addresses questions like: What evidence is there for God outside the Bible? How do we know the Gospels are really eyewitness accounts? Was Jesus really God? If God is real, why is there so much evil in the world? Hasn't evolution disproven God? Take a moment and consider if you can honestly answer this sample of questions right now. These questions barely scratch the surface of what you need to be able to address with your kids in today's world. Why Every Christian Parent Needs to Care It's widely known that at least two-thirds of young adults who grew up in Christian families are turning away from Christianity today.
Almost all spiritual leaders agree that training kids with a foundation in apologetics is one of the most important things parents and churches should be doing to address this alarming trend. You can probably see immediately why, in an increasingly secular world, your kids need robust answers to the tough questions of faith. Apologetics provides those answers. Instead of expanding further on the seemingly obvious, however, I’d like to give you a unique look at why apologetics is so necessary, using a consumer decision-making model that marketers have used for more than one hundred years (I’m in marketing professionally). I believe it has a lot to teach us about spiritual decision making and the role of apologetics. If that sounds complicated, don’t worry – it’s not. The following funnel represents the psychological steps behind a person’s purchase decisions. We’re going to look at how these same steps can apply to spiritual decisions. Awareness At this stage, a child is learning basic facts about spiritual beliefs and subconsciously assigning importance to those facts. For example, the most basic facts might include things like God is good, God loves me, Jesus died for me, the Bible is important, and I should behave in a way that pleases God. If your kids lived in a forest with no external influences, you could closely guard their understanding of these Christian beliefs and the importance they should have. But awareness is highly impacted by external factors that add other “facts” and change the relative importance of all those “facts.” One in four Americans under 30 describe their beliefs as atheist, agnostic or “nothing in particular.” The spiritual environment in which our kids are growing up is fundamentally changing the Awareness step in the spiritual decision-making process – frequently adding conflicting “facts” to our kids’ awareness, and often decreasing the relative importance of the things they learn from us. 
Visually, that change looks something like this: Learning apologetics directly helps our kids critically evaluate the “facts” that enter their awareness so they can determine with confidence what is relevant and what is important. Without such an understanding, the sheer volume of information they are being faced with today easily leads to spiritual confusion and indecisiveness. Interest It should come as no surprise that the more confusion that resides in the Awareness stage, the less interested many people are in sorting it out. It becomes the equivalent of throwing your hands in the air and saying, “there’s just no way of knowing.” Given that large numbers of people now claim to believe in “nothing in particular,” the statistics bear this out. Learning apologetics gives our kids tools to know they can sort out competing information in intellectually and spiritually meaningful ways; it naturally increases interest in searching for truth when a person believes there are meaningful answers available. Without developing interest at this stage, a person clearly won’t continue the path to commitment to Christ. Consideration Once people are aware of something and have gained an interest in it, they enter a Consideration stage where they compare the relative merits of it versus other options. Whereas in the Awareness stage, people are more like subconscious “fact collectors and sifters,” in Consideration they are active evaluators who are looking for solutions to problems. People often compare alternatives using mental rules such as what works the best, what they like the best, what they’ve used the most, what important people use, and what costs the least. Apologetics, at its core, directs people toward making spiritual decisions based on what is true. We can’t make the right choice amongst competing alternatives unless we have the right measuring stick. 
Apologetics gives kids the right measuring stick - what is true - and gives them the tools for actually doing the measuring. The large number of competing worldviews today makes apologetics absolutely critical at this step. Intent You might think that once you are aware of something, interested in it, and have considered it amongst other options (i.e., the prior three stages), you would be ready to make a decision. This is overwhelmingly not how our mental process works, however. Our intentions to actually make a decision/commitment to something often change over time, due to new information entering our awareness or because of life events that cause us to question what we thought we knew. This can cause a loop where we cycle back to the Awareness stage and start again down the path to making a decision. Marketing research has shown that the strength of intention to proceed to commitment directly relates to the strength of underlying beliefs from the Awareness stage. Given how much apologetics strengthens the Awareness stage, it is clear that it later influences Intent as well. Decision (the goal!) Making a decision for Christ is obviously not the same as making a decision about buying a car. But the psychology behind the steps in our decision-making processes has a clear relationship with spiritual decision making. At each step here, apologetics is a key lever for influencing our kids' ultimate decisions for the Lord. We can't just ignore that because it seems too time-consuming or too difficult to learn. It's too important. God doesn't need defending, but our kids need help understanding. In my next post, I'll give several ideas for how you can get started with apologetics! I'd love to hear if you've studied apologetics - if so, what questions led you to it? If not, what are your barriers to doing so? Reader Interactions Comments We also really believe in the necessity of knowing that what you believe is true and defensible in today's relativistic and hostile culture.
Our children are 6 and 8, but we have begun to teach them the basics of apologetics even at these young ages. This is not the only thing we are purposefully teaching them, but we feel it is a necessity. If our kids can’t answer questions like: Can I trust the Bible? Has science disproven God? Is it egotistical to suggest that Christianity is the only way? Does it matter what you think, as long as you are sincere? Does evil disprove the existence of a good God? etc., then we feel we will have failed in a large part of our upbringing of our kids. Knowing apologetics has been invaluable for us personally in evangelism, as well as in better understanding our own faith. Knowing the certainty of what you believe gives incredible confidence, which is felt by others. The questions I had were basically centered on two subjects: 1. Reconciling origin science and the biblical interpretation I held concerning Creation. 2. The reliability of the Bible, particularly how to know that the original writers got their description of Jesus’ life and teachings right. The area of textual criticism as it relates to the preservation of the text wasn’t really an issue for me so much as the question of whether or not the gospels are accurate eyewitness testimony. My sister, who is now in her 60s, has these exact same two questions as you had. I am a Christian who unfortunately is not equipped to answer her questions. Can you tell me what resources you found that answered these two specific questions of yours? Thank you so much. “Marketing research has shown that the strength of intention to proceed to commitment directly relates to the strength of underlying beliefs from the Awareness stage.” Is this why first impressions are so powerful and often decisive? I can see how a poor initial awareness can lead nowhere and certainly not toward intent and commitment. 
But your statement, regarding first impressions, is made in the positive view of the power of first impressions, a much more constructive mode, especially when thinking about the young, who may not have any pre-existing beliefs surrounding first awareness. Being trained as a scientist and pursuing a career in medicine has taught me one thing regarding research: the perspective or worldview of the individual conducting the research shapes the results of the research that is being conducted. I have personally seen doctorate-level biologists assign creatorial power to evolution or to “God.” I have been studying apologetics with as much zeal as I have been studying the human body, and I would like to offer a caveat to the author of this blog: I would be very careful when illustrating a point of “science says there is no God,” even under the label of “secular.” I have seen the study of secular science turn a man into a theist. I personally believe that God and science can “go hand in hand.” Hi Clayton, I agree completely! The statements in my chart weren’t intended to reflect what I believe to be true (and you’re right, there are some secular scientists who would admit that science can’t “disprove” God). But overwhelmingly, the popularized message that kids get today – outside of well-studied philosophical circles – is that it’s a choice between faith and science. They are bombarded with the message that science is the antithesis of faith and that science leaves no room for God. This is the message I represented in this post – the one the average teenager/young adult (and even older adult!) would likely take away from the bits and pieces they hear. I agree with you that this message is completely wrong. I’ve been reading extensively about origins science and am writing an ebook on views of creation/evolution, so it’s a topic near and dear to my heart as well. Thanks for your comment! 
State institutions of Cambodia

This is a list of the state institutions of Cambodia.

Royalty
- King of Cambodia
- Queen Mother Norodom Monineath Sihanouk
- King Norodom Sihamoni
- Norodom Ranariddh

Legislative branch
- Parliament of Cambodia
  - National Assembly of Cambodia
  - Senate of Cambodia

Executive branch
- Prime Minister of Cambodia: Hun Sen (1998-)
- Office of the Council of Ministers
  - Khmer Rouge Trial Task Force
- Ministry of Agriculture, Forestry and Fisheries
- Ministry of Commerce
- Ministry of Culture and Fine Arts
  - APSARA
- Ministry of Education, Youth and Sport
  - National universities:
    - Cambodia Agricultural Research and Development Institute (CARDI)
    - Economics and Finance Institute (EFI)
    - Institute of Technology of Cambodia (ITC)
    - Maharishi Vedic University (MVU)
    - National Institute of Education (NIE)
    - National Polytechnic Institute of Cambodia (NPIC)
    - National University of Management (NUM)
    - Prek Leap National School of Agriculture (PNSA)
    - Royal University of Agriculture (RUA)
    - Royal University of Fine Arts (RUFA)
    - Royal University of Law and Economics (RULE)
    - Royal University of Phnom Penh (RUPP)
    - Svay Rieng University (SRU)
    - University of Health Sciences (UHS)
- Ministry of Environment
  - Ministry of Environment, Cambodia
- Ministry of Finance and Economy
  - National Bank of Cambodia
- Ministry of Foreign Affairs and International Cooperation
  - Permanent Mission of the Kingdom of Cambodia to the United Nations
- Ministry of Health
  - National Centre for HIV/AIDS, Dermatology and STDs, Cambodia
  - National Malaria Center of Cambodia
  - National hospitals
    - National Pediatric Hospital, Cambodia
- Ministry of Information
- Ministry of Industry, Mining and Energy
  - Electricity Authority of Cambodia
  - Cambodia National Petroleum Authority
- Ministry of the Interior
- Ministry of Justice
- Ministry of Labor and Vocational Training
- Ministry of Land Management, Urban Planning and Construction
  - Council of Land Policy
  - National Cadastral Commission
  - National Social Land Concession Committee
- Ministry of National Defence
  - Royal Cambodian Armed Forces
    - Royal Cambodian Army
    - Royal Cambodian Navy
    - Royal Cambodian Air Force
- Ministry of Parliamentary Affairs and Inspection
- Ministry of Planning
  - National Institute of Statistics
- Ministry of Posts and Telecommunications
  - Camnet Internet Service
  - National Information Communications Technology Development Authority, Cambodia
  - Telecom Cambodia
- Ministry of Public Works and Transport
  - Secretariat of Civil Aviation
  - Sihanoukville Autonomous Port
- Ministry of Religions and Cults
  - Buddhist Institute
- Ministry of Rural Development
- Ministry of Social Affairs, Veterans and Youth Rehabilitation
- Ministry of Tourism
- Ministry of Water Resources and Meteorology
- Ministry of Women's Affairs

Judicial branch
- Extraordinary Chambers in the Courts of Cambodia
- Supreme Court of Cambodia

Provincial
Please see Provinces of Cambodia, Districts of Cambodia, Commune council.

See also
- Government of Cambodia
- Politics of Cambodia

References
See Government of Cambodia ministry listings

Category:Government of Cambodia Category:Politics of Cambodia
1. Field of the Invention The present invention relates to detection of organic compounds. More particularly, the present invention relates to fiber optic probes for detection and measurement of organic wastes in groundwater. 2. Discussion of Background Fiber optic sensing devices for detecting the existence and extent of chemicals, or changes in chemical or physical parameters, are well known. Such devices, whether they are probes for continuous remote operation or sensors used for laboratory analysis techniques, typically contain some type of indicator responsive to the presence of an analyte in fluids or gases. The indicator, upon mixing with the chemical compound, may react with it or be changed in some other way, to indicate the presence and concentration of the compound. Indicators are known for detecting such analytes as oxygen, carbon dioxide, hydrogen ions (i.e., pH), certain metal ions, and biological analytes such as glucose and ammonia. When used in conjunction with fiber optic devices, the optical characteristics of these indicators in response to the presence of a corresponding analyte offer a range of possibilities for detection and analysis. For example, analytes may affect the fluorescent emission, reflection, or absorption spectrum of light passing through the indicator in the presence of that particular analyte. Fiber optic detecting systems using spectrometric absorption analysis techniques typically include a light source, a sample cell containing the fluid of interest, an indicator, and a detector, such as a spectrophotometer. Light is passed through the sample cell and received by the spectrophotometer, which measures the absorption spectrum of the received light. Devices for remote operation are typically in the form of probes using fiber optics to transmit and receive light signals. Numerous methods exist for securing the indicator within these probes. 
Typically, polymer matrices positioned in the probes are used to carry the indicators through absorption, adsorption, or other methods. Specific examples of sensing devices include U.S. Pat. No. 4,842,783, issued to Blaylock, and U.S. Pat. No. 5,119,463, issued to Vurek et al. Blaylock discloses a fiber optic sensor for detecting certain metal ions and pH, among other analytes. The sensor uses an indicator dye absorbed into a polymeric gel contained within the sensor. Similarly, Vurek et al disclose a fiber optic probe for determining the presence of O.sub.2, CO.sub.2, and pH using analyte indicators disposed on polymer matrices. In U.S. Pat. No. 5,096,671, Kane et al disclose a fiber optic sensor using a detection indicator dispersed within a hydrogel matrix. Also, Yafuso et al, in U.S. Pat. Nos. 4,999,306 and 4,886,338, disclose a fiber optic sensor for detecting the concentration of an ionic component in a medium, using a sensing material chemically bonded to an ionic matrix contained within the sensor. In one particular embodiment, sulfonic acid absorbed within a hydrogel is used as a pH indicator for the fiber optic sensor. Still, despite the abundance of fiber optic probes using well-known indicators, it is believed that certain monitoring and detection needs have yet to be met adequately. For instance, the identification of trace amounts of organic compounds in aqueous solutions has been difficult to achieve due to the high water content of the solutions interfering with the detection response. The detection of low levels of organic species, particularly in aqueous solutions, is a growing health and safety concern for our environment; thus, there is an immediate need for detectors of this kind. It is believed that no effective sensor or remote probe exists for detecting small amounts of organic species in aqueous solutions.
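The spectrometric absorption analysis described above is conventionally modeled with the Beer-Lambert law, A = εlc, which relates measured absorbance to analyte concentration. A minimal sketch follows; the molar absorptivity and absorbance values are hypothetical illustrations, not figures from any of the cited patents:

```python
# Beer-Lambert law: A = epsilon * l * c, so c = A / (epsilon * l).
# The indicator parameters below are hypothetical examples.

def concentration_from_absorbance(absorbance, molar_absorptivity, path_length_cm):
    """Return analyte concentration (mol/L) from a measured absorbance."""
    return absorbance / (molar_absorptivity * path_length_cm)

# Example: absorbance A = 0.30 through a 1 cm sample cell,
# with an indicator of molar absorptivity 1.5e4 L/(mol*cm).
c = concentration_from_absorbance(0.30, 1.5e4, 1.0)
print(f"{c:.2e} mol/L")  # -> 2.00e-05 mol/L
```

This linear relationship only holds for dilute solutions; at trace levels in water, the interference effects the passage mentions make the measurement considerably harder in practice.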
1. Field of the Invention This invention relates to a dual-mode bandpass filter with at least two adjacent cavities being mounted in a planar relationship to one another. 2. Description of the Prior Art It is known to have a dual-mode bandpass filter with each cavity being mounted in an axial relationship to one another and containing a dielectric resonator. This type of filter is described in Institute of Electrical and Electronics Engineers Transactions on Microwave Theory and Techniques, Vol. MTT-30, No. 9, September 1982, pages 1311-1316, by S. J. Fiedziuszko. A filter with axially mounted cavities has some disadvantages in that the cavities must be manufactured within a narrow tolerance, as the dimensions of each cavity are critical. This makes the cavity relatively expensive to manufacture. Also, axially mounted cavities are more difficult to mount on a channel and can only be tuned over a relatively narrow range. It is an object of the present invention to provide a bandpass filter having adjacent cavities that are mounted in a planar relationship relative to one another, contain a dielectric resonator, and are versatile in that they can be tuned over a relatively broad range.
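Why cavity dimensions are so critical can be seen from the textbook resonance formula for a simple rectangular cavity. This is only a sketch under simplifying assumptions: the patent's dual-mode, dielectric-loaded cavities are considerably more complex, and the dimensions used here are hypothetical.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def rect_cavity_resonance_hz(a, b, d, m, n, p, eps_r=1.0):
    """Resonant frequency of the (m, n, p) mode of a rectangular cavity
    with inner dimensions a x b x d (meters), uniformly filled with a
    dielectric of relative permittivity eps_r."""
    return (C / (2.0 * math.sqrt(eps_r))) * math.sqrt(
        (m / a) ** 2 + (n / b) ** 2 + (p / d) ** 2
    )

# TE101 mode of a 20 x 10 x 20 mm air-filled cavity: roughly 10.6 GHz.
f_air = rect_cavity_resonance_hz(0.02, 0.01, 0.02, 1, 0, 1)

# Filling the cavity with a dielectric of eps_r = 4 halves the frequency,
# i.e. a dielectric resonator lets a much smaller cavity hit the same band.
f_loaded = rect_cavity_resonance_hz(0.02, 0.01, 0.02, 1, 0, 1, eps_r=4.0)
```

Because frequency scales inversely with the dimensions, a small machining error shifts the passband directly, which is the tolerance problem the patent attributes to axially mounted cavities.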
I've known a lot of people who have lost money when they sold their homes. In fact, I'm one of those people, and it's happened to me more than once. There are a number of factors that can cause a financial loss when you sell your house, including the need to sell at the wrong time due to divorce or an impending foreclosure, or a downturn in the local real estate market. However, it's also common to lose money simply by making too many expensive changes to the house before putting it on the market. This is how I lost money on real estate, before I wised up. My most resounding failure in the fix-it-and-flip-it market was a house I bought in Spokane, Washington. Knowing what I know now, I would have restricted myself to replacing the carpets and the kitchen and bathroom fixtures, painting inside and out, and buying new appliances. I probably would have replaced the old-style windows, too, to make the place look nicer and appeal to the energy-conscious buyer. These fixes could have been done easily within the two years I needed to live there to avoid capital gains taxes. Since I didn't know what I know now, I made major renovations, which included moving the bathroom. I did most of the work myself, but the materials alone cost more than I could get back when the house was sold. With the exception of repairs done to the house to make it eligible for an FHA loan and watering the grass, I doubt that any of my major projects really helped me sell the house or increased its value. If a house is actually sound, with no structural damage or insect problems, the biggest reason it will sell for less than it's worth is usually cosmetic. This was certainly true of the house I bought in Spokane. Dirty carpeting, and a wall in the living room covered with mirror tiles, kept most buyers from going any further into the house. I could see past the cosmetic problems and see the home's full potential - but my imagination went a bit too far. 
The floor plan was odd, and slightly inconvenient, but leaving the bathroom where it was would have been far more rational, financially. Why didn't I do that? Because my emotions and my nesting instincts took over, pushing aside all thought of future gain or loss. Let's face it - most people don't buy their own homes with the intention of making a profit, although they certainly hope the house will be a good investment. In fact, the emotional stress caused by the process of buying a house and moving into it can be enough to completely erase any thought of moving again a few years later. However, I know several families who have made a very good living by buying underpriced homes, living in them and fixing them up, and then selling them when the IRS allows them to do so without paying extra taxes. Clearly, these folks don't make any changes to these houses without carefully considering the bottom line. After my Spokane adventure, I decided to learn from my mistakes, and find out how to stop losing money on houses. I read books by authors who are experienced in fixing and flipping houses - and then read them again. When I saw that most remodeling projects almost never recoup their costs when the house is sold, I was a little shocked, because I had been guilty of almost every mistake on the list at one time or another. I know many people who have also made the same mistakes, even when they started those remodeling projects with the intention of increasing the value of their homes. When I bought my next house, I kept that list very firmly in mind. For instance, my kitchen was badly in need of a major overhaul (or so I believed), and it was far too small. I pored over the latest home decorating magazines, and ideas came flooding into my head. I thought about knocking out some walls, and I even tried to imagine adding on to the house to make the kitchen bigger. New cabinets would be needed, and new appliances... 
In the end I painted the kitchen cabinets and replaced the sink with a new one I purchased at Ikea. I covered the chipped orange Formica counters with printed cotton fabric, and coated it with many layers of water-based Varathane that was intended to protect wood floors. The complete "remodel" cost less than $400, as opposed to the thousands of dollars that I would have spent if I had followed through on my idle dreams of a "perfect" kitchen. Since the house sold at a very good price within two weeks of listing it, my buyer obviously didn't mind that the kitchen didn't meet my idea of perfect. Because I kept my costs down, I made a handy profit on the sale. Would I have been able to sell the house for more money if the kitchen had been remodeled and expanded? Perhaps, but not enough to cover the cost of the remodel. Although the National Association of Realtors lists a kitchen remodel as one of the projects that will increase a house's value the most, they still advise that you should expect to get back only 80% of the costs. If your new kitchen is far fancier, bigger, and more expensive than any other kitchen in the neighborhood, the returns will be even less. A full kitchen remodel can cost thousands of dollars, so the 20% you don't get back can be a big chunk of change. Does this mean that you shouldn't make changes to your home that would make you happy? Not at all, especially if you intend to live there for many years. But it does pay to sit down with your spouse or partner before you start making your remodeling plans, determine exactly how long you'll be staying in the home, and then think about the full financial implications of the remodeling project. Even if you don't think of yourself as a professional house flipper, it might pay to slow down a bit and find ways to improve the home without spending money you'll never see again. As a bonus, your family might be able to avoid the stress and disruption of all that remodeling mess.
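To put that 80% figure in concrete terms, here is a quick back-of-the-envelope calculation. The $30,000 remodel cost is a hypothetical example, not a figure from the NAR:

```python
def remodel_net(cost, recoup_rate=0.80):
    """Value added at resale and out-of-pocket loss for a remodel,
    assuming the rule of thumb that only ~80% of the cost is recovered."""
    value_added = cost * recoup_rate
    return value_added, cost - value_added

# Hypothetical $30,000 kitchen remodel at an 80% recoup rate:
added, lost = remodel_net(30_000)
print(added, lost)  # -> 24000.0 6000.0
```

In other words, even at the advertised recoup rate you walk away $6,000 poorer on a $30,000 remodel, and an over-improved kitchen recoups less than that.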
Artur Doppelmayr Dies Obituary - Ropeway Pioneer Artur Doppelmayr * September 16, 1922 – † May 12, 2017 Artur Doppelmayr was a pioneer. He was a ropeway builder through and through, and transformed the Doppelmayr company from a workshop operation into an internationally successful industrial enterprise. We mourn the loss of our one-time CEO and chairman. Artur Doppelmayr, the only son of Emil Doppelmayr, was born in Dornbirn on September 16, 1922. He took over the company reins following his father’s death in 1967. Artur placed a strong emphasis on continuing the tradition of developing and implementing technical innovations that were in tune with the market. In the process, he set many milestones that were to be instrumental in shaping the success of the company. The construction of the first monocable gondola lift with four-passenger cabins in Mellau in the Bregenzerwald region in 1972 was a major event in the company’s history, as it paved the way for the success of detachable ropeway technology. The company’s first chairlift based on detachable grip technology was to follow four years later. Further product innovations included the detachable quad chairlift in Breckenridge, Colorado (USA) in 1981, the 8-passenger gondola lift in Steamboat (USA) in 1986, and the 6-seater chairlift near Quebec (Canada) in 1991 – all of them world firsts. Artur Doppelmayr’s drive to expand the export business led to the founding of a whole series of subsidiaries, collaborations and agency agreements outside of Austria. This enabled him to create an international network, which consolidated the company’s market position and secured it for the future. In 1992, Artur’s oldest son, Michael, was appointed as CEO and continued to pursue the same farsighted approach. 
Artur Doppelmayr stood down from the operational side of the business and in 1994 became chairman of the supervisory board of Doppelmayr Holding AG. The company’s outstanding achievements were not only reflected in the great successes in the international ropeway market, but also recognized by the numerous awards and honors bestowed on Artur Doppelmayr both at home and abroad. As CEO, Artur Doppelmayr always fostered close, personable relationships with his employees. We mourn the passing of Artur, who was a role model to us and whose life’s work will always remain a part of our lives. On behalf of all employees and the management of the Doppelmayr Group.
<Project DefaultTargets="Build" ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <ItemGroup Label="ProjectConfigurations"> <ProjectConfiguration Include="Debug CLR|Win32"> <Configuration>Debug CLR</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Debug CLR|x64"> <Configuration>Debug CLR</Configuration> <Platform>x64</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Debug Static|Win32"> <Configuration>Debug Static</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Debug Static|x64"> <Configuration>Debug Static</Configuration> <Platform>x64</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Debug|Win32"> <Configuration>Debug</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Debug|x64"> <Configuration>Debug</Configuration> <Platform>x64</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Release CLR|Win32"> <Configuration>Release CLR</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Release CLR|x64"> <Configuration>Release CLR</Configuration> <Platform>x64</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Release Static|Win32"> <Configuration>Release Static</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Release Static|x64"> <Configuration>Release Static</Configuration> <Platform>x64</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Release|Win32"> <Configuration>Release</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Release|x64"> <Configuration>Release</Configuration> <Platform>x64</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Unicode Debug CLR|Win32"> <Configuration>Unicode Debug CLR</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> 
<ProjectConfiguration Include="Unicode Debug CLR|x64"> <Configuration>Unicode Debug CLR</Configuration> <Platform>x64</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Unicode Debug Static|Win32"> <Configuration>Unicode Debug Static</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Unicode Debug Static|x64"> <Configuration>Unicode Debug Static</Configuration> <Platform>x64</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Unicode Debug|Win32"> <Configuration>Unicode Debug</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Unicode Debug|x64"> <Configuration>Unicode Debug</Configuration> <Platform>x64</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Unicode Release CLR|Win32"> <Configuration>Unicode Release CLR</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Unicode Release CLR|x64"> <Configuration>Unicode Release CLR</Configuration> <Platform>x64</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Unicode Release Static|Win32"> <Configuration>Unicode Release Static</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Unicode Release Static|x64"> <Configuration>Unicode Release Static</Configuration> <Platform>x64</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Unicode Release|Win32"> <Configuration>Unicode Release</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Unicode Release|x64"> <Configuration>Unicode Release</Configuration> <Platform>x64</Platform> </ProjectConfiguration> </ItemGroup> <PropertyGroup Label="Globals"> <TargetName>MSMoneyDemo</TargetName> <ProjectGUID>{B9D17DF6-5A82-48B6-A373-50959475A998}</ProjectGUID> <RootNamespace>MSMoneyDemo</RootNamespace> <Keyword>MFCProj</Keyword> </PropertyGroup> <Import 
Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" /> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="Configuration"> <ConfigurationType>Application</ConfigurationType> <UseOfMfc>Dynamic</UseOfMfc> <CharacterSet>MultiByte</CharacterSet> </PropertyGroup> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'" Label="Configuration"> <ConfigurationType>Application</ConfigurationType> <UseOfMfc>Dynamic</UseOfMfc> <CharacterSet>MultiByte</CharacterSet> <WholeProgramOptimization>true</WholeProgramOptimization> </PropertyGroup> <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Debug|X64'"> <ConfigurationType>Application</ConfigurationType> <UseOfMfc>Dynamic</UseOfMfc> <CharacterSet>MultiByte</CharacterSet> </PropertyGroup> <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Release|X64'"> <ConfigurationType>Application</ConfigurationType> <UseOfMfc>Dynamic</UseOfMfc> <CharacterSet>MultiByte</CharacterSet> <WholeProgramOptimization>true</WholeProgramOptimization> </PropertyGroup> <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Debug Static|Win32'"> <ConfigurationType>Application</ConfigurationType> <UseOfMfc>Static</UseOfMfc> <CharacterSet>MultiByte</CharacterSet> </PropertyGroup> <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Debug Static|X64'"> <ConfigurationType>Application</ConfigurationType> <UseOfMfc>Static</UseOfMfc> <CharacterSet>MultiByte</CharacterSet> </PropertyGroup> <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Release Static|Win32'"> <ConfigurationType>Application</ConfigurationType> <UseOfMfc>Static</UseOfMfc> <CharacterSet>MultiByte</CharacterSet> <WholeProgramOptimization>true</WholeProgramOptimization> </PropertyGroup> <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Release Static|X64'"> 
<ConfigurationType>Application</ConfigurationType> <UseOfMfc>Static</UseOfMfc> <CharacterSet>MultiByte</CharacterSet> <WholeProgramOptimization>true</WholeProgramOptimization> </PropertyGroup> <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Unicode Debug|Win32'"> <ConfigurationType>Application</ConfigurationType> <UseOfMfc>Dynamic</UseOfMfc> <CharacterSet>Unicode</CharacterSet> </PropertyGroup> <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Unicode Debug|X64'"> <ConfigurationType>Application</ConfigurationType> <UseOfMfc>Dynamic</UseOfMfc> <CharacterSet>Unicode</CharacterSet> </PropertyGroup> <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Unicode Debug Static|Win32'"> <ConfigurationType>Application</ConfigurationType> <UseOfMfc>Static</UseOfMfc> <CharacterSet>Unicode</CharacterSet> </PropertyGroup> <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Unicode Debug Static|X64'"> <ConfigurationType>Application</ConfigurationType> <UseOfMfc>Static</UseOfMfc> <CharacterSet>Unicode</CharacterSet> </PropertyGroup> <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Unicode Release|Win32'"> <ConfigurationType>Application</ConfigurationType> <UseOfMfc>Dynamic</UseOfMfc> <CharacterSet>Unicode</CharacterSet> <WholeProgramOptimization>true</WholeProgramOptimization> </PropertyGroup> <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Unicode Release|X64'"> <ConfigurationType>Application</ConfigurationType> <UseOfMfc>Dynamic</UseOfMfc> <CharacterSet>Unicode</CharacterSet> <WholeProgramOptimization>true</WholeProgramOptimization> </PropertyGroup> <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Unicode Release Static|Win32'"> <ConfigurationType>Application</ConfigurationType> <UseOfMfc>Static</UseOfMfc> <CharacterSet>Unicode</CharacterSet> 
    <WholeProgramOptimization>true</WholeProgramOptimization>
  </PropertyGroup>
  <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Unicode Release Static|X64'">
    <ConfigurationType>Application</ConfigurationType>
    <UseOfMfc>Static</UseOfMfc>
    <CharacterSet>Unicode</CharacterSet>
    <WholeProgramOptimization>true</WholeProgramOptimization>
  </PropertyGroup>
  <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Debug CLR|Win32'">
    <ConfigurationType>Application</ConfigurationType>
    <UseOfMfc>Dynamic</UseOfMfc>
    <CharacterSet>MultiByte</CharacterSet>
    <CLRSupport>true</CLRSupport>
  </PropertyGroup>
  <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Debug CLR|X64'">
    <ConfigurationType>Application</ConfigurationType>
    <UseOfMfc>Dynamic</UseOfMfc>
    <CharacterSet>MultiByte</CharacterSet>
    <CLRSupport>true</CLRSupport>
  </PropertyGroup>
  <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Release CLR|Win32'">
    <ConfigurationType>Application</ConfigurationType>
    <UseOfMfc>Dynamic</UseOfMfc>
    <CharacterSet>MultiByte</CharacterSet>
    <CLRSupport>true</CLRSupport>
    <WholeProgramOptimization>true</WholeProgramOptimization>
  </PropertyGroup>
  <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Release CLR|X64'">
    <ConfigurationType>Application</ConfigurationType>
    <UseOfMfc>Dynamic</UseOfMfc>
    <CharacterSet>MultiByte</CharacterSet>
    <CLRSupport>true</CLRSupport>
    <WholeProgramOptimization>true</WholeProgramOptimization>
  </PropertyGroup>
  <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Unicode Debug CLR|Win32'">
    <ConfigurationType>Application</ConfigurationType>
    <UseOfMfc>Dynamic</UseOfMfc>
    <CharacterSet>Unicode</CharacterSet>
    <CLRSupport>true</CLRSupport>
  </PropertyGroup>
  <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Unicode Debug CLR|X64'">
    <ConfigurationType>Application</ConfigurationType>
    <UseOfMfc>Dynamic</UseOfMfc>
    <CharacterSet>Unicode</CharacterSet>
    <CLRSupport>true</CLRSupport>
  </PropertyGroup>
  <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Unicode Release CLR|Win32'">
    <ConfigurationType>Application</ConfigurationType>
    <UseOfMfc>Dynamic</UseOfMfc>
    <CharacterSet>Unicode</CharacterSet>
    <CLRSupport>true</CLRSupport>
    <WholeProgramOptimization>true</WholeProgramOptimization>
  </PropertyGroup>
  <PropertyGroup Label="Configuration" Condition="'$(Configuration)|$(Platform)'=='Unicode Release CLR|X64'">
    <ConfigurationType>Application</ConfigurationType>
    <UseOfMfc>Dynamic</UseOfMfc>
    <CharacterSet>Unicode</CharacterSet>
    <CLRSupport>true</CLRSupport>
    <WholeProgramOptimization>true</WholeProgramOptimization>
  </PropertyGroup>
  <Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
  <ImportGroup Label="ExtensionSettings">
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Debug|Win32' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Release|Win32' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Debug|X64' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Release|X64' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Debug Static|Win32' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Debug Static|X64' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Release Static|Win32' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Release Static|X64' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Unicode Debug|Win32' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Unicode Debug|X64' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Unicode Debug Static|Win32' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Unicode Debug Static|X64' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Unicode Release|Win32' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Unicode Release|X64' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Unicode Release Static|Win32' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Unicode Release Static|X64' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Debug CLR|Win32' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Debug CLR|X64' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Release CLR|Win32' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Release CLR|X64' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Unicode Debug CLR|Win32' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Unicode Debug CLR|X64' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Unicode Release CLR|Win32' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets">
    <Import Project="$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props" Condition="'$(Configuration)|$(Platform)'=='Unicode Release CLR|X64' and (exists('$(LocalAppData)\Microsoft\VisualStudio\10.0\Microsoft.Cpp.$(Platform).user.props'))" />
  </ImportGroup>
  <PropertyGroup Label="UserMacros" />
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
    <OutDir>$(SolutionDir)$(Configuration)\</OutDir>
    <IntDir>$(Configuration)\</IntDir>
    <LinkIncremental>true</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
    <OutDir>$(SolutionDir)$(Configuration)\</OutDir>
    <IntDir>$(Configuration)\</IntDir>
    <LinkIncremental>false</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|X64'">
    <OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
    <IntDir>$(Platform)\$(Configuration)\</IntDir>
    <LinkIncremental>true</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|X64'">
    <OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
    <IntDir>$(Platform)\$(Configuration)\</IntDir>
    <LinkIncremental>false</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug Static|Win32'">
    <OutDir>$(SolutionDir)$(Configuration)\</OutDir>
    <IntDir>$(Configuration)\</IntDir>
    <LinkIncremental>true</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug Static|X64'">
    <OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
    <IntDir>$(Platform)\$(Configuration)\</IntDir>
    <LinkIncremental>true</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release Static|Win32'">
    <OutDir>$(SolutionDir)$(Configuration)\</OutDir>
    <IntDir>$(Configuration)\</IntDir>
    <LinkIncremental>false</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release Static|X64'">
    <OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
    <IntDir>$(Platform)\$(Configuration)\</IntDir>
    <LinkIncremental>false</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Debug|Win32'">
    <OutDir>$(SolutionDir)$(Configuration)\</OutDir>
    <IntDir>$(Configuration)\</IntDir>
    <LinkIncremental>true</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Debug|X64'">
    <OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
    <IntDir>$(Platform)\$(Configuration)\</IntDir>
    <LinkIncremental>true</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Debug Static|Win32'">
    <OutDir>$(SolutionDir)$(Configuration)\</OutDir>
    <IntDir>$(Configuration)\</IntDir>
    <LinkIncremental>true</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Debug Static|X64'">
    <OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
    <IntDir>$(Platform)\$(Configuration)\</IntDir>
    <LinkIncremental>true</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Release|Win32'">
    <OutDir>$(SolutionDir)$(Configuration)\</OutDir>
    <IntDir>$(Configuration)\</IntDir>
    <LinkIncremental>false</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Release|X64'">
    <OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
    <IntDir>$(Platform)\$(Configuration)\</IntDir>
    <LinkIncremental>false</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Release Static|Win32'">
    <OutDir>$(SolutionDir)$(Configuration)\</OutDir>
    <IntDir>$(Configuration)\</IntDir>
    <LinkIncremental>false</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Release Static|X64'">
    <OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
    <IntDir>$(Platform)\$(Configuration)\</IntDir>
    <LinkIncremental>false</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug CLR|Win32'">
    <OutDir>$(SolutionDir)$(Configuration)\</OutDir>
    <IntDir>$(Configuration)\</IntDir>
    <LinkIncremental>true</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug CLR|X64'">
    <OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
    <IntDir>$(Platform)\$(Configuration)\</IntDir>
    <LinkIncremental>true</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release CLR|Win32'">
    <OutDir>$(SolutionDir)$(Configuration)\</OutDir>
    <IntDir>$(Configuration)\</IntDir>
    <LinkIncremental>false</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release CLR|X64'">
    <OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
    <IntDir>$(Platform)\$(Configuration)\</IntDir>
    <LinkIncremental>false</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Debug CLR|Win32'">
    <OutDir>$(SolutionDir)$(Configuration)\</OutDir>
    <IntDir>$(Configuration)\</IntDir>
    <LinkIncremental>true</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Debug CLR|X64'">
    <OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
    <IntDir>$(Platform)\$(Configuration)\</IntDir>
    <LinkIncremental>true</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Release CLR|Win32'">
    <OutDir>$(SolutionDir)$(Configuration)\</OutDir>
    <IntDir>$(Configuration)\</IntDir>
    <LinkIncremental>false</LinkIncremental>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Release CLR|X64'">
    <OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
    <IntDir>$(Platform)\$(Configuration)\</IntDir>
    <LinkIncremental>false</LinkIncremental>
  </PropertyGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
    <Midl>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <ValidateAllParameters>true</ValidateAllParameters>
    </Midl>
    <ClCompile>
      <Optimization>Disabled</Optimization>
      <PreprocessorDefinitions>WIN32;_WINDOWS;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>true</MinimalRebuild>
      <BasicRuntimeChecks>EnableFastChecks</BasicRuntimeChecks>
      <RuntimeLibrary>MultiThreadedDebugDLL</RuntimeLibrary>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>EditAndContinue</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <TargetMachine>MachineX86</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
    <Midl>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <ValidateAllParameters>true</ValidateAllParameters>
    </Midl>
    <ClCompile>
      <Optimization>MaxSpeed</Optimization>
      <IntrinsicFunctions>true</IntrinsicFunctions>
      <PreprocessorDefinitions>WIN32;_WINDOWS;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>false</MinimalRebuild>
      <RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary>
      <FunctionLevelLinking>true</FunctionLevelLinking>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <OptimizeReferences>true</OptimizeReferences>
      <EnableCOMDATFolding>true</EnableCOMDATFolding>
      <TargetMachine>MachineX86</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|X64'">
    <Midl>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <TargetEnvironment>X64</TargetEnvironment>
    </Midl>
    <ClCompile>
      <Optimization>Disabled</Optimization>
      <PreprocessorDefinitions>WIN32;_WINDOWS;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>true</MinimalRebuild>
      <BasicRuntimeChecks>EnableFastChecks</BasicRuntimeChecks>
      <RuntimeLibrary>MultiThreadedDebugDLL</RuntimeLibrary>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <TargetMachine>MachineX64</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|X64'">
    <Midl>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <TargetEnvironment>X64</TargetEnvironment>
    </Midl>
    <ClCompile>
      <Optimization>MaxSpeed</Optimization>
      <IntrinsicFunctions>true</IntrinsicFunctions>
      <PreprocessorDefinitions>WIN32;_WINDOWS;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>false</MinimalRebuild>
      <RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary>
      <FunctionLevelLinking>true</FunctionLevelLinking>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <OptimizeReferences>true</OptimizeReferences>
      <EnableCOMDATFolding>true</EnableCOMDATFolding>
      <TargetMachine>MachineX64</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug Static|Win32'">
    <Midl>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <ValidateAllParameters>true</ValidateAllParameters>
    </Midl>
    <ClCompile>
      <Optimization>Disabled</Optimization>
      <PreprocessorDefinitions>WIN32;_WINDOWS;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>true</MinimalRebuild>
      <BasicRuntimeChecks>EnableFastChecks</BasicRuntimeChecks>
      <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>EditAndContinue</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <TargetMachine>MachineX86</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug Static|X64'">
    <Midl>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <TargetEnvironment>X64</TargetEnvironment>
    </Midl>
    <ClCompile>
      <Optimization>Disabled</Optimization>
      <PreprocessorDefinitions>WIN32;_WINDOWS;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>true</MinimalRebuild>
      <BasicRuntimeChecks>EnableFastChecks</BasicRuntimeChecks>
      <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <TargetMachine>MachineX64</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release Static|Win32'">
    <Midl>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <ValidateAllParameters>true</ValidateAllParameters>
    </Midl>
    <ClCompile>
      <Optimization>MaxSpeed</Optimization>
      <IntrinsicFunctions>true</IntrinsicFunctions>
      <PreprocessorDefinitions>WIN32;_WINDOWS;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>false</MinimalRebuild>
      <RuntimeLibrary>MultiThreaded</RuntimeLibrary>
      <FunctionLevelLinking>true</FunctionLevelLinking>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <OptimizeReferences>true</OptimizeReferences>
      <EnableCOMDATFolding>true</EnableCOMDATFolding>
      <TargetMachine>MachineX86</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release Static|X64'">
    <Midl>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <TargetEnvironment>X64</TargetEnvironment>
    </Midl>
    <ClCompile>
      <Optimization>MaxSpeed</Optimization>
      <IntrinsicFunctions>true</IntrinsicFunctions>
      <PreprocessorDefinitions>WIN32;_WINDOWS;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>false</MinimalRebuild>
      <RuntimeLibrary>MultiThreaded</RuntimeLibrary>
      <FunctionLevelLinking>true</FunctionLevelLinking>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <OptimizeReferences>true</OptimizeReferences>
      <EnableCOMDATFolding>true</EnableCOMDATFolding>
      <TargetMachine>MachineX64</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Debug|Win32'">
    <Midl>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <ValidateAllParameters>true</ValidateAllParameters>
    </Midl>
    <ClCompile>
      <Optimization>Disabled</Optimization>
      <PreprocessorDefinitions>WIN32;_WINDOWS;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>true</MinimalRebuild>
      <BasicRuntimeChecks>EnableFastChecks</BasicRuntimeChecks>
      <RuntimeLibrary>MultiThreadedDebugDLL</RuntimeLibrary>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>EditAndContinue</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <TargetMachine>MachineX86</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Debug|X64'">
    <Midl>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <TargetEnvironment>X64</TargetEnvironment>
    </Midl>
    <ClCompile>
      <Optimization>Disabled</Optimization>
      <PreprocessorDefinitions>WIN32;_WINDOWS;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>true</MinimalRebuild>
      <BasicRuntimeChecks>EnableFastChecks</BasicRuntimeChecks>
      <RuntimeLibrary>MultiThreadedDebugDLL</RuntimeLibrary>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <TargetMachine>MachineX64</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Debug Static|Win32'">
    <Midl>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <ValidateAllParameters>true</ValidateAllParameters>
    </Midl>
    <ClCompile>
      <Optimization>Disabled</Optimization>
      <PreprocessorDefinitions>WIN32;_WINDOWS;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>true</MinimalRebuild>
      <BasicRuntimeChecks>EnableFastChecks</BasicRuntimeChecks>
      <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>EditAndContinue</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <TargetMachine>MachineX86</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Debug Static|X64'">
    <Midl>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <TargetEnvironment>X64</TargetEnvironment>
    </Midl>
    <ClCompile>
      <Optimization>Disabled</Optimization>
      <PreprocessorDefinitions>WIN32;_WINDOWS;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>true</MinimalRebuild>
      <BasicRuntimeChecks>EnableFastChecks</BasicRuntimeChecks>
      <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <TargetMachine>MachineX64</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Release|Win32'">
    <Midl>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <ValidateAllParameters>true</ValidateAllParameters>
    </Midl>
    <ClCompile>
      <Optimization>MaxSpeed</Optimization>
      <IntrinsicFunctions>true</IntrinsicFunctions>
      <PreprocessorDefinitions>WIN32;_WINDOWS;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>false</MinimalRebuild>
      <RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary>
      <FunctionLevelLinking>true</FunctionLevelLinking>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <OptimizeReferences>true</OptimizeReferences>
      <EnableCOMDATFolding>true</EnableCOMDATFolding>
      <TargetMachine>MachineX86</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Release|X64'">
    <Midl>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <TargetEnvironment>X64</TargetEnvironment>
    </Midl>
    <ClCompile>
      <Optimization>MaxSpeed</Optimization>
      <IntrinsicFunctions>true</IntrinsicFunctions>
      <PreprocessorDefinitions>WIN32;_WINDOWS;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>false</MinimalRebuild>
      <RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary>
      <FunctionLevelLinking>true</FunctionLevelLinking>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <OptimizeReferences>true</OptimizeReferences>
      <EnableCOMDATFolding>true</EnableCOMDATFolding>
      <TargetMachine>MachineX64</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Release Static|Win32'">
    <Midl>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <ValidateAllParameters>true</ValidateAllParameters>
    </Midl>
    <ClCompile>
      <Optimization>MaxSpeed</Optimization>
      <IntrinsicFunctions>true</IntrinsicFunctions>
      <PreprocessorDefinitions>WIN32;_WINDOWS;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>false</MinimalRebuild>
      <RuntimeLibrary>MultiThreaded</RuntimeLibrary>
      <FunctionLevelLinking>true</FunctionLevelLinking>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <OptimizeReferences>true</OptimizeReferences>
      <EnableCOMDATFolding>true</EnableCOMDATFolding>
      <TargetMachine>MachineX86</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Release Static|X64'">
    <Midl>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <TargetEnvironment>X64</TargetEnvironment>
    </Midl>
    <ClCompile>
      <Optimization>MaxSpeed</Optimization>
      <IntrinsicFunctions>true</IntrinsicFunctions>
      <PreprocessorDefinitions>WIN32;_WINDOWS;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>false</MinimalRebuild>
      <RuntimeLibrary>MultiThreaded</RuntimeLibrary>
      <FunctionLevelLinking>true</FunctionLevelLinking>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <OptimizeReferences>true</OptimizeReferences>
      <EnableCOMDATFolding>true</EnableCOMDATFolding>
      <TargetMachine>MachineX64</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug CLR|Win32'">
    <Midl>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <ValidateAllParameters>true</ValidateAllParameters>
    </Midl>
    <ClCompile>
      <Optimization>Disabled</Optimization>
      <PreprocessorDefinitions>WIN32;_WINDOWS;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>false</MinimalRebuild>
      <BasicRuntimeChecks>
      </BasicRuntimeChecks>
      <RuntimeLibrary>MultiThreadedDebugDLL</RuntimeLibrary>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <AssemblyDebug>true</AssemblyDebug>
      <SubSystem>Windows</SubSystem>
      <TargetMachine>MachineX86</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug CLR|X64'">
    <Midl>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <TargetEnvironment>X64</TargetEnvironment>
    </Midl>
    <ClCompile>
      <Optimization>Disabled</Optimization>
      <PreprocessorDefinitions>WIN32;_WINDOWS;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>false</MinimalRebuild>
      <BasicRuntimeChecks>
      </BasicRuntimeChecks>
      <RuntimeLibrary>MultiThreadedDebugDLL</RuntimeLibrary>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <AssemblyDebug>true</AssemblyDebug>
      <SubSystem>Windows</SubSystem>
      <TargetMachine>MachineX64</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release CLR|Win32'">
    <Midl>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <ValidateAllParameters>true</ValidateAllParameters>
    </Midl>
    <ClCompile>
      <Optimization>MaxSpeed</Optimization>
      <IntrinsicFunctions>true</IntrinsicFunctions>
      <PreprocessorDefinitions>WIN32;_WINDOWS;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MinimalRebuild>false</MinimalRebuild>
      <RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary>
      <FunctionLevelLinking>true</FunctionLevelLinking>
      <PrecompiledHeader>Use</PrecompiledHeader>
      <WarningLevel>Level3</WarningLevel>
      <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    </ClCompile>
    <ResourceCompile>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <Culture>0x0409</Culture>
      <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
    <Link>
      <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <GenerateDebugInformation>true</GenerateDebugInformation>
      <SubSystem>Windows</SubSystem>
      <OptimizeReferences>true</OptimizeReferences>
      <EnableCOMDATFolding>true</EnableCOMDATFolding>
      <TargetMachine>MachineX86</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release CLR|X64'">
    <Midl>
      <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <MkTypLibCompatible>false</MkTypLibCompatible>
      <TargetEnvironment>X64</TargetEnvironment>
    </Midl>
    <ClCompile>
      <Optimization>MaxSpeed</Optimization>
      <IntrinsicFunctions>true</IntrinsicFunctions>
<PreprocessorDefinitions>WIN32;_WINDOWS;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <MinimalRebuild>false</MinimalRebuild> <RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary> <FunctionLevelLinking>true</FunctionLevelLinking> <PrecompiledHeader>Use</PrecompiledHeader> <WarningLevel>Level3</WarningLevel> <DebugInformationFormat>ProgramDatabase</DebugInformationFormat> </ClCompile> <ResourceCompile> <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <Culture>0x0409</Culture> <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories> </ResourceCompile> <Link> <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies> <GenerateDebugInformation>true</GenerateDebugInformation> <SubSystem>Windows</SubSystem> <OptimizeReferences>true</OptimizeReferences> <EnableCOMDATFolding>true</EnableCOMDATFolding> <TargetMachine>MachineX64</TargetMachine> </Link> </ItemDefinitionGroup> <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Debug CLR|Win32'"> <Midl> <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <MkTypLibCompatible>false</MkTypLibCompatible> <ValidateAllParameters>true</ValidateAllParameters> </Midl> <ClCompile> <Optimization>Disabled</Optimization> <PreprocessorDefinitions>WIN32;_WINDOWS;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <MinimalRebuild>false</MinimalRebuild> <BasicRuntimeChecks> </BasicRuntimeChecks> <RuntimeLibrary>MultiThreadedDebugDLL</RuntimeLibrary> <PrecompiledHeader>Use</PrecompiledHeader> <WarningLevel>Level3</WarningLevel> <DebugInformationFormat>ProgramDatabase</DebugInformationFormat> </ClCompile> <ResourceCompile> <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <Culture>0x0409</Culture> <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories> </ResourceCompile> <Link> 
<AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies> <GenerateDebugInformation>true</GenerateDebugInformation> <AssemblyDebug>true</AssemblyDebug> <SubSystem>Windows</SubSystem> <TargetMachine>MachineX86</TargetMachine> </Link> </ItemDefinitionGroup> <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Debug CLR|X64'"> <Midl> <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <MkTypLibCompatible>false</MkTypLibCompatible> <TargetEnvironment>X64</TargetEnvironment> </Midl> <ClCompile> <Optimization>Disabled</Optimization> <PreprocessorDefinitions>WIN32;_WINDOWS;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <MinimalRebuild>false</MinimalRebuild> <BasicRuntimeChecks> </BasicRuntimeChecks> <RuntimeLibrary>MultiThreadedDebugDLL</RuntimeLibrary> <PrecompiledHeader>Use</PrecompiledHeader> <WarningLevel>Level3</WarningLevel> <DebugInformationFormat>ProgramDatabase</DebugInformationFormat> </ClCompile> <ResourceCompile> <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <Culture>0x0409</Culture> <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories> </ResourceCompile> <Link> <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies> <GenerateDebugInformation>true</GenerateDebugInformation> <AssemblyDebug>true</AssemblyDebug> <SubSystem>Windows</SubSystem> <TargetMachine>MachineX64</TargetMachine> </Link> </ItemDefinitionGroup> <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Release CLR|Win32'"> <Midl> <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <MkTypLibCompatible>false</MkTypLibCompatible> <ValidateAllParameters>true</ValidateAllParameters> </Midl> <ClCompile> <Optimization>MaxSpeed</Optimization> <IntrinsicFunctions>true</IntrinsicFunctions> 
<PreprocessorDefinitions>WIN32;_WINDOWS;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <MinimalRebuild>false</MinimalRebuild> <RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary> <FunctionLevelLinking>true</FunctionLevelLinking> <PrecompiledHeader>Use</PrecompiledHeader> <WarningLevel>Level3</WarningLevel> <DebugInformationFormat>ProgramDatabase</DebugInformationFormat> </ClCompile> <ResourceCompile> <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <Culture>0x0409</Culture> <AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories> </ResourceCompile> <Link> <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies> <GenerateDebugInformation>true</GenerateDebugInformation> <SubSystem>Windows</SubSystem> <OptimizeReferences>true</OptimizeReferences> <EnableCOMDATFolding>true</EnableCOMDATFolding> <TargetMachine>MachineX86</TargetMachine> </Link> </ItemDefinitionGroup> <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Unicode Release CLR|X64'"> <Midl> <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <MkTypLibCompatible>false</MkTypLibCompatible> <TargetEnvironment>X64</TargetEnvironment> </Midl> <ClCompile> <Optimization>MaxSpeed</Optimization> <IntrinsicFunctions>true</IntrinsicFunctions> <PreprocessorDefinitions>WIN32;_WINDOWS;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <MinimalRebuild>false</MinimalRebuild> <RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary> <FunctionLevelLinking>true</FunctionLevelLinking> <PrecompiledHeader>Use</PrecompiledHeader> <WarningLevel>Level3</WarningLevel> <DebugInformationFormat>ProgramDatabase</DebugInformationFormat> </ClCompile> <ResourceCompile> <PreprocessorDefinitions>NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <Culture>0x0409</Culture> 
<AdditionalIncludeDirectories>$(IntDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories> </ResourceCompile> <Link> <AdditionalDependencies>winmm.lib;%(AdditionalDependencies)</AdditionalDependencies> <GenerateDebugInformation>true</GenerateDebugInformation> <SubSystem>Windows</SubSystem> <OptimizeReferences>true</OptimizeReferences> <EnableCOMDATFolding>true</EnableCOMDATFolding> <TargetMachine>MachineX64</TargetMachine> </Link> </ItemDefinitionGroup> <ItemGroup> <ClCompile Include="MainFrm.cpp" /> <ClCompile Include="MSMCaptionBar.cpp" /> <ClCompile Include="MSMCaptionBarButton.cpp" /> <ClCompile Include="MSMCategoryBar.cpp" /> <ClCompile Include="MSMCategoryBarButton.cpp" /> <ClCompile Include="MSMDialog.cpp" /> <ClCompile Include="MSMLinksBar.cpp" /> <ClCompile Include="MSMLinksBarButton.cpp" /> <ClCompile Include="MSMMenuBar.cpp" /> <ClCompile Include="MSMoneyDemo.cpp" /> <ClCompile Include="MSMoneyDemoDoc.cpp" /> <ClCompile Include="MSMoneyDemoView.cpp" /> <ClCompile Include="MSMTasksPane.cpp" /> <ClCompile Include="MSMToolBar.cpp" /> <ClCompile Include="MSMVisualManager.cpp" /> <ClCompile Include="stdafx.cpp"> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Debug|X64'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Release|X64'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Debug Static|Win32'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Debug Static|X64'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Release Static|Win32'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Release 
Static|X64'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Unicode Debug|Win32'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Unicode Debug|X64'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Unicode Debug Static|Win32'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Unicode Debug Static|X64'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Unicode Release|Win32'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Unicode Release|X64'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Unicode Release Static|Win32'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Unicode Release Static|X64'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Debug CLR|Win32'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Debug CLR|X64'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Release CLR|Win32'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Release CLR|X64'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Unicode Debug CLR|Win32'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Unicode Debug CLR|X64'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Unicode Release CLR|Win32'">Create</PrecompiledHeader> <PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Unicode Release CLR|X64'">Create</PrecompiledHeader> </ClCompile> </ItemGroup> <ItemGroup> <ClInclude Include="MainFrm.h" /> <ClInclude Include="MSMCaptionBar.h" /> <ClInclude 
Include="MSMCaptionBarButton.h" /> <ClInclude Include="MSMCategoryBar.h" /> <ClInclude Include="MSMCategoryBarButton.h" /> <ClInclude Include="MSMDialog.h" /> <ClInclude Include="MSMLinksBar.h" /> <ClInclude Include="MSMLinksBarButton.h" /> <ClInclude Include="MSMMenuBar.h" /> <ClInclude Include="MSMoneyDemo.h" /> <ClInclude Include="MSMoneyDemoDoc.h" /> <ClInclude Include="MSMoneyDemoView.h" /> <ClInclude Include="MSMTasksPane.h" /> <ClInclude Include="MSMToolBar.h" /> <ClInclude Include="MSMVisualManager.h" /> <ClInclude Include="Resource.h" /> <ClInclude Include="stdafx.h" /> <ClInclude Include="targetver.h" /> </ItemGroup> <ItemGroup> <None Include="ReadMe.txt" /> <None Include="res\MSMoneyDemo.ico" /> <None Include="res\MSMoneyDemo.rc2" /> <None Include="res\MSMoneyDemoDoc.ico" /> <None Include="res\Toolbar.bmp" /> </ItemGroup> <ItemGroup> <ResourceCompile Include="MSMoneyDemo.rc" /> </ItemGroup> <Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" /> <ImportGroup Label="ExtensionTargets"> </ImportGroup> </Project>
Business strategies in subacute care. To increase market share in a managed care environment, subacute care providers must confront several issues: business strategies, information technology and human resource management. The backbone of the subacute care facility is its human resources, its link to the world is information technology and its foundation is the business plan.
Nucleotide analogues as probes for DNA polymerases. Transmission of the genetic information from the parental DNA strand to the offspring is crucial for the survival of any living species. In nature, all DNA synthesis in DNA replication, recombination and repair is catalyzed by DNA polymerases and depends on their ability to select the canonical nucleobase pair from a pool of structurally similar building blocks. Recently, a wealth of valuable new insights into DNA polymerase mechanisms has been gained through the application of carefully designed synthetic nucleotides and oligonucleotides in functional enzyme studies. The applied analogues exhibit features that differ in certain aspects from their natural counterparts and thus allow investigation of the contribution of a particular chosen aspect to the entire complex enzyme mechanism. This review focuses on a depiction of the efforts that have been undertaken towards the development of nucleotide analogues with carefully altered properties. The different approaches are discussed in the context of the motivation and the problem under investigation.
Introduction {#Sec1} ============ Avian influenza H9N2 is a low pathogenic avian influenza virus (LPAIV) which continues to cause respiratory disease, drops in egg production and increased mortality among commercial domestic poultry and wild birds in many countries \[[@CR1]-[@CR3]\]. H9N2 viruses have become endemic in poultry in many Eurasian countries, particularly in some Asian and Middle Eastern countries \[[@CR4]\]. Molecular genetic analyses of H9N2 viruses isolated during the last two decades revealed that these viruses are rapidly evolving and form a genetically diverse population \[[@CR5]\]. Furthermore, H9N2 viruses have reassorted with other avian influenza subtypes to generate multiple novel subtypes \[[@CR6]-[@CR10]\]. Additionally, their extensive species tropism, wide distribution and ability to donate internal genes to the highly pathogenic H5 and H7 subtypes \[[@CR11]-[@CR14]\] evoke particular concern. The H9 viruses are now also considered potential pandemic threats, as they have acquired human virus-like receptor specificity \[[@CR15]\] and can be transmitted directly from birds to humans \[[@CR16]\]. Two distinct lineages of H9N2 influenza viruses, the North American lineage and the Eurasian lineage, have been defined. The Eurasian H9N2 viruses are further grouped into three sub-lineages, G1, Y280 and Kr-p96323, based on their antigenic and genetic properties \[[@CR17]\]. The G1-H9N2 viruses are widespread and more likely to affect commercial poultry flocks with moderate clinical signs, whereas Y280 and Kr-p96323 apparently circulate in natural reservoir hosts and are more prevalent in their respective areas of origin. Multiple clades of H9N2 viruses, such as G1 and Y280, have been circulating together in China \[[@CR18]\]. Additionally, the co-circulation of H9N2 with other subtypes, especially H5N1, in many countries has raised the possibility of multiple reassorted viruses. 
Moreover, human infections with H9N2 viruses have been observed, some of them belonging to the G1 sub-lineage \[[@CR13],[@CR19],[@CR20]\]. Recently, the biological properties and risk profiles of different clades of H9N2 viruses were assessed in animal models \[[@CR5]\]. Assessing the fitness of distinct clades of H9N2 showed that the North American H9N2 virus had the lowest risk profile, while the Eurasian viruses displayed various levels of fitness across individual assays \[[@CR5]\]. Due to the continuous outbreaks of H9N2 in several of the aforementioned countries, the development of laboratory techniques for efficient isolation and detection in surveillance samples continues to be of high priority. Upon receipt of a field sample, virus propagation and isolation are important for the recovery and production of a viable viral stock for further laboratory use. The embryonated chicken egg (ECE) and cell culture systems are generally the basic choice for influenza virus cultivation. ECEs are considered the gold standard method of isolation, as they are able to support the growth of a large spectrum of AIVs and their subtypes. The advantage of the ECE system is the possibility of acquiring a large volume of viral stock from a single egg \[[@CR21]\]; thus, influenza vaccines have traditionally been prepared in ECEs \[[@CR22]\]. Effective AIV isolation can also be performed in the cell culture system. Madin-Darby canine kidney (MDCK) cells are the cell line of choice for AIV propagation and are also recommended by the World Health Organization \[[@CR23]\]. After successful propagation, another important step is virus quantification, which presents a rate-limiting step at many stages of vaccine development and production, for both egg and cell culture. Currently, one of the most widely used tools for the determination of virus concentration is the viral plaque assay, or variations such as the tissue or egg culture infectious dose (TCID~50~/EID~50~). 
The viral plaque assay or TCID~50~/EID~50~ is a subjective, traditional biological technique that was originally applied to the quantification of viruses in the early 1950s \[[@CR24]\]. For influenza viruses, the hemagglutination (HA) assay is also widely applied. The HA assay is rapid, requires the use of animal red blood cells, and yields an HA titer value that is not readily translated into viruses per mL. Another currently used method for virus quantification is quantitative real time PCR (qRT-PCR). Although the qRT-PCR method does not take into account the infectious properties of the virus and also detects defective virus particles, it is widely used by many researchers. In this study, four H9N2 isolates from different sources (three G1-H9N2 and one European wild bird H9N2) were propagated separately in two recommended effective biological systems. Virus quantification was carried out based on TCID~50~, HA as well as qRT-PCR simultaneously for all the viruses grown in both systems. The morphological changes of the embryos and cells at different time points during propagation were observed. Furthermore, the genetic evolution of the viruses in a single replication cycle in cell culture was analyzed, and all four H9N2 viruses were checked for amantadine sensitivity. This study pursued the following objectives: i) replication efficiency of H9N2 viruses of different origins; ii) growth kinetics of the viruses in two different biological systems; iii) correlation between different viral quantification methods; and iv) amantadine sensitivity and adaptive mutations of the viruses. Materials and methods {#Sec2} ===================== Viruses and cells {#Sec3} ----------------- Four Eurasian lineage H9N2 viruses: A/chicken/Bangladesh/VP01/2006 (BVP01), A/turkey/Germany/R869/2012 (GR869), A/chicken/Saudi Arabia/R61/2002 (SAR61) and A/chicken/Dubai/F5/2013 (DF5) were used in this study. 
Genetically, BVP01 \[[@CR14]\], SAR61 and DF5 \[[@CR25]\] belong to the G1 lineage, whereas GR869 belongs to a Eurasian wild bird group (unpublished). The sequence accession numbers and detailed molecular analyses are available in recently published articles \[[@CR14],[@CR25]\]. Madin-Darby canine kidney cells (both the parental MDCK \[ATCC® CCL-34™\] and its clone MDCK-II \[ATCC® CRL-2936™\]) were used for efficient propagation of the above mentioned viruses. Virus propagation in ECE {#Sec4} ------------------------ Specific pathogen free chicken eggs (VALO BioMedia GmbH, Germany) were used for the ECE propagation system. First, the H9N2 viruses were inoculated blindly via the allantoic cavity route into 10-day-old ECEs and incubated at 37 °C. Allantoic fluids (AFs) were harvested upon the death of the embryo or at 72 h post inoculation (hpi). The presence of virus was confirmed by HA assay and the fluids were subjected to titration. Viral EID~50~ titers were determined by injecting 100 μL of 10-fold dilutions of the virus into the allantoic cavities of 10-day-old eggs. For each dilution, four eggs were used for accurate calculation of the titer. The 50% end points were calculated according to the method of Reed and Muench \[[@CR26]\] for the 50% egg infectious dose (EID~50~) and are expressed in log~10~ EID~50~/mL. The virus stock, with a titer of 7.5 log~10~ EID~50~/mL, was further inoculated into embryonated chicken eggs. Three eggs at each incubation period of 2, 8, 16, 24, 32, 40, 48, 56 and 72 h were selected and AFs were harvested for the assessment of replication kinetics. Virus propagation in cell culture {#Sec5} --------------------------------- Confluent monolayers of MDCK and MDCK-II cells were maintained in cell growth medium (CGM) consisting of Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal calf serum (FCS), 1% Na-pyruvate and 1% non-essential amino acids (NEAA). 
Cells were cultured in T-75 cm^2^ flasks and incubated at 37 °C in the presence of 5% CO~2~. For virus inoculation, confluent monolayers were maintained in T-25 cm^2^ flasks as well as in 6-well and 96-well microplates for different purposes. The inoculum was prepared by diluting the virus in growth medium (GM) \[CGM without FCS and supplemented with TPCK-trypsin (2 μg/mL)\] at a multiplicity of infection (moi) of 0.2 and inoculated onto the monolayer of cells. Infected cells were incubated at 37 °C for 1 h to allow viral adsorption. Afterwards, the inoculum was removed and GM was added to the monolayer. The supernatants were harvested at 2, 8, 16, 24, 32, 40, 48, 56 and 64 hpi and stored at −80 °C for virus titration. Assessment of viral replication kinetics {#Sec6} ---------------------------------------- Viral replication kinetics was monitored for infectious particles by the tissue culture infectious dose (TCID~50~) assay and the HA assay, and for viral particles by qRT-PCR targeting the matrix (M) gene. The influence of H9N2 viruses on cellular morphology was assessed using immunofluorescence. ### Hemagglutination assay {#Sec7} The HA assay was performed using 1% washed chicken red blood cells (RBCs) prepared in PBS according to the OIE manual \[[@CR27]\]. The harvested AFs and cell culture supernatants (CCSs) at selected time points were tested for hemagglutinating activity. The titers were recorded to draw the virus replication kinetics and were expressed in hemagglutination units (HAU). Briefly, 25 μL of undiluted AF/CCS were added to the first wells of 96-well V-bottom plates containing 25 μL/well of PBS. Serial two-fold dilutions were performed, followed by addition of 25 μL PBS to each well. Subsequently, 25 μL of 1% washed chicken RBCs were added to each well and the plates were incubated for 45 min at room temperature (RT). The titer was read as the highest dilution giving complete agglutination and presented as log~2~ HAU/25 μL. 
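The two-fold endpoint reading above maps directly to a log~2~ titer: the titer is the index of the last well in an unbroken run of complete agglutination. A minimal sketch (the helper name and the boolean well encoding are hypothetical, not part of the assay protocol):

```python
def ha_titer_log2(agglutination):
    """Return the HA titer (log2 HAU) from one serial two-fold dilution row.

    `agglutination` lists one boolean per well, ordered from the first
    to the last dilution step; the titer is the last step of the initial
    unbroken run of complete agglutination (0 if the first well is negative).
    """
    titer = 0
    for step, complete in enumerate(agglutination, start=1):
        if complete:
            titer = step
        else:
            break  # agglutination must be continuous from the first well
    return titer

# A row with complete agglutination through the 9th two-fold dilution
# corresponds to the 9 log2 (512 HAU) peak titers reported below.
row = [True] * 9 + [False] * 3
print(ha_titer_log2(row))  # -> 9
```

Wells past the first negative are ignored deliberately: a positive well after a negative one indicates a pipetting or reading error rather than a higher titer.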
### TCID~50~ assay {#Sec8} To determine the infectivity titer, 2 × 10^4^ cells were seeded in 96-well microplates. The harvested virus from each incubation period of both propagation systems was subjected to 10-fold serial dilutions. Six replicates of 100 μL diluted inoculum were transferred to the monolayers of MDCK-II cells and allowed to adsorb at 37 °C for 1 h. The inoculum was discarded and 150 μL of GM was added to all wells containing monolayers. The cells were incubated at 37 °C in a 5% CO~2~ incubator for 32 h. The plates were observed for cytopathic effect (CPE), and the presence of virus down to a given dilution was confirmed by immunofluorescence microscopy. The viral titer was calculated as log~10~ TCID~50~/mL by the Spearman-Kärber method \[[@CR28],[@CR29]\]. ### Immunofluorescence {#Sec9} The 96-well microplates containing MDCK-II cells were inoculated with the virus and incubated for different time periods. Following incubation, cells were washed three times with PBS, fixed with 3.7% formaldehyde for 10 min at RT and permeabilized with 90% ice cold methanol for 15 min. The plates were washed twice with PBS, followed by addition of 3% bovine serum albumin (BSA) in PBS as a blocking solution, and incubated at RT for 30 min. Plates were washed again twice and treated with an influenza rabbit anti-nucleoprotein (NP) polyclonal primary antibody (PA5-32242, Thermo Scientific, Germany) at a dilution of 1:5000. The treated plates were incubated at 37 °C for 1 h and washed twice with PBS. Secondary antibody (Alexa Fluor® 488 Goat Anti-Rabbit IgG, Life Technologies, Germany) at a 1:1000 dilution was added and the plates were incubated again for 1 h at 37 °C. A final wash was performed twice and the plates were left to dry at RT. To investigate the morphological changes of the cell nucleus, an additional stain, 4′,6-diamidino-2-phenylindole (DAPI), was used. The plates were observed under a fluorescence microscope (Olympus IX70) and images were captured for analysis. 
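The Spearman-Kärber endpoint estimate used for the TCID~50~ titers above can be sketched as follows. This is an illustrative implementation under the usual textbook assumptions (a ten-fold dilution series bracketing the 50% endpoint, equal replicates per dilution, 100 μL inoculated per well); it is not claimed to be the exact computation performed in the study:

```python
import math

def spearman_karber_log10_tcid50(prop_positive, first_neglog_dilution=1.0,
                                 log_step=1.0, inoculum_ml=0.1):
    """Spearman-Karber estimate of the 50% infectious endpoint.

    `prop_positive[i]` is the fraction of infected replicate wells at the
    i-th dilution (e.g. 3 of 6 wells -> 0.5); the series must run from
    fully positive (1.0) down to fully negative (0.0).  Returns the titer
    as log10 TCID50 per mL for the given inoculum volume.
    """
    if prop_positive[0] < 1.0 or prop_positive[-1] > 0.0:
        raise ValueError("series must span 100% to 0% infected wells")
    # negative log10 of the 50% endpoint dilution, per inoculum volume
    neglog50 = (first_neglog_dilution - log_step / 2
                + log_step * sum(prop_positive))
    # convert from per-inoculum (here 100 uL) to per mL
    return neglog50 - math.log10(inoculum_ml)

# 6/6, 6/6, 6/6, 3/6 and 0/6 positive wells at 10^-1 ... 10^-5:
print(spearman_karber_log10_tcid50([1.0, 1.0, 1.0, 0.5, 0.0]))  # -> 5.0
```

In this example the 50% endpoint falls exactly at the 10^-4^ dilution (half the wells infected), so 100 μL of that dilution contains one TCID~50~ and the stock titer is 5.0 log~10~ TCID~50~/mL.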
### Quantitative real time PCR (qRT-PCR) {#Sec10} RNA was isolated from both AFs and CCSs using the QIAamp Viral RNA Mini Kit (QIAGEN, Hilden, Germany) without the carrier RNA provided by the kit, and reverse transcribed under the standard conditions of RevertAid Reverse Transcriptase (Thermo Scientific, Germany) using the Uni 12 primer \[[@CR30]\]. Diluted cDNA (1:10) was used for the PCR reaction. A unique primer pair was designed to amplify the target sequence of the M gene of AIV type A, generating a product of 150 bp (primer sequences are available on request). The qRT-PCR reactions were carried out according to the manufacturer's instructions using the Rotor-Gene SYBR Green PCR master mix (QIAGEN, Hilden, Germany). A pJET 1.2 BVP/H9N2-M plasmid was used as a positive control to develop a standard curve. The genome copy number was calculated based on the standard curve and expressed as log~10~ genome copies/reaction. Adaptive mutations (HA, NA and NS genes) {#Sec11} ---------------------------------------- To determine the viral adaptive mutations in cell culture, the hemagglutinin (HA), neuraminidase (NA), non-structural (NS) and polymerase basic (PB1 & PB2) genes were sequenced and analyzed after a single passage. Full length standard RT-PCR was performed and the PCR products were purified using the GeneJET Gel Extraction Kit (Thermo Scientific, Germany). The purified products were subjected to direct nucleotide sequencing using the Rhodamine Dye-Terminator Cycle Sequencing Ready Reaction Kit (BigDye® Terminator v1.1; Applied Biosystems), followed by analysis in an ABI PRISM™ 310 Genetic Analyzer (Applied Biosystems). The sequence data were edited, aligned and analyzed using three software packages (DNASTAR, BioEdit and MEGA). Screening of amantadine sensitivity {#Sec12} ----------------------------------- The sensitivity of the studied H9N2 viruses to amantadine was assayed in MDCK-II cells according to a previously described method with modifications \[[@CR31]\]. 
Briefly, 10^4^ TCID~50~ of the studied H9N2 influenza viruses were tested against concentrations of 0, 2.5 and 7.5 μg/mL amantadine (Sigma Aldrich, Steinheim, Germany) in 96-well microtiter plates by checkerboard titration. The MDCK-II monolayer was washed and pre-incubated with each drug concentration in GM for 30 min. In a parallel plate, each titer of the virus isolate was incubated with the two different drug concentrations in GM for 1 h at RT. The drug-containing medium in the pre-incubated cell culture plate was replaced by the drug-treated virus solution from the parallel plate and further incubated for 1 h at 37 °C. Plates were washed and GM containing the desired concentration of the drug was added. Plates were incubated at 37 °C for 24 h. Following the removal of medium, plates were washed with PBS and fixed with 90% methanol overnight at −20 °C. Subsequently, endogenous peroxidase activity was blocked by incubation with 3% (v/v) H~2~O~2~ for 1 h, followed by blocking in PBS containing 0.05% Tween-20 and 3% bovine serum albumin. Primary NP antibody was added and incubated at RT for 1 h. After washing with PBS containing 0.05% Tween-20, an HRP-conjugated anti-rabbit IgG antibody (Thermo Scientific, Germany) at a 1:1000 dilution was added and the plates were incubated for 1 h at RT. Freshly prepared substrate (Dako, Denmark) was added and incubated for 20 min at RT in the dark. Stop solution (H~2~SO~4~, 2.5 M) was added and the optical density (OD) of the wells was read at 450/650 nm. The mean OD of the infected cultures without drug was taken as the total amount of NP protein (100%). Cultures infected with an amantadine-sensitive virus should produce less than 50% of the total NP protein. Results {#Sec13} ======= Comparison of hemagglutination (HA) titer {#Sec14} ----------------------------------------- The HA titer was calculated from the harvested AFs and CCSs of ECE-grown and cell culture-grown viruses, respectively. 
The tested H9N2 viruses showed no differences in titers between MDCK and MDCK-II cells; thus, only a representative diagram for MDCK-II is shown (Figure [1B](#Fig1){ref-type="fig"}). Most viruses started yielding HAU as early as 16 hpi and reached maximum yield within 32 to 48 hpi, which persisted until 64 to 72 hpi (Figure [1](#Fig1){ref-type="fig"}). In the ECE system, the currently circulating G1 lineage viruses BVP01, DF5 and SAR61 showed higher yields than the European wild type GR869. The highest titers reached by BVP01, DF5, SAR61 and GR869 were 9 log~2~ (512), 9 log~2~ (512), 8 log~2~ (256) and 5 log~2~ (32), respectively (Figure [1A](#Fig1){ref-type="fig"}).

Figure 1 **H9N2 growth curve based on hemagglutination unit titer.** Comparison of the hemagglutination unit titer of the four studied H9N2 viruses grown in embryonated chicken eggs (**A**) and MDCK-II cells (**B**) at different time points.

In the cell culture propagation system, on the other hand, the highest titers recorded for BVP01, DF5, SAR61 and GR869 were 7 log~2~ (128), 7 log~2~ (128), 5 log~2~ (32) and 3 log~2~ (8), respectively (Figure [1B](#Fig1){ref-type="fig"}). The ECE-grown viruses produced a two-fold higher virus yield than the cell culture-grown viruses.

Replication kinetics based on TCID~50~ {#Sec15}
--------------------------------------

TCID~50~ titration was performed in MDCK-II cells to quantify virus infectivity and to compare kinetics among the H9N2 viruses grown in both systems. The ECE- and cell culture-grown H9N2 viruses showed detectable titers at 16 hpi and reached maximum titers at 32 hpi (Figure [2](#Fig2){ref-type="fig"}). The infectivity titers of the ECE-grown H9N2 viruses varied from 3.5 log~10~ TCID~50~/mL to 7.8 log~10~ TCID~50~/mL.
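This excerpt does not state which endpoint estimator was used for the TCID~50~ titers; a common choice is the Spearman-Kärber method, sketched below with invented well scores (10-fold dilutions, 100 μL inoculum per well assumed):

```python
import math

def spearman_karber_endpoint(neg_log_first_dilution, log_step, proportions):
    """50% endpoint (-log10 dilution) by Spearman-Karber.
    `proportions` lists the fraction of wells scored positive (by CPE and/or
    NP immunofluorescence) at each dilution, least dilute first; the first
    entry should be 1.0 for the formula to apply."""
    return neg_log_first_dilution + log_step * (sum(proportions) - 0.5)

def log10_tcid50_per_ml(endpoint, inoculum_volume_ml):
    # Convert the per-well endpoint to a titer per mL of inoculum.
    return endpoint + math.log10(1.0 / inoculum_volume_ml)

# Hypothetical scoring across dilutions 10^-1 .. 10^-8
props = [1.0, 1.0, 1.0, 1.0, 0.75, 0.25, 0.0, 0.0]
endpoint = spearman_karber_endpoint(1, 1, props)   # 5.5
titer = log10_tcid50_per_ml(endpoint, 0.1)         # 6.5 log10 TCID50/mL
```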
The ECE-grown BVP01 and DF5 viruses reached the highest yields (\~8 log~10~ TCID~50~/mL), followed by SAR61 (\~7 log~10~ TCID~50~/mL) and GR869 (\~5 log~10~ TCID~50~/mL) (Figure [2A](#Fig2){ref-type="fig"}).

Figure 2 **H9N2 growth curve based on TCID** ~**50**~ **titer.** Replication kinetics of the four studied H9N2 viruses grown in embryonated chicken eggs (**A**) and MDCK-II cells (**B**) at selected time points.

The infectivity titers of the cell culture-grown H9N2 strains varied from 2.3 log~10~ TCID~50~/mL to 4.9 log~10~ TCID~50~/mL. The cell culture-grown BVP01, SAR61 and DF5 viruses reached maximum yields of \>4 log~10~ TCID~50~/mL, in contrast to GR869 (2.8 log~10~ TCID~50~/mL) (Figure [2B](#Fig2){ref-type="fig"}). Thus, BVP01, SAR61 and DF5 achieved higher maximum titers than GR869 in both biological systems, and all ECE-grown viruses showed higher virus yields than the cell culture-grown viruses.

Comparison of viral genome copy number {#Sec16}
--------------------------------------

Both the ECE and MDCK-II systems supported efficient viral replication, with no significant differences in genome copy number based on the M gene (Figure [3](#Fig3){ref-type="fig"}). The ECE-grown BVP01 (3 log~10~ genome copies/reaction), DF5 (3 log~10~ genome copies/reaction) and SAR61 (\~2 log~10~ genome copies/reaction) viruses showed detectable copy numbers as early as 2 hpi. The maximum copy number (\~7 log~10~ genome copies/reaction) was reached at 16 hpi for BVP01 and DF5 (Figure [3A](#Fig3){ref-type="fig"}), whereas the maximum for SAR61 (\~5.5 log~10~ genome copies/reaction) was detected at 48 hpi.
The ECE-grown European wild type GR869 showed detectable copy numbers from 8 hpi (1 log~10~ genome copies/reaction) and reached its maximum (\~5 log~10~ genome copies/reaction) at 48 hpi (Figure [3A](#Fig3){ref-type="fig"}).

Figure 3 **H9N2 growth curve based on M genome copies.** Comparison of the four studied H9N2 viruses grown in embryonated chicken eggs (**A**) and MDCK-II cells (**B**) based on the measurement of M genome copies at different time points.

In the cell culture system, on the other hand, the BVP01, DF5, SAR61 and GR869 viruses exhibited detectable copy numbers from 8 hpi. The maximum copy number of BVP01 and DF5 was \>6 log~10~ genome copies/reaction, detected at 48 hpi, whereas SAR61 reached a maximum of \~5 log~10~ genome copies/reaction, also at 48 hpi, and GR869 a maximum of \~4.3 log~10~ genome copies/reaction at 56 hpi (Figure [3B](#Fig3){ref-type="fig"}).

Morphology and progression of infection {#Sec17}
---------------------------------------

The general morphology and spread of ECE- and cell culture-grown H9N2 viruses were monitored at the specified time points. In ECE propagation, most embryos remained alive until the last time point of the experiment; however, some embryos infected with the same virus and dose showed nonspecific mortality as early as 32 hpi, which might be due to adverse viral influence on the embryos. In the cell culture system, both MDCK and MDCK-II cell lines showed similar morphology during infection; therefore, only the MDCK-II-derived growth morphology is shown. First, one selected virus (BVP01) was examined at 32 hpi in MDCK-II cells to validate the antibody and the selected stains and to establish a suitable control (Figure [4](#Fig4){ref-type="fig"}).
During the growth kinetics, the viruses showed CPE under the inverted microscope from 16 hpi, although viral replication was already detectable at 8 hpi by IF staining against the nucleoprotein (NP) of AIV (Figure [5](#Fig5){ref-type="fig"}). CPE was more extensive at 48 to 64 hpi, and \>90% of cells had detached by 72 hpi. Additionally, in the TCID~50~ titration, CPE confirmed the presence of virus only up to the 2^nd^ dilution, whereas IF staining showed that virus was actually present up to the 5^th^ dilution. Thus, the TCID~50~ titers were determined not only from CPE but also from the highest dilution at which NP protein was recognized by IF staining. The progression of viral infection in MDCK-II cells varied in morphology at different time points (Figure [5](#Fig5){ref-type="fig"}). Influenza NP protein was detected by IF at all studied time points, suggesting high permissiveness and efficient spread of the virus infection in the cells. At the early time points of infection the viral NP protein was mostly detected in the nucleus and later disseminated to the cytoplasm. Significant damage and loss of nuclear structure of the cells at later time points was clearly observed by DAPI staining alone. At 64 hpi, very few cells remained attached to the culture plates.

Figure 4 **Detection of influenza virus H9N2 in infected cells.** Confirmation of the presence of influenza BVP01/H9N2 virus at 32 hpi in MDCK-II cells observed by immunofluorescence assay (magnification 20×) using influenza A anti-NP antibodies.
**A**: Infected cells positive for NP protein; **B**: Mock control negative for NP staining; **C**: Merge of NP and DAPI staining on infected cells; and **D**: Mock control positive for DAPI staining.

Figure 5 **Morphological changes of infected cells.** Progression of infection and viral replication in MDCK-II cells at different time points of BVP01/H9N2 virus infection, observed by immunofluorescence assay (magnification 20×) using influenza A anti-NP antibodies. I: Infected cells positive for NP protein; M: Mock control negative for NP staining.

Sequence analysis and mutation {#Sec18}
------------------------------

The HA, NA, NS, PB1 and PB2 genes play important roles in host-virus interactions and virulence. Thus, the respective gene sequences of the cell culture-grown viruses were compared to the original sequences of the stock virus (AF stock passage) used as inoculum. The analysis of the HA, NA and NS genes revealed 2-6 nucleotide mutations resulting in 1-2 amino acid substitutions, depending on the respective gene (Table [1](#Tab1){ref-type="table"}), whereas the polymerase genes remained identical to the inoculum. Importantly, none of these nucleotide or amino acid mutations was observed in the conserved regions, where they could have altered pathogenesis or virulence. No amino acid changes were observed at the HA cleavage site, the receptor binding site or the N-glycosylation sites of the HA protein. All of the viruses retained their respective motif in the PDZ domain at the C-terminal end of the NS1 protein, as in the inoculum.

Table 1 Sequence analysis after a single passage of the viruses in MDCK-II compared with the sequence from the AF stock inoculum

| Viruses | HA nt | HA aa | NA nt | NA aa | NS nt | NS aa |
|---------|-------|-------|-------|-------|-------|-------|
| BVP01 | 6 | G270R | 1 | no | 1 | NS1: S27L, T279I; NS2: no |
| GR869 | 3 | no | no | no | 1 | NS1: Q142E; NS2: no |
| SAR61 | 2 | no | no | no | 1 | NS1: no; NS2: R77K |
| DF5 | 5 | no | 1 | no | no | NS1: no; NS2: no |

nt: Nucleotide, aa: Amino acid, no: No substitutions.
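Substitution calls like those in Table 1 come from comparing each passaged sequence with the inoculum sequence position by position. A minimal sketch at the protein level; the sequences below are invented for illustration:

```python
def aa_substitutions(ref, passaged):
    """List substitutions between two aligned, equal-length protein sequences
    in the conventional <ref residue><1-based position><new residue> notation."""
    if len(ref) != len(passaged):
        raise ValueError('sequences must be aligned to equal length')
    return ['%s%d%s' % (r, i + 1, p)
            for i, (r, p) in enumerate(zip(ref, passaged)) if r != p]

inoculum = 'MDSNTVSSFQV'
passage1 = 'MDANTVSSFQV'                      # one change at position 3
subs = aa_substitutions(inoculum, passage1)   # ['S3A']
```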
Amantadine sensitivity {#Sec19}
----------------------

The drug sensitivity of all four studied H9N2 viruses was assessed using in situ ELISA in MDCK-II cells. All tested viruses were sensitive to amantadine: at 2.5 μg/mL or 7.5 μg/mL, amantadine inhibited the growth of all viruses at titers of 4 log~10~ TCID~50~/mL. The presence of NP at an amantadine concentration of 2.5 μg/mL was 45%, 34%, 26% and 55% for BVP01, DF5, GR869 and SAR61, respectively. At 7.5 μg/mL, the corresponding values were 28%, 34%, 20% and 45%.

Discussion {#Sec20}
==========

Efficient isolation and propagation of influenza viruses is important for epidemiological surveillance, the study of host-pathogen interactions, diagnosis and vaccine production. Successful and efficient propagation of H9N2 virus in the ECE and cell culture systems depends on the viral dose or moi, the molecular genetic properties of the virus, the receptor binding properties of the host cell and other virus- or host-related factors \[[@CR32]-[@CR34]\]. The main focus of this study was to evaluate efficient H9N2 virus propagation in two different biological systems. Thus, H9N2 viruses from four different geographical sources were propagated in ECEs as well as in MDCK and MDCK-II cell lines. Analyses of the replication kinetics, the correlation between the virus quantification methods, virus adaptation to cell culture and the sensitivity of the H9N2 viruses to amantadine were performed. The replication of influenza virus in a host cell is a polygenic process depending on the host cell endocytic pathways for entry and transfer of the viral genome as well as on activation of host cell signaling \[[@CR35],[@CR36]\]. Avian influenza viruses preferentially bind to terminal α-2,3 SA linkages, whereas human influenza viruses bind to α-2,6 SA. The allantoic cells of ECEs contain α-2,3 SA Gal, while the amniotic cells of ECEs and MDCK cells contain both linkages \[[@CR37]\].
Therefore, the allantoic cavities are considered the preferential sites for avian influenza viruses. In this study, four H9N2 viruses were propagated and their replication kinetics were measured based on HA titer, TCID~50~ titer and qRT-PCR for genome (M) copies. The replication ability of the four virus strains was assessed separately in both systems. The four H9N2 viruses were found to grow efficiently in ECEs and in the cell lines. In the cell culture system, both the parental MDCK and the cloned MDCK-II cells were used; however, both cell lines showed the same morphology and pattern of replication kinetics, so only the MDCK-II yields are shown in this study. The cell culture-grown viruses exhibited 2-3 fold lower virus titers than the ECE-grown viruses, although in genome copy number the difference was only 1-1.5 fold. It should be noted that the qRT-PCR method does not account for the infectious properties of the virus and can also detect defective virus particles. Among the four studied viruses, the G1 H9N2 isolates (BVP01, DF5 and SAR61) achieved higher maximum virus yields than the European wild bird isolate (GR869) based on HA titer, TCID~50~ titer and genome copy count. Generally, all the analyzed strains grew well in both propagation systems; however, the maximum virus yields and the earliest time points at which the highest titers were reached varied with the propagation system as well as with the individual virus. The ECE system allowed rapid replication and yielded maximum titers within 16-32 h, whereas cell culture supported relatively slow replication and yielded maximum titers after 48 h. In addition, the choice of moi is an important factor for virus replication \[[@CR34]\]. The moi of 0.2 used in this study for cell culture was found to be a relatively high inoculum for generating maximum yields at later time points. More than 80% of cells were found detached in both cell lines after 64 hpi.
Thus, the ECE-propagated viruses reached relatively higher virus titers than the cell culture-propagated viruses at the respective time points. Therefore, the ECE may be considered the better system for primary virus isolation from field samples as well as for virological surveillance studies. Specific receptor binding properties of embryos also facilitate better replication of avian-origin influenza viruses compared to cell lines \[[@CR38]\]. The ECE and cell culture are completely different biological systems possessing different host-related factors involved in efficient viral propagation and in the variation of viral replication kinetics. The G1 lineage H9N2 viruses are adapted to poultry and produce more severe infections than the other representative lineages of H9N2. Thus, the genetic background of the studied H9N2 viruses was also considered in interpreting the different replication kinetics in the two propagation systems. Genetically, BVP01, DF5 and SAR61 belong to the G1 lineage (not published), while GR869 is a Eurasian wild-type reassortant virus (not published) isolated from turkey, which was not adapted to domestic poultry flocks. Moreover, the genetic variation at the HA cleavage motif, the HA receptor binding site (RBS) and the C-terminal domain of the NS1 protein (Table [2](#Tab2){ref-type="table"}) of the four studied viruses is thought to be involved in the different replication patterns and virus particle counts obtained from the two propagation systems. The binding of the virus to the host cell is determined by two factors: the RBS affinity of the virus and the receptor density on the host cell surface \[[@CR39]\]. The RBS motif of the HA protein of BVP01 and DF5 revealed an important Q234L (glutamine to leucine) mutation, like the HK-G1 strain.
However, the SAR61 and GR869 viruses contain Q at position 234 (H3 numbering: 226), which is a typical avian virus signature. The presence of this residue has been reported to result in a preference for binding to α-2,3-linked sialic acid (avian receptors), whereas viruses carrying L234 (H9 numbering) show a preference for α-2,6-linked sialic acid (human receptors) in addition to avian receptors, a potential cause of the reported human infections \[[@CR40]\].

Table 2 Molecular genetic background of genome regions important for viral replication

| Virus | HA cleavage site | Right pocket 146-150 | Left pocket 232-237 | NA stalk deletion | NS1-C domain | M2 blocker (26, 27, 30, 31, 34) |
|-------|------------------|----------------------|---------------------|-------------------|--------------|---------------------------------|
| A/chicken/Bangladesh/VP01/2006 | PAKSSR\*GLF | GTSKS | NGLIGR | No | KSEV | LVASG |
| A/turkey/Germany/R869/2012 | PAASGR\*GLF | GTSKA | NGQQGR | No | ESEV | LVASG |
| A/chicken/Saudi Arabia/R61/2002 | PARSSR\*GLF | GTSKS | NGQQGR | No | EPEV | LVASG |
| A/chicken/Dubai/F5/2013 | HARSSR\*GLF | GTSKS | NGLIGR | No | GSEV | LVANG |

Being a member of a separate clade with a different HA cleavage motif, RBS and NS1 C-terminal domain (Table [2](#Tab2){ref-type="table"}), the GR869 virus showed a comparatively lower replication profile than the other three G1 H9N2 strains. Within the three studied G1 H9N2 viruses, BVP01 and DF5 produced somewhat similar kinetics with a higher replication profile, in contrast to SAR61, which showed lower kinetics. Epidemiologically, the BVP01 and DF5 viruses are known to induce clinical signs and cause mortality in commercial poultry flocks, which may explain why they yielded the highest titers in all the virus quantification methods used in this study, in both propagation systems. Amantadine blocks the ion channel formed by the M2 protein and inhibits an early step of replication. Amino acid substitutions such as L26F, V27A, A30T, S31N and G34E in the M2 protein are known to confer resistance to amantadine \[[@CR41],[@CR42]\].
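Checking for the resistance markers above reduces to inspecting five diagnostic M2 residues. A minimal sketch; the sequences are dummies padded to length 40 (real M2 is 97 residues), with 1-based residue numbering:

```python
# Known amantadine-resistance substitutions in M2 and the wild-type residues
RESISTANCE = {26: 'F', 27: 'A', 30: 'T', 31: 'N', 34: 'E'}
WILD_TYPE = {26: 'L', 27: 'V', 30: 'A', 31: 'S', 34: 'G'}

def scan_m2(m2_seq):
    """Return the resistance substitutions present in an M2 protein sequence."""
    hits = []
    for pos, res in RESISTANCE.items():
        if m2_seq[pos - 1] == res:  # 1-based residue numbering
            hits.append('%s%d%s' % (WILD_TYPE[pos], pos, res))
    return hits

# Dummy sequences: 'X' filler with only the five diagnostic residues set
wt = list('X' * 40)
for pos, aa in WILD_TYPE.items():
    wt[pos - 1] = aa
wt = ''.join(wt)
df5_like = wt[:30] + 'N' + wt[31:]   # carries S31N, like DF5 in this study

sensitive_hits = scan_m2(wt)         # []
resistant_hits = scan_m2(df5_like)   # ['S31N']
```

Note that DF5 carried S31N yet still tested phenotypically sensitive here, a reminder that genotypic markers and phenotypic assays can disagree.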
Analysis of the M2 protein of the studied H9N2 viruses showed no substitutions at these residues, except for DF5, which carried the S31N substitution (Table [2](#Tab2){ref-type="table"}). Nevertheless, amantadine screening in MDCK-II cells at two different concentrations confirmed that all the studied H9N2 viruses were sensitive to the M2 blocker amantadine. The nucleotide sequences of the HA, NA and NS genes of the cell culture-grown viruses did exhibit some mutations compared to the sequence of the inoculum; however, none of these substitutions altered molecular determinants of pathogenesis. In conclusion, the virus replication kinetics based on HA titer, TCID~50~ titer and viral genome copies revealed that the three G1 H9N2 viruses (BVP01, DF5 and SAR61) reached higher titers than the European wild type H9N2 virus (GR869), probably due to differences in their genetic constitution at specific conserved regions. The species of origin of the GR869 virus may also have affected its replication profile. The virus quantification methods used in this study correlated with one another, as the ECE-grown and cell culture-grown viruses showed similar patterns of replication kinetics across all quantification methods, with only insignificant variation between methods. However, this study revealed that the ECE propagation system allowed better replication, as the maximum virus yields were higher than those of the cell culture-grown viruses; this greater replication efficiency of the ECE-grown H9N2 viruses is probably due to the specific binding properties between the virus and the host cell. All the studied viruses were amantadine sensitive and did not exhibit, after a single replication cycle, any significant mutations required for the alteration of pathogenesis.
This study adds insight into the growth properties of the Eurasian lineage of avian influenza H9N2 in two traditional propagation systems, which may help in identifying a suitable system for influenza vaccine production.

**Competing interests**

The authors declare that they have no competing interests.

**Authors' contributions**

Conceived and designed the experiments: RP, TWV; performed the experiments: RP, AAS; analyzed the data: RP, AAS, MG; contributed reagents/materials/experiment/analysis tools: AAS, KH, MG, AR, MYH; wrote the paper: RP, AAS, TWV. All authors read and approved the final manuscript.

This study was supported by the German Academic Exchange Service (DAAD) under reference number A/10/98746. We would like to thank Professor Dr Timm C. Harder, Friedrich-Loeffler-Institut, Germany, and the Department of Pathology, Bangladesh Agricultural University, Bangladesh, for kindly providing the respective viruses.
On Wednesday’s edition of Breitbart News Daily, SiriusXM host Alex Marlow introduced a unique surprise guest: Breitbart News Executive Chairman Stephen K. Bannon, the founding host of Breitbart News Daily, currently on leave from Breitbart to serve as chief executive officer of Donald Trump’s presidential campaign. “This is officially not an interview. This is our birthday,” Bannon declared, referring to the one-year anniversary of Breitbart News Daily on SiriusXM. LISTEN: Bannon said his vision for Breitbart News Daily was inspired by the comments section at Breitbart.com, which was part of the effort to make the site “as reader-driven as possible.” He recalled testing the waters for a radio program with the same structure with the Breitbart weekend radio shows, and was pleased to discover “the callers were so engaged, it’s kind of that crowdsourcing, you know the algorithms of crowdsourcing, how much great information you can get if you just open up the floodgates.” “It just turned out that people got into it, so it was a concept really driven off the site that we provide the news, you provide the commentary,” Bannon said. “Because you know at Breitbart, on the site itself, we don’t try to have a lot of opinion pieces. We really went to news. And the comments were so – of course they exploded. A small article would get 3,000 comments. I figured you could transfer that to radio, and it turned out after we initially started, it really exploded, so I think we’re on to something.” Marlow said this approach demonstrates that “our populism is not for show. It’s not a business model thing, it’s genuine. 
It’s the real deal.” Bannon agreed, remembering how Andrew Breitbart would say “hey, one day I hope the comments section is as smart and funny as I remember Ace of Spades had that great comment section, and you would just be laughing out loud at what the comments said.” “I think it is a true thing of populism, that if you really turn it over to the people – and if they engage, which they clearly [have],” Bannon said. “People keep saying, ‘Oh, you know, the Tea Party people or the people that go to Sarah Palin events, the people that go to these grassroots events, are now the people that back Trump, right? They’re morons, they’re idiots, they don’t know anything.’ If you really listen to them, they know a lot. They’re very funny and incredibly insightful, so I thought it was a bet we should take, and it turned out to be bigger than our expectations.” Of course, Marlow could not resist the temptation to ask Bannon about his current preoccupation, the 2016 presidential race, and if he had any inside scoops from the Trump campaign to offer. “If you think about it, a year ago when we started the broadcast, who would have ever thought we’d have gone through the year, collectively, together, both at the site and on the radio show, that we’ve gone through? I would just tell people, I think you’re in a very historic moment, and I think that the next six days are going to be as action-packed and probably thrilling as has the last year, because the thing is totally unpredictable, and it’s just truly something to live through,” Bannon said. “I would tell everybody, obviously to vote, but to remember that this is a very historic moment, and I think you’ll be talking about this one for a long time, in the future, for a long time to come,” he emphasized. “It just seems like there’s something very important going on in this. And as you can tell, it’s almost every day there’s other twists and turns. 
Not only is it quite engaging, from the point of people who love politics, or love history, or just love current events, but it’s also something that’s quite thrilling.” Marlow asked Bannon, as a history buff, if there was any precedent for a “sea change in the electorate” comparable to the movement from Clinton to Trump over the past few days. “Clearly there’s a lot of parallels, I think, to Andrew Jackson, to what happened during the rise of Andrew Jackson’s populism,” Bannon replied, also finding similarities to the fall of the Whigs and William Jennings Bryan’s Populist movement. “But no, I don’t think we’ve seen anything like this in a long time,” he continued. “And what really amazes me still is how many pundits, and how many people that follow this day in and day out, don’t really understand the kind of historic nature of what’s going on, and how this has really been a sea change. You know, they don’t have the honor that we have, Alex, not just to work at Breitbart and to see what’s going on with the support that Breitbart’s getting from the people, but really to listen to SiriusXM, the Patriot Channel and Breitbart News on the weekends and daily.” “I don’t say that as a promotional tool,” he added. “I still, on the [SiriusXM] On Demand, try to catch as much of the show as possible – and not to skip to the guests, but to listen to the callers. You know, if you think about it, Alex, before I stepped away temporarily to take the job over at Trump, very much what has been implemented, or very much what was followed is really what the callers have said. If you’ve really listened to the callers over the last, what is it, 90 days, much of the insight or savvy, however you want to say it, really the callers speak to this every day – whether it’s the debates or other things that are happening.” “I still think that most of the people in the Establishment don’t realize how deep this movement is and how powerful it is,” Bannon said. 
“The best thing about going over and working with Mr. Trump on the campaign is actually getting out to the rallies every day, and seeing it now for the last 90 days, you really see that the passion of the people – whether it’s in Maine, or Arizona, or Nevada, or last night in Wisconsin, or today in Florida – you really see how engaged people are in this entire process.” Bannon thought this heightened level of engagement signaled a profound change in politics, no matter the outcome of next week’s election, although he remained confident that Trump would win. “I read some of these articles about this big civil war that’s coming in the Republican Party, and it’s pretty stunning to me people haven’t seen this. It’s been at this now for what, six years, really since 2010 with the Tea Party revolt,” he observed. “I think it’s the level of engagement. I think you can see it in the show. When people call up, they know what they’re talking about. They’re engaged, they know the details,” Bannon said. He told the story of how Nigel Farage, formerly leader of the U.K. Independence Party, came to a Trump rally in Mississippi at the invitation of the governor, and came on stage with Trump and some other guests. “The next day I was catching ‘Morning Joe,’ and they had the correspondent, I think it was Nicholas Corasaniti from the New York Times,” Bannon recalled. “And he was sitting there, just kind of smug, smirking on ‘Morning Joe,’ and they were all laughing about how this thing was so bizarre. Why would they have a guy that nobody knew, and the guy actually said, I’ll bet you 99% of the people – and there was like 15,000 people in this arena – that 99% of the people would not know who this guy was. And if you were there, and you saw it, 120% of the people knew who Nigel Farage was, right? 
Because people that are part of this movement, not only do they go to Breitbart and other sites – I mean, Nigel Farage is kind of a cult hero in this global populist movement.” “That probably explains the gulf between the mainstream media, and the arrogance of the elites in this country, versus what’s truly going on,” he said. “Here you’ve got this arena, and not only do they have Nigel Farage, these people could give you a better understanding of Brexit, and knew about Brexit – following Breitbart and listening to Breitbart when you, and [SiriusXM producer Caroline Magyarits], and [Breitbart London Editor-in-Chief Raheem Kassam] did the show for a week from London, besides all the London office stuff we had – that people knew the details of Brexit and knew more about Brexit. The night of Brexit, when they won, you saw CNN and others had no earthly idea what was even going on, couldn’t even pronounce the people’s names, didn’t know the issues.” “And yet, the little guys, the men and women in Mississippi, knew who Nigel Farage was and could give you quite a level of detail of UKIP and Brexit and the issues facing Europe,” he continued. “The elites, who really didn’t understand it, didn’t understand the issues, sitting there mocking them – this is another example of what idiots the Trump guys are, that they have somebody that clearly these rubes down in Mississippi don’t know. I think that encapsulates still the dismissive attitude towards a lot that’s going on, but it’s quite powerful, and clearly we’ve got six days to go before this election, but I think regardless of the outcome there’s been a sea change in American politics. This movement, as I keep saying, it’s just at the top of the first inning.” Bannon also stressed that the movement is global, contrary to media attempts to portray it as entirely provincial – a natural mistake for elites who deliberately confuse border security and constructive nationalism with close-minded xenophobia. 
“People want more control of their country, and they’re very proud of their countries. They want borders, they want sovereignty. It’s not just a thing that’s happening in any one geographic space. You can see it happening in Asia, you can see it happening in Europe, you can see it happening in the Middle East, and you’re seeing it happen in the United States,” he told Marlow. “People understand the issues, and they understand the interconnectivity to it. Like you said, Brexit, you go in there and it’s amazing. You go, whether you’re in Wisconsin or Maine or Mississippi, people know the details of it, and they know what drove that vote.” Bannon addressed the topic of polling and the “hidden vote” – the idea that Donald Trump will draw support on Election Day from people who are essentially invisible to pollsters because they’ve been checked out of the political process for several previous election cycles. Some have suggested Trump will benefit from a “Brexit movement,” similar to the unexpectedly strong support for the U.K. exiting from the European Union. “I think it’s one thing to have to poll a referendum. It’s different than having to poll an election between people,” he pointed out. “So I’m not sure they’re exactly analogous, when you look at the polling aspect of it. But I think that the passion and the determination to make something happen is the same.” Bannon noted that his “term of duty here ends on the evening of [November] the Eighth, so I look forward to getting back and being part of” Breitbart News Daily. “It’s our first anniversary, and hopefully our second anniversary of the show will be even bigger and have a bigger impact than it’s had in its first year,” he said. 
“It’s been a great run, and we owe SiriusXM and Dave Gorab and Liz [Aiello], and all the [SiriusXM] executives that helped pull this together, and of course Caroline Magyarits, and Miss Barrett, and all the production staff have just done such a great job, along with [Breitbart Senior West Coast Editor] Rebecca Mansour and all the Breitbart people.” “It takes a lot to put the show on, particularly seven days a week, but you and Raheem and [Breitbart Washington Political Editor] Matt Boyle have done an extraordinary job,” he told Marlow. “I look forward to being a part of it again. It’s really something I miss a lot. It’s so much fun to start the day with an engaged citizenry that we do every day on Breitbart News. It’s just a ton of fun, and I look forward to getting back.” Breitbart News Daily airs on SiriusXM Patriot 125 weekdays from 6:00 a.m. to 9:00 a.m. Eastern. Listen to the complete audio of Bannon’s interview above.
Public policy and cross border entrepreneurship in EU border regions: an enabling or constraining influence? Smallbone, David and Xheneti, Mirela (2008) Public policy and cross border entrepreneurship in EU border regions: an enabling or constraining influence? In: 31st Institute for Small Business and Entrepreneurship (ISBE) Conference: International Entrepreneurship - promoting excellence in education, research and practice, 5-7 November, Belfast, UK.
__author__ = 'larsmaaloee' import pylab as Plot import matplotlib.pyplot as plt from DBN.dbn import generate_input_data_list, generate_output_for_test_data, generate_output_for_train_data from pca import pca_2d, pca_3d, pca_2d_for_2_components, pca_3d_movie from DataPreparation.data_processing import get_all_class_indices, get_all_class_names, \ get_class_names_for_class_indices import env_paths as ep import serialization as s from numpy import * class Visualise: def __init__(self, testing=True, classes_to_visualise=None, image_data=False, binary_output=False): """ @param testing: Should be True if test data is to be plottet. Otherwise False. @param classes_to_visualise: A list containing the classes to visualise @param image_data: If the visualization should be done on image data. @param binary_output: If the output of the DBN must be binary. """ if not check_for_data: print 'No DBN data or testing data.' return self.path = "output" self.output = [] self.testing = testing self.input_data = [] self.output_data = [] self.classes_to_visualise = classes_to_visualise self.image_data = image_data self.binary_output = binary_output def __generate_input_data(self): """ Generate the input data for the DBN so that it can be visualized. 
""" if not len(self.input_data) == 0: return try: self.input_data = s.load(open('output/input_data.p', 'rb')) self.class_indices = s.load(open('output/class_indices.p', 'rb')) if not self.classes_to_visualise == None: self.__filter_input_data(self.classes_to_visualise) except: self.input_data = generate_input_data_list(training=False) if self.testing else generate_input_data_list() self.class_indices = get_all_class_indices(training=False) if self.testing else get_all_class_indices() if not self.classes_to_visualise == None: self.__filter_input_data(self.classes_to_visualise) s.dump([input.tolist() for input in self.input_data], open('output/input_data.p', 'wb')) s.dump(self.class_indices, open('output/class_indices.p', 'wb')) self.legend = get_class_names_for_class_indices(list(set(sorted(self.class_indices)))) def __generate_output_data(self): """ Generate the output data of the DBN so that it can be visualised. """ if not len(self.output_data) == 0: return try: self.output_data = s.load(open('output/output_data.p', 'rb')) self.class_indices = s.load(open('output/class_indices.p', 'rb')) if not self.classes_to_visualise == None: self.__filter_output_data(self.classes_to_visualise) except: self.output_data = generate_output_for_test_data(image_data=self.image_data, binary_output=self.binary_output) if self.testing else generate_output_for_train_data( image_data=self.image_data, binary_output=self.binary_output) self.class_indices = get_all_class_indices(training=False) if self.testing else get_all_class_indices() if not self.classes_to_visualise == None: self.__filter_output_data(self.classes_to_visualise) s.dump([out.tolist() for out in self.output_data], open('output/output_data.p', 'wb')) s.dump(self.class_indices, open('output/class_indices.p', 'wb')) self.legend = get_class_names_for_class_indices(list(set(sorted(self.class_indices)))) def __filter_output_data(self, classes_to_visualise): """ Filter the output or input data corresponding to the classes to 
        visualise.

        @param classes_to_visualise: A list containing names for the classes to visualise.
        """
        class_names = get_all_class_names()
        class_indices_to_visualise = []
        for i in range(len(class_names)):
            if class_names[i] in classes_to_visualise:
                class_indices_to_visualise.append(i)
        if len(class_indices_to_visualise) != len(classes_to_visualise):
            print('Not all classes to visualise were correct.')
            return
        tmp_output_data = []
        tmp_class_indices = []
        for i in range(len(self.output_data)):
            out = self.output_data[i]
            idx = self.class_indices[i]
            if idx in class_indices_to_visualise:
                tmp_output_data.append(out)
                tmp_class_indices.append(idx)
        self.output_data = tmp_output_data
        self.class_indices = tmp_class_indices

    def __filter_input_data(self, classes_to_visualise):
        """
        Filter the output or input data corresponding to the classes to
        visualise.

        @param classes_to_visualise: A list containing names for the classes to visualise.
        """
        class_names = get_all_class_names()
        class_indices_to_visualise = []
        for i in range(len(class_names)):
            if class_names[i] in classes_to_visualise:
                class_indices_to_visualise.append(i)
        if len(class_indices_to_visualise) != len(classes_to_visualise):
            print('Not all class names were correct.')
            return
        tmp_input_data = []
        tmp_class_indices = []
        for i in range(len(self.input_data)):
            inp = self.input_data[i]
            idx = self.class_indices[i]
            if idx in class_indices_to_visualise:
                tmp_input_data.append(inp)
                tmp_class_indices.append(idx)
        self.input_data = tmp_input_data
        self.class_indices = tmp_class_indices

    def visualise_2d_data(self):
        """
        In case the number of output units of the DBN is 2, this method will
        visualise the output of the DBN on a 2D plot.
""" self.__generate_output_data() if len(self.output_data[0]) != 2: # The output dimensions must be 2 return f = Plot.figure() f.hold() plt.title('2D data') for c in sorted(set(self.class_indices)): class_mask = mat(self.class_indices).T.A.ravel() == c plt.plot(array(self.output_data)[class_mask, 0], array(self.output_data)[class_mask, 1], 'o') plt.legend(self.legend) plt.show() plt.savefig(self.path + '/2dplotlow.png', dpi=200) def visualise_data_pca_2d(self, input_data=False, number_of_components=9): """ Visualise the input data or the output data of the DBN on a 2D PCA plot. Depending on the number of components, the plot will contain an X amount of subplots. @param input_data: False if the output data of the DBN should be plottet. Otherwise True. @param number_of_components: The number of principal components for the PCA plot. """ if input_data: self.__generate_input_data() pca_2d(array(self.input_data), self.class_indices, self.path, 'high_dimension_data', number_of_components, self.legend) else: self.__generate_output_data() pca_2d(array(self.output_data), self.class_indices, self.path, 'low_dimension_data', number_of_components, self.legend) def visualise_data_pca_2d_two_components(self, component1, componen2, input_data=False): """ Visualise the input data or the output data of the DBN on a 2D PCA plot. Specify two components, which will be the plotted. @param input_data: False if the output data of the DBN should be plottet. Otherwise True. @param component1: Principal component 1. @param componen2: Principal component 2. 
""" if input_data: self.__generate_input_data() pca_2d_for_2_components(array(self.input_data), component1, componen2, self.class_indices, self.path, 'high_dimension_data', self.legend) else: self.__generate_output_data() pca_2d_for_2_components(array(self.output_data), component1, componen2, self.class_indices, self.path, 'low_dimension_data', self.legend) def visualise_data_pca_3d(self, component1, component2, component3, input_data=False): """ Visualise the input data or the output data of the DBN on a 3D PCA plot for principal components 1 and 2. Parameters ---------- input_data: False if the output data of the DBN should be plottet. Otherwise True. """ if input_data: self.__generate_input_data() pca_3d(array(self.input_data), component1, component2, component3, self.class_indices, self.path, 'high_dimension_data', self.legend) else: self.__generate_output_data() pca_3d(array(self.output_data), component1, component2, component3, self.class_indices, self.path, 'low_dimension_data', self.legend) def visualise_data_pca_3d_movie(self, component1, component2, component3, input_data=False): """ Visualise the input data or the output data of the DBN on a 3D PCA plot movie for principal components 1 and 2. Parameters ---------- input_data: False if the output data of the DBN should be plottet. Otherwise True. """ if input_data: self.__generate_input_data() pca_3d_movie(array(self.input_data), component1, component2, component3, self.class_indices, self.path, 'high_dimension_data', self.legend) else: self.__generate_output_data() pca_3d_movie(array(self.output_data), component1, component2, component3, self.class_indices, self.path, 'low_dimension_data', self.legend) def check_for_data(): """ Check for DBN network data. """ if not (os.path.exists(ep.get_test_data_path()) or os.path.exists(ep.get_dbn_weight_path())): return False return True
The present invention relates to a novel vinyl chloride copolymer and a vinyl chloride copolymer composition, and more particularly to a copolymer of vinyl chloride or a monomer mixture containing vinyl chloride with 1,2-polybutadiene oligomer and/or epoxidized 1,2-polybutadiene oligomer, and a composition containing the vinyl chloride copolymer. Polyvinyl chloride is employed widely, since it is inexpensive as compared with other synthetic resins and moldings made thereof have various excellent physical properties. Plasticization of polyvinyl chloride by the addition of a plasticizer is a well known technique for providing soft and rubber-like articles. However, usual plasticized polyvinyl chlorides cannot be employed in a field requiring a low compression set, e.g. packings, since it is impossible to lower the compression set of moldings. Accordingly, various improvements of polyvinyl chloride have been attempted for lowering the compression set. One of these attempts is the improvement based on cross-linking techniques by copolymerization of vinyl chloride with a cross-linking agent, e.g. (1) copolymerization of vinyl chloride with a divinyl compound such as butadiene, isoprene or divinylbenzene, (2) copolymerization of vinyl chloride with a diallyl compound such as diallyl phthalate or diallyl maleate, (3) copolymerization of vinyl chloride with a dimethacrylate such as diethylene glycol dimethacrylate, and (4) copolymerization of vinyl chloride with a 2,5-divinyltetrahydropyran derivative. However, the cross-linked polymers obtained by such a copolymerization have high melting points and do not melt with ease, and as a result, the unmolten fine polymer particles remain in the obtained moldings, resulting in lowering of tensile strength due to stress concentration to that portion. This tendency is particularly noticeable in plasticized resin systems with a plasticizer. 
The preparation of the above-mentioned cross-linked polymers has scarcely been practiced industrially, because these polymers suffer from the defect mentioned above. It is an object of the present invention to provide a vinyl chloride copolymer which can be molded into articles having excellent creep characteristics, particularly a low compression set. A further object of the present invention is to provide a vinyl chloride copolymer which has good processability and gives flexible moldings having excellent creep characteristics, particularly a low compression set, without lowering tensile strength. Another object of the present invention is to provide a vinyl chloride copolymer composition capable of giving soft moldings having excellent creep characteristics, particularly a low compression set, without lowering tensile strength. A further object of the present invention is to provide a vinyl chloride copolymer composition capable of giving hard moldings having a delustered surface without requiring a separate delustering operation. These and other objects of the present invention will become apparent from the description hereinafter.
Q: Symmetric function I have a continuous function $\mu(x,y)=G(h_1(x+y), h_2(|x-y|))$ such that $\mu(x,x)=0$ and $\mu(x,y)+\mu(y,z)=\mu(x,z)$ for $x\le y\le z$. I want to show that the only possible case is $\mu(x,y)=c\cdot (x+y)|x-y|$ for some constant $c$. If I assume that I have a polynomial, no problem, since I have elementary symmetric polynomial as the basis of symmetric polynomials. My question: is it possible for $\mu(x,y)$ to not be a polynomial? A: If $g(s,t) = a(s+t) - a(s-t)$ for an arbitrary continuous function $a$, $u(x,y) = g(x+y, y-x) = a(2y) - a(2x)$ satisfies $u(x,x) = 0$ and $u(x,y) + u(y,z) = u(x,z)$ for all $x,y,z$. Then $\mu(x,y) = g(x+y,|x-y|)$ agrees with $u(x,y)$ when $x \le y$, and so $\mu(x,x) = u(x,x) = 0$ and $\mu(x,y) + \mu(y,z) = u(x,y) + u(y,z) = u(x,z) = \mu(x,z)$ for $x \le y \le z$. There are lots more solutions than $c (x+y)|x-y|$.
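The answer's counterexample is easy to verify numerically: for any continuous $a$, setting $g(s,t) = a(s+t) - a(s-t)$ gives $\mu(x,y) = a(2y) - a(2x)$ for $x \le y$, so the sum telescopes. A quick check (the particular $a$ below is my arbitrary, non-polynomial choice):

```python
import math

def a(t):
    # any continuous function works; this choice is non-polynomial
    return math.sin(t) + t ** 3

def mu(x, y):
    # mu(x, y) = g(x + y, |x - y|) with g(s, t) = a(s + t) - a(s - t)
    s, t = x + y, abs(x - y)
    return a(s + t) - a(s - t)

x, y, z = 0.3, 1.1, 2.5  # x <= y <= z
print(abs(mu(x, x)) < 1e-9)                        # True: mu(x, x) = 0
print(abs(mu(x, y) + mu(y, z) - mu(x, z)) < 1e-9)  # True: additivity
```

Both properties hold up to floating-point error, confirming that $\mu$ need not be a polynomial.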
The church is at a critical juncture on sensitive matters such as these. Churches need to create safe spaces where their people can be honest about what they feel and what they’ve experienced. All of our stories belong at the table. We need to listen to each other and learn to love each other and then pick up the scriptures and ask, “What does it look like to follow Jesus with our hearts, minds, and bodies?” If I shared my story for any reason, it was this one. Merritt describes unwanted sexual contact as a child and then struggles over his sexual identity as an adult. He doesn’t adopt a sexual orientation label and describes a fluidity that is characteristic of some people. I appreciate that he does not peg his same-sex attraction on his childhood and in fact says that it is “dangerous” to assume a connection. Merritt’s experience is similar to that of so many who are same-sex or bisexually attracted but maintain loyalty to beliefs which are incongruous with same-sex sexual behavior or relationships. The American Psychological Association’s sexual orientation task force report calls this experience “telic congruence.”
Q: What does the Ostrogradsky instability have to do with stability?

Ostrogradsky's instability theorem says that under some conditions, a system governed by a Lagrangian which depends on time derivatives beyond the first is "unstable". In the proof, one computes the Hamiltonian and shows that the dependence on one of the canonical momenta is linear. I don't understand either the statement or the proof. I think this is because I don't remember my Hamiltonian mechanics very well. Questions:

What is meant by saying that the system is unstable? There are several notions of stability out there. Is the theorem saying that for any choice of initial conditions, the system is, say, Lyapunov unstable?

What does the linearity of the Hamiltonian have to do with stability?

My sense is that Ostrogradsky's theorem is sometimes taken to justify the idea that Lagrangians in physics shouldn't depend on higher time derivatives. Why is this? What's wrong with studying systems which are unstable? If I understand what "unstable" means, isn't it just another word for "chaotic"? And certainly there are chaotic systems in the real world...

A: Not a specialist here either, but I think the Scholarpedia page linked in the OP already provides the answers.

What is meant by saying that the system is unstable? There are several notions of stability out there. Is the theorem saying that for _any_ choice of initial conditions, the system is, say, Lyapunov unstable?

It's unstable in the sense of presenting explosive vacuum decay: In fact the system instantly evaporates into a maelstrom of positive and negative energy particles.

What does the linearity of the Hamiltonian have to do with stability?

The root of the problem seems to be that: Because the Hamiltonian is linear in all but one of the conjugate momenta it is possible to arbitrarily increase or decrease the energy by moving different directions in phase space.
A similar explanation is found in this answer: $H$ has only a linear dependence on $P_1$, and so can be arbitrarily negative. In an interacting system this means that we can excite positive energy modes by transferring energy from the negative energy modes, and in doing so we would increase the entropy — there would simply be more particles, and so a need to put them somewhere. Thus such a system could never reach equilibrium, exploding instantly in an orgy of particle creation. My sense is that Ostrogradsky's theorem is sometimes taken to justify the idea that Lagrangians in physics _shouldn't_ depend on higher time derivatives. Why is this? What's wrong with studying systems which are unstable? If I understand what "unstable" means, isn't it just another word for "chaotic"? And certainly there are chaotic systems in the real world... It's certainly not because one wants to avoid chaotic systems, but rather that [explosive vacuum decay] certainly does not describe the universe of human experience in which all particles have positive energy and empty space remains empty.
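The linear dependence is easy to exhibit concretely. For a toy higher-derivative Lagrangian $L = \tfrac{1}{2}\ddot q^2 - \tfrac{1}{2}q^2$ (my illustrative choice, not from the thread), Ostrogradsky's construction takes $Q_1 = q$, $Q_2 = \dot q$, $P_2 = \partial L/\partial \ddot q$, and $H = P_1 Q_2 + P_2 \ddot q - L$; a few lines of sympy confirm that $H$ has degree one in $P_1$:

```python
import sympy as sp

Q1, Q2, P1, P2, qdd = sp.symbols('Q1 Q2 P1 P2 qdd')

# Toy higher-derivative Lagrangian L(q, qdot, qddot), with q -> Q1, qdot -> Q2
L = sp.Rational(1, 2) * qdd**2 - sp.Rational(1, 2) * Q1**2

# P2 = dL/d(qddot); invert to express qddot through P2
qdd_of_P2 = sp.solve(sp.Eq(P2, sp.diff(L, qdd)), qdd)[0]

# Ostrogradsky Hamiltonian: H = P1*Q2 + P2*qddot - L, with qddot eliminated
H = sp.expand(P1 * Q2 + P2 * qdd_of_P2 - L.subs(qdd, qdd_of_P2))
print(H)                     # equals P1*Q2 + P2**2/2 + Q1**2/2
print(sp.degree(H, gen=P1))  # 1 -> linear in P1, so H is unbounded below
```

Sending $P_1 \to -\infty$ with $Q_2 > 0$ fixed drives $H$ arbitrarily negative, which is exactly the unboundedness the answer describes.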
/* * cryptui dll resources * * Copyright 2008 Juan Lang * Copyright 2010 Claudia Cotună * Michael Stefaniuc * 2014 Ștefan Fulea * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA */ #include "cryptuires.h" LANGUAGE LANG_ROMANIAN, SUBLANG_DEFAULT STRINGTABLE { IDS_CERTIFICATE "Certificat" IDS_CERTIFICATEINFORMATION "Informații certificat" IDS_CERT_INFO_BAD_SIG "Acest certificat are o semnătură nevalidă. Este posibil ca certificatul să fi fost alterat sau corupt." IDS_CERT_INFO_UNTRUSTED_CA "Acest certificat rădăcină nu este acreditat. Pentru a-l acredita, adăugați-l în depozitul de certificate rădăcină acreditate al sistemului." IDS_CERT_INFO_UNTRUSTED_ROOT "Acest certificat nu a putut fi validat față de un certificat rădăcină acreditat." IDS_CERT_INFO_PARTIAL_CHAIN "Emitentul acestui certificat nu a putut fi aflat." IDS_CERT_INFO_BAD_PURPOSES "Nu au putut fi verificate toate rolurile intenționate pentru acest certificat." IDS_CERT_INFO_PURPOSES "Acest certificat este intenționat pentru următoarele roluri:" IDS_SUBJECT_HEADING "Emis pentru: " IDS_ISSUER_HEADING "Emis de: " IDS_VALID_FROM "Valid de la " IDS_VALID_TO " la " IDS_CERTIFICATE_BAD_SIGNATURE "Acest certificat are o semnătură nevalabilă." IDS_CERTIFICATE_BAD_TIME "Acest certificat a expirat sau încă nu este valabil." 
IDS_CERTIFICATE_BAD_TIMENEST "Perioada de valabilitate a acestui certificat o depășește pe cea a emitentului său." IDS_CERTIFICATE_REVOKED "Acest certificat a fost revocat de către emitentul său." IDS_CERTIFICATE_VALID "Acest certificat este valabil." IDS_FIELD "Câmp" IDS_VALUE "Valoare" IDS_FIELDS_ALL "<Toate>" IDS_FIELDS_V1 "Doar câmpurile versiunii 1" IDS_FIELDS_EXTENSIONS "Doar extensii" IDS_FIELDS_CRITICAL_EXTENSIONS "Doar extensii critice" IDS_FIELDS_PROPERTIES "Doar proprietăți" IDS_FIELD_VERSION "Versiune" IDS_FIELD_SERIAL_NUMBER "Număr de serie" IDS_FIELD_ISSUER "Emitent" IDS_FIELD_VALID_FROM "Valabil de la" IDS_FIELD_VALID_TO "Valabil până la" IDS_FIELD_SUBJECT "Subiect" IDS_FIELD_PUBLIC_KEY "Cheie publică" IDS_FIELD_PUBLIC_KEY_FORMAT "%s (%d biți)" IDS_PROP_HASH "Hash SHA1" IDS_PROP_ENHKEY_USAGE "Utilizări adiționale ale cheii (proprietate)" IDS_PROP_FRIENDLY_NAME "Nume uzual" IDS_PROP_DESCRIPTION "Descriere" IDS_CERTIFICATE_PROPERTIES "Proprietățile certificatului" IDS_CERTIFICATE_PURPOSE_ERROR "Introduceți OID sub forma 1.2.3.4" IDS_CERTIFICATE_PURPOSE_EXISTS "OID introdus există deja." IDS_SELECT_STORE_TITLE "Selectare depozit de certificate" IDS_SELECT_STORE "Selectați un depozit de certificate." IDS_IMPORT_WIZARD "Asistent de importare a certificatelor" IDS_IMPORT_TYPE_MISMATCH "Fișierul conține obiecte care nu respectă criteriile date. Selectați alt fișier." IDS_IMPORT_FILE_TITLE "Importare fișier" IDS_IMPORT_FILE_SUBTITLE "Specificați fișierul pe care doriți să-l importați." IDS_IMPORT_STORE_TITLE "Depozit de certificate" IDS_IMPORT_STORE_SUBTITLE "Depozitele de certificate sunt colecții de certificate, liste de certificate revocate și liste de certificate acreditate." 
IDS_IMPORT_FILTER_CERT "Certificat X.509 (*.cer; *.crt)" IDS_IMPORT_FILTER_PFX "Schimb de informații personale (*.pfx; *.p12)" IDS_IMPORT_FILTER_CRL "Listă de certificate revocate (*.crl)" IDS_IMPORT_FILTER_CTL "Listă de certificate acreditate (*.stl)" IDS_IMPORT_FILTER_SERIALIZED_STORE "Depozit Microsoft înseriat de certificate (*.sst)" IDS_IMPORT_FILTER_CMS "Mesaje CMS/PKCS #7 (*.spc; *.p7b)" IDS_IMPORT_FILTER_ALL "Toate fișierele (*.*)" IDS_IMPORT_EMPTY_FILE "Selectați un fișier." IDS_IMPORT_BAD_FORMAT "Formatul fișierului nu este recunoscut. Selectați alt fișier." IDS_IMPORT_OPEN_FAILED "Nu a putut fi deschis" IDS_IMPORT_DEST_DETERMINED "Determinat de program" IDS_IMPORT_SELECT_STORE "Selectați un depozit" IDS_IMPORT_STORE_SELECTION "Depozitul de certificate selectat" IDS_IMPORT_DEST_AUTOMATIC "Determinat automat de către program" IDS_IMPORT_FILE "Fișier" IDS_IMPORT_CONTENT "Conținut" IDS_IMPORT_CONTENT_CERT "Certificat" IDS_IMPORT_CONTENT_CRL "Lista certificatelor revocate" IDS_IMPORT_CONTENT_CTL "Lista certificatelor acreditate" IDS_IMPORT_CONTENT_CMS "Mesaj CMS/PKCS #7" IDS_IMPORT_CONTENT_PFX "Schimb de informații personale" IDS_IMPORT_CONTENT_STORE "Depozit de certificate" IDS_IMPORT_SUCCEEDED "Importarea a fost realizată cu succes." IDS_IMPORT_FAILED "Importarea a eșuat." IDS_WIZARD_TITLE_FONT "Arial" IDS_PURPOSE_ALL "<Toate>" IDS_PURPOSE_ADVANCED "<Roluri avansate>" IDS_SUBJECT_COLUMN "Emis pentru" IDS_ISSUER_COLUMN "Emis de" IDS_EXPIRATION_COLUMN "Data de expirare" IDS_FRIENDLY_NAME_COLUMN "Nume uzual" IDS_ALLOWED_PURPOSE_ALL "<Toate>" IDS_ALLOWED_PURPOSE_NONE "<Niciunul>" IDS_WARN_REMOVE_MY "Nu veți mai putea decripta sau semna mesaje cu acest certificat.\nSigur doriți să eliminați acest certificat?" IDS_WARN_REMOVE_PLURAL_MY "Nu veți mai putea decripta sau semna mesaje cu aceste certificate.\nSigur doriți să eliminați aceste certificate?" 
IDS_WARN_REMOVE_ADDRESSBOOK "Nu veți mai putea cripta sau verifica mesaje semnate cu acest certificat.\nSigur doriți să eliminați acest certificat?" IDS_WARN_REMOVE_PLURAL_ADDRESSBOOK "Nu veți mai putea cripta sau verifica mesaje semnate cu aceste certificate.\nSigur doriți să eliminați aceste certificate?" IDS_WARN_REMOVE_CA "Certificatele emise de către această autoritate de certificare nu vor mai fi acreditate.\nSigur doriți să eliminați acest certificat?" IDS_WARN_REMOVE_PLURAL_CA "Certificatele emise de către aceste autorități de certificare nu vor mai fi acreditate.\nSigur doriți să eliminați aceste certificate?" IDS_WARN_REMOVE_ROOT "Certificatele emise de către această autoritate de certificare rădăcină, sau de către orice alte autorități de certificare validate de ea, nu vor mai fi acreditate.\nSigur doriți să eliminați acest certificat rădăcină acreditat?" IDS_WARN_REMOVE_PLURAL_ROOT "Certificatele emise de către aceste autorități de certificare rădăcină, sau de către orice alte autorități de certificare validate de ele, nu vor mai fi acreditate.\nSigur doriți să eliminați aceste certificate rădăcină acreditate?" IDS_WARN_REMOVE_TRUSTEDPUBLISHER "Aplicațiile software semnate de către acest editor nu vor mai fi acreditate.\nSigur doriți să eliminați acest certificat?" IDS_WARN_REMOVE_PLURAL_TRUSTEDPUBLISHER "Aplicațiile software semnate de către acești editori nu vor mai fi acreditate.\nSigur doriți să eliminați aceste certificate?" IDS_WARN_REMOVE_DEFAULT "Sigur doriți să eliminați acest certificat?" IDS_WARN_REMOVE_PLURAL_DEFAULT "Sigur doriți să eliminați aceste certificate?" 
IDS_CERT_MGR "Certificate"
    IDS_FRIENDLY_NAME_NONE "<Niciunul>"
    IDS_PURPOSE_SERVER_AUTH "Asigură identificarea unui calculator de la distanță"
    IDS_PURPOSE_CLIENT_AUTH "Vă dovedește identitatea pentru un calculator de la distanță"
    IDS_PURPOSE_CODE_SIGNING "Garantează că aplicația provine de la un anumit autor\nProtejează aplicația de alterări după publicare"
    IDS_PURPOSE_EMAIL_PROTECTION "Protejează mesajele de email"
    IDS_PURPOSE_IPSEC "Permite comunicarea securizată pe Internet"
    IDS_PURPOSE_TIMESTAMP_SIGNING "Permite semnarea datelor cu ora curentă"
    IDS_PURPOSE_CTL_USAGE_SIGNING "Vă permite să semnați digital o listă de certificate acreditate"
    IDS_PURPOSE_EFS "Permite criptarea datelor de pe disc"
    IDS_PURPOSE_EFS_RECOVERY "Recuperarea fișierelor"
    IDS_PURPOSE_WHQL "Verificarea modulelor-pilot din Windows"
    IDS_PURPOSE_NT5 "Verificarea componentelor de sistem din Windows"
    IDS_PURPOSE_OEM_WHQL "Verificarea componentelor de sistem din Windows OEM"
    IDS_PURPOSE_EMBEDDED_NT "Verificarea componentelor de sistem din Windows Embedded"
    IDS_PURPOSE_ROOT_LIST_SIGNER "Semnatarul listei rădăcină"
    IDS_PURPOSE_QUALIFIED_SUBORDINATION "Subordonare calificată"
    IDS_PURPOSE_KEY_RECOVERY "Recuperarea cheilor"
    IDS_PURPOSE_DOCUMENT_SIGNING "Semnarea documentelor"
    IDS_PURPOSE_LIFETIME_SIGNING "Semnătură pe viață"
    IDS_PURPOSE_DRM "Drepturi digitale"
    IDS_PURPOSE_LICENSES "Licențe de pachete de chei"
    IDS_PURPOSE_LICENSE_SERVER "Verificare a serverului de licențe"
    IDS_PURPOSE_ENROLLMENT_AGENT "Agent solicitare certificat"
    IDS_PURPOSE_SMARTCARD_LOGON "Autentificare prin Smart Card"
    IDS_PURPOSE_CA_EXCHANGE "Arhivare chei private"
    IDS_PURPOSE_KEY_RECOVERY_AGENT "Agent recuperare chei"
    IDS_PURPOSE_DS_EMAIL_REPLICATION "Serviciu Registru pentru replicare email"
    IDS_EXPORT_WIZARD "Asistent de exportare a certificatelor"
    IDS_EXPORT_FORMAT_TITLE "Format pentru exportare"
    IDS_EXPORT_FORMAT_SUBTITLE "Alegeți formatul în care va fi salvat conținutul."
IDS_EXPORT_FILE_TITLE "Nume de fișier pentru exportare" IDS_EXPORT_FILE_SUBTITLE "Specificați numele de fișier cu care va fi salvat conținutul." IDS_EXPORT_FILE_EXISTS "Fișierul specificat există deja. Doriți să îl înlocuiți?" IDS_EXPORT_FILTER_CERT "Binar X.509 codificat în DER (*.cer)" IDS_EXPORT_FILTER_BASE64_CERT "Binar X.509 codificat în Base64 (*.cer)" IDS_EXPORT_FILTER_CRL "Listă de certificate revocate (*.crl)" IDS_EXPORT_FILTER_CTL "Listă de certificate acreditate (*.stl)" IDS_EXPORT_FILTER_CMS "Mesaje CMS/PKCS #7 (*.p7b)" IDS_EXPORT_FILTER_PFX "Schimb de informații personale (*.pfx)" IDS_EXPORT_FILTER_SERIALIZED_CERT_STORE "Depozit înseriat de certificate (*.sst)" IDS_EXPORT_FORMAT "Format fișier" IDS_EXPORT_INCLUDE_CHAIN "Include toate certificatele din calea de certificate" IDS_EXPORT_KEYS "Exportă cheile" IDS_YES "Da" IDS_NO "Nu" IDS_EXPORT_SUCCEEDED "Exportarea a fost realizată cu succes." IDS_EXPORT_FAILED "Exportarea a eșuat." IDS_EXPORT_PRIVATE_KEY_TITLE "Exportare cheie privată" IDS_EXPORT_PRIVATE_KEY_SUBTITLE "Certificatul conține o cheie privată care poate fi exportată odată cu certificatul." IDS_EXPORT_PASSWORD_TITLE "Introducere parolă" IDS_EXPORT_PASSWORD_SUBTITLE "Puteți proteja o cheie privată cu o parolă." IDS_EXPORT_PASSWORD_MISMATCH "Parolele nu se potrivesc." IDS_EXPORT_PRIVATE_KEY_UNAVAILABLE "Notă: Cheia privată pentru acest certificat nu a putut fi deschisă." IDS_EXPORT_PRIVATE_KEY_NON_EXPORTABLE "Notă: Cheia privată pentru acest certificat nu este exportabilă." 
IDS_INTENDED_USE_COLUMN "Scopul utilizării" IDS_LOCATION_COLUMN "Locație" IDS_SELECT_CERT_TITLE "Selectare certificat" IDS_SELECT_CERT "Selectați un certificat" IDS_NO_IMPL "Neimplementat încă" } IDD_GENERAL DIALOGEX 0, 0, 255, 236 CAPTION "Generale" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN CONTROL "", -1, "Static", WS_BORDER|SS_WHITERECT, 6,10,241,200 CONTROL "", IDC_CERTIFICATE_ICON,"RichEdit20W", ES_READONLY|WS_DISABLED,8,11,26,26 CONTROL "", IDC_CERTIFICATE_INFO,"RichEdit20W", ES_READONLY|WS_DISABLED,34,11,212,26 CONTROL "", -1, "Static", SS_BLACKFRAME, 16,37,222,1 CONTROL "", IDC_CERTIFICATE_STATUS,"RichEdit20W", ES_READONLY|ES_MULTILINE,8,38,238,78 CONTROL "", -1, "Static", SS_BLACKFRAME, 16,116,222,1 CONTROL "", IDC_CERTIFICATE_NAMES,"RichEdit20W", ES_READONLY|ES_MULTILINE|WS_DISABLED,8,118,238,90 PUSHBUTTON "Instala&re certificat…", IDC_ADDTOSTORE,53,216,95,14 PUSHBUTTON "&Declarația emitentului", IDC_ISSUERSTATEMENT,152,216,95,14 END IDD_DETAIL DIALOGEX 0, 0, 255, 236 CAPTION "Detalii" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN LTEXT "&Afișează:", -1, 6,12,40,14 COMBOBOX IDC_DETAIL_SELECT, 48,10,100,60, CBS_DROPDOWNLIST|WS_BORDER|WS_VSCROLL|WS_TABSTOP CONTROL "", IDC_DETAIL_LIST, "SysListView32", LVS_REPORT|LVS_SINGLESEL|WS_CHILD|WS_VISIBLE|WS_TABSTOP|WS_BORDER, 6,28,241,100 CONTROL "", IDC_DETAIL_VALUE, "RichEdit20W", ES_READONLY|ES_MULTILINE|WS_TABSTOP, 6,136,241,70 PUSHBUTTON "&Editare proprietăți…", IDC_EDITPROPERTIES,53,216,95,14 PUSHBUTTON "&Copiere în fișier…", IDC_EXPORT,152,216,95,14 END IDD_HIERARCHY DIALOGEX 0, 0, 255, 236 CAPTION "Cale de certificare" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN GROUPBOX "&Cale certificate", -1,6,10,245,165, BS_GROUPBOX CONTROL "",IDC_CERTPATH, "SysTreeView32", TVS_HASLINES|WS_BORDER, 13,22,231,130 PUSHBUTTON "&Afișează certificat", IDC_VIEWCERTIFICATE,155,156,90,14 LTEXT "Sta&re certificat:", 
IDC_CERTIFICATESTATUS,6,183,200,14 CONTROL "", IDC_CERTIFICATESTATUSTEXT,"RichEdit20W", WS_BORDER|ES_READONLY|ES_MULTILINE|WS_DISABLED,6,195,245,36 END IDD_USERNOTICE DIALOGEX 0, 0, 255, 256 CAPTION "Exonerare" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN CONTROL "", IDC_USERNOTICE,"RichEdit20W", WS_BORDER|ES_READONLY|ES_MULTILINE|WS_DISABLED,6,10,241,200 PUSHBUTTON "Î&nchide", IDOK,73,216,85,14 PUSHBUTTON "&Alte informații", IDC_CPS,162,216,85,14 END IDD_CERT_PROPERTIES_GENERAL DIALOGEX 0, 0, 255, 236 CAPTION "Generale" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN LTEXT "N&ume uzual:", -1, 6,14,60,14 EDITTEXT IDC_FRIENDLY_NAME, 70,12,181,14, ES_AUTOHSCROLL|WS_TABSTOP LTEXT "&Descriere:", -1, 6,32,60,14 EDITTEXT IDC_DESCRIPTION, 70,30,181,14, ES_AUTOVSCROLL|ES_MULTILINE|WS_TABSTOP|WS_VSCROLL GROUPBOX "Rolurile certificatului", -1,6,48,245,185, BS_GROUPBOX AUTORADIOBUTTON "A&ctivează toate rolurile acestui certificat", IDC_ENABLE_ALL_PURPOSES, 12,58,230,14, BS_AUTORADIOBUTTON|WS_TABSTOP AUTORADIOBUTTON "De&zactivează toate rolurile acestui certificat", IDC_DISABLE_ALL_PURPOSES, 12,70,230,14, BS_AUTORADIOBUTTON AUTORADIOBUTTON "Activează d&oar următoarele roluri ale acestui certificat:", IDC_ENABLE_SELECTED_PURPOSES, 12,82,230,14, BS_AUTORADIOBUTTON CONTROL "", IDC_CERTIFICATE_USAGES,"SysListView32", LVS_REPORT|LVS_NOCOLUMNHEADER|LVS_SINGLESEL|WS_CHILD|WS_VISIBLE|WS_TABSTOP|WS_BORDER, 24,100,220,106 PUSHBUTTON "Adăugare &rol…", IDC_ADD_PURPOSE,174,212,70,14 END IDD_ADD_CERT_PURPOSE DIALOGEX 0,0,200,68 CAPTION "Adăugare rol" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN LTEXT "Adăugați identificatorul de obiect (OID) pentru rolul de certificat pe care doriți să-l adăugați:", -1, 6,6,190,28 EDITTEXT IDC_NEW_PURPOSE, 6,28,190,14, ES_AUTOVSCROLL|ES_MULTILINE|WS_TABSTOP|WS_VSCROLL PUSHBUTTON "Con&firmă", IDOK, 33,48,60,14 PUSHBUTTON "A&nulează", IDCANCEL, 100,48,60,14 END 
IDD_SELECT_STORE DIALOGEX 0,0,200,136 CAPTION "Depozit de certificate" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN LTEXT "Selectați depozitul de certificate pe care doriți să-l utilizați:", IDC_STORE_TEXT, 6,6,190,28 CONTROL "",IDC_STORE_LIST, "SysTreeView32", TVS_HASLINES|WS_BORDER|WS_TABSTOP, 6,28,188,70 CHECKBOX "Afișea&ză depozitele fizice", IDC_SHOW_PHYSICAL_STORES, 6,102,180,14, BS_AUTOCHECKBOX|WS_TABSTOP PUSHBUTTON "Con&firmă", IDOK, 90,118,50,14, BS_DEFPUSHBUTTON PUSHBUTTON "A&nulează", IDCANCEL, 144,118,50,14 END IDD_IMPORT_WELCOME DIALOGEX 0,0,317,143 CAPTION "Importare certificate" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN LTEXT "Bun venit la Asistentul de importare a certificatelor", IDC_IMPORT_TITLE, 115,7,195,30 LTEXT "Acest asistent vă ajută să importați certificate, liste de certificate revocate și liste de certificate acreditate dintr-un fișier într-un depozit de certificate.\n\n\ Un certificat poate fi utilizat pentru identificarea proprie sau a calculatorului cu care comunicați. Poate fi utilizat și pentru autentificare sau pentru a semna mesaje. 
Depozitele de certificate sunt colecții de certificate, liste de certificate revocate și liste de certificate acreditate.\n\n\ Pentru a continua, apăsați pe „Înainte”.", -1, 115,40,195,120 END IDD_IMPORT_FILE DIALOGEX 0,0,317,178 CAPTION "Importare certificate" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN LTEXT "N&ume fișier:", -1, 21,1,195,10 EDITTEXT IDC_IMPORT_FILENAME, 21,11,208,14, ES_AUTOHSCROLL|WS_TABSTOP PUSHBUTTON "&Căutare…", IDC_IMPORT_BROWSE_FILE, 236,11,60,14 LTEXT "Notă: Următoarele formate de fișier pot conține mai multe certificate, liste de certificate revocate sau liste de certificate acreditate:", -1, 21,30,265,16 LTEXT "Standard sintaxă mesaje criptografice/Mesaje PKCS #7 (*.p7b)", -1, 31,53,265,10 LTEXT "Schimb de informații personale/PKCS #12 (*.pfx; *.p12)", -1, 31,68,265,10 LTEXT "Depozit Microsoft înseriat de certificate (*.sst)", -1, 31,83,265,10 END IDD_IMPORT_STORE DIALOGEX 0,0,317,143 CAPTION "Importare certificate" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN LTEXT "Wine poate selecta automat depozitul de certificate sau puteți să specificați o locație pentru certificate.", -1, 21,1,220,25 AUTORADIOBUTTON "Selectează &automat depozitul de certificate", IDC_IMPORT_AUTO_STORE, 31,28,220,12, BS_AUTORADIOBUTTON|WS_TABSTOP AUTORADIOBUTTON "&Plasează toate certificatele în următorul depozit:", IDC_IMPORT_SPECIFY_STORE, 31,42,220,12, BS_AUTORADIOBUTTON EDITTEXT IDC_IMPORT_STORE, 44,61,185,14, ES_READONLY PUSHBUTTON "&Căutare…", IDC_IMPORT_BROWSE_STORE, 236,61,60,14 END IDD_IMPORT_FINISH DIALOGEX 0,0,317,178 CAPTION "Importare certificate" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN LTEXT "Finalizare import de certificate", IDC_IMPORT_TITLE, 115,1,195,40 LTEXT "Ați finalizat cu succes procedura de importare a certificatelor.", -1, 115,33,195,24 LTEXT "Ați specificat următoarea configurație:", -1, 115,57,195,12 CONTROL "", IDC_IMPORT_SETTINGS, 
"SysListView32", LVS_REPORT|LVS_NOCOLUMNHEADER|LVS_SINGLESEL|WS_CHILD|WS_VISIBLE|WS_TABSTOP|WS_BORDER, 115,67,174,100 END IDD_CERT_MGR DIALOGEX 0,0,335,270 CAPTION "Certificate" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN LTEXT "&Rolul intenționat:", -1, 7,9,100,12 COMBOBOX IDC_MGR_PURPOSE_SELECTION, 83,7,245,60, CBS_DROPDOWNLIST|WS_BORDER|WS_VSCROLL|WS_TABSTOP CONTROL "", IDC_MGR_STORES, "SysTabControl32", WS_CLIPSIBLINGS|WS_TABSTOP, 7,25,321,140 CONTROL "", IDC_MGR_CERTS, "SysListView32", LVS_REPORT|WS_CHILD|WS_VISIBLE|WS_TABSTOP|WS_BORDER, 15,46,305,111 PUSHBUTTON "I&mportare…", IDC_MGR_IMPORT, 7,172,65,14 PUSHBUTTON "E&xportare…", IDC_MGR_EXPORT, 76,172,65,14, WS_DISABLED PUSHBUTTON "Șt&erge", IDC_MGR_REMOVE, 145,172,65,14, WS_DISABLED PUSHBUTTON "A&vansate…", IDC_MGR_ADVANCED, 263,172,65,14 GROUPBOX "Rolurile intenționate ale certificatului", -1,7,194,321,47, BS_GROUPBOX LTEXT "", IDC_MGR_PURPOSES, 13,206,252,32 PUSHBUTTON "&Afișare…", IDC_MGR_VIEW, 269,218,51,14, WS_DISABLED PUSHBUTTON "Î&nchide", IDCANCEL, 277,249,51,14, BS_DEFPUSHBUTTON END IDD_CERT_MGR_ADVANCED DIALOGEX 0,0,248,176 CAPTION "Opțiuni avansate" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN GROUPBOX "Rolul certificatului", -1, 7,7,234,141, BS_GROUPBOX LTEXT "Alegeți unul sau mai multe roluri pentru a fi afișate când va fi selectat „Roluri avansate”.", -1, 14,18,220,16 LTEXT "&Rolurile certificatului:", -1, 14,41,90,12, WS_TABSTOP CONTROL "", IDC_CERTIFICATE_USAGES,"SysListView32", LVS_REPORT|LVS_NOCOLUMNHEADER|LVS_SINGLESEL|WS_CHILD|WS_VISIBLE|WS_TABSTOP|WS_BORDER, 14,51,220,90 PUSHBUTTON "Con&firmă", IDOK, 132,155,51,14, BS_DEFPUSHBUTTON PUSHBUTTON "A&nulează", IDCANCEL, 190,155,51,14 END IDD_EXPORT_WELCOME DIALOGEX 0,0,317,143 CAPTION "Exportare certificate" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN LTEXT "Bun venit la Asistentul de exportare a certificatelor", IDC_EXPORT_TITLE, 115,7,195,30 
LTEXT "Acest asistent vă va ajuta să exportați certificate, liste de certificate revocate și liste de certificate acreditate dintr-un depozit de certificate într-un fișier.\n\n\ Un certificat poate fi utilizat pentru identificarea proprie sau a calculatorului cu care comunicați. Poate fi utilizat și pentru autentificare sau pentru a semna mesaje. Depozitele de certificate sunt colecții de certificate, liste de certificate revocate și liste de certificate acreditate.\n\n\ Pentru a continua, apăsați pe „Înainte”.", -1, 115,40,195,120 END IDD_EXPORT_PRIVATE_KEY DIALOGEX 0,0,317,143 CAPTION "Exportare certificate" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN LTEXT "Dacă alegeți să exportați cheia privată, vă va fi solicitată pe o pagină următoare o parolă pentru a proteja această cheie privată.", -1, 21,1,195,25 LTEXT "Doriți să exportați cheia privată?", -1, 21,27,195,10 AUTORADIOBUTTON "D&a, exportă cheia privată", IDC_EXPORT_PRIVATE_KEY_YES, 31,36,200,12, BS_AUTORADIOBUTTON|WS_TABSTOP AUTORADIOBUTTON "N&u, nu exporta cheia privată", IDC_EXPORT_PRIVATE_KEY_NO, 31,48,200,12, BS_AUTORADIOBUTTON LTEXT "", IDC_EXPORT_PRIVATE_KEY_UNAVAILABLE, 21,60,200,24 END IDD_EXPORT_PASSWORD DIALOGEX 0,0,317,143 CAPTION "Exportare certificate" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN LTEXT "&Parolă:", -1, 21,1,195,10 EDITTEXT IDC_EXPORT_PASSWORD, 21,11,208,14, ES_AUTOHSCROLL|WS_TABSTOP LTEXT "&Confirmare parolă:", -1, 21,35,195,10 EDITTEXT IDC_EXPORT_PASSWORD_CONFIRM, 21,45,208,14, ES_AUTOHSCROLL|WS_TABSTOP END IDD_EXPORT_FORMAT DIALOGEX 0,0,317,143 CAPTION "Exportare certificate" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN LTEXT "Selectați formatul pe care doriți să îl utilizați:", -1, 21,1,195,10 AUTORADIOBUTTON "Binar X.509 codificat în &DER (*.cer)", IDC_EXPORT_FORMAT_DER, 31,18,280,12, BS_AUTORADIOBUTTON|WS_TABSTOP AUTORADIOBUTTON "X.509 codificat în &base64 (*.cer):", 
IDC_EXPORT_FORMAT_BASE64, 31,30,280,12, BS_AUTORADIOBUTTON AUTORADIOBUTTON "Standard sintaxă mesaje criptografice/Mesaj &PKCS #7 (*.p7b)", IDC_EXPORT_FORMAT_CMS, 31,42,280,12, BS_AUTORADIOBUTTON CHECKBOX "In&clude toate certificatele din calea de certificare, dacă este posibil", IDC_EXPORT_CMS_INCLUDE_CHAIN, 44,57,280,8, BS_AUTOCHECKBOX|WS_TABSTOP|WS_DISABLED AUTORADIOBUTTON "Schimb de informații personale/P&KCS #12 (*.pfx)", IDC_EXPORT_FORMAT_PFX, 31,72,280,12, BS_AUTORADIOBUTTON|WS_DISABLED CHECKBOX "Incl&ude toate certificatele din calea de certificare, dacă este posibil", IDC_EXPORT_PFX_INCLUDE_CHAIN, 44,87,280,8, BS_AUTOCHECKBOX|WS_TABSTOP|WS_DISABLED CHECKBOX "Acti&vează criptarea puternică", IDC_EXPORT_PFX_STRONG_ENCRYPTION, 44,102,280,8, BS_AUTOCHECKBOX|WS_TABSTOP|WS_DISABLED CHECKBOX "Șt&erge cheia privată dacă exportarea reușește", IDC_EXPORT_PFX_DELETE_PRIVATE_KEY, 44,117,280,8, BS_AUTOCHECKBOX|WS_TABSTOP|WS_DISABLED END IDD_EXPORT_FILE DIALOGEX 0,0,317,143 CAPTION "Exportare certificate" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN LTEXT "N&ume fișier:", -1, 21,1,195,10 EDITTEXT IDC_EXPORT_FILENAME, 21,11,208,14, ES_AUTOHSCROLL|WS_TABSTOP PUSHBUTTON "&Căutare…", IDC_EXPORT_BROWSE_FILE, 236,11,60,14 END IDD_EXPORT_FINISH DIALOGEX 0,0,317,178 CAPTION "Exportare certificate" STYLE DS_SHELLFONT | WS_VISIBLE | DS_MODALFRAME FONT 8, "MS Shell Dlg" BEGIN LTEXT "Finalizare export de certificate", IDC_EXPORT_TITLE, 115,1,195,40 LTEXT "Ați finalizat cu succes procedura de exportare a certificatelor", -1, 115,33,195,24 LTEXT "Ați specificat următoarea configurație:", -1, 115,57,195,12 CONTROL "", IDC_EXPORT_SETTINGS, "SysListView32", LVS_REPORT|LVS_NOCOLUMNHEADER|LVS_SINGLESEL|WS_CHILD|WS_VISIBLE|WS_TABSTOP|WS_BORDER, 115,67,174,100 END IDD_SELECT_CERT DIALOG 0,0,278,157 CAPTION "Selectare certificat" FONT 8, "MS Shell Dlg" BEGIN LTEXT "Selectați un certificat dorit", IDC_SELECT_DISPLAY_STRING, 7,7,264,26 CONTROL "", 
IDC_SELECT_CERTS, "SysListView32", LVS_REPORT|LVS_SINGLESEL|WS_CHILD|WS_VISIBLE|WS_TABSTOP|WS_BORDER, 7,40,264,89 PUSHBUTTON "Con&firmă", IDOK, 91,136,51,14, BS_DEFPUSHBUTTON PUSHBUTTON "A&nulare", IDCANCEL, 149,136,51,14 PUSHBUTTON "&Afișează certificat", IDC_SELECT_VIEW_CERT, 207,136,65,14, WS_DISABLED END
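The coordinates in the dialog templates above (e.g. `IDD_SELECT_STORE DIALOGEX 0,0,200,136`) are dialog units, not pixels: the system scales them by the dialog font's average character size, so the same template adapts to different fonts and DPIs. A rough sketch of the Win32 mapping rule (horizontal units are base-width/4, vertical units are base-height/8); the 6×13 base size assumed here is typical for 8pt "MS Shell Dlg" at 96 DPI, not something the resource file itself states:

```python
def dlg_to_pixels(x_du, y_du, base_x=6, base_y=13):
    """Convert dialog units to pixels per the Win32 rule:
    1 horizontal DU = base_x / 4 pixels, 1 vertical DU = base_y / 8 pixels."""
    return (x_du * base_x) // 4, (y_du * base_y) // 8

# The IDD_SELECT_STORE dialog above is declared as 200x136 dialog units:
print(dlg_to_pixels(200, 136))  # -> (300, 221)
```

With a different dialog font the base units change, which is why the templates never hard-code pixel sizes.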
Cam Newton lay motionless, a rag doll tossed to the ground. The reigning NFL MVP tried to get up, but his body failed him. All Newton could do was roll over and clutch his helmet.

It was the NFL season opener in early September, and the Denver Broncos defense had just bashed Newton in the head again, one of many helmet-to-helmet hits on the Carolina Panthers' quarterback that night. Medical staff members — trained to spot head injuries — reviewed the play and went through the league's concussion protocol. Then they made a decision, one that was questioned by many, including the NFL Players Association. They let Newton keep playing.

After the game, many wondered if the Panthers had, in fact, followed proper protocol. And as it turns out, the NFL investigated the same thing. The league announced Wednesday it would enhance its concussion protocol after the confusion surrounding Newton's helmet-to-helmet hits in Week 1.

The NFL says its spotters saw "no indications of a concussion" for Cam Newton: https://t.co/2DVhLPnkVL pic.twitter.com/Xx5zMPBpZs — SB Nation (@SBNation) September 9, 2016

Here's what actually happened that night, according to a statement from the league. Sideline medical staff members contacted a concussion spotter in the booth for video of the hit on Newton, but a technology glitch delayed that process. Under the league's concussion protocol during Week 1, once that contact was made, the booth spotter lost the ability to call a medical timeout. The sideline medical staff made their own assessment of Newton — without video from the booth — and let him play on. The new protocol requires the booth spotter "to remain in contact with the club medical team and provide video support until the medical team confirms that a concussion evaluation has occurred," the statement said. 
The league has periodically made changes like this to the concussion protocol since implementing it in 2009. That's largely because, as public discussion about head trauma has increased in recent years, the NFL has come under fire.

In 2013, a pair of ESPN reporters published League of Denial: The NFL's Concussion Crisis, a book/documentary film exploring the link between the sport and brain injuries, and the NFL's reluctance to acknowledge that link. Some have accused the league of trying to cover the problem up. According to the New York Times, the NFL's own lengthy concussion research studies left out more than 100 diagnosed concussions.

The league finally admitted the connection between football and brain injuries in March. A month later, an appeals court approved a class action settlement between the NFL and thousands of retired players, who received varying levels of compensation for dealing with repeated head trauma. So — as part of that ongoing safety push — the NFL announced in September that it would commit $100 million to concussion prevention initiatives. Before this season, it added stricter enforcement of its concussion protocol — teams can be fined or forced to forfeit draft picks if they violate it.

With such a sensitive and complex injury, the specifics of the concussion protocol are necessarily and equally complex. They also remain a topic of conversation among hardcore and casual football fans alike. Donald Trump mocked the league's player safety procedures at a campaign stop in Florida last week, when a woman in the audience fainted and then returned to the crowd.

Woman faints, gets back up, then Trump bemoans "softer NFL rules" — "Concussion, oh! Oh! Got a little ding on the head—no no you can't play" pic.twitter.com/hdOHKiJmBQ — Bradd Jaffy (@BraddJaffy) October 13, 2016

"The woman was out cold and now she's coming back," Trump told the crowd. "See? 
We don't go by these new and very much softer NFL rules. Concussion, oh! Oh! Got a little ding on the head, no, no, you can't play for the rest of the season. Our people are tough!"

Obviously, the rules are not that simple. The league released a detailed description of the game-day process on its "Play Smart, Play Safe" website in September, but here's a stripped-down version: When a potential concussion is identified, the player is removed from the field. The medical staff reviews video of the play and performs an examination. If the medical staff suspects a concussion, the player is taken to the locker room for a full assessment. If the player is diagnosed with a concussion, he can't play that day. If the player passes the exam, he will be monitored for symptoms throughout the game.

When players are removed from the game with a concussion, as Cam Newton was two weeks ago, there's a multi-step progression for returning to action as well, which includes rest and recovery, light aerobic exercise, strength training, football activities and then full clearance. Newton missed Week 5 because of a concussion he sustained in Week 4 against Atlanta.

While it's encouraging to see tweaks to the concussion protocol, some see the NFL's player safety push as an empty promise. None of the Week 1 helmet-to-helmet hits on Newton resulted in penalties, which led many — including Panthers tight end Greg Olsen and Newton's dad — to criticize the NFL for not officiating with safety in mind. "Player safety sounds great, is a great offseason rallying cry, sounds awesome," Olsen said. "But we got zero yards out of any of those hits. That's the reality of it."

In the statement from the NFL on Wednesday, the league lauded referee Ed Hochuli for following protocol and removing Buffalo Bills quarterback Tyrod Taylor from a game. The league said it would use Hochuli's "proactive officiating" as an example to other referees. 
Seattle Seahawks cornerback Richard Sherman blasted the NFL earlier this month for its shoddy relationship with players, particularly when it comes to safety. He called the league out on what he says is a hypocritical push.

"It's hard to stress player safety in such a violent game," Sherman said in The Players' Tribune. "Does the league care when Cam Newton gets hit in the face five times and pretty much knocked out of the game? They have all these spotters and people that watch the game specifically for these reasons. You see the guy on his hands and knees shaking his head after he just took a shot to the face, and they're saying they didn't see any indications that he needed to come out of the game."

The NFL cares more about ratings than safety, Sherman said, so taking a star player like Newton out with the game on the line could have damaged those ratings. But Newton's case represents a larger discussion among players, who feel like the league doesn't take care of its assets.

"It's a huge issue — one that a lot of players talk about behind closed doors, and would like to talk about more openly," Sherman writes. "They're passionate about it and they want to see change."

As of Wednesday, there is change. But don't be surprised to see critics from within and outside the NFL call for more progress. For a league that owes $1 billion to settle thousands of concussion lawsuits, this is an issue that won't go away overnight.

The Associated Press contributed to this report.
<?xml version='1.0'?> <catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog" prefer="public"> <!-- ...................................................................... --> <!-- XML Catalog data for DocBook XML V4.5 ................................ --> <!-- File catalog.xml ..................................................... --> <!-- Please direct all questions, bug reports, or suggestions for changes to the docbook@lists.oasis-open.org mailing list. For more information, see http://www.oasis-open.org/. --> <!-- This is the catalog data file for DocBook V4.5. It is provided as a convenience in building your own catalog files. You need not use the filenames listed here, and need not use the filename method of identifying storage objects at all. See the documentation for detailed information on the files associated with the DocBook DTD. See XML Catalogs at http://www.oasis-open.org/committees/entity/ for detailed information on supplying and using catalog data. --> <!-- ...................................................................... --> <!-- DocBook driver file .................................................. --> <public publicId="-//OASIS//DTD DocBook XML V4.5//EN" uri="docbookx.dtd"/> <system systemId="http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" uri="docbookx.dtd"/> <system systemId="http://docbook.org/xml/4.5/docbookx.dtd" uri="docbookx.dtd"/> <!-- ...................................................................... --> <!-- DocBook modules ...................................................... 
--> <public publicId="-//OASIS//DTD DocBook CALS Table Model V4.5//EN" uri="calstblx.dtd"/> <public publicId="-//OASIS//ELEMENTS DocBook XML HTML Tables V4.5//EN" uri="htmltblx.mod"/> <public publicId="-//OASIS//DTD XML Exchange Table Model 19990315//EN" uri="soextblx.dtd"/> <public publicId="-//OASIS//ELEMENTS DocBook Information Pool V4.5//EN" uri="dbpoolx.mod"/> <public publicId="-//OASIS//ELEMENTS DocBook Document Hierarchy V4.5//EN" uri="dbhierx.mod"/> <public publicId="-//OASIS//ENTITIES DocBook Additional General Entities V4.5//EN" uri="dbgenent.mod"/> <public publicId="-//OASIS//ENTITIES DocBook Notations V4.5//EN" uri="dbnotnx.mod"/> <public publicId="-//OASIS//ENTITIES DocBook Character Entities V4.5//EN" uri="dbcentx.mod"/> <!-- ...................................................................... --> <!-- ISO entity sets ...................................................... --> <public publicId="ISO 8879:1986//ENTITIES Diacritical Marks//EN//XML" uri="ent/isodia.ent"/> <public publicId="ISO 8879:1986//ENTITIES Numeric and Special Graphic//EN//XML" uri="ent/isonum.ent"/> <public publicId="ISO 8879:1986//ENTITIES Publishing//EN//XML" uri="ent/isopub.ent"/> <public publicId="ISO 8879:1986//ENTITIES General Technical//EN//XML" uri="ent/isotech.ent"/> <public publicId="ISO 8879:1986//ENTITIES Added Latin 1//EN//XML" uri="ent/isolat1.ent"/> <public publicId="ISO 8879:1986//ENTITIES Added Latin 2//EN//XML" uri="ent/isolat2.ent"/> <public publicId="ISO 8879:1986//ENTITIES Greek Letters//EN//XML" uri="ent/isogrk1.ent"/> <public publicId="ISO 8879:1986//ENTITIES Monotoniko Greek//EN//XML" uri="ent/isogrk2.ent"/> <public publicId="ISO 8879:1986//ENTITIES Greek Symbols//EN//XML" uri="ent/isogrk3.ent"/> <public publicId="ISO 8879:1986//ENTITIES Alternative Greek Symbols//EN//XML" uri="ent/isogrk4.ent"/> <public publicId="ISO 8879:1986//ENTITIES Added Math Symbols: Arrow Relations//EN//XML" uri="ent/isoamsa.ent"/> <public publicId="ISO 8879:1986//ENTITIES Added 
Math Symbols: Binary Operators//EN//XML" uri="ent/isoamsb.ent"/> <public publicId="ISO 8879:1986//ENTITIES Added Math Symbols: Delimiters//EN//XML" uri="ent/isoamsc.ent"/> <public publicId="ISO 8879:1986//ENTITIES Added Math Symbols: Negated Relations//EN//XML" uri="ent/isoamsn.ent"/> <public publicId="ISO 8879:1986//ENTITIES Added Math Symbols: Ordinary//EN//XML" uri="ent/isoamso.ent"/> <public publicId="ISO 8879:1986//ENTITIES Added Math Symbols: Relations//EN//XML" uri="ent/isoamsr.ent"/> <public publicId="ISO 8879:1986//ENTITIES Box and Line Drawing//EN//XML" uri="ent/isobox.ent"/> <public publicId="ISO 8879:1986//ENTITIES Russian Cyrillic//EN//XML" uri="ent/isocyr1.ent"/> <public publicId="ISO 8879:1986//ENTITIES Non-Russian Cyrillic//EN//XML" uri="ent/isocyr2.ent"/> <!-- End of catalog data for DocBook XML V4.5 ............................. --> <!-- ...................................................................... --> </catalog>
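An XML catalog like the one above lets a resolver map public and system identifiers to local copies of the DTD modules instead of fetching them over the network. Python's standard library has no catalog resolver, but as a rough sketch of what consuming this data involves, the following parses a trimmed-down catalog fragment (a hypothetical stand-in for the full file, same namespace) into the lookup tables a resolver would consult:

```python
import xml.etree.ElementTree as ET

# Trimmed-down stand-in for the catalog above.
CATALOG = """<?xml version='1.0'?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog" prefer="public">
  <public publicId="-//OASIS//DTD DocBook XML V4.5//EN" uri="docbookx.dtd"/>
  <system systemId="http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd"
          uri="docbookx.dtd"/>
</catalog>"""

NS = "{urn:oasis:names:tc:entity:xmlns:xml:catalog}"
root = ET.fromstring(CATALOG)

# Build the two lookup tables: publicId -> uri and systemId -> uri.
public_map = {e.get("publicId"): e.get("uri") for e in root.findall(NS + "public")}
system_map = {e.get("systemId"): e.get("uri") for e in root.findall(NS + "system")}

print(public_map["-//OASIS//DTD DocBook XML V4.5//EN"])  # docbookx.dtd
```

A full resolver (libxml2, Apache xml-resolver) also honors the `prefer` attribute and delegation entries, which this sketch ignores.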
Haley Samsel

It's no secret that college costs are on the rise -- especially for families preparing to shell out thousands of dollars for tuition this fall. But how exactly do families manage to pay for college?

That question is at the center of "How America Pays for College 2017," a new report from student loan company Sallie Mae and market research firm Ipsos. The companies asked 800 undergraduate students and 800 parents of undergraduates about the resources they use to pay for school as well as their attitudes about college.

Among the report's most intriguing findings: Students and parents are equally sharing the burden of college costs. Students foot about 30% of the bill with income, savings and loans, while parents cover 31% of the costs. "That's somewhat of a shift," says Rick Castellano, Sallie Mae's vice president of corporate communications. "We hadn't seen that so close previously."

Student borrowing is up to 19% of college costs, an increase from 13% in 2016. Scholarships and grants account for the majority of the remaining costs at 35% of college costs, while relatives and friends chipped in 4%. This is the second year in a row that the report has seen scholarships and grants "fund the largest portion of the 'paying for college pie,' and the largest it's been in a decade," Castellano says.

Since scholarships and grants cover the largest portion of college costs, it's no surprise that more families are receiving them. 49% of families surveyed received a scholarship for the 2016-17 school year, and 47% received a grant. An overwhelming majority (87%) of families who used scholarships said they obtained one from the student's school. These statistics show how families are becoming more fluent in applying for financial aid, Castellano says. 
"If you think about how families are paying, they're really becoming savvier higher education consumers," Castellano adds, noting that 86% of families reported filling out the FAFSA, compared to 74% in Sallie Mae's first report in 2008.

The report also examined regional differences in how families view and pay for college across the country. Students attending college on the West Coast pay, on average, the least for college at $19,181, while students in Northeastern states pay the most: $35,431. Across the board, students report being more cost-conscious than their parents and eliminate schools due to cost at a higher rate.

Students also assume they have more responsibility to pay for loans than parents think they do, says Marie O'Malley, Sallie Mae's senior director of consumer research. "Part of what we're talking about here is almost a communication issue rather than an underlying philosophy," she says.

Overall, more families are starting to eliminate colleges from their wish list due to costs, the report found. 76% of families say they eliminated a college from consideration due to costs. That's an 18-percentage-point jump since 2008, when 58% of families reported the same thing.

While families prioritize college attendance — 86% of families said they knew their child would go to college since the child was in preschool — families still struggle to create a financial plan to pay for it. The percentage of families who have a plan has hovered around 40% for years, Castellano says. "The majority of families do not have a plan to pay for all years of college before the student enrolls," the report reads. "Planning has remained persistently low over time."

O'Malley says that planning includes saving for college, researching costs, budgeting, understanding financial aid eligibility and earning college credit while still in high school, among other options. 
"There's no one right answer for how to do college in terms of the choices you make, whether it's the school you choose, where it's located, what your living arrangements are going to be," O'Malley says. "But there are choices that can be made and resources that can help make it affordable or doable, depending on what direction you take."

The biggest hurdle in planning to pay for college is getting started, Castellano says. "It's those who plan that usually save more and borrow less, and that's really the name of the game when it comes to paying for college," Castellano says. "The earlier you can start a plan, the better."

Dig into more findings from "How America Pays for College 2017" here.

Haley Samsel is an American University student and a USA TODAY digital producer. This story originally appeared on the USA TODAY College blog, a news source produced for college students by student journalists. The blog closed in September of 2017.
Dear Subscriber, please note that you are receiving this email because you have opted into our FREE daily market analysis newsletter. If you would like to be removed from our list, reply to this email with remove in the subject line. Thank you!

December 11th, 2000

Previous Session's Best Performing Alert

One of Friday's Before the Bell Alerts, Emulex Corp. (EMLX), gained 17 points after our alert. RealTimeTraders.com offers free real-time stock alerts throughout the day. Why pay chat rooms and stock pickers for old news?

Morning Watch List

This section contains 6 stocks that are expected to move in 1 to 5 days. Click here for this morning's Watch List Stocks: http://www.realtimetraders.com/realtimetraders/marketinfo/watchlist.asp

Beyond The Numbers

Are the markets ready for a rally? Whether the markets are ready for a bounce or a sustainable rally is a topic that can probably be argued for a long time to come. Some may argue the fundamentals may or may not be in place for a reversal, while others may argue the markets are due for a bounce because of the oversold condition. Then there are those few, including RealTimeTraders.com, who believe it's a good idea not to fight the markets and simply go with the flow.

Important factors may be coming together for a move up

Notice, our heading states "a move up" rather than "a rally." Along with a few market analysts, we are not so sure whether this move up will be a definite reversal or the temporary bounce that many traders are looking for in the next few sessions. Last week we pointed out that Federal Reserve Chairman Greenspan finally gave the markets what they had been anticipating. His favorable comments were interpreted as a sign of a possible lowering of interest rates to keep the economy from slowing to undesirable growth levels. Last Friday the Labor Department announced that the November employment numbers were below the consensus estimates. Over the weekend we learned that Iraq will be back in the crude oil market. 
If Iraq's crude comes online, that may further pressure crude oil prices (which is in the best interest of US consumers). The fact that the election drama is getting close to an end is also a positive for the markets. All the above factors, coupled with the usual year-end rally (better known as the Santa Claus rally), may set the stage for a meaningful move up in the last few weeks of the year 2000.

Economic slowdown may impact future earnings

Last week some of the biggest and best-known names warned that their upcoming quarterly earnings will fall short of Wall Street's expectations. Despite the earnings warnings, markets have started to rally on the Fed Chairman's comments about possibly lowering interest rates to stimulate the economy or keep it from slowing down. While lower rates are expected to help the markets move higher, it is important to note that corporate earnings will not immediately start to move higher. Given that, we believe markets are likely to move in a range for some time before starting to pick up pace. Moreover, unlike the past, when dot-com businesses rocketed higher and higher despite losing more and more money, stock prices in the future will depend a lot on companies' future earnings.

We encourage our readers to take a closer look at companies' earnings power and their position within the sector. What we mean by that is that it is a good idea to stay with the leaders in each sector rather than any stock that is moving higher. A good example of that is Applied Materials (AMAT) within the semiconductor sector. Applied Materials is the strongest stock in the semiconductor equipment world, and as such it has always produced steady and predictable earnings.

Sector Analysis

Healthcare and insurance related stocks continue to attract money

Once again we like to stay with stocks that are outperforming the broad market. 
Over the past few months we have been pointing out health and food related stocks, which are among the defensive stocks that have caught the interest of investors. We at RealTimeTraders.com believe the healthcare and insurance related stocks will continue to move higher regardless of the possibility of technology stocks coming back into the spotlight. Some stocks are worth mentioning here. United Healthcare (UNH) has been hovering around its 52-week high for the past few weeks; Friday the stock closed at a new 52-week high on average volume. Allstate Corp. (ALL) closed at a new 52-week high on Friday for the fourth consecutive day. Oxford Healthcare (OXHP) has set a series of new highs since we started to alert our readers to this stock back in May 2000. We continue to see positive money flow into this stock.

Economic Reports & Bond Markets

This week markets will start out with a light schedule of economic reports. However, as the week heads toward the end, markets will face some important economic reports that will help economists determine the direction and the strength of the economy. Monday, wholesale inventories for the month of October are scheduled to be released at 10:00 AM. Analysts expect inventories to climb higher by 0.3-0.5%.

Market Breadth

Friday's NYSE advance/decline reading was 2076/816 vs. Thursday's reading of 1438/1384. Friday's Nasdaq advance/decline reading was 2818/1146 vs. Thursday's reading of 1554/2361.

New Highs and New Lows

Friday's NYSE new highs/lows reading was 245/60 vs. Thursday's reading of 140/105. Friday's Nasdaq new highs/lows reading was 81/171 vs. Thursday's reading of 40/309.

Important Market Moving News

Visit our BeforeTheBell page - http://www.realtimetraders.com/realtimetraders/stockpicks/beforethebell.asp - for the most important news that may affect your portfolio. This page also provides information on stocks that are going to gap open higher/lower and the reasons, if any. We also alert traders to the actively traded stocks before the bell. 
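The market breadth figures above are often condensed into a single advance/decline ratio, where values above 1.0 indicate more advancers than decliners. A minimal sketch, using the NYSE numbers quoted in this issue:

```python
def ad_ratio(advances, declines):
    """Advance/decline ratio: > 1.0 means positive market breadth."""
    return advances / declines

# Friday's NYSE breadth as reported above: 2076 advancers, 816 decliners
print(round(ad_ratio(2076, 816), 2))  # -> 2.54
```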
Upgrades / Downgrades / Coverage Initiations

Click on the following link for today's upgrades: http://www.realtimetraders.com/realtimetraders/marketinfo/g_minfo.asp?page=upgrades
Click on the following link for today's downgrades: http://www.realtimetraders.com/realtimetraders/marketinfo/g_minfo.asp?page=dngrades
Click on the following link for today's new coverage initiations: http://www.realtimetraders.com/realtimetraders/marketinfo/g_minfo.asp?page=coverage

Please remember that we will only be able to continue to offer our service FREE of charge and add other valuable content to our site if we have enough subscribers. So please pass the word around about our free services to your friends, colleagues, and family members. Also, if you use any newsgroups, chat rooms or forums, we would greatly appreciate it if you could post a message or two about our services. Thank you.

Today's Key Earnings Releases

Company Name        Expected Earnings   Symbol
Analogic            0.30                ALOG
Roper Industries    0.49                ROP
Wallace Computer    0.40                WCS

Disclaimer

All information provided in this email is for informational purposes only. RealTimeTraders.com obtains data from sources it believes to be reliable. Neither RealTimeTraders.com nor its content providers guarantees the accuracy of the content. Although the opinions expressed here are based on sound judgement, experience, and research, no warranty is given or implied as to their true reliability. The responsibility for decisions made from information contained in this email lies solely with the individual making those decisions. RealTimeTraders.com DOES NOT offer investment advice. Consult your financial advisor for all your investment decisions. Do not rely on this publication alone to make your investment decisions. 
By using the information contained in this email, the user acknowledges that he/she has read RealTimeTraders.com's User Agreement - http://www.realtimetraders.com/realtimetraders/disclaimer.htm - and agrees to all terms and conditions.
Dead Space scrolling shooter

Hello! I am new to Game Creator, and I am in the middle of creating a Dead Space themed scrolling shooter. I still need more Dead Space themed animations/sprites, backgrounds, background music, and, especially, good advice. If you can provide any of this for me, please post or message me! Here is a link for my game: My link

Try going to sdb.drshnaps.com (Sprite Database). You might be able to find some custom-made sprites; just look on its sidebar (labeled custom). And by Game Creator, you are referring to Game Maker, right? -pspiq3

I checked on the website, and they did not have any Dead Space sprites.
/*
 * Copyright 2015-present Open Networking Foundation
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.onosproject.net.meter;

import org.onosproject.event.ListenerService;
import org.onosproject.net.DeviceId;

import java.util.Collection;

/**
 * Service for adding, updating, and removing meters. Meters are
 * assigned to flows to rate-limit them and provide a certain
 * quality of service.
 */
public interface MeterService extends ListenerService<MeterEvent, MeterListener> {

    /**
     * Adds a meter to the system and performs its installation.
     *
     * @param meter a meter request
     * @return a meter (with a meter id)
     */
    Meter submit(MeterRequest meter);

    /**
     * Removes a meter from the system and the dataplane.
     *
     * @param meter   a meter to remove
     * @param meterId the meter id of the meter to remove
     */
    void withdraw(MeterRequest meter, MeterId meterId);

    /**
     * Fetches the meter by the meter id.
     *
     * @param deviceId a device id
     * @param id       a meter id
     * @return a meter
     */
    Meter getMeter(DeviceId deviceId, MeterId id);

    /**
     * Fetches all the meters.
     *
     * @return a collection of meters
     */
    Collection<Meter> getAllMeters();

    /**
     * Fetches the meters by the device id.
     *
     * @param deviceId a device id
     * @return a collection of meters
     */
    Collection<Meter> getMeters(DeviceId deviceId);

    /**
     * Allocates a new meter id in the system.
     *
     * @param deviceId the device id
     * @return the allocated meter id, or null if there is an internal error
     *         or there are no meter ids available
     */
    MeterId allocateMeterId(DeviceId deviceId);

    /**
     * Frees the given meter id.
     *
     * @param deviceId the device id
     * @param meterId  the id to be freed
     */
    void freeMeterId(DeviceId deviceId, MeterId meterId);

    /**
     * Purges all the meters on the specified device.
     *
     * @param deviceId device identifier
     */
    default void purgeMeters(DeviceId deviceId) {
        // Default implementation does nothing.
    }
}
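The `allocateMeterId`/`freeMeterId` pair above implies a per-device, pool-style id allocator: freed ids go back into a pool for reuse, and allocation fails once the device's meter table is exhausted. Below is a minimal sketch of that pattern in Python — not the ONOS implementation; `MeterIdPool` and its method names are hypothetical, chosen only to mirror the interface's semantics.

```python
class MeterIdPool:
    """Per-device meter-id allocator: reuses freed ids before minting new ones."""

    def __init__(self, max_ids):
        self.max_ids = max_ids   # device's meter table capacity (assumed known)
        self.next_id = 1         # next never-used id
        self.free = []           # ids returned via free_meter_id

    def allocate_meter_id(self):
        """Return an unused id, or None when no meter ids are available."""
        if self.free:
            return self.free.pop()
        if self.next_id > self.max_ids:
            return None          # table exhausted, mirrors the null return above
        mid = self.next_id
        self.next_id += 1
        return mid

    def free_meter_id(self, meter_id):
        """Return an id to the pool so a later allocation can reuse it."""
        self.free.append(meter_id)


pool = MeterIdPool(max_ids=2)
a = pool.allocate_meter_id()      # 1
b = pool.allocate_meter_id()      # 2
full = pool.allocate_meter_id()   # None: no ids left
pool.free_meter_id(a)
reused = pool.allocate_meter_id() # 1 again, taken from the free list
```

The free list is what makes `withdraw`-then-`submit` cycles cheap: ids are recycled rather than growing without bound.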
Charles Belk being detained by Beverly Hills Police (Photo via Facebook)

In yet another case of "walking while black," a film and TV producer recently told his tale of being held for six hours by Beverly Hills police while attending a pre-Emmys event because he looked like a burglary suspect. Charles Belk, 51, writes in a Facebook post that he was on his way to check his parking meter last Friday in Beverly Hills when he was detained. Earlier he had been handling celebrity talent at an Emmy Awards Gifting Suite, and he was planning on heading to a VIP Emmy pre-party later that night. He was walking from a restaurant on Wilshire Boulevard to his car, parked on La Cienega Boulevard, at about 5:20 p.m. when it happened. He is grateful that he didn't look even more "suspicious" at the time: "In fact, if it wasn't for a text message that I was responding to, I would have actually been running up LaCienega Blvd when the first Beverly Hills Police Officer approached me. Running!"

He said he was surrounded by six police cars, then made to sit on the curb. He was handcuffed and searched, then transported to the Beverly Hills station. He was booked, accused of taking part in an armed robbery at a Citibank location, and held on $100,000 bail. He said his car was impounded, he was denied a phone call, and he wasn't given a very good explanation as to why he was being held. What was he arrested for? Being a tall, bald, black man.

I get that the Beverly Hills Police Department didn't know at the time that I was a law abiding citizen of the community and that in my 51 years of existence, had never been handcuffed or arrested for any reason. All they saw, was someone fitting the description. Doesn't matter if he's a "Taye Diggs BLACK", a "LL Cool J BLACK", or "a Drake BLACK"

Who is Charles Belk? He's a college-educated businessman, who received his Bachelor of Science in Electrical Engineering from the University of Southern California, then an MBA from Indiana University.
He was a consultant for the NAACP, worked on the 1996 Atlanta Olympics, has worked at IBM, and has been on the board of numerous film festivals. Here's his IMDb. Among the long list of credentials he rattled off in his Facebook post, not one of them was "bank robber." What Belk says confuses him most is why the footage from the bank was not reviewed, despite multiple requests, until six hours after he was detained and forced to sit on the curb. The footage showed that it was a different tall, bald, black man, and Belk was free to go. "If it can happen to me," Belk wrote, "it can happen to anyone. Time has come for a change in the way our law enforcement officers 'serve and protect' us. We all do not fit the description." Belk also posted his release online: Charlie Belk's release form (Photo via Facebook) [h/t to Crooks and Liars]
1. Introduction {#sec1}
===============

Total hip arthroplasty (THA) is a highly successful surgical intervention, as it restores function, alleviates pain, and greatly improves quality of life. Despite the overwhelming success of this intervention, surgeons may be averse to recommending THA in the young patient due to the anticipated high activity level resulting in repetitive loading, the excessive demand placed on the hip, and limited implant survivorship. Moreover, surgical treatment of this patient population is challenging due to the often aberrant proximal femoral geometry, previous surgery with retained hardware, leg length discrepancy, and relative acetabular dysplasia or retroversion. To date, there remains a paucity of studies assessing the efficacy of THA in the young patient. Moreover, of the available studies, many discuss the outcomes of THA in patients with inflammatory arthritis \[[@B11]--[@B18]\], leaving few studies describing the outcome of THA in noninflammatory hip degeneration \[[@B2]--[@B25]\]. From an implant standpoint, the primary concern is the increased risk of failure and the high likelihood of revision surgery in the patient's lifetime, making the use of cementless implants appealing, as the likelihood of aseptic loosening is decreased and stable long-term fixation is expected \[[@B21]--[@B14]\]. Additionally, if the implant becomes loose, the revision procedure is technically more facile, as the surgeon does not need to remove cement. A review of the literature shows that in an older population, the use of proximally coated, tapered cementless stems in combination with modern articulations has excellent survivorship, approaching 95% at 20 years \[[@B4]\].
As a regional referral center with an interest in young adult hip preservation in addition to arthroplasty, we frequently encounter young patients with end-stage hip osteoarthritis and consequently are often left with the clinical dilemma of the most appropriate treatment in this age group. Herein, we present our radiographic and clinical results following total hip arthroplasty in a series of young patients under the age of 30 with advanced coxarthrosis primarily secondary to noninflammatory processes using modern cementless implants and bearing couples.

2. Materials and Methods {#sec2}
========================

After IRB approval, we performed a retrospective review of 40 consecutive cementless total hip arthroplasties performed by a single surgeon (CLP) from 1996--2008 in 34 patients under the age of 30, with mean followup of 65 months (range 24--151). The components utilized in all cases were a cementless acetabular component and a proximally porous coated cementless femoral component. The femoral components included a tapered wedge design (Taperloc, Biomet, Warsaw, IN) or a modular S-ROM (Depuy, Warsaw, IN). Acetabular components consisted of monoblock cobalt chrome (CoCr) cups (M2a or Magnum, Biomet, Warsaw, IN, or ASR, Depuy, Warsaw, IN), modular titanium cups with a CoCr insert (Pinnacle, Depuy, Warsaw, IN), or modular titanium cups with a conventional, highly cross-linked, or vitamin-E enhanced cross-linked polyethylene liner (Ranawat-Burnstein Ringloc with Arcom, ArcomXL, or E1 polyethylene; Biomet, Warsaw, IN) ([Table 1](#tab1){ref-type="table"}). Our primary outcome measures included the Harris hip score \[[@B12]\], clinical complications, revisions, and radiographic analysis focusing on both femoral and acetabular radiolucencies. All surgeries were performed in a clean-air operating room, and the operating team wore body-exhaust suits.
All patients received prophylactic intravenous antibiotics 30 to 60 minutes prior to incision, which were continued for 24 hours after surgery. A standard mini-posterolateral approach was used in the majority of cases, though direct lateral or anterolateral approaches were also utilized. All acetabular components were inserted using a standard 1-2 mm press-fit technique. Adjunct acetabular screws were used as needed to achieve initial mechanical stability of the acetabular component. Patients received pharmacologic venous thromboembolic prophylaxis for at least 4 weeks after surgery with low molecular weight heparin or warfarin. Patients were made weight-bearing as tolerated in the immediate postoperative period unless contraindicated based upon an intraoperative complication. Patients were evaluated prior to the index surgery and scheduled postoperatively at six weeks, six months, one year, and biannually thereafter with clinical exam, HHS, and radiographs. The patient records were reviewed to obtain patient demographic data, preoperative diagnosis, operative details, and arthroplasty component information. Preoperative and postoperative Harris hip scores (HHSs) from the latest followup visit are reported \[[@B12]\]. Serial radiographic evaluation included anteroposterior pelvis and groin lateral films of the operative side. Two reviewers (LAA, JMG) evaluated the most recently obtained radiographs of the hip for component migration or subsidence and for radiolucencies in the acetabular zones of De Lee and Charnley \[[@B5]\] and the femoral zones of Gruen \[[@B10]\]. All postoperative complications and any revisions were documented prospectively in our total joint database.

3. Results {#sec3}
==========

The mean age of this group of patients was 22 years (range 15--29), with a mean BMI of 27 (range 19--43). Fifty percent of these patients had prior ipsilateral hip surgery, and 47.5% were Charnley class B or C.
The majority of these patients had a preoperative diagnosis of pediatric hip disease (52.5%) or avascular necrosis (30%), and only 3 patients (7.5%) had a diagnosis of inflammatory arthritis ([Table 1](#tab1){ref-type="table"}). In terms of perioperative factors, we noted a relatively high estimated blood loss (EBL) (mean 456 mL, range 200--1200) and rate of transfusions (15%). The majority of these cases were done through a posterior approach (75%). With regard to component utilization, and likely related to the rate of prior hip surgery, we noted a large proportion of modular femoral stems in the young group (42.5%). The majority of articulations in these patients were metal on metal (MOM) (67.5%), with 89% of these MOM bearings utilizing large heads (head size 36 mm or greater) and 79% utilizing monoblock acetabular components ([Table 1](#tab1){ref-type="table"}). The overall perioperative complication rate was 15%. Dislocation accounted for 67% of these complications, with 10% of these younger patients having had at least one dislocation. Other complications included infection in one patient and pulmonary embolus in one patient ([Table 2](#tab2){ref-type="table"}). Clinical outcomes were measured using the Harris hip score (HHS). We found a mean 33.4-point (95% CI: 28.0--37.6) improvement in the Harris hip score following THA, with a mean preoperative HHS of 62.7 (95% CI: 57.3--68.0) and a mean postoperative score of 94.7 (95% CI: 92.2--97.1). The most recent followup AP and lateral radiographs of the hip were evaluated for femoral and acetabular radiolucencies. In terms of radiographic outcomes, we identified Zone 1 femoral radiolucencies in 4 femurs (10%) and a Zone 2 radiolucency in 1 femur (2.5%). We did not identify any other femoral radiolucencies. On the acetabular side, we found Zone 1 radiolucencies in 2 acetabuli (5%), a Zone 2 radiolucency in 1 acetabulum (2.5%), and a Zone 3 radiolucency in 1 acetabulum (2.5%) ([Table 2](#tab2){ref-type="table"}).
No radiolucencies were progressive, and one implant was deemed radiographically loose. Our aseptic revision rate was 17.5%, with a mean time to revision of 74 months (range 22--124 months) ([Table 2](#tab2){ref-type="table"}). Due to the large number of metal on metal articulations in this series, we divided our aseptic revisions into revisions for metallosis and revisions for reasons other than metallosis. Metallosis accounted for the majority of our aseptic revisions (57%), and when excluding these cases, we found three aseptic revisions for a rate of 7.5%, with one patient revised for instability, one for periprosthetic femur fracture, and one for aseptic loosening of both the femoral and acetabular components ([Table 3](#tab3){ref-type="table"}).

4. Discussion {#sec4}
=============

Surgeons are frequently presented with the clinical dilemma as to the most appropriate treatment option for young patients with advanced coxarthrosis. Much of the existing literature available on total hip arthroplasty in the young patient describes the results when performed for inflammatory arthropathies. Additionally, the majority of early studies involved the use of cemented fixation in these young patients. Many studies evaluating outcomes of cemented total hip arthroplasty in patients less than 30 years of age have shown poor results with high revision rates \[[@B1], [@B22]\]. In a 1984 study of cemented THAs in patients less than 20 years of age with diagnoses of polyarticular inflammatory arthritis, which had an overall aseptic revision rate of 33% at 8-year followup, Roach insightfully noted that "perhaps in the future, non-cemented prostheses may better serve this difficult group of patients" \[[@B22]\]. Subsequently, in contrast to cemented fixation, several studies were published showing much improved clinical outcomes and aseptic revision rates with cementless THA in these younger patients with polyarticular disease \[[@B11]--[@B18]\].
However, in his study of THAs in patients less than 30 years of age, Chandler noted higher revision rates in patients with unilateral arthroplasties and higher activity levels as compared to those with polyarticular inflammatory disease \[[@B1]\]. The improved outcomes in the patients with juvenile inflammatory conditions are thought to be secondary to decreased activity levels due to their polyarticular disease, and this disparity has made it difficult to translate these published results to young patients without inflammatory arthritis. With the present study, we add our results to the limited body of the literature on the use of contemporary cementless total hip arthroplasty with mid- to long-term followup in young patients (\<30 years) with primarily noninflammatory coxarthrosis ([Table 4](#tab4){ref-type="table"}). An examination of the literature shows a common finding of higher rates of revision for aseptic loosening in this younger THA population, consistent with the trend found in this study ([Table 4](#tab4){ref-type="table"}). Nizard et al., Dudkiewicz et al., and Wangen et al. all found very high rates of revision for aseptic loosening in their cohorts (range 15 to 45%) \[[@B7], [@B17], [@B25]\], while Kamath et al., Costa et al., Clohisy et al., Restrepo et al., and our current study found rates from 0--8% \[[@B2], [@B4], [@B13], [@B20]\]. When evaluating the available literature with a majority of noncemented THAs in young patients with primarily noninflammatory arthritis, we can see that the aseptic revision rates are improved compared to the traditionally high revision rates in this young population with other techniques \[[@B1], [@B22]\]. However, there does seem to be a trend toward drastically rising revision rates as followup increases ([Table 4](#tab4){ref-type="table"}).
In most of these studies, a higher rate of loosening was found on the acetabular side compared to the femoral component (3 : 1), while our revision rates were similar on both the femoral and acetabular sides ([Table 4](#tab4){ref-type="table"}). While our aseptic femoral revision rate does seem high at 5%, one of these revisions was in the setting of a periprosthetic fracture, and only one of these femoral revisions was for aseptic loosening of the stem. If this is taken into account, our aseptic femoral revision rate would have been 2.5% at 65-month followup, which is similar to recent meta-analysis data finding aseptic revision rates of cementless femoral components in younger patients to be 1.3% at mean 8.4-year followup \[[@B24]\]. This study is one of the few available studies examining a group of young patients having had THA with a majority of modern large head MOM bearings (70%). A recent study by Girard et al. found low revision rates (4.3% at 9 years) compared to our MOM THA patients (17.5% overall and 10% for metallosis alone at 5.4 years) \[[@B9]\]. However, they were using a metal-polyethylene sandwich acetabular bearing surface with smaller heads (89% 28 mm heads), which has not shown the same revision rates as other large head monoblock metal-on-metal bearings. In other published series, these metal-polyethylene sandwich bearings have had similar revision rates to standard metal-on-polyethylene bearings \[[@B6]--[@B23]\]. Therefore, we feel that our metallosis revision rate of 10% at 5.4 years may be a more accurate reflection of actual revision rates in young patients with large head monoblock metal-on-metal articulations.
To put our results into further perspective, we performed a comparison of this young group of patients to an internal control group of 50 randomly selected cementless total hip arthroplasties performed in 49 patients over the age of 50 during this same time period, matched for the duration of followup and the distribution of the bearing utilized. In a post-hoc power analysis, we found that we were grossly underpowered for most of the comparisons between these groups, and we therefore do not formally present this data as a comparative retrospective study. However, despite not meeting statistical significance, we did observe several findings that we feel are clinically significant and an accurate description of our experience and thus should be highlighted. Among the demographic details, as might be expected, our young patients had a smaller mean BMI of 27 compared to 32 in the older patients (*P* = 0.033). We also noted a trend toward more Charnley A hips in the older group (68% versus 52.5%), which is also logical, as more of the younger group had diagnoses of sequelae of pediatric diseases, which were often bilateral. Further, and again not surprisingly, more of the young patients had a diagnosis of avascular necrosis or osteoarthritis secondary to prior pediatric hip diseases, while the majority of our older patients had a diagnosis of idiopathic osteoarthritis. Both groups had very similar preoperative mean Harris hip scores. In terms of components utilized, we found more modular femoral stems in the young group (43% versus 4%), likely as a result of the altered or small anatomy encountered in these patients with the sequelae of pediatric hip diseases. Among our perioperative data, we noted a trend toward more blood loss (456 versus 403 mL) and postoperative transfusions (15% versus 6%) in the younger group.
We are unsure of the reason for the higher transfusion rate seen in the younger group, as younger patients are often more tolerant of lower hemoglobin levels. Our transfusion trigger was not specifically different between the groups. We believe that this higher transfusion rate may indicate that our trend toward more blood loss in the younger group was an underestimate of the increased blood loss actually seen, especially since intraoperative estimations of blood loss are usually quite subjective. This increased blood loss may be a reflection of the fact that 50% of our young hips had prior surgery, making the THA more difficult and requiring more extensile approaches with more blood loss. We also found a trend toward more dislocations (10% versus 2%) and thus overall complications (15% versus 6%) among the younger group. When looking at complications other than dislocation, the rate of complications was similar between the groups (5% versus 4%). Among our outcomes data, we found a trend in the younger group toward higher overall aseptic revision rates (17.5% versus 8%) and aseptic revisions for reasons other than metallosis (7.5% versus 4%). When we excluded failures for metallosis, our midterm aseptic revision rates in both groups were similar to the existing literature, and we feel that this strengthens this comparison despite statistical insignificance \[[@B3], [@B16]\]. In terms of both radiographic and clinical outcomes, the two groups were notably similar. There are several limitations to this study. The retrospective nature lends itself to several biases, including selection and recall bias. The followup on this study is a limitation in that our mean followup is only 5.4 years, which limits conclusions regarding long-term clinical outcomes, complications, and need for further interventions. Additionally, our young THA cohort had limited numbers, making an attempt to compare this to a control group grossly underpowered.
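Why samples this small cannot distinguish 17.5% from 8% can be illustrated with a generic two-proportion z-test on the reported aseptic revision rates (40 young hips versus 50 older hips). This is only a sketch of the underlying arithmetic, not the authors' actual power analysis, and the function name is our own.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test via the normal approximation.
    Returns (z, p_value)."""
    # Pooled proportion under the null hypothesis of equal rates.
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Overall aseptic revision: 17.5% of 40 young hips vs 8% of 50 older hips.
z, p = two_proportion_z(0.175, 40, 0.08, 50)
# z ≈ 1.37, p ≈ 0.17: the observed difference is not statistically
# distinguishable with samples of this size, consistent with the
# underpowered comparison described above.
```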
However, despite our limited power, this comparison does provide some unique information as none of the other existing literature provides any comparison to an internal control group of older patients, and we have therefore included our findings above. In conclusion, contemporary cementless THA in patients under the age of 30 for diagnoses other than inflammatory arthritis is associated with acceptable functional improvement and radiographic outcomes at midterm followup. There does appear to be a trend toward a higher transfusion rate, higher dislocation rate, and higher midterm overall aseptic revision rate in this young group of patients. The high prevalence of prior pediatric hip surgery in the young THA group may predispose to increased technical difficulty resulting in increased complications and higher revision rates in these younger patients. Although our revision rate was high in the younger patients, it is favorable compared to older techniques and consistent with the limited data available with modern cementless techniques in patients of similar age. Long-term followup of this important group of patients will be imperative to evaluate longevity and the sequelae of implant wear in these patients who are likely to outlive their prosthetic bearings. One or more of the authors is a paid consultant for Biomet Orthopedics. ###### Patient demographics, perioperative data, and component data. 
| Demographics | |
| --- | --- |
| Age at surgery \[mean (range)\] | 22 (15--29) |
| Gender | 24 F (60%), 16 M (40%) |
| BMI \[mean (range)\] | 27 (19--43) |
| Side | 19 L (47.5%), 21 R (52.5%) |
| Prior hip surgery | 20 (50%) |
| Charnley class A | 21 (52.5%) |
| Charnley class B | 10 (25%) |
| Charnley class C | 9 (22.5%) |
| **Diagnosis** | |
| OA secondary to pediatric hip diseases | 21 (52.5%) |
| Avascular necrosis (AVN) | 12 (30%) |
| Inflammatory arthritis | 3 (7.5%) |
| Failed hip fusion | 2 (5%) |
| Septic arthritis | 1 (2.5%) |
| Posttraumatic arthritis | 1 (2.5%) |
| **Perioperative data** | |
| Estimated blood loss \[mean (95% CI)\] | 456 (374--537) |
| Postoperative transfusions | 6 (15%) |
| Intraoperative fractures | 2 (5%) |
| Posterior approach | 30 (75%) |
| Anterolateral approach | 8 (20%) |
| Direct lateral approach | 2 (5%) |
| **Femoral component** | |
| Modular cementless stem | 17 (42.5%) |
| Nonmodular cementless stem | 23 (57.5%) |
| **Acetabular component** | |
| Monoblock CoCr cup | 21 (52.5%) |
| Titanium cup with CoCr insert | 6 (15%) |
| Titanium cup with polyethylene insert | 13 (32.5%) |
| **Articulation** | |
| Metal-on-metal | 27 (67.5%) |
| Metal-on-conventional polyethylene | 11 (27.5%) |
| Metal-on-highly cross-linked polyethylene | 1 (2.5%) |
| Metal-on-vitamin E cross-linked polyethylene | 1 (2.5%) |
| **Femoral head size** | |
| 22 mm to 28 mm | 14 (35%) |
| 32 mm to 38 mm | 12 (30%) |
| 40 mm to 56 mm | 14 (35%) |

###### Complications, revision rates, and radiographic and clinical outcomes.
| Complications | *n* (%) |
| --- | --- |
| Infection | 1 (2.5%) |
| Pulmonary embolism | 1 (2.5%) |
| Dislocation | 4 (10%) |
| Total complications | 6 (15%) |
| **Revisions** | *n* (%) |
| All aseptic revisions | 7 (17.5%) |
| Time to revision \[mean (95% CI)\] | 74 (38--110) |
| **Radiographic outcomes** | *n* (%) |
| Femoral radiolucencies (Gruen zones) | |
| Zone 1 | 4 (10) |
| Zone 2 | 1 (2.5) |
| Zone 3 | 0 |
| Zone 4 | 0 |
| Zone 5 | 0 |
| Zone 6 | 0 |
| Acetabular radiolucencies (De Lee and Charnley zones) | |
| Zone 1 | 2 (5) |
| Zone 2 | 1 (2.5) |
| Zone 3 | 1 (2.5) |
| **Clinical outcomes** | mean (95% CI) |
| Preop Harris hip score | 62.7 (57.3--68.0) |
| Postop Harris hip score | 94.7 (92.2--97.1) |
| Change in Harris hip score | 33.4 (28.0--37.6) |

###### Revision details (7 aseptic revisions---17.5% aseptic revision rate at mean 65-month followup).

| Diagnosis | Charnley class | Femoral head type | Femoral head size | Femoral component | Acetabular component | Articulation | Reason for revision | Revision details | Time to revision (months) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AVN | B | CoCr | 22 | Biomet Integral | Biomet RB | MOP | Recurrent dislocations, poly wear | Exchanged liner and head | 123.7 |
| JRA | C | CoCr | 22 | DePuy SROM | Biomet RB | MOP | Periprosthetic femur fracture | Stem revised | 92.5 |
| OA 2/2 Perthes | A | CoCr | 45 | DePuy SROM | DePuy ASR | MOM-ASR | Aseptic loosening; no metallosis | Stem loose, fibrous fixation cup; complete revision with MOP | 26.7 |
| Failed hip fusion (fused for AVN) | A | CoCr | 28 | Biomet MH Calcar | Biomet RB | MOM (sandwich) | Recurrent dislocation; metallosis found | Cup and stem stable; revised to MOP | 101.8 |
| AVN | A | CoCr | 38 | Biomet Integral | Biomet M2A | MOM-M2A | Metallosis (mild metal debris/excessive scar) | No loosening; revised to MOP | 92.1 |
| OA 2/2 CDH | A | CoCr | 41 | DePuy SROM | DePuy ASR | MOM-ASR | Metallosis | No loosening; revised to MOP | 60.8 |
| OA 2/2 CDH | A | CoCr | 46 | DePuy SROM | DePuy ASR | MOM-ASR | Metallosis | No loosening; revised to MOP | 21.8 |

AVN: avascular necrosis, JRA: juvenile rheumatoid arthritis, OA: osteoarthritis, CDH: congenital dysplasia of the hip, CoCr: cobalt chromium, MOM: metal on metal, MOP: metal on conventional polyethylene.

###### Studies of patients \<30 years old with majority noninflammatory arthritis treated with cementless total hip arthroplasty (sorted by mean followup).

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Study Kamath et al. \[[@B13]\] Costa et al. \[[@B4]\] Clohisy et al. \[[@B2], [@B3]\] Current Study Restrepo et al. \[[@B21], [@B20]\] Nizard et al. \[[@B17]\] Dudkiewicz et al. \[[@B7]\] Girard et al. \[[@B9]\] Wangen et al.
\[[@B25]\] ---------------------------------- -------------------------- ------------------------ --------------------------------- ---------------------- ------------------------------------ -------------------------- ----------------------------- ------------------------- -------------------------- Year pub 2012 2012 2010 2013 2008 2008 2003 2010 2008 Mean age (range) 18 (13--20) 20 (13--30) 20 (12--25) 22 (15--29) 17.64 (13.5--20) 23.4 (13--30) 23.2 (14--29) 25 (15--30) 25 (15--30) Patients (hips) 17 (20) 40 (53) 88 (102) 34 (40) 25 (35) 94 (108) 56 (69) 34 (47) 44 (49) Mean F/U (years) 4.1 4.6 5 5.4 6.6 6.9 7.4 9 13 Prior operations 80%\* 0% Unknown 50% 20% 18% High 27.70% Unknown \% Inflammatory arthritis 0% 0% 12% 7% 26% 13% 29% 4% 0% Surgical approach 100% post 100% AL Post and AL 75% post 20% AL\ 100% AL 90% Post\ Unknown 100% post Post and DL 5% DL 6% DL\ 4% AL Cementless stems 95% 100% 95% 100% 100% 53% 91.30% 94% 100% Cementless cups 100% 100% 100% 100% 100% 92% 91.30% 100% 100% Articulation 70% COC\ 100% MOP 45% MOXLP\ 67.5% MOM 27.5% MOP\ 63% COXLP\ 100% COC 100% MOP 100% MOM (Metasul) 100% MOP 30% MOXLP 29% MOP\ 5% MOXLP\ 31% MOP\ 14% COC\ 5% MOEP 6% COC\ 7% COXLP 5% MOM Infections 0% 1.9% 1% 2.5% 0% 0% 4.3% 4.3% Unknown Dislocations 0% 0% 3.9% 10% 0% 1.9% 7.2% 2.1% 6.1% Femoral aseptic Revision rate 0% 0% 0% 5%\*\* 0% 4.6% 4.3% 0% 0% Acetabular aseptic revision rate 0% 1.9% 2.9% 2.5%\*\* 0% 13.9% 15.9% 4.3% 49% Overall aseptic revision rate 5% 3.8% 7% 7.5%\*\* 0% 15.7% 15.9% 4.3% 49% -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \*Majority of prior operations were core decompression procedures for avascular necrosis. \*\*Failure rates for reasons other than metallosis are listed for the current study. 
Post: posterior approach, AL: anterolateral approach, DL: direct lateral approach. COC: ceramic on ceramic, MOXLP: metal on highly cross-linked polyethylene, MOP: metal on polyethylene, MOEP: metal on E-polyethylene, COXLP: ceramic on highly cross-linked polyethylene, MOM: metal on metal. [^1]: Academic Editor: Nicholas Dunne
There are plenty of reasons our little players could end up injured or out of commission for a period of time or longer. Three reasons that continue to pop up over and over are concussions, dehydration, and goals which topple over. Concussions aren't readily preventable, but the risk can be mitigated with proper training, and with proper treatment their effects can be reduced. You should note that US Youth Soccer doesn't encourage heading until older age groups and that the Centers for Disease Control and Prevention (CDC) has partnered with US Youth Soccer to offer some outstanding information on concussions available for free at http://www.cdc.gov/concussion/sports/. The last two, dehydration and goals toppling over, can be prevented if the adults surrounding the players take the right precautions. It's important to keep these problems in mind whenever our youngsters take the field for practice or games. We need to prepare to either prevent or handle these situations, and we need to take them seriously. The statistics on concussions are both staggering and sobering. Last school year saw 400,000 concussions in high school sports alone. Fifty percent of all ER visits for concussions involved 8 to 19 year olds in sports, and 40 percent of those sports-related concussions involved children between the ages of 8 and 13. Concussions among children doubled between 1997 and 2007. Doctors attribute this to children participating in contact sports at younger and younger ages and to children's increasing size. Although soccer ranks fairly low on the scale of sports contributing to concussions, parents and coaches need to be vigilant. Coaches have a responsibility to teach the skill of heading correctly, since injuries can occur as players leave the ground to contact the ball.
The details of the proper body mechanics of the skill can be found in the Skills School Manual (/assets/1/1/Skills_School_Manual.pdf ) and can be seen in the DVD Skills School – Developing Essential Soccer Techniques (http://www.usyouthsoccershop.com/frontpage-items-us-youth-soccer-skills-school.html). Also use the information in the Heading Guidelines (/assets/1/1/Heading_Guidelines.pdf); all available on the US Youth Soccer website. In general, introduce the skill of heading in the U-10 age group with balancing the ball and a bit of juggling. Do teach basic heading skill, but use it sparingly in training and matches. Then begin to gradually increase the amount of time spent coaching this skill in training sessions from the U-12 age group onward. Recommendations following a concussive episode include taking a week to 10 days off for a mild concussion, and even longer if the hit was particularly hard or the symptoms required an ER or doctor's visit. Those symptoms include, but are not limited to, dizziness, headache, confusion, nausea, ringing in the ears, slurred speech and fatigue. Any time a player has blacked out, even for a few seconds, that player needs to receive medical attention. If a player has had multiple concussions, no matter how many years apart, that player also needs to receive medical attention. Most doctors agree that three is the limit for concussive episodes. Therefore, it's important that parents keep close track of any brain injury their child may have suffered – injuries off the field of play count too. Over the past decade doctors have come to understand how serious concussions are. Studying retired NFL and NHL players, doctors have seen hidden, serious and long-term effects of concussions which have led to more stringent guidelines for youth players to protect their most precious biological asset. Dehydration is easily preventable, yet it still occurs. 
Usually this is due to three factors: athletes don't prepare properly before a match, athletes ignore their need to hydrate and event organizers don't allow for hydration breaks. As a result, we can see players collapsing from heat exhaustion, cramps and disorientation – all symptoms of dehydration. When the weather is hot and humid, dehydration can occur even more quickly. Athletes and event organizers should keep a close eye on the heat index (http://en.wikipedia.org/wiki/Heat_index), a measure that takes into account both temperature and humidity. Once the index reaches a certain level, everyone should take care to provide hydration breaks during a game. When the temperature is high, the body compensates by dilating blood vessels in the skin to allow for more heat loss, but that restricts blood flow to the brain. Athletes shouldn't lose more than two percent of their body weight during any contest or practice. If they do, then they have severe dehydration and need to address the condition immediately. Water isn't necessarily the best source for replenishing the body. First, it can encourage more urination, which defeats the purpose of hydration. Second, it can actually kill a player's thirst. And third, dehydration involves the loss of fluids and electrolytes, the latter of which water doesn't address. But if water is all that's available, then by all means use it. Before a match, players should pre-hydrate with a sports drink. The rule of thumb is 16 to 24 ounces of drink per hour of exercise (note that most youth recreational games run less than an hour). Symptoms of dehydration include dizziness, cramps, muscle fatigue, disorientation and nausea. Ironically, thirst isn't a reliable symptom of dehydration, because dehydration often suppresses thirst. Parents should consider bringing sports drinks so they're prepared, and should absolutely insist that games be interrupted on hot days to allow for hydration breaks. 
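The two numbers in the passage above, the two-percent body-weight rule and the 16-to-24-ounces-per-hour guideline, are easy to turn into a quick weigh-in check. Here is a minimal sketch in Python; the function names and the pound-based example are my own, for illustration only:

```python
def weight_loss_pct(pre_lb, post_lb):
    # Percent of body weight lost during a game or practice
    # (weigh before and after; sweat loss is the main driver).
    return (pre_lb - post_lb) * 100 / pre_lb

def prehydration_oz(hours, rate_oz_per_hour=16):
    # Fluid target using the 16-24 oz per hour of exercise rule of thumb.
    return hours * rate_oz_per_hour

# A 100 lb player who weighs 97 lb after a hot tournament day has lost
# 3% of body weight -- past the 2% warning line, so act immediately.
loss = weight_loss_pct(100, 97)
print(loss)                  # -> 3.0
print(loss > 2.0)            # -> True
print(prehydration_oz(1))    # -> 16
```

A coach could run the weigh-in check after hot sessions; anything over two percent is the signal to stop and rehydrate, per the guidance above.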
Severe dehydration can lead to brain damage, muscle damage, heart damage and even death. Stopping a game for 10 minutes in the middle of a half could be all that's needed to avoid dehydration problems. A goal toppling onto players occurs too often for something so preventable (http://www.cpsc.gov/cpscpub/pubs/soccer.pdf). All goals should be securely anchored, but even the best anchoring can end up being no match for a gaggle of players leaping up to the goal so they can hang or do chin-ups. The primary prevention for this serious event is educating players about the dangers. Every year a handful of players end up being crushed by goals, which is a handful too many. Whether a goal falls during a game or because of players hanging on it, the result can be tragic. Therefore, clubs need to be mindful of the danger and provide proper anchoring in the form of stakes. Sand bags can be shifted off the ground struts and no longer provide the correct counterweight. Stakes require more effort to remove, which is exactly why they are the best for anchoring a goal. Every club likes to have the freedom to move goals around so they can reconfigure fields and help eliminate overplay on the goal mouth ground, but the additional effort to pull up stakes is well worth the added safety the stakes provide. Any time you see players leaping onto goals, you need to speak up, even if it's not your children. Once the goal starts to tip, it falls quickly and heavily, so there isn't time for polite conversation. Every kid thinks it will never happen to them, so even with our admonishments the temptation of that solid crossbar will still attract them. We need to be vigilant and proactive. Injuries in sport are to be expected, but we can protect our children from some common harm by being both practical and watchful. When it comes to concussions, we need to be sure our children don't return to playing too soon, and we need to recognize the symptoms so we can seek medical care. 
We can avoid dehydration by making sure our children drink at regular intervals during games and practice and by insisting that they get hydration breaks when the heat index is high. To save our children from falling goals we need to check that the goals in our children's games and practices are securely anchored, educate our children about the dangers and be the goal police if we see kids playing on goals. We have it in our power to make soccer safer and thereby more enjoyable. Hopefully, working together, we can do just that.
2023-09-12T01:27:17.568517
https://example.com/article/9774
Q: R: Loop through columns, select value from a column and write it to a new column in same row

I have a dataframe in the following format:

    id var1 val1 status1 var2 val2 status2 var3 val3 status3
    123 a    12   false   b    23   true    c    34   true

Here I want to go through each column of the row, get the first occurrence of status true for a variable and save it to a new column. Here is the expected output for the above example. Is there a way to do this without using 2 for loops (a loop inside a loop)?

    id var1 val1 status1 var2 val2 status2 var3 val3 status3 firstOccured
    123 a    12   false   b    23   true    c    34   true    b

A: It seems to me that it would be easier to work with data in long format. So the first step would be to reshape from wide to long:

    dat_long <- reshape(dat, idvar = "id", varying = 2:ncol(dat),
                        direction = "long", sep = "")

Assuming that you have more than one group of id, you can use ave (for the grouping) and match (to get the first index of "true" in status) as follows:

    dat_long <- transform(dat_long,
                          firstOccured = ave(status, id,
                                             FUN = function(x) var[match("true", x)]))

Result:

    dat_long
    #      id time var val status firstOccured
    #123.1 123    1   a  12  false            b
    #123.2 123    2   b  23   true            b
    #123.3 123    3   c  34   true            b

If we need to go back to wide format we can do:

    out <- reshape(dat_long, idvar = "id", timevar = "time",
                   direction = "wide", sep = "")
    out <- out[setdiff(names(out), c("firstOccured1", "firstOccured2"))]
    out
    #      id var1 val1 status1 var2 val2 status2 var3 val3 status3 firstOccured3
    #123.1 123    a   12   false    b   23    true    c   34    true             b

data

    dat <- structure(list(id = 123L, var1 = "a", val1 = 12L, status1 = "false",
                          var2 = "b", val2 = 23L, status2 = "true", var3 = "c",
                          val3 = 34L, status3 = "true"),
                     .Names = c("id", "var1", "val1", "status1", "var2", "val2",
                                "status2", "var3", "val3", "status3"),
                     class = "data.frame", row.names = c(NA, -1L))
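For readers who want the logic of the accepted answer without the reshape machinery, the same "first var whose status is true" lookup can be sketched in plain Python. The dictionary keys mirror the example columns above (including the question's spelling "firstOccured"); first_occurred is a helper name I've made up for illustration:

```python
# One wide-format row, keyed like the example data frame above.
row = {"id": 123,
       "var1": "a", "val1": 12, "status1": "false",
       "var2": "b", "val2": 23, "status2": "true",
       "var3": "c", "val3": 34, "status3": "true"}

def first_occurred(row, n_groups=3):
    # Scan status1..statusN in order and return the matching var for the
    # first "true", or None if no status is "true".  A single pass over
    # the column groups, so no nested loops are needed.
    for i in range(1, n_groups + 1):
        if row["status%d" % i] == "true":
            return row["var%d" % i]
    return None

row["firstOccured"] = first_occurred(row)
print(row["firstOccured"])   # -> b
```

This mirrors what match("true", x) does per id group in the R answer: it stops at the first hit rather than scanning all groups.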
2023-12-17T01:27:17.568517
https://example.com/article/9897
Three Philippine Catholic priests say they and some church leaders who are critical of the president's bloody crackdown on illegal drugs have received death threats from unknown people. Father Robert Reyes and two other priests told reporters Monday that they are hesitant to seek protection from the police, because the police are behind the anti-drug campaign that has left thousands of mostly poor drug suspects dead; instead, they are considering seeking court protection. Reyes says the threats, which were mostly made in cellphone messages to some bishops and priests, eased after Manila Archbishop Luis Antonio Tagle notified President Rodrigo Duterte about them last month. Duterte has often lashed out at Catholic bishops over sex abuses by clergy, calling the church the most hypocritical institution and questioning God's existence.
2023-11-23T01:27:17.568517
https://example.com/article/6666
/* * This file is part of RawTherapee. * * Copyright (c) 2019 Jean-Christophe FRISCH <natureh.510@gmail.com> * * RawTherapee is free software: you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation, either version 3 of the License, or * (at your option) any later version. * * RawTherapee is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with RawTherapee. If not, see <https://www.gnu.org/licenses/>. */ #pragma once #ifdef GUIVERSION #include <cairomm/cairomm.h> #include <glibmm/ustring.h> #include "editcoordsys.h" #include "../rtengine/coord.h" class ObjectMOBuffer; class RTSurface; /** @file * * The Edit mechanism is designed to let tools (subscribers) communicate with the preview area (provider). * Subscribers will be tools that need to create some graphics in the preview area, to let the user interact * with it in a more user-friendly way. * * Do not confuse with _local_ editing, which is another topic implemented in another class. The Edit feature * is also not supported in batch editing from the File Browser. * * Edit tools can be of 2 types: pipette editing and object editing. * * ## Pipette editing * * By using this class, a pipette mechanism can be handled on the preview. * * Each pipette Edit tool must have a unique ID that identifies it and lets the ImProcCoordinator * or other mechanism act as appropriate. They are all defined in rtgui/editid.h. A buffer type has to be given * too, to know which kind of buffer to allocate (see EditSubscriber::BufferType). 
* * Only the first mouse button can be used to manipulate the pipette on the Preview, which is why the developer has * to implement at least the following 4 methods: * - mouseOver * - button1Pressed * - drag1 * - button1Released * * Actually, only curves use this class, and everything is handled for the curve implementer (as much as possible). * See the curve's class documentation to see how to implement the curve's pipette feature. * * ### Event handling * * The mouseOver method is called on each mouse movement, except when dragging a point. This method can then access * the pipetteVal array values, which contain the mean of the pixels read in the buffer, or -1 if the cursor is outside * of the image. In this case, EditDataProvider::object is also set to 0 (and 1 if over the image). * * When the user clicks the left mouse button while pressing the CTRL key, button1Pressed will be called. * Setting "dragging" to true (or false) is not required for pipette-type editing. * * The drag1 method will be called on all subsequent mouse moves. The pipetteVal[3] array will already be filled with * the mean of the values read under the cursor (actually a fixed square of 8px). If the BufferType is BT_SINGLEPLANE_FLOAT, * only the first array value will be filled. * * Then button1Released will be called to stop the dragging. * * ## Object editing * * By using this class, objects can be drawn and manipulated on the preview. * * The developer has to handle the buttonPress, buttonRelease, drag and mouseOver methods that he needs. There * are buttonPress, buttonRelease and drag methods dedicated to each mouse button, for better flexibility * (e.g. button2Press, button2Release, drag2 will handle events when mouse button 2 is used first). RT actually * does not handle multiple mouse button events (e.g. button1 + button2), only one at a time. The first button pressed * sets the mechanism; all other combined button presses are ignored. 
* * The developer also has to fill 2 display lists with objects of the Geometry subclass. Each geometric shape * _can_ be used in one or the other, or in both lists at a time. * * The first list (visibleGeometry) is used to be displayed on the preview. The developer will have to set their state * manually (see Geometry::State), but the display shape, line width and color can be handled automatically, or with * specific values. To be displayed, the F_VISIBLE flag has to be set through the setActive or setVisible methods. * * The second list (mouseOverGeometry) is used in a backbuffer, the color used to draw the shape being the id of the * mouseOverGeometry. As an example, you could create a line to be shown in the preview, but create 2 filled Circle objects * to be used as mouseOver detection, one on each end of the line. The association between both shapes (visible and mouseOver) * is handled by the developer. To be displayed on this backbuffer, the F_HOVERABLE flag has to be set through the * setActive or setHoverable methods. For overlapping mouse over geometry, the priority is set by the order in the list: * the last item is detected first (think of it like a stack piled up). * * * ### Event handling * * RT will draw in the back buffer all mouseOverGeometry set by the developer once the Edit button is pressed, and handle * the events automatically. * * RT will call the mouseOver method on each mouse movement where no mouse button is pressed. * * On mouse button press over a mouseOverGeometry (that has F_HOVERABLE set), it will call the button press method corresponding * to the button (e.g. button1Pressed for mouse button 1), with the modifier key as parameter. Any other mouse button pressed at * the same time will be ignored. It's up to the developer to decide whether this action is starting a 'drag' or 'pick' action, * by setting the 'action' parameter to the appropriate value. 
* * If the user sets action to ES_ACTION_DRAGGING, RT will then send drag1 events (to stay with our button 1 pressed example) on each * mouse movement. It's up to the developer of the tool to handle the dragging. The EditProvider class will help you in this by * handling the actual position in various coordinate systems and ways. * * When the user releases the mouse button, RT will call the button1Release event (in our example). The developer then has * to set action to ES_ACTION_NONE. * * If the user sets action to ES_ACTION_PICKING, RT will keep in memory the mouseOver object that was selected when pressing the mouse * (e.g. button 1), as well as the modifier keys. * * The element is said to be picked when the mouse button is released over the same mouse over object and with the same active * modifier keys. In this case, the corresponding picked event (e.g. picked1 in our example) will be called and the 'picked' flag will be true. * If any of those conditions is false, picked1 will still be called to terminate the initiated picking action, but 'picked' * will be false. This is necessary because the user may want to update the geometry if the picking is aborted. The developer then has * to set action to ES_ACTION_NONE. * * Picking an on-screen element corresponds to single-clicking on it. No double click is supported so far. * * Each of these methods has to return a boolean value saying whether the preview (i.e. the displayed * geometry) has to be refreshed or not. * * ## Other general internal implementation notes * * When a tool is being constructed, unique IDs are assigned to the EditSubscribers of the Pipette type. * Then the EditorPanel class will ask all ToolPanels to register the 'after' preview ImageArea object as data provider. * The Subscribers now have to provide a toggle button to click on to start the Edit listening. 
When toggling on, the Subscriber * registers itself with the DataProvider, then an event is thrown through the standard ToolPanelListener::panelChanged * method to update the preview with new graphics to be displayed. If a previous Edit button was active, it will be deactivated * (the Edit buttons are mutually exclusive). For the Pipette type, a buffer will be created and has to be populated * by the developer in rtengine's pipeline. The unique pipette ID will be used to know where to fill the buffer, as each pipette * will need different data, corresponding to the state of the image right before the tool that needs pipette values. E.g. for * the HSV tool, the Hue, Saturation and Value curves are applied on the current state of the image. That's why the pipettes * of the H, S and V curves will share the same data of this "current state", otherwise the read values would be wrong. * * When the Edit process stops, the Subscriber is removed from the DataProvider, so buffers can be freed up. * A new ToolPanelListener::panelChanged event is also thrown to update the preview again, without the tool's * graphical objects. The Edit button is also toggled off (by the user or programmatically). * * It means that each Edit button toggled on will start an update of the preview which might or might not create * a new History entry, depending on the ProcEvent used. 
* */ class RGBColor { double r; double g; double b; public: RGBColor (); explicit RGBColor (double r, double g, double b); explicit RGBColor (char r, char g, char b); void setColor (double r, double g, double b); void setColor (char r, char g, char b); double getR (); double getG (); double getB (); }; class RGBAColor : public RGBColor { double a; public: RGBAColor (); explicit RGBAColor (double r, double g, double b, double a); explicit RGBAColor (char r, char g, char b, char a); void setColor (double r, double g, double b, double a); void setColor (char r, char g, char b, char a); double getA (); }; /// @brief Displayable and MouseOver geometry base class class Geometry { public: /// @brief Graphical state of the element enum State { NORMAL, /// Default state ACTIVE, /// Focused state PRELIGHT, /// Hovered state DRAGGED, /// When being dragged INSENSITIVE /// Displayed but insensitive }; /// @brief Coordinate space and origin of the point enum Datum { IMAGE, /// Image coordinate system with image's top left corner as origin CLICKED_POINT, /// Screen coordinate system with clicked point as origin CURSOR /// Screen coordinate system with actual cursor position as origin }; enum Flags { F_VISIBLE = 1 << 0, /// true if the geometry have to be drawn on the visible layer F_HOVERABLE = 1 << 1, /// true if the geometry have to be drawn on the "mouse over" layer F_AUTO_COLOR = 1 << 2, /// true if the color depend on the state value, not the color field above }; /// @brief Key point of the image's rectangle that is used to locate the icon copy to the target point: enum DrivenPoint { DP_CENTERCENTER, DP_TOPLEFT, DP_TOPCENTER, DP_TOPRIGHT, DP_CENTERRIGHT, DP_BOTTOMRIGHT, DP_BOTTOMCENTER, DP_BOTTOMLEFT, DP_CENTERLEFT }; protected: RGBColor innerLineColor; RGBColor outerLineColor; short flags; public: float innerLineWidth; // ...outerLineWidth = innerLineWidth+2 Datum datum; State state; // set by the Subscriber float opacity; // Percentage of opacity Geometry (); virtual 
~Geometry() {} void setInnerLineColor (double r, double g, double b); void setInnerLineColor (char r, char g, char b); RGBColor getInnerLineColor (); void setOuterLineColor (double r, double g, double b); void setOuterLineColor (char r, char g, char b); RGBColor getOuterLineColor (); double getOuterLineWidth (); double getMouseOverLineWidth (); void setAutoColor (bool aColor); bool isVisible (); void setVisible (bool visible); bool isHoverable (); void setHoverable (bool visible); // setActive will enable/disable the visible and hoverable flags in one shot! void setActive (bool active); virtual void drawOuterGeometry (Cairo::RefPtr<Cairo::Context> &cr, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) = 0; virtual void drawInnerGeometry (Cairo::RefPtr<Cairo::Context> &cr, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) = 0; virtual void drawToMOChannel (Cairo::RefPtr<Cairo::Context> &cr, unsigned short id, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) = 0; }; class Circle : public Geometry { public: rtengine::Coord center; int radius; bool filled; bool radiusInImageSpace; /// If true, the radius depend on the image scale; if false, it is a fixed 'screen' size Circle (); Circle (rtengine::Coord& center, int radius, bool filled = false, bool radiusInImageSpace = false); Circle (int centerX, int centerY, int radius, bool filled = false, bool radiusInImageSpace = false); void drawOuterGeometry (Cairo::RefPtr<Cairo::Context> &cr, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; void drawInnerGeometry (Cairo::RefPtr<Cairo::Context> &cr, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; void drawToMOChannel (Cairo::RefPtr<Cairo::Context> &cr, unsigned short id, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; }; class Line : public Geometry { public: rtengine::Coord begin; rtengine::Coord end; Line (); Line (const rtengine::Coord& begin, const rtengine::Coord& end); Line (int 
beginX, int beginY, int endX, int endY); void drawOuterGeometry (Cairo::RefPtr<Cairo::Context> &cr, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; void drawInnerGeometry (Cairo::RefPtr<Cairo::Context> &cr, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; void drawToMOChannel (Cairo::RefPtr<Cairo::Context> &cr, unsigned short id, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; }; class Polyline : public Geometry { public: std::vector<rtengine::Coord> points; bool filled; Polyline (); void drawOuterGeometry (Cairo::RefPtr<Cairo::Context> &cr, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; void drawInnerGeometry (Cairo::RefPtr<Cairo::Context> &cr, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; void drawToMOChannel (Cairo::RefPtr<Cairo::Context> &cr, unsigned short id, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; }; class Rectangle : public Geometry { public: rtengine::Coord topLeft; rtengine::Coord bottomRight; bool filled; Rectangle (); void setXYWH(int left, int top, int width, int height); void setXYXY(int left, int top, int right, int bottom); void setXYWH(rtengine::Coord topLeft, rtengine::Coord widthHeight); void setXYXY(rtengine::Coord topLeft, rtengine::Coord bottomRight); void drawOuterGeometry (Cairo::RefPtr<Cairo::Context> &cr, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; void drawInnerGeometry (Cairo::RefPtr<Cairo::Context> &cr, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; void drawToMOChannel (Cairo::RefPtr<Cairo::Context> &cr, unsigned short id, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; }; class Ellipse : public Geometry { public: rtengine::Coord center; int radYT; // Ellipse half-radius for top y-axis int radY; // Ellipse half-radius for bottom y-axis int radXL; // Ellipse half-radius for left x-axis int radX; // Ellipse half-radius for right 
x-axis bool filled; bool radiusInImageSpace; /// If true, the radius depend on the image scale; if false, it is a fixed 'screen' size Ellipse (); Ellipse (const rtengine::Coord& center, int radYT, int radY, int radXL, int radX, bool filled = false, bool radiusInImageSpace = false); void drawOuterGeometry (Cairo::RefPtr<Cairo::Context> &cr, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; void drawInnerGeometry (Cairo::RefPtr<Cairo::Context> &cr, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; void drawToMOChannel (Cairo::RefPtr<Cairo::Context> &cr, unsigned short id, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; }; class OPIcon : public Geometry // OP stands for "On Preview" { private: Cairo::RefPtr<RTSurface> normalImg; Cairo::RefPtr<RTSurface> prelightImg; Cairo::RefPtr<RTSurface> activeImg; Cairo::RefPtr<RTSurface> draggedImg; Cairo::RefPtr<RTSurface> insensitiveImg; static void updateImages(); void changeImage(Glib::ustring &newImage); void drawImage (Cairo::RefPtr<RTSurface> &img, Cairo::RefPtr<Cairo::Context> &cr, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem); void drawMOImage (Cairo::RefPtr<RTSurface> &img, Cairo::RefPtr<Cairo::Context> &cr, unsigned short id, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem); void drivenPointToRectangle(const rtengine::Coord &pos, rtengine::Coord &topLeft, rtengine::Coord &bottomRight, int W, int H); public: DrivenPoint drivenPoint; rtengine::Coord position; OPIcon (const Cairo::RefPtr<RTSurface> &normal, const Cairo::RefPtr<RTSurface> &active, const Cairo::RefPtr<RTSurface> &prelight = {}, const Cairo::RefPtr<RTSurface> &dragged = {}, const Cairo::RefPtr<RTSurface> &insensitive = {}, DrivenPoint drivenPoint = DP_CENTERCENTER); OPIcon (Glib::ustring normalImage, Glib::ustring activeImage, Glib::ustring prelightImage = "", Glib::ustring draggedImage = "", Glib::ustring insensitiveImage = "", DrivenPoint drivenPoint = DP_CENTERCENTER); 
const Cairo::RefPtr<RTSurface> getNormalImg(); const Cairo::RefPtr<RTSurface> getPrelightImg(); const Cairo::RefPtr<RTSurface> getActiveImg(); const Cairo::RefPtr<RTSurface> getDraggedImg(); const Cairo::RefPtr<RTSurface> getInsensitiveImg(); void drawOuterGeometry (Cairo::RefPtr<Cairo::Context> &cr, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; void drawInnerGeometry (Cairo::RefPtr<Cairo::Context> &cr, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; void drawToMOChannel (Cairo::RefPtr<Cairo::Context> &cr, unsigned short id, ObjectMOBuffer *objectBuffer, EditCoordSystem &coordSystem) override; }; class OPAdjuster : public Geometry // OP stands for "On Preview" { }; inline void RGBColor::setColor (double r, double g, double b) { this->r = r; this->g = g; this->b = b; } inline void RGBColor::setColor (char r, char g, char b) { this->r = double (r) / 255.; this->g = double (g) / 255.; this->b = double (b) / 255.; } inline double RGBColor::getR () { return r; } inline double RGBColor::getG () { return g; } inline double RGBColor::getB () { return b; } inline void RGBAColor::setColor (double r, double g, double b, double a) { RGBColor::setColor (r, g, b); this->a = a; } inline void RGBAColor::setColor (char r, char g, char b, char a) { RGBColor::setColor (r, g, b); this->a = double (a) / 255.; } inline double RGBAColor::getA () { return a; } inline void Geometry::setInnerLineColor (double r, double g, double b) { innerLineColor.setColor (r, g, b); flags &= ~F_AUTO_COLOR; } inline void Geometry::setInnerLineColor (char r, char g, char b) { innerLineColor.setColor (r, g, b); flags &= ~F_AUTO_COLOR; } inline void Geometry::setOuterLineColor (double r, double g, double b) { outerLineColor.setColor (r, g, b); flags &= ~F_AUTO_COLOR; } inline double Geometry::getOuterLineWidth () { return double (innerLineWidth) + 2.; } inline void Geometry::setOuterLineColor (char r, char g, char b) { outerLineColor.setColor (r, g, b); flags &= 
~F_AUTO_COLOR; } inline double Geometry::getMouseOverLineWidth () { return getOuterLineWidth () + 2.; } inline void Geometry::setAutoColor (bool aColor) { if (aColor) { flags |= F_AUTO_COLOR; } else { flags &= ~F_AUTO_COLOR; } } inline bool Geometry::isVisible () { return flags & F_VISIBLE; } inline void Geometry::setVisible (bool visible) { if (visible) { flags |= F_VISIBLE; } else { flags &= ~F_VISIBLE; } } inline bool Geometry::isHoverable () { return flags & F_HOVERABLE; } inline void Geometry::setHoverable (bool hoverable) { if (hoverable) { flags |= F_HOVERABLE; } else { flags &= ~F_HOVERABLE; } } inline void Geometry::setActive (bool active) { if (active) { flags |= (F_VISIBLE | F_HOVERABLE); } else { flags &= ~(F_VISIBLE | F_HOVERABLE); } } inline Geometry::Geometry () : innerLineColor (char (255), char (255), char (255)), outerLineColor ( char (0), char (0), char (0)), flags ( F_VISIBLE | F_HOVERABLE | F_AUTO_COLOR), innerLineWidth (1.5f), datum ( IMAGE), state (NORMAL), opacity(100.) { } inline RGBAColor::RGBAColor () : RGBColor (0., 0., 0.), a (0.) { } inline RGBColor::RGBColor () : r (0.), g (0.), b (0.) { } inline RGBColor::RGBColor (double r, double g, double b) : r (r), g (g), b (b) { } inline RGBColor::RGBColor (char r, char g, char b) : r (double (r) / 255.), g (double (g) / 255.), b (double (b) / 255.) { } inline RGBAColor::RGBAColor (double r, double g, double b, double a) : RGBColor (r, g, b), a (a) { } inline RGBAColor::RGBAColor (char r, char g, char b, char a) : RGBColor (r, g, b), a (double (a) / 255.) 
{ } inline Circle::Circle () : center (100, 100), radius (10), filled (false), radiusInImageSpace ( false) { } inline Rectangle::Rectangle () : topLeft (0, 0), bottomRight (10, 10), filled (false) { } inline Polyline::Polyline () : filled (false) { } inline Line::Line () : begin (10, 10), end (100, 100) { } inline Ellipse::Ellipse () : center (100, 100), radYT (5), radY (5), radXL (10), radX (10), filled (false), radiusInImageSpace (false) { } inline Circle::Circle (rtengine::Coord& center, int radius, bool filled, bool radiusInImageSpace) : center (center), radius (radius), filled (filled), radiusInImageSpace ( radiusInImageSpace) { } inline Circle::Circle (int centerX, int centerY, int radius, bool filled, bool radiusInImageSpace) : center (centerX, centerY), radius (radius), filled (filled), radiusInImageSpace ( radiusInImageSpace) { } inline Line::Line (const rtengine::Coord& begin, const rtengine::Coord& end) : begin (begin), end (end) { } inline Line::Line (int beginX, int beginY, int endX, int endY) : begin (beginX, beginY), end (endX, endY) { } inline Ellipse::Ellipse (const rtengine::Coord& center, int radYT, int radY, int radXL, int radX, bool filled, bool radiusInImageSpace) : center (center), radYT (radYT), radY (radY), radXL (radXL), radX (radX), filled (filled), radiusInImageSpace (radiusInImageSpace) { } #endif
2024-04-04T01:27:17.568517
https://example.com/article/8779
Sunday, 21 September 2014 Wine bars are back with a vengeance. The much-derided symbol of 90s Conran-infused yuppiedom has risen again. Clapton has seen the Danish-inspired Verden and wine shop-bar Pie Franco open in the last 6 months, hot on the heels of Sager and Wilde on Hackney Road. Across town, wine-focused bars are opening, some with a focus on cheese and charcuterie, others with more comprehensive menus. Mission E2 is the latest, opened a fortnight ago in Bethnal Green by Charlotte and Michael Sager-Wilde. Its focus is on the wine and cuisine of California, its name a nod to the exciting Mission district in San Francisco, which is the new beating heart of creative Californian cooking. It was where I was staying exactly a year ago as I ate my way through the Golden State. Situated in a railway arch on the blossoming Paradise Row, it's a great mix of London and California. A gigantic palm tree fills the arch, with a deep grey polished concrete floor and huge bi-fold doors that open up the whole space to the terrace out front. On this balmy September evening you could have easily believed you were in the trendy Los Angeles suburb of Echo Park. There is a reasonably sized wine list of sensibly priced, exclusively Californian wines by the glass and bottle, and a more extensive and expensive menu for the real connoisseur. The main list did us fine; we enjoyed a glass of Mission Fizz, which looked totally flat but must have been full of invisible bubbles. A Sonoma County Trousseau Gris was light and minerally, and the Nero d'Avola from Mendocino County was so deep and sultry that it tasted of death, in a good way. Prices started at £4.50 a glass for (presumably great) house wines, up to about £10.50 for more interesting choices. With a standard £20 mark-up on all bottles, there's a big incentive to push the boat out a bit. The food menu is impressive, with smaller bites, starters, mains, sharing dishes and desserts. 
We could have tried everything, but settled on nduja arancini and globe artichoke to start. The arancini were relatively subtle but there was enough nduja to give a bit of fiery warmth. The globe artichoke was just perfectly cooked and served with a buttery anchovy emulsion to dip the leaves in. It's such a nice dish for leisurely nibbling through with a good glass of white, and the heart at the end is the ultimate reward for the perseverance. Choosing mains was particularly hard, with ox cheek lentils and salsa verde, rabbit with girolles and polenta, and a cuttlefish and mussel stew on the menu. In the end we settled on their platter of lamb chops, priced at £38, but the six generous chops could have easily fed three people. They were incredible, beating even Tayyabs on taste, if not price. Garlicky, herby, perfectly charred, brilliantly fatty, and the charred lemons just added to the stickiness. We shared a dulce de leche cheesecake for dessert, which was nicely pungent and came with chunks of cinder toffee. At £50 a head, it's not cheap, but it felt totally justified for the quality of food and wine. You'd pay about the same in San Francisco, and boy, what a saving on the airfare. There's a brunch menu and plenty more to try on the evening menu. And if it's true to Californian form, the menus will evolve with the season, so plenty of reason to go back and make my way through the wine list. Low-key but warm service ensures that you'll have a relaxed, enjoyable time. And I'm just delighted to have a slice of California so close to home. Thursday, 4 September 2014 The brunch backlash has started. The weekend-only in-between meal has been singled out as a defining feature of the much-derided urban creative class's self-indulgence, and of the international sameness of the gentrification aesthetic. Brunchers are pulled apart for queuing for tables, drinking bottomless mimosas and caring too much about where's hot and not. 
It may not be a real backlash, though; rather, some clickbait from the good folk at the Grauniad to get those much-derided (but desirable for the advertisers!) urban creatives sharing the link all over social media. Because what's not to love about - let's be real - eggs for a late breakfast at the weekend. If you've worked your socks off all week, barely having time to wolf down a piece of toast before going to work each day, why not enjoy the most important meal of the day, slowly, with friends, when you get to the weekend? Hackney, as I've documented, is in the midst of a brunch revolution. There are now so many places to get your fill of eggs at the weekend that it's very rare to have to wait for a table. There are now high-end options, Antipodean twists, classic greasy spoons and Mediterranean influences. But nowhere has really, really specialised in only doing that classic eggy, bacony thing well...until now. Hash E8 opened a couple of weeks ago halfway between Dalston and Clapton on Dalston Lane. It's an all-day cafe - a 'short order' cafe they say, which is American English for a short menu of diner-style food that can be cooked up quickly, to order. So far the short order menu focuses on all things piggy and breakfasty. These guys source all their pig from a farm in Yorkshire, and you'll find that pig making its way into bacon, sausages, sausage patties, bacon jam and their signature slice of confit pork belly, which appears in many of the dishes and is available on the side. It's a much richer and more punchy option than your standard bacon. I've been a couple of times already - it's that good - and made my way through some signature dishes and specials. The Piggy Muffin is their spin on the McDonald's breakfast classic. 
It's epic: between the two muffin halves you'll find a homemade hash brown (more on that in a minute), confit belly slice, crisp bacon, a slice of regulation processed cheese, a mini omelette (one egg, whipped and fried) and their own bacon jam. It comes with a side of their deeply flavoured chutney and is held together with a skewer, which my piggy friend abandoned as he squeezed the whole thing together to eat as a burger. Bravo, Alistair. Another signature is their Belly Benedict, which is your classic poached eggs on top of their confit belly and spinach, topped with a silky hollandaise and then some flakes of their umami dust (bonito and black sesame, I think). An American short order influence comes in the shape of their home fries, gorgeously crisp and flavoursome. On my second visit the special was a Hash Benedict, which was the aforementioned egg combo but on top of a whopping round hash cake. It tasted German style, like reibekuchen, with slivers of potato and a decent amount of sweet onion to give it full flavour. I added confit belly to mine, because it's too good not to have, and it came with a lightly pickled beetroot and cucumber salad, continuing the Mitteleuropaeisch theme. We went all-out on gluttony, and chased our sizeable savoury brunch with their sweet special: a French toast sandwich of peanut butter, Nutella and banana. My arteries may not have thanked me for it, but it was worth the extra clogging. Hash E8 brings out the piggy side of me, and I thought I may as well go the whole hog. They keep it diner style on the drinks. No pretending that a kale and flaxseed smoothie will save your soul here: just orange juice (from concentrate), filter coffee (by the Clapton-based Roasting Shed) and cups of tea. For breakfast booze fans, there is a small selection of craft beer and a Bloody Mary. There are plans to open in the evenings too, but the owners say they are mastering the daytime service first. 
Based on my two early visits, I'd say they've mastered it already: friendly, efficient service, fair prices for the size and quality, and already doing a very steady trade. I know that Hash E8 is going to be a regular haunt, and I'm already dreaming of my next slice of confit pork belly. Oh, hello! North East Eats is your source for recommendations of good places to eat in the North-Eastern corner of London, mainly focused on Hackney and the surrounding areas. I am interested in all food and don't discriminate. I'm as likely to be found guzzling the latest hipster hamburgers as I am seeking out the most authentic Portuguese cafe in deepest Leyton. I can often be spotted getting cold queueing for street food, or getting wet cycling in a suit after work in search of a good meal. I live in Lower Clapton, Hackney, which is seeing a boom in delicious foodie establishments and brunch culture, so there's a bit of a slant towards Clapton and brunch, which are both excellent, obviously. Editorial policy: No freebies, no PRs, no sponsored posts, no editing other than for factual accuracy. These views are all my own, based on what I experience as a paying customer. I hope you feel you can trust the content as a result.
2023-10-08T01:27:17.568517
https://example.com/article/5275
AP source: Libya releases 4 US military personnel Protesters march against what they said was the extension of the mandate of the National Congress at Martyrs' Square in Tripoli December 27, 2013. REUTERS/Ismail Zitouny WASHINGTON (AP) — Four U.S. military personnel investigating potential evacuation routes in Libya were taken into custody at a checkpoint and then detained briefly by the Libyan government before being released, a U.S. official said Friday night. No one was injured. The military personnel were taken to the U.S. Embassy after their release, a Defense Department official said. The official was not authorized to discuss the incident by name and requested anonymity. The four were supporting U.S. Marine security forces protecting the American Embassy, the official said. They were likely U.S. special operations forces, which have been deployed to Libya. An altercation apparently took place at a checkpoint near the town of Sabratha, the official said. Reports of gunfire could not be confirmed. After they were detained at the checkpoint, the Americans were transferred to the Ministry of the Interior and held for a few hours, the official said. The U.S. Embassy in Tripoli includes a security detail. The embassy's personnel are restricted in their movements in Libya. ___ Associated Press writers Bradley Klapper in Washington and Josh Lederman in Honolulu contributed to this report.
2024-05-05T01:27:17.568517
https://example.com/article/4914
Masters W70 400 metres world record progression This is the progression of world record improvements of the 400 metres W70 division of Masters athletics.
2024-05-05T01:27:17.568517
https://example.com/article/4740
Monday, October 17, 2016 The Step-by-Step Blueprint I Used to Build Zero to $1.5M Revenue in 7 Months Get prepared to make $12,000 or more every month from staffing and recruitment. You can transform your interests into a niche Human Resources lifestyle recruiting business. Learn the step-by-step formula I used for building a $2.6M revenue staffing agency and recruiting service in one year ($1.5M in 7 months). Learn how to start up with no money. Learn how to follow your interests as a recruiter. Learn how to attract amazing clients extremely quickly. Learn how to charge a premium price. Learn how to fund your payroll without credit. Learn how to outsource your payroll and administration. Learn how to create systems to free up your time. Learn how to collect your money the easy way. Learn how to manage your business while you travel. Learn how to build a business you can sell. This Start & Grow Your Staffing & Recruiting Lifestyle Business coupon course is for people who are interested in a great entrepreneurial venture that they can start with extremely little start-up money. Anywhere from the totally inexperienced, who have never been in recruiting, to people who have 20+ years of staffing and recruitment experience. This Start & Grow Your Staffing & Recruiting Lifestyle Business coupon course is likewise for staffing, recruitment and HR professionals and entrepreneurs who have considered starting a staffing agency or a recruitment business. If you already work in this industry, you probably want more control over how you do things. You want to explore ways to get a bigger piece of the profit pie. You want to escape micromanagement and inflexible procedure. Likewise, you want to put the pieces of the day together your way and choose how long you work. You want to set up systems that help you work less and earn more. 
The Problem with Starting a Staffing Service or Recruiting Business if You Have Never Done It? It Can Be Complicated and Disastrous if You Don't Know What to Do. If you are somebody who is totally new to the recruiting business or recruiting business development - or if you are an internal employee of a job agency, or a virtual recruiter, or even somebody who works in digital and online recruitment - the questions are numerous. How do you begin your recruitment business without money? Which segment of the staffing market will you serve? How will you build a book of recruitment business fast? How will you fund your payroll for temporary and contract employees? How will you ensure companies actually pay your staffing agency? How do you stay in staffing agency compliance and cover yourself legally? How do you not get eaten alive by agency payroll pressures and workers' compensation costs? How do you promote your recruiting services inexpensively and keep the flow of candidates coming in? How do you automate your recruitment sales and administration? How do you hire other recruiting specialists extremely cheaply? Maybe the biggest questions are these: How do you build freedom into the equation? How do you work less and earn more? How do you free up your time and begin removing yourself from the daily operations? Start and Grow Your Staffing and Recruitment Lifestyle Business answers these questions, in addition to others. My team and I have packaged up into step-by-step, simplified content the lessons, tools, procedures and problem-solving systems borne out of my own journey from zero to $1.5M in 7 months, when I first started my own business and set out as a Staffingpreneur. 
This Start & Grow Your Staffing & Recruiting Lifestyle Business coupon course uncovers the potential disasters of starting your own recruitment business or staffing agency, and it gives you the solutions before you experience them. It sets out a down-to-earth, achievable and easy-to-follow program for reaching the stated goals. The course is designed to help staffing and recruitment service entrepreneurs build their business foundation in order to grow it into a sellable asset that can be converted into a lifestyle business or eventually sold.
2024-07-17T01:27:17.568517
https://example.com/article/8536
Sometimes I can’t believe how Californian California is. Women walk around half-naked, waiters call patrons “dude,” and medical marijuana is legal. But I wondered just how legal. Could anyone buy it? Even me, who doesn’t have cancer, AIDS, arthritis, glaucoma or even any previous pot-smoking experience? Medical marijuana isn’t really legal -- in 2005, the Supreme Court said federal anti-drug laws trump state laws -- but California and 11 other hippie states have been flipping off Washington for years. Finding a medical marijuana distributor is shockingly easy, as Times columnist Sandy Banks noted in her recent columns on getting pot to treat arthritis. Sprinkled innocuously around L.A. County are more than 200 dispensaries that look like health food stores or pharmacies -- including three just at the intersection of Fairfax and Santa Monica. To shop at these places, though, you need a doctor’s recommendation on an official form. Once you have that, no California cop can arrest you for holding up to eight ounces. That amount, I’m guessing, was based on conservative medical estimates of how much Snoop Dogg would need if he came down with glaucoma at the same time Animal Planet aired a “Meerkat Manor” marathon. I made an appointment at a medical office recommended by Shirley Halperin, the coauthor of the new book, “Pot Culture: The A-Z to Stoner Language & Life.” Halperin chose our particular clinic less for its medical expertise than the fact that it shared a parking lot with a pot dispensary. Stoners are very clearheaded when it comes to avoiding extra effort. As I sat in the tiny waiting room, filling out my medical history and getting nervous, Halperin assured me that no one she knows had been rejected, which seemed convincing because the only people sitting near me were two healthy looking guys in their 20s. When I got called in, I entered a doctor’s office different from any I’d ever been in. 
It contained only a tiny desk, two chairs, a small TV and two cans of Glade. Also, the doctor wore a Hawaiian shirt. He took my blood pressure and asked what I was suffering from. “Anxiety,” I said. And then “occasional insomnia.” And even though he seemed to be moving on, I blurted something about headaches. The only malady that would have made me more similar to every human being throughout history would have been “these painful little pieces of skin that peel up next to my fingernails.” The doctor followed up on my insomnia, however, and asked if I was having work problems or relationship issues as he handed me a photocopy of a handwritten list of psychiatrists. He’d give me a recommendation for medical marijuana for six months, he said, and would extend it to one year if I saw a therapist. The whole thing took about four minutes. I paid the receptionist $80 -- cash only -- and she gave me a filled-out form that states I am under medical care and supervision for the treatment of a “medical problem.” I felt touched that the doctor hadn’t just written I was suffering from “stuff.” At the dispensary, a Harley-riding bouncer checked my newly minted medical forms and driver’s license and let us inside. The dispensary was like a really nice coffee shop, with paintings on the wall for sale, couches and a drum kit upstairs for live jazz. A pretty woman behind the counter -- kind of a pot sommelier -- brought out a huge menu, divided into sativa (uppers) and indica (the downers all dealers sell) varieties, with names such as Bluedot Popcorn, Hindu Skunk and Purple Urkel. Like a high-end tea shop, she used chopsticks to procure the buds from glass jars -- all organic and grown in California -- which she had me smell and look at under a microscope. I settled on a gram of Sugar Kush, which sounded appealing until I wondered what kind of breakfast cereal would cure Sugar Kush munchies. Honey Bunches of Fudge? Frosted Mini Frosted Minis? Count Plaqula? 
Next, I took the advice of a fellow patient and went to buy some “edibles” at the Farmacy. This is the most famous of the L.A. dispensaries, with three locations, only two of which are right next to a Whole Foods. The Westwood branch is a sleek health food store that also sells vitamins and lots of Goji berries, and, unlike at the doctor’s office, all the salespeople wear white lab coats. As a first-timer, I got to spin a wheel to determine my free gift medicine, which was a pot-infused lollipop. I also bought a vegan chocolate-chip cookie medicine and a chocolate bar medicine, and deeply considered the gelato medicine. Wondering if I had an unusually easy time, I called High Times magazine’s 2006 Stoner of the Year, Doug Benson, a comedian who just released “Super High Me,” a documentary in which he stops smoking pot for 30 days and then, for his next month, is high every waking minute. As part of the documentary, he got his medical marijuana certificate. “I told my doctor I had a weak back. And when he said, ‘How long?’ I said, ‘About a week back.’ ” He did not get rejected. As a patient or a comedian. In fact, Benson buys all his pot from a dispensary now. Even with the sales tax, he pays the same price and, he said, gets more consistent quality than he did from a dealer. “I had a dealer who came by my house, but this is more convenient,” he said. When I asked him how that could be, he explained: “I used to have to sit there and listen to his stories. Because dealers like to hang out.” I always wondered what would happen if marijuana were legalized for anyone over 18. It seems it already has been, and nothing happened. -- jstein@latimescolumnists.com
2023-08-13T01:27:17.568517
https://example.com/article/6065
Dectin-1: a signalling non-TLR pattern-recognition receptor. Dectin-1 is a natural killer (NK)-cell-receptor-like C-type lectin that is thought to be involved in innate immune responses to fungal pathogens. This transmembrane signalling receptor mediates various cellular functions, from fungal binding, uptake and killing, to inducing the production of cytokines and chemokines. These activities could influence the resultant immune response and can, in certain circumstances, lead to autoimmunity and disease. As I discuss here, understanding the molecular mechanisms behind these functions has revealed new concepts, including collaborative signalling with the Toll-like receptors (TLRs) and the use of spleen tyrosine kinase (SYK), that have implications for the role of other non-TLR pattern-recognition receptors in immunity.
2024-04-23T01:27:17.568517
https://example.com/article/7341
Molecular mechanisms of azole resistance in fungi. This paper reviews the current status of our understanding of azole antifungal resistance mechanisms at the molecular level and explores their implications. Extensive biochemical studies have highlighted a significant diversity in mechanisms conferring resistance to azoles, which include alterations in sterol biosynthesis, target site, uptake and efflux. In stark contrast, few examples document the molecular basis of azole resistance. Those that do refer almost exclusively to mechanisms in laboratory mutants, with the exception of the role of multi-drug resistance proteins in clinical isolates of Candida albicans. It is clear that the technologies required to examine and define azole resistance mechanisms at the molecular level exist, but research appears distinctly lacking in this most important area.
2024-06-05T01:27:17.568517
https://example.com/article/5439
Even a late push by Romney in Facebook "likes" was unable to close the gap, with the final count being nearly 3-to-1 in Obama's favor. Mitt Romney can take pride in the fact that analysts showed his smaller legion of fans to be more active in sharing content and liking his posts. In the last 18 hours former Gov. Romney was losing nearly 1,000 supporters an hour: on average he was "unliked" by 982 former social media buddies each hour. Romney's former fans are "unliking" him at a steady rate. The former Massachusetts governor's Facebook effort was marred in the days leading up to the elections by allegations that some of his "likes" were due to some kind of glitch (or, according to conspiracy theorists, "hacking") on Facebook. A number of people complained that their accounts mysteriously "liked" Romney and could not be changed. Presidential hopeful Mitt Romney and his VP candidate Paul Ryan lost the social media race, with Barack Obama receiving nearly 3 times as many likes. Of course, take the President's social media cachet with a grain of salt. Social media's most active members tend to skew to a younger demographic, and the younger demographic also tends to be remarkably consistent at being inconsistent in terms of voting and political fundraising. In elections that are often decided by narrow margins in battleground states, one must wonder if a candidate who loses the social media battle by such a wide margin will be able to win in future elections, as social media's influence grows by the year. quote: This is why he lost. People just didn't trust him. They trusted Obama over Romney. And that's pretty remarkable, because our President hasn't done a whole lot in four years. In fact most of the stuff he did, he did in his first 18 months. Because that's when Democrats had control of the House. Republicans are always more interested in partisanship than doing what's best for America. 
That's why they held hostage the legislation that would help middle class Americans unless the rich got the Bush tax cuts extended as well. This election is a resounding defeat for Republicans. When we appoint properly liberal SC judges and enact filibuster reform in the next 4 years, they'll be done. There's only so much gerrymandering they can do to keep control of the House when all the demographics are working against them. Last time I checked, Harry Reid's and Obama's idea of bi-partisanship is bend over and agree with everything we propose; the word compromise isn't even in their vocabulary. This country cannot exist in any semblance of its prior glory under the boot of the Socialist party in power. What you and the rest of the leftists are creating in government is tantamount to a banana republic with a super rich ruling class and everyone else dirt poor. Need proof? The ruling class and their friends are exempt from Obamacare while the rest of us are stuck with it. Debt = GDP and they want to spend more money in stimulus, but first they have to raise the debt ceiling, which means Debt > GDP. Throw tax increases on the wealth creators and watch that skyrocket! They have taken away nearly all incentive to be successful, capping success at <=$250K. Why would anyone work their ass off to pay taxes only to be forced to pay for those that don't pay taxes and collect welfare? This is the fundamental problem with your large-scale tribalism (Socialism). The fact of the matter is that partisanship has expired; the differences are too great. This is a fundamental disagreement in the future direction of our republic that eventually leads to violent confrontation (let's hope not). You in your bubble seem to think the first world is somehow immune to economic depressions, social unrest, mass starvation... civil war. You are mistaken; you and others like you have brought the potential of all of that upon us.
2024-07-29T01:27:17.568517
https://example.com/article/8746
SMOS processes are utilized to increase transistor (MOSFET) performance by increasing the carrier mobility of silicon, thereby reducing resistance and power consumption and increasing drive current, frequency response and operating speed. Strained silicon is typically formed by growing a layer of silicon on a silicon germanium substrate or layer. Germanium can also be implanted, deposited, or otherwise provided to silicon layers to change the lattice structure of the silicon and increase carrier mobility. The silicon germanium lattice associated with the silicon germanium substrate is generally more widely spaced than a pure silicon lattice, with spacing becoming wider with a higher percentage of germanium. Because the silicon lattice aligns with the larger silicon germanium lattice, a tensile strain is created in the silicon layer. The silicon atoms are essentially pulled apart from one another. Relaxed silicon has a conduction band that contains six equal valence bands. The application of tensile strain to the silicon causes four of the valence bands to increase in energy and two of the valence bands to decrease in energy. As a result of quantum effects, electrons effectively weigh 30 percent less when passing through the lower energy bands. Thus, lower energy bands offer less resistance to electron flow. In addition, electrons meet with less vibrational energy from the nucleus of the silicon atom, which causes them to scatter at a rate of 500 to 1,000 times less than in relaxed silicon. As a result, carrier mobility is dramatically increased in strained silicon compared to relaxed silicon, providing an increase in mobility of 80 percent or more for electrons and 20 percent or more for holes. The increase in mobility has been found to persist for fields up to 1.5 megavolts/centimeter. 
These factors are believed to enable device speed increase of 35 percent without further reduction of device size, or a 25 percent reduction in power consumption without reduction in performance. The use of germanium in SMOS processes can cause germanium contamination problems for IC structures, layers and equipment. In particular, germanium outgassing or outdiffusion can contaminate various components associated with the fabrication equipment and integrated circuit structures associated with the processed wafer. Germanium outgassing can be particularly problematic at the very high temperatures and ambient environments associated with integrated circuit fabrication. For example, conventional IC fabrication processes can utilize temperatures of approximately 1000° C., which enhance germanium outgassing. Germanium outgassing can also negatively affect the formation of thin films. In addition, germanium outdiffusion can cause germanium accumulation or “pile up” at the interface of layers. High levels of germanium at the surface of a wafer can adversely affect the formation of silicide layers. In particular, high concentration of germanium in a top surface of a substrate can adversely affect the formation of silicide layers above the source and drain regions. The germanium concentration at the top surface can be exacerbated by the fabrication steps associated with source and drain regions and gate structures. Germanium contamination of IC equipment is becoming a more serious issue as IC fabrication processes explore the advantages of the higher carrier mobility of strained silicon (SMOS) devices. IC fabrication equipment that tends to become contaminated with germanium can include deposition chambers, furnaces, diffusion equipment, etching tools, etc. The quartzware associated with such equipment is particularly susceptible to germanium contamination. 
Germanium contamination is particularly problematic when equipment is used in both non-germanium and germanium fabrication lines. Shared equipment must be purged of germanium contamination before it is used in non-germanium processes, because such contamination is particularly damaging to metals used during conventional IC fabrication. Further, high levels of germanium contamination can be problematic even for strained silicon (SMOS) processes. Flash devices are particularly sensitive to low level germanium contamination, because Flash technology uses IC structures and processes that are incompatible with germanium. For example, germanium contamination may cause data retention problems for the Flash memory cell. It is nevertheless desirable to use equipment associated with the Flash fabrication line with germanium containing products (e.g., SMOS products). Thus, there is a need for an efficient process for decontaminating a wafer surface. Further, there is a need for a system and a method which reduces germanium contamination. Even further, there is a need for a method of removing germanium from a strained silicon layer. Yet further, there is a need for a process which reduces the adverse effects of germanium on silicidation processes. Further, there is a need for a decontamination process that allows shared equipment to be used in both a Flash production line and a germanium production line.
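The lattice-mismatch mechanism behind the tensile strain described above lends itself to a quick numerical sketch. The snippet below is illustrative only and not part of the source text: it uses the standard published lattice constants for silicon and germanium, and Vegard's law (a linear interpolation that real SiGe alloys only approximately follow) to estimate the misfit strain in a thin silicon layer grown on a relaxed Si(1-x)Ge(x) substrate.

```python
# Illustrative estimate of the tensile misfit strain in strained silicon.
# Lattice constants are standard literature values; Vegard's law is an
# approximation, so treat the numbers as order-of-magnitude only.
A_SI = 5.431  # relaxed silicon lattice constant, angstroms
A_GE = 5.658  # germanium lattice constant, angstroms

def sige_lattice_constant(x):
    """Vegard's-law lattice constant of Si(1-x)Ge(x); x is the Ge fraction."""
    return (1 - x) * A_SI + x * A_GE

def misfit_strain(x):
    """Tensile strain of a thin Si layer forced to match the SiGe lattice."""
    return (sige_lattice_constant(x) - A_SI) / A_SI

for x in (0.1, 0.2, 0.3):
    print(f"Ge fraction {x:.0%}: strain ~ {misfit_strain(x):.2%}")
```

At a typical germanium fraction of around 20 percent this gives a strain of roughly 0.8 percent, which is the regime in which mobility improvements of the size quoted above are reported.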
2023-12-18T01:27:17.568517
https://example.com/article/8939
Tunisian PM Mohammed Ghannouchi resigns over protests Published 27 February 2011 Image caption: Mohammed Ghannouchi was seen as too closely linked to the old regime Tunisian Prime Minister Mohammed Ghannouchi has announced on state TV that he is resigning - a key demand of demonstrators. He was speaking at a news conference in Tunis, after making a lengthy speech defending his record in government. Mr Ghannouchi is seen as being too close to former President Zine al-Abidine Ben Ali, who was toppled in an uprising last month. Mr Ghannouchi, 69, had served under Mr Ben Ali since 1989. "After having taken more than one week of thinking, I became convinced, and my family shared my conviction, and decided to resign. It is not fleeing my responsibilities; I have been shouldering my responsibilities since 14 January [when Mr Ben Ali fled]," he said. "I am not ready to be the person who takes decisions that would end up causing casualties," he added. "This resignation will serve Tunisia, and the revolution and the future of Tunisia," he added. Within hours a replacement was named for Mr Ghannouchi - Beji Caid-Essebsi, 84, who served as foreign minister in the government of the late President Habib Bourguiba. Earlier in the day, police in Tunis fired tear gas and warning shots to disperse the latest demonstration calling for a new government and a new constitution on a third day of violence. Huge protests On Friday and Saturday, anti-government protesters held huge rallies calling for Mr Ghannouchi's resignation. At least three people were killed in clashes between hundreds of demonstrators and security forces in Tunis on Saturday. Tunisia's government had insisted it was introducing reforms as fast as it could, and that it was planning to hold elections by July. Media caption: The BBC's Paul Moss in Tunis: "Police are giving protesters merciless beatings" But those promises did not seem to satisfy the protesters, correspondents say. 
The fall of Mr Ben Ali after 23 years in power sparked similar uprisings in the Arab world, including one that led to the downfall of long-time Egyptian President Hosni Mubarak on 11 February and another under way in Libya. The trigger for the protests in Tunisia was a desperate act by a young unemployed man on 17 December 2010.
Psychosocial adaptation in relatives of critically injured patients admitted to an intensive care unit. The aim of this study is to analyze how the length of time a patient spends in an Intensive Care Unit (ICU) affects close relatives, with regard to specific clinical variables of personality, family relationships and fear of death. The study group consisted of 57 relatives of seriously ill patients admitted to the ICU of the "Virgen del Rocío" Rehabilitation and Trauma Hospital (Seville, Spain). The instruments applied were: a psychosocial questionnaire, a clinical analysis questionnaire, a family environment scale and a fear of death scale. The relatives of patients admitted to the ICU obtained higher scores in hypochondria, suicidal depression, agitation, anxious depression, guilt-resentment, paranoia, psychasthenia, psychological maladjustment and self-expression, and lower scores in fear of their own death, when compared with interviews with the same relatives 4 years later. The length of time a patient spent in the ICU influenced relatives in some clinical variables of personality, family relationships and fear of death.
The Home of Scottish Combat Sports

Tag Archives: diet

At the Griphouse we have a large number of wonderful people who are all interested in health, fitness and the ancient arts of arse kickery. The nutritional side of things is something we haven't really tackled as a gym. Sure, guys have been passing around knowledge and supporting each other, but we have never had a large-scale, Griphouse-approved eating plan. That's what this experiment is all about: a big pile of us doing a nutritional intervention and seeing what happens. One of the things I've picked up from cutting weight is that accountability makes for serious body composition change. Our fighters routinely lose around 5-10kg in the week leading up to a weigh-in. They have a time frame, a date to be a certain weight. If you miss weight you are accountable and your team mates will hold you to it. To that end we have set up this Facebook page, where you can discuss the diet, share results and suggest alternative recipe ideas. Feel free to add yourself here. We are interested in fat loss over weight loss though, so don't worry about having to sit in salt baths and saunas for ages. All you need to do is follow the program for as long as you like and reap the rewards.

Program Principles

1. Eat the same thing every day

We tend to run into trouble with the question "what do I want to eat?". By the time we ask it we are usually pretty hungry and cake seems to be the answer. By planning your meals and preparing them in advance this problem is dealt with. On Sunday decide what you will eat and make a big pile of it - enough to do the entire week, or half of the week as I tend to do. Refrigerate or freeze it to keep it from going weird. Tupperware is your friend here. If you have your meals prepared, any poor food choices are on you. You do not need variety. Yes, it will be boring and by Friday you will hate the taste of it, but it's an easy path to improved body composition.
This will involve mostly low-carb fare such as soups, stews, that sort of thing. More details tomorrow.

2. Drink 5 cups of green tea a day

This isn't because I think green tea has any mystical properties. It's mainly down to ensuring hydration is maintained and, well... I like green tea. If you are caffeine sensitive any other herbal tea is fine. When we are dehydrated we get hungry and eat stuff, as the body can get a lot of hydration from solid food sources. It is often difficult to distinguish hunger from thirst, and all of your systems tend to run sub-optimally when in a dehydrated state. The act of making tea also seems to distract from thoughts of food. That might just be me though.

3. A protein shake when you start to get hungry and want cakes

Protein is the most satiating of the macronutrients and is a great weapon to stave off hunger. Most protein companies get their whey protein from the same places, so the individual brands sell pretty much the same product. More importantly, get one you like the taste of - it should almost be like a treat (sort of; it's not really as good as a Snickers, but chocolate protein with hazelnut milk tastes like Nutella, which is pretty boss).

4. An apple of your choice as a snack whenever you want something crappy

Again, these should be with you wherever you go in case you start getting hungry. Apples are nutrient dense and have a sweetness that exceeds their calorie content. Pretty decent if you have a Haribo problem. So if after eating 3 meals, 5 cups of tea, a protein shake and 2 apples you still want doughnuts... well, that's pretty impressive actually.

5. Replace two meals each week with whatever you like

Two meals a week, eat something you want. It can be burgers, pizza, doughnuts, whatever. Try not to go absolutely mental and stop when you've had enough. Plan it in advance and enjoy it.

And that is pretty much the program. Tomorrow I'll post the meal plans and dietary guidelines, with the aim of kicking off on Monday the 22nd.
The acyl-CoA synthetase "bubblegum" (lipidosin): further characterization and role in neuronal fatty acid beta-oxidation. Acyl-CoA synthetases play a pivotal role in fatty acid metabolism, providing activated substrates for fatty acid catabolic and anabolic pathways. Acyl-CoA synthetases comprise numerous proteins with diverse substrate specificities, tissue expression patterns, and subcellular localizations, suggesting that each enzyme directs fatty acids toward a specific metabolic fate. We reported that hBG1, the human homolog of the acyl-CoA synthetase mutated in the Drosophila mutant "bubblegum," belongs to a previously unidentified enzyme family and is capable of activating both long- and very long-chain fatty acid substrates. We now report that when overexpressed, hBG1 can activate diverse saturated, monounsaturated, and polyunsaturated fatty acids. Using in situ hybridization and immunohistochemistry, we detected expression of mBG1, the mouse homolog of hBG1, in cerebral cortical and cerebellar neurons and in steroidogenic cells of the adrenal gland, testis, and ovary. The expression pattern and ability of BG1 to activate very long-chain fatty acids implicate this enzyme in the pathogenesis of X-linked adrenoleukodystrophy. In neuron-derived Neuro2a cells, mBG1 co-sedimented with mitochondria and was found in small vesicular structures located in close proximity to mitochondria. RNA interference was used to decrease mBG1 expression in Neuro2a cells and led to a 30-35% decrease in activation and beta-oxidation of the long-chain fatty acid, palmitate. These results suggest that in Neuro2a cells, mBG1-activated long-chain fatty acids are directed toward mitochondrial degradation. mBG1 appears to play a minor role in very long-chain fatty acid activation in these cells, indicating that other acyl-CoA synthetases are necessary for very long-chain fatty acid metabolism in Neuro2a cells.
Top latest Five blue gemstones Urban news

Each stone resonates with a slightly different geometric pattern, and these same patterns live in our systems, organs and so on. To hold a Rose Quartz up to our heart is to ask the structural patterns active in our subtle bodies to align themselves and work smoothly and with purity.

But how do you know where a blockage is in your chakra system? If you are ill at the moment, it is clear there is a blockage somewhere. You can track these blockages by using a pendulum, for instance. Take the pendulum in your hand and place your other hand on the root chakra. Connect with your higher self and observe the movement the pendulum makes. Do the same for all chakras, and if there is a change in motion, make a note of it. If your pendulum barely moves at all, the chakra does not take up much energy, or no energy at all; if it moves in large uncontrolled motions, the chakra probably takes up too much energy. You can fix the disharmony in the chakras through the laying on of stones. Once frequencies higher and purer than the level of the chakra at that moment are applied, the chakra will start to vibrate faster and the slower frequencies of the blockages will gradually be dissolved.

Who should use it? This gem is very useful for serious-minded people who believe in hard work. It helps one take failures as stepping stones to success and rise in life despite all the factors working against one's success. The Blackstar is known to produce great business leaders and management professionals.

Quality: ROSE QUARTZ. Rose quartz is a superb heart-healing gemstone. It is a nature cure that can be used for managing any issue that requires emotional healing. In Medical Astrology it helps those suffering from heart ailments, gynaecological complaints, epilepsy, mental ailments and so on.

Who should wear it: Considered the gem of the Moon, it is best suited for people born under the zodiac sign Cancer. It is said to keep their erratic temperament under control.

In the Brihadeswara temple in Thanjavur, all around the inner walls of the garbhagriha, various karanas (dance poses) of Bharatanatyam have been painted in vivid colours that have not faded even after a thousand years.

It is powerful against skin disorders and ulcers (both internal and external), and many people have faith that it has the potential to bring unqualified success to the one who wears it. Who should wear it?

Some gemstones are more likely to have visible lines than others, and you can find out more about them here: YourGemologist. The following is a list of blue gems and minerals shown in our database. Click on the pictures to get full details; click on the X to remove the gem from the list.

This is a gem of Ketu (Dragon's Tail). By wearing this gem one is able to subdue his enemies and remove the influence of any negative forces focusing on him.

Physical problems that are connected with a blockage in this chakra are: kidney failure, bladder and large intestine troubles, but also frigidity, nymphomania and diseases of the uterus, ovaries, prostate and testicles. Food allergies and intolerances are also associated with this chakra.

AMETHYST - Benefits: It is traditionally regarded as a great aid in getting rid of intoxication. It is even claimed that if you drink wine from a cup made of Amethyst, it will lose its intoxicating effect. This stone is believed to heal females suffering from gynaecological challenges. It is also widely used by people involved in occult sciences, because it is said to impart strong spiritual powers.

The Crown chakra is depicted as a lotus flower with a thousand white leaves, the symbol of infinity, tuned in to the highest form of consciousness and the divine.

QUARTZ-AMETHYST stimulates the brow and heart chakras. It is a superb thought amplifier, enhances intuition, and bonds the mental, emotional and spiritual bodies to work together as a unit. It also stimulates all the meridians and acupressure points and may be worn anywhere on the body.
Rubenesque What do an ironing board and Inesse have in common? Nothing! Inesse explains why life is not easy for her in her native country. "Men always look at me, especially in summer," she tells us. "I try not to recognize their 'looks' and just march to my destination with an 'angry,' confident kinda-look. It helps most of the time, but not always. T-shirts are no help. And imagine me wearing black spandex. Mucho problemos! They stared at me in London when I went there to model for SCORE and Voluptuous, and in my home country they stare even more! Usually I try not to wear over-sexy and revealing clothes. Men have very lousy pick-up lines! So primitive! Like they are the first! They look at me and lose their minds. 'Hey, baby, come over here!' they yell at me in the street. The worst happened when I was learning to drive.
Cloth or disposable diapers? The vast majority of parents choose the latter for their convenience and for what is generally perceived as a lower “yuck factor.” But would parents dismiss cloth diapers so quickly if it turned out disposables could put little boys at risk for testicular cancer or decreased fertility as adults? Ivana Kadija, a former Manhattan ad exec who’s the new owner of The Stork, Central Virginia’s only diaper delivery service, hopes to deliver that warning to Charlottesvillians and win new business in the process. Her message may already be soaking in with some families. In the month since she purchased The Stork, she says she's increased the customer base by 50 percent, going from 12 families to 18. But what supports her claims? Research published in such publications as Discover, Mothering, and online at WebMD.com over the past three years has suggested the cancer and reduced fertility risks, as well as the risk of childhood asthma caused by the chemicals in the high-absorbency diapers made by companies such as Procter & Gamble (Pampers) and Kimberly-Clark (Huggies). Kadija says those studies, coupled with cloth diapers’ perceived benefits for the environment, made cloth an obvious choice for her and her husband, freelance writer Brian Wimer, to use on their own two-year-old daughter, Luka. But are cloth diapers really environmentally superior? A study by the Arthur D. Little International Management and Technology Consulting Firm in Cambridge, Massachusetts, suggests that while disposable diapers generate more solid waste, reusable diapers generate more process solid waste, i.e. wastewater treatment sludge and incinerator ash. In addition, the study says, reusable diapers consume more water and release higher levels of total water pollutants. But Kadija counters that disposables use at least as much water during production.
And, she adds, “the water issue pales in comparison with waste management issues, human feces and viruses in landfills issues, energy consumption issues, and infant health issues.” Not surprisingly, the disposable diaper companies trash Kadija’s assessment. “Disposable diapers are one of the most thoroughly evaluated and tested products in the world,” says Lisa Jester, spokesperson for Procter & Gamble. Jester says that none of the health claims have ever been accepted by the wider scientific and medical community; she adds that many of the environmental claims against disposable diapers relate to antiquated diapers. “We use 40 percent less material than 15 years ago,” Jester says. That translates to less waste, and since disposable diapers are more absorbent and there are fewer leaks, that means less laundering of bedding and clothes, a further saving in water, she says. “There’s nothing we’ve been taught as physicians to suggest a long-term health risk from disposable diapers,” says local pediatrician Carlos Armengol. The studies cited by Kadija, he says, are reported in reputable journals, but don’t offer conclusive evidence – just the need for more research. And he offers a different study that finds higher testicular cancer rates in men who wore cloth diapers as babies. The conclusion? More research is needed. Although Armengol believes “it’s clear” that cloth diapers are better for the environment and that they also may reduce the likelihood of diaper rash, he doesn’t fault “the overwhelming majority” of parents in his practice who use disposables for their convenience – and he won’t try to deter them. And it’s convenience, the disposable companies say, that keeps today’s busy parents buying disposables. For example, if a child soils a cloth diaper while on a summer outing, the parents must then lug the hot, fetid mass with them until they get home to their diaper pail.
A disposable simply requires a trash can. But although Kadija acknowledges that using cloth requires some advance planning – such as bringing plastic bags along on trips – she insists that doing what’s good for your child and for the environment doesn’t have to mean a big, smelly mess. Kadija says people who imagine saggy, soggy, leaky diapers fastened with sharp pins should take another look. “There’s a new way of doing cloth diapers,” she says. Instead of the huge safety pins of yore, today’s cloth diapers are attached one of two ways: with Y-shaped rubber clips that hook onto the diaper to keep it secure, or by a waterproof diaper cover that attaches with Velcro and helps retain moisture. As for the most odious task associated with cloth diapers – ye olde scraping – Kadija says it’s no longer necessary. The Stork’s laundering service, National Linen Company in Roanoke, takes care of all waste disposal and returns the clean diapers to Kadija each week. For local parents who decide to give cloth diapering a go, Kadija says she delivers and picks up once a week in her 1987 “electrician-style” Chevy Astro (which will soon be fitted with a divider to keep Kadija breathing fresh air). Although the delivery service is limited to a two-mile radius of Charlottesville, Kadija says there is a drop-box on each side of town for those who live outside the radius. Newborns generally use 70-80 diapers per week, with the number decreasing as the child grows (and goes) less frequently. And with an average cost of $16-$18 per week for delivery and pick-up – plus a $50 refundable deposit – the cost of cloth diapers is approximately the same as disposables, roughly 32 cents each, Kadija says. The one thing Kadija, the diaper companies, and most pediatricians agree on? It all trickles down to a personal choice. And fortunately, says Procter & Gamble’s Jester, “There is plenty of room for all kinds of products in the marketplace.”
Stability, antigenicity, and aggregation of Moraxella bovis cytolysin after purification and storage. To compare stability, antigenicity, and aggregation characteristics of Moraxella bovis cytolysins among isolates from geographically diverse areas. 8 isolates of M. bovis. Filter-sterilized broth culture supernatants of M. bovis were concentrated, diafiltered, and chromatographed. The endotoxin and cytolysin activities in samples were measured. Chromatographed cytolysins of M. bovis were examined by immunoblotting. Hemolytic and leukotoxic activities were measured from samples collected at each step of purification and before and after storage. Hemolysis was measured directly by use of washed bovine erythrocyte targets. Leukotoxicity was measured by use of a 51Cr release assay. Cytolysin was retained by a filter with a 100-kd nominal molecular weight limit. Hemolytic activity, leukotoxic activity, and endotoxin were eluted together in the void volume of a gel-filtration column (molecular mass exclusion limit = 4 x 10^7 d). Gel-column chromatographed diafiltered retentate had the greatest specific cytolytic activity and the highest endotoxin-to-protein ratio. Frozen diafiltered retentate (-80 degrees C, 4 months) was cytolytic after thawing. Immunoblots of gel-column chromatographed cytolysin contained 4 proteins with molecular masses between 90 and 68 kd. Fractions with high lytic activities also had additional protein bands with molecular masses of 98 and 63 kd. Immunoblots of gel-column chromatographed diafiltered retentate revealed proteins with molecular masses between 90 and 68 kd. Diafiltered M. bovis cytolysin is aggregated with endotoxin. Antigenicity and cytolytic activities in diafiltered retentate are conserved among M. bovis isolates. Diafiltration could be useful for bulk semipurification of M. bovis cytolysin. Cytolysin-enriched vaccines of M. bovis could be contaminated by endotoxin.
---
abstract: '[*Abstract. A wide variety of engineering design tasks can be formulated as constrained optimization problems where the shape and topology of the domain are optimized to reduce costs while satisfying certain constraints. Several mathematical approaches have been developed to address the problem of finding the optimal design of an engineered structure. Recent works [@BEM2DMarczak; @BEM3D] have demonstrated the feasibility of the boundary element method as a tool for topological-shape optimization. However, it was noted that the approach has certain drawbacks, in particular the high computational cost of the iterative optimization process. In this short note we suggest ways to address critical limitations of the boundary element method as a tool for topological-shape optimization. We validate our approaches by supplementing an existing complex variables boundary element code for elastostatic problems with robust tools for fast topological-shape optimization. The efficiency of the approach is illustrated with a numerical example.* ]{}'
author:
- 'Igor Ostanin[^1]'
- Denis Zorin
- Ivan Oseledets
bibliography:
- 'ostanin.bib'
title: 'Toward Fast Topological-Shape Optimization With Boundary Elements'
---

Introduction
============

Problems of optimal design, *i.e.* variation of the shape and topology of the domain to extremize a certain functional subject to additional constraints, are ubiquitous in different branches of engineering. The recent emergence and rapid development of stereolithography and 3D printing technologies [@3Dprint; @Stereo] have enabled fast prototyping of complex-shaped structures and revived interest in automated optimal design. The most common formulation of optimization problems studied in structural engineering (e.g. [@Novotny3Delast]) seeks an optimal shape and topology of an elastic body that minimize the strain energy (compliance) while satisfying a weight constraint and the additional constraints imposed by the boundary value problem.
The non-convex nature of such optimization problems often makes finding globally optimal designs difficult or impossible. Various numerical methods, including shape gradient-based approaches [@gradient_based], level set methods [@AllaireLevelSet; @LevelSet], homogenization [@Homog2DKikuchi; @AllaireHomog] and topological-shape sensitivity [@NovotnySokolovskyBook], have been developed during the last few decades to address this class of problems. These approaches are typically implemented within the finite element method (FEM) context. If admissible designs include only homogeneous regions with piecewise-constant properties (micro-structured composite designs are prohibited), the optimization problem reduces to finding an optimal configuration of the domain boundaries. A few recent papers [@BEM2DMarczak; @BEM3D] suggested that boundary integral approaches, and in particular the boundary element method (BEM), can be a convenient tool to address this class of problems. The implementation of optimization algorithms within BEM utilizes the concept of the topological derivative (TD) [@Sokolmain; @NovotnySokolovskyBook; @Sokol3D] - the cost of introducing an infinitesimal circular (spherical) cavity centered at a given point of the domain. The existence of this kind of derivative opens a wide avenue for numerous gradient-based approaches. The most straightforward one calculates the field of the TD on a given mesh and removes material in the regions where the TD is below a certain cutoff level. This kind of approach has demonstrated its feasibility in both 2D and 3D elastostatic problems [@BEM2DMarczak; @BEM3D]. However, existing papers on the subject do not address the issue of the complexity of topological-shape optimization with BEM, and in particular the question of the competitiveness of BEM approaches as compared to FEM. In this note we suggest a simple optimization technique based on 2D BEM.
It is demonstrated that even without using fast BEM techniques, one can easily reach $O(n_{s}^{2})$ asymptotic performance of shape optimization with BEM, which corresponds to the performance of 2D FEM approaches, $O(n_{v})$ ($n_{s}$ and $n_{v}$ are the numbers of surface elements and volume elements after BEM/FEM discretization). The suggested technique is based on the complex variables boundary element method (CVBEM) [@CVBEM_main]. It utilizes quadtree-based calculation of the TD inside the domain, allowing $O(n_{s}^{2})$ evaluation of fields inside the domain. We also use a simple $O(n_{s}^{2})$ algebraic solver based on blockwise updates of the inverse system matrix to solve the direct boundary value problem at every iteration. The paper is organized as follows. The next section describes the method of topological optimization employed in our research, as well as its novel features. Section 3 provides an illustrative numerical example that serves to highlight the capabilities of our approach. The summary and discussion of our results, as well as the major directions of future work, are presented in Section 4.

Method
======

Optimization technique
----------------------

In this paper we develop our approach in 2D using the framework of CVBEM [@CVBEM_main]. The method offers a number of attractive features: i) a complex hypersingular boundary integral equation (BIE) for tractions and displacement discontinuities, derived for a system that can include an infinite matrix, finite-sized blocks with piecewise-constant properties, voids and cracks [@CVBEM_main]; ii) closed-form analytical calculation of all the integrals in the BIE [@CVBEM_main]; iii) circular elements to approximate curved boundaries [@CVBEM_main]; iv) asymptotic approximations to model cracks [@CVBEM_main]; v) implementation of piecewise-constant body forces [@cvbem_bf].

![(a) Details on the definition of TD. (b) Optimization on the rectangular mesh.
(c) Quadtree strategy of sampling of the TD inside the domain. ](Fig1){width="9cm"}

Our scheme of topological optimization utilizes the notion of the topological derivative [@NovotnySokolovskyBook]. Following [@Novotny2Delast], we define the TD as (Fig. 1(a)): $$D(x)=\underset{\underset{\delta\varepsilon\rightarrow0}{\varepsilon\rightarrow0}}{\lim}\frac{\varPsi(\varOmega_{\varepsilon+\delta\varepsilon})-\varPsi(\varOmega_{\varepsilon})}{f(\varepsilon+\delta\varepsilon)-f(\varepsilon)}$$ Here $\Psi$ is the cost functional, $\Omega_{\varepsilon}$ is the original domain perturbed by the presence of an infinitesimal cavity of radius $\varepsilon$ centered at the point $x$, $\delta\varepsilon$ is a small perturbation of the cavity radius, and $f$ is a regularizing, problem-dependent function. Here and below we will be working with the strain energy functional, which serves as a global measure of the compliance of an elastic structure. In this case the area (volume) of the cavity serves as the regularizing function. In the absence of body forces, closed-form analytical expressions are available both in 2D and 3D [@Novotny2Delast; @Novotny3Delast]. In the case of 2D plane strain elasticity the expression for the TD reads $$D(x)=\frac{2}{(1+\nu)(1-2\nu)}\,\sigma\cdot\epsilon+\frac{(1-\nu)(4\nu-1)}{2(1-2\nu)}\,\mathrm{tr}\,\sigma\,\mathrm{tr}\,\epsilon$$ Here $\sigma$ and $\epsilon$ are the stress and strain tensors at the point $x$. If a constant body force $\boldsymbol{b}$ is present, expression (2) should be enriched with an additional term associated with the work done by the body force, $-\boldsymbol{b}\cdot\boldsymbol{u}(x)$, where $\boldsymbol{u}(x)$ is the displacement at the point $x$. The treatment of a constant body force within CVBEM is discussed in [@cvbem_bf]. It is worth noting here that similar analytical expressions for TDs are also available for other important classes of cost functionals, in particular for the components of the homogenized elasticity tensor of a periodic cell [@periodicCell1; @periodicCell2].
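Expression (2) is straightforward to evaluate pointwise. As a minimal illustrative sketch (not the authors' code; the function name and the 2x2 NumPy array layout are our own assumptions), the plane-strain TD at a point can be computed from the local stress and strain tensors:

```python
import numpy as np

def topological_derivative(sigma, eps, nu):
    """Plane-strain TD of the strain-energy functional at a point,
    given 2x2 stress and strain tensors and Poisson's ratio nu.
    Illustrative sketch only; names and layout are assumptions."""
    # First term: double contraction sigma : eps, scaled by 2/((1+nu)(1-2nu))
    term1 = 2.0 / ((1.0 + nu) * (1.0 - 2.0 * nu)) * np.tensordot(sigma, eps)
    # Second term: product of traces, scaled by (1-nu)(4nu-1)/(2(1-2nu))
    term2 = ((1.0 - nu) * (4.0 * nu - 1.0)) / (2.0 * (1.0 - 2.0 * nu)) * \
        np.trace(sigma) * np.trace(eps)
    return term1 + term2
```

In the body-force case, the extra work term $-\boldsymbol{b}\cdot\boldsymbol{u}(x)$ would simply be added to the returned value.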
Shape optimization approaches based on TDs use either hard-kill methods [@Novotny2Delast; @BEM2DMarczak] or bidirectional optimization [@BESO]. In our work we utilize a hard-kill algorithm, in which the optimization is achieved by progressive elimination of material in the areas where the TD is below the cutoff level, as discussed further.

Boundary generation procedure
-----------------------------

A new set of domain boundaries is created at every iteration of the optimization procedure. The boundary generation strategy employed in our work is similar to the one described in [@BEM3D]. We discretize the initial domain into a set of $M$ square cells (Fig. 1(b)). The fields inside the domain are calculated at the center of each cell $x_{c}(i)$, $i=1..M$. The cutoff level for the TD is defined as $D_{0}=D_{min}+\alpha(D_{max}-D_{min})$, where $D_{min}$ and $D_{max}$ are the minimum and maximum values of the TD within the current domain, and the coefficient $\alpha$ is tuned in the range between 0.1% and 2%. For every cell of the initial domain we define a Boolean status $s$. At the beginning of the iterative process $s=1$ for every cell. At every iteration we assign $s=0$ to the cells with $D(x_{c}(i))<D_{0}$ (marked with empty points in Fig. 1(b)), and to isolated cells (marked with gray points in Fig. 1(b)). Once the status is assigned, we generate the new boundary using a straightforward algorithm: if the $i$-th cell has $s=1$ and its right (top, left, bottom) neighbor has $s=0$, we generate a corresponding boundary element. For every boundary between neighboring cells one straight element with three collocation points (quadratic approximation) is used. At every iteration we mark the elements that were deleted and those that were added at the current step. Boundary value problem constraints are incorporated explicitly: for the cells bounded by elements with non-homogeneous Neumann boundary conditions, $s\equiv1$.
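The marking-and-tracing step above can be sketched as follows (a hypothetical illustration: the names are ours, and the removal of isolated cells and the protection of Neumann-loaded cells with $s\equiv1$ are omitted for brevity):

```python
import numpy as np

def mark_and_trace(D, s, alpha):
    """One hard-kill iteration on an n x m cell grid (illustrative sketch).
    D: per-cell TD values; s: Boolean cell status; alpha: cutoff coefficient.
    Returns the updated status and the generated boundary elements, each
    encoded as (cell index, outward face direction)."""
    live = s.astype(bool)
    d_min, d_max = D[live].min(), D[live].max()
    d0 = d_min + alpha * (d_max - d_min)   # D0 = Dmin + alpha*(Dmax - Dmin)
    s_new = live & (D >= d0)               # kill cells below the cutoff
    elements = []
    n, m = D.shape
    for i in range(n):
        for j in range(m):
            if not s_new[i, j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                # emit an element on every face shared with a dead cell
                # or with the exterior of the grid
                if not (0 <= ni < n and 0 <= nj < m) or not s_new[ni, nj]:
                    elements.append(((i, j), (di, dj)))
    return s_new, elements
```

In the paper's scheme each emitted face would become one straight element carrying three collocation points.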
The volume constraint is not explicitly present in this scheme, so it is incorporated as the stopping criterion: the iterative process is discontinued when the ratio $R$ of the current area of the material to the initial one reaches the prescribed value $R_{0}$.

Quadtree sampling of fields inside the domain
---------------------------------------------

The procedure of sampling TDs on a uniform mesh described above is computationally inefficient, since the calculation of a field of TDs inside the domain takes $O(n_{s}^{3})$ operations. In order to reduce the asymptotic complexity of this step, we use a specific strategy of calculating TDs based on a quadtree sampling algorithm [@qtree]. The main idea is to use a few different levels of refinement, employed for the initial detection and further refinement of features of the optimized domain. Within this approach each refined cell is subdivided into 4 sub-cells (Fig. 1(c)). One can formulate different possible criteria of refinement. In this work we use the following criterion: if the values of $s$ for a current cell and its nearest neighbor are not the same, both cells should be refined to the next level. The boundary generation algorithm described above is used to generate boundaries around the points of the highest level of refinement. It is easy to see that such an algorithm of sampling and boundary generation requires calculation of the topological derivative at $O(n_{s})$ points inside the domain; since evaluating the fields at each point requires integration over $n_{s}$ boundary elements, this leads to $O(n_{s}^{2})$ operations in total.

Iterative update of the inverse matrix
---------------------------------------

CVBEM [@CVBEM_main] generates a non-symmetric and non-sparse system matrix, and the corresponding system of equations is difficult to solve using regular iterative approaches. Here we describe one possible way to treat the resulting system matrix during the iterative updates of the initial boundary value problem.
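The refinement criterion can be sketched with a hypothetical helper (one refinement level shown; in the actual scheme each flagged cell would be split into four sub-cells and the test repeated on the finer level):

```python
import numpy as np

def refinement_mask(s):
    """Flag cells whose Boolean status differs from that of any of their
    4-neighbors; these are the cells to subdivide into 4 sub-cells.
    Illustrative sketch of the quadtree criterion described in the text."""
    n, m = s.shape
    needs = np.zeros((n, m), dtype=bool)
    for i in range(n):
        for j in range(m):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < m and s[ni, nj] != s[i, j]:
                    needs[i, j] = True
    return needs
```

Because a status mismatch flags the cell on each side of the interface, both cells of a differing pair are refined, so the fine-level TD samples concentrate along the evolving material boundary.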
The original boundary, consisting of $n_{s}$ elements, leads to $N_{s}=6n_{s}$ rows/columns in the resulting non-symmetric matrix generated by CVBEM (considering $3$ collocation points per element and $2$ degrees of freedom per collocation point). Assume that at the $k$-th iteration $n_{a}$ elements have been added to the boundary, and $n_{r}$ elements have been removed. This results in $N_{a}=6n_{a}$ added and $N_{r}=6n_{r}$ removed rows/columns in the system matrix. If, as is always the case in the iterative process, $n_{a}\ll n_{s}$ and $n_{r}\ll n_{s}$, it is efficient to update the inverse matrix using incremental expressions, such as the Sherman–Morrison–Woodbury formula [@swm] or the Banachiewicz formula [@Banachiewicz] for the blockwise matrix inverse: $$\left(\begin{array}{cc} A & B\\ C & D \end{array}\right)^{-1}=\left(\begin{array}{cc} A^{-1}+A^{-1}BS^{-1}CA^{-1} & -A^{-1}BS^{-1}\\ -S^{-1}CA^{-1} & S^{-1} \end{array}\right),$$ where $S=D-CA^{-1}B$. Denoting the blocks of the inverse of the extended matrix by $E,F,G$ and $H$, and solving for $A^{-1}$, we obtain the formula for the inverse matrix update in the case of removed elements: $$\left(\begin{array}{cc} E & F\\ G & H \end{array}\right)=\left(\begin{array}{cc} A & B\\ C & D \end{array}\right)^{-1},\qquad A^{-1}=E-FH^{-1}G.$$ It is clear that the calculation of both expressions does not require full $(N_{s}\times N_{s})\cdot(N_{s}\times N_{s})$ matrix multiplications, only lower-rank operations with $(N_{s}\times N_{s})$, $(N_{s}\times N_{a})$, $(N_{s}\times N_{r})$, $(N_{a}\times N_{a})$ and $(N_{r}\times N_{r})$ matrices. This leads to $O(n_{s}^{2})$ complexity of the iterative matrix update. It is important to mention that if the matrix update is associated with a new cavity in the domain, the matrix $S$ is singular and requires regularization, *e.g.* truncated SVD regularization [@SVD_regularization] (used in this work) or the explicit incorporation of additional conditions that fix the rigid body motion of a new closed boundary [@CVBEM_main].
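Both update formulas can be checked numerically. A hedged NumPy sketch with dense matrices and generic names (no CVBEM specifics); `add_rows` borders a known inverse via the Schur complement, `remove_rows` recovers the inverse of the leading block after deleting the last $k$ rows/columns:

```python
import numpy as np

def add_rows(Ainv, B, C, D):
    """Given Ainv = A^{-1}, return [[A, B], [C, D]]^{-1} by the
    Banachiewicz identity; only the small Schur complement S is inverted."""
    S = D - C @ Ainv @ B
    Sinv = np.linalg.inv(S)
    AiB, CAi = Ainv @ B, C @ Ainv
    return np.block([[Ainv + AiB @ Sinv @ CAi, -AiB @ Sinv],
                     [-Sinv @ CAi,             Sinv]])

def remove_rows(Minv, k):
    """Given Minv = [[A, B], [C, D]]^{-1} with blocks E, F, G, H and the
    last k rows/columns being deleted, recover A^{-1} = E - F H^{-1} G."""
    E, F = Minv[:-k, :-k], Minv[:-k, -k:]
    G, H = Minv[-k:, :-k], Minv[-k:, -k:]
    return E - F @ np.linalg.inv(H) @ G
```

Note that only products with the thin border blocks and one small inverse are formed, which is the source of the $O(n_{s}^{2})$ cost quoted above.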
The suggested (or similar) schemes of low-rank inverse matrix update are well known and widely used in adjacent fields of scientific and engineering computation, including network structures, asymptotic analysis and boundary value problems with changing boundaries (see the excellent review presented in [@Inverse]). It is thus natural to adopt this approach for fast iterative optimization schemes with boundary elements.

Numerical example
=================

In this section we discuss a simple benchmark example that allows us to evaluate the capabilities of our approach. Consider the 2D problem of optimizing the shape and topology of a fixed support. The initial domain has a square shape (Fig. 2(a)). The left side of the support is rigidly fixed, and a point load is applied at the upper-right corner.

![Shape optimization of a fixed support. (a) Initial boundary value problem. (b) Final configuration reached after $29$ iterations. (c) The ratio of current strain energy $E$ to the initial strain energy $E_{0}$ as a function of current volume fraction $R$.](Fig2){width="9cm"}

We utilize the optimization scheme described above to find an optimal shape and topology of the support, providing the smallest strain energy $E$ for the volume fraction $R_{0}=0.5$. The cutoff parameter $\alpha$ was set to $0.003$. The TDs are sampled on a grid containing $20\times20$ cells. The solution was found in $29$ iterations. The final shape is given in Fig. 2(b). The obtained solution is in good agreement with the ones obtained with FEM and BEM in the earlier works [@Novotny2Delast; @BEM2DMarczak]. Fig. 2(c) shows the evolution of the strain energy functional during the iterative process. Fig. 2(d) presents the comparison of times spent during the iteration with full LU factorization ($LU$) and with the blockwise inverse matrix update, including adding rows/columns according to (3) ($BU_{+}$) and removing rows/columns according to (4) ($BU_{-}$).
The boost in performance clearly depends on the rank of the update (the number of added/removed elements) and varies significantly from iteration to iteration. The total time spent on the full LU factorization is 41 s, whereas the total time spent on the blockwise matrix update is 15.5 s. Calculations were performed on a regular Core i5 laptop. Linear algebra operations were implemented with the non-optimized BLAS/LAPACK framework [@lapack] and the win-32 gfortran compiler [@Fortran]. In order to illustrate the quadtree sampling of TDs inside the domain, we consider the same example, but with twice finer grid refinement. Two levels of grid refinement are used. The coarse level discretizes the domain into $20\times20$ cells. As we saw above, this level is sufficient to detect the important features of the optimized domain. On the finer level we discretize the domain into $40\times40$ cells. This level is used to render finer features of the optimal design. Fig. 2(e) shows the sampling points during the first iteration, and Fig. 2(f) demonstrates the corresponding configuration of the domain boundaries. Quadtree sampling reduced the number of points inside the domain from 1600 to 676, and the corresponding computational time from 11.3 s to 4.8 s.

Discussion and conclusions
==========================

In this work we suggested a set of tools for topological-shape optimization with boundary elements. As demonstrated on the example of the 2D CVBEM method and a simple toolkit for topological optimization, one can reach the computational performance available with 2D FEM techniques. It is therefore clear that the usage of fast BEM techniques [@fastBEM; @fastBEM2] would allow one to outperform the existing FEM techniques of topological-shape optimization.
For example, using fast BEM would allow the solution of the direct boundary value problem in $O(n_{s})$ operations, and the calculation of the field of TDs inside the domain in $O(n_{s}\ln(n_{s}))$ operations, which is much faster than the corresponding operations performed within FEM (both take $O(n_{v})$ operations). In addition, FEM techniques require a good-quality domain discretization, the generation of which takes at least $O(n_{v}\ln(n_{v}))$ operations. These considerations motivate the development of a three-dimensional fast BEM-based framework for topological-shape optimization of an elastic domain. Its implementation can be based on principles similar to those presented in this work, extended with additional features: smoother boundary generation, combined shape and topology optimization iterations, *etc*. Clearly, BEM-based techniques should become the most computationally efficient tools for topological-shape optimization.

[^1]: Corresponding author, tel: +79150174677, e-mail:i.ostanin@skoltech.ru
--- abstract: 'The *$n$-cube* is the poset obtained by ordering all subsets of $\{1,\ldots,n\}$ by inclusion, and it can be partitioned into $\binom{n}{\lfloor n/2\rfloor}$ chains, which is the minimum possible number. Two such decompositions of the $n$-cube are called *orthogonal* if any two chains of the decompositions share at most a single element. Shearer and Kleitman conjectured in 1979 that the $n$-cube has $\lfloor n/2\rfloor+1$ pairwise orthogonal decompositions into the minimum number of chains, and they constructed two such decompositions. Spink recently improved this by showing that the $n$-cube has three pairwise orthogonal chain decompositions for $n\geq 24$. In this paper, we construct four pairwise orthogonal chain decompositions of the $n$-cube for $n\geq 60$. We also construct five pairwise *edge-disjoint* symmetric chain decompositions of the $n$-cube for $n\geq 90$, where edge-disjointness is a slightly weaker notion than orthogonality, improving on a recent result by Gregor, Jäger, Mütze, Sawada, and Wille.' author: - Karl Däubel - Sven Jäger - Torsten Mütze - Manfred Scheucher bibliography: - 'refs.bib' title: 'On orthogonal symmetric chain decompositions[^1]' --- Introduction ============ The *$n$-dimensional cube $Q_n$*, or $n$-cube for short, is the poset obtained by taking all subsets of $[n]:=\{1,\dotsc,n\}$, and ordering them by inclusion. This poset is sometimes also called the *subset lattice* or the *Boolean lattice*, and it is a fundamental and widely studied object in combinatorics. For illustration, Figure \[fig:Q4\] shows the 4-cube. In this figure and throughout this paper, we draw posets by their Hasse diagrams. Clearly, $Q_n$ is a graded poset with rank function given by the set sizes, and every maximal chain has size $n+1$. We refer to the family of all subsets of a fixed size $k\in\{0,\dotsc,n\}$ as the *$k$th level* of $Q_n$. 
It is easy to see that $Q_n$ has a unique largest level $n/2$ for even $n$, and two largest levels $\lfloor n/2\rfloor$ and $\lceil n/2\rceil$ for odd $n$. We refer to these levels as *middle levels*. Sperner’s classical theorem [@Sperner1928] asserts that each middle level is in fact a largest antichain of $Q_n$, i.e., $Q_n$ has width $a_n:=\binom{n}{\lfloor n/2\rfloor}$. As a consequence, at least $a_n$ many chains are needed to partition $Q_n$, and by Dilworth’s theorem [@Dilworth1950], a partition into this many chains indeed exists. De Bruijn, van Ebbenhorst Tengbergen, and Kruiswijk [@DeBruijnVETK1951] first described an inductive construction of a partition of $Q_n$ into $a_n$ many chains that are all *symmetric* and *saturated*, i.e., every chain starts and ends in symmetric levels around the middle, and no chain skips any intermediate levels. Throughout this paper, we will refer to their decomposition as the *standard decomposition*. Lewin [@Lewin1972], Aigner [@Aigner1973], and White and Williamson [@WhiteWilliamson1977] gave alternative descriptions of the standard decomposition via greedy matching algorithms as well as explicit local rules to follow the chains in the standard decomposition. The easiest-to-remember local rule using parenthesis matching was given by Greene and Kleitman [@GreeneKleitman1976] (we will describe their rule in Section \[sec:proof-unroll\]). The standard decomposition of $Q_n$ was famously used by Kleitman [@Kleitman1965] to prove the two-dimensional case of the Littlewood-Offord conjecture on signed sums of vectors [@LittlewoodOfford1938] (later proved in all dimensions by Kleitman [@Kleitman1970]). Shearer and Kleitman [@ShearerKleitman1979] were the first to investigate chain decompositions of the $n$-cube that are different from the aforementioned standard decomposition. 
They proved that, when picking subsets $x,y\subseteq [n]$ at random, the probability that $x\subseteq y$ is at least $1/a_n$, for every probability distribution on $Q_n$. Their proof introduces the notion of orthogonal chain decompositions. Formally, two decompositions of $Q_n$ into $a_n$ (not necessarily symmetric or saturated) chains are called *orthogonal* if every two chains from the two decompositions have at most a single element of $Q_n$ in common. For example, Figure \[fig:Q4\] shows three pairwise orthogonal chain decompositions into 6 chains in $Q_4$. Shearer and Kleitman conjectured that $Q_n$ admits $b_n:=\lfloor n/2\rfloor + 1$ pairwise orthogonal chain decompositions for all $n\geq 1$. As a warm-up exercise, we verified their conjecture for $n\leq 7$ with computer help. It is easy to check that there are at most $b_n$ pairwise orthogonal decompositions (consider the node degrees in the Hasse diagram around the middle levels). As a first step towards their conjecture, Shearer and Kleitman established the existence of *two* orthogonal chain decompositions for all $n\geq 2$. They proved this by showing that the standard decomposition and its *complement*, obtained by taking the complements of all sets with respect to the full set $[n]$, are almost-orthogonal. Formally, we say that two decompositions of $Q_n$ into $a_n$ symmetric and saturated chains are *almost-orthogonal* if every two chains from the two decompositions have at most a single element of $Q_n$ in common, with the exception of the two unique chains of size $n+1$, which are only allowed to intersect in their minimal and maximal elements $\emptyset$ and $[n]$. 
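Orthogonality is a purely local condition and is mechanical to verify. A small sketch under our own representation (a decomposition as a list of chains, each chain a list of `frozenset`s), illustrated on two orthogonal decompositions of $Q_2$ into $a_2=2$ chains:

```python
from itertools import product

def are_orthogonal(dec1, dec2):
    """Every chain of dec1 shares at most one element with every chain of dec2."""
    return all(len(set(c1) & set(c2)) <= 1 for c1, c2 in product(dec1, dec2))

# Two orthogonal decompositions of Q_2 into its a_2 = 2 chains:
f = frozenset
D1 = [[f(), f({1})], [f({2}), f({1, 2})]]
D2 = [[f(), f({2})], [f({1}), f({1, 2})]]
```

Here `D1` and `D2` are orthogonal, while no decomposition is orthogonal to itself, since each chain of size at least 2 meets itself in more than one element.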
It is straightforward to verify that for $n\geq 5$, every family of almost-orthogonal decompositions can be modified to orthogonal decompositions, by moving the empty set $\emptyset$ in all but one of the decompositions from the unique longest chain to a shortest chain, one decomposition at a time (see [@ShearerKleitman1979; @Spink2017] for details). Recently, Spink [@Spink2017] made the first progress towards the Shearer-Kleitman conjecture from 1979 by proving that $Q_n$ has *three* pairwise orthogonal chain decompositions for $n\geq 24$. He actually showed that $Q_n$ has three almost-orthogonal decompositions into symmetric and saturated chains, from which the result follows as described before. Our results ----------- Using Spink’s product construction, we improve on his result as follows. \[thm:ortho\] For all $n\geq 60$, the $n$-cube has four pairwise almost-orthogonal decompositions into symmetric and saturated chains, and consequently four pairwise orthogonal chain decompositions. A slightly weaker notion than almost-orthogonality was introduced in a recent paper by Gregor, Jäger, Mütze, Sawada, and Wille [@GregorJMSW18]. We refer to any cover relation $x\subseteq y$ as an *edge* $(x,y)$ ($y$ is one level above $x$), and we say that two decompositions of $Q_n$ into $a_n$ symmetric and saturated chains are *edge-disjoint* if the two decompositions do not share any edges. Equivalently, the two decompositions form edge-disjoint paths in the cover graph of $Q_n$, which is the graph formed by all cover relations. By this definition, every pair of almost-orthogonal chain decompositions is edge-disjoint, but not necessarily vice versa. The main application of edge-disjoint chain decompositions in [@GregorJMSW18] was to construct cycle factors in subgraphs of $Q_n$ induced by an interval of levels around the middle, with the goal of generalizing the recent proof of the middle levels conjecture by Mütze [@Muetze2016] (see also [@GregorMuetzeNummenpalo2018]). 
It is also easy to check that $Q_n$ admits at most $b_n$ pairwise edge-disjoint chain decompositions. The authors of [@GregorJMSW18] conjectured that this bound can be achieved for all $n\geq 1$. They verified this conjecture for $n\leq 7$, and proved that $Q_n$ has *four* pairwise edge-disjoint decompositions for $n\geq 12$. We improve on this result as follows. \[thm:edge\] For all $n\geq 90$, the $n$-cube has five pairwise edge-disjoint decompositions into symmetric and saturated chains. Unless stated otherwise, all chains we consider in the following are symmetric and saturated, and we will from now on omit those qualifications. Moreover, we refer to any decomposition of $Q_n$ into symmetric and saturated chains as an *SCD*. Also, when referring to a family of pairwise almost-orthogonal or pairwise edge-disjoint SCDs, we will from now on omit the qualification ‘pairwise’. Small dimensions ---------------- Table \[tab:small\] summarizes what is known for small values of $n$. Specifically, the table shows the maximum number of almost-orthogonal and edge-disjoint SCDs of $Q_n$ that we know for $n\leq 25$, together with the upper bound $b_n$. As indicated in the table, we actually found six edge-disjoint SCDs of $Q_{11}$, which, using the product construction from [@GregorJMSW18], yields six edge-disjoint SCDs for all dimensions $n=11k$, $k \in {\mathbb{N}}$. To extend this result to all but finitely many dimensions, thus improving Theorem \[thm:edge\], we would only need to find six edge-disjoint SCDs of $Q_n$ for some dimension $n$ not of this form. It is also interesting to note that there are *no* three almost-orthogonal SCDs of $Q_4$ (see [@Spink2017]), i.e., in this case the trivial upper bound $b_n$ cannot be achieved. Nevertheless, there are three orthogonal decompositions using non-symmetric chains in $Q_4$—see Figure \[fig:Q4\]—so this shows that not every family of orthogonal chain decompositions can be obtained from almost-orthogonal SCDs. 
  $n$                                      1   2   3   4   5   6     7     8     9     10    11
  almost-orthogonal SCDs                   1   2   2   2   3   3\*   4\*   3\*   3\*   3     4\*
  edge-disjoint SCDs                       1   2   2   3   3   4     4     4     4\*   5\*   6\*
  upper bound $b_n=\lfloor n/2\rfloor+1$   1   2   2   3   3   4     4     5     5     6     6

  $n$                      12   13    14    15   16    17   18    19   20    21    22    23    24   25
  almost-orthogonal SCDs   3    3\*   4\*   3    3\*   3    4\*   3    3     4\*   4\*   3\*   3    4\*
  edge-disjoint SCDs       4    4     4     4    4     4    4     4    5\*   5\*   6\*   4     4    4
  upper bound $b_n$        7    7     8     8    9     9    10    10   11    11    12    12    13   13

  : Number of almost-orthogonal and edge-disjoint SCDs of $Q_n$ we know for $n \leq 25$. Entries marked with \* are new compared to the earlier results from [@Spink2017] and [@GregorJMSW18]. For $n\leq 11$, the corresponding families of SCDs are provided electronically on the third author’s website [@www] and on the arXiv [@preprint]; the families for $Q_7$ and $Q_{11}$ (almost-orthogonal) and $Q_{10}$ and $Q_{11}$ (edge-disjoint) are also shown in Figures \[fig:Q7\_4\_ortho\]–\[fig:Q11\_6\_edge\]. For $n\geq 12$, they are obtained via the product constructions presented in [@Spink2017] and [@GregorJMSW18]. The entries in the dotted box (the edge-disjoint SCD counts and upper bounds for $n=8,9,10$) are explained in Remark \[rem:lbounds\].[]{data-label="tab:small"}

As the table shows, we can also slightly improve Spink’s aforementioned result [@Spink2017] that $Q_n$ has three almost-orthogonal SCDs for $n\geq 24$. His proof left only the dimensions $n=6,8,9,11,13,16,18,23$ as possible exceptions [@Spink2017 Theorem 3.3] (for $n\geq 5$).
Using the SCDs shown in our table for $n\leq 11$ and Spink’s product construction [@Spink2017 Theorem 3.5], we can close all those gaps, and obtain that $Q_n$ has three almost-orthogonal SCDs for all $n\geq 5$, and three orthogonal chain decompositions for all $n\geq 4$, providing some more evidence for the Shearer-Kleitman conjecture. We also see that $Q_n$ has four edge-disjoint SCDs for all $n\geq 6$, ruling out the two possible exceptions $n=9,11$ left by Gregor et al. [@GregorJMSW18]. \[rem:lbounds\] Our lower bounds for edge-disjoint SCDs differ from the upper bound $b_n$ by 1 exactly for the dimensions $n=8,9,10$; see the values in the dotted box in Table \[tab:small\]. In fact, it can be shown that our approach for finding edge-disjoint SCDs via the necklace poset $N_n$ yields at most $b_n-1$ edge-disjoint SCDs of $Q_n$ for all even $n$ and for $n=9$ (see Lemma \[lem:cyclic-bound\] below), so our methods cannot yield better lower bounds for those cases. Proof ideas ----------- We now outline the main ideas for proving Theorems \[thm:ortho\] and \[thm:edge\]. #### Product constructions We compute families of $s=4$ almost-orthogonal and $s=5$ edge-disjoint SCDs, for two cubes $Q_a$ and $Q_b$ of small coprime dimensions $a$ and $b$. Specifically, these dimensions will be $(a,b)=(7,11)$ and $(a,b)=(10,11)$, respectively; see the shaded entries in Table \[tab:small\]. Using the product constructions presented in [@Spink2017] and [@GregorJMSW18], we then obtain $s$ SCDs of the corresponding type for all dimensions $n$ for which $n$ is a non-negative integer combination of $a$ and $b$, in particular for all $n\geq (a-1)(b-1)$. This evaluates to $n\geq 60$ and $n\geq 90$ for the aforementioned pairs $(a,b)$, respectively. 
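The coprimality fact invoked here, that every integer $n\geq(a-1)(b-1)$ is a non-negative integer combination of coprime $a$ and $b$ while $(a-1)(b-1)-1=ab-a-b$ is not, can be confirmed by brute force; a quick sketch:

```python
def representable(n, a, b):
    """True iff n = a*i + b*j for some integers i, j >= 0."""
    return any((n - a * i) % b == 0 for i in range(n // a + 1))

# (a, b) = (7, 11) gives the bound n >= 60, and (10, 11) gives n >= 90:
for a, b in [(7, 11), (10, 11)]:
    bound = (a - 1) * (b - 1)
    assert not representable(bound - 1, a, b)   # ab - a - b is not representable
    assert all(representable(n, a, b) for n in range(bound, 1000))
```

For instance, $60=7\cdot 7+11$ and $90=10\cdot 9$, while $59$ and $89$ are the respective Frobenius numbers of the two pairs.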
#### Problem reduction via the necklace poset

To find families of SCDs in cubes of small fixed dimension ($n=7$, 10, and 11) that satisfy the desired constraints, we reduce the search space to a much smaller poset, the so-called *necklace poset* $N_n$; see Figure \[fig:NQ5\]. It is obtained from $Q_n$ by identifying all subsets that differ only in cyclically renaming the elements of the ground set $1\rightarrow 2\rightarrow \cdots\rightarrow n\rightarrow 1$. The necklace poset $N_n$ inherits the level structure from $Q_n$, and notions such as symmetric chains and SCDs translate to it in a natural way. Moreover, $N_n$ is by a factor of $n(1-o(1))$ smaller than $Q_n$, which turns out to be crucial for our computer searches for SCDs. We refer to the process of translating an SCD computed in $N_n$ to $Q_n$ as *unrolling*. Unrolling essentially creates $n$ copies of each chain in $N_n$, and these copies are obtained by cyclic renaming as explained before. This strategy works particularly well when $n$ is a prime number, and with some adjustments it can also be made to work for composite $n$. We also introduce a suitable notion of edge multiplicities for the necklace poset (as indicated in Figure \[fig:NQ5\]), which allows us to find multiple edge-disjoint SCDs in $N_n$ simultaneously, and to unroll them to multiple edge-disjoint SCDs in $Q_n$. Specifically, we prove that the two constructions of SCDs in $N_n$ found by Griggs, Killian, and Savage [@GriggsKillianSavage2004] and by Jordan [@Jordan2010] can be unrolled to almost-orthogonal SCDs in $Q_n$. The key steps here are Lemmas \[lem:unroll-GKS\] and \[lem:unroll-J\] and Proposition \[prop:compl\] below.

#### Using SAT solvers

To search for multiple edge-disjoint SCDs in the necklace poset $N_n$ for some small fixed dimension $n$, we formulate the problem as a propositional formula in conjunctive normal form (CNF), and compute solutions using the SAT solvers Glucose [@AudemardLS2013] and MiniSat [@EenSorensen2003].
In our CNF formula, we use Boolean variables that indicate whether certain nodes and edges belong to a particular SCD, and we introduce clauses ensuring that a satisfying variable assignment indeed corresponds to an unrollable SCD, and that multiple SCDs are edge-disjoint. Once a valid variable assignment is found, we use incremental CNF augmentation to enforce the remaining properties, in particular almost-orthogonality of the unrolled SCDs in $Q_n$. Specifically, if we encounter a violation, we add an additional clause that prevents this particular configuration. We solve the augmented CNF using an incremental SAT solver, until we either find a feasible solution or obtain a formula with no satisfying assignment. This approach keeps the size of the generated CNFs and the computation time small, as the solvers can reuse structural information of the CNFs, rather than recomputing a solution from scratch. The size of the formulas can be reduced further by prescribing some SCDs to be particularly nice decompositions.

Related work
------------

#### Other chain decompositions

There is a considerable amount of literature on partitioning the $n$-cube using possibly non-symmetric and/or non-saturated chains. One of the most interesting open problems in this direction is a well-known conjecture of Füredi [@Furedi1985] (cf. [@Griggs1988]), which asserts that $Q_n$ can be decomposed into $a_n$ (not necessarily symmetric or saturated) chains whose sizes differ by at most 1, so their size is $2^n/a_n$ rounded up or down, which is approximately $\sqrt{\pi n/2}\,(1+o(1))$. Tomon [@Tomon2015] recently made some progress towards this conjecture, by showing that for large enough $n$, the $n$-cube can be decomposed into $a_n$ chains whose size is between $0.8\sqrt{n}$ and $13\sqrt{n}$.
Another remarkable result, recently shown by Gruslys, Leader, and Tomon [@GruslysLeaderTomon2019], is that for large enough $n$, the $n$-cube can be partitioned into copies of any fixed poset $P$, provided that the number of elements of $P$ is a power of 2 and that $P$ has a unique minimal and maximal element. Pikhurko [@Pikhurko1999] showed that all edges of the $n$-cube can be partitioned into symmetric chains, but it is not clear whether some of those chains can be selected to form one or more SCDs. In a slightly different direction, Streib and Trotter [@StreibTrotter2014] presented the construction of an SCD of the $n$-cube that can be extended to a Hamiltonian cycle through the entire cover graph. The existence and construction of SCDs have also been investigated for many graded posets different from $Q_n$. The paper [@DeBruijnVETK1951] proves that divisor lattices, which are products of chains, are symmetric chain orders, and Griggs [@Griggs1977] gave a sufficient condition for a general graded poset to admit an SCD. Griggs, Killian, and Savage first constructed an explicit SCD of the necklace poset $N_n$ [@GriggsKillianSavage2004] when the dimension $n$ is a prime number, with the goal of constructing rotation-symmetric Venn diagrams for $n$ curves in the plane (see [@RuskeySavageWagon2006]). Their result for $N_n$ with $n$ prime was later generalized by Jordan [@Jordan2010] to all $n \in {\mathbb{N}}$, and to even more general quotients of $Q_n$ by Duffus, McKibben-Sanders, and Thayer [@DuffusMcKibbenSandersThayer2012]. All these constructions in the necklace poset proceed by taking suitable subchains from the standard SCD of $Q_n$. Further generalizations of these results can be found in [@Dhand2012; @HershSchilling2013; @DuffusThayer2015].
#### SAT solvers in combinatorics

We conclude this section by listing some recent results where SAT solvers were used successfully to tackle difficult problems in (extremal) combinatorics, either by using them to find a solution, or to prove that no solution exists. Fujita [@Fujita2012] established a new lower bound $R(4,8) \geq 58$ for the classical Ramsey numbers. Similarly, Dransfield, Liu, Marek, and Truszczyński [@DransfieldLMT2004] derived improved bounds for van der Waerden numbers (see also [@HerwigHvLvM2007] and [@KourilPaul2008]). Another recent result that received considerable attention is the paper by Konev and Lisitsa [@KonevLisitsa2014] on the Erdős discrepancy conjecture. SAT solvers have also been used in the context of geometry, specifically for tackling Erdős-Szekeres type questions, see the papers by Balko and Valtr [@BalkoValtr2017] and by Scheucher [@Scheucher2018]. Moreover, with their help researchers were able to find new coil-in-the-box Gray codes [@ZinovikKroeningChebiryak2008] and to compute pairs of orthogonal diagonal Latin squares [@ZaikinKochemazovSemenov2016].

Outline of this paper
---------------------

In Section \[sec:proofs\] we present the proofs of our two main theorems. The proofs of two crucial lemmas, which settle the base cases for our construction, are deferred to Section \[sec:small\] at the end of the paper. In Section \[sec:unroll\] we explain our reduction technique to produce SCDs of $Q_n$ by working in the much smaller necklace poset $N_n$, and in Section \[sec:sat\] we describe how to exploit this reduction using a SAT solver.
Product constructions implying Theorems \[thm:ortho\] and \[thm:edge\] {#sec:proofs}
======================================================================

As already mentioned in the introduction, both of our theorems are proved by applying product constructions established in [@Spink2017] and [@GregorJMSW18], respectively, which allow us to obtain $s$ almost-orthogonal or edge-disjoint SCDs of $Q_{a+b}$, given $s$ such SCDs in the smaller cubes $Q_a$ and $Q_b$. In the following we will repeatedly use the basic number-theoretic fact that, if $a$ and $b$ are coprime integers, then every integer $n\geq (a-1)(b-1)$ is a non-negative integer combination of $a$ and $b$.

Proof of Theorem \[thm:ortho\] {#sec:proof-ortho}
------------------------------

The product construction for almost-orthogonal SCDs requires an additional property that we now define: A family of almost-orthogonal SCDs of the $n$-cube for some odd $n$ is called *good* if the union of edges given by all chains of size 2 from all the decompositions forms a unicyclic graph, i.e., a graph all of whose components contain at most a single cycle. The following crucial statement was proved in [@Spink2017].

\[lem:prod-ortho\] Let $s\geq 3$ and $r\geq 2$ be integers, and let $n_1,\dotsc,n_r\geq 3$ be a sequence of odd integers. If each $Q_{n_i}$, $1\leq i\leq r$, has a good family of $s$ almost-orthogonal SCDs, then $Q_{n_1+\dotsb+n_r}$ has $s$ almost-orthogonal SCDs.

The base case for applying Lemma \[lem:prod-ortho\] is the following result, which will be proved in Section \[sec:small\].

\[lem:ortho\] The cubes $Q_7$ and $Q_{11}$ each have four good almost-orthogonal SCDs.

As every integer $n\geq (7-1)(11-1)=60$ is a non-negative integer combination of 7 and 11, we can apply Lemmas \[lem:prod-ortho\] and \[lem:ortho\] to obtain the desired SCDs. Spink [@Spink2017 Theorem 3.6] also proved that the goodness requirement in Lemma \[lem:prod-ortho\] can be omitted if the additional condition $r\geq 6$ is added.
As every integer $n\geq 60$ is a non-negative integer combination of 7 and 11 with coefficients that sum up to at least 6, we would not need the families of SCDs of $Q_7$ and $Q_{11}$ to be good to prove Theorem \[thm:ortho\]. However, since proving this modified version of Lemma \[lem:prod-ortho\] is considerably harder, partially deferred to another paper [@DavidSpinkTiba2018], and since goodness is not hard to achieve on top of almost-orthogonality, we prefer to stick with Lemma \[lem:prod-ortho\] in its stated form. Moreover, in this form the lemma also yields four almost-orthogonal SCDs for all non-negative integer combinations of 7 and 11 that are smaller than 60. Proof of Theorem \[thm:edge\] {#sec:proof-edge} ----------------------------- The following product lemma for edge-disjoint SCDs was proved in [@GregorJMSW18]. \[lem:prod-edge\] If $Q_a$ and $Q_b$ each have $s$ edge-disjoint SCDs, then $Q_{a+b}$ has $s$ edge-disjoint SCDs. The base case for applying Lemma \[lem:prod-edge\] is the following result, which will be proved in Section \[sec:small\]. \[lem:edge\] The cubes $Q_{10}$ and $Q_{11}$ each have five edge-disjoint SCDs. As every integer $n\geq (10-1)(11-1)=90$ is a non-negative integer combination of 10 and 11, we can apply Lemmas \[lem:prod-edge\] and \[lem:edge\] to obtain the desired SCDs. To complete the proofs of our main theorems, it remains to prove Lemma \[lem:ortho\] and Lemma \[lem:edge\]. The corresponding SCDs are provided in Section \[sec:small\] below. Unrolling the necklace poset {#sec:unroll} ============================ Given a subset $x\subseteq [n]$, we write $\sigma(x)$ for the subset obtained from $x$ by cyclically renaming elements $1\rightarrow 2\rightarrow \cdots\rightarrow n\rightarrow 1$. 
Moreover, we write ${\langle x\rangle}$ for the family of all subsets obtained by repeatedly applying $\sigma$ to $x$, and we refer to ${\langle x\rangle}$ as a *necklace*, and to any element of ${\langle x\rangle}$ as a *necklace representative*. We say that the necklace ${\langle x\rangle}$ is *full* if $|{\langle x\rangle}|=n$, and *deficient* if $|{\langle x\rangle}|<n$. For example, for $n=4$ the necklace ${\langle \{1,\!3,\!4\}\rangle}=\{\{1,\!3,\!4\},\{2,\!4,\!1\},\{3,\!1,\!2\},\{4,\!2,\!3\}\}$ is full, and the necklace ${\langle \{1,\!3\}\rangle}=\{\{1,\!3\},\{2,\!4\}\}$ is deficient. Note that the cardinality of any necklace divides $n$. Consequently, if $n$ is a prime number, then ${\langle \emptyset\rangle}$ and ${\langle [n]\rangle}$ are the only deficient necklaces, and all other necklaces are full. On the other hand, if $n$ is composite, then there are more than these two deficient necklaces. The *necklace poset $N_n$* is the set of all necklaces ${\langle x\rangle}$, $x\subseteq [n]$, and its cover relations are all pairs $({\langle x\rangle},{\langle y\rangle})$ for which $x\subseteq y$ form a cover relation in the $n$-cube; see the left hand side of Figure \[fig:NQ5\]. Similarly to the $n$-cube, we also refer to the cover relations in $N_n$ as *edges*. As $\sigma$ preserves the set size, $N_n$ inherits the level structure from $Q_n$, and notions such as symmetric chains and SCDs translate to $N_n$ in the natural way. As almost all necklaces of $N_n$ are full, we have that $N_n$ is by a factor of $n(1-o(1))$ smaller than $Q_n$, which is vital for our computer searches for SCDs. We now collect a few simple observations about transferring SCDs from $N_n$ to $Q_n$. These observations are illustrated in Figure \[fig:NQ5\]. Recall that all chains we consider are symmetric and saturated. \[obs:unroll-full\] Let $y=(y_1,\ldots,y_k)$ be a chain of full necklaces in $N_n$. 
Then there are necklace representatives $x=(x_1,\ldots,x_k)$ with $x_i\in y_i$ for $1\leq i\leq k$, such that $\sigma^i(x)=(\sigma^i(x_1),\ldots,\sigma^i(x_k))$ for $i=0,\ldots,n-1$ is a family of $n$ disjoint chains in $Q_n$ that visit exactly all elements from $y_1,\ldots,y_k$. The easiest way to pick necklace representatives satisfying those conditions is to move up the chain $y$ from its minimal element $y_1$ to its maximal element $y_k$, starting with an arbitrary representative $x_1\in y_1$, and then arbitrarily picking $x_{j+1}\in y_{j+1}$ for $j=1,\ldots,k-1$ such that $(x_j,x_{j+1})$ is an edge in $Q_n$. We refer to the process of translating a chain from $N_n$ to a family of $n$ chains in $Q_n$ as described by Observation \[obs:unroll-full\] as *unrolling*. As an example, consider the chain $(y_1,\ldots,y_4)=\big({\langle \{1\}\rangle},{\langle \{1,\!2\}\rangle},{\langle \{1,\!2,\!3\}\rangle},{\langle \{1,\!2,\!3,\!4\}\rangle}\big)$ in $N_5$. The necklace representatives $x=(x_1,\ldots,x_4)=(\{1\},\{1,\!2\},\{1,\!2,\!3\},\{1,\!2,\!3,\!4\})$ form a chain in $Q_5$, and $\sigma^i(x)$, $i=0,\ldots,4$, is a family of five disjoint chains in $Q_5$ that visit exactly all $5\cdot 4=20$ elements from $y_1,\ldots,y_4$. It is crucial here to observe that the choice of necklace representatives in Observation \[obs:unroll-full\] is *not unique*. In the previous example, we could also choose $x=(x_1,\ldots,x_4)=(\{1\},\{1,\!5\},\{1,\!4,\!5\},\{1,\!2,\!4,\!5\})$ as necklace representatives, yielding a *different* family of five disjoint chains in $Q_5$. The notion of unrolling can be extended straightforwardly from a chain of full necklaces to a chain that has one deficient necklace at each of its ends, as captured by the following observation. The crucial insight here is that if a necklace ${\langle x\rangle}$ is deficient and of size $d<n$, then ${\langle x\rangle}=\{\sigma^i(x)\mid i=0,\ldots,d-1\}$. 
\[obs:unroll-def\] Let $(y_0,\ldots,y_{k+1})$ be a chain of necklaces in $N_n$ such that $y_1,\ldots,y_k$ are full and $y_0$ and $y_{k+1}$ are deficient and of the same size $d<n$. Then there are necklace representatives $(x_0,\ldots,x_{k+1})$ with $x_i\in y_i$ for $0\leq i\leq k+1$, such that $\sigma^i(x_0,\ldots,x_{k+1})$ for $i=0,\ldots,d-1$, and $\sigma^i(x_1,\ldots,x_k)$ for $i=d,\ldots,n-1$, is a family of $n$ disjoint chains in $Q_n$ that visit exactly all elements from $y_0,\ldots,y_{k+1}$. As an example, consider the chain $y=(y_0,\ldots,y_4)=\big({\langle \{1,\!5\}\rangle},\allowbreak {\langle \{1,\!2,\!5\}\rangle},\allowbreak {\langle \{1,\!2,\!3,\!5\}\rangle},\allowbreak {\langle \{1,\!2,\!3,\!5,\!6\}\rangle},\allowbreak {\langle \{1,\!2,\!3,\!5,\!6,\!7\}\rangle}\big)$ in $N_8$. It has one deficient necklace of size $d=4$ at each of its ends, and all inner necklaces are full. Taking $x=(x_0,\ldots,x_4)=(\{1,\!5\},\allowbreak \{1,\!2,\!5\},\allowbreak \{1,\!2,\!3,\!5\},\allowbreak \{1,\!2,\!3,\!5,\!6\},\allowbreak \{1,\!2,\!3,\!5,\!6,\!7\})$ as necklace representatives, unrolling yields four chains of size 5, namely $\sigma^i(x_0,\ldots,x_4)$ for $i=0,1,2,3$, and it yields four chains of size 3, namely $\sigma^i(x_1,\ldots,x_3)$ for $i=4,5,6,7$. We say that a chain in $N_n$ is *unimodal* if its minimal and maximal element are necklaces of the same size (possibly deficient), and all other elements are full necklaces. Moreover, we say that an SCD of $N_n$ is *unimodal* if all of its chains are unimodal. Combining Observations \[obs:unroll-full\] and \[obs:unroll-def\] yields the following fact, which allows us to translate an entire SCD from $N_n$ to $Q_n$. \[obs:unroll-unimodal\] Given a unimodal SCD of $N_n$, $n\geq 1$, unrolling each of its chains yields an SCD of $Q_n$. This observation is illustrated in Figure \[fig:NQ5\]. We refer to the process of unrolling all chains of an SCD of $N_n$ to an SCD of $Q_n$ as *unrolling the SCD*. 
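The unrolling procedure is easy to experiment with computationally. The following sketch is our own illustration (not the paper's verification program), restricted for simplicity to chains of full necklaces as in Observation \[obs:unroll-full\]:

```python
def rotate(x, n, k):
    """Apply sigma k times: cyclically rename 1 -> 2 -> ... -> n -> 1 in a subset of [n]."""
    return frozenset((i + k - 1) % n + 1 for i in x)

def necklace(x, n):
    """The necklace <x>: the orbit of x under repeated application of sigma."""
    return {rotate(x, n, k) for k in range(n)}

def unroll(chain, n):
    """Unroll a chain of full necklaces, given by one representative per necklace,
    into n disjoint chains of Q_n, following Observation [obs:unroll-full]."""
    reps = [frozenset(chain[0])]  # arbitrary representative of the minimal necklace
    for y in chain[1:]:
        # pick any representative of <y> that covers the previously chosen one
        reps.append(next(r for r in necklace(y, n) if reps[-1] < r))
    return [[rotate(r, n, k) for r in reps] for k in range(n)]
```

For the example chain $\big({\langle \{1\}\rangle},{\langle \{1,\!2\}\rangle},{\langle \{1,\!2,\!3\}\rangle},{\langle \{1,\!2,\!3,\!4\}\rangle}\big)$ in $N_5$, this yields five disjoint chains covering all $5\cdot 4=20$ elements; which five chains result depends on the non-unique choice made by `next`, reflecting the remark above.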
Recall that in this unrolling process there may be several choices for picking necklace representatives for each chain. We now want to simultaneously unroll multiple SCDs from $N_n$ to edge-disjoint SCDs of $Q_n$. This motivates the following definitions: For any edge $e=({\langle x\rangle},{\langle y\rangle})$ of $N_n$ where ${\langle x\rangle}$ is on level $k\leq (n-1)/2$, we define the *capacity* $c(e)$ as the number of distinct elements from $[n]$ that can be added to $x$ to reach an element in ${\langle y\rangle}$. For any edge $e=({\langle y\rangle},{\langle x\rangle})$ of $N_n$ where ${\langle x\rangle}$ is on level $k\geq (n+1)/2$, we define the capacity $c(e)$ symmetrically as the number of distinct elements from $[n]$ that can be removed from $x$ to reach an element in ${\langle y\rangle}$. We can think of the cover graph of $N_n$ with those capacities on its edges $e$ as a multigraph with edge multiplicities $c(e)$; see the left hand side of Figure \[fig:NQ5\]. It is easy to see that the sum of capacities of all edges $e=({\langle x\rangle},{\langle y\rangle})$ for fixed ${\langle x\rangle}$ on level $k\leq (n-1)/2$ is $n-k$, which is equal to the number of neighbors of $x$ in level $k+1$ of the cover graph of $Q_n$. We say that a family of unimodal SCDs of $N_n$ is *edge-disjoint* if for every edge $e$ in $N_n$, there are at most $c(e)$ chains in those SCDs containing this edge. For even $n\geq 4$, the middle level of $N_n$ contains the deficient necklace ${\langle \{1,3,5,\ldots,\allowbreak n-1\}\rangle}$. Consequently, any unimodal chain containing this necklace has size 1. It follows that the edges incident to this necklace cannot be used by any chain, so that the upper bound $b_n$ for the maximum number of edge-disjoint SCDs given in the introduction (for $Q_n$) can be improved by 1, yielding the following lemma (see [@Wille2018] for a formal proof). 
\[lem:cyclic-bound\] For even $n\geq 4$, there are at most $b_n-1=n/2$ unimodal SCDs of $N_n$ that are edge-disjoint. Lemma \[lem:cyclic-bound\] shows that our approach via the necklace poset $N_n$ yields at most four edge-disjoint SCDs of $N_8$ and at most five edge-disjoint SCDs of $N_{10}$; recall Remark \[rem:lbounds\]. By considering the deficient necklace ${\langle \{1,4,7\}\rangle}$ and its complement in $N_9$, one can similarly show that $N_9$ has at most $b_9-1=4$ edge-disjoint SCDs (see [@Wille2018]). The following lemma was stated and proved in [@GregorJMSW18] in slightly different form. \[lem:prime-unimodal\] Let $n\geq 2$ be a prime number. Every family of $s\leq b_n$ unimodal SCDs of $N_n$ that are edge-disjoint can be unrolled to $s$ edge-disjoint SCDs of $Q_n$. In Section \[sec:small\] we will apply Lemma \[lem:prime-unimodal\] to prove the case $n=11$ of Lemma \[lem:edge\]. Note that the conclusion of Lemma \[lem:prime-unimodal\] does not hold in general if the dimension $n$ is composite. The example in Figure \[fig:not-unrollable\] shows that even two chains between two deficient necklaces in $N_n$ cannot always be unrolled so that the resulting sets of chains are edge-disjoint in $Q_n$. Consequently, in general it may not be possible to unroll two edge-disjoint SCDs of $N_n$ to two edge-disjoint SCDs of $Q_n$. Nevertheless, the next two lemmas show that unrolling is possible for two known constructions of SCDs of $N_n$, yielding not only two edge-disjoint SCDs, but even two almost-orthogonal SCDs of $Q_n$. Specifically, these constructions are due to Griggs, Killian, and Savage [@GriggsKillianSavage2004] for prime $n$, and due to Jordan [@Jordan2010] for all $n$, and they will be explained in the next section. It is worth mentioning that both constructions in general yield different SCDs for prime $n$; see Figure \[fig:gksj\].
\[lem:unroll-GKS\] For every prime $n\geq 2$, the unimodal SCD of $N_n$ constructed as in [@GriggsKillianSavage2004] and its complement can be unrolled to two almost-orthogonal SCDs of $Q_n$. \[lem:unroll-J\] For every $n \geq 1$, the unimodal SCD of $N_n$ constructed as in [@Jordan2010] and its complement can be unrolled to two almost-orthogonal SCDs of $Q_n$. In Section \[sec:small\] we will apply Lemma \[lem:unroll-GKS\] to prove the cases $n=7$ and $n=11$ in Lemma \[lem:ortho\], and Lemma \[lem:unroll-J\] to settle the case $n=10$ in Lemma \[lem:edge\]. Of course, we could simply check by computer whether these concrete small instances can be unrolled, but we still think that the preceding two lemmas are interesting general facts that have not appeared in the literature before. The proofs of Lemmas \[lem:unroll-GKS\] and \[lem:unroll-J\] are rather long and technical, and will be given in the next section. They are followed by Section \[sec:sat\], where we describe our computer search for SCDs of the necklace poset using a SAT solver. The reader may want to skip these parts for the moment, and continue in Section \[sec:small\] with the proofs of Lemmas \[lem:ortho\] and \[lem:edge\]. Proofs of Lemmas \[lem:unroll-GKS\] and \[lem:unroll-J\] {#sec:proof-unroll} -------------------------------------------------------- In the remainder of this section we represent subsets of $[n]$ by their characteristic $\{0,1\}$-strings of length $n$. The $i$th entry of a bitstring $x$ is denoted by $x_i$. For instance, the set $x=\{1,3,5,6\}\subseteq [6]$ is represented by the bitstring $x=x_1\ldots x_6=101011\in\{0,1\}^6$. The operation $\sigma(x)$ on the set $x$ translates to a cyclic right-rotation of the bitstring $x$. Moreover, we write $|x|$ for the number of 1s in $x$, which is the same as the level of $x$ in $Q_n$. Also, for any bitstring $x$ and any integer $r\geq 0$, we write $x^r$ for the concatenation of $r$ copies of $x$.
We begin by recapitulating the SCD constructions in the $n$-cube and the necklace poset described in the three papers [@GreeneKleitman1976; @GriggsKillianSavage2004; @Jordan2010]. The first construction by Greene and Kleitman is used as an auxiliary construction for the other two constructions, which we need for proving Lemmas \[lem:unroll-GKS\] and \[lem:unroll-J\]. For the reader’s convenience, the Greene-Kleitman construction is illustrated in Figure \[fig:paren\] for one particular chain, and the other two constructions are illustrated in Figure \[fig:gksj\] for $n=7$. #### The Greene-Kleitman construction in $Q_n$ Greene and Kleitman [@GreeneKleitman1976] proposed the following explicit construction of an SCD of $Q_n$. Given any bitstring $x$ of length $n$, we think of every 0-bit as an opening parenthesis, and every 1-bit as a closing parenthesis, and we match closest pairs of opening and closing parentheses in the natural way; see Figure \[fig:paren\]. We let $M(x)$ be the set of all index pairs corresponding to matched parentheses in $x$, and we let $U_0(x)$ and $U_1(x)$ be the index sets of unmatched opening and closing parentheses, respectively. The length of $x$ clearly satisfies $n=2|M(x)|+|U_0(x)|+|U_1(x)|$. For any $x$ with $U_0(x) \neq \emptyset$, we let $\tau(x)$ be the bitstring obtained from $x$ by flipping the leftmost unmatched 0 to a 1. The union of all chains $(x, \tau(x), \ldots, \tau^k(x))$, where $x\in\{0,1\}^n$ with $U_1(x) = \emptyset$ and $k = |U_0(x)|$, forms an SCD of the $n$-cube for all $n\geq 1$. This follows easily from the observation that we have $M(x)=M(\tau(x))=\cdots=M(\tau^k(x))$ and $U_0(\tau^k(x))=\emptyset$ along each such chain, so the chain is uniquely determined by its matched pairs of parentheses. We denote this SCD by $D_n$. This is exactly the standard SCD of the $n$-cube mentioned in the introduction. 
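The parenthesis matching, the map $\tau$, and the resulting decomposition $D_n$ are straightforward to implement. The following sketch is our own illustration of the construction just described, using the bitstring conventions introduced above:

```python
from itertools import product

def match(x):
    """Greene-Kleitman matching of bitstring x (0 = opening, 1 = closing
    parenthesis). Returns (M, U0, U1): the matched index pairs and the
    unmatched 0- and 1-positions, 1-based and in left-to-right order."""
    M, stack, U1 = set(), [], []
    for i, b in enumerate(x, 1):
        if b == '0':
            stack.append(i)
        elif stack:
            M.add((stack.pop(), i))
        else:
            U1.append(i)
    return M, stack, U1  # stack now holds exactly the unmatched 0s

def tau(x):
    """Flip the leftmost unmatched 0 of x to a 1."""
    i = match(x)[1][0]
    return x[:i - 1] + '1' + x[i:]

def greene_kleitman(n):
    """The standard SCD D_n: one chain per bitstring without unmatched 1s."""
    chains = []
    for bits in product('01', repeat=n):
        x = ''.join(bits)
        M, U0, U1 = match(x)
        if not U1:  # chain starting points
            chain = [x]
            for _ in U0:
                chain.append(tau(chain[-1]))
            chains.append(chain)
    return chains
```

One can check directly that $M(x)$ is invariant under $\tau$ and that the resulting chains partition $\{0,1\}^n$ into symmetric chains.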
#### The Griggs-Killian-Savage construction in $N_n$ for prime $n$ To construct an SCD of $N_n$ for prime $n$, Griggs, Killian, and Savage [@GriggsKillianSavage2004] use the standard SCD $D_n$ in $Q_n$ as a starting point and select subchains of $D_n$, such that exactly one representative of each necklace is contained in one of the subchains. For this purpose we define, for any $x\in\{0,1\}^n$, the *block code $\beta(x)$* as follows: If $x$ has the form $x = 1^{a_1}0^{b_1}1^{a_2}0^{b_2}\cdots 1^{a_r}0^{b_r}$ with $r\geq 1$ and $a_i,b_i\geq 1$ for all $i=1,\ldots,r$, then $\beta(x) := (a_1 + b_1, a_2 + b_2, \ldots, a_r + b_r)$. Otherwise we define $\beta(x) := (\infty)$. If $\beta(x)\neq (\infty)$, then we say that the block code of $x$ is *finite*. Note that the block code is finite if and only if $x$ starts with 1 and ends with 0. Observe also that for all chains from the standard SCD $D_n$, the block code of all chain endpoints is $(\infty)$, whereas along the inner bitstrings of each chain, we see the same finite block code; see Figure \[fig:gksj\]. For prime $n$, we let ${R^{\mathrm{GKS}}}_n\subseteq \{0,1\}^n$ be the set of all necklace representatives whose block code is lexicographically minimal in their necklace. As $n$ is prime, this gives exactly one representative per necklace.[^2] It was shown in [@GriggsKillianSavage2004] that the representatives ${R^{\mathrm{GKS}}}_n$ induce symmetric and saturated subchains of $D_n$, and we denote these subchains by ${D^{\mathrm{GKS}}}_n$. Clearly, the corresponding chains in the necklace poset form an SCD of $N_n$. #### The Jordan construction in $N_n$ for arbitrary $n$ Jordan’s construction [@Jordan2010] of an SCD of $N_n$ for arbitrary $n$ also uses the standard SCD $D_n$ in $Q_n$ as a starting point, but selects subchains in a different fashion. We let ${R^{\mathrm{J}}}_n$ be the set of all necklace representatives that have the maximum number of unmatched 1s in their necklace.
(It is easy to see that those also have finite block code—see Lemma \[lem:finite-jordan\]—but this is irrelevant for the moment.) Note that ${R^{\mathrm{J}}}_n$ may contain several representatives from the same necklace; see Figure \[fig:gksj\]. It was shown in [@Jordan2010] that these representatives ${R^{\mathrm{J}}}_n$ induce symmetric and saturated subchains of $D_n$. We now search for pairs of chains that contain two representatives of the same necklace. Jordan showed in her paper that these duplicates always lie symmetrically at the ends of both chains, so we may trim the shorter of the two chains symmetrically at both ends. If both chains have the same size, then we trim either of the two, and different choices yield different resulting subchains. We repeat this trimming process until each necklace has only a single representative left, and we denote the remaining subchains of $D_n$ by ${D^{\mathrm{J}}}_n$. Clearly, the corresponding chains in the necklace poset form an SCD of $N_n$. We emphasize again that the outcome of the trimming procedure is not unique, but could be made unique by some lexicographic tie-breaking rule. #### Proofs of Lemmas \[lem:unroll-GKS\] and \[lem:unroll-J\] The following statements are the main steps for proving Lemmas \[lem:unroll-GKS\] and \[lem:unroll-J\]. The proofs of these statements are deferred to the next subsection. Our first proposition will be used to show that the cyclic rotations of the complement of any subchain of a chain in the standard decomposition $D_n$ are almost-orthogonal to all other chains in $D_n$, and it can thus be seen as a generalization of the results of Shearer and Kleitman [@ShearerKleitman1979]. \[prop:compl\] Consider two distinct bitstrings $x$ and $y$ with finite block code that lie on the same chain of $D_n$. Then for every $k\geq 0$, the bitstrings $\sigma^{k}({\overline}{x})$ and $\sigma^{k}({\overline}{y})$ do *not* lie on the same chain of $D_n$.
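Before turning to the proofs, the two representative-selection rules above (lexicographically minimal block code for Griggs-Killian-Savage, maximum number of unmatched 1s for Jordan) are easy to sketch computationally; this is our own illustration of the definitions, not code from the papers:

```python
import re

def block_code(x):
    """beta(x) = (a_1+b_1, ..., a_r+b_r) if x = 1^{a_1} 0^{b_1} ... 1^{a_r} 0^{b_r},
    and (infinity) otherwise; the code is finite iff x starts with 1 and ends with 0."""
    if x[0] != '1' or x[-1] != '0':
        return (float('inf'),)
    return tuple(len(m.group()) for m in re.finditer('1+0+', x))

def gks_representative(x):
    """Griggs-Killian-Savage: the rotation of x with lexicographically
    smallest block code (one representative per necklace for prime n)."""
    n = len(x)
    return min((x[-k:] + x[:-k] for k in range(n)), key=block_code)

def unmatched_ones(x):
    """Number of unmatched 1s of x under the parenthesis matching (0 = open, 1 = close)."""
    depth = u1 = 0
    for b in x:
        if b == '0':
            depth += 1
        elif depth:
            depth -= 1
        else:
            u1 += 1
    return u1

def jordan_representatives(x):
    """Jordan: all rotations of x attaining the maximum number of unmatched 1s
    (several rotations may tie, which is what the trimming step resolves)."""
    rots = {x[-k:] + x[:-k] for k in range(len(x))}
    m = max(map(unmatched_ones, rots))
    return {r for r in rots if unmatched_ones(r) == m}
```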
The next two lemmas capture crucial properties of the subchains ${D^{\mathrm{GKS}}}_n$ of $D_n$ obtained from the Griggs-Killian-Savage construction. \[lem:unimod-gks\] For every prime $n\geq 2$ and every chain from ${D^{\mathrm{GKS}}}_n$, the corresponding necklaces form a unimodal chain in $N_n$. \[lem:finite-gks\] For every prime $n\geq 2$, all necklace representatives in ${R^{\mathrm{GKS}}}_n$ except $0^n$ and $1^n$ have finite block code. The next two lemmas are the analogous statements for the subchains ${D^{\mathrm{J}}}_n$ obtained from the Jordan construction. \[lem:unimod-jordan\] For every $n\geq 1$ and every chain from ${D^{\mathrm{J}}}_n$, the corresponding necklaces form a unimodal chain in $N_n$. \[lem:finite-jordan\] For every $n\geq 1$, all necklace representatives in ${R^{\mathrm{J}}}_n$ except $0^n$ and $1^n$ have finite block code. With these lemmas in hand, the proofs of Lemmas \[lem:unroll-GKS\] and \[lem:unroll-J\] are straightforward. We first consider the SCD of $N_n$ for prime $n\geq 2$ obtained via the Griggs-Killian-Savage construction described before, specified by the chains of necklace representatives ${D^{\mathrm{GKS}}}_n$. We let $U_n$ be the SCD of $Q_n$ obtained by unrolling each chain from this SCD. Furthermore, we let ${\overline}{U_n}$ be the SCD of $Q_n$ obtained by unrolling each chain from the complement of this SCD, or equivalently, by taking the complement of $U_n$. In both cases, unrolling is possible because of Lemma \[lem:unimod-gks\] (recall Observations \[obs:unroll-full\] and \[obs:unroll-def\]), where we also use that complementation preserves unimodality. It remains to show that $U_n$ and ${\overline}{U_n}$ are almost-orthogonal SCDs of $Q_n$. For this, consider two distinct bitstrings $x'$ and $y'$ on the same chain in $U_n$ that are neither $0^n$ nor $1^n$. There is a unique $k\geq 0$, such that $x'=\sigma^k(x)$ and $y'=\sigma^k(y)$ for two bitstrings $x$ and $y$ on the same chain in ${D^{\mathrm{GKS}}}_n$.
Consider the following chain of implications:

- $x'$ and $y'$ lie on the same chain in ${\overline}{U_n}$.

- ${\overline}{x'}$ and ${\overline}{y'}$ lie on the same chain in $U_n$.

- There is a unique $\ell\geq 0$, so that $\sigma^\ell({\overline}{x'})$ and $\sigma^\ell({\overline}{y'})$ lie on the same chain in ${D^{\mathrm{GKS}}}_n$.

- $\sigma^{k+\ell}({\overline}{x})$ and $\sigma^{k+\ell}({\overline}{y})$ lie on the same chain in ${D^{\mathrm{GKS}}}_n$.

From Lemma \[lem:finite-gks\] we know that $x$ and $y$ have finite block code. Clearly, if two elements lie on the same chain in ${D^{\mathrm{GKS}}}_n$, then they also lie on the same chain in $D_n$. Consequently, applying Proposition \[prop:compl\] falsifies the last of the above statements, so the first one is also false, i.e., we obtain that $x'$ and $y'$ do not lie on the same chain in ${\overline}{U_n}$. To complete the proof that $U_n$ and ${\overline}{U_n}$ are almost-orthogonal, we can verify directly that the unique longest chains in $U_n$ and ${\overline}{U_n}$, namely $(0^n,1^10^{n-1},1^20^{n-2},\ldots,1^{n-2}0^2,1^{n-1}0^1,1^n)$ and its complement, intersect only in $0^n$ and $1^n$. The proof of Lemma \[lem:unroll-J\] proceeds analogously to the proof of Lemma \[lem:unroll-GKS\] presented before, using Lemmas \[lem:unimod-jordan\] and \[lem:finite-jordan\] instead of Lemmas \[lem:unimod-gks\] and \[lem:finite-gks\]. It remains to prove Proposition \[prop:compl\] and Lemmas \[lem:unimod-gks\]–\[lem:finite-jordan\], which will be done in the next three subsections. #### Proof of Proposition \[prop:compl\] For the proof we will need the following auxiliary lemma. \[lem:compl-aux\] Let $x,y\in\{0,1\}^n$, and let $i$ and $j$ be two distinct indices such that $x_i x_{i+1}=01$ and $y_i y_{i+1}\neq 01$, and $x_j x_{j+1}\neq 01$ and $y_j y_{j+1}=01$. Then the sets $M(\sigma^k(x))$ and $M(\sigma^k(y))$ are distinct for all $k\geq 0$.
Clearly, for any bitstring $z$ we have that $(\ell,\ell+1)\in M(z)$ if and only if $z_\ell z_{\ell+1}=01$. In the following we consider all indices in $x$ and $y$ modulo $n$, with $1,\ldots,n$ as representatives. As a consequence of our first observation, if $k\neq -i$, then $(k+i,k+i+1)$ is in $M(\sigma^k(x))$ but not in $M(\sigma^k(y))$. Similarly, if $k\neq -j$, then $(k+j,k+j+1)$ is in $M(\sigma^k(y))$ but not in $M(\sigma^k(x))$. As $i\neq j$, the two sets are distinct in any case. To prove Proposition \[prop:compl\], we assume without loss of generality that $|x|<|y|$, i.e., $y$ is obtained from $x$ by repeatedly applying $\tau$. Furthermore, let $i$ and $j$ be the indices of the leftmost unmatched 0 in $x$ and the rightmost unmatched 1 in $y$, respectively. More formally, we have $i=\min U_0(x)$ and $j=\max U_1(y)$. As $x$ and $y$ have finite block code, we have $x_1=y_1=1$ and $x_n=y_n=0$. In particular, these positions are unmatched, so $i>1$ and $j<n$ are well-defined. Moreover, we clearly have $i\leq j$. Note that $x_{i-1}$ is either matched or an unmatched 1. However, as every block of matched parentheses ends with 1, we have $x_{i-1}=y_{i-1}=1$ in any case. A similar argument shows that $y_{j+1}=x_{j+1}=0$. Summarizing, we have $x_{i-1}x_i=10$ and $y_{i-1}y_i=11$, as well as $x_jx_{j+1}=00$ and $y_jy_{j+1}=10$. From these observations it follows that ${\overline}{x_{i-1}x_i}=01$ and ${\overline}{y_{i-1}y_i}=00$, and similarly ${\overline}{x_j x_{j+1}}=11$ and ${\overline}{y_j y_{j+1}}=01$. Applying Lemma \[lem:compl-aux\] to the indices $i-1$ and $j$ in ${\overline}{x}$ and ${\overline}{y}$ hence shows that $M(\sigma^k({\overline}{x}))$ and $M(\sigma^k({\overline}{y}))$ are distinct for all $k\geq 0$. As each chain of $D_n$ is uniquely described by its matched pairs of parentheses, we obtain that $\sigma^k({\overline}{x})$ and $\sigma^k({\overline}{y})$ do not lie on the same chain, proving the proposition.
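Proposition \[prop:compl\] can also be verified exhaustively for small dimensions. The following is our own verification sketch, using the fact (stated above) that each chain of $D_n$ is uniquely described by its matched pairs:

```python
from itertools import product

def match(x):
    """Greene-Kleitman matching: the set of matched index pairs of bitstring x."""
    M, stack = set(), []
    for i, b in enumerate(x, 1):
        if b == '0':
            stack.append(i)
        elif stack:
            M.add((stack.pop(), i))
    return M

def chain_id(x):
    """Two bitstrings lie on the same chain of D_n iff their matchings agree."""
    return frozenset(match(x))

def comp(x):
    """Bitwise complement."""
    return x.translate(str.maketrans('01', '10'))

def rot(x, k):
    """sigma^k: cyclic right-rotation by k positions."""
    k %= len(x)
    return x[-k:] + x[:-k] if k else x

def finite(x):
    """x has finite block code iff it starts with 1 and ends with 0."""
    return x[0] == '1' and x[-1] == '0'

def check_prop(n):
    """Exhaustively verify Proposition [prop:compl] in dimension n."""
    strs = [''.join(b) for b in product('01', repeat=n)]
    for x in strs:
        for y in strs:
            if x < y and finite(x) and finite(y) and chain_id(x) == chain_id(y):
                assert all(chain_id(rot(comp(x), k)) != chain_id(rot(comp(y), k))
                           for k in range(n))
```

Running `check_prop(n)` for small $n$ confirms the proposition for all pairs of same-chain bitstrings with finite block code.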
#### Proofs of Lemmas \[lem:unimod-gks\] and \[lem:finite-gks\] The proof of Lemma \[lem:unimod-gks\] is trivial, as there are only two deficient necklaces for prime $n$, namely ${\langle 0^n\rangle}$ and ${\langle 1^n\rangle}$. For Lemma \[lem:finite-gks\], note that each bitstring $x$ other than $0^n$ and $1^n$ has two consecutive bits $x_ix_{i+1}=01$ (with indices considered cyclically). Consequently, the rotated bitstring $\sigma^{-i}(x)$ starts with 1 and ends with 0 and therefore has finite block code. #### Proofs of Lemmas \[lem:unimod-jordan\] and \[lem:finite-jordan\] The following proof was suggested in Wille’s thesis [@Wille2018] on edge-disjoint SCDs in the $n$-cube, and is reproduced here with her permission. Consider a chain from ${D^{\mathrm{J}}}_n$ with a bitstring $x$ such that ${\langle x\rangle}$ is deficient and of size $d<n$. We will show that $x$ is an endpoint of this chain and that the other endpoint $y$ corresponds to a deficient necklace ${\langle y\rangle}$ of the same size $d$. The following argument is illustrated in Figure \[fig:unimod-jordan\]. Define $r := n/d$ and let $v\in\{0,1\}^d$ be such that $x = v^r$. We assume without loss of generality that $|x|\leq n/2$, implying that $|v| \leq d/2$. As every matched pair of parentheses involves exactly one 0 and one 1, it follows that $|U_0(v)|\geq |U_1(v)|$. This ensures that we can match every unmatched 1 in the $i$th copy of $v$ in $x$ with an unmatched 0 in the $(i-1)$th copy of $v$ for all $i=2,\ldots,r$, implying that $U_1(x)=U_1(v)$. We proceed to show that $x$ is the starting point of its chain in ${D^{\mathrm{J}}}_n$. If $U_1(x) = \emptyset$, then $x$ is the starting point of its chain in $D_n$ by definition, and consequently also the starting point of its chain in ${D^{\mathrm{J}}}_n$. Otherwise $U_1(x)\neq \emptyset$, and we show that then $z:=\tau^{-1}(x) \notin {R^{\mathrm{J}}}_n$. By our observation from before, all unmatched 1s of $x$ lie in the first copy of $v$, so we have $z=\tau^{-1}(x) = \tau^{-1}(v)v^{r-1}$.
Together with the fact that $|U_1(\tau^{-1}(v))|=|U_1(v)|-1$, we obtain $|U_1(z)|=|U_1(x)|-1$ and $|U_1(\sigma^{-d}(z))|=|U_1(x)|$. This implies $|U_1(z)|<|U_1(\sigma^{-d}(z))|$, i.e., $z$ does not have the maximum number of unmatched 1s among the representatives of its necklace, so indeed $z \notin {R^{\mathrm{J}}}_n$. We now show that $y:=\tau^{n-2|x|}(x)$ has the form $y = w^r$ for some $w\in\{0,1\}^d$. This implies $|{\langle y\rangle}|\leq d$, which is sufficient to prove that actually $|{\langle y\rangle}|=d$, as otherwise we could reverse the roles of $x$ and $y$ in the proof, yielding a contradiction. For $i=1,\ldots,r-1$, the number of 0s in the $i$th copy of $v$ in $x$ that are unmatched in $x$ is $|U_0(v)|-|U_1(v)| = d - 2|v| = (n-2|x|)/r$. Consequently, by applying $\tau^{n-2|x|}$ to $x$, we arrive at $\tau^{n-2|x|}(x) = w^r$ with $w = \tau^{d-2|v|}(v)$. To prove Lemma \[lem:finite-jordan\], let $x\in {R^{\mathrm{J}}}_n\setminus\{0^n,1^n\}$, i.e., $x$ has the maximum number of unmatched 1s among the representatives of its necklace. Note that $x_1=1$, as otherwise we could rotate $x$ to the left until the first 1-bit reaches the first position, which would strictly increase the number of unmatched 1s. A similar argument shows that $x_n=0$, as otherwise we could rotate $x$ to the right until the last 0-bit reaches the last position, which would strictly increase the number of unmatched 1s. These two observations imply that $x$ has finite block code. SAT-based computer search {#sec:sat} ========================= In this section we describe our computer search for SCDs in cubes of small dimension using a SAT solver.
The reduced necklace graph -------------------------- We let $N_n^-$ denote the multigraph obtained as follows: We consider the cover graph of $N_n$, where the edge multiplicities are given by the capacities (as defined before Lemma \[lem:cyclic-bound\]), and we remove all edges between a full necklace and a deficient necklace, whenever the deficient necklace is closer to the middle level(s); see Figure \[fig:N6m\]. Note here that even though $N_n^-$ is a (multi)graph, it inherits the level structure from the poset $N_n$, so all the poset notions (chain, SCD, etc.) from before translate to $N_n^-$ in the natural way. The aforementioned edge removals enforce that a chain containing a deficient necklace must either start or end at this necklace. Informally speaking, removing those edges does not harm us when searching for unimodal chains and SCDs, as those edges cannot be contained in any unimodal chain anyway. SAT formula for edge-disjoint SCDs of $N_n^-$ --------------------------------------------- In this section we describe a propositional formula $\Phi(n,s)$ in conjunctive normal form (CNF), whose solutions correspond to $s$ edge-disjoint unimodal SCDs of $N_n^-$. In later sections we show how to modify those solutions, so that they can be unrolled to $s$ edge-disjoint (and good almost-orthogonal) SCDs of $Q_n$. Throughout this section, the integers $n\geq 1$ and $s\geq 2$ are fixed. We first compute the level sizes of $N_n^-$, and, based on this, the number $c_n$ of chains and the chain sizes that an SCD must have. Different SCDs will be indexed by $i=1,\ldots,s$, and different chains in the $i$th SCD will be indexed by $j=1,\ldots,c_n$. We also assume that the chains of the $i$th SCD are indexed in decreasing order of their size, so chain $j=1$ is the unique longest chain, and chain $j=c_n$ is a shortest chain. We use Boolean variables $X_{i,j,e}$ to indicate that edge $e$ of $N_n^-$ is contained in chain $j$ of decomposition $i$.
Moreover, Boolean variables $Y_{i,j,u}$ are used to indicate that node $u$ of $N_n^-$ is contained in chain $j$ of decomposition $i$. Clearly, we introduce these variables only for pairs $(j,e)$ and $(j,u)$ in the relevant levels. For instance, the node $u={\langle \emptyset\rangle}$ can only be contained in the longest chain 1, so we only have a single variable $Y_{i,j,u}$ for fixed $i$ and $u$, namely $Y_{i,1,u}$. In the following we describe the clauses of our CNF formula $\Phi(n,s)$ in verbal form. Link edge to node variables: : If some edge variable $X_{i,j,e}$ is satisfied and the edge $e$ connects nodes $u$ and $v$, then both corresponding node variables $Y_{i,j,u}$ and $Y_{i,j,v}$ must be satisfied. Moreover, if some node variable $Y_{i,j,u}$ is satisfied and chain $j$ extends above the level of $u$, then at least one edge variable $X_{i,j,e}$ for an edge $e$ incident with $u$ and a node from a level above must be satisfied. Similarly, if chain $j$ extends below the level of $u$, then at least one edge variable $X_{i,j,e}$ for an edge $e$ incident with $u$ and a node from a level below must be satisfied. At a deficient necklace $u$, one or both of these edge sets are empty, and consequently a chain extending beyond the level of $u$ in the corresponding direction will never be mapped to $u$. Force chains to be present: : For any chain $j$ and any level $k$ visited by this chain, at least one of the node variables $Y_{i,j,u}$, where $u$ runs over all nodes on level $k$, must be satisfied. Node-disjoint chains: : For any node $u$ on a level visited by two chains $j$ and $j'$ in the same SCD $i$, at most one of the two node variables $Y_{i,j,u}$ and $Y_{i,j',u}$ may be satisfied. Enforce unimodality: : For any deficient necklace $u$ on some level $k\leq (n-1)/2$, if one of the node variables $Y_{i,j,u}$ is satisfied, then one of the corresponding variables $Y_{i,j,v}$, where $v$ is on level $n-k$ and satisfies $|u|=|v|$, has to be satisfied.
Note that there may be deficient necklaces of different sizes on the same level. Edge-disjoint SCDs: : For any two SCDs $i$ and $i'$, any two chains $j$ and $j'$ from those SCDs, and any edge $e$ between two consecutive levels that are intersected by both chains, at most one of the two edge variables $X_{i,j,e}$ and $X_{i',j',e}$ may be satisfied. A useful trick to reduce the size of the resulting CNF formula dramatically is to fix some SCDs to be particular standard decompositions, for instance the ones mentioned in Lemmas \[lem:unroll-GKS\] and \[lem:unroll-J\], so that the corresponding edge and node variables are not free, but fixed constants. Similarly, we may also couple certain pairs of SCDs to be complements of each other, so only one set of variables is free, and the other is forced. Unrolling by incremental CNF augmentation ----------------------------------------- Any solution of the CNF formula $\Phi(n,s)$ described before corresponds to $s$ edge-disjoint unimodal SCDs of $N_n^-$ (and $N_n$). However, as the example in Figure \[fig:not-unrollable\] shows, these SCDs cannot always be unrolled to $s$ edge-disjoint SCDs of $Q_n$. Unfortunately, we have no systematic way to avoid this problem, so we resolve it in an ad-hoc fashion: We compute a satisfying assignment of $\Phi(n,s)$ using a SAT solver, and we test whether the current solution can be unrolled. If not, then we take the first pair of chains from two SCDs that cannot be unrolled simultaneously, and we add an additional clause that prevents this particular pair of chains from appearing in a solution, yielding an augmented CNF formula $\Phi'(n,s)$. The advantage of this approach is that an incremental SAT solver has the ability to reuse information about the structure of $\Phi(n,s)$ when solving the augmented instance $\Phi'(n,s)$.
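To make the verbal description concrete: with Boolean variables encoded as positive integers and clauses as lists of signed literals (the usual DIMACS convention), the recurring constraint patterns above and the blocking clause of the augmentation step can be sketched as follows. This is our own illustration, not the paper's implementation:

```python
from itertools import combinations

def at_least_one(lits):
    """'Force chains to be present': at least one of the variables is true."""
    return [list(lits)]

def at_most_one(lits):
    """'Node-disjoint chains' / 'Edge-disjoint SCDs': pairwise encoding of
    'at most one of the variables is true'."""
    return [[-a, -b] for a, b in combinations(lits, 2)]

def implication(a, b):
    """'Link edge to node variables': variable a implies variable b."""
    return [[-a, b]]

def blocking_clause(true_lits):
    """Augmentation step: exclude a rejected solution by requiring that at
    least one of the listed variables (true in that solution) becomes false."""
    return [[-l for l in true_lits]]
```

In the actual search, the blocking clause would be built only from the variables of the offending pair of chains, not from the full assignment, so that all solutions containing that pair are excluded at once.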
We repeat this iterative process until we either find a solution that can be unrolled to $s$ edge-disjoint SCDs of $Q_n$, or the resulting CNF formula has no satisfying assignment. In practice, this last case usually cannot be detected, as the solvers take too long to certify unsatisfiability. Good almost-orthogonal SCDs --------------------------- We take a similar incremental approach to compute good families of almost-orthogonal SCDs. Again we start with the CNF formula $\Phi(n,s)$, and keep adding constraints that prevent certain pairs of chains from appearing. Specifically, we forbid a pair of chains if it cannot be unrolled or if the unrolled chains intersect in more than one node (or in more than two if these are the longest chains). It turns out that adding the following clauses right at the beginning speeds up the incremental search process considerably, as it immediately excludes many local violations of almost-orthogonality. Forbid diamonds: : Consider four edges $e=(x,v)$, $f=(v,y)$, $g=(x,w)$, and $h=(w,y)$ that form a ‘diamond’, i.e., $x$ is on some level $k$, the elements $v$ and $w$ are on level $k+1$, and $y$ is on level $k+2$ of $N_n^-$, such that for any necklace representative of $x$, flipping the two bits corresponding to $e$ and $f$ leads to the same representative of $y$ as flipping the two bits corresponding to $g$ and $h$. For any two SCDs $i$ and $i'$ and any two chains $j$ and $j'$ from those SCDs that intersect all levels $k$ to $k+2$, not all four edge variables $X_{i,j,e}$, $X_{i,j,f}$, $X_{i',j',g}$, $X_{i',j',h}$ may be satisfied simultaneously. The goodness property could be enforced in a similar incremental way, but coincidentally, the solutions we obtained all satisfied this property right away. In hindsight, this might not be so surprising, given that the graph formed by the 2-element chains is relatively sparse: Every SCD contributes a matching that covers only a $\Theta(1/n)$-fraction of all nodes on average.
Implementation details
----------------------

We used the incremental SAT solvers Glucose [@AudemardLS2013] and MiniSat [@EenSorensen2003]. The unrolling tests and incremental CNF augmentation that drive the SAT solver were implemented in C++. Table \[tab:bench\] shows the sizes of the generated SAT instances, running times, and memory requirements for the four families of SCDs that we computed for proving Lemmas \[lem:ortho\] and \[lem:edge\] (shown in Figures \[fig:Q7\_4\_ortho\]–\[fig:Q11\_6\_edge\]).

   $n$    $s$   SCDs                \#variables   \#clauses      solver    time      memory
  ------ ----- ------------------- ------------- -------------- --------- --------- --------
   $7$    $4$   almost-orthogonal   $1.152$       $1.484$        Glucose   0.048 s   11 MB
   $11$   $4$   almost-orthogonal   $132.432$     $1.437.326$    MiniSat   23 min    972 MB
   $10$   $5$   edge-disjoint       $49.900$      $381.880$      Glucose   2:29 h    501 MB
   $11$   $6$   edge-disjoint       $198.648$     $14.258.688$   Glucose   3:51 h    1.6 GB

  : Size of SAT instances and required computing resources. The numbers of variables and clauses are recorded at the end of the CNF augmentation and take into account internal simplifications carried out by the solver.[]{data-label="tab:bench"}

Proofs of Lemmas \[lem:ortho\] and \[lem:edge\] {#sec:small}
===============================================

To prove Lemmas \[lem:ortho\] and \[lem:edge\], we describe families of four good almost-orthogonal SCDs of the $n$-cube for $n=7,11$, and families of five and six edge-disjoint SCDs for $n=10$ and $n=11$, respectively. We specify those SCDs in Figures \[fig:Q7\_4\_ortho\]–\[fig:Q11\_6\_edge\] in compact form, by unimodal SCDs of the necklace poset $N_n$, from which the SCDs in the $n$-cube can be recovered by unrolling as described in Section \[sec:unroll\] and by taking complements of some of the resulting SCDs. We specify each chain in one of these SCDs uniquely by a particular choice of necklace representatives (recall Observations \[obs:unroll-full\] and \[obs:unroll-def\] and the remarks between them).
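One natural convention for picking a canonical necklace representative (assumed here purely for illustration; the actual figures and files fix their own choice) is the lexicographically smallest rotation of the bitstring:

```python
def necklace_rep(x: str) -> str:
    """Return the lexicographically smallest rotation of the bitstring x,
    a canonical representative of its necklace (one possible convention)."""
    return min(x[i:] + x[:i] for i in range(len(x)))
```

For instance, the two bitstrings $x=100110$ and $y=110100$ from the footnote about block codes lie on the same necklace, and both map to the same representative under this convention.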
The representatives are described by specifying the minimal and maximal elements of each chain, and the elements from $[n]$ that are added/removed from the sets when moving along the chains. The resulting full SCDs of $Q_n$ are provided in files that can be downloaded from the third author's website [@www] and from the arXiv [@preprint], together with a simple Python program for verification. In those files, subsets of $[n]$ are encoded by their characteristic bitstrings of length $n$ (as in Section \[sec:proof-unroll\]).

Proof of Lemma \[lem:ortho\]
----------------------------

We now describe four good almost-orthogonal SCDs of $Q_7$ and $Q_{11}$. The SCDs $V_7$ and $V_{11}$ defined below are constructed as in [@GriggsKillianSavage2004] (recall Section \[sec:proof-unroll\]), and are then unrolled together with their complements as described in Lemma \[lem:unroll-GKS\].

Figure \[fig:Q7\_4\_ortho\] shows two SCDs $V_7$ and $W_7$ in $N_7$, each consisting of 5 chains, that together with their complements ${\overline}{V_7}$ and ${\overline}{W_7}$ can be unrolled to four good almost-orthogonal SCDs of $Q_7$; see the file `Q7_4_ortho.txt`. Specifically, the union of all $4\cdot 14=56$ edges given by all chains of size 2 of those SCDs forms one cycle of length 14 and 14 paths on 3 edges each. These are indeed all unicyclic components.

Figure \[fig:Q11\_4\_ortho\] shows two SCDs $V_{11}$ and $W_{11}$ in $N_{11}$, each consisting of 42 chains, that together with their complements ${\overline}{V_{11}}$ and ${\overline}{W_{11}}$ can be unrolled to four good almost-orthogonal SCDs of $Q_{11}$; see the file `Q11_4_ortho.txt`. Specifically, the union of all $4\cdot 132=528$ edges given by all chains of size 2 of those SCDs forms 66 isolated edges, 22 paths on 2, 3, or 7 edges each, 22 trees on 5 edges (the trees have one degree-3 node with paths of lengths 1, 1, and 3 attached to it), and 2 cycles of length 22 with an additional dangling edge attached to each node.
These are all unicyclic components.

Proof of Lemma \[lem:edge\]
---------------------------

We now describe five edge-disjoint SCDs of $Q_{10}$ and six edge-disjoint SCDs of $Q_{11}$. The SCD $X_{10}$ defined below is constructed as in [@Jordan2010] (recall Section \[sec:proof-unroll\]), and is then unrolled together with its complement as described in Lemma \[lem:unroll-J\]. The SCDs of $Q_{11}$ were computed with the help of Lemma \[lem:prime-unimodal\].

Figure \[fig:Q10\_5\_edge\] shows three SCDs $X_{10}$, $Y_{10}$, and $Z_{10}$ in $N_{10}$, each consisting of 26 chains, that together with the complements ${\overline}{X_{10}}$ and ${\overline}{Y_{10}}$ can be unrolled to five edge-disjoint SCDs of $Q_{10}$; see the file `Q10_5_edge.txt`.

Figure \[fig:Q11\_6\_edge\] shows three SCDs $X_{11}$, $Y_{11}$, and $Z_{11}$ in $N_{11}$, each consisting of 42 chains, that together with their complements ${\overline}{X_{11}}$, ${\overline}{Y_{11}}$, and ${\overline}{Z_{11}}$ can be unrolled to six edge-disjoint SCDs of $Q_{11}$; see the file `Q11_6_edge.txt`.

Acknowledgements {#acknowledgements .unnumbered}
================

We thank Kaja Wille for several inspiring discussions about symmetric chain decompositions. We also thank the anonymous referee of this paper whose suggestions improved the presentation in several places. Manfred Scheucher was supported by DFG Grant FE 340/12-1.

[^1]: An extended abstract of this paper has been submitted to Eurocomb 2019.

[^2]: If $n$ is composite, the method fails, as there may be several necklace representatives with the same block code, e.g., $x=100110$ and $y=110100$ with $\beta(x)=\beta(y)=(3,3)$.
Attorney calls Howard death ruling 'terrible,' 'wrong'

The attorney for the family of Everette Howard calls the coroner's ruling in his death "incomplete" and "wrong."

WCPO Copyright 2012 Scripps Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

CINCINNATI - The attorney for the North College Hill teen who died following the use of a Taser last August is calling the coroner's ruling in the death "terrible," "wrong" and "incomplete."

Hamilton County Coroner Lakshmi Sammarco, M.D. reported the official cause of 18-year-old Everette Howard's death is "unknown/undetermined," stating in a media release, "...it is frustrating not to be able to ascertain a definitive cause of death."

But Al Gerhardstein, attorney for the Howard family, says it is not possible to definitively blame the Taser because doctors know that electricity does not leave evidence behind in the body. He says this case called for Dr. Sammarco to look at other evidence, such as witness statements of how Howard reacted after the Taser was used on him.

"We do this all the time with forensic science. You have to look at the circumstances surrounding the death. I don't see any evidence in this coroner's report that she did that," Gerhardstein said.

Amnesty International's look at medical examiners' findings linking Taser shocks and deaths found, "Such cases are not always clear cut, and one challenge for pathologists is that there are usually no obvious physical signs on a body to show the effects of electrical shocks.
Various circumstances, including proximity of the shocks to the fatal collapse; the toxicology findings; and whether the impact of restraining procedures, including CED (Taser) shocks, could have contributed to cardiac or respiratory failure."

Gerhardstein said there was good news from the autopsy report, because the toxicology results proved there were no drugs or alcohol in Howard's system when he died.

"3 in the morning, he's in the college dorm and he's not drinking or using drugs. He's a very clean and upstanding kid, very healthy kid...and the only thing that happened that night is he was Tased and then he died and she's saying this doesn't matter, the Taser doesn't matter...I don't think so," said Gerhardstein.

Howard's family has waited to learn how he died for 10 months after UC police used a Taser to subdue him on campus Aug. 6, 2011. Travonna and Everette Howard, Sr. told 9 News in an exclusive interview they were devastated when the coroner gave them the news about their son before releasing it to the public.

"I was angry. I was angry," said Everette Howard, Sr.

"This is not what we were looking for," said Travonna Howard. "It's like my son didn't even matter. He doesn't count, and hearing the answer from the coroner, they're pretty much telling us 'Oh well, we don't know what happened to him and you know that's the end of that.'"

The coroner's news release states that several cardiac pathology experts were consulted and only one agreed to review the case. The release states that that expert suffered a medical episode, which further delayed the investigation. The investigation was also delayed by the death of the previous coroner, Dr. Anant Bhati. Also, 9 News first reported the investigation was delayed early on because Ohio's Bureau of Criminal Investigations opted to have the Taser tested for its electrical output at a lab in Canada, which had the Taser caught up in customs issues.
"We waited for a good report, we didn't get one, and now we'll take it into our own hands," said Gerhardstein, who plans to hire his own expert now to look into the death. "It hurts to know that somebody has had your kid's heart for 10 months," said Everette, Sr. "You know his heart is not even buried with him. It's been cut, mutilated...and no answer." Copyright 2012 Scripps Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
NEW YORK (Reuters) - Wall Street's major indexes fell on Monday, with the S&P 500 sliding 2 percent, weighed by technology and financial stocks as shares of Apple Inc AAPL.O and Goldman Sachs Group Inc GS.N came under pressure.

Traders work on the floor at the New York Stock Exchange (NYSE) in New York City, U.S., November 12, 2018. REUTERS/Brendan McDermid

COMMENTS:

RANDY FREDERICK, VICE PRESIDENT OF TRADING AND DERIVATIVES FOR CHARLES SCHWAB IN AUSTIN, TEXAS:

“At the end of the week on Friday, people were sort of taking risk off the table going into the weekend. That's not an uncommon thing especially when we had a pretty sharp rally earlier in the week after the midterm elections. And then we have what I call a “partial holiday” today. So anytime you've got a weekend plus an almost holiday weekend – although markets are open today the banks aren't – for people to take some risk off the table, especially after a bullish move in the market, is not uncommon. But I kind of thought we would have a bit of a bounce this morning and we didn't get that at all.

“I think the bigger thing was the tech sector. Certainly it was a couple of stocks that do business with Apple talking about lower orders and concerns about that. And Apple itself is down a lot. It's a big bellwether in many indexes and it's considered a bellwether for the whole tech industry and I think that just kind of sparked some panic across several tech sectors including semiconductors... The thing about tech is that it's the biggest sector in the market – it's 21 percent of the overall market, which is huge. But also these guys have been an outperformer for a good part of this year and, in fact, not just this year but for several years. So they tend to be highly volatile.
When they lead to the upside they also tend to lead to the downside, and so when there becomes some concern about that, people get a little uncertain, a little scared and they start selling them off.”

CHUCK CARLSON, CHIEF EXECUTIVE OFFICER AT HORIZON INVESTMENT SERVICES IN HAMMOND, INDIANA

“Looks like we've gone through the bulk of the earnings season so now the market's attention is going to be focused on non-earnings-related things. And today's trading may be a bit exacerbated by the holiday and volumes may be a bit lower. But a couple of things are going on.

“One, you continue to see the adjustment in market leadership, today that was really accelerated with money continuing to move out of the high-growth tech area, spurred by reports that Apple's smartphone business may be slowing. Money is shifting from there into risk-off assets, utilities, defensive stocks, dividend payers. That's an adjustment that had been occurring for some time, but there seems to be an acceleration today.

“Couple that with concerns about the dollar's strength and what that might mean for corporate profits going forward. The biggest issue is the market continuing to grapple with what appears to be a pendulum swing back toward value, dividend-paying stocks, risk-off assets versus where we were early in the year which was growth, growth, growth. When you have significant shifts like that you have tons of money that's flopping around out there, it can result in days like this where you have these kinds of adjustments going on.

“Certainly volume will impact the thinner market and that can accentuate moves. That is a factor that may be exacerbating the decline today, you probably have a lower participant market today. Tomorrow it will be interesting to see if people are willing to step in and buy the two day dip. When the bond market opens along with the stocks, maybe the stock market took all of the selling today. If people are in a selling mood they're going to sell what they can sell.
Today the only thing they could sell were stocks. Tomorrow is going to be pretty telling in terms of how significant this one-day trading activity was.”

PETER JANKOVSKIS, CO-CHIEF INVESTMENT OFFICER AT OAKBROOK INVESTMENTS LLC IN LISLE, ILLINOIS:

“I would say at the moment it seems the path of least resistance is down. Obviously the techs are dragging on the market pretty heavily. We have also seen energy, perhaps joining in a little bit more as we get closer to the close because we have seen oil up earlier in the day and now it has turned negative.

“Most of the other stuff has been out there, tech has been down all day, industrials with GE have been weighing all day, that is the biggest change I can see. (Goldman Sachs) is certainly hitting the financials, although the financials as a whole are kind of in line with the market overall.

“You just have to wait and see, realistically we are getting to the point where things are a little overdone on this but it sometimes takes a while for the market to come to that same conclusion.”

RANDY FREDERICK, VICE PRESIDENT OF TRADING AND DERIVATIVES FOR CHARLES SCHWAB IN AUSTIN

“I think it's just Apple. It's such a big company and such a bellwether for all things tech. The tech sector has been a leader year-to-date and it's a high beta sector, it starts to become a bigger drag on the way down.

“It seems a bit overdone to me. There is obviously some volatility in the oil markets and the dollar is up a little bit, but this kind of a selloff is pretty substantial. It seems to be fairly widespread. It's a bit surprising.”
If you're in Ohio and an MLS fan, Hell is Real, as the derby between FC Cincinnati and Columbus Crew SC is known. Ahead of the first league clash between the teams at Nippert Stadium as part of Heineken Rivalry Week (Sunday, 6 pm ET | FS1; MLS LIVE on DAZN in Canada), here are some choice words and plays from the rivalry so far.
Cannabis research program will be available to Canadian university students

Université de Sherbrooke (UdeS), along with CannaSher Inc, has announced the creation of the CannaSher Chair on Medical Cannabis “to explore the full potential of cannabis plants and to identify the most promising molecules,” according to a May 23 press release.

The project will have a budget totalling over 1.5 million dollars, with CannaSher investing $900,000 combined with a UdeS contribution of $703,700. The establishment of the Chair “will also be used as leverage to obtain additional research funding” in the field, the press release stated.

Cannabis research program

With Professor Kamal Bouarab of the Department of Biology at the helm, the CannaSher Chair aims to “generate and improve cannabis shoots capable of producing various cannabinoids (molecules of therapeutic interest).” Prof. Bouarab also intends to “test and optimize a new culture system developed by CannaSher to grow cannabis plants,” as well as to make provisions for students to train within the research program at the undergraduate and graduate levels.

“Cannabinoids are substances produced by animals and humans as potent remedies against various symptoms including pain, depression, loss of appetite, anxiety, or inflammation,” says Prof. Bouarab. “Cannabis plants produce hundreds of cannabinoids, some of which have effects similar to the ones we produce ourselves. In collaboration with CannaSher, our research will help promote certain molecules in plants, so that their effects can be studied in a therapeutic context.”

Cannabis curriculum for university students

“I'm planning to recruit two research assistants to work in the project upon [receiving] our licence from Health Canada,” Prof. Bouarab told Daily Hive by phone when asked about educational opportunities. “Students will be under my supervision and will work directly with the two research assistants.
“The two research assistants will set up different experiments of the project. I will start to recruit students for Master at beginning of the second year of the Chair.”

The professor says that UdeS will complete criminal background checks for all personnel and students who will work on the project. Subjects that he would like to develop with students include the optimization of “a new culture system developed by CannaSher,” the “selection of the best cultivars according to their cannabinoid profiles,” and the development of “crosses between selected cultivars to improve their contents [of] cannabinoids.”

A strategic partnership

CannaSher president Steven Blanchard hailed the creation of the Chair as “an important milestone” for the company. “We are very proud to work in partnership with the Université de Sherbrooke and with Prof. Bouarab,” he commented on Wednesday.

Professor Jean-Pierre Perreault, VP of Research and Graduate Studies at Université de Sherbrooke, is similarly enthused. “CannaSher is a young company for whom research quality is fundamentally important. The business alliance we are beginning with these entrepreneurs is part of our vision for the university's research.”

CannaSher is a privately-owned corporation based in Sherbrooke, QC, and is in the process of obtaining the necessary licences to produce and distribute medical cannabis. CannaSher intends to “manufacture cannabis oils and cannabis-based products for medical applications,” and expects to start producing cannabis by the end of this year.

Although the project is just getting started, Blanchard is optimistic about the future of the Chair. “We are convinced that Prof. Bouarab's research will contribute to increase our scientific understanding of cannabis and its medical use and will ultimately improve the health of Canadians and patients around the world,” he stated.
Prenatal diagnosis, genetics and reproductive decision-making. Recent developments in genetic science will potentially have a significant impact on reproductive decision-making by adding to the list of conditions which can be diagnosed through prenatal diagnosis. This article analyses the jurisdictional variations that exist in Australian abortion laws and examines the extent to which Australian abortion laws specifically provide for termination of pregnancy on the grounds of fetal disability. The article also examines the potential impact of pre-implantation genetic diagnosis on reproductive decision-making and considers the meaning of reproductive autonomy in the context of the new genetics.
Q: ASP.NET HttpHandler error after Windows 10 Fall Creators Update

A couple of days ago my Windows 10 development machine got the Fall Creators Update. Since then every attempt to register a custom HttpHandler in any ASP.NET Web Site (not web application) fails with the error:

Failed to map the path '/App_GlobalResources/'.

The stack trace is:

[InvalidOperationException: Failed to map the path '/App_GlobalResources/'.]
   System.Web.Hosting.HostingEnvironment.MapPathActual(VirtualPath virtualPath, Boolean permitNull) +8965114
   System.Web.Hosting.HostingEnvironment.MapPathInternal(VirtualPath virtualPath) +42
   System.Web.Compilation.BuildManager.CheckTopLevelFilesUpToDate2(StandardDiskBuildResultCache diskCache) +295
   System.Web.Compilation.BuildManager.CheckTopLevelFilesUpToDate(StandardDiskBuildResultCache diskCache) +55
   System.Web.Compilation.BuildManager.RegularAppRuntimeModeInitialize() +174
   System.Web.Compilation.BuildManager.Initialize() +238
   System.Web.Compilation.BuildManager.InitializeBuildManager() +267
   System.Web.HttpRuntime.HostingInit(HostingEnvironmentFlags hostingFlags) +224

[HttpException (0x80004005): Failed to map the path '/App_GlobalResources/'.]
   System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +9002835
   System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +85
   System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) +333

I am using Visual Studio 2015 with the integrated IIS Express. I have some old ASP.NET web sites (.NET version 2.0) that used to work.
Now, even the following example does not work:

App_Code/MyHandler.cs:

public class MyHandler : IHttpHandler
{
    public MyHandler()
    {
        //
        // TODO: Add constructor logic here
        //
    }

    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/html";
        context.Response.Write("Hello");
    }
}

web.config:

<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="MyHandler" verb="*" path="*.htm" type="MyHandler"/>
    </handlers>
  </system.webServer>
</configuration>

I really cannot understand what is happening and I've run out of ideas and patience. Please, help. TIA.

A: I've decided to follow @Hamlet Hakobyan's suggestion and uninstall the Windows 10 Fall Creators Update. Let me tell you right away that the problem is now fixed. Here is my development setup in case someone faces the same situation:

- Windows 10 build 1709 (Fall Creators Update)
- Visual Studio Community 2015 Update 3
- ASP.NET Web Site project (not web application project)
- .NET framework version 2.0
- IHttpHandler implementation in .cs file in App_Code folder
- Http handler registration in web.config for IIS Express running in Integrated Mode as per: https://msdn.microsoft.com/en-us/library/46c5ddfy.aspx

If your http handler(s) stopped working after having installed the Windows 10 Fall Creators Update and you receive the not-so-useful error below:

Failed to map the path '/App_GlobalResources/'.

then go ahead and revert to the previous Windows 10 build (1703). Hope it helps.
Q: Find and pass multiple attributes to ajax

I have jQuery code that finds the "src" attribute of all images in a div (every image has the same class: .image_class), then sends it to ajax (and then to a PHP file that finds those images and deletes them). Here is the code:

$remove_images = $.each($("#main_div").find(".image_class").attr(""));
$.ajax({
    type: 'POST',
    data: { action: 'delete_images', imagefile: $remove_images },
    url: 'delete.php',
    success: function(msg) {
        alert("" + msg + "");
    }
})

The PHP file looks like this:

if($_POST["action"]=="delete_images"){
    @unlink($_POST['imagefile']);
}

But this does not work. I suspect something is wrong in $remove_images (let's say "$.each is bad").

A: You can use the .map() method to collect stuff from a jQuery collection. The callback will run for every element in the collection that is returned by .find() in our case. this will represent the current DOM element. Whatever you return from the callback will be included in the result.

var $remove_images = $("#main_div").find(".image_class").map(function () {
    return $(this).attr('src');
}).get();

jsFiddle Demo

I used the .get() method to convert the result into a plain Javascript array. Please read the manual of .map() for further information (and in general, get used to using the manual, because you could have found the answer to your question there - the examples are great in most cases).
Q: Logging with DEBUG level in JBoss 7.1.1

Goal: My application should have messages with ERROR and DEBUG levels. The level of logging must be set (switched) via the JBoss Admin Console. Logging should be written to the standard JBoss log file and the server console.

I tried to use java.util.logging.Logger, but this logger does not have the necessary levels. I switched to log4j with slf4j. Messages with the ERROR level exist. The problem is with DEBUG and System.out.println. Interestingly, the DEBUG level is visible in the testing phase.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Logger creation and configuration
 */
public class ResourcesLog {
    @Produces
    Logger getLog(InjectionPoint ip) {
        String category = ip.getMember().getDeclaringClass().getName();
        return LoggerFactory.getLogger(category);
    }
}

or just

LOG = LoggerFactory.getLogger(MyClass.class);

Pom file:

<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.6</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.6</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.6</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.6</version>
</dependency>

Maybe it contains unnecessary entries. I can't set slf4j-simple and slf4j-api as "provided" - I get an error. Should these libs be inside the war?
jboss-deployment-structure.xml in WEB-INF:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure>
    <deployment>
        <exclusions>
            <module name="org.apache.commons.logging" />
            <module name="org.apache.log4j" />
            <module name="org.jboss.logging" />
            <module name="org.jboss.logging.jul-to-slf4j-stub" />
            <module name="org.jboss.logmanager" />
            <module name="org.jboss.logmanager.log4j" />
            <module name="org.slf4j" />
            <module name="org.slf4j.impl" />
        </exclusions>
    </deployment>
</jboss-deployment-structure>

And log4j.xml settings in the resource folder (in the war it is in 'WEB-INF/classes/log4j.xml'):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="false">
    <appender name="FILE" class="org.jboss.logging.appender.DailyRollingFileAppender">
        <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
        <param name="File" value="${jboss.server.log.dir}/server.log"/>
        <param name="Append" value="true"/>
        <param name="DatePattern" value="'.'yyyy-MM-dd"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
        </layout>
    </appender>
    <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
        <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
        <param name="Target" value="System.out"/>
        <param name="Threshold" value="DEBUG"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d{ABSOLUTE} %-5p [%c{1}] %m%n"/>
        </layout>
    </appender>
    <category name="org.mypackage">
        <priority value="DEBUG"/>
    </category>
    <root>
        <level value="INFO"/>
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
    </root>
</log4j:configuration>

A: Since you want your logs to go to the standard logging configuration and you want to control this configuration with a management option, there is no need to use log4j. Ditch the log4j.xml and the jboss-deployment-structure.xml.
Also you'll only need the slf4j-api, and it needs to be marked as provided. Next you need to configure a logger on the server. In CLI it would be a command like the following for a standalone server:

/subsystem=logging/logger=org.mypackage:add(level=DEBUG)

You'd also need to set the console handler and the file handler to allow for debug messages. By default the file handler is already set to print debug messages. To change the console handler to allow debug messages, execute the following CLI command:

/subsystem=logging/console-handler=CONSOLE:write-attribute(name=level,value=DEBUG)

These changes can be made in the web console as well.
Q: How to merge variable which is changing over time [R]

I'm attempting to do a merge - i.e. link two datasets based on a common string. The variable I'm trying to link, however, changes over time, so the merge needs to account for the date in order to link the correct value. Rather than have a matrix of the value to link at each date, I have one which gives the date of each time the value changes.

For example, let's say I'd want to merge the price of apples and oranges onto a list of apples and oranges purchased on certain dates. My first dataframe (transactions) contains the date a purchase took place, and whether it was an apple or an orange that was purchased. The second data frame contains the dates when the price of apples and oranges changed, and what it changed to (in this example prices change on 1 January, but really it could be any date).

> transactions <- data.frame(Date_Purchased = as.Date(c("02/01/2018", "02/01/2020", "02/01/2019", "02/01/2020"), format = "%d/%m/%Y"),
                             Item_Purchased = c("APPLE", "APPLE", "ORANGE", "ORANGE"))
> transactions
  Date_Purchased Item_Purchased
1     2018-01-02          APPLE
2     2020-01-02          APPLE
3     2019-01-02         ORANGE
4     2020-01-02         ORANGE

> price <- data.frame(Date = as.Date(c("01/01/2018", "01/01/2019", "01/01/2020", "01/01/2018", "01/01/2019", "01/01/2020"), format = "%d/%m/%Y"),
                      Item = c("APPLE", "APPLE", "APPLE", "ORANGE", "ORANGE", "ORANGE"),
                      Price = c(0.30, 0.35, 0.40, 0.60, 0.70, 0.75))
> price
        Date   Item Price
1 2018-01-01  APPLE  0.30
2 2019-01-01  APPLE  0.35
3 2020-01-01  APPLE  0.40
4 2018-01-01 ORANGE  0.60
5 2019-01-01 ORANGE  0.70
6 2020-01-01 ORANGE  0.75

The cost of an apple on January 2nd 2018 is 30c, and its cost on January 2nd 2020 is 40c. Similarly, the cost of an orange on January 2nd 2019 is 70c and on January 2nd 2020 75c.
As such I need the merged dataset to look like:

  Date_Purchased Item_Purchased Price_On_Date_Purchased
1     2018-01-02          APPLE                    0.30
2     2020-01-02          APPLE                    0.40
3     2019-01-02         ORANGE                    0.70
4     2020-01-02         ORANGE                    0.75

Unfortunately I'm really limited on the machine that I'm on, in that I don't have access to the CRAN library and can't download additional packages. This means I haven't been able to use the neardate() function, which I think would be useful for what I've tried. This is a level above what I'm used to doing on R, so I'm at a bit of a loss to be honest.

A: Since you cannot download additional packages, here is a base R approach:

transactions$Price_On_Date_Purchased <- unlist(
  by(transactions, transactions$Item_Purchased, function(x) {
    tmp <- subset(price, Item == x$Item_Purchased)
    tmp$Price[findInterval(x$Date_Purchased, tmp$Date)]
  }))

transactions
#  Date_Purchased Item_Purchased Price_On_Date_Purchased
#1     2018-01-02          APPLE                    0.30
#2     2020-01-02          APPLE                    0.40
#3     2019-01-02         ORANGE                    0.70
#4     2020-01-02         ORANGE                    0.75

We divide transactions based on Item_Purchased and subset the corresponding items from the price dataframe. Using findInterval we find the most recent date on which the price was changed and get the corresponding Price value.
[tox]
envlist =
    py{35,36,37,38}
    buildwhl
    clean
    docs
    fmt
    lint
    readme
    release

[testenv]
deps =
    .[test]
commands =
    pytest -v -m 'not xfail' {posargs}

[testenv:buildwhl]
basepython = python3.7
deps =
    twine
    wheel
commands =
    python setup.py sdist bdist_wheel
    twine check dist/*.whl dist/*.tar.gz
    python setup.py clean --all

[testenv:clean]
deps =
    cleanpy
commands =
    cleanpy --all --exclude-envs .

[testenv:docs]
basepython = python3.7
deps =
    -r{toxinidir}/requirements/docs_requirements.txt
commands =
    python setup.py build_sphinx --source-dir=docs/ --build-dir=docs/_build --all-files

[testenv:fmt]
basepython = python3.7
deps =
    autoflake
    black
    isort>=5
commands =
    autoflake --in-place --recursive --remove-all-unused-imports --ignore-init-module-imports .
    isort .
    black setup.py test tcconfig

[testenv:lint]
basepython = python3.7
deps =
    mypy>=0.782
    pylama
commands =
    python setup.py check
    mypy tcconfig setup.py --ignore-missing-imports --show-error-context --show-error-codes --python-version 3.5
    pylama

[testenv:readme]
changedir = docs
deps =
    path
    readmemaker>=1.0.0
commands =
    python make_readme.py

[testenv:release]
deps =
    releasecmd>=0.2.0
commands =
    python setup.py release --sign {posargs}
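The envlist above relies on tox's brace-expansion ("factor") syntax: py{35,36,37,38} names four environments, py35 through py38. As a rough illustration of the expansion rule only (this hypothetical helper is not tox's actual implementation), the behaviour can be sketched in Python:

```python
import re
from itertools import product

def expand_factors(spec):
    """Rough sketch of tox-style brace expansion:
    'py{35,36}-{lin,win}' -> ['py35-lin', 'py35-win', 'py36-lin', 'py36-win'].
    Illustration only, not tox's real parser."""
    parts = re.split(r"\{([^}]*)\}", spec)
    # Even indexes are literal text; odd indexes are brace groups to split on commas.
    options = [[p] if i % 2 == 0 else p.split(",") for i, p in enumerate(parts)]
    return ["".join(combo) for combo in product(*options)]

print(expand_factors("py{35,36,37,38}"))
```

Running `tox -e py37` then selects just the py37 environment, while bare `tox` runs everything listed in envlist.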
Q: How can I do compiled binding (x:Bind) to an indexed property using a string as an index?

I'm trying to use compiled binding. I have a property Errors that I used to bind with regular binding, like {Binding Errors[PropertyName]}. However, when I tried to use {x:Bind VM.Errors[PropertyName]}, I got this error: "Invalid binding path 'VM.Errors[PropertyName]' : Expected a digit". I also tried using quotes, like VM.Errors['PropertyName'], but it doesn't solve the problem.

A: It does not work. Microsoft writes:

{Binding Groups[2].Title}: Binds to the specified item in the collection. Only integer-based indexes are supported.

See https://msdn.microsoft.com/en-us/library/windows/apps/mt210946.aspx at the end of the page. I even tried implementing IReadOnlyDictionary or IDictionary, but with no success.

It's interesting because on https://msdn.microsoft.com/en-us/library/windows/apps/mt185586.aspx they say:

For example, consider a business object where there is a list of "Teams" (ordered list), each of which has a dictionary of "Players" where each player is keyed by last name. An example property path to a specific player on the second team is: "Teams[1].Players[Smith]". (You use 1 to indicate the second item in "Teams" because the list is zero-indexed.)

Update: My support case at Microsoft Connect was closed. They write:

Thank you for reporting this issue as well as providing a sample project. The suggested scenario was not supported in Windows 10 RTM. We are however considering adding such support to a future update to Windows 10.
A group of senators, including Democratic presidential candidate Bernie Sanders, is trying to keep companies from calling borrowers’ cellphones to collect on debt held or backed by the federal government, including student loans and mortgages. Sen. Ed Markey (D-Mass.) introduced a bill earlier this week that would reverse a controversial provision tucked into the budget bill passed last week, which allows companies that collect debt for the federal government to use robocalling technology to call borrowers’ cellphones without their permission. Sanders and others, including Sen. Elizabeth Warren (D-Mass.), co-sponsored the bill.
Dana Johannsen: Lack of depth more than a stumbling block

New Zealanders say the 20-goal loss to their arch-rivals across the Tasman has been a huge wake-up call.

Chalk up plan A as a success, plan B as a disaster. If the Constellation Cup revealed the Silver Ferns' strengths and ability to perform under pressure, the Quad Series has so far exposed their limitations. New Zealand's embarrassing loss to Australia on Sunday confirmed that although the team have made big strides this year, depth remains not so much a stumbling block for them but a huge rockface to be scaled.

Once you get beyond their starting seven, the Ferns struggle to remain competitive with the Diamonds. It is not a new dilemma. Waimarama Taumaunu faces the same conundrum every Silver Ferns coach has experienced - how to build depth in her side while meeting the expectations of a demanding sporting public who do not tolerate losses of any kind to Australia. Her attempts at striking that rather delicate balance have so far fallen flat.

With the Constellation Cup safely locked away for the year, Taumaunu would have felt a bit more comfortable trying a few new things against Australia at the weekend. But she was far from at ease watching the result unfold from the sideline, nor did it look any better the four times she has viewed the footage since then.

The Silver Ferns coach has been clear her major goal this season is to develop solutions out on court against Australia that aren't so reliant on the "big three" - Irene van Dyk, Laura Langman and Casey Williams. Going into the pinnacle events in 2014 and 2015, she wants a group of 12 players she can have confidence in on court against Australia.
The Ferns at least know they can cope without their inspirational captain after Williams missed the Constellation Cup series through injury. Breaking the Ferns' heavy dependence on van Dyk and Langman is proving to be much tougher. Cathrine Latu has long been seen as van Dyk's heir apparent, only the succession plan has been stretched out a little longer than anyone could have imagined with the 40-year-old supershooter determined to keep playing through to the 2014 Commonwealth Games.

Nevertheless the Ferns need two strong options at goal shoot and midcourters confident in feeding the ball in to both. But right now, New Zealand just don't have the personnel in the midcourt to achieve Taumaunu's goals. Reinforcements should arrive next season, with Liana Leota and Joline Henry expected back after a brief maternity break, while Courtney Tairi and Grace Rasmussen are on the comeback trail from injury. Until then, Taumaunu might be better shelving her midcourt experiments.

Dana has more than a decade's experience in sports journalism, joining the Herald in 2007 following stints with TVNZ and RadioSport. Over that time Dana has covered several major events including the 2011 Netball World Cup in Singapore, 2011 Rugby World Cup, 2012-13 Volvo Ocean Race, and the 34th America's Cup in San Francisco. A multi-award winning journalist, Dana was named New Zealand Sports Journalist of the Year in 2012 after scooping both the news and feature categories at the TP McLean Awards. The previous year she picked up the prize for best news break. She was also an inaugural recipient of the Sir John Wells scholarship at the 2009 NZSJA awards. Dana also writes a weekly sports column for the NZ Herald.
A crater rim peeking out of the shadows, captured as LRO passed over the lunar terminator. NASA/GSFC/Arizona State University

July 6, 2009: NASA's Lunar Reconnaissance Orbiter (LRO) has transmitted its first images since reaching lunar orbit June 23. The spacecraft has two cameras, a low-resolution Wide Angle Camera and a high-resolution Narrow Angle Camera. Collectively known as the Lunar Reconnaissance Orbiter Camera (LROC), they were activated June 30. The cameras are working well and have returned images of a region a few miles east of Hell E crater in the lunar highlands south of Mare Nubium. As the Moon rotates beneath LRO, LROC gradually will build up photographic maps of the lunar surface.

To view these first calibration images, visit: http://www.nasa.gov/lro

"Our first images were taken along the Moon's terminator — the dividing line between day and night — making us unsure of how they would turn out," said Mark Robinson, LROC principal investigator of Arizona State University in Tempe. "Because of the deep shadowing, subtle topography is exaggerated, suggesting a craggy and inhospitable surface. In reality, the area is similar to the region where the Apollo 16 astronauts safely explored in 1972. While these are magnificent in their own right, the main message is that LROC is nearly ready to begin its mission."

The satellite also has started to activate its six other instruments. The Lunar Exploration Neutron Detector will look for regions with enriched hydrogen that potentially could have water ice deposits. The Cosmic Ray Telescope for the Effects of Radiation is designed to measure the Moon's radiation environment. Both were activated June 19 and are functioning normally.
Instruments expected to be activated during the next week and calibrated are the Lunar Orbiter Laser Altimeter, designed to build 3-D topographic maps of the Moon's landscape; the Diviner Lunar Radiometer Experiment, which will make temperature maps of the lunar surface; and the Miniature Radio Frequency, or Mini-RF, an experimental radar and radio transmitter that will search for subsurface ice and create detailed images of permanently shaded craters.

These images show cratered regions near the moon's Mare Nubium region, as photographed by the Lunar Reconnaissance Orbiter's LROC instrument. Impact craters feature prominently in both images. Older craters have softened edges, while younger craters appear crisp. Each image shows a region .87 miles (1.4 kilometers) wide, and features as small as 9.8 feet (3 meters) wide can be discerned. The bottoms of both images face lunar north. NASA/Goddard Space Flight Center/Arizona State University

The final instrument, the Lyman Alpha Mapping Project, will be activated after the other instruments have completed their calibrations, allowing more time for residual contaminants from the manufacture and launch of LRO to escape into the vacuum of space. This instrument is an ultraviolet-light imager that will use starlight to search for surface ice. It will take pictures of the permanently shaded areas in deep craters at the lunar poles.

"Accomplishing these significant milestones moves us closer to our goals of preparing for safe human return to the Moon, mapping the Moon in unprecedented detail, and searching for resources," said Richard Vondrak, LRO project scientist of NASA's Goddard Space Flight Center in Greenbelt, Maryland.

While its instruments are being activated and tested, the spacecraft is in a special elliptical commissioning orbit around the Moon. The orbit takes less fuel to maintain than the mission's primary orbit.
The commissioning orbit's closest point to the lunar surface is about 19 miles (31 kilometers) over the Moon's south pole, and its farthest point is approximately 124 miles (200 kilometers) over the lunar north pole. After the spacecraft and instruments have completed their initial calibrations, the spacecraft will be directed into its primary mission orbit in August, a nearly circular orbit about 31 miles (50 kilometers) above the lunar surface.
Q: Detecting the number of Oracle rows updated by an OCI OCIStmtExecute call

I have an Oracle UPDATE statement that I call using the OCIStmtExecute function. Using an OCI function call, I would like to know how many rows have been updated by the action, e.g. zero, one or more. How do I do this?

A: Use the OCIAttrGet function call on your OCIStmt statement handle with the attribute type set to OCI_ATTR_ROW_COUNT. So add the following code to your program:

ub4 row_count;
rc = OCIAttrGet(stmthp, OCI_HTYPE_STMT, &row_count, 0, OCI_ATTR_ROW_COUNT, errhp);

where:

stmthp is the OCIStmt statement handle
errhp is the OCIError error handle
rc is the defined return code (sword)

The number of rows updated (or deleted or inserted, if that is your operation) is written into the passed row_count variable.
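As an aside beyond the original answer: higher-level database APIs expose the same counter. In Python's DB-API, for instance, it is the cursor.rowcount attribute. The sketch below uses the stdlib sqlite3 module purely as a stand-in database; the table name and data are invented for illustration:

```python
import sqlite3

# In-memory stand-in database: DB-API's cursor.rowcount plays the same
# role as OCI_ATTR_ROW_COUNT, reporting rows touched by the last statement.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER, val TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])

cur.execute("UPDATE t SET val = 'x' WHERE id >= 2")
updated = cur.rowcount  # number of rows affected by the UPDATE
print(updated)
conn.close()
```

The same pattern applies in Oracle's own Python binding; the point is simply that "rows affected" is queried from the statement/cursor object after execution, not returned by the execute call itself.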
Japanese pay-TV net WOWOW and Australian prodco WildBear Entertainment are teaming up to coproduce a documentary on George Orwell’s novel Nineteen Eighty-Four (pictured) that will explore how the dystopian book relates to society today.

The project, entitled Finding 1984, is billed as “a modern commentary” on the novel, which depicts life in a surveillance society and famously introduced concepts such as “Big Brother,” referring to an omnipresent dictator, and the part-security camera, part-television “telescreen.”

The documentary will investigate the concept of a modern surveillance society informed by information technology, and will feature interviews with Japanese artists Ryuichi Sakamoto and Mamoru Oshii, as well as Michael Radford, director of the narrative film Nineteen Eighty-Four, which also came out in 1984.

Finding 1984 will be released exclusively in Japan, airing on WOWOW on December 27.

About The Author: Daniele Alcinii is a news editor at realscreen, the leading international publisher of non-fiction film and television industry news and content. He joined the RS team in 2015 with experience in journalism following a stint out west with Sun Media in Edmonton's Capital Region, and with communications work in Melbourne, Australia and Toronto. You can follow him on Twitter at @danielealcinii.
Design: Informal medicine is analysed according to standard ethical principles: autonomy, beneficence and non-maleficence, distributive and procedural justice, and caring.

Setting: Hospital, medical school, and other settings where patients may turn to physicians for informal help.

Conclusion: No generalisation can be made to the effect that informal medicine is or is not ethical. Each request for informal consultation must be considered on its own merits.

Guidelines: Informal medicine may be ethical if no payment is involved, and when the patient is fully aware of the benefits and risks of a lack of record keeping. When an informal consultation does not entail any danger to the patient or others, the physician may agree to the request. If, however, any danger to the patient or others is foreseen, then the physician must insist on professional autonomy, and consider refusing the request and persuading the patient to accept formal consultation. If a reportable infectious disease, or other serious danger to the community, is involved, the physician should refuse informal consultation or treatment, or at least make a proper report even if the consultation was informal. If agreeing to the request will result in an unfair drain on the physician’s time or energy, he or she should refuse politely.

Informal (hallway, kerbside, off the cuff) medicine is informal self-referral to a physician for consultation or treatment without the usual medical record keeping or follow-up.1 It is not known how prevalent informal medicine is worldwide. Israeli physicians are very familiar with the phenomenon.
A comprehensive search of the literature revealed that a significant proportion of reports come from Israel, although other countries are also represented.2–4 Weingarten reported 198 “off the cuff” consultations between general practitioners and patients; these occurred over a period of six months at social gatherings, at chance meetings, and in medical settings outside the regular practice.5 In one study of 219 Israeli medical students who completed anonymous questionnaires, 33% had, during their clinical rounds, informally consulted with doctors without a referral letter from their primary care physician. Students in clinical years used informal medicine significantly more than preclinical students (50% v 21% respectively), despite the fact that all students had government subsidised comprehensive medical insurance. In another study, conducted among physicians in an Israeli hospital,6 91 of 111 physicians who completed the questionnaire (82%) confirmed that they had been requested by their colleagues to provide hallway consultations relating to their own medical problems. Most of them (91%) agreed to consult because “they wanted to help”, “it is unpleasant to refuse”, or “it’s the acceptable thing to do”. Most of the requests were unscheduled and time consuming. Records were kept in only 36% of the cases and follow up was conducted in 62%. Physicians who provided hallway medicine also requested it from others (p<0.001) because of personal acquaintance, to save time, and because of ready accessibility, even though the prevailing attitude to informal medical consultation was negative or ambiguous. Now that cellular telephones are ubiquitous, contact can be made with physicians at almost any time and place. In a small study by a family physician, 94 requests for informal consultations over the cellular phone were documented and analysed over a three months period.7 Only 11% of the requests were made over the weekend. 
In 63% of the cases, the clinic at which the patient was registered was open at the time of the request. The principal reasons for the requests were medication (29%) and for a second opinion (28%). In 42 cases the request for consultation came while the physician was busy with other patients. Informal medicine may appear to be an unethical practice. The patient unfairly makes a request, which may be hard for the doctor to refuse, especially if they are friends or colleagues. The doctor deprives the patient of the benefits of record keeping and follow up. The medical profession, medical researchers, and patients in general are deprived of the knowledge that records might have facilitated. If the doctor receives payment then the violation of ethics seems even more serious. And isn’t a gift or a reciprocal favour from a grateful neighbour a form of payment? As we shall see, however, there are good reasons for considering many cases of informal medicine quite ethical. No generalisation can be made to the effect that informal medicine is or is not ethical. We must rather consider different kinds of cases. We shall use some ethical principles not as dogma but as a framework for discussion. In so doing, we are not in any way taking or endorsing a “principlist” approach. Nor are we saying that “principlism” is any better than any other approach. As in classroom discussion, so in bioethics literature, the use of fairly well known concepts as section headings can help us organise our thinking. That is the primary role that so called “principles” are intended to perform here. A complete treatment of informal medicine might include an extensive discussion of emergency “Good Samaritan” situations, as well as doctors’ self treatment. This modest contribution will, however, focus instead on the less spectacular corridor consultations, which are much more usual and less addressed. We hope that our analysis and discussion might be of value for clinicians. 
Emergency situations will be mentioned only by way of comparison. Doctors’ self treatment falls outside the scope of this paper. So does informal consultation about patients, and between and among doctors, which has been treated elsewhere.8,9,10 Our focus is only on cases where a patient has requested an informal consultation or treatment from a doctor. AUTONOMY Informal medicine seems prima facie to serve the autonomy of the patient. Autonomy is too often discussed within the narrow limits of decisions whether or not to accept treatment. Autonomy in a broader sense means, however, taking control of our own lives, making our own decisions about what it means to be healthy, and about what we should do to achieve and maintain health. If no records are kept the patient has the leeway to choose what information, if any, to show to other physicians. This may be dangerous, but shouldn’t the patient, especially if he or she is medically literate, have the right to decide what information to reveal to whom? Indeed, when the informal patient is a fellow physician, a nurse or a medical or nursing student, he or she may be presumed to be knowledgeable enough to be trusted to decide what personal medical information should be revealed to whom. Of course the fact that a consultation is informal does not guarantee that no records are kept. A physician could ask for permission to transmit some of the information to the patient’s usual physician. In the emergency setting, a healthcare provider who gives first aid on the spot at an accident—for example, will be expected to transmit some information to the formal staff when they arrive. The authors are indebted to one of the journal’s reviewers for this comment (S Hurst, personal communication, 2004). A physician might also feel duty bound to report a dangerous situation to the patient’s usual physician, or to report an infectious disease to the health authorities, even against the desire of the informal patient. 
In an informal consultation, the autonomous patient might feel freer to discuss the problem openly with the doctor, perhaps expressing disagreement or raising questions about diagnosis and treatment. This might be especially likely if the informal patient is a health professional. It might seem that, even in a formal setting, a patient who is a physician or nurse would be more assertive than other patients, although we know (admittedly anecdotally) of health professionals who can be quite assertive at work, but who become passive and easily manipulated when they are the patients. A physiotherapist recently complained to one of us (FL) that when she went to see a doctor about a winter flu, he had her strip naked for the examination. Only afterwards did she realise that he had overstepped his bounds. I asked her how long she had been a physiotherapist. When she told me that she had been in the profession for twenty years, I replied that she could not blame the doctor for what happened. Perhaps, although this is not provable, she might have maintained more control in an informal consultation, where the doctor might not wield such authority. We admit, moreover, that it cannot be proved that patients in general will be more assertive in informal settings. Indeed, a good physician should encourage a patient to take an active role in all consultations. We do wish to suggest, however, that for some patients and some doctors, an informal atmosphere may have a positive effect. Whether autonomy in these contexts serves the medical benefit of the person needing the treatment will be discussed under the category of beneficence and non-maleficence. There is also physician’s autonomy. On the one hand, informal medicine allows the physician to help friends and students, and to take on interesting cases, which might be impossible within the framework set by the physician’s employer. 
On the other hand, the physician might feel undue pressure from friends and colleagues, who request informal medicine, to the detriment of the physician’s autonomy. Anyone who requests informal medicine should respect in particular the physician’s professional autonomy, and his or her desire to maintain good clinical practice, and to avoid malpractice. The physician’s insistence on professional autonomy, when it is motivated by sincere concern for the patient and the community, may lead the physician to limit recognition of patient’s autonomy. Although many requests for informal medicine may be justified, the patient should be ready to accept a physician’s refusal with understanding, even though the patient may not understand or agree with the physician’s reasons for refusing. BENEFICENCE AND NON-MALEFICENCE Informal medicine may be very attractive in this age of intensive lifestyles. It saves time usually wasted waiting for an appointment, can save money, and can facilitate consultation with a specific consultant who is not necessarily the patient’s usual physician. There are cases where a lack of records might be in the interests of the patient. The Human Genome Project’s funding for ethical, legal, and social implications, as well as other generous sources for the bioethics of genetics, resulted in a vast number of articles discussing the right of the patient to privacy with respect to genetic information, as may be seen in the bibliography in reference eleven.11 The discussions usually refer to insurability and employability. It is arguable that it is to the good of society that employers should know about genetic predispositions to dangerous conditions, and that someone with a predisposition to heart attack, to take one example, should not be hired as an airline pilot or even a bus driver. It is argued on the other hand, however, that this is personal medical information, which no one has the right to divulge without the informed consent of the patient. 
Similar considerations apply outside of genetic medicine, and in medicine in general. Indeed it is arguable that the ethical questions of genetics are really questions of general medical ethics. The question whether a patient has a right to conceal a predisposition to heart attack from insurers and prospective employers is the same sort of question regardless of whether the predisposition is genetic or due to other causes. With the growing computerisation of medical records, we may get to the point that whatever a family physician records will be available to any physician and to laypeople as well. It is becoming obvious nowadays that whatever is put into a computer can easily become public knowledge. Privacy in general may be dying. This may especially be the case if ministries of health, sick funds, and health maintenance organisations put patients’ files on their electronic networks. Programmers do not yet seem to know how to design safeguards that hackers cannot get past. We may be approaching an era where medical confidentiality will be only “in theory”. Also, a patient may want to conceal information from the family doctor in order to make it easier to get notes later on certifying—for example, his or her ability to participate in strenuous sport, or to serve in a combat unit in the army. Turning to informal medicine when needed—for example, a prescription for an inhalator for someone with mild asthma—is a way to preserve medical privacy. It will be said that such behaviour is unethical, endangering oneself and others. However, the bioethics literature favouring genetic privacy sets a precedent for privacy in general. It must be emphasised, none the less, that the fact that someone has a right to something does not entail that every doctor has an obligation to help that person to exercise that right. A responsible physician who is approached with a request for informal medicine will seriously weigh a number of considerations before agreeing to the request. 
Is there a risk of malpractice if proper diagnostic procedures are not possible in the informal setting? Is there a chance that this patient is getting multiple prescriptions for the same drug through informal consultations with several doctors? If so, is there a risk of dangerous overtreatment (such as overuse of bronchodilators for asthma)?12–14 Beneficence and non-maleficence, moreover, concern not only the individual patient but the community as well. Surely any case that might present a public health or other danger to the community is inappropriate for informal medicine—for example, dangerous infectious disease, potentially dangerous psychiatric patients, and susceptibility to seizure or accident in the workplace should obviously be treated formally, documented, and properly reported. So, although much informal medicine may be perfectly innocent and beneficial, there are also many kinds of cases where a judicious physician might reply to a request for informal treatment by saying: “I recognise your concerns that proper documentation might lead to others learning about your case. I also admit that the bioethics literature strongly emphasises medical privacy these days. But I have to weigh your own rights and needs against clear dangers to yourself (and/or to others). I therefore strongly advise you to come to my clinic for proper diagnosis and treatment.” If the patient continues to insist on informal treatment, it may be in order to suggest looking for another doctor. DISTRIBUTIVE JUSTICE The major question of justice is whether informal medicine unfairly takes up time and energy that the physician should be giving to formal patients. If the physician works for a sick fund—for example, the time spent with informal patients who are not members of that specific sick fund, may be at the expense of paying, sick fund members. Informal medicine may be defended against this objection in at least two ways. 
In the first place, at least in a country like Israel, which has several sick funds, and each citizen is, by law, a member of one of them, the account can balance out. Today a doctor from this sick fund informally treats a member of that fund, whereas tomorrow a doctor from that fund might informally treat a member of this one. In the second place, in an informal situation, the patient might feel more encouraged to discuss the case freely and openly with the physician. If the case is interesting or unusual, the doctor might learn something from it. In fact, if the informal patient puts plenty of input into the consultation, and a lively discussion ensues, this can be a good learning experience for the doctor. This is especially likely if the patient is a colleague, or a medical or nursing student. However, any educated patient with a sharp, searching mind can ask provocative questions, and put forth interesting arguments. All this might be to the benefit of the doctor’s sick fund and its members. This is not to say that such interesting exchanges can take place only in informal medicine. Many “formal” doctors are warm, highly patient centred, open minded, and willing to learn from their patients. Some patients, however, even if they are physicians or nurses, can feel intimidated sitting across the desk from a physician, even if the physician is making every effort to be warm and open: and some physicians can, in spite of good intentions, maintain a chilling distance. So although informal consultation may not be the only means to open physician/patient intellectual exchange, it is a means which should not be ignored. Even if a doctor gives informal treatment to a friend or colleague during his working hours at a sick fund of which the patient is not a member, how much work time would the sick fund members really lose? Do we have to be stingy misers, insisting upon keeping accounts over every penny? How about a little largesse? 
The Talmudic saying: “This one has gained, and that one has not lost anything”15 seems to apply here. At the beginning of this article, we suggested a distinction between informal medicine for payment and informal medicine for free. Surely a doctor who is working for and receiving pay from a sick fund or private or public hospital or clinic, and who receives “under the table” payment from an informal patient during working hours, is guilty of dishonesty towards his or her employer. The violation is especially serious if the physician uses the employer’s facilities for private gain and does not reimburse the employer. If, however, the informal treatment for pay takes place after working hours there seems to be no violation of ethics. If no payment of any kind is received, then even if the treatment takes place during working hours and with the employer’s facilities, the employer has no ground for complaint provided that the doctor does not do this too often.

Jewish law, however, makes an interesting argument against informal medicine for free. The Talmud discusses a case where someone has caused bodily injury to another. The injurer is required to compensate the injured monetarily for pain, loss of work time, and medical expenses. The injurer, however, tries to avoid paying medical expenses by offering informal medicine. “Rather than paying you,” he offers, “I will take you to my doctor, who will treat you for free”. To this, the Talmud replies: “A doctor who treats for nothing, is worth nothing” (Babylonian Talmud,15 85a). Maimonides urges the patient to refuse the offer and to insist on money for the best available doctor.16 With all due respect to Maimonides and the Talmud, however, it seems that if the injurer is offering the services of a doctor friend who really is one of the best, it would not necessarily be wrong to accept the informal and free treatment.
The Talmudic remark, “a doctor who treats for nothing is worth nothing”, was made only in the context of compensation for injury. Judaism says nothing against free or informal treatment in other contexts. Maimonides worked all day as court physician for the Sultan in Egypt. In the evening he returned to his village and, although exhausted, treated local patients informally until late at night.17,18 Maimonides does not state clearly to what extent these patients included indigent people who would have needed treatment for free. Given the long tradition of charity in Judaism, however, as well as that of free treatment for poor people of all faiths in hospitals in Israel before the establishment of the sick funds, it is reasonable to assume that he would not have turned such a person down. Indeed the Hebrew word for charity, tsedaka, translates literally as justice. Charitable giving—which should include treating the poor for free—is not something that one does voluntarily. One is required by justice to do it.

Christianity has a similar attitude. Dr AK Tharien, founder of the Christian Fellowship Hospital in Oddanchatram, Tamil Nadu, remarked: “Christianity encourages all charitable acts as service to God”. He added: “Regarding the issue of doctors’ service outside the clinic, if it is a charitable act, with no remuneration, it should be encouraged. But in India what is observed is that many go for an extra income with no record (to escape income tax). This is unethical” (A K Tharien, personal communication, 2005).

PROCEDURAL JUSTICE

Is it proper that physicians should make treatment decisions informally, and on their own, without the critical checks and balances of other staff members, nurses, social workers, etc? The answer to this question obviously rests on the nature and seriousness of the decision being considered.
Certainly a decision not to resuscitate, or to disconnect a ventilator (to take one extreme), should not be taken by any individual physician, but is a matter for a staff meeting in which the opinions of other doctors, nurses, a social worker, family, and patient should be heard. It does not seem likely, however, that informal medicine will often be required to deal with such weighty scenarios as: “Hey, Doc, if you have a spare moment, would you mind pulling the plug for me?” When it comes to less serious and more routine procedures, formal medicine is probably no less informal than informal medicine, in the sense that the physician makes decisions alone, without consulting with other professionals. Of course many decisions will be neither so serious as issuing a Do Not Resuscitate (DNR) order, nor so routine as prescribing an antibiotic for a simple infection. One expects that in such cases a physician treating informally will be just as responsibly circumspect, and just as willing to consult with other professionals, as when treating formally.

CARING

Heartfelt caring for a patient is a concept that has come up more in the nursing ethics literature than in that of medical ethics. It should, however, be mentioned here. Indeed, informal nursing also exists. So does informal medicine practised by nurses. It might seem that a caring professional would respond unhesitatingly to a patient’s request for informal treatment or advice. A professional who really cares about the patient, however, would carefully weigh the question whether this specific case is an appropriate one for informal medicine, or whether it should be handled in a more formal context. So in most cases, probably, the concept of caring does not really decide any issues either in favour of, or against, informal medicine.
CONCLUSION

At one extreme of the spectrum, it is clear that a physician treating informally for pay or reciprocal favours on the employer’s time and with the employer’s facilities is behaving unethically. At the other end, in an informal emergency situation, one who knows how to save life and does not do so is behaving unethically. So it is clear that no generalisation of the form “informal medicine is ethical” or “informal medicine is unethical” is warranted.

With respect to non-emergency situations, we may probably conclude in general that when no payment is involved, or when the employer’s time and facilities are not being used, and when the patient is fully aware of the benefits and risks of a lack of record keeping, then informal medicine is ethically unobjectionable. As we have already mentioned, however, when informal, undocumented treatment presents a clear and serious risk to the patient or to the community, the physician should try to persuade the patient to accept formal treatment. If the patient refuses to agree, then in some cases the physician may have to refuse to treat the patient at all.

GUIDELINES

No generalisation can be made to the effect that informal medicine is or is not ethical: each request for informal consultation must be considered on its own merits. When an informal consultation does not entail any danger to the patient or to others, the physician may agree to the request if he so wishes. If, however, any danger to the patient or to others is foreseen, then the physician must insist on professional autonomy, and consider refusing the request and persuading the patient to accept formal consultation. If a reportable infectious disease or other serious danger to the community is involved, the physician should refuse informal consultation or treatment, or at least make a proper report even if the consultation was informal.
If agreeing to the request will result in an unfair drain on the physician’s time or energy, he or she should refuse politely. A doctor consulting or treating informally must be just as responsibly circumspect, and just as willing to consult with other professionals, as when treating formally.
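The guidelines above can be reduced to a simple decision procedure. The sketch below is purely illustrative (the function name, parameters, and labels are this sketch's own invention, and real cases need clinical and ethical judgement):

```python
def informal_consult_decision(reportable_disease=False,
                              danger_foreseen=False,
                              unfair_drain=False):
    """Sketch of the guidelines above as a decision procedure.
    (An illustrative reduction only; not a clinical rule.)"""
    if reportable_disease:
        # Serious community danger: refuse, or at least file a proper report.
        return "refuse or report"
    if danger_foreseen:
        # Danger to patient or others: press for formal consultation.
        return "persuade toward formal care"
    if unfair_drain:
        # Unfair drain on the physician's time or energy.
        return "refuse politely"
    return "may agree"

print(informal_consult_decision())  # → may agree
```

Note the ordering: community danger dominates individual danger, which dominates mere inconvenience, matching the order of the guidelines.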
/* stylelint-disable at-rule-empty-line-before,at-rule-name-space-after,at-rule-no-unknown */ /* stylelint-disable no-duplicate-selectors */ /* stylelint-disable */ /* stylelint-disable declaration-bang-space-before,no-duplicate-selectors,string-no-newline */ .ant-cascader { -webkit-box-sizing: border-box; box-sizing: border-box; margin: 0; padding: 0; color: rgba(0, 0, 0, 0.65); font-size: 14px; font-variant: tabular-nums; line-height: 1.5715; list-style: none; -webkit-font-feature-settings: 'tnum'; font-feature-settings: 'tnum'; } .ant-cascader-input.ant-input { position: static; width: 100%; padding-right: 24px; background-color: transparent !important; cursor: pointer; } .ant-cascader-picker-show-search .ant-cascader-input.ant-input { position: relative; } .ant-cascader-picker { -webkit-box-sizing: border-box; box-sizing: border-box; margin: 0; padding: 0; color: rgba(0, 0, 0, 0.65); font-size: 14px; font-variant: tabular-nums; line-height: 1.5715; list-style: none; -webkit-font-feature-settings: 'tnum'; font-feature-settings: 'tnum'; position: relative; display: inline-block; background-color: #fff; border-radius: 2px; outline: 0; cursor: pointer; -webkit-transition: color 0.3s; transition: color 0.3s; } .ant-cascader-picker-with-value .ant-cascader-picker-label { color: transparent; } .ant-cascader-picker-disabled { color: rgba(0, 0, 0, 0.25); background: #f5f5f5; cursor: not-allowed; } .ant-cascader-picker-disabled .ant-cascader-input { cursor: not-allowed; } .ant-cascader-picker:focus .ant-cascader-input { border-color: #40a9ff; border-right-width: 1px !important; outline: 0; -webkit-box-shadow: 0 0 0 2px rgba(24, 144, 255, 0.2); box-shadow: 0 0 0 2px rgba(24, 144, 255, 0.2); } .ant-cascader-picker-borderless .ant-cascader-input { border-color: transparent !important; -webkit-box-shadow: none !important; box-shadow: none !important; } .ant-cascader-picker-show-search.ant-cascader-picker-focused { color: rgba(0, 0, 0, 0.25); } .ant-cascader-picker-label { 
position: absolute; top: 50%; left: 0; width: 100%; height: 20px; margin-top: -10px; padding: 0 20px 0 12px; overflow: hidden; line-height: 20px; white-space: nowrap; text-overflow: ellipsis; } .ant-cascader-picker-clear { position: absolute; top: 50%; right: 12px; z-index: 2; width: 12px; height: 12px; margin-top: -6px; color: rgba(0, 0, 0, 0.25); font-size: 12px; line-height: 12px; background: #fff; cursor: pointer; opacity: 0; -webkit-transition: color 0.3s ease, opacity 0.15s ease; transition: color 0.3s ease, opacity 0.15s ease; } .ant-cascader-picker-clear:hover { color: rgba(0, 0, 0, 0.45); } .ant-cascader-picker:hover .ant-cascader-picker-clear { opacity: 1; } .ant-cascader-picker-arrow { position: absolute; top: 50%; right: 12px; z-index: 1; width: 12px; height: 12px; margin-top: -6px; color: rgba(0, 0, 0, 0.25); font-size: 12px; line-height: 12px; -webkit-transition: -webkit-transform 0.2s; transition: -webkit-transform 0.2s; transition: transform 0.2s; transition: transform 0.2s, -webkit-transform 0.2s; } .ant-cascader-picker-arrow.ant-cascader-picker-arrow-expand { -webkit-transform: rotate(180deg); transform: rotate(180deg); } .ant-cascader-picker-label:hover + .ant-cascader-input { border-color: #40a9ff; border-right-width: 1px !important; } .ant-cascader-picker-small .ant-cascader-picker-clear, .ant-cascader-picker-small .ant-cascader-picker-arrow { right: 8px; } .ant-cascader-menus { position: absolute; z-index: 1050; font-size: 14px; white-space: nowrap; background: #fff; border-radius: 2px; -webkit-box-shadow: 0 3px 6px -4px rgba(0, 0, 0, 0.12), 0 6px 16px 0 rgba(0, 0, 0, 0.08), 0 9px 28px 8px rgba(0, 0, 0, 0.05); box-shadow: 0 3px 6px -4px rgba(0, 0, 0, 0.12), 0 6px 16px 0 rgba(0, 0, 0, 0.08), 0 9px 28px 8px rgba(0, 0, 0, 0.05); } .ant-cascader-menus ul, .ant-cascader-menus ol { margin: 0; list-style: none; } .ant-cascader-menus-empty, .ant-cascader-menus-hidden { display: none; } 
.ant-cascader-menus.slide-up-enter.slide-up-enter-active.ant-cascader-menus-placement-bottomLeft, .ant-cascader-menus.slide-up-appear.slide-up-appear-active.ant-cascader-menus-placement-bottomLeft { -webkit-animation-name: antSlideUpIn; animation-name: antSlideUpIn; } .ant-cascader-menus.slide-up-enter.slide-up-enter-active.ant-cascader-menus-placement-topLeft, .ant-cascader-menus.slide-up-appear.slide-up-appear-active.ant-cascader-menus-placement-topLeft { -webkit-animation-name: antSlideDownIn; animation-name: antSlideDownIn; } .ant-cascader-menus.slide-up-leave.slide-up-leave-active.ant-cascader-menus-placement-bottomLeft { -webkit-animation-name: antSlideUpOut; animation-name: antSlideUpOut; } .ant-cascader-menus.slide-up-leave.slide-up-leave-active.ant-cascader-menus-placement-topLeft { -webkit-animation-name: antSlideDownOut; animation-name: antSlideDownOut; } .ant-cascader-menu { display: inline-block; min-width: 111px; height: 180px; margin: 0; padding: 4px 0; overflow: auto; vertical-align: top; list-style: none; border-right: 1px solid #f0f0f0; -ms-overflow-style: -ms-autohiding-scrollbar; } .ant-cascader-menu:first-child { border-radius: 2px 0 0 2px; } .ant-cascader-menu:last-child { margin-right: -1px; border-right-color: transparent; border-radius: 0 2px 2px 0; } .ant-cascader-menu:only-child { border-radius: 2px; } .ant-cascader-menu-item { padding: 5px 12px; line-height: 22px; white-space: nowrap; cursor: pointer; -webkit-transition: all 0.3s; transition: all 0.3s; } .ant-cascader-menu-item:hover { background: #f5f5f5; } .ant-cascader-menu-item-disabled { color: rgba(0, 0, 0, 0.25); cursor: not-allowed; } .ant-cascader-menu-item-disabled:hover { background: transparent; } .ant-cascader-menu-item-active:not(.ant-cascader-menu-item-disabled), .ant-cascader-menu-item-active:not(.ant-cascader-menu-item-disabled):hover { font-weight: 600; background-color: #e6f7ff; } .ant-cascader-menu-item-expand { position: relative; padding-right: 24px; } 
.ant-cascader-menu-item-expand .ant-cascader-menu-item-expand-icon, .ant-cascader-menu-item-loading-icon { display: inline-block; font-size: 10px; position: absolute; right: 12px; color: rgba(0, 0, 0, 0.45); } .ant-cascader-menu-item .ant-cascader-menu-item-keyword { color: #ff4d4f; } .ant-cascader-picker-rtl .ant-cascader-input.ant-input { padding-right: 11px; padding-left: 24px; text-align: right; } .ant-cascader-picker-rtl { direction: rtl; } .ant-cascader-picker-rtl .ant-cascader-picker-label { padding: 0 12px 0 20px; text-align: right; } .ant-cascader-picker-rtl .ant-cascader-picker-clear { right: auto; left: 12px; } .ant-cascader-picker-rtl .ant-cascader-picker-arrow { right: auto; left: 12px; } .ant-cascader-picker-rtl.ant-cascader-picker-small .ant-cascader-picker-clear, .ant-cascader-picker-rtl.ant-cascader-picker-small .ant-cascader-picker-arrow { right: auto; left: 8px; } .ant-cascader-menu-rtl { direction: rtl; border-right: none; border-left: 1px solid #f0f0f0; } .ant-cascader-menu-rtl:last-child { margin-right: 0; margin-left: -1px; border-left-color: transparent; border-radius: 0 0 4px 4px; } .ant-cascader-menu-rtl .ant-cascader-menu-item-expand { padding-right: 12px; padding-left: 24px; } .ant-cascader-menu-rtl .ant-cascader-menu-item-expand .ant-cascader-menu-item-expand-icon, .ant-cascader-menu-rtl .ant-cascader-menu-item-loading-icon { right: auto; left: 12px; }
/*
 * \file matmul_op.cc
 * \brief The matmul operation
 */
#include "blaze/operator/op/matmul_op.h"

namespace blaze {

REGISTER_CPU_OPERATOR(MatMul, MatMulOp<CPUContext>);

// Input: A, B  Output: C
OPERATOR_SCHEMA(MatMul)
    .NumInputs(2)
    .NumOutputs(1)
    .IdenticalTypeOfInput(0)
    .SetDoc(R"DOC(
MatMul operator C=A*B.
)DOC");

}  // namespace blaze
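The schema above declares two inputs and one output with C=A*B semantics. As an illustrative reference for what the registered kernel computes (a Python sketch, not part of the blaze codebase):

```python
def matmul(a, b):
    """Reference semantics of the MatMul operator: C = A * B.
    (Illustrative sketch only; a is n x k, b is k x m, result is n x m.)"""
    n, k, m = len(a), len(b), len(b[0])
    assert all(len(row) == k for row in a), "inner dimensions must match"
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```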
This Message Forum is to discuss spiritual topics only. Please avoid personal or assembly matters. Let us use this facility for our spiritual enrichment and for bringing glory to our Lord almighty. Webmasters reserve the right to delete any topic or posting, partly or completely, from this forum.

On this forum there is a very hot discussion over wearing churidar or chatta/mundu & wedding rings. Above all these, I would like to invite your opinion about our young sisters wearing "V-CUT" skirts, even in our meetings.

There is a difference between modesty and fashion. If an outfit is not modest, then it clearly doesn't belong. Fashion is another story. Who establishes what fashion is godly and what fashion is worldly? Suits and ties can be considered worldly because people of the world wear them. So can western dresses for women. The only standard given in scripture is modesty. Here are certain good write-ups on this issue without any bias. Most people write based only on what they think is right. Read any articles, or the Bible, with PRAYER, and it is God who opens our wisdom and heart to understand His truth.

Scripture says: "Consider the flowers of the field; Solomon was not clothed like one of them, yet your heavenly Father took care of them." How do you relate to that verse? Is there identification in that portion with a lifestyle of materialism? The Bible is expounding that we shouldn't get into unwanted materialism. Solomon is a role model for all of us when it comes to materialism, and a peer for you to follow.

It is appropriate to dress depending on the culture. In the church, ladies cover their heads while they pray. But it is not wise to come to church with dirty clothes. Wear the best to come to the presence of God. If we go to a party or wedding we wear a kasavu saree. Above all, God's presence is the highest we can get. So dress accordingly.

I am not sure what a V-CUT skirt is. But I realize that the subject is the appropriateness of a Christian's attire in Assembly gatherings.
The Word of God does not prescribe a uniform for a Christian's outward appearance, but we are given guidelines and examples to follow. However, it gives us clear instructions to "dress" our inner person. Col 3:8 instructs us to "put off" anger, wrath, malice, blasphemy and filthy language. Col 3:12-15 instructs us to "put on" compassion, kindness, humility, gentleness, patience, tolerance, forgiveness, peace, thankfulness and love. Keep in mind our beauty is short lived: "Charm is deceptive, and beauty is fleeting; but a woman who fears the Lord is to be praised" (Proverbs 31:30).

A Christian can dress fashionably and attractively but within the boundaries of modesty and culture. The virtuous woman of Proverbs 31 was dressed in fine linen and purple, the attire of kings and priests in the OT and of rich men in the NT. Our clothes and appearance reveal a lot about our values, our character and what we believe. Modesty is something desirable and precious, and it begins in the heart. Modesty comes from the inside out rather than the outside in.

A few words of caution: it is possible to have a very modest appearance outwardly and to have the heart of a Pharisee. The fact that you dress modestly or conservatively does not necessarily mean that you have a modest heart. It doesn't necessarily mean that you are more spiritual. The way you dress outwardly is a reflection of your heart, but not always. You can have a heart that really is rebellious against God and be dressed extremely conservatively.

Blessings of modesty: You will have the blessing of knowing that you have been obedient to God. It sets you free from being enslaved to fads and fashions. You will be respected and admired as Christians. It will protect you from the wrong kind of attention from the wrong kind of men. You will be valued for your spiritual qualities. Finally, as you choose modesty you will be able to win others to Christ. God bless.
Dear Sisters, it's my view that we dress the best when we go to worship the King or be in the presence of the King. It is not a gathering for mourning or weeping or sorrowing, nor a place that's unknown, but a place where we bow down before the King of Kings and Lord of Lords. When we say the best, it's not fashion nor is it a style, but what befits a woman in all her modesty, dignity and self-respect. You are going to be in the presence of the Almighty God, who is very loving and awesome and also a God of consuming fire. He is to be much revered and feared too. He cannot be pleased by fashion nor styles nor that which enslaves man day by day, but purely by ourselves and our humility. In this regard the onus falls on the aged sisters in the assembly to prayerfully and gracefully teach the young. It's not culture nor changing times that is to be considered or given importance to. It's where we are, and in whose presence we are, that should capture and captivate our minds and our hearts. Let us all, for that matter, come well dressed, and let us all come with joy and also trembling, since we will be in the presence of GOD, I again repeat GOD (All Powerful and Most Holy).
Dear Brethren, well dressed is not scantily dressed, nor being dressed beyond your means and capacity. Every human being is taught to be well dressed from their childhood days. Appreciate the thoughts that, yes, modesty and respectfulness are a strong character to our dressing. It's not for us to see what people of other countries do and wear, but what we can and should, in all the material blessings that our Lord is mercifully blessing us with. In the presence of the King we must not be found shabbily dressed nor scantily dressed, and we can never please God with a rich display of wealth and glamour. Modesty matters most in our being well dressed. Like how we were all taught by our parents to dress well and be presentable, let us too help others in our assemblies likewise, since the world is fast changing in fashion and tasteful displays, into which inadvertently some of us too fall. This thread is only to help us realise that it's not fashion nor style that counts, but how we revere and adore the One who is worthy of worship. It's how we show Him that, yes, we are happy being His children and thankful for all that we receive from Him. It's how we respect Him, the One who cares for us. Our attitude plays a great part in this small act.

Let's not write anything that would hurt others' feelings. Your maturity in Christ Jesus can be understood by what you post. Jesus said a man can only bring out what is inside of him: a good man brings forth the good stored inside him, and an evil man the evil stored inside him.

Now, what do you think when the Bible says dress with modesty? It means:
1. The state or quality of being modest.
2. Reserve or propriety in speech, dress, or behavior.
3. Lack of pretentiousness; simplicity.
So if you are wearing clothes to show off what a perfect figure you have, then you need to be checking whether you are being modest. Normally, what I think is that wearing skin-tight clothes is very immodest; if you look at such a person's shadow you would think the person is naked.
This kind of dressing is just too much. This is my view only and may not be correct, as it depends how you perceive it. What d'ya say, bros... Thank you.

username... seriously, what is a "V-cut skirt"? I mean, "mmk" and "terry_martin" did have some good points on apparel in general, but really, could you clarify what such a skirt is, for the rest of us to put in our 2 cents?

There is nothing wrong in a V-neck shirt. But there should be a limit to how low it goes. If it is too low, wear a t-shirt or camisole. Please do not show cleavage; it is wrong, it is not modest. Women should not give men a chance to look at them and commit sin. All those who are saved out there should know what a modest dress for assembly should be. Are they coming to show off their body parts or to worship God? C'mon now, ladies, get real. Let us fear God and put on some decent clothes. Do not give men a chance to look at you lustfully, and be modest.

"Women should not give a chance for men to look at her and commit sin." May I ask you, "Jenny1", is this a truly attainable goal? I understand blatant exposure being a cause for "lustful thoughts", but can you say with 100% certainty that every cloth you wear does not in any way cause another man to look at you? Not ever? If your answer is "yes", you are either extremely naive or in denial. The issue of modesty is relative when it comes to culture but very clear when it comes to scriptures, and it requires spiritual maturity to understand the issue properly and explain it. When we give advice or responses, especially to our younger women, we need to be careful and not give this kind of half-baked response, which has cultural connotations rather than spiritual maturity. You can come for worship in a skirt/sleeveless shirt/tank/a cami etc. and nobody is actually looking at you and going all "lustful", because it isn't considered "immodest". But you turn up in something like that in a church, maybe in Kerala...
and the same outfit which was "modest" elsewhere is "immodest" here. The attitude of the heart becomes a big factor: are we there to check what the other is wearing or not wearing, or are we there to worship our Lord? If we are there to worship our Lord, what the other is doing or wearing doesn't become an issue or a matter for lustful thought, because our hearts and minds are filled with the Lord. So can we with any conviction say, "You know, ladies, cover yourself and all the problems will be solved; don't give men a chance to look at you lustfully"? No one knows the hearts or minds of man but God. How much should a lady cover herself so as not to elicit any negative responses from males? Because what may seem unappealing to one man is appealing to another; isn't that true? If you have raised boys or have had brothers you are close to, you will know that it can even be just a pretty face that elicits bad thoughts. So then do we suggest to our women to cover up the face? If you think about it, then the Muslims have it down right by making their women wear a burqa: that can really stamp out any "lustful" thoughts from the opposite sex. You can't live your life worrying about things you can't control. Ask the Saviour to guide you individually on what we should wear. Everybody has opinions, but we are here to glorify God and not man, so wear what feels right in the sight of the Lord. His opinion is the only one that matters in the end, as we cannot control the thoughts of others all the time.

^^Very well said. Our people are too bothered about what others wear, French beards, etc. etc. They are very quick to judge, and they never realize that the main reason we come to church is to praise and worship God.

Dear mom23, can you say what dress is good in the sight of the Lord? Everyone will say "my dress, my dress". Nowadays our sisters are wearing to church even what the general public would hesitate to wear:
blouses open at the back, or with front openings down to the lower end; sarees that seem barely 3 metres, with the whole belly visible; transparent clothes which show the inner dress; tight dresses which show the whole body shape; short tops; slits cut all the way up, so that when walking almost everything is out; tops that can hardly be buttoned; and if one gets on a bus and raises a hand to hold the rail, nothing more need be said. Up to about 10-12 years of age it is OK, but after that, think for yourself.

I think you misunderstood the gist of my post, or perhaps I wasn't clear enough about the point I was trying to make. I was responding to the usual statement of not allowing men to sin etc. etc. by clothes. I do not agree with that statement on many levels. Now if you attend a church where women dress in the manner in which you describe (exaggeration? I do not know...), you have bigger problems than women and their choices of clothes! The attire of a wife or daughter of a household is a reflection of what the headship at home is like. In the same way, the belief system of the members of a church is a reflection of what doctrinal truth they adhere to or follow in their church. Let me explain, because you may need to ask these questions of yourself:
1. Is the biblical head of the house being a priest and prophet in his house?
2. Is he channelling and shepherding the hearts and minds of his family towards the Lord and living a life for His glory, which reflects in the choices they make, whether it is clothing or lifestyle?
3. How much biblical foundation is he giving them on what the gathering of saints at a church is all about? Is it for worship and praise, or is it for anything else?
4. What is the mother's role in the household? How good is her understanding of her scriptural role?
5. What is the father's vision for his family?
If families are not clear in their understanding of what scripture (not culture/tradition) has asked them to do, and what their roles and duties as husbands/fathers, wives/mothers, and children/daughters/sons are, then it is reflected in small ways, like in attire, and in big ways, like in the choices they make in life. So if you have a lot of members coming to a place of worship in unsuitable attire, then the church, as a body of Christ, needs to go back to the foundation and delve deeper into making members understand what God wants them to do, to make them understand biblical truths and turn away from peer and societal influences. It takes work and a lot of patience! It cannot be done by pointing fingers; they need to be shepherded gently and carefully. But if we want changes that affect generations to come, then seeking scripture for wisdom is the way to go. Instead of blaming people for their choices, finding out the "why" of their choice is more important, because only then can you address it and help resolve it scripturally.
It is known to employ hot isostatic pressure processing techniques (HIP) for upgrading the mechanical properties of alloys, for example, cast alloys, characterized by the presence of micropores and/or other structural defects. According to U.S. Pat. No. 3,758,347, a metal casting of an alloy based on an element selected from the group consisting of Ni, Co, Fe, and Ti and having internal discontinuities, such as porosity, microfissures, cracks, and the like, can be improved by applying isostatic pressure to the casting at an elevated temperature less than that temperature which will cause substantial degradation of the mechanical properties of the alloy, for a time sufficient to close the pores and effect diffusion bonding of the walls of the pores, fissures, etc. Superalloys are mentioned in particular, such as age-hardenable nickel-base superalloys designated by the trademarks Rene 80, Rene 100, etc. Rene 80 contains 0.17% C, 14% Cr, 5% Ti, 0.015% B, 3% Al, 4% W, 4% Mo, 9.5% Co, 0.03% Zr, and the balance nickel, while Rene 100 contains 0.17% C, 9.5% Cr, 4.2% Ti, 0.015% B, 5.5% Al, 3% Mo, 15% Co, 0.06% Zr, 1% V, and the balance nickel.

According to the patent, in the treatment of Rene 80 castings in an autoclave heated to 2225°F (1218°C) at a pressure of 10,000 psig, samples of the alloy were held for about 8 hours and then removed after cooling. The HIP-treated samples were compared, following heat treatment, to samples not given the HIP treatment. Both the HIP-treated and untreated samples were subjected to a solution treatment at 2225°F (1218°C) for 2 hours in a vacuum, then inert gas quenched to room temperature, followed by heating at 2000°F (1093°C) for 4 hours in vacuum and inert gas quenching to room temperature. Following the latter quench, the alloy samples were aged at 1925°F (1052°C) for 4 hours, furnace cooled to 1200°F (649°C)
and held for 1 hour prior to air cooling to room temperature. Finally the two types of samples were heated at 1550°F (843°C) for 16 hours in argon and then cooled to room temperature. The alloy samples were then tested for stress-rupture at 1600°F (871°C) under a stress of 45,000 psi. The results showed that the untreated samples (2 tests) exhibited an average life of about 41.5 hours and an average percent elongation of about 2.5%. The samples treated by HIP (6 samples) exhibited an average stress-rupture life of 141 hours and an average percent elongation of about 11.5%. As will be apparent, the HIP treatment applied to the aforementioned nickel-base alloy markedly improved the stress-rupture properties.

Elimination of casting defects by using HIP is disclosed in a paper entitled "Elimination of Casting Defects Using HIP" by G. E. Wasielewski and N. R. Lindblad; Proceedings of The Second International Conference on Superalloys--Processing; Seven Springs, Pa., September 1972. According to the aforementioned paper, stress-rupture properties and room temperature ductility of nickel-base superalloys, for example, alloys referred to by the designations IN-738, Rene 77, IN-792, etc., can be improved by means of the HIP processing technique at temperatures ranging from about 2000°F (1093°C) to 2200°F (1204°C) for 1 to 10 hours at pressures ranging from about 5,000 to 30,000 psi, a temperature of 2150°F (1177°C) to 2200°F (1204°C) being particularly preferred to provide 100% densification of the alloy part. Similar improvements are indicated with HIP processing in a paper entitled "Improved Components Through Howmet's HIP Process", by T. H. Smith and L. Dardi; published in Casting About, Spring (April) 1974, by Howmet Turbine Components Corporation.

In a patent which issued on Nov. 14, 1978 (U.S. Pat. No.
4,125,417), a HIP process is disclosed for use in the same manner and for the same purpose as stated above, except that it is applied for salvaging and restoring useful properties of used alloy parts containing such defects as grain boundary voids or dislocations induced by high temperature creep in service, in addition to such cast defects as micropores. Following HIP processing, the alloy part is then subjected to heat treatment (solution treatment and aging) to restore the mechanical properties to their original values.

The concept of employing the HIP process for upgrading the mechanical properties of magnesium and aluminum die castings is disclosed in U.S. Pat. No. 3,732,128, wherein the die casting is subjected to heat and pressure in a container at 300°C to 600°C under a pressure of 100 to 10,000 psi for 1 to 72 hours and rapidly cooled while still maintaining the applied pressure. The treated casting is thereafter aged at 100°C to 250°C for 1 to 72 hours at atmospheric pressure to improve the mechanical strength of the alloy.

Thus, it is known that the use of HIP processing, involving the simultaneous application of heat and high pressure to investment cast superalloys, results in significant improvements in high temperature mechanical properties, which have made it possible for gas turbine designers to specify premium quality castings for critical industrial gas turbine applications. The motivation for using investment castings stems from an industry-wide effort to improve substantially the efficiency and cost effectiveness of gas turbines. In recent years, this effort has been further accentuated by worldwide inflation and a growing shortage of fossil fuel supplies.
It would be desirable to improve still further the capabilities of age-hardenable alloys, e.g., cast superalloys, in light of the ever-increasing high temperature demands being specified for jet engine components, such as for turbine blades employed in the hot end of the engine.
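As a quick arithmetic check on the Rene 80 comparison quoted earlier (average stress-rupture life of 41.5 hours untreated vs. 141 hours HIP-treated; average elongation of about 2.5% vs. 11.5%), the relative improvements can be computed directly (the figures are taken from the patent discussion above and used purely for illustration):

```python
# Relative improvement from HIP treatment of Rene 80 (figures quoted from
# the U.S. Pat. No. 3,758,347 discussion above; illustrative check only).
untreated_life_h, treated_life_h = 41.5, 141.0      # avg stress-rupture life
untreated_elong_pct, treated_elong_pct = 2.5, 11.5  # avg percent elongation

print(f"life improvement:      {treated_life_h / untreated_life_h:.1f}x")   # → 3.4x
print(f"ductility improvement: {treated_elong_pct / untreated_elong_pct:.1f}x")  # → 4.6x
```

That is, HIP roughly tripled stress-rupture life and more than quadrupled ductility in the reported tests.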
Relations between effectiveness of a diagnostic test, prevalence of the disease, and percentages of uninterpretable results. An example in the diagnosis of jaundice. The relations between effectiveness, the percentages of uninterpretable results of a test, and the prevalence of the disease are studied in the example of the diagnosis of jaundice. Ten hepatologists and ten hepatobiliary surgeons were interviewed, and nineteen articles were reviewed. Accuracies of ultrasonography, endoscopic retrograde cholangiography, and transhepatic cholangiography, as well as of three strategies combining these tests, were ranked by hepatologists in an order that differed from chance, and by surgeons in an order that did not differ from chance. Analyses of published data, taking into account the percentages of uninterpretable results, showed that for a high prevalence of extrahepatic cholestasis, as in jaundiced patients seen by surgeons, there is no significant difference between the respective effectiveness of each test or strategy. We concluded that effectiveness must take into account the percentages of uninterpretable results and must be expressed as a function of prevalence.
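The abstract's conclusion, that effectiveness must account for uninterpretable results and be expressed as a function of prevalence, can be illustrated with a toy model. The function below is an assumption of this sketch (not the method actually used in the study): it counts uninterpretable studies as failures and weights accuracy by prevalence.

```python
def expected_accuracy(sensitivity, specificity, prevalence, p_uninterpretable):
    """Probability that a test yields a correct, usable result, as a function
    of disease prevalence, counting uninterpretable studies as failures.
    (Illustrative model only; name and formula are this sketch's assumptions.)"""
    interpretable = 1.0 - p_uninterpretable
    accuracy = sensitivity * prevalence + specificity * (1.0 - prevalence)
    return interpretable * accuracy

# A slightly less accurate test that is almost always interpretable can beat
# a more accurate test that often yields uninterpretable results:
often_unreadable = expected_accuracy(0.95, 0.90, 0.30, 0.15)
usually_readable = expected_accuracy(0.90, 0.85, 0.30, 0.02)
print(usually_readable > often_unreadable)  # → True
```

Under this model, rankings of tests can flip as prevalence or the uninterpretable fraction changes, which is the abstract's point about comparing strategies across patient populations.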