Lifting the veil on mother love
A new film breaks one of the last taboos: old women and sex. By Stephanie Bunbury.
Anne Reid says she was employed to play May, a pensionable widow who has an affair with her son's young builder, because the filmmakers wanted someone nobody would look at twice in the street.
"I don't know what I think about that," the actor says sharply. At 69, she thinks she is still in pretty good shape.
Reid stars in Roger Michell's film The Mother, which is scripted by the novelist Hanif Kureishi and is remarkable simply because it dares to break one of the cinema's few remaining taboos - the idea that a post-menopausal woman might be interested in sex.
"I think people can't bear to think their parents have sex with each other, let alone other people," says Michell. "The idea that old people, particularly old women, should be annexed from sexual experience is pretty universal."
Which is nonsense, as Anne Reid says. "Women don't lose their desire for sex at 55. I fancy young men all the time. They don't fancy me, unfortunately."
Reid is familiar, if at all, only from television comedies such as Dinnerladies. Her forte, as she says, is making people chuckle. She never expected to get a lead role like this, although in truth May is scarcely more glamorous than one of Victoria Wood's canteen workers.
When we meet May, she is fitting her life around the demands of a cantankerous husband (Peter Vaughan) on a visit to London from their home in the provinces, the pair of them clearly in the way of their ambitious, messed-up adult children. Her greatest ambition, constantly thwarted, is simply not to upset anyone.
When May's husband dies, her grief is tempered by a kind of unacknowledged relief. She is free, but to do what? When she asks the amiable builder, played by Daniel Craig, to come upstairs with her, she momentarily sloughs off the daily dutifulness of a whole life.
"She hasn't lived," says Reid. "She doesn't know what good sex is and if she doesn't grab the opportunity now, she never will. She'll die without knowing."
May is typical, Reid says, of a whole generation. "I was on the tail end of it, but you married a virgin and that was it. A lot of women of my age did their duty, had a couple of kids and then thought, 'I don't like this very much, but there's nothing I can do about it'."
Finally, however, May can do something about it. In the airy transience of the half-built conservatory, just two people shut in a room together, it seems possible to say what she wants. It doesn't end beautifully for her, Michell says, "but I think it ends better than if she had just gone back to that house and stared at her husband's slippers."
"There is a lot of me in this," says Reid. "I know what it's like to give up your own life; I did it for 15 years when my mother was ill and I had a small child at home. My husband developed cancer. There was no room for my career and I thought I'd had the last of it . . . But when my husband died, I thought, 'What will become of me?' "
Gradually, she went back to the stage; incredibly, her life began again.
[Meniscal injuries in the aged patient].
The author analyses the clinical and radiographic features of 46 patients over 60 years of age with meniscal injuries who underwent arthroscopic treatment. The relations between the clinical and arthroscopic findings and the surgical results were evaluated. Night pain, joint swelling, and knee flexion deformity were the most frequent clinical findings. Female sex, injuries of the lateral meniscus, and narrowing of the joint space on X-ray were signs of bad prognosis. Osteonecrosis of the medial femoral condyle was the main cause of bad results in patients with medial meniscus injuries.
997 So.2d 382 (2008)
Mark Anthony POOLE, Appellant,
v.
STATE of Florida, Appellee.
No. SC05-1770.
Supreme Court of Florida.
December 11, 2008.
*387 James Marion Moorman, Public Defender, and Paul C. Helm, Assistant Public Defender, Tenth Judicial Circuit, Bartow, FL, for Appellant.
Bill McCollum, Attorney General, Tallahassee, FL, and Scott A. Browne, Assistant Attorney General, Tampa, FL, for Appellee.
PER CURIAM.
Mark Anthony Poole was indicted, tried, and convicted of attempted first-degree murder, armed burglary, armed robbery, sexual battery, and first-degree murder. Poole appeals his conviction and his sentence of death for the first-degree murder. We have jurisdiction. See art. V, § 3(b)(1), Fla. Const. For the reasons stated below, we affirm Poole's convictions but vacate his sentence of death and remand for a new penalty phase proceeding.
FACTS AND PROCEDURAL HISTORY
Mark Anthony Poole was convicted of the first-degree murder of Noah Scott, attempted first-degree murder of Loretta White, armed burglary, sexual battery of Loretta White, and armed robbery. Poole was convicted based on the following facts presented at trial. On the evening of October 12, 2001, after playing some video games in the bedroom of their mobile home, Noah Scott and Loretta White went to bed sometime between 11:30 p.m. and 12 a.m. Later during the night, White woke up with a pillow over her face and Poole sitting on top of her. Poole began to rape and sexually assault her as she begged Poole not to hurt her because she was pregnant. As White struggled and resisted, Poole repeatedly struck her with a tire iron. She put her hand up to protect her head, and one of her fingers and part of another finger were severed by the tire iron. While repeatedly striking White, Poole asked her where the money was. During this attack on White, Scott attempted to stop Poole, but was also repeatedly struck with the tire iron. As Scott struggled to defend White, Poole continued to strike Scott in the head until Scott died of blunt force head trauma. At some point after the attack, Poole left the bedroom and White was able to get off the bed and put on clothes but she passed out before leaving the bedroom. Poole came back in the bedroom and touched her vaginal area and said "thank you." White was in and out of consciousness for the rest of the night. She was next aware of the time around 8 a.m. and 8:30 a.m. when her alarm went off.
When her alarm went off, White retrieved her cell phone and called 911. Shortly thereafter, police officers were dispatched to the home. They found Scott unconscious in the bedroom and White severely injured in the hallway by the bedroom. White suffered a concussion and multiple face and head wounds and was missing part of her fingers. Scott was pronounced dead at the scene. Evidence at the crime scene and in the surrounding area linked Poole to the crimes. Several witnesses told police officers that they saw Poole or a man matching Poole's description near the victims' trailer on the night of the crimes. Stanley Carter stated that when he went to the trailer park around 11:30 that night, he noticed a black male walking towards the victims' trailer. Carter's observations were consistent with that of Dawn Brisendine, who knew Poole and saw him walking towards the victims' *388 trailer around 11:30 p.m. Pamela Johnson, Poole's live-in girlfriend, testified that on that evening, Poole left his house sometime in the evening and did not return until 4:50 a.m.
Poole was also identified as the person selling video game systems owned by Scott and stolen during the crime.[1] Ventura Rico, who lived in the same trailer park as the victims, testified that on that night, while he was home with his cousin's girlfriend, Melissa Nixon, a black male came to his trailer and offered to sell him some video game systems. Rico agreed to buy them for $50, at which point the black male handed him a plastic trash bag. During this exchange, Nixon got a good look at the man and later identified Poole when the police showed her several photographs. Nixon testified that the next morning, when her son was going through the trash bag, he noticed that one of the systems had blood on it.
Pamela Johnson also testified that on the same morning, she found a game controller at the doorstep of Poole's house, she handed it to Poole, and Poole put it in his nightstand. She indicated that she had never seen that game controller before that morning and did not know what it would be used for because neither she nor Poole owned any video game systems. During the search of Poole's residence, the police retrieved this controller. In addition, the police retrieved a blue Tommy Hilfiger polo shirt and a pair of Poole's Van shoes, shoes Poole said he had been wearing on the night of the crimes. A DNA analysis confirmed that the blood found on the Sega Genesis box, Super Nintendo, Sega Dreamcast box and controller matched the DNA profile of Scott. Also, a stain found on the left sleeve of Poole's blue polo shirt matched White's blood type. The testing of a vaginal swab also confirmed that the semen in White was that of Poole. A footwear examination revealed that one of the two footwear impressions found on a notebook in the victims' trailer matched Poole's left Van shoe. The tire iron used in the crimes was found underneath a motor home located near the victims' trailer. A DNA analysis determined that the blood found on this tire iron matched Scott's DNA profile.
Based on this evidence, the jury returned a verdict finding Poole guilty on all charges, including first-degree murder. Following the penalty phase, the jury recommended death by a vote of twelve to zero. The trial court followed the jury's recommendation and sentenced Poole to death. The trial court found two statutory aggravating circumstances: (1) the defendant was previously convicted of another capital felony or of a felony involving the use or threat of violence to the person, and (2) the murder was especially heinous, atrocious, or cruel. The court also found three statutory mitigators and numerous nonstatutory mitigators. The statutory mitigators were: (1) the crime for which Poole was to be sentenced was committed while he was under the influence of extreme mental or emotional disturbance (moderate weight); (2) Poole's capacity to conform his conduct to the requirements of law was substantially impaired (moderate weight); and (3) Poole had no significant history of prior criminal activity (little weight). The nonstatutory mitigators were: (1) Poole is of borderline intelligence (some weight); (2) Poole received a head injury, which created dementia (little weight); (3) Poole's age at the time of the crime linked with mental deficiency and *389 lack of serious criminal history (moderate weight); (4) Poole dropped out of school due to his low intelligence and learning disabilities (little weight); (5) Poole lost Mr. Bryant, his "best friend, father figure, employer," and that had an emotional effect on Poole and led to his drug use (some weight); (6) Poole sought help for his drug problem in the past (little weight); (7) Poole had an alcohol abuse problem at the time of the crime (little weight); (8) Poole had a drug abuse problem at the time of the crime (little weight); (9) Poole does not have antisocial personality disorder nor is he psychopathic (some weight); (10) Poole has and can continue a relationship with his son (minimum weight); (11) Poole has a strong work ethic (little weight); (12) Poole has a close relationship with his family (moderate weight); (13) Poole is a religious person (little weight); and (14) the murder and rape were impulsive excessive acts, not premeditated acts (little weight). The trial court determined that these mitigating factors did not outweigh the aggravating circumstances and, as a result, the trial court sentenced Poole to death on the count of first-degree murder. The trial court also sentenced Poole to consecutive life sentences for the attempted first-degree murder of Loretta White, armed burglary, sexual battery of Loretta White, and armed robbery.
In this appeal, Poole raises four issues: (1) whether the trial court abused its discretion in denying Poole's motion for mistrial when the prosecutor repeatedly commented during closing argument on Poole's failure to testify at trial and his silence after his arrest; (2) whether the prosecutor violated Poole's right to a fair penalty phase proceeding by cross-examining defense witnesses about unproven prior arrests, the unproven content of a tattoo, and lack of remorse; (3) whether the prosecutor violated Poole's right to a fair penalty phase proceeding by misleading the jurors about their responsibilities in recommending a sentence; and (4) whether Florida's death penalty statute violates the Sixth Amendment right to trial by jury.
ANALYSIS
Prosecutorial Comments during Guilt Phase
Poole first asserts the trial court erred in denying his motion for mistrial when the prosecutor repeatedly commented in closing argument during the guilt phase on Poole's failure to testify at trial and his silence after his arrest. In making this argument, Poole cites several statements made by the prosecutor before defense counsel objected and moved for a mistrial. We find that the first two comments were not contemporaneously objected to and, as a result, are not properly preserved for appellate review. We also find that although the last comment was an erroneous prosecutorial comment on silence, the trial court did not abuse its discretion in denying the motion for mistrial.
The prosecutor began by stating that the defense's argument came from "Fantasy Land." A few sentences later, the prosecutor stated, "Well, there is no evidence in this case that at any time, either in this trial or anywhere else, Mr. Poole ever acknowledged that he did anything." A few paragraphs later, the prosecutor continued:
Mr. Poole talked to the police. And Mr. Poole - so that there's this other guy that was involved. Well, there's no evidence. Keep in mind what's evidence and what's argument. Mr. Dimmig is arguing all these things, but there is absolutely no evidence that Mr. Poole ever said, hey, somebody else was there before me and these people's heads were bashed in. There is no evidence of that.
*390 And there's no evidence that Mr. Poole ever said, well, I went in there and raped her and left her and then somebody else came in and beat their heads in. There's no evidence of that either. That's argument. But when you look at what the testimony is and what the physical evidence is and what the photographs are, there is no evidence to support that theory.
....
And if Mr. Poole wants to tell the state and Detective Grice that somebody helped him commit this crime, then let him come forward because
At this point, defense counsel objected and moved for a mistrial, arguing that the prosecutor's comments violated Poole's right to remain silent. The prosecutor argued that defense counsel's argument that Poole admitted to three of the crimes opened the door. After the prosecutor stated that he would not take the argument any further, the trial judge denied the motion for mistrial.
The first two comments that Poole alleges were improper were not followed by an objection.[2] We have consistently held that "the failure to raise a contemporaneous objection when improper closing argument comments are made waives any claim concerning such comments for appellate review." Card v. State, 803 So.2d 613, 622 (Fla.2001). However, we have carved out an exception to the contemporaneous objection rule when the unobjected-to comments rise to the level of fundamental error, that is, an error that "reaches down into the validity of the trial itself to the extent that a verdict of guilty or jury recommendation of death could not have been obtained without the assistance of the alleged error." Id. at 622. Neither of the two comments rises to the level of fundamental error because both comments were invited responses to the defense's closing argument. During his closing argument, defense counsel stated that Poole acknowledged that he committed the crimes of sexual battery, robbery, and burglary but denied that he was the person who inflicted the injuries on White and Scott. In response, the prosecutor was arguing that there was no evidence in the case to support the argument that Poole acknowledged that he committed those crimes or to support the argument that someone else inflicted the injuries on the victims. Because the prosecutor's comments were invited responses, the comments cannot be deemed improper. See Walls v. State, 926 So.2d 1156, 1166 (Fla.2006); see also Dufour v. State, 495 So.2d 154, 160-61 (Fla. 1986). Therefore, these comments do not warrant reversal.
The final comment - "And if Mr. Poole wants to tell the state and Detective Grice that somebody helped him commit this crime, then let him come forward because...." - was objected to. Moreover, we find that this comment was an improper comment on Poole's failure to testify. Under article I, section 9 of the Florida Constitution, a defendant has the constitutional right to decline to testify against himself in a criminal proceeding. Therefore, "any comment on, or which is fairly susceptible of being interpreted as referring to, a defendant's failure to testify is error and is strongly discouraged." Rodriguez v. State, 753 So.2d 29, 37 (Fla. 2000) (quoting State v. Marshall, 476 So.2d 150, 153 (Fla.1985)). In the instant case, *391 the prosecutor's comment impermissibly suggested a burden on Poole to prove his innocence by stating that he had to come forward and testify. Although this was an erroneous comment on Poole's silence, we find that the trial court did not abuse its discretion in denying the motion for mistrial because in light of the evidence linking Poole to the crimes, the error was not "so prejudicial as to vitiate the entire trial." Dessaure v. State, 891 So.2d 455, 464-65 (Fla.2004).[3] The evidence presented demonstrates that Poole was seen heading towards the victims' trailer on the night of the crime; Poole sold video games like those taken from the victims' residence immediately after the attack; the semen found in White matched Poole; Poole's shoeprint matched a shoeprint left inside the victims' trailer; and a stain found on Poole's shirt matched White's DNA profile.
Accordingly, relief is not warranted on this claim.
Prosecutor's Cross-Examination of Defense Witnesses
Poole next contends that the prosecutor violated his right to a fair penalty phase by cross-examining defense witnesses about unproven prior arrests, the unproven content of a tattoo, and lack of remorse. The questions regarding Poole's lack of remorse were not followed by a contemporaneous objection, and therefore the claim is unpreserved. However, the trial court erred in overruling defense counsel's objection after the prosecutor asked questions regarding unproven prior arrests and the unproven content of a tattoo. These errors were not harmless and require a new penalty phase proceeding.
Poole first argues that the trial court erred in permitting the prosecutor to use references to Poole's prior criminal record to impeach a defense mitigation witness, Joe Poole, Jr., who was Poole's older brother. During the prosecutor's cross-examination of Joe Poole, Jr., the following questions were asked:
Q: If you're that close to your brother, do you know if this was the first time he ever got arrested when he got arrested for this crime?
A: No.
Q: You don't know?
A: No, it's not his first time getting arrested.
Q: He got arrested in Georgia, South Carolina, Texas.
Defense counsel objected based on improper impeachment and argued that the line of questioning was improper because the defense was not asking for lack of prior criminal history and defense counsel specifically indicated this in his motion in limine before the penalty phase. The prosecutor argued that the line of questioning was appropriate not only because the question went to the credibility issue of how well Poole's older brother knew Poole, but also because the defense had put on evidence of Poole's reputation. The trial judge overruled the objection but directed the prosecutor not to go into any further details.
*392 The prosecutor's line of questioning regarding Poole's prior criminal history was improper, and the trial court erred in overruling the defense objection. At the beginning of the penalty phase, the defense filed a motion to exclude evidence of Poole's prior criminal activity, which also indicated that the defense would not ask for the mitigator of no significant prior criminal history. The State agreed that the motion should be granted and also agreed that they would not bring up Poole's criminal history unless the defense tried to put on such evidence. The trial court granted the motion by stipulation. Defense counsel followed through with the motion and did not put on any evidence of Poole's prior criminal history. However, the prosecutor still improperly presented inadmissible evidence of Poole's prior criminal activity under the guise of witness impeachment. Under section 921.141, Florida Statutes (2007), the State is only permitted to present evidence of those aggravators listed under subsection 5, which does not include a defendant's convictions for nonviolent felonies. See Hitchcock v. State, 673 So.2d 859, 861 (Fla.1996) ("[T]he State is not permitted to present evidence of a defendant's criminal history, which constitutes inadmissible nonstatutory aggravation, under the pretense that it is being admitted for some other purpose.").
The trial court's error of overruling defense counsel's objection to this line of questioning was not harmless. See Rodriguez v. State, 753 So.2d 29 (Fla.2000) (finding that the proper standard of review for an overruled objection is a harmless error standard). First, contrary to the State's argument, while Joe Poole, Jr. did not answer the last question in this line of questioning, he did state that Poole had been arrested before; then the prosecutor explicitly listed the states where Poole had been arrested for other crimes. By this point, the damage had been done and the jury knew that Poole had a criminal history. Although information about Poole having committed burglaries came out through the testimony of the defense mental health expert, Dr. William Kremper, we cannot say whether, in the absence of the prosecutor's questions to Joe Poole, Jr., the jury would have still heard about this history through Dr. Kremper's testimony. Dr. Kremper's testimony followed Joe Poole, Jr.'s testimony, and we cannot now determine whether defense counsel would have presented such evidence if the prosecutor had not elicited testimony from Joe Poole, Jr. about the defendant's criminal history.
This error cannot be deemed harmless because the prosecutor's questions to Joe Poole, Jr. suggested that Poole was a career felon in numerous states and created a risk that the jury would give undue weight to such information in recommending a death sentence. We have consistently held that where the State presents evidence that constitutes inadmissible nonstatutory aggravation, the error is not harmless. See, e.g., Perry v. State, 801 So.2d 78, 89 (Fla.2001) (holding that the introduction of evidence that constituted impermissible nonstatutory aggravation was not harmless because the evidence was highly inflammatory and could have unduly influenced the penalty phase jury); see also Kormondy v. State, 703 So.2d 454, 463 (Fla.1997) (concluding that the admission of impermissible evidence of nonstatutory aggravation was not harmless error and stating that "[t]he jury is charged with formulating a recommendation as to whether [the defendant] should live or die [and] turning a blind eye to the flagrant use of nonstatutory aggravation jeopardizes the very constitutionality of our death penalty statute"); Geralds v. State, 601 So.2d 1157, 1162-63 (Fla.1992); Maggard v. State, 399 So.2d 973 (Fla.1981).
*393 The prosecutor also improperly asked Joe Poole, Jr. questions regarding the content of a tattoo on Poole's body. During cross-examination, the prosecutor asked Joe Poole, Jr. if Poole had tattoos on his body. Defense counsel objected and moved for a mistrial, arguing that the line of questioning was irrelevant. The prosecutor argued that the defense opened the door by making Poole look like an "angel" and that the "Thug Life" tattoo would show that there is another side to him. The trial court overruled the objection and found that the questions went to Joe Poole, Jr.'s credibility because he stated that he knew Poole better than anybody. The prosecutor then continued to ask Joe Poole, Jr. questions about the tattoo:
Q: Do you know how many tattoos Mark's got?
A: No, sir, I do not. I know of one.
Q: What's one say?
A: It should [say] "MP." I have "JLP." That's the first one we put on and wished we never did that because we got in much, much trouble behind that.
Q: From your parents?
A: From our parents.
Q: Okay. Well, doesn't he have a tattoo that says Thug Life right across his abdomen?
A: I haven't looked at his stomach, sir.
Q: So although you know him as well as you told this jury, you didn't know he had that?
A: No. I haven't examined his body.
Q: Okay. Well, but you said you just saw him a few years ago.
A: Right. Well, he had clothes on. He wasn't naked.
Defense counsel objected again and moved for a mistrial, arguing that the prosecutor was improperly testifying to factual issues after Joe Poole, Jr. said he did not know about the "Thug Life" tattoo. The trial court overruled the objection again and denied the motion.
Similar to the questions regarding Poole's prior criminal activity, these questions regarding the "Thug Life" tattoo were improper because they constituted inadmissible nonstatutory aggravation. While the State argues that the questions went to Joe Poole, Jr.'s credibility, we reiterate the rule that the State cannot introduce inadmissible nonstatutory aggravation under the guise of impeachment. See Geralds, 601 So.2d at 1162. This error was not harmless because the information regarding the tattoo prejudiced Poole in the eyes of the jury and could have unduly influenced the jury in recommending the death penalty.
Poole also asserts that the prosecutor's questions to Poole's mother, older sister, and nephew about Poole's lack of remorse were improper because lack of remorse is an inadmissible nonstatutory aggravator. This claim is unpreserved for appellate review because there was no objection made in the trial court. See Card, 803 So.2d at 622. Defense counsel failed to object to any of the prosecutor's questions regarding Poole's lack of remorse during the testimony of the witnesses. It was during a short recess, after all three witnesses testified, that defense counsel raised the issue about the lack of remorse questions. In fact, defense counsel did not object, but simply raised a concern that the prosecutor had impermissibly asked questions about Poole's lack of remorse when the defense had not raised remorse as a mitigator. The prosecutor responded that defense counsel failed to object at any point during these questions, but agreed to stop asking such questions.
Because defense counsel failed to object to the prosecutor's questions on lack of remorse, this claim can only be raised on *394 appeal if the alleged error is fundamental. Id.; see also McDonald v. State, 743 So.2d 501, 505 (Fla.1999). Having reviewed the unobjected-to questions asked by the prosecutor, we conclude that none of the questions were so egregious as to reach "down into the validity of the trial itself to the extent that a verdict of guilty or jury recommendation of death could not have been obtained without the assistance of the alleged error." Urbin v. State, 714 So.2d 411, 418 n. 8 (Fla.1998) (quoting Kilgore v. State, 688 So.2d 895, 898 (Fla.1996)). After defense counsel raised the concern regarding the prosecutor's questions, the prosecutor stated that he would not continue to ask such questions. A review of the penalty phase proceeding demonstrates that the prosecutor did not in fact continue to ask any defense witness whether Poole was remorseful for his actions. The prosecutor also did not argue lack of remorse in his closing arguments. However, we continue to caution prosecutors that this type of questioning should not take place when the defendant has not made remorse an issue in the penalty phase.
While the questions on Poole's lack of remorse do not individually amount to fundamental error, we find that the cumulative effect of this error and the error of presenting inadmissible nonstatutory aggravation of Poole's criminal history and the content of his tattoo deprived Poole of a fair penalty phase. The combination of these errors had the effect of unfairly prejudicing Poole in the eyes of the jury because these errors created a risk that the jury would give undue weight to this information in recommending the death penalty.
Accordingly, we vacate Poole's sentence of death and remand for a new penalty phase.
Prosecutorial Comments During Penalty Phase
Poole contends that he was denied a fundamentally fair penalty phase because the prosecutor made several improper comments during the penalty phase closing arguments. While Poole argues that the prosecutor made several improper comments, defense counsel only objected to the following comment that the prosecutor made near the end of his closing argument:
I don't think when you look at it from the perspective that this decision is any more difficult than the other. I'm only thinking that when you go back in that room and make that vote and you head for your car this afternoon, you're not going to find yourself feeling the same way. You're just going to find that you did your job just like you promised to do when you raised your right hand and swore to that oath.
Defense counsel objected and moved for a mistrial, arguing that the prosecutor's comment suggested that it was the jury's duty to recommend death. The trial court denied the motion.
Poole now contends that this comment improperly suggested that the jurors promised and took an oath to recommend the death penalty. On the other hand, the State argues that the prosecutor was asking the jurors to weigh the evidence in aggravation and mitigation. We find that the trial court did not abuse its discretion in denying the defense's motion for mistrial because the prosecutor was not suggesting that it was the jurors' sworn duty to recommend death. Throughout the closing argument, the prosecutor argued that the aggravators had been proven beyond a reasonable doubt and also attempted to rebut some of the mitigators. At the conclusion of making these arguments, the prosecutor, as an advocate for the State, *395 was attempting to persuade the jury that based on the aggravators and mitigators, they should recommend a death sentence. The prosecutor made the last comment to inform the jury that they should do their jobs as they promised to do when they took the oath, which is to weigh the mitigators and aggravators. This type of comment is not improper.
Defense counsel failed to contemporaneously object to the other comments that Poole now contends were improper. As a result, these claims were not properly preserved for appellate review. Additionally, because none of these comments rise to the level of fundamental error, there are no grounds for reversal. See Merck v. State, 975 So.2d 1054, 1064 (Fla.2007), cert. denied, ___ U.S. ___, 129 S.Ct. 73, 172 L.Ed.2d 66 (2008).
As to the first alleged improper comment, Poole argues that the prosecutor misled the jury by misstating the law concerning the weighing of aggravating and mitigating circumstances.[4] The first part of the comment was not improper because in discussing the heinous, atrocious or cruel aggravator, the prosecutor was attempting to argue that this aggravator should be given significant weight, and that it alone outweighed the defense's case in mitigation due to the overwhelming evidence that proved the aggravator. However, the second part of the comment was improper because the prosecutor was suggesting that unless the mitigating circumstances outweighed the aggravating circumstances, the jury had to vote for a death sentence. We have repeatedly held that a jury is not required to recommend a sentence of death when the aggravators outweigh the mitigators. See Brooks v. State, 762 So.2d 879 (Fla.2000); Henyard v. State, 689 So.2d 239 (Fla.1996). While this comment was improper, the comment still does not amount to fundamental error. The prosecutor did not repeat this statement during the rest of his closing arguments. Moreover, at the end of closing arguments, the trial judge read the standard jury instructions, which included an accurate statement of the law.
Poole next alleges that the prosecutor belittled evidence in mitigation and commented on matters not in evidence.[5] This comment was not improper because the prosecutor was attempting to rebut mitigating evidence argued by the defense. During the penalty phase, defense counsel admitted into evidence a photograph of Poole when he was a child and a photograph of the church Poole's family attended. Defense counsel used these photographs *396 and asked Poole's family members questions about attending church to demonstrate in mitigation that Poole was a good, loving person who came from a good family. The prosecutor responded during his closing argument that although Poole was a child who went to church at one time, he was not a child anymore, but a thirty-nine-year-old man who committed a crime.
Lastly, Poole alleges that the prosecutor misstated the law concerning brain damage as a mitigating circumstance: "You -- you are free to reject it if you want and say I don't think brain damage mitigates against the death penalty." When we consider this comment in the context in which it was made, we find that the comment was not improper. The prosecutor acknowledged that it was uncontroverted that Poole had brain damage, but was arguing that it was not enough to recommend a life sentence. In fact, the prosecutor even emphasized that the jury should accept the brain damage mitigator as proven if there was evidence to demonstrate that Poole did, in fact, have brain damage.
Because the prosecutor's comments were either not improper or did not constitute fundamental error, we deny relief on this claim.
Constitutionality of Section 921.141
Poole asserts that Florida's death penalty statute violates the Sixth Amendment because it does not require express unanimous findings of aggravating circumstances by the jury. Poole argues that the United States Supreme Court's decision in Ring v. Arizona, 536 U.S. 584, 122 S.Ct. 2428, 153 L.Ed.2d 556 (2002), requires that all aggravators necessary for the imposition of the death penalty must be found by the jury. Poole also requests that we reconsider our holding in State v. Steele, 921 So.2d 538 (Fla.2005), in which we held that the finding of at least one aggravator is implicit in the jury's recommendation of death.
First, the Court's decision in Ring does not require a finding that Florida's capital sentencing scheme is unconstitutional. In Steele, we not only concluded, consistent with prior case law, that section 921.141, Florida Statutes (2007), does not require jury findings on aggravating circumstances, but we specifically held that it is a departure from the essential requirements of law to use a special verdict form detailing the jury's determination on the aggravating circumstances. 921 So.2d at 544-48. Moreover, since the Ring decision, we have rejected similar arguments that Florida's death penalty statute is unconstitutional based on Ring. See Marshall v. Crosby, 911 So.2d 1129 (Fla.2005); Bottoson v. Moore, 833 So.2d 693 (Fla.2002); King v. Moore, 831 So.2d 143 (Fla.2002).
Moreover, the jury unanimously found that Poole committed the crimes of attempted first-degree murder of White, sexual battery of White, armed burglary, and armed robbery, during the course of the first-degree murder of Scott. We have repeatedly found that the prior violent felony conviction aggravator takes a case outside the scope of Ring. Guardado v. State, 965 So.2d 108 (Fla.2007), cert. denied, ___ U.S. ___, 128 S.Ct. 1250, 170 L.Ed.2d 90 (2008); Smith v. State, 866 So.2d 51, 68 (Fla.2004); Johnston v. State, 863 So.2d 271, 286 (Fla.2003) (finding that the existence of a prior violent felony conviction satisfied the constitutional mandates because the conviction was heard by a jury and determined beyond a reasonable doubt).
Accordingly, relief is not warranted on this claim.
*397 Sufficiency of the Evidence
Poole has not argued the sufficiency of the evidence supporting the conviction. Nonetheless, it is our duty to independently review the entire record. See Fla. R.App. P. 9.142(a)(6); see also Jones v. State, 963 So.2d 180 (Fla.2007). It is abundantly clear from the physical evidence and eyewitness testimony that Poole committed the murder of Scott, as well as the attempted murder of White, sexual battery of White, robbery, and burglary. Therefore, we uphold the first-degree murder conviction.
CONCLUSION
For the reasons expressed above, we affirm Poole's convictions. However, based on the prosecutor's improper cross-examination of defense witnesses, we vacate his sentence of death and remand the case to the trial court to conduct a new penalty phase proceeding consistent with this opinion.
It is so ordered.
QUINCE, C.J., and WELLS and LEWIS, JJ., concur.
PARIENTE, J., concurs in result only with an opinion in which ANSTEAD, J., concurs.
CANADY and POLSTON, JJ., did not participate.
PARIENTE, J., concurring in result only.
I concur with the majority's opinion vacating the death sentence as a result of reversible error based on the cumulative effect of the State presenting inadmissible nonstatutory aggravation of Poole's criminal history and the content of his tattoo, as well as the questions on lack of remorse. The majority's opinion is that the combination of these errors deprived Poole of a fair penalty phase. See majority op. at 393-94.
I concur in result only because of my disagreement with the majority's use of the mistrial standard rather than the harmless error standard as to the objected-to portions of the closing argument in the guilt phase on Poole's failure to testify at trial and his silence after arrest. For the reasons more extensively set forth in my special concurrence in Salazar v. State, 991 So.2d 364 (Fla.2008), I do not agree with the majority's use of the mistrial standard in this case because counsel both objected and moved for a mistrial. See majority op. at 390-91. Instead, the majority should employ the harmless error standard because Poole preserved the issue for review by objection and motion for mistrial. This standard is especially appropriate here where there is a clearly impermissible comment on Poole's constitutional right to remain silent.
There are at least two reasons for using the harmless error standard. First, it is consistent with our precedent in Parker v. State, 873 So.2d 270 (Fla.2004). Second, the standard applied by the majority in this case is virtually identical to that for fundamental error so essentially we are applying the same standard to objected-to error in combination with a motion for mistrial and unobjected-to error. See majority op. at 391. "Fundamental error is error that reaches `down into the validity of the trial itself to the extent that a verdict of guilty could not have been obtained without the assistance of the alleged error.'" Jones v. State, 949 So.2d 1021, 1037 (Fla.2006) (quoting Kilgore v. State, 688 So.2d 895, 898 (Fla.1996)). Similarly, "[a] motion for a mistrial should only be granted when an error is so prejudicial as to vitiate the entire trial." England v. State, 940 So.2d 389, 401-02 (Fla.2006). We should not equate objected-to error *398 with unobjected-to error, particularly where the prosecutor makes a clearly impermissible comment on the right to remain silent and the defense objects.
Consider what happened in this case. In closing argument, after stating that the defense argument came from "Fantasy Land," the prosecutor made two comments that were not objected to and could be characterized as "invited error" as explained by the majority. But then the prosecutor remarked:
And if Mr. Poole wants to tell the state and Detective Grice that somebody helped him commit this crime, then let him come forward because --
It was at this point that defense counsel objected and moved for a mistrial, arguing that the prosecutor's comments violated Poole's right to remain silent.
The prosecutor attempted to justify his argument by claiming that defense counsel had "opened the door," but then told the trial judge "he would not take the argument any further." The trial court never sustained the objection and only denied the motion for a mistrial. Thus, the jury heard the objection but never learned that the comment was impermissible.[6]
I would therefore review the error based on the harmless error standard. The harmless error test properly "places the burden on the state, as the beneficiary of the error, to prove beyond a reasonable doubt that the error complained of did not contribute to the verdict or, alternatively stated, that there is no reasonable possibility that the error contributed to the conviction." State v. DiGuilio, 491 So.2d 1129, 1138 (Fla.1986). As this Court explained in DiGuilio, comments on the right to remain silent are high-risk errors but the errors are not per se reversible. Id. at 1135.
In the end, however, I concur with the majority's result in affirming the conviction, because I conclude that the prosecutor's improper comment was harmless beyond a reasonable doubt. First, the impermissible remark was not repeated or emphasized. See, e.g., Fitzpatrick v. State, 900 So.2d 495, 517 (Fla.2005) (determining that witness's improper comment on defendant's silence was harmless beyond a reasonable doubt because there was overwhelming permissible evidence of the defendant's guilt, the comment was neither repeated nor emphasized, and the trial judge expressly indicated the lack of importance he believed the jury attributed to the remark). Second, the improper comment was later followed by the trial court's instruction to the jury that the burden rested with the State and that "[t]he defendant is not required to present evidence or prove anything." The court also instructed the jury that Poole had the right not to testify and that the jury must not view Poole's failure to testify as an admission of guilt or be influenced by his decision in any way. See, e.g., Hitchcock v. State, 755 So.2d 638, 643 (Fla.2000) (concluding that prosecutor's erroneous comments during closing arguments about what the jury could consider in mitigation were harmless where the comments were followed by a correct explanation of mitigation and where the judge gave the mitigating circumstance at issue "some weight" in sentencing).
*399 Third, although the amount of evidence is not the test of harmless error, in this case there was overwhelming permissible evidence of Poole's guilt. Cf. DiGuilio, 491 So.2d at 1138 (finding improper comment on post-arrest silence to be harmful error in part because "the permissible evidence was not clearly conclusive"). The evidence demonstrated that Poole was seen by a neighbor walking towards the victims' trailer on the night of the crimes; Poole sold video games like those taken from the victims' residence immediately after the attack; and Poole's live-in girlfriend found a game controller on the porch of Poole's residence the following morning. Moreover, the State presented physical evidence linking Poole to the crimes, including semen found in victim Loretta White matching Poole's genetic profile, a shoeprint on a notebook in the victims' trailer matching Poole's shoeprint, a stain found on the sleeve of Poole's shirt matching White's DNA profile, and blood found on three game systems and a game controller, which were possessed by Poole after the crimes, matching victim Noah Scott's DNA profile.
Nevertheless, I emphasize that what we made clear over two decades ago about the harmless error test bears repeating:
The test is not a sufficiency-of-the-evidence, a correct result, a not clearly wrong, a substantial evidence, a more probable than not, a clear and convincing, or even an overwhelming evidence test. Harmless error is not a device for the appellate court to substitute itself for the trier-of-fact by simply weighing the evidence. The focus is on the effect of the error on the trier-of-fact. The question is whether there is a reasonable possibility that the error affected the verdict. The burden to show the error was harmless must remain on the state. If the appellate court cannot say beyond a reasonable doubt that the error did not affect the verdict, then the error is by definition harmful. This rather truncated summary is not comprehensive but it does serve to warn of the more common errors which must be avoided.
DiGuilio, 491 So.2d at 1138. And, as we emphatically stated in DiGuilio: "We wish to emphasize that any comment, direct or indirect, by anyone at trial on the right of the defendant not to testify or to remain silent is constitutional error and should be avoided." Id. at 1139.
Clearly, this prosecutor did not get the message of DiGuilio and its progeny, and he risked reversal of the guilt phase by making this impermissible closing argument. In this case, an objection was made to the impermissible comment and the trial court should have acted on it. What we stated in Bertolotti v. State, 476 So.2d 130 (Fla.1985), rings true today, especially in death penalty cases:
Moreover, we commend to trial judges the vigilant exercise of their responsibility to insure a fair trial. Where, as here, prosecutorial misconduct is properly raised on objection, the judge should sustain the objection, give any curative instruction that may be proper and admonish the prosecutor and call to his attention his professional duty and standards of behavior.
Id. at 134.
For these reasons, I concur in the majority's opinion vacating the death sentence, and concur in result only in the majority's opinion affirming on the issue of the prosecutor's improper comments on Poole's failure to testify at trial and his silence after arrest.
ANSTEAD, J., concurs.
NOTES
[1] White testified that Scott owned a Sega Genesis, Sega Dreamcast, and Super Nintendo.
[2] The first comment was: "Well, there is no evidence in this case that at any time, either in this trial or anywhere else, Mr. Poole ever acknowledged that he did anything." The second comment started with the prosecutor stating, "Mr. Poole talked to the police" and ended with "there is no evidence to support that theory."
[3] We recognize that generally, the proper standard of review for an overruled objection based on a comment on a defendant's right to remain silent or defendant's failure to testify is a harmless error test. See Heath v. State, 648 So.2d 660 (Fla.1994); see also State v. Marshall, 476 So.2d 150 (Fla. 1985). However, this standard does not apply here because after defense counsel simultaneously objected and moved for a mistrial, the trial judge never ruled on the objection, but simply denied defense counsel's motion for mistrial. As a result, the trial court's ruling on the motion for mistrial is reviewed under an abuse of discretion standard. Dessaure v. State, 891 So.2d 455, 464-65 & n. 5 (Fla.2004) (citing Cole v. State, 701 So.2d 845 (Fla. 1997)).
[4] The prosecutor stated:
What sets this crime apart so much from other crimes that the death penalty is the only conclusion you can come to? The fourth aggravating factor, heinous, atrocious and cruel.
... I submit to you that it is an overwhelming aggravating circumstance that can never be overcome in a case like this.
....
So now what weighs against that? I say to you that that scale is so far down here that there is nothing -- and the judge will tell you once you find that sufficient aggravating circumstances exists [sic] to warrant the death penalty, unless you find that the mitigating circumstances outweigh them. . . unless something is going to push this scale back down, then your vote has got to be for the death penalty. I tell you that [it] has to be twelve to nothing again.
[5] The prosecutor argued:
This is what you saw. A picture of a church, isn't that nice. When did he go to this church? When he was like 12, 16, 19. He is 39 years old when he murdered this boy. 39. Does it matter what he looked like in this picture? Was Ted Bundy okay in the fourth grade? I don't care, and I think you shouldn't care what he was doing in the fourth grade. A nice little picture in the fourth grade.
[6] As the majority explains, the final objected-to comment was an impermissible comment on Poole's failure to testify. As pointed out by the majority, "any comment on, or which is fairly susceptible of being interpreted as referring to, a defendant's failure to testify is error and is strongly discouraged." Rodriguez v. State, 753 So.2d 29, 37 (Fla.2000) (emphasis added) (quoting State v. Marshall, 476 So.2d 150, 153 (Fla. 1985)).
Background
==========
In 2000, the University of Adelaide Medical School adopted a new curriculum. This switch from a traditional discipline lecture-based 'old' (TLB) curriculum to a problem-based learning 'new' (PBL) curriculum was part of a worldwide trend and represented our most significant change since the 1960s \[[@B1]\]. In the adoption of any new curriculum, it is vital to evaluate its effectiveness, ensuring that standards and quality are maintained or enhanced. The full impact of changes to curricula is not known for some time after graduation, requiring a long-term approach to curriculum evaluation \[[@B2]\]. This study evaluated how graduates of the TLB and PBL curricula perceived their preparedness (self-reflection assessment) for hospital practice after completion of their intern year, in comparison to workplace-based assessment (WPBA) results.

At the University of Adelaide, years 1--3 in the TLB curriculum were didactic in style, with the program organised into many separate subjects delivered by individual disciplines, primarily in a lecture mode. Years 4--6 were clinically focussed. There was little emphasis on clinical reasoning and relatively little small group learning. The subjects were not integrated with each other in any way, so that a student could be studying the anatomy of the brain, the pharmacology of heart failure, the characteristics of Staphylococcus, and the history of public health all at the same time. Communication skills were delivered in lecture format by staff from psychology, with very little opportunity for students to practise (Figure [1](#F1){ref-type="fig"}).
{#F1}
The 'new' PBL curriculum was centrally planned, with integrated multidisciplinary content, delivered in lectures and student-centred small group (n = 8) sessions. The use of clinical scenarios, designed to encourage students to form links between clinical practice and the basic medical sciences, commenced in 1st year. The scenarios were simple cases of common conditions, and progressed to more complex cases with multiple co-morbidities in 3rd year. Tutors fulfilled a primarily facilitative role and group discussions occupied 6--20 hours per week. There was an increased emphasis on communication skills (with allied health colleagues, patients, peers and supervisors), and opportunities to practise these skills were introduced using actors, with audio-visual recordings for students to review their own performance. Assessment was centrally organised and integrated, and included testing of knowledge and clinical reasoning \[[@B3]\].
In the evaluation of outcomes from an overall curriculum, student satisfaction alone is insufficient \[[@B4]\], and attention must be paid to impacts on student progression, student and graduate satisfaction, career choices or preferences, and career retention.
Research on the effects of different curricula on such an elite group of students, shown to pass examinations irrespective of teaching methods \[[@B5]\], has been relatively inconclusive \[[@B1],[@B6],[@B7]\]. Studies comparing TLB with PBL curricula vary in their findings, from no clear differences in outcomes \[[@B1],[@B6],[@B8]-[@B10]\], to observed differences in the areas of social and cognitive dimensions \[[@B7]\], tests of factual knowledge \[[@B11]\], clinical examinations \[[@B11],[@B12]\], and licensing examinations \[[@B13]\].
The outcomes research to date has mainly employed a self-report study design \[[@B9],[@B14]\] and 'seldom include workplace points of view' \[[@B15]\]. It is important to widen the focus of evaluation beyond traditional educational outcomes, to include external assessment such as WPBA assessments during the intern year \[[@B9],[@B16],[@B17]\].
In Australia, at the time of this study, all medical graduates spent their first postgraduate year as an 'intern' in accredited public hospitals. Throughout Australia, intern assessment processes vary; however, all WPBA are made by senior clinicians and their supervising team and are endorsed by the Medical Board of Australia. In South Australia, an intern has on average five rotations. At the completion of each rotation, the clinical supervisors provide a WPBA report that identifies strengths and weaknesses and gives an overall appraisal of intern performance. The intern end-of-rotation assessment is 'high stakes'; however, the concept of pass/fail is not used: the intern is assessed as having made satisfactory, borderline or unsatisfactory progress in acquiring intern competencies. If a rotation has not been satisfactory, remedial measures are implemented and progress recorded. A single unsatisfactory rotation will not necessarily need to be repeated if good progress is made during the rest of the year.
The aim of stages I and II of the Medical Graduates Outcomes Program was to follow and compare long-term outcomes of graduates from the two types of curricula, lecture-based (graduates 2003, 2004) and problem-based (graduates 2005, 2006), and to assess how well prepared these graduates felt for their internship, comparing this self-assessment with the clinical supervisor assessments from their intern year.
Methods
=======
Participants, procedure and study design
----------------------------------------
The cohorts studied graduated from the University of Adelaide Medical program between 2003--2006, with graduates from 2003 and 2004 comprising the TLB cohort, and graduates from 2005 and 2006 comprising the PBL cohort. Methodological triangulation involved data collection via a self-administered questionnaire at the completion of the intern year (one year after graduation), and an audit of intern WPBA reports from five South Australian public hospitals.
Between December 2006 and May 2007 graduates were sent an information pack containing an introduction to the project, a consent form, the Preparedness for Hospital Practice questionnaire and a contact details form to allow data collection for the next two stages of the study. The six month period of contact and follow-up ensured that all graduates had completed their intern year. Graduates who completed their intern year outside of Australia were excluded from this analysis. The audit of intern reports was carried out in June and July 2009 (Audit form available as Additional file [1](#S1){ref-type="supplementary-material"}).
Graduates were asked to rate how well the medical program had prepared them in 13 broad practitioner competencies and 13 areas of clinical and hospital practice using a 5 point Likert scale, from 'Very well' through to 'Not at all well'. The questionnaire was based on two previously validated questionnaires \[[@B9],[@B14]\]. The different areas represent a diverse range of skills and are divided into three sections: Preparation for Hospital Practice (ie history taking and diagnosis); Clinical Skills & Preparedness (ie procedures including consent, prescribing and cannulation), and Resilience (ie level of responsibility and meeting challenges as intern).
The intern audit form was developed based on the structure and content of the WPBA across each hospital, and commonly assessed criteria were identified. Fourteen criteria, together with an 'Achieving Appropriate Level of Competence' rating and an 'Overall Appraisal' rating, were assessed in the audit. A five-point Likert scale was used to record competence, from 'High level of competence' through to 'Low competence'.
Ethical approval was obtained from the University of Adelaide Human Research Ethics Committee (H-019-2006 and H-099-2010) and the Ethics Committees of the five public hospitals.
Analysis
--------
The data were recoded by compression from a 5- to a 3-point scale (e.g. 'strongly agree and agree', 'neutral', and 'disagree and strongly disagree'). Descriptive statistics (frequencies) were completed for all items by curriculum type. Differences between the curriculum types were examined using separate chi-square tests. In order to account for multiple testing we adjusted for the number of comparisons made (Bonferroni method \[[@B18]\]) to reduce the issue of multiplicity (ie an increased rate of type I error). Results presented for each chi-square test are the adjusted *p*-values.
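The recode-and-test procedure above can be sketched in plain Python. This is a minimal illustration only, not the authors' actual analysis code: the function names, category labels, and example contingency table are hypothetical, and the chi-square helper returns only the test statistic (the study's *p*-values would come from the chi-square distribution with the appropriate degrees of freedom).

```python
def compress_likert(score):
    """Map a 5-point Likert score (1-5) onto 3 categories, as described
    in the Analysis section."""
    if score <= 2:
        return "agree"      # 'strongly agree' and 'agree'
    if score == 3:
        return "neutral"
    return "disagree"       # 'disagree' and 'strongly disagree'


def chi_square_statistic(table):
    """Pearson chi-square statistic for a list-of-lists contingency table
    (rows = curriculum cohorts, columns = response categories)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat


def bonferroni(p_values):
    """Bonferroni adjustment: multiply each p-value by the number of
    comparisons, capping the result at 1.0."""
    m = len(p_values)
    return [min(p * m, 1.0) for p in p_values]
```

With the 13 broad-competency comparisons of Table 2, an unadjusted *p*-value of 0.003 becomes 0.003 × 13 = 0.039 after adjustment, which matches the reported adjusted value for 'Being aware of legal and ethical issues'.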
Results
=======
A total of 166 graduates (39% of the total number contacted) completed the Preparedness for Hospital Practice Questionnaire (Table [1](#T1){ref-type="table"}). Matched WPBA data were available for 124 graduates. The demographics of the responding graduates do not differ significantly from their respective cohort populations for gender (*χ*^2^ (1, N = 458) =0.69, *p* = 0.405). The number of international students from the PBL cohort who responded to the survey was almost double that of the TLB (15 vs 8), but there was no significant difference between the cohorts in terms of domestic or international status (*χ*^2^ (2, N = 165) =4.28, *p =* 0.118). The respondents' ages ranged from 23--45 years at the time of survey, with 80% (134) of respondents being aged 28--31 years, and no significant difference between the curriculum cohorts (*χ*^2^ (11, N = 165) =11.03, *p* = 0.440). There was no significant difference in the proportion of respondents from each of the 5 hospital training sites (*χ*^2^ (5, N = 166) =5.66, *p* = 0.342) (Table [1](#T1){ref-type="table"}).
######
Results stages I and II
**% (N = 423)**
-------------------------------------------------- -------------------
Overall response rate 41.7% (172)\*
Preparedness for hospital practice Questionnaire 39.2% (166)\*
Intern reports audited with matched data 625 reports (124)
Follow-up consent 3 yr & 10 yr: 37.6% (159)\*
Contact details 36.2% (153)\*
2003 TLB 24.6% (41)
2004 TLB 23.5% (39)
2005 PBL 24.7% (41)
2006 PBL 27.1% (45)
Female preparedness survey 61.3% (49)
Female intern audit 60.5% (52)
\*Not all graduates completed the questionnaire, consented to the audit, or had intern reports that could be found.
Respondent self-assessment
--------------------------
For the overall evaluation, graduates from the TLB curriculum were more likely to rate the medical program as 'excellent/good' than were the graduates from the PBL curriculum (*χ*^2^ (4, N = 160) =15.55, *p* = 0.004) (Figure [2](#F2){ref-type="fig"}).
{#F2}
Preparedness for hospital practice
----------------------------------
In two of 13 'broad practitioner' competencies the two cohorts reported significantly different levels of preparedness. The TLB cohort reported higher levels of preparedness for 'Understanding disease processes' (*χ*^2^ (4, N = 166) =20.11, *p* \< 0.001) while the PBL cohort reported greater preparedness in 'Being aware of legal and ethical issues' (*χ*^2^ (4, N = 166) =15.85, *p* = 0.039) (Table [2](#T2){ref-type="table"}).
######
Graduate self-assessment rating of preparedness of broad practitioner competencies
**How well did the medical program prepare you for\...\...?** **Curriculum type** **% More than quite well prepared** **% Quite well prepared** **% Less than quite prepared** ***P-value unadjusted*** ***P-value adjusted***^**§**^
----------------------------------------------------------------------------------------- --------------------- ------------------------------------- --------------------------- -------------------------------- -------------------------- -------------------------------
History taking, clinical examination and selection & interpretation of diagnostic tests TLB 77.50 20.00 2.50 0.010 0.130
PBL 60.47 36.05 3.49
Diagnosis, decision making & treatment including prescribing TLB 50.00 37.50 12.50 0.613 1.00
PBL 45.35 39.53 15.12
Keeping accurate records TLB 56.25 28.75 15.00 0.444 1.00
PBL 60.47 31.40 8.14
Communicating effectively TLB 76.30 18.75 5.00 0.652 1.00
PBL 77.90 20.90 1.20
Working in a team TLB 67.50 22.50 10.00 0.540 1.00
PBL 76.70 18.60 4.70
Being aware of legal and ethical issues TLB 45.00 32.50 22.50 0.003 0.039\*
PBL 61.60 32.60 5.80
Managing time effectively TLB 45.00 28.75 26.25 0.415 1.00
PBL 40.70 37.20 22.10
Being aware of own limitations TLB 72.50 22.50 5.00 0.232 1.00
PBL 66.30 31.40 2.30
Understanding disease processes TLB 67.50 26.25 6.25 \<0.001 \<0.001\*\*
PBL 45.35 34.88 19.77
Understanding the principles of evidence based medicine TLB 62.50 28.75 8.75 0.422 1.00
PBL 55.80 32.60 11.60
Accept the level of responsibility expected of an intern TLB 48.75 36.25 15.00 0.891 1.00
PBL 52.33 36.05 11.63
Meet the variety of challenges faced TLB 48.75 37.50 13.75 0.460 1.00
PBL 59.30 25.58 15.12
Dealing with the differing relationships in the hospital context TLB 46.25 33.75 20.00 0.250 1.00
PBL 61.63 29.07 14.46
^§^post hoc comparisons with Bonferroni correction significant at the \*P \< 0.05, \*\*P \< 0.001.
Resilience
----------
There was no difference between cohorts for any of the three criteria. The cohorts felt equally prepared to 'Accept the level of responsibility expected of an intern' (*χ*^2^ (4, N = 166) = 1.12, *p* = 0.891), 'Meet the variety of challenges they faced' (*χ*^2^ (4, N = 166) = 3.62, *p* = 0.460*)* and in 'Dealing with the differing relationships in the hospital context' (*χ*^2^ (4, N = 166) = 5.39, *p* = 0.250) (Table [2](#T2){ref-type="table"}).
Clinical skills & preparedness
------------------------------
There were no significant differences between cohorts in the 13 clinical skill competencies (Table [3](#T3){ref-type="table"}).
######
Graduates self-assessment rating of clinical skills preparedness
***How well did the medical program prepare you for\....?*** **Curric\--ulum type** **% More than quite well prepared** **% Quite well prepared** **% Less than quite prepared** ***P-value unadjusted*** **P-value adjusted**^**§**^
-------------------------------------------------------------- ------------------------ ------------------------------------- --------------------------- -------------------------------- -------------------------- -----------------------------
Basic CPR TLB 68.75 21.25 10 0.342 1.00
PBL 74.4 23.3 2.3
Obtaining valid consent TLB 50 18.75 31.25 0.008 0.104
PBL 48.84 38.37 12.79
Prescribing appropriately TLB 50 28.75 21.25 0.635 1.00
PBL 39.5 38.4 22.1
Writing a prescription TLB 38.75 32.5 28.75 0.321 1.00
PBL 43.1 36 20.9
IV cannulation TLB 75 17.5 7.5 0.576 1.00
PBL 69.77 24.42 5.81
Arterial blood sampling TLB 53.75 26.25 20 0.996 1.00
PBL 53.5 24.4 22.1
Suturing TLB 53.75 31.25 15 0.388 1.00
PBL 51.2 27.9 20.9
Performing an ECG TLB 37.5 41.25 21.25 0.053 0.689
PBL 30.2 30.2 39.6
Administering oxygen therapy TLB 46.25 35 18.75 0.336 1.00
PBL 39.53 33.72 26.74
Correct use of nebuliser TLB 38.75 28.75 32.5 0.671 1.00
PBL 31.4 33.7 34.9
Inserting a nasogastric tube TLB 38.75 28.75 32.5 0.184 1.00
PBL 29.07 38.37 32.56
Urinary catheterisation TLB 45 33.75 21.25 0.830 1.00
PBL 40.7 33.7 25.6
Control of haemorrhage TLB 40 30 30 0.701 1.00
PBL 39.5 34.9 25.6
^§^post hoc comparisons with Bonferroni correction.
WPBA
----
The number of reports of the interns' competence varied from one (three interns) to nine (one intern), with the majority of interns having four (n = 24, 18.2%), five (n = 71, 53.8%) or six (n = 25, 18.9%) reports. There were no clear associations of number of reports with cohorts, hospitals, or rotations. A total of 82.0% (N = 533) of reports were signed by the intern, indicating they had received feedback on their rotation.
There was no significant difference between curriculum cohorts for 'Achieving Appropriate Level of Competence' (*χ*^2^ (1, N = 574) = 1.27, *p =* 0.260) and 'Overall Appraisal' (*χ*^2^ (3, N = 615) = 0.22, *p =* 0.974). A comparison of overall appraisal by WPBA and graduates' self-assessment of preparedness for internship is presented in Figure [3](#F3){ref-type="fig"}.
{#F3}
A similar pattern is seen for both cohorts, in that graduates assessed themselves more harshly than did their supervisors. Nine (7.3%) graduates received a rating of 'Variable' or 'Low Competence' on at least one of their individual assessments as an intern. However, of these nine, only three (2.4%) received an overall rating of 'Variable Competence' for that rotation, and no graduate received an overall rating of 'Low Competence'. The low supervisor ratings had no significant relationship to cohort, hospital or rotation.
A comparison of the WPBA assessments of intern competence in 14 skill areas found only one area where a difference was noted between the two curriculum cohorts (Table [4](#T4){ref-type="table"}). There was a trend for graduates of the PBL curriculum to be rated as having higher competence in their 'Interactions with peers and colleagues from other disciplines' (*χ*^2^ (3, N = 596) = 13.10, p = 0.056).
######
Comparison of supervisor-assessment of intern competence in 14 skills areas by curriculum cohort
| Assessed skills & number of reports | TLB: competency level (%) | | | PBL: competency level (%) | | | *p*-value unadjusted | *p*-value adjusted^§^ |
|---|---|---|---|---|---|---|---|---|
| Clinical Assessment/presentation (N = 596) | 83.39 | 16.23 | 0.38 | 87.31 | 12.69 | 0 | 0.185 | 1.00 |
| Clinical judgement/problem solving (N = 614) | 73.58 | 26.04 | 0.38 | 79.37 | 19.77 | 0.86 | 0.076 | 1.00 |
| Ongoing management (N = 610) | 81.51 | 17.74 | 0.75 | 86.38 | 13.04 | 0.58 | 0.304 | 1.00 |
| Documentation (N = 596) | 87.17 | 12.08 | 0.75 | 88.82 | 11.18 | 0 | 0.118 | 1.00 |
| Physician/patient interactions (N = 596) | 84.9 | 15.09 | 0 | 81.57 | 18.43 | 0 | 0.539 | 1.00 |
| Senior colleague interactions (N = 615) | 77.74 | 21.51 | 0.75 | 77.71 | 21.71 | 0.57 | 0.931 | 1.00 |
| Peers & colleagues other disciplines interactions (N = 596) | 86.8 | 13.21 | 0 | 93.65 | 6.04 | 0.3 | 0.004 | 0.056 |
| Nurses & ancillary staff interactions (N = 613) | 86.41 | 13.21 | 0.38 | 91.38 | 8.62 | 0 | 0.164 | 1.00 |
| Ethics & integrity (N = 593) | 48.29 | 51.71 | 0 | 44.85 | 55.15 | 0 | 0.106 | 1.00 |
| Professional skills (N = 518) | 85.61 | 14.39 | 0 | 82.67 | 17.32 | 0 | 0.236 | 1.00 |
| Theoretical knowledge (N = 614) | 73.86 | 26.14 | 0 | 75.71 | 24 | 0.29 | 0.274 | 1.00 |
| Learning initiative (N = 582) | 56.59 | 43.02 | 0.39 | 60.19 | 39.2 | 0.62 | 0.543 | 1.00 |
| Technical competencies (N = 580) | 72.22 | 27.78 | 0 | 71.96 | 28.05 | 0 | 0.931 | 1.00 |
| Organisational & time management (N = 612) | 78.49 | 21.13 | 0.38 | 81.56 | 17.87 | 0.58 | 0.634 | 1.00 |
^§^post hoc comparisons with Bonferroni correction.
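The *χ*^2^ comparisons reported in this section are standard Pearson tests of independence on the cohort-by-rating contingency counts. A sketch under assumptions — the counts below are hypothetical (the paper reports only percentages, not raw counts), and the class and method names are ours:

```java
public class ChiSquareDemo {
    // Pearson chi-square statistic for an r x c contingency table of counts.
    // The p-value would then come from the chi-square distribution with
    // (r - 1)(c - 1) degrees of freedom.
    static double chiSquare(long[][] obs) {
        int r = obs.length, c = obs[0].length;
        long total = 0;
        long[] rowSum = new long[r];
        long[] colSum = new long[c];
        for (int i = 0; i < r; i++) {
            for (int j = 0; j < c; j++) {
                rowSum[i] += obs[i][j];
                colSum[j] += obs[i][j];
                total += obs[i][j];
            }
        }
        double stat = 0;
        for (int i = 0; i < r; i++) {
            for (int j = 0; j < c; j++) {
                double expected = (double) rowSum[i] * colSum[j] / total;
                double d = obs[i][j] - expected;
                stat += d * d / expected;
            }
        }
        return stat;
    }

    public static void main(String[] args) {
        // Hypothetical 2 (cohort) x 3 (rating level) counts, for illustration only.
        long[][] obs = { {40, 25, 15}, {50, 28, 8} };
        System.out.printf("chi-square = %.2f%n", chiSquare(obs));
    }
}
```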
Discussion
==========
We have found that the graduates from both medical curricula were equally competent in their clinical skills as assessed by their clinical supervisors, supporting the findings of previous research \[[@B1],[@B6],[@B8]-[@B10]\]. We also found there was a trend for graduates of the PBL curriculum to be rated as better communicators than those from the TLB curriculum. These important communication skills are transferable between clinical settings, research environments and future roles as medical supervisors and teachers. The issues around improved communication skills and teamwork require further research, i.e., does the problem-based curriculum develop these skills, or are students today selected for these skills?
Our graduates' self-assessment of their preparedness for hospital practice varied between curriculum cohorts. PBL graduates self-assessed as being less prepared in two clinical skills ('Clinical exam & selection and interpretation of tests' and 'Understanding disease processes'), while the TLB graduates assessed themselves as less prepared for two of the broader practitioner skills ('Obtaining consent' and 'Legal and ethical issues'). Jones et al. similarly found in a PBL course graduates rated their ability in 'Understanding disease processes' less favourably than the TLB graduates \[[@B9]\].
Differences perceived by graduates may be due to differences in student expectations, medical education, the working environment or health care systems \[[@B19]\]. Differences in perception may also relate to specialty bias; for example, understanding of disease processes may be more important in an internal medicine rotation than in psychiatry. A King's College School of Medicine and Dentistry survey \[[@B20]\] found that although over 70% of graduates reported their education had satisfactorily equipped them for medical practice, there were significant differences between those in primary care and hospital medicine regarding the relative importance of subjects within the curriculum. However, Ochsmann \[[@B19]\] found deficits in feelings of preparedness irrespective of chosen specialty. How preparedness relates to specialty choice requires further study.
Feelings of preparedness are important in the successful transition from being a student to a practising doctor \[[@B19],[@B21]\]. However, the question of preparedness continues to be an ambiguous one. 'When junior doctors say they feel prepared, they may not mean they think they are competent' \[[@B22]\], and it is only by a comparison of self- and supervisor-assessment that we can explore the accuracy of their self-assessment. Our study did not find an association between self- and WPB assessment, supporting Bingham et al.'s findings, where trainees assessed themselves more harshly, while their supervisors assessed trainees as 'at or above expected level' for 'every item in every term' (43% vs 98.5%) \[[@B23]\]. Qualitative data from the Stage I Preparedness Questionnaire revealed two key differences between the TLB and the PBL graduates. The PBL cohorts were much more positive in their responses about how well the program had developed their attitudes to skill development, whilst asking for a greater emphasis on learning basic sciences.
A variety of studies have found that many graduates feel inadequately prepared for the role of junior doctor \[[@B24]-[@B26]\], and criticisms that medical schools do not prepare graduates for early medical practice are not new. Goldacre et al. explored UK junior doctors' views on preparedness in 2010 and found that the level of agreement that medical school had prepared them well for work varied between medical schools and changed over time, ranging from 82% to 30% at one year, and from 70% to 27% (respectively) at three years post-graduation \[[@B21]\].
Both medical schools and medical graduates have questioned preparation and preparedness for early medical practice \[[@B21]\]. Kilminster et al. suggest that the '*Emphasis on preparedness (is) misplaced'*\[[@B27]\], and as a result the focus of their work is on exploring the challenges associated with the transition from student to doctor. Interestingly, our graduates from both curricula reported feeling equally well prepared in 'meeting the challenges', to 'accept the level of responsibility' of an intern, and in 'dealing with the different relationships in the hospital context'.
Feelings of preparedness may be affected by a number of factors, both internal and external. A comparison of three diverse UK medical schools found that medical graduates' feelings of preparedness may be affected by individual learning style and personality, but the majority of graduates reported external factors as having the greatest impact \[[@B28]\]. Graduates made reference to external factors such as clinical placements; shadowing and hospital induction procedures and the support of others as important. Illing et al. \[[@B28]\] suggest that perception of preparedness, with respect to external factors, can be addressed by improving hospital induction processes, increased structure and consistency in clinical placements, and addressing perceived weaknesses in clinical procedures identified by the graduates.
There may have been variations in feelings of preparedness arising from the experience gained during clinical placements in the variety of intern rotations, as 'institutions and wards have their own learning cultures...' \[[@B27]\]. However, unlike Illing \[[@B28]\], our study did not demonstrate significant variation between hospitals or rotations, except with respect to the signing of the intern reports and therefore potentially the feedback received by the interns.
There may also be variation in preparedness of the graduates from the two curricula that relate more to their confidence in their learning method. Millan et al. \[[@B29]\] suggest that as graduates are aware of the research purpose, TLB graduates '*may overestimate values'* comparing one learning method to another. The graduates we surveyed were aware they were the last two cohorts of the TLB and the first two of the PBL curriculum. The problem-based cohorts may have felt insecure because their curriculum was newly implemented \[[@B29]\] and they may have felt they were missing out on something. This lack of confidence in the PBL cohort may also have been reinforced by some teachers and clinicians who felt disenfranchised and were not fully supportive of the change.
Consideration should also be given when comparing the self-assessment skills of graduates of TLB curriculum with those of PBL, as we may be comparing apples with oranges \[[@B30]\]. Our PBL graduates learned to self-assess using concepts such as pass/fail instead of numerical grades, and may have greater difficulty evaluating their skills \[[@B29]\]. Millan suggests that PBL graduates 'might view their performance in a different manner'.
Feedback during internship
--------------------------
The giving and receiving of feedback is important in any training situation, with trainees commonly requesting feedback on their strengths and weaknesses \[[@B23]\]. However, just under 20% of our graduates did not sign their reports (acknowledging feedback). There may be a variety of reasons for this, such as the lack of adequate time for assessment and feedback, with the report completed after the intern had left the 'hospital site'; a lack of training for both medical graduates and supervisors in assessment methods; or possibly a lack of interest in the particular area. In South Australia, demand for some rotations is higher than the places available, and most interns ultimately undertake rotations that are not within their area of interest, potentially reducing their desire to follow up on the feedback provided. A recent Australian retrospective study of 3390 assessment forms of prevocational trainees found that the forms may underreport performance and do not provide trainees with 'enough specific feedback to guide professional development' \[[@B23]\].
Strengths and limitations
-------------------------
The findings reported here are for graduates from one institution's medical program, which may be considered a limitation. However, a major strength of this study is the methodological triangulation of the two types of data gathered -- the questionnaire and the audit of intern reports. In addition, each intern was assessed in multiple specialty environments, in one of five large public hospitals, by multiple clinical supervisors, on a range of aspects of clinical knowledge and practice. The range and diversity of the WPBAs thus provides a reliable method of assessment.
Another limitation relates to missing data in the audit of supervisor assessments, which can be traced in the main to two particular rotations: nights and relieving. Comments provided by some supervisors highlight their reluctance to rate interns in these rotations, as they did not observe the interns performing certain skills.
The overall response rate for the longitudinal study of 41.7% may be considered low, but the nature of retrospective longitudinal studies carries with it the inherent issues of loss to follow-up. However, there was no significant difference in the age or gender of our non-responders and responders, and the responders were broadly representative of the four graduating cohorts.
Future research
---------------
The Medical Graduates Outcomes Evaluation Program includes a further three stages: 'Admissions and Selection', 'Early (first 5 years)' and 'Late (10 years)' postgraduate years. These next stages will provide our university and the broader medical community with a comparison of long-term outcomes between the two curriculum cohorts. Our study adds to the body of knowledge that highlights the need for education research in the areas of self-assessment and the giving and receiving of feedback. Curriculum changes based on self-assessment alone run the risk of 'throwing the baby out with the bathwater'. We suggest that further research is required into the impact of career specialty choices on the perception of how well medical programs prepare their graduates.
Conclusions
===========
Self- and WPB assessments are both valuable contributors to curriculum evaluation as well as guiding professional development. Our findings demonstrate that the curriculum change from TLB to PBL at our University has '*done no harm*' to our graduates' clinical practice in the intern year while potentially improving their communication skills and their attitude to skill development. Medical students and graduates, on the whole, are high achieving individuals, who *'leading up to medical school are groomed and selected for success in a traditional curriculum'* and who would succeed under either curriculum (88).
In addition we have learned that student confidence in a new curriculum may impact on their self-perception of preparedness, while not affecting their actual competence. The transition period from student to intern is a stressful time for all graduates, and it has been reported previously that graduates tend to underestimate when asked to self-assess 'how well they were prepared for hospital practice'. This perception is not to be discounted, but nor should it be used to support unevaluated curriculum change.
Abbreviations
=============
TLB ('old'): Traditional (discipline) lecture-based; PBL ('new'): Problem-based learning; WPBA: Work-place based assessment.
Competing interests
===================
The authors declare that they have no competing interests.
Authors' contributions
======================
DK and AT conceived and discussed the scope and design of the longitudinal evaluation project. DK, GL, AT, and PD contributed to the conception and design of stage II. GL conducted the searches, administered the questionnaire Stage I, conducted the audit Stage II and discussed the strategies used with DK. DK, GL, and AT, were jointly involved in the interpretation and data analysis. GL led the writing of the paper and each author contributed significantly to multiple subsequent revisions. All authors approved the final version of the manuscript submitted.
Pre-publication history
=======================
The pre-publication history for this paper can be accessed here:
<http://www.biomedcentral.com/1472-6920/14/123/prepub>
Supplementary Material
======================
###### Additional file 1
Intern audit form 2007_final.pdf.
Acknowledgements
================
The authors thank: the graduates for taking the time to participate; Dr Nancy Briggs and Mrs Michelle Lorimer for their statistical analysis and expertise; Mrs Teresa Burgess, Dr Ted Cleary, Professor Paul Rolan and Mrs Carole Gannon for their contribution to the initial conception of this project; and Professors Ian Chapman and Mitra Guha for their ongoing contribution, advice and guidance to the longitudinal project.
|
Greetings beloved brothers and sisters. We are going to tackle quite a complicated topic today but a nevertheless important one. What is God? Now if you’re anything like me when I started my spiritual journey, the word ‘God’ had negative connotations. Growing up in a very orthodox Christian family, I understood God as a judging, punishing and cruel being that creates people and then consistently tests them. But you can use any term that makes you feel more comfortable, such as the One Infinite Creator, Brahman (used in Hinduism), Allah, Source, Oneness, your Higher Self, or the One Consciousness. They are all describing the same fundamental thing. It is the one intelligence/consciousness/awareness behind the whole universe. It is the Source of everything.
We have already talked about how we are all part of God, and we are all God. When you fully understand oneness, you realise there is no separation between you and God, you and others, and you and the universe. It is all simply one being, manifesting in different forms to experience itself and learn about itself. Every being, atom, planet, star and galaxy is just a vessel which God uses to experience itself from different perspectives.
God is such a difficult concept to understand and describe; perhaps it is the most difficult concept. From my own meditations and psychedelic experiences, I have found that God cannot really be described by words. Trying to describe God through words is already limiting the concept. It is like describing the colour red to someone who has been blind all their life. God can only be experienced. God is also a concept that no one can fully understand until they have fully realised their divine nature at the end of their soul journey. However, a limited understanding is sufficient for this stage of our spiritual journey.
Let’s see what some spiritual texts have said about God. Christians believe God is all powerful, all knowing, all loving and present everywhere. In the Bible, particularly in the book of John, God is described as spirit, light and love. God is also described as the Alpha and the Omega; the beginning and end. In Islam, the Quran describes God as eternal, everlasting, the originator, the shaper, the creator and sustainer of the universe. He is also described as the all-seer. Hinduism describes Brahman as the ultimate reality — the one supreme spirit who is the indescribable, inexhaustible, omniscient, omnipresent, original, first, eternal and absolute principle, without a beginning, without an end, who is hidden in all and who is the cause, source, material and effect of all creation known, unknown and yet to happen in the entire universe. The ordinary senses and ordinary intellect cannot fathom, grasp, or be able to describe Brahman even with partial success. The Upanishads describe Him as the One and indivisible, eternal universal self, who is present in all and in whom all are present. In the Gnostic view, there is a true, ultimate and transcendent God, who brought forth from within Himself the substance of all there is in all the worlds, visible and invisible. In a certain sense, they therefore believe that all is God, for all consists of the substance of God. Zoroastrianism, the oldest living religion in the world, believes in one God called Ahura Mazda, and he is also described as all-knowing, all-powerful, present everywhere, impossible for humans to conceive, unchanging, and the creator of all life. Ahura Mazda is also described as the creator, maintainer and most benevolent spirit. As you can see, all these descriptions are extremely similar to each other, and they all seem to agree that God is inconceivable to man. 
Science, too, agrees with them; in the topic about oneness, we discussed the scientific validation of a zero point field, making up nearly all of the space in an atom, but which appears as dark emptiness to the human eye. This zero point field connects everything in the universe. It is this zero point field that is God. However, we do not need to know the full nature of God at this stage of our spiritual journey; what I do believe is important for us to know is summarised perfectly by Ra in the Law of One: “All is one, and that one is love/light, light/love, the Infinite Creator.”.
Some may still be questioning whether God exists or not, and that’s absolutely fine. It is good to question everything. Even Buddhists don’t really believe in one creator, but they still do believe in tathata, which means suchness. It is sometimes understood that tathata underlies reality, and the appearance of things in the phenomenal world are manifestations of tathata. In essence, they believe in one reality that underlies all forms. What really convinced me that there was a divine intelligence behind everything was the beauty and majesty of the Universe. The way our body just knows what to do all the time — from replicating itself from one cell to a full human body, to healing itself, to processing information. The way that schools of fish and flocks of birds move together in a beautiful synchronised fashion. The way the planet provides the right atmosphere for life to thrive. How plants absorb energy from the sun, and we take our energy from plants, either directly or indirectly. How carbon dioxide and oxygen are perfectly exchanged between plants and animals/humans. How water is recycled from our oceans to the atmosphere and back again. How the planet is the perfect distance from the sun to enable us to survive. The way that the same pattern, based on the Fibonacci sequence, appears everywhere throughout the Universe:
We will talk more about sacred geometry in a later level of the course. The beauty of sunsets, stars, nature, art. The different types of animals that can survive in all kinds of environments, from the bottom of the oceans, to deep underground, to deserts, mountains and the air. The unsolved mysteries of the universe. The way events seem to miraculously line up perfectly in life through synchronicities. For me, there has to be a divine intelligence behind all these things.
Instead of describing God and the universe, I have found that using analogies has helped my understanding. Here are my favourite analogies:
1) God’s VR game — One can simply view the Universe and creation as one really big long virtual reality game, where one being, God, is playing all the different characters. The game appears real to the characters, but ultimately there is only one being playing it from the comfort of his sofa.
2) God’s Dream — You can view the Universe and everything that happens in it as simply God having a very long lucid dream that he can control. The dreamer is not separate from the dream.
3) God’s Movie — God is creating a movie called ‘The Universe’, but God is the only actor, and so God must put on different disguises to appear as different characters in the movie.
4) Hologram — We have already talked about how you can view the universe as a hologram projected by one mind. Each part of a hologram contains the whole. Well, we can say that God is the one projector, projecting the Universe from this one source. Each being forms a small part of the hologram, but ultimately there is one Source.
5) DNA — It is a scientific fact that every cell in our bodies contains all of our DNA. This means that each cell has the information within it to become any cell of the body. We all start off as one fertilised cell that contains all of our DNA; this cell then multiplies several times. Each cell then manifests only some of the information contained within its DNA to become a specific cell with specific functions. Each human body is made up of trillions of cells. So the Universe can be thought of as a human body; each atom or being is a different cell. And so each atom, being, planet, star and galaxy contains the information of the whole of God within it (like DNA), but only manifests a tiny part of that information to become an individual cell. Each being fulfils a different function.
6) Invisible Mist — Imagine a Disneyland that was under construction but didn’t end up getting completed. The scaffolding of the buildings is half done, the parts of the rides are there but the rides haven’t been assembled, and the costumes for each character are there but there are no people to wear them. It is a lifeless place. Then one day a golden mist came to Disneyland — this golden mist was magical; it appeared to give life to everything it touched. As it touched the buildings, they appeared complete. As it touched the rides, they appeared fully assembled. As it filled the costumes, it appeared as if there were different beings acting in the costumes. However, to ensure the different characters had different personalities, the mist wanted to appear invisible to them so that they didn’t initially know that it was just one mist behind everything and everyone there. Otherwise, how could there be an experience? Disneyland now appeared fully functional and open for business. All the different characters learnt to live with each other and work with each other. Some fought with each other, some loved each other, some became greedy, some became poor, some became more famous than others. Then one day, one of the characters, Mickey Mouse, was able to take off his costume and see reality for what it truly was. Mickey tried to convince the other characters that they were all just one mist playing with itself, but the other characters thought he was insane. “What are you talking about?” they all screamed. But as time went on, more and more characters began to realise the truth — that Mickey was right all along. Eventually all the characters realised that they were just one mist filling different costumes, and the costumes were no longer needed. The mist went on a journey of self-discovery. Once it discovered itself, the experience had ended and the mist left Disneyland.
Disneyland returned to the construction site it first was, and the mist moved elsewhere to have a different experience.
You may even explore the concept of a creator in your meditations and come up with different analogies for yourself; in fact, I encourage you to. It is hard to imagine that there is one consciousness living within us while, at the same time, we are living within the one consciousness; mainly because we have denied our true divine nature by creating the ego, the false self. The main thing to understand is the oneness of the universe. By oneness we do not mean your physical body is one with another body or one with the physical body of a planet. Remember, God has been described in religions as a spirit within all things that cannot be detected or conceived by human senses. In science, God is the zero point field in each atom, which is invisible to us. We are not saying that God is physical matter. What we are saying is that God is the divine invisible essence (or spirit) within everything and everyone. Physical matter can be thought of as just disguises that hide the Oneness and make it look like there is separation. Understanding this has profound effects on how you will treat people, animals, objects and the planet. By treating everything and everyone as divine and one with you, you will automatically be in a constant state of love. We will talk about love and forgiveness in the next couple of topics, but oneness is the ultimate foundational concept we must understand first.
Thank you for reading this short but sweet topic. If you are interested in free healing sessions or spiritual support sessions, or if you would like to donate love or money to my channel, then please visit my website www.highvibelivin.co.uk. |
Q:
My object isn't getting printed to my JavaFX TableView
I'm currently passing my data through a constructor which fills in data requested from both my abstract and extended classes (i.e. the MusicItem abstract class and the CD and Vinyl classes that extend it).
I'm trying to print these into a TableView using JavaFX from my controller, but nothing is showing up, even with the System.out line in place.
I've tried putting them into observable lists as well as normal array lists, I've triple-checked the data fields to see if they match the table, and I tried viewing the object at the moment it's entered, but when I do that, the hex code for that object shows up.
It connects to the database properly but doesn't display anything.
abstract class MusicItem {
    @Id
    private int songID;
    private String name;
    private String artist;
    private String genre;
    private String releaseDate;
    private double price;

    public MusicItem(int songID, String name, String artist, String genre, String releaseDate, double price) {
        this.songID = songID;
        this.name = name;
        this.artist = artist;
        this.genre = genre;
        this.releaseDate = releaseDate;
        this.price = price;
    }
}
@Entity
class Vinyl extends MusicItem {
    private double speed;
    private double diameter;

    Vinyl(int songID, String name, String artist, String genre, String releaseDate, double price, double speed, double diameter) {
        super(songID, name, artist, genre, releaseDate, price);
        this.speed = speed;
        this.diameter = diameter;
    }
}

@Entity
class CD extends MusicItem {
    private double duration;

    CD(int songID, String name, String artist, String genre, String releaseDate, double price, double duration) {
        super(songID, name, artist, genre, releaseDate, price);
        this.duration = duration;
    }
}
public WestminsterMusicStoreManager() throws UnknownHostException {
}

@Override
public void insertItem() {
    System.out.print("Please enter Song ID : ");
    id = Validation.intValidate();
    System.out.print("Please enter Song name : ");
    name = Validation.stringValidate();
    System.out.print("Please enter Artist : ");
    artist = Validation.stringValidate();
    System.out.print("Please enter genre of " + name + " : ");
    genre = Validation.stringValidate();
    System.out.println("Please enter the release date in the requested order: ");
    releaseDate = date.getDate();
    System.out.print("Please enter price of song : ");
    price = Validation.doubleValidate();
    System.out.println("Will you be entering a CD or Vinyl Entry");
    String choice = Validation.stringValidate();
    switch (choice.toLowerCase()) {
        case "vinyl":
            System.out.print("Please enter diameter of vinyl : ");
            diameter = Validation.doubleValidate();
            System.out.print("Please enter speed of vinyl : ");
            speed = Validation.doubleValidate();
            Vinyl vinyl = new Vinyl(id, name, artist, genre, releaseDate, price, speed, diameter);
            musicItemList.add(vinyl);
            database.insertVinyl(vinyl);
            System.out.println(name + " was succesfully added to the database with an ID of " + id);
            break;
        case "cd":
            System.out.println("Please enter duration of the song");
            duration = Validation.doubleValidate();
            CD cd = new CD(id, name, artist, genre, releaseDate, price, duration);
            musicItemList.add(cd);
            System.out.println(cd);
            database.insertCD(cd);
            break;
        default:
            System.out.println("Your value needs to be a choice between either CD or Vinyl");
            insertItem();
    }
}
}
public class Controller implements Initializable {
    public GridPane customerMainLayout;
    @FXML private TableColumn<MusicItem, String> artistCol, songNameCol, durationCol, genreCol;
    @FXML private TableColumn<MusicItem, String> priceCol;
    @FXML private TableColumn<MusicItem, Double> speedCol, diameterCol;
    @FXML private TableColumn<MusicItem, Integer> songIDCol, releaseYearCol;
    public TableView<MusicItem> customerViewTable;
    @FXML private static JFXTextField searchBar;
    @FXML private static JFXButton searchBtn;

    private WestminsterMusicStoreManager musicStoreManager = new WestminsterMusicStoreManager();
    private ObservableList<MusicItem> musicItem = FXCollections.observableArrayList(musicStoreManager.musicItemList);

    public Controller() throws UnknownHostException {
    }

    public void initialize(URL location, ResourceBundle resources) {
        songIDCol.setCellValueFactory(new PropertyValueFactory<>("songID"));
        songNameCol.setCellValueFactory(new PropertyValueFactory<>("name"));
        artistCol.setCellValueFactory(new PropertyValueFactory<>("artist"));
        genreCol.setCellValueFactory(new PropertyValueFactory<>("genre"));
        releaseYearCol.setCellValueFactory(new PropertyValueFactory<>("releaseDate"));
        priceCol.setCellValueFactory(new PropertyValueFactory<>("price"));
        durationCol.setCellValueFactory(new PropertyValueFactory<>("duration"));
        speedCol.setCellValueFactory(new PropertyValueFactory<>("speed"));
        diameterCol.setCellValueFactory(new PropertyValueFactory<>("diameter"));
        addTableItems();
    }

    private void addTableItems() {
        musicItem.forEach(item -> {
            if (item instanceof CD) {
                CD cd = (CD) item;
                System.out.println(cd);
                customerViewTable.getItems().add(cd);
            } else {
                Vinyl vinyl = (Vinyl) item;
                System.out.println(vinyl);
                customerViewTable.getItems().add(vinyl);
            }
        });
    }
}
A:
Figured it out. The code I wrote was never incorrect: the ObservableList was being built from a newly constructed instance of the store manager, which is why no values were returned (the list was empty after instantiation). Instead, I pulled the values directly from MongoDB, added them to an ObservableList, set the table columns and data to the result, and it worked well.
private ObservableList<MusicItem> addTableItems() throws UnknownHostException {
ObservableList<MusicItem> musicItem = FXCollections.observableArrayList();
Database database = new Database();
for (MusicItem item: database.datastore.find(MusicItem.class).asList()){
musicItem.add(item); // add each item individually; addAll is meant for collections/varargs
}
return musicItem;
}
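For illustration, the core of the original bug — building the list from a freshly constructed manager instead of the one that was actually populated — can be reproduced in plain Java, independent of JavaFX (the class and field names below are hypothetical stand-ins for `WestminsterMusicStoreManager` and its `musicItemList`):

```java
import java.util.ArrayList;
import java.util.List;

public class EmptyInstanceDemo {
    static class StoreManager {
        // Every new manager starts with an empty list -- this mirrors the bug:
        // the controller built its ObservableList from a brand-new manager.
        final List<String> musicItemList = new ArrayList<>();
    }

    public static void main(String[] args) {
        StoreManager populated = new StoreManager();
        populated.musicItemList.add("Some CD");

        // Copying from a *new* instance copies nothing, so the table stays empty.
        List<String> fromNewInstance = new ArrayList<>(new StoreManager().musicItemList);
        System.out.println(fromNewInstance.size()); // 0

        // Copying from the instance that was actually filled works as expected.
        List<String> fromPopulated = new ArrayList<>(populated.musicItemList);
        System.out.println(fromPopulated.size()); // 1
    }
}
```

This is why querying the database directly (as in the answer's `addTableItems`) fixes the symptom: the list is filled from persistent storage rather than from a transient, empty in-memory instance.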
|
When Disney announced in 2015 that they would be continuing George Lucas’ plan for a Han Solo prequel, it looked like easy money on paper. Everyone loves Han Solo. What the plan didn’t account for was a series of blows: evolving tastes in the tales audiences want from a galaxy far, far away, and behind-the-scenes drama that overshadowed any residual excitement fans may have had. The result? The least-profitable Star Wars film to date, even without adjusting for inflation.
The message is loud and clear, and yet a subsection of the Star Wars fandom is overlooking it: fans are likely bored with the white, male status quo. Luckily, Solo makes a fantastic backdoor pilot for a spin-off franchise centered on one of the film’s most compelling yet underutilized new characters: Qi’ra (Emilia Clarke).
By the end of Solo: A Star Wars Story, Han’s (Alden Ehrenreich) narrative arc is neatly wrapped up. Unless Lucasfilm thinks audiences want to watch Han butter Jabba the Hutt up only to drop his cargo at the first sign of Imperial cruisers, there’s not a lot of story left to tell. Qi’ra, on the other hand, moves into position late in Solo as the next potential leader of Crimson Dawn, one of the biggest and most terrifying crime syndicates in the galaxy.
In order to understand how Qi’ra could become a lynchpin in the third faction of the Star Wars universe (the other two being the Republic and the Empire), Darth Maul’s history is key. If you hadn’t kept up with Star Wars outside the films, the reappearance of the former Sith apprentice was out of left field. For those with even a passing knowledge of the animated shows The Clone Wars or Star Wars Rebels, it was a long-awaited return to form.
Some quick background on how Maul got here: After being sliced in half by Obi-Wan, Maul fell to his not-death and was transported to the trash world of Lotho Minor where he slowly went insane while creating cybernetic spider legs out of garbage. He later escaped with the help of his brother (long story), regained his sanity and got robot legs. From there Maul went on to use cold calculation and murder to bring the disparate crime families to heel until all feared the Crimson Dawn. Since he was supposed to be dead, Maul brought in Dryden Vos (Paul Bettany) to be the face of the organization. This gets us right up to Qi’ra declaring her hesitant allegiance to Maul at the end of Solo.
But the story doesn’t end there. By Rebels, which takes place around five years after the end of Solo, Maul becomes trapped on the Sith planet of Malachor for years with no communication with the outside world. Soon after the cast of Rebels helps him escape, Maul would find himself dead at the hands of his old nemesis, Obi-Wan Kenobi, on the planet Tatooine. But just because a crime lord dies or disappears doesn’t mean the organization falls. Whether he meant to or not, Maul prepared Qi’ra to be a successor. Based on the sketchy way Maul found himself on Malachor, the Corellian orphan may have even given herself a promotion.
Armed with decades of fighting for her life and knowledge of the Jedi-defeating martial art known as Teras Kasi, Qi’ra is uniquely positioned to take over the Crimson Dawn in the wake of Maul’s disappearance and death. She’s shown herself to be painfully pragmatic and ruthless. She’s a survivor first, one who now has deep ties to other aspects of Star Wars lore. While it might be tempting for Lucasfilm to kill her off in some misguided notion that Leia needs “protection” from Han’s first love continuing to exist, it would be such a waste.
For example, as the leader of the Crimson Dawn, Qi’ra would be owed fealty by none other than Jabba the Hutt himself. Perhaps out of fond memories, Qi’ra could reach out via a surrogate to help Luke Skywalker formulate a plan to free her former beau…and take out a threat like Jabba, after which Qi’ra could install her own puppet leader. Given her own film, Disney could introduce both Sana Starros and Dr. Aphra to the live-action universe, perhaps hired to run a job for the Crimson Dawn that goes sideways…because the job always goes sideways.
Want another one? Let’s dig deep into the crazy fan theories and pull out the “Qi’ra is Rey’s mom” nugget. Let’s say Han and Leia are on the outs when Qi’ra reappears in Han’s life. He discovers she’s partially responsible for saving him from Jabba and they rekindle their romance briefly. Qi’ra becomes pregnant but doesn’t tell Han. She raises Rey until some catalyst in the galactic underworld forces Qi’ra to put Rey in hiding for her own safety. As an ace-in-the-hole, Qi’ra hires several henchmen to play a shell game with the Millennium Falcon so it ends up on Jakku. The logic being: Han would come eventually and find Rey should the worst happen. Qi’ra calls in a favor to get Unkar Plutt to watch young Rey. Then, for whatever reason, the money stops coming in and Plutt kicks Rey to the curb.
Of course, these are merely suggestions. But Solo: A Star Wars Story tees Qi’ra up for a lifetime of adventure and anguish and it would be a damn shame not to see it play out on the big screen. |
HDAC2 is involved in placental P-glycoprotein regulation both in vitro and in vivo.
Placental P-glycoprotein (P-gp) plays a significant role in regulating drugs' transplacental transfer rates. Investigations of placental P-gp regulation could provide more therapeutic targets for individualized and safe pharmacotherapy during pregnancy. Currently, research on the epigenetic regulation of placental P-gp is scarce. Our previous study demonstrated that HDAC inhibition could up-regulate placental P-gp and that HDAC1/2/3 might be involved in this process. The present study was carried out to further explore whether HDAC1/2/3 are indeed involved in the regulation of placental P-gp and to identify the subtype engaged in this process. BeWo and JAR cells were transfected with HDAC1/2/3-specific siRNA. After 48 h of transfection, cells were harvested for real-time quantitative PCR (qRT-PCR), Western blot, immunofluorescence and fluorescent dye efflux assays to evaluate P-gp expression, localization, and efflux activity, respectively. Hdac2 siRNA was injected intraperitoneally into pregnant mice every 48 h from E7.5 to E15.5, and digoxin was administered by gavage 1 h prior to euthanasia at E16.5. Placental Hdac1/2/3 and P-gp expression were determined by qRT-PCR and Western blot. Maternal plasma and fetal-unit digoxin concentrations were detected by enzyme-multiplied immunoassay. In vitro, HDAC2 inhibition significantly elevated P-gp expression and reduced intracellular accumulation of the P-gp substrates DiOC2 (3) and Rh 123 in both BeWo and JAR cells, while knockdown of HDAC1/3 had no influence on P-gp expression or its efflux activity. In vivo, Hdac2 silencing in pregnant mice likewise elevated placental P-gp expression and decreased the digoxin transplacental transfer rate. In conclusion, HDAC2 inhibition induces placental P-gp expression and functionality both in vitro and in vivo. |
Introduction {#s1}
============
Neuroscientific research on "embodied cognition" postulates that higher cognitive processes, such as language, thought and reasoning, are functionally (and possibly structurally) interwoven with lower-level sensory and motor functions (Gallese and Lakoff, [@B30]; Barsalou, [@B11]). To this end, recent empirical evidence from behavioral and neuroimaging studies demonstrates that the motor cortex serves an important function for language processing, particularly during semantic processing (Pulvermüller, [@B73]; Pulvermüller et al., [@B76]; Moseley et al., [@B62]). More specifically, semantic processing of words associated with actions and motor movements activates the motor cortex somatotopically (Hauk et al., [@B31]; Pulvermüller and Fadiga, [@B75]; Moseley et al., [@B65]), which may be explained on the basis of the formation and activation of sensorimotor action-perception circuits comprising neurons in the motor cortex, in sensory cortices and in perisylvian language areas (Pulvermüller and Fadiga, [@B75]; Pulvermüller, [@B74]; Pulvermüller et al., [@B77]). Interestingly, recent data reveal a specific weakness in the processing of action-related words in clinical populations who have motor impairments (Boulenger et al., [@B14]; Bak and Chandran, [@B3]; Fernandino et al., [@B26],[@B27]; Cardona et al., [@B15]; Kemmerer, [@B40]; Desai et al., [@B20]). Specific impairments in action-semantic processing have also been reported in individuals with autism spectrum disorder (ASD), a neurodevelopmental syndrome characterized by problems with social interaction, communication and language, and, importantly, by dysfunction in motor behavior \[American Psychiatric Association (APA), ([@B2])\].
The motor deficits seen in ASD, ranging from differences in gait, fine motor skills, posture and coordination, are pervasive across the spectrum, occur in individuals with and without intellectual impairment, and are among the earliest symptoms to appear (Leary and Hill, [@B46]; Jansiewicz et al., [@B38]; Dziuk et al., [@B24]; Ming et al., [@B58]; Moseley and Pulvermüller, [@B61]). Unsurprisingly, abnormalities in structural and functional connectivity have been reported within and between primary motor cortex and other cortical regions in ASD (Mostofsky et al., [@B66], [@B67]; McCleery et al., [@B55]; Floris et al., [@B28]; Thompson et al., [@B86]), as have differences in gray matter volume (Duffield et al., [@B23]; Mahajan et al., [@B54]), thus suggesting that the action-semantic deficit in this group is comparable to that seen in other populations with disease or damage to the motor system.
In the past, cognitive theories of ASD have centered around the archetypal "autistic triad" of deficits in social interaction, social communication and social imagination (Wing and Gould, [@B101]); as such, obvious motor impairments have been traditionally regarded as secondary and consequently neglected in research. To date, few studies on autism have focused on highlighting the functional relationship between motor symptoms and difficulties in higher-order cognitive functions, which include action-related cognition (e.g., imitation and gesturing). The functional link between an observed action and its corresponding motor program may be required to perform a self-generated movement and has been attributed to the *mirror neuron system* (MNS) which is posited to exist across primary and premotor cortex, somatosensory cortex, and parietal cortex. Responsive to both action perception and action execution, mirror neurons appear to be a quintessential type of multimodal "information-mixing" neuron, and a crucial element in binding motor areas to sensory and perisylvian language areas in action-perception circuits (Moseley and Pulvermüller, [@B61]). A number of studies consequently suggest that the MNS may be relevant in action perception, imitation, prediction of goals and intentions, as well as in social cognition and language (Iacoboni, [@B36]; Rizzolatti and Sinigaglia, [@B79]).
Previous studies have demonstrated functional impairments and neuronal hypoactivity of the MNS in autism (Nishitani et al., [@B68]; Oberman et al., [@B69]; Iacoboni and Dapretto, [@B37]; Bernier et al., [@B12]; Cattaneo et al., [@B17]; Honaga et al., [@B33]; Rizzolatti and Fabbri-Destro, [@B78]; McCleery et al., [@B55]; Wadsworth et al., [@B94]). These are consequently posited as the neuronal substrate of behavioral deficits in action-related cognition, which are interpreted as a consequence of dysfunctional action-perception mapping. This is manifest in impaired semantic processing for action but not object words in autistic individuals without intellectual disability, an impairment which correlated with reduced activation in cortical motor regions during action-word processing (Moseley et al., [@B63], [@B64]; Moseley and Pulvermüller, [@B61]). Moreover, further studies in this clinical group revealed hypoactivation in motor as well as in limbic areas during processing of abstract emotional words (Moseley et al., [@B65], [@B64]), which other studies have shown to be a notable challenge for autistic people. These findings have been interpreted on the basis that both of these semantic categories (action and emotion words) typically involve the activation of premotor and motor action-perception networks during learning and require this activity for efficient, optimal comprehension. This is consistent with the recent suggestion that hypoactivity of the motor cortex could also be one of the reasons for deficits in the socio-communicative and emotional-affective domain in ASD (Mody et al., [@B59]). Functional impairments between the motor cortex and perisylvian language regions may thus be related to social-communicative and emotional-affective deficits in individuals with ASD, as the development of semantic concepts would be mandatory for verbally expressing and understanding emotions in oneself and others.
A different theoretical approach explains reduced comprehension of emotional stimuli in ASD in terms of alexithymia, a difficulty in expressing and identifying one's own emotional states or feelings (Silani et al., [@B82]; Milosavljevic et al., [@B57]; Gaigg et al., [@B29]). However, a point of convergence might be that alexithymia itself may be (partially) caused by dysfunctional semantic processing of emotion words, which might, in turn, be linked to impaired action-perception circuits involving motor and limbic regions. Emotions clearly influence the style in which an action is performed, and thus predictably, the same multimodal mirror neurons of frontal-motor and parietal cortex are sensitive to different emotional states underpinning the same observed action (Di Cesare et al., [@B21]). This suggests the importance of the motor system in perceiving emotional states.
Previous studies demonstrated atypical brain activity in motor systems whilst autistic people read action and emotion words (Moseley et al., [@B63], [@B64]), which also seems to be linked to a behavioral slowness in processing action words (Moseley et al., [@B62]). The next piece of this puzzle, however, remains missing: the link between language impairment for action and emotion words and *movement* impairment. To clarify this functional link, our study aimed to investigate the relationship between semantic processing of action and emotion words, fine and gross motor skills, and clinical symptoms in individuals with ASD and in typically-developed (TD) controls. In line with previous research with autistic participants, we predicted a specific processing deficit for action and emotion words but no group differences for other word categories. We hypothesized that deficits in motor skills in individuals with ASD would be associated with clinical symptoms and impairments in processing these specific word categories.
Materials and Methods {#s2}
=====================
Participants {#s2-1}
------------
Nineteen autistic adults without intellectual disability (seven women) and 23 TD controls (nine women) were recruited for the study. One control participant had to be excluded from the final analysis due to poor task performance in the semantic decision task; therefore, the final data set comprised 19 ASD and 22 TD participants. All participants had normal or corrected-to-normal vision. In the control group, none of the participants had a history of psychiatric illness. Three participants in the ASD group took antidepressants.
The groups were matched for age, education, non-verbal IQ (measured by the LPS-3, Horn, [@B34]), and handedness (measured by the Edinburgh Handedness Inventory, Oldfield, [@B70]). Except for two participants in the ASD group, all participants were right-handed with a matched laterality-quotient (LQ). All participants were monolingual, native speakers of German. More information on both groups can be found in [Table 1](#T1){ref-type="table"}.
######
Means and standard deviations (SD, in brackets) of demographic and clinical variables used to match the autism spectrum disorder (ASD) and TD groups.
--------------------------------------------------------------------------------------------------
ASD group\ TD control group\ Statistical group difference
*N* = 19 *N* = 22
------------------------------- --------------- ------------------- ------------------------------
Age (years) 39.00 (11.20) 36.59 (7.55) n.s. (*p* = 0.4)
Education (years) 12.00 (1.52) 12.73 (0.88) n.s. (*p* = 0.06)
IQ (LPS-3) 117.76 (9.75) 112.96 (8.72) n.s. (*p* = 0.1)
Laterality Quotient (LQ) 79.79 (16.09) 88.18 (15.31) n.s. (*p* = 0.09)
Autism-Spectrum Quotient (AQ) 39.05 (6.62) 11.59 (4.02) *p* \< 0.001
--------------------------------------------------------------------------------------------------
*Between-group differences were calculated by independent t-tests (p-values are in brackets; n.s. indicates non-significant result). Groups did not differ on any variable except on the AQ*.
All ASD participants were diagnosed and recruited from the Autism Outpatient Clinic at the Charité University Medical School, Benjamin Franklin Campus, Berlin, Germany. Autism-specific diagnostic instruments were used for diagnosis, including the Autism Diagnostic Observation Schedule (ADOS; Lord et al., [@B51]) and a semi-structured clinical interview based on ASD criteria in the Diagnostic and Statistical Manual of Mental Disorders, 4th edition \[DSM-IV; American Psychiatric Association (APA), ([@B2])\]. If a parent was available---which was the case in 66% of all ASD patients---the Autism Diagnostic Interview-Revised (Lord et al., [@B50]) was conducted. Final diagnoses were established by expert consensus taking into account clinical interviews and scale assessments. A patient was diagnosed with ASD when scores on both the ADOS and the ADI-R exceeded the cut-off for autism spectrum or autism and all required DSM-IV criteria of the clinical interview were fulfilled. For the 33% of patients whose parents were not available for the ADI-R interview, an ASD diagnosis was given when all required criteria of the ADOS and the clinical interview were met and the patient provided sufficient examples that the autistic symptoms already existed in childhood.
The mean score of the ASD group on the Autism-Spectrum Quotient (AQ: Baron-Cohen et al., [@B10]) was 39.1 (SD: 6.6) compared to a mean score of 11.59 (SD: 4.020) in the control group: as expected, a significantly higher average score (*t*~(39)~ = 16.302, *p* \< 0.001). All but one participant in the ASD group scored above 26, which is considered as the general cut-off point for diagnosable autism (Woodbury-Smith et al., [@B97]).
Neuropsychological and Clinical Assessment {#s2-2}
------------------------------------------
### Leistungsprüfsystem-Test, Subtest 3 {#s2-2-1}
The *Leistungsprüfsystem-Test, Subtest 3* (Horn, [@B34]) was carried out with all participants to assess non-verbal IQ. Handedness was measured by the *Laterality Quotient*, assessed by the *Edinburgh Handedness Inventory* (Oldfield, [@B70]).
### Purdue Pegboard Test {#s2-2-2}
The Purdue Pegboard Test was used in both groups to assess manual dexterity, manual coordination and fingertip skills (Tiffin and Asher, [@B87]). The test consists of a board with two parallel rows of 25 holes running vertically. Participants were asked to use their right hand to put as many of the cylindrical metal pegs as possible in the right-sided row within 30 s; the same procedure was then followed for the left hand with the left-sided row. In a third condition which combined the two previous trials, participants had to simultaneously place the pegs within the right- and left-sided rows with their right and left hands respectively. In a fourth condition, as many "assemblies" as possible, consisting of different objects, had to be built within 60 s.
### Trailmaking Test (Parts A and B) {#s2-2-3}
The Trailmaking Test (TMT; Parts A and B) is a neuropsychological test to measure attention, processing speed and executive functions (Tombaugh, [@B88]). This test was performed with the ASD group only in order to assess psychomotor speed and attention (Part A) as well as executive function (Part B).
Clinical Questionnaires {#s2-3}
-----------------------
All participants filled out the Autism-Spectrum Quotient (AQ) and the Toronto Alexithymia Scale 26 (TAS-26; Taylor et al., [@B85]). The AQ measures the degree of autistic traits whereby higher scores indicate a higher degree of autistic traits (Baron-Cohen et al., [@B10]). This most popular dimensional measure of autistic traits has been extensively used and validated both in the general population and those with diagnosed autism (Hurst et al., [@B35]; Hoekstra et al., [@B32]; Ruzich et al., [@B80], [@B81]; Stevenson and Hart, [@B83]), where it boasts sound psychometric properties.
Alexithymia is popularly understood as a dimensional construct (Keefer et al., [@B39]) which is most commonly measured with the TAS-26. This scale comprises three subscales assessing the difficulties describing emotions (scale 1), difficulties identifying one's own emotions (scale 2), and the tendency to think in an externally-oriented way (scale 3).
Furthermore, all ASD participants filled out the Empathy Quotient (EQ; Baron-Cohen et al., [@B8]) and the Systemizing Quotient-R (SQ-R; Baron-Cohen et al., [@B9]; Wheelwright et al., [@B96]). The EQ measures the capacity for empathy, whereby a lower score indicates reduced empathy. The SQ-R measures the capacity for recognizing patterns and the tendency to "systemize," to see the world in terms of logical rules and systems and to try to impose these in life, whereby higher scores reflect a greater tendency to systemize. Both measures were developed by the same group as the AQ; EQ scores tend to be lower and SQ-R scores higher in autistic individuals, and both short forms of the original tests showed good psychometric properties (Wheelwright et al., [@B96]).
In an additional, self-designed questionnaire, the MOSES-Test ("Motor Skills in Everyday Situations"), participants had to self-assess their motor skills in everyday situations on a four-point Likert scale employing 12 statements such as "I can easily catch or throw a ball," or "I have no difficulties riding a bike." Possible scores ranged from 0 ("I completely agree") to 3 ("I completely disagree"). If the statements concerned difficulties ("I have difficulties in climbing stairs"), then scores ranged from 3 ("I completely agree") to 0 ("I completely disagree"). With an upper limit of 36, higher scores on this questionnaire suggest more difficulties in gross motor skills. The MOSES-Test can be found in the [Supplementary Materials](#SM1){ref-type="supplementary-material"}.
Semantic Decision Tasks {#s2-4}
-----------------------
### Stimuli {#s2-4-1}
In the first semantic decision task (SDT1; see details below), 90 action-related words {30 face-related \[e.g., "BEISSEN" ("TO BITE")\], 30 hand-related \[e.g., "MALEN" ("TO PAINT")\], 30 foot-related \[e.g., "LAUFEN" ("TO WALK")\]} and 90 object-related words {30 animal words \[e.g., "MAUS" ("MOUSE")\], 30 tool words \[e.g., "HAMMER" ("HAMMER")\], 30 food words \[e.g., "KUCHEN" ("CAKE")\]} were included.
In the second semantic decision task (SDT2; details below), we included 30 abstract emotional words \[e.g., "FREUDE" ("JOY")\] and 30 abstract neutral words \[e.g., "PLANEN" ("TO PLAN")\]. Abstract emotional words consisted of verbs and nouns associated with emotions, and the abstract neutral word category included verbs and nouns referring to emotionally neutral concepts or cognitions. Words were selected and matched as carefully as possible based on psycholinguistic properties such as word length and word frequency according to the CELEX database (Baayen et al., [@B102]).
Before conducting this experiment, a semantic rating study was carried out with 10 typically-developing participants who did not take part in the main experiment. This pre-experiment rating study was conducted to differentiate the selected word categories with respect to their semantic properties (see also Hauk et al., [@B31]; Moseley et al., [@B64]). Study participants rated all words with regards to semantic features such as concreteness, arousal, valence, emotion-relatedness and action-relatedness. Psycholinguistic variables and semantic ratings for the four major stimulus categories (action-, object-, abstract emotional-, abstract internal words) used in SDT 1 and 2 are displayed in the [Supplementary Materials](#SM1){ref-type="supplementary-material"}.
### Procedure {#s2-4-2}
All participants performed two separate and independent semantic decision tasks (SDT1 and SDT2) using E-prime software (Psychology Software Tools, Inc., Sharpsburg, PA, USA, [RRID:SCR_009567](https://scicrunch.org/resolver/RRID:SCR_009567)). The first SDT1 was carried out employing action- and object-related words; the second SDT2 task used abstract emotional and abstract neutral words. Each semantic decision task lasted 10 min, with a break given in between.
Participants were seated approximately 60 cm from the computer screen while words appeared on a white background in uppercase, black bold print. All participants were asked to decide as fast and accurately as possible if the presented words were related to human actions or to objects (in SDT1) or, in the second task (SDT2), whether the words were related to emotional or non-emotional abstract concepts. Participants indicated their semantic judgments by pressing one of two keys on a computer keyboard with the index and middle fingers of their right hand. The assignment of keys was counterbalanced between participants. After a fixation cross was shown at a central location for 250 ms, words were presented tachistoscopically for 150 ms in a pseudorandomized order. All participants saw the same words, with each word shown only once per participant. After the offset of the word, a blank screen was shown until the participant made a decision, or until 2,500 ms had passed without a response, at which point the screen returned to the fixation cross. The stimulus onset asynchrony (SOA) was 2,500 ms. Instead of using their right hand, the two left-handed participants used the index and middle finger of their left hand to perform the SDTs.
Data Analysis {#s2-5}
-------------
All data was analyzed using SPSS version 24.0 ([RRID:SCR_002865](https://scicrunch.org/resolver/RRID:SCR_002865)). Independent *t*-tests were used to compare means of demographic variables, neuropsychological tests and clinical questionnaires.
For each participant, we derived mean reaction times and accuracy scores for each word category (action words and object words from SDT1, emotional and non-emotional abstract words from SDT2): this was done by averaging reaction times across all individual words in that category. Each word within a category received either a score of 1 (reflecting correct categorization) or 0 (reflecting that the participant had incorrectly categorized the word or failed to respond). For each participant, the means across these accuracy scores were then transformed into a percentage accuracy for each word category. As such, a mean accuracy score and a mean reaction time score for the action, object, abstract emotional and abstract non-emotional word categories were entered into SPSS for each participant.
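As a rough sketch of this per-category aggregation (not the authors' SPSS pipeline; the `Trial` record and the example values below are purely illustrative), each trial scores 1 or 0 for accuracy and contributes its reaction time, then both are averaged within a word category:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ScoringSketch {
    // One trial: the word's category, whether the response was correct
    // (a missing response also counts as incorrect), and the reaction time.
    record Trial(String category, boolean correct, double rtMs) {}

    public static void main(String[] args) {
        List<Trial> trials = List.of(
                new Trial("action", true, 620.0),
                new Trial("action", false, 710.0),
                new Trial("object", true, 580.0),
                new Trial("object", true, 600.0));

        // Percent accuracy per category: average of 100 (correct) / 0 (incorrect).
        Map<String, Double> accuracy = trials.stream().collect(
                Collectors.groupingBy(Trial::category,
                        Collectors.averagingDouble(t -> t.correct() ? 100.0 : 0.0)));

        // Mean reaction time per category, averaged across all trials.
        Map<String, Double> meanRt = trials.stream().collect(
                Collectors.groupingBy(Trial::category,
                        Collectors.averagingDouble(Trial::rtMs)));

        System.out.println(accuracy.get("action")); // 50.0
        System.out.println(meanRt.get("object"));   // 590.0
    }
}
```

The resulting per-category means are what would be entered into the ANOVA, one accuracy score and one latency score per word category per participant.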
To compare reaction times and accuracies of both groups for statistically significant differences, we performed four 2 × 2 mixed design repeated measures analysis of variances (ANOVAs). In each ANOVA, the between factor "Group" (two levels: ASD vs. control) and the within factor "Word Category" \[two levels: action words vs. object words (SDT1), emotional vs. non-emotional abstract words (SDT2)\] were included.
As concepts, tools and the words denoting them are known to evoke activity in motor regions which are associated with their action affordances, i.e., the actions associated with their use (Chao and Martin, [@B18]; Carota et al., [@B16]). As such, these "object-related" tool words tend to be semantically related not only to visual objects but also to specific actions (for instance, a fork to eating). In order to control for this potential "action-relatedness" of the tool word category, we conducted another ANOVA in which tool words were excluded from the analysis. *Post hoc* planned comparisons were conducted with subsequent Bonferroni corrections.
A Pearson correlation was computed for each group separately to assess the relationship between accuracy and latency for each word category in the semantic decision tasks and other variables (AQ, TAS-26, EQ, SQ-R and MOSES-Test). No outlier removal procedure was applied as none of the individual data sets exceeded the mean group values by more than two standard deviations.
Results {#s3}
=======
Neuropsychological and Clinical Assessment {#s3-1}
------------------------------------------
### Purdue Pegboard {#s3-1-1}
*T*-tests revealed significant differences between the two groups in the first three conditions of the Purdue Pegboard Test (PPB), but not in the fourth "assembly" condition. In comparison to the control group, the ASD group placed significantly fewer pegs with their right hands, left hands and with both hands simultaneously, thus demonstrating impaired fine motor skills (see [Table 2](#T2){ref-type="table"}).
######
Means, standard deviations (in brackets) and statistical group comparisons in the Purdue Pegboard (PPB) Test.
------------------------------------------------------------------
ASD group\ Control group\ Statistical\
*N* = 19 *N* = 22 testing (*t*)
-------------- -------------- ---------------- -------------------
PPB right 14.16 (1.53) 15.77 (1.51) *p* \< 0.01
PPB left 13.42 (2.38) 14.82 (1.43) *p* \< 0.05
PPB both 11.47 (1.57) 12.41 (1.26) *p* \< 0.05
PPB Assembly 34.74 (7.43) 36.41 (6.68) n.s. (*p* = 0.45)
------------------------------------------------------------------
*Statistically significant effects are indicated by p-values; n.s. indicates non-significant difference*.
### Trailmaking Test A and B {#s3-1-2}
We conducted the TMT A and B only for the ASD group and found a mean of 22.05 s (SD: 7.50) in the TMT A and a mean of 49.58 s (SD: 17.58) in the TMT B, indicating unimpaired performance in the range of norms from healthy participants as stated in the test.
Clinical Questionnaires {#s3-2}
-----------------------
### Toronto-Alexithymia-Scale-26 {#s3-2-1}
*T*-tests showed a significant difference between the ASD group and the TD group in overall TAS-26 scores (see [Table 3](#T3){ref-type="table"}) and in all three sub-scales.
######
Means, standard deviations (in brackets) and statistical group comparisons in the TAS-26 questionnaire.
|                  | ASD group *N* = 19 | Control group *N* = 22 | Statistical testing (*t*) |
|------------------|--------------------|------------------------|---------------------------|
| TAS-26           | 49.00 (10.29)      | 38.09 (5.97)           | *p* \< 0.001              |
| TAS-26 (Scale 1) | 18.53 (6.51)       | 12.09 (2.94)           | *p* \< 0.001              |
| TAS-26 (Scale 2) | 17.79 (4.34)       | 11.64 (3.65)           | *p* \< 0.001              |
| TAS-26 (Scale 3) | 12.68 (2.81)       | 14.36 (2.57)           | *p* \< 0.05               |
*Statistically significant effects are indicated by p-values*.
### EQ and SQ-R {#s3-2-2}
The Empathy Quotient (EQ) and the Systemizing Quotient-Revised (SQ-R) were filled out only by the ASD group. The mean score on the SQ-R was 79.21 (SD: 22.84). The mean score on the EQ was 13.89 (SD: 5.60), which is comparable to (even slightly lower than) the empathy scores seen in the autistic sample of the original study, and well under the recommended cut-off score of 30, which allowed the authors to correctly classify 81% of their autistic sample (Baron-Cohen and Wheelwright, [@B7]).
### MOSES-Test {#s3-2-3}
A *t*-test revealed a significant difference in the overall MOSES-score between the ASD group and the control group (*p* \< 0.001). The ASD group scored significantly higher with a mean score of 14.53 (SD: 6.851) compared to a mean score of 4.50 (SD: 2.956) in the control group, indicating more motor difficulties in everyday life situations.
Semantic Decision Tasks {#s3-3}
-----------------------
### SDT1: Action Words vs. Object Words {#s3-3-1}
A mixed-design repeated measures ANOVA revealed a significant *Group × Word Category* interaction for accuracy (*F*~(1,39)~ = 4.01, *p* \< 0.05, $\eta_{\text{p}}^{2}$ = 0.093; see [Figure 1](#F1){ref-type="fig"}). *Post hoc* pairwise comparisons (Bonferroni-corrected) showed that participants in the ASD group made significantly more errors for action words than for object words (*p* \< 0.05). This interaction was not significant in the latency analysis (*F*~(1,39)~ = 0.0001, *p* = 0.985, $\eta_{\text{p}}^{2}$ = 0.0003). There was no significant main effect of *Group* in accuracy (*F*~(1,39)~ = 2.42, *p* = 0.128, $\eta_{\text{p}}^{2}$ = 0.06) or latency (*F*~(1,39)~ = 0.88, *p* = 0.355, $\eta_{\text{p}}^{2}$ = 0.02), suggesting that where differences did appear, they were associated with particular word categories rather than with generally poorer or slower processing. However, a significant main effect of *Word Category* in the latency analysis (*F*~(1,39)~ = 27.15, *p* \< 0.001, $\eta_{\text{p}}^{2}$ = 0.41) indicated that *all* participants were slower to process action words; there was also a non-significant tendency for them to be less accurate for action words (*F*~(1,39)~ = 2.87, *p* = 0.098, $\eta_{\text{p}}^{2}$ = 0.07). Means for accuracies and latencies are presented in [Table 4](#T4){ref-type="table"}.
![Figure 1. Accuracy for action vs. object words in the ASD and control groups (SDT1).](){#F1}
######
Means and standard deviations (in brackets) for latencies and accuracies.
|                                                          | ASD group    | Control group |
|----------------------------------------------------------|--------------|---------------|
| **I Action words---Object words**                        |              |               |
| Reaction time (ms), action words                         | 630.09 (188) | 590.26 (121)  |
| Reaction time (ms), object words                         | 573.58 (134) | 533.34 (115)  |
| Accuracy (%), action words                               | 90.8 (7.4)   | 94.4 (3.1)    |
| Accuracy (%), object words                               | 93.8 (4.0)   | 94.2 (4.4)    |
| **II Abstract emotional words---Abstract neutral words** |              |               |
| Reaction time (ms), abstract emotional words             | 816.90 (379) | 618.11 (136)  |
| Reaction time (ms), abstract neutral words               | 885.61 (374) | 774.62 (208)  |
| Accuracy (%), abstract emotional words                   | 91.90 (9.4)  | 95.80 (4.4)   |
| Accuracy (%), abstract neutral words                     | 81.70 (14.5) | 90.70 (8.1)   |
Furthermore, sub-categories of object and action words were investigated in *post hoc* analyses applying Bonferroni-corrected pairwise comparisons. The analyses revealed that in the control group, there were significant differences between animal words and tool words (*p* = 0.001), between tool words and food words (*p* = 0.002), and between animal words and each effector-specific type of action word (face-related words: *p* \< 0.001; hand-related words: *p* \< 0.001; foot-related words: *p* \< 0.001). In the ASD group, there were only significant differences between animal words and tool words (*p* = 0.005) and between animal words and foot-related action words (*p* = 0.011), but not between animal words and the other effector-specific action words (hand-related or face-related words), or between tool words and food words.
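The accuracy analysis above treats SDT1 as a 2 × 2 mixed design (between-subjects *Group*, within-subjects *Word Category*). With only two within-subject levels, the interaction *F* equals the squared *t* from an independent-samples *t*-test on per-subject difference scores. A stdlib sketch with invented accuracy scores (not the study data):

```python
from statistics import mean, variance
from math import sqrt

def interaction_F(group1, group2):
    """Group x Category interaction F for a 2x2 mixed design.
    Each argument is a list of (action_acc, object_acc) pairs per subject;
    the interaction F(1, n1 + n2 - 2) is t^2 on the within-subject differences."""
    d1 = [a - o for a, o in group1]
    d2 = [a - o for a, o in group2]
    n1, n2 = len(d1), len(d2)
    sp2 = ((n1 - 1) * variance(d1) + (n2 - 1) * variance(d2)) / (n1 + n2 - 2)
    t = (mean(d1) - mean(d2)) / sqrt(sp2 * (1 / n1 + 1 / n2))
    return t * t

# Hypothetical accuracies (%): first group less accurate for action words only
asd = [(88, 93), (90, 94), (86, 92), (91, 95)]
control = [(94, 94), (95, 94), (93, 95), (96, 95)]
print(round(interaction_F(asd, control), 2))
```

For designs with more than two levels per factor, a full ANOVA decomposition (or a dedicated statistics package) is needed; this shortcut holds only in the 2 × 2 case.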
### SDT2: Abstract Emotional vs. Abstract Neutral Words {#s3-3-2}
The ANOVA revealed a main effect of *Word Category* both in accuracy (*F*~(1,39)~ = 14.38, *p* \< 0.001, $\eta_{\text{p}}^{2}$ = 0.26) and latency (*F*~(1,39)~ = 16.69, *p* \< 0.001, $\eta_{\text{p}}^{2}$ = 0.30): in both cases, all participants were faster and more accurate for abstract emotional than abstract neutral words. Furthermore, there was a significant main effect of *Group* in the accuracy analysis (*F*~(1,39)~ = 8.25, *p* = 0.007, $\eta_{\text{p}}^{2}$ = 0.17) with significantly fewer correct responses for all words, regardless of word category, in the ASD group (see [Table 4](#T4){ref-type="table"}). No significant main effect of *Group* was found in the latency analysis (*F*~(1,39)~ = 3.28, *p* = 0.078, $\eta_{\text{p}}^{2}$ = 0.08). Moreover, there was no significant *Group* × *Word Category* interaction for accuracy (*F*~(1,39)~ = 1.66, *p* = 0.205, $\eta_{\text{p}}^{2}$ = 0.04) or latency (*F*~(1,39)~ = 2.54, *p* = 0.119, $\eta_{\text{p}}^{2}$ = 0.06), suggesting no category-specific deficit in either group.
Correlations Between Clinical Data and Semantic Decisions {#s3-4}
---------------------------------------------------------
Pearson correlations were performed between neuropsychological tests, clinical scales, and latency and accuracy data from the semantic decision tasks. The results showed a positive correlation in the ASD group between AQ scores and the overall TAS-26 score (*r* = 0.674, *p* = 0.002). Furthermore, in the ASD group, there was a positive correlation between AQ scores and the MOSES-Test (*r* = 0.766, *p* \< 0.001). Regarding the EQ, a negative correlation between AQ and the EQ scores in the autistic group (*r* = −0.499, *p* = 0.03) corroborated previous research, where higher scores on the AQ were associated with lower scores on the EQ. However, there was no significant correlation between any of these tests and the accuracy or latency of semantic judgments for any particular word category.
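The correlations reported here are Pearson coefficients; a stdlib sketch (the score pairs below are invented, chosen only to mimic the direction of the AQ/TAS-26 relationship):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical AQ and TAS-26 scores: higher AQ, higher alexithymia
aq = [32, 35, 38, 41, 44, 47]
tas = [42, 45, 44, 52, 55, 58]
print(round(pearson_r(aq, tas), 3))
```

The associated *p*-value would come from a *t* transform of *r* with *n* − 2 degrees of freedom, omitted here for brevity.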
Discussion {#s4}
==========
This study aimed to elucidate the relationship between semantic processing, motor skills and clinical variables in autistic individuals and IQ-matched neurotypical controls. In line with previous findings of action word deficits (Moseley et al., [@B62]), a significant Group × Word Category interaction was found for accuracy data and revealed that autistic participants were significantly less accurate than typically-developing controls when processing words associated with actions. Importantly and in contrast, the ASD group performed as accurately as controls when making semantic decisions about object-related words. This category-specific deficit in action-semantic processing, seen here in another motor-impaired group alongside those noted previously (Boulenger et al., [@B14]; Bak and Chandran, [@B3]; Fernandino et al., [@B26],[@B27]; Cardona et al., [@B15]; Kemmerer, [@B40]; Desai et al., [@B20]), might be interpreted in terms of an underlying dysfunction of the neuronal action-perception links (Rizzolatti and Fabbri-Destro, [@B78]; Moseley et al., [@B62]) suggested to underlie semantic processing (Pulvermüller and Fadiga, [@B75]; Moseley and Pulvermüller, [@B61]). Abnormalities in the circuits connecting motor regions to perisylvian language cortices would result in difficulties recognizing or understanding those words which especially draw on these links for the motor programs supporting conceptual knowledge: namely, in the first instance, action words (for a comprehensive review, see Moseley and Pulvermüller, [@B61]). It is important to note the specificity of this action-semantic processing deficit in the present and the previous study (Moseley et al., [@B62]), which speaks against the assumption of a more generic semantic language impairment in ASD, which might have been reflected by main effects of Group in SDT1 (see below for discussion of SDT2).
Previous studies suggest that the weakness that some clinical groups show in processing action-related stimuli is related to the differing semantic content of action-words and object-related words, rather than their differing grammatical roles (Pulvermüller and Fadiga, [@B75]; Moseley and Pulvermüller, [@B61]).
In support of the notion of an underlying action-motor problem in ASD, we found evidence for impaired motor skills in the ASD group compared to controls: in the Purdue Pegboard Test, the ASD group showed reduced hand motor skills when placing pegs in a board with the left hand, the right hand, and with both hands simultaneously. Interestingly, when a complex assembly of different objects with both hands was required, control participants and individuals with ASD performed equally well. Besides fine motor skills, the assembly task tests bimanual coordination and executive function: our results may suggest that our autistic sample was able to compensate for deficits in unimanual fine motor skills through good bimanual coordination. Although executive dysfunction in autism is assumed to be evident in everyday functioning, it is difficult to capture experimentally in tests with low ecological validity (Kenworthy et al., [@B42]; Wallace et al., [@B95]) and poor sensitivity (Demetriou et al., [@B19]). "Executive function" is a term which encapsulates many higher-level processes, and autistic people tend to show an inconsistent profile of executive difficulties and executive sparing, which is affected by sample differences in age, gender and IQ (where, notably, our study included only individuals with IQ in the normal range), by common comorbidities such as depression, anxiety and ADHD, and by task features such as complexity, whether tasks are open-ended or more structured (Demetriou et al., [@B19]), and even whether they measure cognitive performance vs. overt manifestations of difficulties (Albein-Urios et al., [@B1])[^1^](#fn0001){ref-type="fn"}.
It is highly likely that the lack of executive impairment seen in our data belies significant difficulties in everyday life (Wallace et al., [@B95]). In this context, it is not especially surprising that the autistic sample in our study did not appear impaired on the TMT Parts A and B, where they were compared with normative data from typically-developing participants in the same age range (Tombaugh, [@B88]). In contrast to previous studies (Hill and Bird, [@B103]), individuals with ASD in our study performed well on both parts of the TMT, though we were unable to perform a direct comparison with our own control group, who did not complete the TMT. Interestingly and specifically relating to the TMT, stronger performance has been seen in autistic girls and women than in autistic boys and men (Bölte et al., [@B13]; Lehnhardt et al., [@B47]). This may further explain the lack of group differences in our sample of men *and* women.
To our knowledge, this study is the first to employ a semantic decision task with abstract emotional and abstract but emotionally neutral words. Based on previous data demonstrating cortical hypoactivation in the motor and limbic cortex in individuals with ASD when processing emotion words (Moseley et al., [@B64]) and data from patients with motor lesions (Dreyer et al., [@B22]), we expected to find evidence for impaired processing of abstract emotional words but not for emotionally neutral abstract words; the former, like action words, would draw on motor systems for meaning (Moseley et al., [@B65]) and thus be especially impaired in our participants with movement impairments. Our data did not confirm this prediction but revealed that the ASD group, in general, showed less accurate and slower performance than typically-developing controls, irrespective of word category. One possible explanation is that the SDT2 task (abstract emotional words vs. abstract neutral words) was more difficult than the SDT1 task (action vs. object words). This might have led to lower and more heterogeneous performance in the SDT2 task in both groups, reducing statistical power and thus working against the emergence of a statistically significant Group × Word Category interaction.
Correlation analyses calculated between neuropsychological and clinical tests and accuracy and reaction time for semantic decisions did not reveal any statistically significant relationships, including (most notably for this study) a lack of relationship between movement impairments (in both the Purdue Pegboard Test and the MOSES-Test) and reaction times and accuracy for those word categories hypothesized to depend most on motor systems: action words and abstract emotional words. As such, our original hypothesis, that autistic deficits in motor skills would be functionally associated with impairments in action-semantic processing, was not statistically supported by the data. This is unexpected given the relationship between motor hypoactivity and impaired action word processing seen previously (Moseley et al., [@B62]). This previous study in autism, as well as reports from other patient groups with diseases or lesions of the motor system (Boulenger et al., [@B14]; Bak and Chandran, [@B3]; Cardona et al., [@B15]; Kemmerer, [@B40]), suggests the functional importance of the motor system for optimal action word processing; the studies above also indicate a functional role for motor systems in processing abstract emotional words (Moseley et al., [@B65], [@B64]; Dreyer et al., [@B22]), though this proposition has not yet accrued the same degree of empirical support. For action words, at least, simulation studies and studies of novel action word learning have demonstrated the involvement and importance of motor systems in acquiring an action vocabulary. The fact that action and emotion word processing deficits were not related to motor dysfunction appears to speak against this interpretation.
However, an interesting possibility is that the deficits in hand dexterity shown here by the Pegboard Test may have been so specific that they did not correlate with errors on action words, which ranged in effector-specificity: the overall action word category included not only hand-related action words that might correspond to the motor programs employed by the Purdue Pegboard Test, but also words denoting motor programs of the feet and face. The same point could be made regarding emotion words, which tend foremost to be related to actions of the face (Moseley et al., [@B65]). A more thorough investigation might, as such, include a wider battery of motor tests and a larger sample size with greater power. It is also notable that autistic individuals may, to some extent, be able to compensate for impaired motor systems by recruiting other areas for semantic word processing (Moseley and Pulvermüller, [@B61]). This may be another reason for the lack of an association, and ultimately, studies would benefit from marrying multiple methodologies: imaging during language testing, *and* motor skills testing.
A notable limitation of our study is that semantic differences between action and object words were confounded by uncontrolled differences in grammatical class: action words were all verbs, while object words were all nouns. As such, it could be argued that autistic participants had a general deficit across the grammatical category of verbs. Though this study cannot speak to this possibility, our previous investigation in autistic participants found a double dissociation *within* the grammatical category of verbs between words with emotional content and those without (Moseley et al., [@B64]). Analysis of carefully orthogonalized word categories does indeed suggest that action and object words diverge along the semantic as opposed to the grammatical line (Moseley and Pulvermüller, [@B60]), though dissociations between nouns and verbs as grammatical categories might appear as emergent properties of the more fundamental difference in action and object associations. The primacy of the semantic as opposed to the grammatical dissociation has been supported by a number of studies (Barber et al., [@B5]; Vigliocco et al., [@B92]; Kemmerer et al., [@B41]; Fargier and Laganaro, [@B25]; Lobben and D'Ascenzo, [@B48]; Popp et al., [@B72]; Zhao et al., [@B100]; Vonk et al., [@B93]), though others reflect both semantic *and* grammatical divisions (Yudes et al., [@B99]; Yang et al., [@B98]). We would as such doubt that our findings reflect a general verb deficit in autism, but as debate surrounding the amodal vs. modal organization of language continues, we cannot speak conclusively on this matter.
Another point of note is that one of our subcategories of object words, tool words, is known to elicit activity in motor systems that has been associated with the action affordances of these objects (Chao and Martin, [@B18]; Carota et al., [@B16]). Including this more action-related subcategory within our superordinate object-word category might, therefore, have been problematic. In an attempt to exclude the possible contribution of action associations from tool words in our object word category, we ran a secondary analysis excluding tool words, which did not lead to a different pattern of results. As such, the autistic impairment seen for action words was impervious to the presence of tool words in the object word category, but along with tighter control over the grammatical confound of action/verbs and object/nouns, future studies may wish to exclude tool words within superordinate object word categories.
Whilst none of the motor or clinical tests correlated with the semantic language tasks, several other relationships of interest were observed which corresponded with previous research in autism. First, a significant correlation between the severity of autistic symptoms (as measured by the AQ) and the severity of alexithymia (as measured by the TAS-26) was obtained in our autistic participants. This finding suggests that a higher number of autistic traits is associated with greater alexithymia, and is in line with other research that has shown high comorbidity between ASD and alexithymia (Lombardo et al., [@B49]; Milosavljevic et al., [@B57]; Kinnaird et al., [@B43]). Our ASD participants had significantly higher overall scores on all scales of the TAS-26 in comparison to TD controls. Scale 1 of the TAS-26 measures difficulties in identifying feelings, scale 2 measures difficulties in describing (communicating) feelings, and scale 3 measures externally-orientated thinking.
A high degree of consistency was seen between our findings and previous literature on the AQ, the EQ, and the SQ-R: namely, that autistic participants had lower scores on the EQ and that empathy scores decreased as autistic traits increased (as in Baron-Cohen and Wheelwright, [@B7]; Wheelwright et al., [@B96]); and that as in previous studies, autistic individuals tend to score highly in systemizing (Baron-Cohen et al., [@B9]; Wheelwright et al., [@B96]). This pattern, overall, confirms the empathizing-systemizing account of autism (Baron-Cohen, [@B6]), and is consistent with that seen in very large samples (Baron-Cohen et al., [@B8]).
Our self-developed MOSES questionnaire evaluates problems in gross motor skills in daily life (e.g., catching a ball, riding a bicycle, descending stairs, standing on one leg). The ASD group scored significantly higher than controls on this self-report questionnaire, indicating gross motor deficits that corroborate the fine deficits seen in the Purdue Pegboard Test. Furthermore, there was a strong positive correlation between overall AQ scores and the MOSES questionnaire which implies that the degree of autistic traits may correspond to the severity of motor deficits in everyday life situations. Many studies have shown deficits in gross motor skills in individuals with ASD (Leary and Hill, [@B46]; Jansiewicz et al., [@B38]; Dziuk et al., [@B24]; Ming et al., [@B58]), and many studies have likewise shown a relationship between increased severity of autistic symptomatology and greater motor dysfunction (Papadopoulos et al., [@B71]; MacDonald et al., [@B53], [@B52]; Travers et al., [@B90], [@B89]; Stevenson et al., [@B84]; Uljarević et al., [@B91]; for review, see Moseley and Pulvermüller, [@B61]). Notably, the MOSES test in our study assessed how participants subjectively *perceived* their own gross motor skills. It is interesting that ASD participants' perception of their own deficits in gross motor function is consistent with the poorer scores in objective assessments of gross motor skills described in previous studies, and that as in previous studies, a relationship exists between motor deficits and autistic symptomatology, even when the former is self-reported.
Finally, this study possesses limited generalizability within the autism spectrum, as only autistic adults without intellectual disability were included. Hence, these findings cannot be generalized to minimally-verbal adults, those with intellectual disability, or to children with ASD. Moreover, although the sample size in the present study is similar to that of other behavioral studies on autism, the results require confirmation in future studies with a larger clinical group.
Conclusion {#s5}
==========
Our study corroborates previous findings that autistic individuals show specific difficulties in the semantic processing of action words; there was no evidence for differential semantic processing deficits for any other word category. Furthermore, our findings revealed deficits in fine motor skills as well as in self-reported gross motor behavior in autistic adults without intellectual disability. The results might be interpreted on the basis of impaired functional (or structural) connections within the motor cortex that hinder the formation of action-perception circuits, which may be crucial for storing semantic concepts. The lack of a significant correlation between motor skills in ASD and deficits for action (and indeed emotion) words did not support the notion of a direct functional-behavioral link between motor performance and semantic processing of these words, but the study leaves open several possible interpretations. Further investigation is thus needed to corroborate the hypothesized functional relationship between motor deficits and impairments in processing words which engage motor regions.
Data Availability {#s6}
=================
The datasets generated for this study are available on request to the corresponding author.
Ethics Statement {#s7}
================
This study was carried out in accordance with the recommendations of the Charité Ethics Committee with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Charité Ethics Committee.
Author Contributions {#s8}
====================
JH contributed to the study design, recruitment and testing of participants, data analysis and writing of the manuscript. BM contributed to the study design, recruitment of participants, data analysis and writing of the manuscript. RM contributed to the study design and writing of the manuscript. SR contributed to the recruitment and testing of participants and writing of the manuscript.
Conflict of Interest Statement {#s9}
==============================
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
We thank our participants for taking part in this study. We are grateful to Verena Büscher, Friedemann Pulvermüller, Felix Dreyer, Alessandra Mancini, David Hillus, and Svenja Köhne for their help at various stages of this study.
**Funding.** The study was supported by Charité Universitätsmedizin Berlin. We acknowledge support from the German Research Foundation (DFG) and the Open Access Publication Fund of Charité---Universitätsmedizin Berlin.
^1^Indeed, with reference to heterogeneity in task performance, it is important to note that although our autistic sample showed motor deficits in the majority of conditions in the Purdue Pegboard Test, other findings range from an absence of any impairment (Lai et al., [@B45]) to impairments across the board (Barbeau et al., [@B4]) to inconsistent profiles contradictory to our sample (for instance, poorer performance in the assembly and right-handed conditions, but not in the left-handed and simultaneous bimanual conditions; Thompson et al., [@B86]). Again, it should be noted that motor skills are likewise affected by participant characteristics such as autistic symptom severity, IQ, language development and age, and the influence of sex is so far unknown (Moseley and Pulvermüller, [@B61]).
Supplementary Material {#s10}
======================
The Supplementary Material for this article can be found online at: <https://www.frontiersin.org/articles/10.3389/fnhum.2019.00256/full#supplementary-material>
[^1]: Edited by: Xiaolin Zhou, Peking University, China
[^2]: Reviewed by: Rutvik H. Desai, University of South Carolina, United States; Lihui Wang, Otto von Guericke University Magdeburg, Germany
/*-----------------------------------------*\
| RGBController_Dummy.cpp |
| |
| Generic RGB Interface Dummy Class |
| |
| Adam Honse (CalcProgrammer1) 2/25/2020 |
\*-----------------------------------------*/
#include "RGBController_Dummy.h"

/*-----------------------------------------*\
| The dummy controller represents a device  |
| with no physical hardware, so each method |
| below is intentionally a no-op.           |
\*-----------------------------------------*/
RGBController_Dummy::RGBController_Dummy()
{
}
void RGBController_Dummy::SetupZones()
{
}
void RGBController_Dummy::ResizeZone(int /*zone*/, int /*new_size*/)
{
}
void RGBController_Dummy::DeviceUpdateLEDs()
{
}
void RGBController_Dummy::UpdateZoneLEDs(int /*zone*/)
{
}
void RGBController_Dummy::UpdateSingleLED(int /*led*/)
{
}
void RGBController_Dummy::SetCustomMode()
{
}
void RGBController_Dummy::DeviceUpdateMode()
{
}
# Coding styles
General guidelines on how to code for this project.
## Libraries
These linters run automatically in CI; if any error is found, the build fails.
* `tslint` for TypeScript styling. With VSCode, install the TSLint extension to get live warnings.
* `stylelint` for SCSS. With VSCode, install the stylelint extension to get live warnings.
## Other
### Imports
Use two groups for imports (TSLint will then make sure they are sorted alphabetically within each group):
1. npm imports
2. Local imports
e.g.
```typescript
import { Component } from "@angular/core";
import { FormBuilder, FormGroup, Validators } from "@angular/forms";
import { autobind } from "@batch-flask/core";
import { Observable } from "rxjs";
import { NotificationService } from "@batch-flask/ui/notifications";
import { SidebarRef } from "@batch-flask/ui/sidebar";
// Code goes here
```
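A TSLint configuration along these lines would enforce the grouping; the exact option values here are assumptions for illustration, not copied from this project's `tslint.json`:

```json
{
  "rules": {
    "ordered-imports": [
      true,
      {
        "grouped-imports": true,
        "import-sources-order": "case-insensitive",
        "named-imports-order": "case-insensitive"
      }
    ]
  }
}
```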
### Relative import
Try to use absolute imports whenever possible (from the root of the project, starting with `"app/..."`).
Only use relative imports for nearby files:
```typescript
// Good
import {Some} from "app/components/folder/file"
import {Some} from "./file"
import {Some} from "./folder/file"
// Ok
import {Some} from "../folder/file"
// Bad
import {Some} from "../../folder/file"
import {Some} from "../../../folder/file"
```
### Component template
Use `templateUrl: "abc.html"` instead of `template: require("abc.html")`.
Use an inline `template: "<inline-template></inline-template>"` only for simple templates.
The present invention generally relates to the field of polishing. In particular, the present invention is directed to a polishing pad having grooves configured to enhance or promote mixing wakes during polishing.
In the fabrication of integrated circuits and other electronic devices, multiple layers of conducting, semiconducting and dielectric materials are deposited onto and etched from a surface of a semiconductor wafer. Thin layers of these materials may be deposited using any of a number of deposition techniques. Deposition techniques common in modern wafer processing include physical vapor deposition (PVD), also known as sputtering, chemical vapor deposition (CVD), plasma-enhanced chemical vapor deposition (PECVD) and electrochemical plating. Common etching techniques include wet and dry isotropic and anisotropic etching, among others.
As layers of materials are sequentially deposited and etched, the uppermost surface of the wafer becomes non-planar. Because subsequent semiconductor processing (e.g., photolithography) requires the wafer to have a flat surface, the wafer needs to be planarized. Planarization is useful for removing undesired surface topography as well as surface defects, such as rough surfaces, agglomerated materials, crystal lattice damage, scratches and contaminated layers or materials.
Chemical mechanical planarization, or chemical mechanical polishing (CMP), is a common technique used to planarize workpieces, such as semiconductor wafers. In conventional CMP using a dual-axis rotary polisher, a wafer carrier, or polishing head, is mounted on a carrier assembly. The polishing head holds the wafer and positions it in contact with a polishing layer of a polishing pad within the polisher. The polishing pad has a diameter greater than twice the diameter of the wafer being planarized. During polishing, each of the polishing pad and wafer is rotated about its respective center while the wafer is engaged with the polishing layer. The rotational axis of the wafer is offset relative to the rotational axis of the polishing pad by a distance greater than the radius of the wafer such that the rotation of the pad sweeps out a ring-shaped “wafer track” on the polishing layer of the pad. When the only movement of the wafer is rotational, the width of the wafer track is equal to the diameter of the wafer. However, in some dual-axis polishers, the wafer is oscillated in a plane perpendicular to its axis of rotation. In this case, the width of the wafer track is wider than the diameter of the wafer by an amount that accounts for the displacement due to the oscillation. The carrier assembly provides a controllable pressure between the wafer and polishing pad. During polishing, a slurry, or other polishing medium, is flowed onto the polishing pad and into the gap between the wafer and polishing layer. The wafer surface is polished and made planar by chemical and mechanical action of the polishing layer and slurry on the surface.
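The wafer-track geometry described above reduces to simple arithmetic: without oscillation the track width equals the wafer diameter, and oscillation widens it by the total displacement. A small sketch (the dimensions are illustrative, not taken from any particular tool):

```python
def wafer_track(pad_offset_mm, wafer_diameter_mm, oscillation_mm=0.0):
    """Return (inner_radius, outer_radius, width) in mm of the ring-shaped
    wafer track swept on the polishing pad.

    pad_offset_mm: distance between the pad and wafer rotational axes,
    which must exceed the wafer radius plus half the oscillation."""
    r = wafer_diameter_mm / 2
    half_osc = oscillation_mm / 2
    inner = pad_offset_mm - r - half_osc
    outer = pad_offset_mm + r + half_osc
    return inner, outer, outer - inner

# 300 mm wafer, axes 200 mm apart, 10 mm peak-to-peak oscillation
print(wafer_track(200, 300, 10))
```

With zero oscillation the width is exactly the wafer diameter, matching the description above; each millimeter of peak-to-peak oscillation adds one millimeter of track width.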
The interaction among polishing layers, polishing media and wafer surfaces during CMP is being increasingly studied in an effort to optimize polishing pad designs. Most of the polishing pad developments over the years have been empirical in nature. Much of the design of polishing surfaces, or layers, of polishing pads has focused on providing these layers with various patterns of voids and/or networks of grooves that are claimed to enhance slurry utilization and polishing uniformity. Over the years, quite a few different groove and void patterns and configurations have been implemented. Prior art groove patterns include radial, concentric circular, Cartesian grid and spiral, among others. Prior art groove configurations include configurations wherein the width and depth of all the grooves are uniform among all grooves and configurations wherein the width or depth of the grooves varies from one groove to another.
Some designers of rotational CMP pads have designed pads having groove configurations that include two or more groove configurations that change from one configuration to another based on one or more radial distances from the center of the pad. These pads are touted as providing superior performance in terms of polishing uniformity and slurry utilization, among other things. For example, in U.S. Pat. No. 6,520,847 to Osterheld et al., Osterheld et al. disclose several pads having three concentric ring-shaped regions, each containing a configuration of grooves that is different from the configurations of the other two regions. The configurations vary in different ways in different embodiments. Ways in which the configurations vary include variations in number, cross-sectional area, spacing and type of grooves.
Although pad designers have heretofore designed CMP pads that include two or more groove configurations that are different from one another in different zones of the polishing layer, these designs do not directly consider the effect of the groove configuration on mixing wakes that occur in the grooves. FIG. 1 shows a plot 10 of the ratio of new slurry to old slurry during polishing at an instant in time within the gap (represented by circular region 14) between a wafer (not shown) and a conventional rotary polishing pad 18 having circular grooves 22. For the purposes of this specification, “new slurry” may be considered slurry that is moving in the rotational direction of polishing pad 18, and “old slurry” may be considered slurry that has already participated in polishing and is being held within the gap by the rotation of the wafer.
In plot 10, new slurry region 26 essentially contains only new slurry and old slurry region 30 essentially contains only old slurry at an instant in time when polishing pad 18 is rotated in direction 34 and the wafer is rotated in direction 38. A mixing region 42 is formed in which new slurry and old slurry become mixed with one another so as to cause a concentration gradient (represented by region 42) between new slurry region 26 and old slurry region 30. Computational fluid dynamics simulations show that due to the rotation of the wafer, slurry immediately adjacent to the wafer may be driven in a direction other than the rotational direction 34 of the pad, whereas slurry somewhat removed from the wafer is held among “asperities” or roughness elements on the surface of polishing pad 18 and more strongly resists being driven in a direction other than direction 34. The effect of wafer rotation is most pronounced at circular grooves 22 at locations where the grooves are parallel, or nearly so, to rotational direction 38 of the wafer because the slurry in the grooves is not held among any asperities and is easily driven by wafer rotation along the length of circular grooves 22. The effect of wafer rotation is less pronounced in circular grooves 22 at locations where the grooves are transverse to rotational direction 38 of the wafer because the slurry can be driven only along the width of the groove within which it is otherwise confined.
Mixing wakes similar to mixing wakes 46 shown occur in groove patterns other than circular patterns, such as the groove patterns mentioned above. Like circular-grooved pad 18 of FIG. 1, in each of these alternative groove patterns, the mixing wakes are most pronounced in regions where the rotational direction of the wafer is most aligned with the grooves, or groove segments, as the case may be, of the pad. Mixing wakes are undesirable in many CMP applications because renewal of active chemical species and removal of heat are slower in the wake region than in the ungrooved areas of the pad immediately adjacent each groove. However, in other applications, mixing wakes can be beneficial precisely because they provide more gradual transitions from spent to fresh chemistry and from warmer to cooler zones of reaction. Without mixing wakes, these transitions can be unfavorably sharp and bring about significant variations in polish conditions point to point under the wafer. Consequently, there is a need for CMP polishing pad designs that are optimized, at least in part, based on the consideration of the occurrence of mixing wakes and the effects that such wakes have on polishing. |
Q:
AWK Beginner Bar Graph/Histogram
I've been assigned this prompt as legitimately my first real awk program and I'm not even sure where to start. Any help to get started would be greatly appreciated.
Write an awk program called hist.awk that reads a file of numbers and prints a histogram of occurrences. For the input shown below:
1
4
5
0
2
4
6
8
1
3
2
4
6
7
2
3
3
4
4
The output will be:
0: 1 ***
1: 2 ******
2: 3 ********
3: 3 ********
4: 5 **************
5: 1 ***
6: 2 ******
7: 1 ***
8: 1 ***
The first column contains the numbers from the file. The second contains the number of times that number occurred. The graph shows the percentage of the total, scaled to 50, so 50 asterisks indicates 100%, 25 asterisks indicates 50%, and so on.
A:
Could you please try the following awk and let me know if it helps. The number of asterisks is the occurrence count as a percentage of the total, scaled so that 50 asterisks represent 100% and rounded up to the next whole asterisk, which reproduces your expected output.
awk 'function bar(c,  w,n,s){w=c/NR*50; n=int(w); if(n<w)n++; s=""; while(n-- > 0){s=s "*"}; return s} {a[$0]++} END{for(i in a){print i": "a[i]" "bar(a[i])}}' Input_file | sort -n
Adding a non one-liner form of the above solution too now:
awk '
function bar(c,  w, n, s) {
    w = c / NR * 50        # 50 asterisks == 100% of all input lines
    n = int(w)
    if (n < w) n++         # round up to the next whole asterisk
    s = ""
    while (n-- > 0) s = s "*"
    return s
}
{
    a[$0]++                # count occurrences of each number
}
END{
    for (i in a)
        print i ": " a[i] " " bar(a[i])
}
' Input_file | sort -n
Output will be as follows:
0: 1 ***
1: 2 ******
2: 3 ********
3: 3 ********
4: 5 **************
5: 1 ***
6: 2 ******
7: 1 ***
8: 1 ***
|
Arthur C. Clarke's Third Law of Prediction: Any sufficiently advanced technology is indistinguishable from magic.
Mike's literary corollary to Clarke's Third Law of Prediction: Any sufficiently advanced technology that is used without precedent in the story is indistinguishable from bad writing.
This book was first and foremost a disappointment. I loved the premise of the book: a secretive rogue government agency harvests advanced technology before it becomes widespread and disruptive to society and humanity. What I got was a story populated exclusively by tropes, little to no innovative storytelling, and boring, bland characters. Some spoilers follow.
I fully recognize that tropes cannot be avoided. They are merely recognized conventions writers use. Using them is not the mark of a bad writer; every writer uses them. Even the greatest characters of literature can be boiled down to a trope. What differentiates good writers from bad ones is the ability to breathe life into the tropes, giving them unique twists or interpretations.
Influx is populated by characters that barely rise to the level of trope. There is the rebel scientist who refuses to cooperate with the rogue government organization, the wise mentor he learns from in captivity, the power-hungry antagonist, the beautiful enemy agent who is converted to the hero's cause. I just couldn't care about any of them, and it seemed like the author did not either.
There were plenty of opportunities to explore the characters' deeper motivations or perspectives on events, but they went unused. For instance, near the end of the book, our hero is infiltrating the bad guys' secret base and runs across his old mentor, who had apparently decided to cooperate with the bad guys. Instead of providing a passage of text to describe the mentor's reaction to seeing his old pupil and the revelation the pupil bestows upon him, he gets a few pages of story before being ignobly offed.
Personally, I was actually interested to see how the mentor would have reacted to having lived under a significant lie for many years and to the consequences of his actions. Instead he is treated as a disposable character. By the end of the book I was just skimming pages during the climactic showdown between the various heroes and villains; I just didn't care.
I also thought the author relies much too heavily on "advanced technology" out of nowhere. It literally verged on magic. Obviously I am aware of Clarke's Third Law (see above). But there is a difference between introducing advanced technology in a natural way and having it appear out of left field. For instance, early in the book we are shown that the shadowy government agency has access to fusion technology; when it appears later in the book, it is not a surprise. When the bad guys literally summon a golem to chase the hero, I am pretty sure I heard my suspension of disbelief shatter. If that sort of technology had been introduced as a possibility earlier I would have been less critical, instead of viewing it as "I, the writer, need some way to get around this obstacle to where I want the story to go." It is as though Chekhov's gun went off in a play set in ancient Rome: I was surprised both that it went off and that it existed in the first place.
Because I was so indifferent to the story and the characters, I noticed a fair number of little things that were just wrong. The military force that ends up attacking the bad guys' headquarters is described as the 82nd Airborne Division, but it is shown with lots of heavy main battle tanks. It doesn't take a genius (or someone with access to Google) to know that airborne forces don't have heavy tank forces. The author also demonstrated a distinct lack of knowledge of just how much power a megawatt is, stating several times that a small number of them (60 MW in one case) was enough to power a small city. As someone who works in the energy field, I can assure you that a small city has a much higher power requirement than 60 MW. Perhaps if the story were better and the characters more engaging I could overlook these mistakes, but they weren't, so I didn't.
On a side note, the whole gravity manipulation to fly and do other neat things was done first, and much, MUCH better, by Brandon Sanderson in the Stormlight Archive. He had the benefit of also having complex and interesting characters.
On a second side note, this book merely showed me that keeping these sorts of technologies away from the population as a whole, as the rogue government organization did, was a good thing. The amount of destruction they were able to unleash with them (a summoned golem, giant gravity weapons, guns that use antimatter) strongly convinced me that they were doing the world a service. That level of destruction in the hands of unstable or nefarious organizations would be unfathomable. While I couldn't root for the bad guys at the end (because they had gone full Skeletor evil), I did see the value of their organization's stated goals.
On a final side note, there were several splinter factions from the main evil organization. What happens once the main evil organization is wiped out? I don't know, and the author certainly doesn't give them a second thought (or really explore them much at all, much to my further chagrin).
At the end of the day this struck me as a very poor execution of a neat and fascinating idea. Poorly developed characters abounded and mingled easily with lazy tropes masquerading as characters. The story wasn't gripping, since there always seemed to be an easy "advanced technology" solution to any problem the hero or villain faced. By the end I was just angry that the book had wasted the potential of its premise with such terrible writing. |
Family and Medical Leave Act only the first step
By Hilary O. Shelton and Debra L. Ness
In February, we celebrated the 20th anniversary of the Family and Medical Leave Act (FMLA), which was the first bill President Clinton signed into law. President Obama hailed the law, as did current and former lawmakers from both sides of the political aisle. Indeed, it was a singular accomplishment for the nation - the first national law ever to help workers balance the dual demands of job and family.
That law is making a huge difference for the country. Most directly, the FMLA allows about 60 percent of workers to take up to 12 weeks of unpaid leave to care for a newborn, newly adopted or foster child, to recover from serious illness, or to help a close family member facing a serious health problem. When workers take leave under the FMLA, their health insurance continues and a job is waiting for them when they return. In the 20 years since the FMLA became law, workers have used the law to take leave more than 100 million times.
The FMLA had indirect benefits, too, changing the culture by embedding in law that workers have family as well as job responsibilities. It helped create a climate in which work/family responsibilities became part of a national conversation. This has meant support for families from all communities, parents providing childcare as well as children providing eldercare. It’s made our workplaces more humane and family friendly.
In these times when there is so much rancor and so little consensus, it’s important to keep in mind that passage of the FMLA did not come quickly or easily. It was a nine-year battle to get both houses of Congress to pass it at a time when we had a president who would sign it into law. It took an extraordinary coalition that included women’s, civil rights, children’s, health, labor, aging and other groups. The National Partnership led that coalition and the NAACP contributed mightily to its success. We proved that progress is possible, even in contentious times.
But for all we accomplished, it’s important to remember that the FMLA was always intended to be the first step on the road to a family-friendly nation. And 20 years later, the country has not taken the next step. That’s a real disappointment and a painful one, because workers in our communities are being cheated out of the policies they urgently need.
The good news is that a broad coalition continues to work for family friendly policies, because we recognize that the FMLA’s unpaid leave is not sufficient to meet the needs of workers and families. According to the Department of Labor’s 2012 survey, most often workers who forgo leave do so because they can’t afford to take leave without pay.
The next step needs to be improving the law so it covers more workers who need to take leave for more reasons, and adopting a national paid leave insurance system that provides some wage replacement, so low-wage and part-time workers, too, can take family and medical leave when they need it most.
The country is ready. A bipartisan poll taken in November showed that, across all demographic lines, workers are struggling to balance their work and family responsibilities, and they want Congress and the president to consider new laws like paid family and medical leave insurance. African Americans, Latinos, women and young people - the very voters that decided the last election - felt strongest about the importance of congressional and presidential action: 77 percent of African Americans, 79 percent of Latinos, 69 percent of women and 68 percent of people under 30 considered it "very important."
They are right. It’s time to take the next step. Forty percent of the workforce still isn’t covered by the Family and Medical Leave Act, and tens of millions of workers still can’t afford to take the unpaid leave the law provides. When babies are born, illness strikes, or relatives need care, they either show up at work or risk losing their jobs.
It’s time to rededicate ourselves to this issue, make some noise, and demand that lawmakers take the next step. Making the nation more family friendly is the unfinished business of our time.
Hilary O. Shelton is Washington bureau director and senior vice president of policy and advocacy for the NAACP; Debra L. Ness is president of the National Partnership for Women & Families.
Copyright 2006-2014 The Hudson Valley Press. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed. |
That’s because the wife of Yankee pitcher CC Sabathia has a simple philosophy when it comes to motherhood and her work.
“We moms need to remember to worry less,” she told PEOPLE at her CCandy show during the Petite Parade Kids Fashion Week on Mar. 9. “Schedules, planning, busy days — whatever it is, it works out. You have to breathe.”
Another way to cope is to have friends who share your interests. Sabathia’s good pals, and fellow athletes’ wives, Traci Lynn Johnson and Alexis Stoudemire, sat front row beside their CCandy-clad kiddies at her fashion presentation.
She’s the wife of Yankee star pitcher CC Sabathia, a busy mom of four kids (all under the age of ten!) with not a minute to waste, so when Amber Sabathia couldn’t find children’s clothing in the bright colors her kids craved she decided to create her own.
“I was always picking out blue for the boys and pink for the girls and they had nothing for kids in fun fashion colors,” says the new designer, who hooked up with Outerstuff and MLB to launch a children’s line called CCandy this past August.
The collection includes a rainbow of outfits (tees, hoodies, rompers and onesies) in sizes infant up to 16/18. She even included items for extra large boys — she is the wife of the 6’7″ CC, after all. Even cooler? Each item includes MLB team logos in neons, which was a family design call. |
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<document type="com.apple.InterfaceBuilder3.CocoaTouch.XIB" version="3.0" toolsVersion="11201" systemVersion="15F34" targetRuntime="iOS.CocoaTouch" propertyAccessControl="none" useAutolayout="YES" useTraitCollections="YES" colorMatched="YES">
<dependencies>
<deployment identifier="iOS"/>
<plugIn identifier="com.apple.InterfaceBuilder.IBCocoaTouchPlugin" version="11161"/>
<capability name="Aspect ratio constraints" minToolsVersion="5.1"/>
<capability name="documents saved in the Xcode 8 format" minToolsVersion="8.0"/>
</dependencies>
<objects>
<placeholder placeholderIdentifier="IBFilesOwner" id="-1" userLabel="File's Owner"/>
<placeholder placeholderIdentifier="IBFirstResponder" id="-2" customClass="UIResponder"/>
<tableViewCell contentMode="scaleToFill" selectionStyle="default" indentationWidth="10" reuseIdentifier="shopCartCell" rowHeight="90" id="KGk-i7-Jjw" customClass="shopCartCell">
<rect key="frame" x="0.0" y="0.0" width="320" height="90"/>
<autoresizingMask key="autoresizingMask" flexibleMaxX="YES" flexibleMaxY="YES"/>
<tableViewCellContentView key="contentView" opaque="NO" clipsSubviews="YES" multipleTouchEnabled="YES" contentMode="center" tableViewCell="KGk-i7-Jjw" id="H2p-sc-9uM">
<frame key="frameInset" width="320" height="89"/>
<autoresizingMask key="autoresizingMask"/>
<subviews>
<view contentMode="scaleToFill" translatesAutoresizingMaskIntoConstraints="NO" id="f4N-ZR-a77">
<subviews>
<imageView userInteractionEnabled="NO" contentMode="scaleToFill" horizontalHuggingPriority="251" verticalHuggingPriority="251" image="test01.jpg" translatesAutoresizingMaskIntoConstraints="NO" id="RbF-3l-y1G">
<constraints>
<constraint firstAttribute="width" secondItem="RbF-3l-y1G" secondAttribute="height" multiplier="1:1" id="jof-Ne-VCz"/>
</constraints>
</imageView>
<label opaque="NO" userInteractionEnabled="NO" contentMode="left" horizontalHuggingPriority="251" verticalHuggingPriority="251" text="老师一号" lineBreakMode="tailTruncation" baselineAdjustment="alignBaselines" adjustsFontSizeToFit="NO" translatesAutoresizingMaskIntoConstraints="NO" id="V1g-tv-3gd">
<fontDescription key="fontDescription" type="system" pointSize="17"/>
<color key="textColor" cocoaTouchSystemColor="darkTextColor"/>
<nil key="highlightedColor"/>
</label>
<button opaque="NO" contentMode="scaleToFill" contentHorizontalAlignment="center" contentVerticalAlignment="center" lineBreakMode="middleTruncation" translatesAutoresizingMaskIntoConstraints="NO" id="AUk-fk-yqe">
<constraints>
<constraint firstAttribute="width" constant="50" id="GgF-2R-KJD"/>
<constraint firstAttribute="height" constant="35" id="OgK-N1-FZl"/>
</constraints>
<fontDescription key="fontDescription" type="system" pointSize="15"/>
<state key="normal" title="购买">
<color key="titleColor" red="1" green="0.0" blue="0.0" alpha="1" colorSpace="custom" customColorSpace="sRGB"/>
<color key="titleShadowColor" red="0.5" green="0.5" blue="0.5" alpha="1" colorSpace="custom" customColorSpace="sRGB"/>
</state>
<connections>
<action selector="onClickCartBtn:" destination="KGk-i7-Jjw" eventType="touchUpInside" id="JXm-Vh-6m7"/>
</connections>
</button>
</subviews>
<color key="backgroundColor" red="1" green="1" blue="1" alpha="1" colorSpace="custom" customColorSpace="sRGB"/>
<constraints>
<constraint firstItem="AUk-fk-yqe" firstAttribute="centerY" secondItem="RbF-3l-y1G" secondAttribute="centerY" id="4Y8-9W-emQ"/>
<constraint firstItem="RbF-3l-y1G" firstAttribute="top" secondItem="f4N-ZR-a77" secondAttribute="top" constant="10" id="54E-ph-9wm"/>
<constraint firstAttribute="bottom" secondItem="RbF-3l-y1G" secondAttribute="bottom" constant="10" id="IKr-Wl-kVT"/>
<constraint firstItem="V1g-tv-3gd" firstAttribute="top" secondItem="f4N-ZR-a77" secondAttribute="top" id="QUx-tS-PUu"/>
<constraint firstAttribute="bottom" secondItem="V1g-tv-3gd" secondAttribute="bottom" id="Qce-Op-T6s"/>
<constraint firstItem="RbF-3l-y1G" firstAttribute="leading" secondItem="f4N-ZR-a77" secondAttribute="leading" constant="10" id="Sli-AM-Xqb"/>
<constraint firstAttribute="trailing" secondItem="AUk-fk-yqe" secondAttribute="trailing" constant="15" id="XHO-gv-Fh5"/>
<constraint firstItem="V1g-tv-3gd" firstAttribute="leading" secondItem="RbF-3l-y1G" secondAttribute="trailing" constant="10" id="Xah-AD-LLH"/>
</constraints>
</view>
</subviews>
<constraints>
<constraint firstItem="f4N-ZR-a77" firstAttribute="leading" secondItem="H2p-sc-9uM" secondAttribute="leading" id="JLn-mq-45C"/>
<constraint firstAttribute="bottom" secondItem="f4N-ZR-a77" secondAttribute="bottom" id="KVM-Xo-aZ9"/>
<constraint firstItem="f4N-ZR-a77" firstAttribute="top" secondItem="H2p-sc-9uM" secondAttribute="top" id="QkP-ft-448"/>
<constraint firstAttribute="trailing" secondItem="f4N-ZR-a77" secondAttribute="trailing" id="We8-WK-iUK"/>
</constraints>
</tableViewCellContentView>
<connections>
<outlet property="deatailLabel" destination="V1g-tv-3gd" id="urU-ba-0RG"/>
<outlet property="headImageView" destination="RbF-3l-y1G" id="j9C-St-WI6"/>
</connections>
<point key="canvasLocation" x="249" y="246"/>
</tableViewCell>
</objects>
<resources>
<image name="test01.jpg" width="491" height="686"/>
</resources>
</document>
|
Once again, Colorado is showing the good things that happen when people gain freedom and a new industry is born. For the second year in a row, $40 million from taxes on legal pot sales will be going into a program to repair and replace rundown schools.
According to the Denver Post, this figure funds a significant portion of the $300 million in funding for the Building Excellent Schools Today (BEST) program, along with funding from other sources including the lottery and the Colorado Land Board. Although billions more are needed to completely fix the problem of crumbling schools, officials welcome anything they can get to provide the kids a good school.
“I don’t care where the money comes from, if we get a new school, I’m for it,” said Hayley Whitehead, a Deer Trail graduate who works as the district’s administrative assistant. “I see the invoices and see what we need for repairs, so I have a pretty good idea of the situation here.”
“There are lots of so-called ‘sin taxes’ for uses and products that people don’t necessarily endorse,” added Jay Hoskinson, regional program manager for capital construction for the Colorado Department of Education. “But I think people also start looking at it as a possible new revenue source. And it kind of gets intermingled with other funding and becomes pretty much all part of the same package.”
“And so far, we’ve not heard from any school districts who say, ‘No, we are not going to use that money,’” Hoskinson said.
The Denver Post cites Deer Trail as a poignant example. The old school—where wheelchair-bound students must be hoisted upstairs—will be demolished and replaced by a state-of-the-art school serving students from preK to 12th grade. The new school will cost $34 million.
There is also a housing rush in Deer Trail, which is another hint that legal recreational pot sales are part of an economic upswing in Colorado. We already know that the cannabis industry is contributing more to the economy than any other industry in the state.
The bottom line is, if people want to ingest substances considered “illicit” by government, they are going to do it. Instead of the futility of prohibition—which only brings cruelty and suffering—states like Colorado have embraced the rationality of decriminalization. And they’re making a lot of money from it.
Meanwhile, none of the horrible things that drug war fanatics predicted are coming true. Teenagers are not using cannabis more than they did when it was illegal, car crashes are not increasing from ‘stoned driving,’ and society has refrained from going berserk.
Millions of dollars that went into the black market are now going into schools and other beneficial endeavors. People can now buy cannabis from reputable vendors who tell consumers exactly what is in the products. Traffic stops resulting in searches have drastically declined, meaning citizens are less likely to be extorted, injured or killed by cops.
Safety, human rights and economy are the most important reasons for cannabis decriminalization, even if states are doing it just for the tax money. Even the most die-hard prohibitionist must think twice when seeing new schools being built in part thanks to legal pot.
|
from coalib.bearlib.languages import Language
from coalib.bearlib.aspects import Root, Taste
@Root.subaspect
class Redundancy:
"""
This aspect describes redundancy in your source code.
"""
class Docs:
example = """
int foo(int iX)
{
int iY = iX*2;
return iX*2;
}
"""
example_language = 'C++'
importance_reason = """
Redundant code makes your code harder to read and understand.
"""
fix_suggestions = """
Redundant code can usually be removed without consequences.
"""
@Redundancy.subaspect
class Clone:
"""
Code clones are multiple pieces of source code in your
codebase that are very similar.
"""
class Docs:
example = """
extern int array_a[];
extern int array_b[];
int sum_a = 0;
for (int i = 0; i < 4; i++)
sum_a += array_a[i];
int average_a = sum_a / 4;
int sum_b = 0;
for (int i = 0; i < 4; i++)
sum_b += array_b[i];
int average_b = sum_b / 4;
"""
example_language = 'C++'
importance_reason = """
Code clones make editing more difficult due to unnecessary increases
in complexity and length.
"""
fix_suggestions = """
Usually code clones can be simplified to only one occurrence. In a
lot of cases, both just use different values or variables and can
be reduced to one function called with different parameters or
loops.
"""
min_clone_tokens = Taste[int](
'The number of tokens that have to be equal for it to'
' be detected as a code clone.',
(20, ), default=20)
ignore_using = Taste[bool](
'Ignore ``using`` directives in C#.',
(True, False), default=False,
languages=(Language.CSharp, ))
@Redundancy.subaspect
class UnusedImport:
"""
Unused imports are any kind of import/include that is not needed.
This aspect has the following taste:
>>> len(UnusedImport.tastes)
1
>>> UnusedImport.remove_non_standard_import
<...Taste[bool] object at 0x...>
>>> UnusedImport.remove_non_standard_import.default
True
"""
class Docs:
example = """
import sys
import os
print('coala is always written with lowercase c')
"""
example_language = 'python'
importance_reason = """
Redundant imports can degrade performance and make code
harder to understand when reading through it. They also create
unneeded dependencies within your modules. In some programming
languages, imports may have side effects, which can make unused
imports a common false positive. Even so, they should be avoided.
"""
fix_suggestions = """
Usually, unused imports can simply be removed.
"""
remove_non_standard_import = Taste[bool](
"Remove ALL unused import, include those not from language's "
'standard library.',
(True, False), default=True)
@Redundancy.subaspect
class UnreachableCode:
"""
Unreachable code, sometimes called dead code, is source code that
can never be executed during the program execution.
"""
class Docs:
example = """
def func():
return True
if func():
a = {}
else:
a = (i for i in range (5))
print (id(a))
"""
example_language = 'python'
importance_reason = """
Unreachable code makes the source code longer and more difficult
to maintain.
"""
fix_suggestions = """
Those pieces of code can easily be removed without consequences.
"""
@UnreachableCode.subaspect
class UnusedFunction:
"""
An unused function is a function that is never called during
code execution.
"""
class Docs:
example = """
def func():
pass
print('coala is always written with lowercase c')
"""
example_language = 'python'
importance_reason = """
Unused functions make the source code longer and more
difficult to maintain.
"""
fix_suggestions = """
It is recommended to remove those functions. If you would like
to access its source code later for other purposes, you can
rely on a version control system like Git, Mercurial or
Subversion.
"""
@UnreachableCode.subaspect
class UnreachableStatement:
"""
An unreachable statement is a statement that is never executed
during code execution.
"""
class Docs:
example = """
def func():
return True
if func():
a = {}
else:
a = (i for i in range (5))
print (id(a))
"""
example_language = 'python'
importance_reason = """
We should always keep our codebase clean; dead code needlessly
makes the code longer and more ambiguous.
"""
fix_suggestions = """
These statements can be removed without harming the code.
"""
@Redundancy.subaspect
class UnusedVariable:
"""
Unused variables are declared but never used.
"""
class Docs:
example = """
a = {}
print ('coala')
"""
example_language = 'python'
importance_reason = """
Unused variables can marginally degrade performance but, more
importantly, they make the source code harder to read and understand.
"""
fix_suggestions = """
Those variables can easily be removed without consequences.
"""
@UnusedVariable.subaspect
class UnusedParameter:
"""
Unused parameters are function arguments that are never used.
"""
class Docs:
example = """
def func(a):
pass
"""
example_language = 'python'
importance_reason = """
Unused parameters are useless to functions; they make them more
difficult to use and maintain.
"""
fix_suggestions = """
Those parameters can easily be removed without consequences.
"""
@UnusedVariable.subaspect
class UnusedLocalVariable:
"""
These are variables that are defined locally but never used.
"""
class Docs:
example = """
def func():
for i in range (5):
a = 0
print ( ' coala ' )
"""
example_language = 'python'
importance_reason = """
They make the code difficult to maintain.
"""
fix_suggestions = """
These can easily be removed without consequences.
"""
@UnusedVariable.subaspect
class UnusedGlobalVariable:
"""
These are variables that have global scope but are never used.
"""
class Docs:
example = """
a = 0
for i in range (5):
print ( ' coala ' )
"""
example_language = 'python'
importance_reason = """
They make the code difficult to maintain.
"""
fix_suggestions = """
These can easily be removed without consequences.
"""
|
---
abstract: 'Due to the ever-rising importance of the network paradigm across several areas of science, comparing and classifying graphs represent essential steps in the network analysis of complex systems. Both tasks have been recently tackled via quite different strategies, often tailored *ad-hoc* for the investigated problem. Here we deal with both operations by introducing the Hamming-Ipsen-Mikhailov (HIM) distance, a novel metric to quantitatively measure the difference between two graphs sharing the same vertices. The new measure combines the local Hamming distance and the global spectral Ipsen-Mikhailov distance so as to overcome the drawbacks affecting the two components separately. By building the HIM kernel function derived from the HIM distance, it is possible to move from network comparison to network classification via the Support Vector Machine (SVM) algorithm. Applications of the HIM distance and HIM kernel in computational biology and social network science demonstrate the effectiveness of the proposed functions as a general-purpose solution.'
bibliography:
- 'jurman12glocal.bib'
title: |
The HIM glocal metric and kernel for\
network comparison and classification
---
Introduction {#sec:intro}
============
The rising prevalence of the network paradigm [@barabasi12network] as the elective model for complex systems analysis in different fields has strongly stimulated the development of graph-theoretical techniques in the recent scientific literature. Methods based on graph properties have spread through the static and dynamic analysis of different economic, chemical and biological systems, computer networking, social networks and neuroscience. As a relevant example, it is worthwhile mentioning the rapid diffusion, in computational biology, of differential network analysis [@sharan06modeling; @ideker12differential; @yoon12comparative; @csermely13structure; @chuang07network; @yang13network; @pavlopoulos11using; @barla12machine; @barla13machine]. In particular, two key tasks constitute the backbone of most of the aforementioned analysis techniques, namely network comparison and network classification, and they both rely on the basic idea of measuring the similarity between two graphs.
Network comparison consists in the quantification of the difference between two homogeneous objects in some network space, while the aim of network classification is to predictively discriminate graphs belonging to different classes, for instance by means of machine learning algorithms. Network comparison has its roots in the quantitative description of the main properties of a graph (*e.g.*, degree distribution), which can be encoded into a feature vector [@xiao08structure], thus providing a convenient representation for classification tasks (see for instance [@dehmer13discrimination] for a very recent approach). As a major alternative strategy, one can adopt a direct comparison method stemming from the graph isomorphism problem, by defining a suitable similarity measure on the topology of the underlying (possibly directed and/or weighted) graphs. This line of study dates back to the 1970s with the theory of graph distances, regarding both inter- and intra-graph metrics [@entringer76distance]. Since then, a wide range of similarity measures has been defined, based on very different graph indicators. To mention some of the most important metrics, we list the family of edit distances, evaluating the minimum cost of transformation of one graph into another by means of the usual edit operations (insertion and deletion of links); the family of common network subgraphs, looking for shared structures between the graphs; and the family of spectral measures, relying on functions of the eigenvalues of one of the graph connectivity matrices. Similarly, graph classification can be tackled by a number of different techniques, for instance nearest neighbours on the Euclidean distance of the graphs' feature vectors [@zhu11classifying; @aliakbary13learning; @chen12discovery], or Support Vector Machines with the graph Laplacian as a regularization term [@chen11identifying], or via different subgraph-based learning algorithms [@thorat13survey]. 
However, in general the most efficient techniques use a kernel machine, where the kernel itself corresponds to a scalar product (and hence a distance) in a suitable Hilbert space [@mahe04extension; @gaertner06short; @gaertner07kernel; @borgwardt07graph; @ketkar09empirical; @vishwanathan10graph; @tsuda10graph; @vert05supervised; @vert03graph]. For more recent advances, we cite the Weisfeiler-Lehman graph kernel [@shervashidze11weisfeiler], and its use in neuroimaging classification for discriminating mild cognitive impairment from Alzheimer’s disease [@jie13integration]. This last citation stands as an example of the increasing interest for these techniques recently appearing in neurosciences [@richiardi13machine; @su13discriminative].
In the present work we propose a solution to both the comparison and the classification tasks by introducing the novel HIM metric for comparing graphs (even directed and weighted) and a graph kernel induced by the HIM measure. The HIM distance is defined as the one-parameter family of product metrics linearly combining – by a non-negative real factor $\xi$ – the normalized Hamming distance H [@dougherty10validation; @tun06metabolic; @iwayama12characterizing; @morris08specification] and the normalized Ipsen-Mikhailov distance IM [@ipsen02evolutionary]; the product metric is normalized by the factor $\sqrt{1+\xi}$ to set its upper bound to 1. In the absence of a gold standard driving the search for the optimal weight ratio, we chose an equal contribution from the two components ($\xi=1$) as the most natural option. The Hamming distance is the simplest member of the family of edit distances, evaluating the occurrence of matching links in the compared networks: by definition, it is a local measure of dissimilarity between graphs, because it only focuses on the links as independent entities, disregarding the overall structure. On the other hand, the spectral distances are global measures, evaluating the differences between the whole network structures; however, they cannot discriminate between isospectral non-identical graphs (for a recent spectral approach, see [@rajendran13analysis]). In the comparative review [@jurman11introduction], the properties of the existing graph spectral distances were studied, and the Ipsen-Mikhailov metric emerged as the most reliable and stable. The combination of the two components within a single metric allows overcoming their drawbacks and obtaining a measure which is simultaneously global and local. 
Moreover, the imposed normalization limits the values of the HIM distance between zero (reached only by comparing identical networks) and one (attained when comparing a clique and the empty graph), regardless of the number of vertices. Finally, the HIM distance can also be applied to multilayer networks [@kivela13multilayer; @dedomenico13mathematical], since a rigorous definition of their Laplacian has just been proposed [@sole-ribalta13spectral; @sanchezgarcia13dimensionality]. By a Gaussian-like map [@cortes03positive], the HIM distance generates the HIM kernel. Plugging the HIM kernel [@shawe-taylor04kernel] into a Support Vector Machine gives us a classification algorithm based on the HIM distance, to be used as is or together with other graph kernels in a Multi-Kernel Learning framework to increase the classification performance and to enhance the interpretability of the results [@kloft11lp]. Note that, although positive definiteness does not hold globally for the HIM kernel, this property can be guaranteed on the given training data, thus leading to positive definite matrices suitable for the convergence of the SVM optimizer.
To conclude, we present some applications of the HIM distance and the HIM kernel to real datasets belonging to different areas of science. These examples support the positive impact of the HIM suite as a general analysis tool whenever it is required to extract information from the quantitative evaluation of the difference among diverse instances of a complex system.
We also provide the R [@R2013] package *nettools*, which includes functions to compute the HIM distance. The package is provided as a working beta version and is accessible on GitHub at <https://github.com/filosi/nettools.git>. To reduce computing time, the software can be used on multicore workstations and on high performance computing (HPC) clusters.
The HIM family of distances {#sec:him}
===========================
Notations {#ssec:notations}
---------
Let $\mathcal{N}_1$ and $\mathcal{N}_2$ be two simple networks on $N$ nodes, described by the corresponding adjacency matrices $A^{(1)}$ and $A^{(2)}$, with $a^{(1)}_{ij}, a^{(2)}_{ij}\in\mathcal{F}$, where $\mathcal{F}=\mathbb{F}_2=\{0,1\}$ for unweighted graphs and $\mathcal{F}=[0,1]$ for weighted networks. Let then $\mathbb{I}_N$ be the $N\times N$ identity matrix $\mathbb{I}_N = \left( \begin{smallmatrix} 1&0&\cdots & 0 \\ 0&1&\cdots&0 \\ &\cdots \\ 0&0&\cdots &1 \end{smallmatrix} \right)$, let $\mathbb{1}_N$ be the $N\times N$ all-ones matrix with all entries equal to one and let $\mathbb{0}_N$ be the $N\times N$ null matrix with all entries equal to zero. Denote then by $\mathcal{E}_N$ the empty network with $N$ nodes and no links (with adjacency matrix $\mathbb{0}_N$) and by $\mathcal{F}_N$ the clique (undirected full network) with $N$ nodes and all possible $N(N-1)$ links, whose adjacency matrix is $\mathbb{1}_N-\mathbb{I}_N$. For an undirected network, its adjacency matrix is symmetric. For a directed network $\mathcal{N}^\uparrow$, following the convention in [@liu11controllability], a link ${i}\rightarrow{j}$ is represented by setting $a_{ji}=1$ in the corresponding adjacency matrix $A_{\mathcal{N}^\uparrow}$, which thus is, in general, not symmetric.
For instance, the matrix $A_{\mathcal{N}^\uparrow}=\mathbb{1}_N-\mathbb{I}_N$ represents the full directed network $\mathcal{F}^\uparrow_N$, with all possible $N^2-N$ directed links ${i}\rightarrow{j}$.
The Hamming distance {#ssec:hamming}
--------------------
The Hamming distance is one of the most common dissimilarity measures in coding and string theory, recently used also for (biological) network comparison [@dougherty10validation; @tun06metabolic; @morris08specification; @iwayama12characterizing]. Since the Hamming measure basically evaluates the presence/absence of matching links on the two networks being compared, it has a simple expression in terms of the networks’ adjacency matrices. This is not the case for many other members of the edit distance family, whose computation is known to be an NP-hard task. The definition of the normalized Hamming distance H is in fact the following: $$\label{eq:hamming}
\textrm{H}(\mathcal{N}_1,\mathcal{N}_2) =
\frac{\textrm{Hamming}(\mathcal{N}_1,\mathcal{N}_2)}{\textrm{Hamming}(\mathcal{E}_N,\mathcal{F}_N)} =
\frac{\textrm{Hamming}(\mathcal{N}_1,\mathcal{N}_2)}{N(N-1)} =
\frac{1}{N(N-1)}\sum_{1\leq i\not = j\leq N} \vert A^{(1)}_{ij} - A^{(2)}_{ij} \vert\ ,$$ where the normalization factor $N(N-1)$ bounds the range of the function H in the interval $[0,1]$. The lower bound $0$ is attained only for identical networks $A^{(1)}=A^{(2)}$, while the upper bound $1$ is reached whenever the two networks are complementary $A^{(1)}+A^{(2)}=\mathbb{1}_N-\mathbb{I}_N=\left( \begin{smallmatrix} 0&1&\cdots & 1 \\ 1&0&\cdots&1 \\ &\cdots \\ 1&1&\cdots &0 \end{smallmatrix} \right)$. When $\mathcal{N}_1$ and $\mathcal{N}_2$ are unweighted networks, $\textrm{H}(\mathcal{N}_1,\mathcal{N}_2)$ is just the fraction of mismatched links over the total number $N(N-1)$ of possible links between the two graphs.
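As a minimal illustrative sketch (the released implementation is the R package *nettools*; this pure-Python fragment only mirrors Eq. \[eq:hamming\]), the normalized Hamming distance can be computed directly from two adjacency matrices given as nested lists:

```python
def hamming_distance(A1, A2):
    """Normalized Hamming distance H between two networks on the same N
    nodes, given as N x N adjacency matrices (nested lists): the sum of the
    absolute entrywise differences over the off-diagonal entries, divided
    by the normalization factor N(N-1)."""
    N = len(A1)
    diff = sum(abs(A1[i][j] - A2[i][j])
               for i in range(N) for j in range(N) if i != j)
    return diff / (N * (N - 1))
```

For unweighted graphs this is exactly the fraction of mismatched links: identical graphs are at distance 0, while the empty graph $\mathcal{E}_N$ and the clique $\mathcal{F}_N$ are at distance 1.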
The Ipsen-Mikhailov distance {#ssec:ipsen}
----------------------------
Originally introduced in [@ipsen02evolutionary] as a tool for network reconstruction from its Laplacian spectrum, the definition of the Ipsen-Mikhailov $\textrm{IM}$ metric follows the dynamical interpretation of an $N$-node network as a system of $N$ molecules connected by identical elastic springs as in Fig. \[fig:springs\](a-b), where the pattern of connections is defined by the adjacency matrix $A$ of the corresponding network.
$$A = \begin{pmatrix}
0 & 1 & 0 & \frac{1}{2} & 0 \\
1 & 0 & 0 & 0 & \frac{1}{2} \\
0 & 0 & 0 & 1 & 0 \\
\frac{1}{2} & 0 & 1 & 0 & 1 \\
0 & \frac{1}{2} & 0 & 1 & 0
\end{pmatrix}$$

(a) (b) (c)
The dynamical system is described by the set of $N$ differential equations $$\label{eq:ipsen_model}
\ddot{x}_i+\sum_{j=1}^N A_{ij}(x_i-x_j)=0\quad\textrm{for\ }i=1,\cdots,N\ .$$ We recall that the Laplacian matrix $L$ of an undirected network is defined as the difference between the degree $D$ and the adjacency $A$ matrices $L=D-A$, where $D$ is the diagonal matrix with vertex degrees as entries. $L$ is positive semidefinite and singular [@chung97spectral; @spielman09spectral; @tonjes09perturbation; @atay06network], so its eigenvalues are $0 = \lambda_0 \leq \lambda_1\leq \cdots\leq \lambda_{N-1}$. The vibrational frequencies $\omega_i$ for the network model in Eq. \[eq:ipsen\_model\] are given by the square roots of the eigenvalues of the Laplacian matrix of the network: $\lambda_i = \omega^2_i$, with $\lambda_0=\omega_0=0$. In [@chung97spectral], the Laplacian spectrum is called the vibrational spectrum. Estimates (actual and asymptotic) of the eigenvalue distribution are available for complex networks [@rodgers05eigenvalue], while the relations between the spectral properties and the structure and the dynamics of a network are discussed in [@jost02evolving; @jost07dynamical; @almendral07dynamical]. The spectral density for a graph is defined as the sum of Lorentz distributions $$\rho(\omega,\gamma)=K\sum_{i=1}^{N-1} \frac{\gamma}{(\omega-\omega_i)^2+\gamma^2}\ ,$$ where $\gamma$ is the common width and $K$ is the normalization constant defined by the condition $\displaystyle{\int_0^\infty \rho(\omega,\gamma)\textrm{d}\omega =1}$, and thus $$K = \frac{1}{\gamma\displaystyle{\sum_{i=1}^{N-1} \int_0^\infty \frac{\textrm{d}\omega}{(\omega-\omega_i)^2+\gamma^2} }}\ .$$ The scale parameter $\gamma$ specifies the half-width at half-maximum, which is equal to half the interquartile range. An example of Lorentz distribution for two networks is shown in Fig. \[fig:lorentz\]. 
Then the spectral distance $\epsilon_\gamma$ between two graphs $\mathcal{N}_1$ and $\mathcal{N}_2$ on $N$ nodes with densities $\rho_{\mathcal{N}_1}(\omega,\gamma)$ and $\rho_{\mathcal{N}_2}(\omega,\gamma)$ can then be defined as $$\epsilon_\gamma(\mathcal{N}_1,\mathcal{N}_2) = \sqrt{\int_0^\infty \left[\rho_{\mathcal{N}_1}(\omega,\gamma)-\rho_{\mathcal{N}_2}(\omega,\gamma)\right]^2 \textrm{d}\omega}\ .$$ The highest value of $\epsilon_\gamma$ is reached, for each $N$, when evaluating the distance between $\mathcal{E}_N$ and $\mathcal{F}_N$. Denoting by $\overline{\gamma}$ the unique solution of $$\label{eq:gamma_implicit}
\epsilon_\gamma(\mathcal{E}_N, \mathcal{F}_N) = 1\ ,$$ the normalized Ipsen-Mikhailov distance between two undirected (possibly weighted) networks can be defined as $$\label{eq:epsilon}
\textrm{IM}(\mathcal{N}_1,\mathcal{N}_2)=\epsilon_{\overline\gamma}(\mathcal{N}_1,\mathcal{N}_2) = \sqrt{\int_0^\infty \left[\rho_{\mathcal{N}_1}(\omega,\overline{\gamma})-\rho_{\mathcal{N}_2}(\omega,\overline{\gamma})\right]^2 \textrm{d}\omega}\ ,$$ so that $\textrm{IM}$ is bounded between 0 and 1, with the upper bound attained only for $\{\mathcal{N}_1,\mathcal{N}_2\}=\{\mathcal{E}_N,\mathcal{F}_N\}$. A detailed proof of the uniqueness of the solution of Eq. \[eq:gamma\_implicit\] is given in Appendix \[sec:appendix\]. Isospectral networks (and thus also isomorphic networks) cannot be distinguished by this class of measures, so this is a distance between classes of isospectral graphs. Although the number of isospectral networks is negligible for a large number of nodes [@haemers04enumeration], their fraction is relevant for smaller networks. The case of directed networks is discussed in a later paragraph.
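As an illustrative numerical sketch (again, not the released *nettools* implementation), the IM distance of Eq. \[eq:epsilon\] can be evaluated from two precomputed Laplacian spectra and a given $\overline{\gamma}$. The normalization constant $K$ admits the closed form $\int_0^\infty \gamma\,\textrm{d}\omega/((\omega-\omega_i)^2+\gamma^2) = \pi/2 + \arctan(\omega_i/\gamma)$, while the outer integral is approximated by a trapezoidal rule on a truncated domain:

```python
import math

def lorentz_density(laplacian_spectrum, gamma):
    """Return rho(omega) for a graph with Laplacian eigenvalues
    0 = lambda_0 <= ... <= lambda_{N-1}; the sum runs over
    omega_i = sqrt(lambda_i) for i >= 1, as in the paper.
    K uses the closed form of the Lorentzian integral on [0, inf)."""
    omegas = [math.sqrt(l) for l in laplacian_spectrum[1:]]
    K = 1.0 / sum(math.pi / 2 + math.atan(w / gamma) for w in omegas)
    return lambda omega: K * sum(gamma / ((omega - w) ** 2 + gamma ** 2)
                                 for w in omegas)

def ipsen_mikhailov(spec1, spec2, gamma, upper=60.0, steps=6000):
    """IM distance: trapezoidal integration of (rho1 - rho2)^2 on [0, upper];
    the Lorentzian tails beyond `upper` are negligible for these widths."""
    rho1 = lorentz_density(spec1, gamma)
    rho2 = lorentz_density(spec2, gamma)
    h = upper / steps
    acc = 0.0
    for k in range(steps + 1):
        w = k * h
        weight = 0.5 if k in (0, steps) else 1.0
        acc += weight * (rho1(w) - rho2(w)) ** 2
    return math.sqrt(acc * h)
```

With the spectra of the minimal example of Sec. \[ssec:minimal\] and $\overline{\gamma}=0.4450034$, this sketch reproduces $\textrm{IM}(I_1,I_2)\approx 0.10$.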
![Representation of the HIM distance in the Ipsen-Mikhailov (IM axis) and Hamming (H axis) distance space between networks A versus B, E and F, where E is the empty network and F is the clique.[]{data-label="fig:himspace"}](./himspace.pdf){width="80.00000%"}
The HIM distance {#ssec:him}
----------------
Consider now two copies of the space $\pmb{N}(N)$ of all simple undirected networks on $N$ nodes, and endow the first copy with the Hamming metric H and the second copy with the Ipsen-Mikhailov distance IM. Then the two obtained pairs $(\pmb{N}(N),\textrm{H})$ and $(\pmb{N}(N),\textrm{IM})$ are metric spaces. Define now on their Cartesian product the one-parameter HIM function as the $L_2$ (Euclidean) product metric [@deza09encyclopedia] combining H and $\sqrt{\xi}\cdot$ IM, normalized by the factor $\frac{1}{\sqrt{1+\xi}}$, for $\xi\in [0,+\infty)$. Via the natural correspondence of the same network in the two spaces, the HIM function becomes a distance on $\pmb{N}(N)$: $$\label{eq:glocal}
\textrm{HIM}_{\xi}(\mathcal{N}_1,\mathcal{N}_2) = \frac{1}{\sqrt{1+\xi}} || (\textrm{H}(\mathcal{N}_1,\mathcal{N}_2) , \sqrt{\xi}\cdot\textrm{IM}(\mathcal{N}_1,\mathcal{N}_2)) ||_{2} = \frac{1}{\sqrt{1+\xi}} \sqrt{ \textrm{H}^2(\mathcal{N}_1,\mathcal{N}_2) + \xi\cdot \textrm{IM}^2(\mathcal{N}_1,\mathcal{N}_2) } \ ,$$ where in what follows we will omit the subscript $\xi$ when it is equal to one. Obviously, $\textrm{HIM}_0 = \textrm{H}$ and $\displaystyle{\lim_{\xi\to +\infty} \textrm{HIM}_\xi= \textrm{IM}}$ (see Fig. \[fig:springs\](c)); apart from values of $\xi$ close to the bounds $\{0, +\infty\}$ where the prevalence of one of the factors becomes dominant, the qualitative impact of $\xi$ is minimal in practice when using $\textrm{HIM}_\xi$ as a distance. In what follows, when no *a priori* hypothesis supports unbalancing the metric towards one of the two components, $\xi=1$ will be assumed. However, the impact of $\xi$ is definitely more relevant when $\textrm{HIM}_\xi$ is used to generate a kernel function to be used for classification purposes, as we will show in a later section. The metric $\textrm{HIM}_\xi(\mathcal{N}_1,\mathcal{N}_2)$ is bounded in the interval $[0,1]$, with lower bound attained for every couple of identical networks, and upper bound attained only on the pair $(\mathcal{E}_N, \mathcal{F}_N)$. Moreover, all distances $\textrm{HIM}_\xi$ will be nonzero for non-identical isomorphic/isospectral graphs.
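The combination step of Eq. \[eq:glocal\] itself is elementary; a sketch, assuming the H and IM components have already been computed:

```python
import math

def him(h, im, xi=1.0):
    """HIM_xi product metric (Eq. glocal): the L2 combination of H and
    sqrt(xi)*IM, normalized by sqrt(1 + xi) to keep the range in [0, 1]."""
    return math.sqrt(h ** 2 + xi * im ** 2) / math.sqrt(1.0 + xi)
```

By construction `him(h, im, xi=0)` recovers H, and for $\xi\to+\infty$ the value tends to IM.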
Consider now the $[0,1]\times[0,1]$ Hamming/Ipsen-Mikhailov (H/IM) space, where a point $P$ has coordinates $(\textrm{H}(\mathcal{N}_1,\mathcal{N}_2),\textrm{IM}(\mathcal{N}_1,\mathcal{N}_2))$, and the distance of $P$ from the origin is $\sqrt{2}\cdot\textrm{HIM}(\mathcal{N}_1,\mathcal{N}_2)$. If we (roughly) split the Hamming/Ipsen-Mikhailov space into four main zones I,II,III,IV as in Fig. \[fig:himspace\], two networks whose distances correspond to a point in zone I are quite close both in terms of matching links and of structure, while those falling in the zone III are very different with respect to both measures. Networks corresponding to a point in zone II have many common links, but their structure is rather different (for instance, they have a different number of connected components), while a point in zone IV indicates two networks with few common links, but with similar structure (*e.g.*, isospectral non-identical graphs). In Fig. \[fig:himspace\] we show some examples of points in the Hamming/Ipsen-Mikhailov space.
The directed network case {#ssec:directed}
-------------------------
In this situation, the connectivity matrices are not symmetric, thus the Laplacian spectrum lies in $\mathbb{C}$. Hence, computing the Ipsen-Mikhailov distance would require extending the Lorentzian distribution to the complex plane. A simpler solution can be obtained by transforming the directed network $D^\uparrow$ into an undirected (bipartite) one $\hat{D}^\uparrow$, as in [@liu11controllability]. For each node $x_i$ in $D^\uparrow$, the graph $\hat{D}^\uparrow$ has two nodes $x_i^I$ and $x_i^O$ (where I and O stand for In and Out respectively) and for each directed link $x_i\longrightarrow x_j$ in $D^\uparrow$ there is a link $x_i^O - x_j^I$ in $\hat{D}^\uparrow$. If the adjacency matrix for $D^\uparrow$ is $A_{D^\uparrow}$, the corresponding matrix for $\hat{D}^\uparrow$ is $A_{\hat{D}^\uparrow}=\left( \begin{smallmatrix}0 & A^T_{D^\uparrow} \\ A_{D^\uparrow} & 0\end{smallmatrix}\right)$, with respect to the node ordering $x_1^O, x_2^O, \ldots x_n^O, x_1^I, \ldots, x_n^I$. An example of the above transformation is shown in Fig. \[fig:dir2undir\].
![A directed network $D^\uparrow$ on three nodes and the equivalent undirected network $\hat{D}^\uparrow$ on six nodes, together with their adjacency matrices.[]{data-label="fig:dir2undir"}](./dir2undir2.pdf "fig:"){width="20.00000%"} ![A directed network $D^\uparrow$ on three nodes and the equivalent undirected network $\hat{D}^\uparrow$ on six nodes, together with their adjacency matrices.](./dir2undir1.pdf "fig:"){width="20.00000%"}
Thus it is possible to define $\textrm{HIM}(\mathcal{N}^\uparrow_1,\mathcal{N}^\uparrow_2)$ as $\textrm{HIM}(\hat{\mathcal{N}}^\uparrow_1,\hat{\mathcal{N}}^\uparrow_2)$ after substituting the normalizing factors $\overline{\eta}$ and $\overline{\gamma}$ with the corresponding $\overline{\eta}^\uparrow$ and $\overline{\gamma}^\uparrow$ derived by imposing the conditions $\textrm{Hamming}(\hat{\mathcal{E}}_N,\hat{\mathcal{F}}_N)/\overline{\eta}^\uparrow=1$ and $\epsilon_{\overline{\gamma}^\uparrow}(\hat{\mathcal{E}}_N,\hat{\mathcal{F}}_N)=1$, so that $\textrm{HIM}(\hat{\mathcal{E}}_N,\hat{\mathcal{F}}_N)=1$ by using Eq. (\[eq:glocal\]). It is immediate to compute $\bar{\eta}^\uparrow = 2N(N-1)$, while $\bar{\gamma}^\uparrow$ can be numerically computed as for $\bar{\gamma}$: details are given in Appendix \[sec:appendixb\].
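The directed-to-bipartite transformation described above can be sketched as follows (a hypothetical helper, not part of *nettools*): given the directed adjacency matrix $A$, build the block matrix $\left(\begin{smallmatrix}0 & A^T\\ A & 0\end{smallmatrix}\right)$:

```python
def directed_to_bipartite(A):
    """Adjacency matrix of the undirected bipartite graph associated with a
    directed graph: block matrix [[0, A^T], [A, 0]] with respect to the node
    ordering x_1^O, ..., x_n^O, x_1^I, ..., x_n^I."""
    n = len(A)
    B = [[0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            B[i][n + j] = A[j][i]   # top-right block: A transposed
            B[n + i][j] = A[i][j]   # bottom-left block: A
    return B
```

The result is symmetric with zero diagonal blocks, so each directed arc becomes exactly one undirected link between an Out-copy and an In-copy of the nodes.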
$$A^{I_1} = \left(\begin{smallmatrix}
0&1&0&0&1&0&0&1\\
1&0&0&0&0&0&1&1\\
0&0&0&0&0&1&1&0\\
0&0&0&0&1&1&0&0\\
1&0&0&1&0&0&0&0\\
0&0&1&1&0&0&0&0\\
0&1&1&0&0&0&0&1\\
1&1&0&0&0&0&1&0\\
\end{smallmatrix}\right)
\qquad
A^{I_2} = \left(\begin{smallmatrix}
0&1&0&0&0&1&1&0\\
1&0&0&0&1&1&0&0\\
0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&1&1\\
0&1&0&0&0&0&0&0\\
1&1&0&0&0&0&0&0\\
1&0&0&1&0&0&0&1\\
0&0&0&1&0&0&1&0\\
\end{smallmatrix}\right)$$
The HIM kernel {#sec:kernel}
==============
Following [@cortes03positive], a kernel can be naturally derived from a distance by means of a Gaussian (Radial Basis Function) map (see also [@bolla13spectral]). Thus, given two graphs $x$ and $y$ on the same $n$ nodes and a positive real number $\gamma$, the HIM kernel can be defined as $$K(x,y) = e^{-\gamma\cdot\textrm{HIM}_\xi^2(x,y)}\ .$$ Whenever a novel kernel is introduced, one has to check whether it is positive definite.
A function $\Psi\colon X\times X\to \mathbb{R}$ is a kernel of conditionally negative type if
1. $\Psi(x,x)=0\quad\forall x\in X$;
2. $\Psi(x,y)=\Psi(y,x)\quad\forall x,y\in X$;
3. $\displaystyle{\sum_{i,j=1}^n} c_i c_j \Psi(x_i,x_j)\leq 0\quad \forall n\in\mathbb{N}, \forall x_1,\ldots,x_n\in X,\forall c_1,\ldots,c_n\in\mathbb{R}\;\textrm{such that}\; \displaystyle{\sum_{i=1}^n c_i}=0$.
A variant of Schoenberg’s theorem [@schoenberg38metric] (proved in [@ressel76short; @bekka08kazhdan]) states that
For a function $\Psi\colon X\times X\to \mathbb{R}$, the following are equivalent:
1. $\Psi$ is of conditionally negative type;
2. $K(x,y)=e^{-\gamma \Psi(x,y)}$ is a positive semidefinite kernel for all $\gamma\in\mathbb{R}_0^+$.
The above theorem describes the correspondence between negative-type distances and positive definite kernels, which is also equivalent to $\ell_2^2$ embeddability [@berg84harmonic]. Hence, $K(x,y)$ is a positive semidefinite kernel if and only if $\textrm{HIM}_\xi^2(x,y)$ is a symmetric function of conditionally negative type. Although the squares of many distances are conditionally negative type functions, $\textrm{HIM}_\xi^2(x,y)$ cannot be proven to be of conditionally negative type (actually, it is probably not of negative type, as is the case for many edit distances [@cortes03positive; @martins06generative; @neuhaus07bridging; @li05class; @cuturi09positive]), so the HIM kernel $K$ is not positive definite in general for all $\gamma\in\mathbb{R}_0^+$. Nevertheless, this problem can be overcome by using Prop. 1.3.4 in [@schoelkopf97support] (see also [@bolla13spectral; @li05class]):
\[th:k\] Suppose the data $x_1,\ldots,x_l$ and the kernel $k(\cdot,\cdot)$ are such that the matrix $$K_{ij} =k(x_i,x_j)$$ is positive. Then it is possible to construct a map $\Phi$ into a feature space $F$ such that $$k(x_i, x_j) = \langle \Phi(x_i), \Phi(x_j) \rangle\ .$$ Conversely, for a map $\Phi$ into some feature space $F$, the matrix $K_{ij} = \langle \Phi(x_i), \Phi(x_j) \rangle$ is positive.
Note that Th. \[th:k\] does not even require $x_1,\ldots,x_l$ to belong to a vector space. This theorem implies that, even though the kernel is not positive definite, it is still possible to use it in Support Vector Machines or other algorithms requiring $k$ to correspond to a dot product in some space if the kernel matrix $K$ is positive for the given training data. This condition can be obtained by choosing a suitable value of $\gamma$: in the experiments shown hereafter, the HIM kernel is always positive definite on the given training data, leading to positive definite matrices, and thus posing no difficulties for the SVM optimizer, as in [@sonnenburg05large].
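As a sketch of this workflow (the function name is illustrative), the Gram matrix to be passed to an SVM with a precomputed kernel is obtained by applying the Gaussian map entrywise to a matrix of pairwise HIM distances:

```python
import math

def him_kernel_matrix(D, gamma=1.0):
    """Gram matrix K_ij = exp(-gamma * d_ij^2) from a matrix D of pairwise
    HIM distances; suitable as a precomputed kernel for an SVM once its
    positive definiteness has been checked on the training data."""
    n = len(D)
    return [[math.exp(-gamma * D[i][j] ** 2) for j in range(n)]
            for i in range(n)]
```

Since the HIM distance is symmetric with zero diagonal, the resulting Gram matrix is symmetric with unit diagonal and entries in $(0,1]$; positive definiteness for the chosen $\gamma$ still has to be verified on the training data, as discussed above.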
Applications {#sec:apps}
============
A minimal example {#ssec:minimal}
-----------------
Consider the two networks $I_1, I_2\in \pmb{N}(8)$ with corresponding adjacency matrices $A^{I_1}, A^{I_2}$ shown in Fig. \[fig:I1\_I2\]. The Hamming distance between $I_1$ and $I_2$ is $$\textrm{H}(I_1,I_2) = \frac{1}{N(N-1)} \sum_{1\leq i\not = j\leq N} \vert A^{I_1}_{ij} - A^{I_2}_{ij} \vert
= \frac{1}{56} \sum_{1\leq i\not= j \leq 8}
\left(
\begin{smallmatrix}
0&0&0&0&1&1&1&1\\
0&0&0&0&1&1&1&1\\
0&0&0&0&0&1&1&0\\
0&0&0&0&1&1&1&1\\
1&1&0&1&0&0&0&0\\
1&1&1&1&0&0&0&0\\
1&1&1&1&0&0&0&0\\
1&1&0&1&0&0&0&0\\
\end{smallmatrix}
\right)
= \frac{28}{56}
= 0.5\ .$$
From the spectral point of view, the corresponding Laplacian matrices and eigenvalues are $$\begin{aligned}
L^{I_1} &=
\left(
\begin{smallmatrix}
3&-1& 0& 0&-1& 0& 0&-1\\
-1& 3& 0& 0& 0& 0&-1&-1\\
0& 0& 2& 0& 0&-1&-1& 0\\
0& 0& 0& 2&-1&-1& 0& 0\\
-1& 0& 0&-1& 2& 0& 0& 0\\
0& 0&-1&-1& 0& 2& 0& 0\\
0&-1&-1& 0& 0& 0& 3&-1\\
-1&-1& 0& 0& 0& 0&-1& 3\\
\end{smallmatrix}
\right)
\quad
&\textrm{spec}(L^{I_1}) &= [0,0.657077,1,2.529317,3,4,4,4.813607]
\\
L^{I_2} &=
\left(
\begin{smallmatrix}
3&-1& 0& 0& 0&-1&-1& 0\\
-1& 3& 0& 0&-1&-1& 0& 0\\
0& 0& 0& 0& 0& 0& 0& 0\\
0& 0& 0& 2& 0& 0&-1&-1\\
0&-1& 0& 0& 1& 0& 0& 0\\
-1&-1& 0& 0& 0& 2& 0& 0\\
-1& 0& 0&-1& 0& 0& 3&-1\\
0& 0& 0&-1& 0& 0&-1& 2\\
\end{smallmatrix}
\right)
\quad
&\textrm{spec}(L^{I_2}) &= [0,0,0.340321,1.145088,3,3,3.854912,4.659679]\ . \end{aligned}$$ From the above spectra, we can compute the corresponding Lorentz distributions $\rho_{I_{\{1,2\}}}(\omega,\overline{\gamma})$, where $\overline{\gamma}=0.4450034$: their plots are shown in Fig. \[fig:lorentz\].
![Lorentzian distribution of the Laplacian spectra for $I_1$ (left) and $I_2$ (center) with vertical lines indicating eigenvalues, and $\textrm{HIM}(I_1,I_2)$ in the Hamming/Ipsen-Mikhailov space (right).[]{data-label="fig:lorentz"}](./rho1.pdf "fig:"){width="33.00000%"} ![](./rho2.pdf "fig:"){width="33.00000%"} ![](./I1I2n.pdf "fig:"){width="33.00000%"}

$\rho_{I_1}(\omega,\overline{\gamma})$ \quad $\rho_{I_2}(\omega,\overline{\gamma})$ \quad $\textrm{HIM}(I_1,I_2)$
The resulting Ipsen-Mikhailov distance is $$\textrm{IM}(I_1,I_2)= \sqrt{\int_0^\infty \left[\rho_{I_1}(\omega,\overline{\gamma})-\rho_{I_2}(\omega,\overline{\gamma})\right]^2 \textrm{d}\omega}= 0.1004144\ ,$$ so that the HIM distance is $$\textrm{HIM}(I_1,I_2)=\frac{\sqrt{2}}{2}|| (\textrm{H}(I_1,I_2) , \textrm{IM}(I_1,I_2)) ||_{2} \approx 0.707107\sqrt{0.5^2+0.1004144^2} \approx 0.3606127\ .$$ The situation can be graphically represented as in Fig. \[fig:lorentz\]: the two networks are quite different in terms of matching links, but their structures are not so diverse.
Small networks {#ssec:small}
--------------
For a fixed number of nodes $N$, there are exactly $2^\frac{N(N-1)}{2}$ different simple undirected unweighted networks, which can be grouped into isomorphism classes. As anticipated before, isomorphic graphs cannot be distinguished by spectral metrics, while their mutual Hamming distances are nonzero, since their links are in different positions. As an example, for $N=3$ there are 8 networks grouped in 4 isomorphism classes, for $N=4$ there are 11 isomorphism classes including a total of 64 graphs, and for $N=5$ there are 34 classes with 1024 networks (for $N=6,7$, the number of classes is respectively 156 and 1044).
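These counts can be checked by brute force for tiny $N$: enumerate all $2^{N(N-1)/2}$ edge subsets and canonicalize each graph by the lexicographically smallest relabelled edge list (a sketch, feasible only for $N\leq 5$ or so, since it scans all graphs and all node permutations):

```python
from itertools import combinations, permutations

def count_graphs_and_classes(N):
    """Count all simple undirected unweighted graphs on N labelled nodes
    and their isomorphism classes, via a brute-force canonical form."""
    pairs = list(combinations(range(N), 2))
    classes = set()
    total = 0
    for mask in range(2 ** len(pairs)):
        edges = [e for k, e in enumerate(pairs) if (mask >> k) & 1]
        total += 1
        # canonical form: smallest sorted edge list over all relabelings
        canonical = min(
            tuple(sorted(tuple(sorted((p[a], p[b]))) for a, b in edges))
            for p in permutations(range(N))
        )
        classes.add(canonical)
    return total, len(classes)
```

For $N=3,4,5$ this returns $(8,4)$, $(64,11)$ and $(1024,34)$, matching the counts above.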
To give an overview of a broader situation, we compute a number of mutual distances between networks with a given number of nodes (all possible couples for $N=3,4,5$ and a subset of them for $N=15$) and we display the results in Fig. \[fig:mutual\]. To select a good range of variability for the networks with 15 nodes, we select the empty graph, the full graph (with 105 edges) and 10 different graphs with $i$ edges each, for $1\leq i\leq 104$.
![Mutual distances between (a) all 28 couples of networks with 3 nodes, (b) all 2016 couples of networks with 4 nodes, (c) all 523776 couples of networks with 5 nodes and (d) the 542361 mutual distances between a set of 1042 networks with 15 nodes.[]{data-label="fig:mutual"}](./3nodes.pdf "fig:"){width="22.00000%"} ![](./4nodes.pdf "fig:"){width="22.00000%"} ![](./5nodes.pdf "fig:"){width="22.00000%"} ![](./15nodes.pdf "fig:"){width="22.00000%"}

(a) (b) (c) (d)
As shown by the plots, all possible situations can occur, although points in the northwest corner of zone II are the rarest. For instance, the point $P=(1,0)$ in Fig. \[fig:mutual\](b) corresponds to 6 different pairs $(O_1,O_2)$ of networks with $4$ nodes having maximal Hamming distance and minimal spectral distance; one of these pairs is shown as an example in Fig. \[fig:P\].
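The pair counts quoted in the caption of Fig. \[fig:mutual\] follow from simple enumeration: on $n$ labelled nodes there are $2^{n(n-1)/2}$ simple undirected graphs, and $m$ objects form $\binom{m}{2}$ unordered couples. A quick check (illustrative Python, not part of the original experiments; function names are ours):

```python
from math import comb

def num_graphs(n):
    """Number of simple undirected graphs on n labelled nodes:
    one independent bit for each of the n(n-1)/2 possible links."""
    return 2 ** (n * (n - 1) // 2)

def num_pairs(m):
    """Number of unordered couples among m objects."""
    return comb(m, 2)

print(num_pairs(num_graphs(3)))  # 28 couples of networks with 3 nodes
print(num_pairs(num_graphs(4)))  # 2016 couples with 4 nodes
print(num_pairs(num_graphs(5)))  # 523776 couples with 5 nodes
print(num_pairs(1042))           # 542361 couples among 1042 networks
```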
Comparison with Matthews Correlation Coefficient {#ssec:mcc}
------------------------------------------------
When assessing performance in a link prediction task (for instance, in the series of DREAM challenges [@stolovitzky07dialogue; @marbach10revealing; @prill10towards]), the standard strategy, following the machine learning approach, is to rely on functions of the confusion matrix, *i.e.*, the table collecting the numbers of correct and wrong predictions with respect to the ground truth. Classical measures of this kind are the pairs Sensitivity/Specificity and Precision/Recall, and the derived Area Under the Curve.
A reliable alternative is the Matthews Correlation Coefficient ($\textrm{MCC}$ for short) [@matthews75comparison], which summarizes the confusion matrix of a binary classification task into a single value. It is a measure in common use in the machine learning community [@baldi00assessing], recently accepted as an effective metric also for network comparison [@supper07reconstructing; @stokic09fast]. Also known as the $\phi$-coefficient, for a $2\times 2$ contingency table the $\textrm{MCC}$ corresponds to the square root of the average $\chi^2$ statistic $$\textrm{MCC}=\sqrt{\chi^2 / N}\ ,$$ where $N$ is the total number of observations. In the binary case of two classes, positive P and negative N, for the confusion matrix $\left(\begin{smallmatrix} \textrm{TP} & \textrm{FN} \\ \textrm{FP} & \textrm{TN}\end{smallmatrix}\right)$, where T and F stand for true and false respectively, the Matthews Correlation Coefficient takes the form $$\textrm{MCC} = \frac{\textrm{TP}\cdot\textrm{TN}-\textrm{FP}\cdot\textrm{FN}}{\sqrt{\left(\textrm{TP}+\textrm{FP}\right)\left(\textrm{TP}+\textrm{FN}\right)\left(\textrm{TN}+\textrm{FP}\right)\left(\textrm{TN}+\textrm{FN}\right)}}\ .$$ $\textrm{MCC}$ ranges in $[-1,1]$, where $1$ indicates perfect classification, $-1$ complete misclassification and $0$ coin-tossing classification; it is invariant under scalar multiplication of the whole confusion matrix.
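The formula above can be sketched in a few lines of Python; the function names are ours, and returning $0$ when a marginal of the confusion matrix vanishes (where the formula reads $0/0$, e.g. when the empty or the complete graph is compared with itself) is an assumed, though common, convention for the degenerate case:

```python
import numpy as np

def mcc_from_confusion(tp, fn, fp, tn):
    """MCC from the confusion matrix ((TP, FN), (FP, TN)).
    Returns 0.0 when a marginal vanishes (assumed convention for
    the degenerate 0/0 case)."""
    denom = (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / np.sqrt(denom)

def mcc_networks(A, B):
    """MCC between two undirected networks given as 0/1 adjacency
    matrices, comparing only the links above the diagonal."""
    iu = np.triu_indices(A.shape[0], k=1)
    a, b = A[iu].astype(bool), B[iu].astype(bool)
    return mcc_from_confusion(int(np.sum(a & b)), int(np.sum(a & ~b)),
                              int(np.sum(~a & b)), int(np.sum(~a & ~b)))
```

For a non-trivial network compared with itself this returns $1$, and compared with its complement $-1$, consistently with the extreme cases discussed next.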
Here we provide a quick comparison of the $\textrm{MCC}$ and the $\textrm{HIM}$ distance in a few cases. First of all, some considerations on the extreme cases:
- $\textrm{HIM}(G,H)=1$ only for $\{G,H\}=\{\mathcal{E}_N,\mathcal{F}_N\}$, which has $\textrm{MCC}=0$.
- $\textrm{HIM}(G,H)=0$ only for $G=H$; in this case, $\textrm{MCC}(G,H)=0$ for $G=H\in \{\mathcal{E}_N,\mathcal{F}_N\}$, and $\textrm{MCC}(G,H)=1$ in all other cases.
- $\textrm{MCC}(G,H)=1$ only for $G=H\not\in \{\mathcal{E}_N,\mathcal{F}_N\}$, and thus $\textrm{HIM}=0$.
- The two values $\textrm{MCC}=0$ or $\textrm{MCC}=-1$ can correspond to a landscape of quite different pairs of networks, for which the $\textrm{HIM}$ distance can assume very diverse values.
To investigate the last case in the above list, we randomly generated 250,000 pairs of networks of different sizes, and we compared the $\textrm{MCC}$ with the H, IM and HIM distances: the corresponding scatterplots are shown in Fig. \[fig:mcc\]. Since $\textrm{MCC}$ is a similarity measure, for a direct comparison we display it as the $[0,1]$-normalized dissimilarity measure $\frac{1-\textrm{MCC}}{2}$.
As expected, since the confusion matrix is unaware of the network structure and takes into account only matching and mismatching links, the $\textrm{MCC}$ is well correlated with the Hamming distance (Pearson Coefficient PC=0.92) and poorly correlated with the Ipsen-Mikhailov distance (PC=0.01), resulting in a good global correlation with the HIM distance (PC=0.79). Nonetheless, the plots in Fig. \[fig:mcc\] show a relevant variability of one measure for a given value of the other, supporting the claim of a strong independence between $\textrm{MCC}$ and $\textrm{HIM}$. Finally, as an example giving a quantitative basis to the last claim of the above list, for all pairs with $\textrm{MCC}=0$ the HIM values range in $[0.11, 1]$, with median 0.37 and mean 0.39, while for $\textrm{MCC}=-1$ the HIM values range in $[0.71,0.86]$, with mean and median equal to 0.74.
![The six pairs of networks on four nodes with Hamming distance one and Ipsen-Mikhailov distance zero.[]{data-label="fig:P"}](./g13.pdf "fig:"){width="13.00000%"} ![](./g50.pdf "fig:"){width="13.00000%"} ![](./g14.pdf "fig:"){width="13.00000%"} ![](./g49.pdf "fig:"){width="13.00000%"} ![](./g19.pdf "fig:"){width="13.00000%"} ![](./g44.pdf "fig:"){width="13.00000%"}
![](./g22.pdf "fig:"){width="13.00000%"} ![](./g41.pdf "fig:"){width="13.00000%"} ![](./g26.pdf "fig:"){width="13.00000%"} ![](./g37.pdf "fig:"){width="13.00000%"} ![](./g28.pdf "fig:"){width="13.00000%"} ![](./g31.pdf "fig:"){width="13.00000%"}
Dynamical networks {#ssec:dynamic}
------------------
In what follows we show the evolution of the Hamming, the Ipsen-Mikhailov and the HIM distances along the following dynamical processes $\mathbb{P}(i)$, each moving through consecutive steps:
- [Random Addition]{} $\mathbb{P}_\textsc{ra}(i+1)$ is obtained from $\mathbb{P}_\textsc{ra}(i)$ by randomly adding a link not already present.
- [Random Removal]{} $\mathbb{P}_\textsc{rr}(i+1)$ is obtained from $\mathbb{P}_\textsc{rr}(i)$ by randomly removing an existing link.
- [Sequential Addition]{} $\mathbb{P}_\textsc{sa}(i+1)$ is obtained from $\mathbb{P}_\textsc{sa}(i)$ by adding a new link in the same row as the last added link and in the next available column, if possible, or otherwise in the following row, starting from the first available column. The whole process starts from the first available row with the smallest index. As an example, if $\mathbb{P}_\textsc{sa}(0)=\mathcal{E}_5$, then the process evolves by inserting ones in the adjacency matrix following the sequence $1\to 2\to 3\to\cdots\to 10$ in $\left( \begin{smallmatrix} & 1 & 2 & 3 & 4 \\ & & 5 & 6 & 7 \\ & & & 8 & 9 \\ & & & & 10 \end{smallmatrix} \right)$.
- [Sequential Removal]{} $\mathbb{P}_\textsc{sr}$: as in $\mathbb{P}_\textsc{sa}$, but removing one link at each step.
- [Highest Degree Addition]{} $\mathbb{P}_\textsc{hda}(i+1)$ is obtained from $\mathbb{P}_\textsc{hda}(i)$ by adding a previously absent link incident to the node with the highest degree.
- [Highest Degree Removal]{} $\mathbb{P}_\textsc{hdr}(i+1)$ is obtained from $\mathbb{P}_\textsc{hdr}(i)$ by removing an existing link incident to the node with the highest degree.
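The processes above can be sketched in Python, with a network encoded as a set of links $(i,j)$, $i<j$; the helper names are ours, and the tie-breaking in $\mathbb{P}_\textsc{hda}$ (a random choice among the feasible links of maximal endpoint degree) is one possible reading of the definition:

```python
import random

def random_addition(edges, n, rng=random):
    """P_RA: add one uniformly chosen link that is not yet present."""
    absent = [(i, j) for i in range(n) for j in range(i + 1, n)
              if (i, j) not in edges]
    return edges | {rng.choice(absent)}

def random_removal(edges, rng=random):
    """P_RR: remove one uniformly chosen existing link."""
    return edges - {rng.choice(sorted(edges))}

def sequential_addition(edges, n):
    """P_SA: fill the upper triangle of the adjacency matrix row by
    row, left to right, one link per step."""
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) not in edges:
                return edges | {(i, j)}
    return edges  # already the clique

def highest_degree_addition(edges, n, rng=random):
    """P_HDA: add an absent link incident to a node of highest degree
    among the feasible links (ties broken at random: our reading)."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    absent = [(i, j) for i in range(n) for j in range(i + 1, n)
              if (i, j) not in edges]
    best = max(max(deg[i], deg[j]) for i, j in absent)
    cand = [e for e in absent if max(deg[e[0]], deg[e[1]]) == best]
    return edges | {rng.choice(cand)}
```

Each helper maps $\mathbb{P}_{\circ}(i)$ to $\mathbb{P}_{\circ}(i+1)$, so iterating it from $\mathcal{E}_N$ or $\mathcal{F}_N$ reproduces the trajectories studied below.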
As a first example, consider the processes $\mathbb{P}_\textsc{ra}$ and $\mathbb{P}_\textsc{sa}$ with the empty graph as starting network, $\mathbb{P}_\textsc{ra}(0)=\mathbb{P}_\textsc{sa}(0)=\mathcal{E}_N$. They both end at the $N$-node clique after $N_{\max}=\frac{N(N-1)}{2}$ steps: $\mathbb{P}_\textsc{ra}(N_{\max})=\mathbb{P}_\textsc{sa}(N_{\max})=\mathcal{F}_N$. The corresponding inverse processes $\mathbb{P}_\textsc{rr}$ and $\mathbb{P}_\textsc{sr}$ evolve in the opposite direction: $\mathbb{P}_\textsc{rr}(0)=\mathbb{P}_\textsc{sr}(0)=\mathcal{F}_N$ and $\mathbb{P}_\textsc{rr}(N_{\max})=\mathbb{P}_\textsc{sr}(N_{\max})=\mathcal{E}_N$. In Fig. \[fig:process\] we show the curves of $d(\mathbb{P}_{\circ}(i),\mathbb{P}_{\circ}(0))$ for $d$=H, IM and HIM in the cases of $N=$10, 25 and 100 nodes.
For the representation of the curves we use two different spaces: the already introduced Hamming/Ipsen-Mikhailov space, with the metric H on the $x$ axis and the metric IM on the $y$ axis, and the Fraction-of-links/HIM space, with the ratio between the number of newly added or removed links and the total number $N_{\max}$ of possible links on the $x$ axis and the HIM distance on the $y$ axis; in this representation, the $i$-th step $\mathbb{P}_{\circ}(i)$ of a process has coordinates $\left(\frac{2i}{N(N-1)},\textrm{HIM}\left(\mathbb{P}_{\circ}(i),\mathbb{P}_{\circ}(0)\right)\right)$. Since one edge is added or removed at each step, in both spaces the processes evolve from left to right, and the curves representing the distances of the same process in the two spaces differ essentially by a scaling factor.
![Scatterplot of $\frac{1-\textrm{MCC}}{2}$ versus Hamming (a), Ipsen-Mikhailov (b) and HIM (c) distances when comparing 250,000 random pairs of networks of different size 3-100.[]{data-label="fig:mcc"}](./h_mcc_jpg.pdf "fig:"){width="30.00000%"} ![](./im_mcc_jpg.pdf "fig:"){width="30.00000%"} ![](./him_mcc_jpg.pdf "fig:"){width="30.00000%"}
(a) (b) (c)
For the random processes $\mathbb{P}_\textsc{ra}$ and $\mathbb{P}_\textsc{rr}$ we show the means of the distances computed over 100 runs; no standard deviations or confidence intervals are plotted, because they are negligible at the scale of the plot. For instance, in the case $N=$25, the order of magnitude of the standard deviation of HIM at each step is $10^{-3}$, and the span of the 95% bootstrap confidence intervals is of the order of $10^{-4}$. As a first observation, all curves are monotonically increasing and the bigger the graph, the larger the distances, except in the second half of the empty-to-clique process, where $\mathbb{P}_\textsc{sa}$ induces distances smaller than $\mathbb{P}_\textsc{ra}$, and smaller for larger graphs. The most interesting observation is the different shape of the curves between the empty-to-clique and the clique-to-empty processes: for the same Hamming distance (or fraction of links), the corresponding Ipsen-Mikhailov (or HIM, respectively) distance is larger when links are added rather than removed, because adding links quickly generates degree correlation. Furthermore, in the empty-to-clique case there is little difference between the random and the sequential process, while the difference is much wider (with the random process inducing larger distances) in the clique-to-empty case.
![Distances between $\mathbb{P}_{\circ}(i)$ and $\mathbb{P}_{\circ}(0)$ for the processes evolving from the empty network to the clique (a and b) or vice versa (c and d), in the Hamming/Ipsen-Mikhailov space (a and c) or plot of the HIM distance as a function of the ratio of added/removed links (b and d), for $N=$ 10 (black), 25 (blue), 100 (red) nodes. Solid lines denote the average of distances for 100 runs of random evolution, while dashed lines denote the sequential processes $\mathbb{P}_\textsc{sa}$ and $\mathbb{P}_\textsc{sr}$. In all cases, the process evolves from the left-bottom corner to the right-top corner.[]{data-label="fig:process"}](./EtoF_H_IM.pdf "fig:"){width="25.00000%"} ![](./EtoF_HIM.pdf "fig:"){width="25.00000%"} ![](./FtoE_H_IM.pdf "fig:"){width="25.00000%"} ![](./FtoE_HIM.pdf "fig:"){width="25.00000%"}
(a) (b) (c) (d)
An analogous experiment was carried out within the family of Poissonian graphs, generated with the Erdős–Rényi model [@erdos59random; @erdos60evolution] $G(N,p)$. In particular, for $N=$10, 25 and 100, let $S_N$ be a sparse network $G(N,p=0.05)$, with 2, 11 and 230 edges respectively, and let $D_N$ be a dense network $G(N,p=0.9)$, with 39, 275 and 4462 edges respectively. Consider the following four processes, whose initial $\lfloor\frac{N_{\max}}{\sqrt{2}}\rfloor+1=\lfloor\frac{\sqrt{2}}{2}\cdot\frac{N(N-1)}{2}\rfloor+1$ steps (including step $0$) are represented in Fig. \[fig:poisson\]:
- $\mathbb{P}_\textsc{ra}(i)$, for $i=0,\ldots, \lfloor\frac{\sqrt{2}}{2}\frac{N(N-1)}{2}\rfloor$, with $\mathbb{P}_\textsc{ra}(0)=S_N$, for $N$=10, 25, 100.
- $\mathbb{P}_\textsc{rr}(i)$, for $i=0,\ldots, \lfloor\frac{\sqrt{2}}{2}\frac{N(N-1)}{2}\rfloor$, with $\mathbb{P}_\textsc{rr}(0)=D_N$, for $N$=10, 25, 100.
- $\mathbb{P}_\textsc{hda}(i)$, for $i=0,\ldots, \lfloor\frac{\sqrt{2}}{2}\frac{N(N-1)}{2}\rfloor$, with $\mathbb{P}_\textsc{hda}(0)=S_N$, for $N$=10, 25, 100.
- $\mathbb{P}_\textsc{hdr}(i)$, for $i=0,\ldots, \lfloor\frac{\sqrt{2}}{2}\frac{N(N-1)}{2}\rfloor$, with $\mathbb{P}_\textsc{hdr}(0)=D_N$, for $N$=10, 25, 100.
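The starting networks and the number of tracked steps can be sketched as follows (illustrative code; the sampled edge counts depend on the random seed and will generally differ from the single instances reported above):

```python
import random

def gnp_edges(n, p, rng):
    """Edge set of an Erdős–Rényi G(n, p) sample: each of the
    n(n-1)/2 possible links is present independently with prob. p."""
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

rng = random.Random(7)
for n in (10, 25, 100):
    sparse = gnp_edges(n, 0.05, rng)      # S_N
    dense = gnp_edges(n, 0.9, rng)        # D_N
    n_max = n * (n - 1) // 2
    steps = int((2 ** 0.5 / 2) * n_max)   # floor(sqrt(2)/2 * N_max)
    print(f"N={n}: |S_N|={len(sparse)}, |D_N|={len(dense)}, "
          f"steps tracked = 0..{steps}")
```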
In this case too, results for the random processes are averaged over 100 runs, with negligible confidence intervals. To better highlight the differences among the resulting distances in the various processes, in Fig. \[fig:poisson\] (c) and (d) we show the ratios of some pairs of HIM distances as functions of the fraction of removed/added links. In particular, in panel (c) we show, for each step $i$, the quotient of the HIM distances for $\mathbb{P}_\textsc{ra}$ over $\mathbb{P}_\textsc{hda}$, in the three cases $N=$10, 25 and 100. The three curves show that the HIM distances for $\mathbb{P}_\textsc{ra}$ are larger than those for $\mathbb{P}_\textsc{hda}$ for $N$=25 and 100; the difference is larger in the first steps of the process ($i< 0.3 N_{\max}$), while the two distances tend to get closer as the processes evolve. For the removal processes $\mathbb{P}_\textsc{rr}$ and $\mathbb{P}_\textsc{hdr}$ (not shown here), the differences are smaller and the ratios converge faster to one, with $\mathbb{P}_\textsc{hdr}$ now accounting for the smaller HIM values. In panel (d) of Fig. \[fig:poisson\], we show the curves of $\frac{\textrm{HIM}(\mathbb{P}_\textsc{ra}(i),\mathbb{P}_\textsc{ra}(0))}{\textrm{HIM}(\mathbb{P}_\textsc{rr}(i),\mathbb{P}_\textsc{rr}(0))}$ as a function of $\frac{i}{N_{\max}}$. All three curves are monotonically decreasing and converge to one after the first stages of the processes, showing that, for all values of $N$, adding links produces higher values of the HIM distance. For the evolutions targeting higher-degree nodes first (not shown here), the trend is the same, only scaled down to smaller ratios.
![Plot of H, IM and HIM distances between $\mathbb{P}_{\circ}(i)$ and $\mathbb{P}_{\circ}(0)$ for $\circ=\textsc{ra},\textsc{hda}$ (a) and $\circ=\textsc{rr},\textsc{hdr}$ (b), for $N=$ 10 (black), 25 (blue), 100 (red) nodes. Solid lines denote the average of distances for 100 runs of $\mathbb{P}_\textsc{ra}$ and $\mathbb{P}_\textsc{rr}$, while dashed lines identify $\mathbb{P}_\textsc{hda}$ and $\mathbb{P}_\textsc{hdr}$. In all cases, the process evolves from the left-bottom corner to the right-top corner. In panel (c), plot of $\frac{\textrm{HIM}(\mathbb{P}_\textsc{ra}(i),\mathbb{P}_\textsc{ra}(0))}{\textrm{HIM}(\mathbb{P}_\textsc{hda}(i),\mathbb{P}_\textsc{hda}(0))}$ as a function of $\frac{i}{N_{\max}}$ and, in panel (d), plot of $\frac{\textrm{HIM}(\mathbb{P}_\textsc{ra}(i),\mathbb{P}_\textsc{ra}(0))}{\textrm{HIM}(\mathbb{P}_\textsc{rr}(i),\mathbb{P}_\textsc{rr}(0))}$ as a function of $\frac{i}{N_{\max}}$.[]{data-label="fig:poisson"}](./ER_adding_H_IM.pdf "fig:"){width="25.00000%"} ![](./ER_removing_H_IM.pdf "fig:"){width="25.00000%"} ![](./rand_vs_deg.pdf "fig:"){width="25.00000%"} ![](./add_vs_rem.pdf "fig:"){width="25.00000%"}
(a) (b) (c) (d)
The final examples consider processes whose starting graphs are scale-free networks. For $N$=10, 25 and 100, let ${SS}_N$ be a sparse scale-free network generated following the Barabási-Albert model [@barabasi99emergence], with power-law exponent 2.3 and with 9, 24 and 99 edges respectively, and let ${SD}_N$ be a dense network with the same exponent 2.3 but with 35, 300 and 4150 edges respectively. The same four processes of the previous case were tested for the initial $\lfloor\frac{N_{\max}}{\sqrt{2}}\rfloor+1=\lfloor\frac{\sqrt{2}}{2}\cdot\frac{N(N-1)}{2}\rfloor+1$ steps:
- $\mathbb{P}_\textsc{ra}(i)$, for $i=0,\ldots, \lfloor\frac{\sqrt{2}}{2}\frac{N(N-1)}{2}\rfloor$, with $\mathbb{P}_\textsc{ra}(0)=SS_N$, for $N$=10, 25, 100.
- $\mathbb{P}_\textsc{rr}(i)$, for $i=0,\ldots, \lfloor\frac{\sqrt{2}}{2}\frac{N(N-1)}{2}\rfloor$, with $\mathbb{P}_\textsc{rr}(0)=SD_N$, for $N$=10, 25, 100.
- $\mathbb{P}_\textsc{hda}(i)$, for $i=0,\ldots, \lfloor\frac{\sqrt{2}}{2}\frac{N(N-1)}{2}\rfloor$, with $\mathbb{P}_\textsc{hda}(0)=SS_N$, for $N$=10, 25, 100.
- $\mathbb{P}_\textsc{hdr}(i)$, for $i=0,\ldots, \lfloor\frac{\sqrt{2}}{2}\frac{N(N-1)}{2}\rfloor$, with $\mathbb{P}_\textsc{hdr}(0)=SD_N$, for $N$=10, 25, 100.
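As a concrete sketch of the two addition rules, the following minimal implementation assumes that \textsc{ra} adds a uniformly chosen absent link and that \textsc{hda} adds the absent link maximising the endpoints' degree sum; this is our reading of the acronyms, not a rule stated here, and for brevity the demonstration starts from the empty network rather than from $SS_N$.

```python
import math
import random

def random_addition_step(adj, rng):
    """RA rule (assumed): add one uniformly chosen absent edge."""
    n = len(adj)
    absent = [(i, j) for i in range(n) for j in range(i + 1, n) if not adj[i][j]]
    if absent:
        i, j = rng.choice(absent)
        adj[i][j] = adj[j][i] = 1
    return adj

def highest_degree_addition_step(adj):
    """HDA rule (assumed): add the absent edge maximising the degree sum."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    absent = [(i, j) for i in range(n) for j in range(i + 1, n) if not adj[i][j]]
    if absent:
        i, j = max(absent, key=lambda e: deg[e[0]] + deg[e[1]])
        adj[i][j] = adj[j][i] = 1
    return adj

# Run floor(sqrt(2)/2 * N(N-1)/2) steps after step 0, as in the text.
N = 10
steps = math.floor(math.sqrt(2) / 2 * N * (N - 1) / 2)
rng = random.Random(0)
adj = [[0] * N for _ in range(N)]   # stand-in for the starting graph
adj2 = [[0] * N for _ in range(N)]
for _ in range(steps):
    adj = random_addition_step(adj, rng)
    adj2 = highest_degree_addition_step(adj2)
```

Each step adds exactly one edge, so after `steps` iterations both processes have the same edge count but, in general, very different structures.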
The corresponding curves are plotted in Fig. \[fig:scalefree\] (a) and (b). We recall here that scale-free networks are not invariant under percolation, *i.e.*, they do not remain scale-free when links are randomly removed or added. Nevertheless, the evolution of the processes is not very different from the Erdős-Rényi case, especially for the link-removing processes, as shown in Fig. \[fig:scalefree\], panel (b). Some differences emerge for the link-adding processes, and a few peculiarities already present in the Poissonian case become more evident here. In particular, for all $N$ and for both $\mathbb{P}_\textsc{ra}$ and $\mathbb{P}_\textsc{hda}$, the derivatives of the curves are larger than those in panel (b), and it is no longer true that the larger the number of nodes, the larger the distances. For instance, in the case $N=100$ both processes quickly modify the network structure, resulting in a fast increase of the Ipsen-Mikhailov distance for $i<0.2N_{\max}$, after which the curves grow at a much smaller rate. To study this behaviour further, a larger starting network $SB_{200}$ was generated following the scale-free model in [@goh01universal], with 200 nodes, 1000 edges, power-law exponent 2.001 and degree distribution as in the histogram of Fig. \[fig:scalefree\], panel (c). The following processes were started from $SB_{200}$ and carried on until they reached either the empty network or the clique:
- $\mathbb{P}_\textsc{ra}(i)$, with $\mathbb{P}_\textsc{ra}(0)={SB}_{200}$ and $\mathbb{P}_\textsc{ra}(18900)=\mathcal{F}_{200}$.
- $\mathbb{P}_\textsc{hda}(i)$, with $\mathbb{P}_\textsc{hda}(0)={SB}_{200}$ and $\mathbb{P}_\textsc{hda}(18900)=\mathcal{F}_{200}$.
- $\mathbb{P}_\textsc{rr}(i)$, with $\mathbb{P}_\textsc{rr}(0)={SB}_{200}$ and $\mathbb{P}_\textsc{rr}(1000)=\mathcal{E}_{200}$.
- $\mathbb{P}_\textsc{hdr}(i)$, with $\mathbb{P}_\textsc{hdr}(0)={SB}_{200}$ and $\mathbb{P}_\textsc{hdr}(1000)=\mathcal{E}_{200}$.
The curves of the HIM distances from $SB_{200}$ for the four aforementioned processes are plotted in Fig. \[fig:scalefree\], panel (d), versus the percentage of progress of the process, *i.e.*, $100\cdot \frac{i}{N_{\circ}}$, with $0\leq i \leq N_{\circ}$ and $N_{\textsc{ra}}=N_{\textsc{hda}}=18900$, $N_{\textsc{rr}}=N_{\textsc{hdr}}=1000$. The HIM distances for the processes $\mathbb{P}_\textsc{rr}$ and $\mathbb{P}_\textsc{hdr}$ increase monotonically, and similarly to each other, when evolving from $SB_{200}$ to $\mathcal{E}_{200}$, slowly at the beginning and much faster in the last steps of the process. The two other processes instead show the same effect previously noted: $\textrm{HIM}(\mathbb{P}_\textsc{ra}(i),SB_{200})$ and $\textrm{HIM}(\mathbb{P}_\textsc{hda}(i),SB_{200})$ change rapidly in the initial 10% of the process, yielding a fast increase of the Ipsen-Mikhailov distance due to the quick modification of the network structure. After this initial phase, both curves grow with a smaller derivative until they reach their maximum at the end of the process.
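The monotone growth of the distance from the starting graph along a removal process can be reproduced with a small sketch. We assume the Hamming distance normalised by the number of node pairs, consistent with the $[0,1]$ range used throughout; the demonstration runs a random-removal process from the clique $\mathcal{F}_{20}$ down to the empty network $\mathcal{E}_{20}$.

```python
import random

def hamming(a, b):
    """Normalised Hamming distance: fraction of mismatching node pairs."""
    n = len(a)
    diff = sum(abs(a[i][j] - b[i][j]) for i in range(n) for j in range(i + 1, n))
    return diff / (n * (n - 1) / 2)

def random_removal_step(adj, rng):
    """RR rule: delete one uniformly chosen existing edge."""
    n = len(adj)
    present = [(i, j) for i in range(n) for j in range(i + 1, n) if adj[i][j]]
    i, j = rng.choice(present)
    adj[i][j] = adj[j][i] = 0

N = 20
start = [[1 if i != j else 0 for j in range(N)] for i in range(N)]  # clique F_20
cur = [row[:] for row in start]
rng = random.Random(1)
E = N * (N - 1) // 2
traj = []
for _ in range(E):
    random_removal_step(cur, rng)
    traj.append(hamming(start, cur))
# The distance from the start grows monotonically and ends at 1 (empty network).
```

Since each removal flips exactly one node pair, the trajectory is the straight line $(k+1)/\binom{N}{2}$; the Ipsen-Mikhailov component, by contrast, is spectral and not available in a few stdlib lines.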
![Plot of Hamming versus Ipsen-Mikhailov distance for $\mathbb{P}_{\circ}(i)$ and $\mathbb{P}_{\circ}(0)$ for $\circ=\textsc{ra},\textsc{hda}$ (a) and $\circ=\textsc{rr},\textsc{hdr}$ (b), for $N=$ 10 (black), 25 (blue), 100 (red) nodes. Solid lines denote the average of distances for 100 runs of $\mathbb{P}_\textsc{ra}$ and $\mathbb{P}_\textsc{rr}$, while dashed lines identify $\mathbb{P}_\textsc{hda}$ and $\mathbb{P}_\textsc{hdr}$. In all cases, the process evolves from the left-bottom corner to the right-top corner. (c) Histogram of the node degrees of the network ${SB}_{200}$. (d) HIM distances from $SB_{200}$ versus percentage of process steps for $\mathbb{P}_\textsc{ra}$, $\mathbb{P}_\textsc{hda}$, $\mathbb{P}_\textsc{rr}$ and $\mathbb{P}_\textsc{hdr}$.[]{data-label="fig:scalefree"}](./SF_adding_H_IM.pdf "fig:"){width="25.00000%"} ![fig:](./SF_removing_H_IM.pdf "fig:"){width="25.00000%"} ![fig:](./scale_free_degree.pdf "fig:"){width="25.00000%"} ![fig:](./powerlaw.pdf "fig:"){width="25.00000%"}
(a) (b) (c) (d)
Graph families {#ssec:families}
--------------
In this section we investigate the distribution of the distances from the empty network for sets of graphs randomly extracted from five families. In particular, for each $N$= 10, 20, 50, 100 and 1000, we extracted 1000 networks on $N$ nodes from each of the following classes of graphs:
- [BA]{} Barabási-Albert model [@barabasi99emergence], with the power of preferential attachment drawn uniformly between 0.1 and 10.
- [ER]{} Erdős-Rényi model [@erdos59random; @erdos60evolution], with link probability drawn uniformly between 0.1 and 0.9.
- [WS]{} Watts-Strogatz model [@watts98collective], with the neighborhood radius within which the lattice vertices are connected uniformly sampled in $\{1,\ldots,10\}$ and rewiring probability drawn uniformly between 0.1 and 0.9.
- [PL]{} Scale-free random graphs from vertex fitness scores [@goh01universal], with the number of edges uniformly sampled between 1 and $\frac{N(N-1)}{2}$ and the power-law exponent of the degree distribution drawn uniformly between 2.005 and 3.
- [KR]{} Random regular graphs, with node degree ranging over all admissible values.
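For instance, the ER sampling scheme above can be sketched as follows; this is a minimal stdlib stand-in for the actual generator used in the experiments.

```python
import random

def erdos_renyi(n, p, rng):
    """G(n, p): each node pair is linked independently with probability p."""
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i][j] = adj[j][i] = 1
    return adj

# Draw the link probability uniformly in [0.1, 0.9] for each sampled network,
# as prescribed for the ER family.
rng = random.Random(42)
sample = [erdos_renyi(50, rng.uniform(0.1, 0.9), rng) for _ in range(10)]
```

The other families require dedicated generators (preferential attachment, lattice rewiring, fitness scores, regular-graph sampling) that packages such as igraph provide off the shelf.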
In Tab. \[tab:families\] we list the mean $\mu$ and standard deviation $\sigma$ of $\textrm{HIM}(\circ,\mathcal{E}_N)$ for all combinations of node size and network type: we do not report the corresponding median, since it always lies within 0.02 of the mean $\mu$, nor the bootstrap confidence intervals, whose endpoints always lie within 0.02 of either side of the mean. In Fig. \[fig:boxplots\] we also show the corresponding boxplots, while in Fig. \[fig:families\](a) we display the scatterplot of all the aforementioned distances in the Hamming/Ipsen-Mikhailov space. In this space, all the BA nets are confined in the narrow rectangle $[0,0.2]\times [0.6,0.75]$, while all the other classes of graphs span a much wider area. In particular, the points corresponding to the distances of the PL nets densely cover the whole upper-left triangle of the H/IM plane, and the same happens, for $H>0.1$, for the ER networks, while the WS and KR points lie in the upper rectangle $[0,1]\times [0.6,1]$. Thus, different PL networks can have very different structures, while the BA nets are very homogeneous. Notably, no point occurs in the lower-right corner of the H/IM space. Moreover, on average, the standard deviation decreases as the network size increases, showing larger homogeneity for bigger networks.
![Boxplots of the $\textrm{HIM}(\circ,\mathcal{E}_N)$ for all combinations of node size and network type.[]{data-label="fig:boxplots"}](./BA.pdf "fig:"){height="5cm"} ![fig:](./PL.pdf "fig:"){height="5cm"}
![fig:](./ER.pdf "fig:"){height="5cm"} ![fig:](./KR.pdf "fig:"){height="5cm"}
![fig:](./WS.pdf "fig:"){height="5cm"} ![fig:](./all-types.pdf "fig:"){height="5cm"}
Finally, to better highlight the differences among the diverse families, we randomly extracted 100 networks with 100 nodes from each of the four families BA, ER, WS and PL, and we computed the mutual distances between all possible pairs of these 400 graphs. A few statistics of these HIM distances are reported in Tab. \[tab:stats\], while the planar multidimensional scaling plot [@cox01multidimensional] is displayed in Fig. \[fig:families\](b). Apart from the PL networks, the three families BA, ER and WS can be mutually well separated, as shown in the multidimensional scaling plot; moreover, the graphs in the BA and in the WS families are mutually quite similar, as supported by the small interclass mean HIM distance. On the other hand, the PL networks have essentially the same distance from all the other groups, so they cannot be easily distinguished.
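The interclass statistics of Tab. \[tab:stats\] reduce to averaging the entries of the mutual distance matrix over each pair of class labels. A sketch with a toy distance matrix standing in for the actual 400×400 HIM matrix:

```python
from collections import defaultdict

def class_mean_distances(dist, labels):
    """Mean pairwise distance for each (unordered) pair of class labels.

    dist: symmetric distance matrix; labels: one class label per item."""
    sums, counts = defaultdict(float), defaultdict(int)
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            key = tuple(sorted((labels[i], labels[j])))
            sums[key] += dist[i][j]
            counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

# Toy usage with a hypothetical 4-network distance matrix:
d = [[0, 1, 2, 3],
     [1, 0, 2, 2],
     [2, 2, 0, 4],
     [3, 2, 4, 0]]
means = class_mean_distances(d, ["BA", "BA", "ER", "ER"])
```

Keys like `("BA", "BA")` give intraclass means and mixed keys give interclass means; applied to the 400-network HIM matrix, this yields exactly the mean (and, analogously, standard deviation) entries of the table.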
---- ------------- ------------- ------------- -------------
     0.05 (0.06)   0.69 (0.10)   0.47 (0.13)   0.65 (0.17)
                   0.50 (0.13)   0.56 (0.15)   0.51 (0.14)
                                 0.29 (0.12)   0.56 (0.18)
                                               0.50 (0.17)
---- ------------- ------------- ------------- -------------

: Mean (standard deviation) of the mutual HIM distances between the 400 networks sampled from the BA, ER, WS and PL families.[]{data-label="tab:stats"}
----- ------- ---------- ------- ---------- ------- ---------- ------- ---------- ------- ---------- ------- ----------
      N=10               N=20               N=50               N=100              N=1000             All        
      $\mu$   $\sigma$   $\mu$   $\sigma$   $\mu$   $\sigma$   $\mu$   $\sigma$   $\mu$   $\sigma$   $\mu$   $\sigma$
      0.53    0.01       0.51    0.01       0.50    0.02       0.49    0.02       0.50    0.02       0.51    0.02
      0.69    0.18       0.73    0.12       0.77    0.09       0.77    0.08       0.76    0.08       0.74    0.12
      0.91    0.14       0.76    0.15       0.64    0.08       0.62    0.06       0.62    0.05       0.71    0.16
      0.72    0.19       0.72    0.19       0.75    0.15       0.74    0.14       0.72    0.11       0.73    0.16
      0.60    0.11       0.54    0.10       0.50    0.05       0.49    0.00       0.48    0.00       0.52    0.08
All   0.69    0.19       0.65    0.17       0.63    0.15       0.62    0.14       0.62    0.13       0.64    0.16
----- ------- ---------- ------- ---------- ------- ---------- ------- ---------- ------- ---------- ------- ----------

: Mean $\mu$ and standard deviation $\sigma$ of the HIM distances $\textrm{HIM}(\circ,\mathcal{E}_N)$ from the empty network for all combinations of network type and network size $N$, and cumulatively across node sizes and graph classes.[]{data-label="tab:families"}
![(a) Scatterplot in the Hamming/Ipsen-Mikhailov space of the $\textrm{HIM}(\circ,\mathcal{E}_N)$ for all combinations of node size and network type; (b) Multidimensional Scaling of the mutual HIM distances of 400 networks with 100 nodes in the BA, ER, WS and PL families.[]{data-label="fig:families"}](./scatter_families.pdf "fig:"){height="6cm"} ![fig:](./families_mds.pdf "fig:"){height="6cm"}
(a) (b)
The *D. melanogaster* development dataset {#ssec:droso}
-----------------------------------------
In [@kolar10estimating], the authors used the Keller algorithm to infer the gene regulatory networks of *Drosophila melanogaster* from a time series of gene expression data measured during its full life cycle, originally published in [@arbeitman02gene]. They followed the dynamics of 588 development genes along 66 time points spanning four different stages (Embryonic – time points 1-30, Larval – t.p. 31-40, Pupal – t.p. 41-58, Adult – t.p. 59-66), constructing a time series of inferred networks $N_i$, publicly available at <http://cogito-b.ml.cmu.edu/keller/downloads.html>. Hereafter we evaluate the structural differences between $N_i$ and the initial network $N_1$, as measured by the HIM distance: the resulting plot is displayed in Fig. \[fig:time\]. The largest variations, both between consecutive terms and with respect to the initial network $N_1$, occur in the Embryonic stage (E): in particular, the HIM distance grows until time point 23, after which the following networks start getting closer again to $N_1$, showing that the interactions of the selected 588 genes in the Adult stage are more similar to the corresponding network of interactions in the Embryonic stage than in the other two stages. Moreover, while the Hamming distance ranges between $0$ and $0.0223$, the Ipsen-Mikhailov distance reaches a maximum of $0.0851$, indicating a higher variability of the networks in terms of structure than of matching links. Finally, using a Support Vector Machine with the HIM kernel, built with the *kernlab* package in R, a 5-fold Cross Validation with $\gamma=10^3$ and $C=1$ reached accuracy 0.97 in discriminating Embryonic and Adult networks from Larval and Pupal ones, while, in the same setup, we reached perfect separation between the Embryonic and Adult stages for all values of $\gamma$ larger than 1000.
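The text does not spell out how the HIM kernel is constructed; a standard distance-substitution choice compatible with the reported $\gamma$ is the Gaussian-type kernel $K_{ij}=\exp(-\gamma\,\textrm{HIM}(G_i,G_j)^2)$, sketched below on a small hypothetical distance matrix. This is an illustrative assumption, not the authors' stated construction.

```python
import math

def him_kernel_matrix(dist, gamma):
    """Gaussian-type distance-substitution kernel: K_ij = exp(-gamma * d_ij**2).

    dist is a precomputed symmetric matrix of (assumed) HIM distances."""
    n = len(dist)
    return [[math.exp(-gamma * dist[i][j] ** 2) for j in range(n)] for i in range(n)]

# With gamma = 1e3, as in the text, even a moderate HIM distance such as 0.08
# (within the range observed for this dataset) drives the kernel value near zero,
# which is consistent with large gamma sharpening class separation.
K = him_kernel_matrix([[0.0, 0.08],
                       [0.08, 0.0]], 1e3)
```

Such a precomputed kernel matrix can then be handed to an SVM implementation that accepts custom kernels, as *kernlab*'s `ksvm` does in R.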
![(a) Evolution of distances of the *D. melanogaster* development gene network time series in the Hamming/Ipsen-Mikhailov space with zoom (b) on the final timepoints and (c) evolution of the same HIM distances along 66 time points in the 4 stages Embryonic (E), Larval (L), Pupal (P) and Adult (A).[]{data-label="fig:time"}](./melanogasterH-IM.pdf "fig:"){height="30.00000%"} ![fig:](./melanogasterH-IMzoom.pdf "fig:"){height="30.00000%"} ![fig:](./melanogasterTime.pdf "fig:"){height="30.00000%"}
(a) (b) (c)
The HCC dataset {#ssec:hcc}
---------------
Publicly available at the Gene Expression Omnibus (GEO) <http://www.ncbi.nlm.nih.gov/geo> under Accession Number GSE6857, the HepatoCellular Carcinoma (HCC) dataset [@budhu08identification; @ji09microrna] collects 482 tissue samples from 241 patients affected by HCC, a well-studied pathology [@law11emerging; @gu12gene] in which the impact of microRNA (miRNA) is notably relevant [@volinia10reprogramming; @bandyopadhyay10development]. For each patient, a sample of cancerous hepatic tissue and a sample of surrounding non-cancerous hepatic tissue are available, hybridized on the Ohio State University CCC MicroRNA Microarray Version 2.0 platform, which collects the signals of 11,520 probes for 250 non-redundant human and 200 mouse miRNA. After a preprocessing phase including imputation of missing values [@troyanskaya01missing] and removal of the probes corresponding to non-human (mouse and control) miRNA, we obtain the dataset HCC of 240+240 paired samples described by 210 human miRNA, with a cohort of 210 male and 30 female patients. We then partitioned the whole dataset HCC into four subsets combining the sex and disease status phenotypes: the cancer tissues of the male patients (MT), the cancer tissues of the female patients (FT), and the two corresponding non-cancer tissue datasets (MnT, FnT). For each of these four phenotype combinations we generated a co-expression network on the 210 miRNA as vertices, inferred via absolute Pearson correlation, and we computed all mutual HIM distances. In particular, to show the possible effects of differing sample sizes, we computed 30 instances of the MT and MnT networks, each inferred from only 30 matched samples, and then averaged all the mutual HIM distances. One instance of MT and MnT is displayed as a hairball in Fruchterman-Reingold layout [@fruchterman91graph], together with the networks FT and FnT.
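As an illustration of the inference step, the co-expression adjacency used here (absolute Pearson correlation between miRNA expression profiles) can be sketched as follows; the function name and toy data are ours, and any thresholding or further filtering applied in the actual pipeline is omitted:

```python
import numpy as np

def coexpression_network(expr):
    """Weighted co-expression network from an expression matrix.

    expr: (n_samples, n_mirna) array; nodes are miRNA, edge weights are
    absolute Pearson correlations between expression profiles.
    Sketch of the inference step described in the text."""
    corr = np.corrcoef(expr, rowvar=False)  # (n_mirna, n_mirna) correlations
    adj = np.abs(corr)
    np.fill_diagonal(adj, 0.0)              # no self-loops
    return adj

# toy example: 30 samples, 5 "miRNA"
rng = np.random.default_rng(0)
expr = rng.normal(size=(30, 5))
net = coexpression_network(expr)
```

The resulting matrix is symmetric with weights in [0, 1], as required for the HIM comparison of weighted undirected graphs.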
The corresponding two-dimensional scaling plot [@cox01multidimensional] is shown in the right panel of Figure \[fig:hcc\_him\]. The four networks are widely separated, with orthogonal separations for the two phenotypes, and the values of the HIM distances between the networks support the known difference in the development of HCC between males and females: for instance, the FT network is closer to the MnT net (HIM=0.08) than to the MT and FnT nets (HIM=0.13 and 0.16, respectively). Note that the largest distance (HIM=0.23) is detected between the two non-tumoral networks MnT and FnT.
(Figure: hairball plots, in Fruchterman-Reingold layout, of the four miRNA co-expression networks MT and MnT (top row) and FT and FnT (bottom row), with the multidimensional scaling plot of their mutual HIM distances in the right panel.)
An expanded version of the example is shown in [@jurman12stability; @filosi13stability], where more networks are generated from the same dataset using different inference algorithms and a stability analysis is performed.
The Gulf Dataset {#ssec:gulf}
----------------
Part of the Kansas Event Data System, available at <http://vlado.fmf.uni-lj.si/pub/networks/data/KEDS/>, the Gulf Dataset collects, on a monthly basis, political events between pairs of countries, focusing on the Gulf region and the Arabian peninsula for the period 15 April 1979 to 31 March 1999, for a total of 240 months. Political events belong to 66 classes (including, for instance, "pessimist comment", "meet", "formal protest", "military engagement") and involve 202 countries. This dataset formally translates into a time series of 240 unweighted and undirected graphs on 202 nodes, for which we computed all $\frac{240\cdot 239}{2}$ mutual HIM distances. These distances are then used to project the 240 networks onto a plane through multidimensional scaling [@cox01multidimensional]: the resulting plot is displayed in Fig. \[fig:gulf\]. The months of the First Gulf War (July 1990 - April 1991) are close together and confined to the lower left corner of the plane, showing a high degree of mutual homogeneity and, at the same time, a marked difference from the graphs of all other months. This indicates that, at the onset of the conflict, the structure of diplomatic relations changed substantially and then remained very similar throughout the whole event. Note that the blue point (closest to the war-like period) corresponds to February 1998, the time of the Iraq disarmament crisis, when Iraqi President Saddam Hussein negotiated a deal with U.N. Secretary General Kofi Annan allowing weapons inspectors to return to Baghdad and preventing military action by the United States and Britain.
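The local ingredient of the HIM distance on these unweighted graphs is the normalized Hamming distance, i.e., the fraction of node pairs whose link status differs between the two graphs. A minimal sketch (our own helper, not the paper's code):

```python
import numpy as np

def hamming_distance(a1, a2):
    """Normalized Hamming distance between two graphs on the same N nodes,
    given their unweighted, undirected adjacency matrices: the fraction of
    ordered node pairs whose link status differs, so the value lies in [0, 1].
    Each undirected pair is counted twice, hence the N(N-1) normalization."""
    n = a1.shape[0]
    return np.abs(a1 - a2).sum() / (n * (n - 1))

# two 4-node graphs differing in exactly one (undirected) edge
g1 = np.zeros((4, 4)); g2 = np.zeros((4, 4))
g1[0, 1] = g1[1, 0] = 1
d = hamming_distance(g1, g2)  # 1 pair out of 6 differs -> 1/6
```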
![Planar HIM distance based multidimensional scaling plot of the monthly Gulf Dataset. Red dots correspond to the First Gulf War months (July 1990 - April 1991), grey points correspond to months outside that temporal window, and the blue point corresponds to February 1998, the month of the Iraq disarmament crisis.[]{data-label="fig:gulf"}](./him-war.pdf){width="70.00000%"}
The International Trade Network data {#ssec:wtn}
------------------------------------
As an application of the HIM distance to directed and weighted networks, we show four examples based on the International Trade Network (ITN) data, version 4.1, by Gleditsch [@gleditsch02expanded], available at [http://privatewww.essex.ac.uk/∼ksg/exptradegdp.html](http://privatewww.essex.ac.uk/∼ksg/exptradegdp.html), collecting estimates of trade flows between independent states (1948-2000) and GDP per capita of independent states (1950-2000). As noted in [@fronczak12statistical], due to differences in reporting procedures between countries, incongruences occur between the exports from $i$ to $j$ reported by $i$ and the corresponding imports reported by $j$: to avoid these issues, in our analysis we only use the figures reported as exports in the dataset.
In what follows, we extract four sets of countries and study the evolution of their trade subnetworks during the aforementioned period. In each example, having chosen a set of $N$ countries $C_1,\ldots,C_N$, we construct, for every year, the weighted directed network having $C_1,\ldots,C_N$ as nodes. A link from country $C_i$ to country $C_j$ represents the export from $C_i$ to $C_j$, and its weight $w_{ij}$ corresponds to the volume of the export flow. Then, after rescaling the link weights to the unit interval, we compute all mutual HIM distances among the yearly networks. Finally, using these pairwise HIM distances we construct a planar classical Multidimensional Scaling plot, transforming the networks into a set of points such that the distances between the points are approximately equal to the mutual HIM dissimilarities, using the methods in [@gower66some; @mardia78some; @cailliez83analytical; @cox01multidimensional] as implemented in R. The aim here is to connect the structural changes in the yearly trade networks with time periods and events that play a role in explaining such changes. Note that in [@fronczak12statistical] the authors show that bilateral trade fulfills the fluctuation-response theorem [@fronczak06fluctuation], stating that the average relative change in import (export) between two countries is the sum of the relative changes in their GDPs. This result implies that directed connections, *i.e.*, bilateral trade volumes, are characterized solely by the product of the trading countries’ GDPs.
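The projection step can be sketched with classical (Torgerson) multidimensional scaling via double centering of the squared distance matrix; this is a plain NumPy sketch, not the R implementation cited above:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) multidimensional scaling: embed the points in
    R^k so that Euclidean distances approximate the entries of D.
    Sketch of the projection step used for the HIM distance matrices."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]      # keep the top-k eigenvalues
    scale = np.sqrt(np.clip(vals[idx], 0.0, None))
    return vecs[:, idx] * scale

# distances of 3 collinear points at positions 0, 1, 2 on a line
D = np.array([[0.0, 1.0, 2.0], [1.0, 0.0, 1.0], [2.0, 1.0, 0.0]])
X = classical_mds(D, k=2)
```

For a Euclidean-realizable distance matrix such as this toy example, the embedding reproduces the input distances exactly; for HIM matrices the reproduction is only approximate.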
As a first example we present the case of the BRICS countries. Introduced in 2001, the acronym BRICS denotes the five nations Brazil, Russia, India, China and South Africa (Fig. \[fig:brics\](a)) which, although developing or newly industrialized countries, are distinguished by their large and fast-growing economies and by their significant influence on regional and global affairs. In Fig. \[fig:brics\] we show the bidimensional scaling of their trade networks for the years 1950–2000, with the HIM matrix as the distance constraint. As shown by the plot, three groups of years can be clearly distinguished, indicating that the corresponding networks are similar within each group but diverse across groups: the early years of recovery after WWII (until about 1963), the seventies and eighties, when the economies of the involved countries started to develop, and the nineties, when their growth began to accelerate.
A very similar situation occurs in the regional trade network among the South American countries (Fig. \[fig:southamerica\]), where the global behaviour is essentially driven by the two local giants Brazil and Argentina, and where the largest differences between the networks appear between the economic growth of the 90s and the suffering economies of the late 70s / early 80s, caused by the struggling political situations.
The case of the larger trade subnetwork of the top 20 world economies ranked by Gross Domestic Product 2012 (PPP) (Top20 for short), as listed by the World Bank <http://data.worldbank.org> and shown in Fig. \[fig:top20\], is not much different, with the notable exception that the networks of the 60s are more similar to those of the 70s and 80s, supporting a faster recovery of these economies after WWII than for the BRICS or the South American countries. Again, the 90s are remarkably separated from the previous periods, as a consequence of the fact that economic growth for high-income countries such as the United States, Japan, Singapore, Hong Kong, Taiwan, South Korea and Western Europe was steady and coupled with “an unprecedented extension and intensification of globalization in terms of the international integration of capital and product markets” [@crafts06world], thus causing a structural evolution of the trade networks of these countries, whose economies account for approximately 85% of the gross world product (GWP), 80% of world trade (including EU intra-trade), and two-thirds of the world population.
We conclude with a more local example: between 1975 and 1990, the civil war heavily damaged Lebanon’s economic infrastructure, reducing the role of the country as the major West Asian banking hub. The following period of relative peace stimulated economic recovery, also through an increasing flow of manufactured and farm exports. In this last example we consider the trading network $W$ between Lebanon and its three major economic partners, Saudi Arabia, Kuwait and the United Arab Emirates. In Fig. \[fig:leb\_nets\] we show 4 examples of the trade networks with the Lebanon export figures. In the bidimensional scaling plot of Fig. \[fig:lebanon\], the trajectory of the evolution of the $W$ graphs across the decades (50s, 60s, 70s, 80s and 90s) emerges even more clearly than in the previous cases. The rightmost points in the plot correspond to the years 1977–1990, in the middle of the civil war in Lebanon, when a contraction of the trading flow was recorded. Finally, in the plot of Fig. \[fig:lebanonex\], we show the relation between the volume of the export flow of Lebanon and the curve of the HIM distance of $W_i$ from $W_{1950}$ for $i\in\{1951,\ldots,2000\}$. The Pearson correlation between the two curves is 0.71, and their shapes show that one curve follows the trend of the other with a temporal shift of about a decade.
![The trade network between Lebanon, Saudi Arabia, Kuwait and United Arab Emirates in 1951, 1974, 1985 and 1996; red links indicate Lebanon export flow, with the corresponding volume figure. Edge width is proportional to export flow volume.[]{data-label="fig:leb_nets"}](./g1951c.pdf "fig:"){height="3cm"} ![](./g1974c.pdf "fig:"){height="3cm"} ![](./g1985c.pdf "fig:"){height="3cm"} ![](./g1996c.pdf "fig:"){height="3cm"}
1951 1974 1985 1996
![(left) Maps and flags of the countries in the Lebanon trade net. (top right) Multidimensional Scaling of the HIM distances among the intertrade networks of the Lebanon trade net countries in the periods 1950–1961 (black), 1962–1971 (green), 1971–1981 (gray), 1982–1990 (blue), 1991–2000 (red).[]{data-label="fig:lebanon"}](./sa.pdf "fig:"){height="0.75cm"} ![](./kw.pdf "fig:"){height="0.75cm"} ![](./ae.pdf "fig:"){height="0.75cm"} ![](./lb.pdf "fig:"){height="0.75cm"}
![Lebanon export flow in the years 1950–2000 (millions of USD, solid multicolor line, left y-axis) and curve of $\textrm{HIM}(W_i,W_{1950})$ (dashed orange line).[]{data-label="fig:lebanonex"}](./lebanon_export.pdf){height="10cm"}
The MEG Biomag 2010 competition 1 dataset {#ssec:meg}
-----------------------------------------
The challenge dataset for the 2010 Biomag competition was derived from [@vangerven09attention] and consisted in monitoring 4 subjects by MEG in a set of trials in which a fixation cross was presented to the subject, after which, at regular intervals, a cue indicated which direction, either left or right, they had to covertly attend to during the next 2500ms. After this period, a target appeared in the indicated direction. Brain activity was recorded from 500ms before cue offset to 2500ms after cue offset through 274 sensors at 300Hz; a total of 128 trials per condition were collected, for 256 trials per subject. The MEG data were first preprocessed as explained in [@vangerven09attention]: the raw signals of each trial are independently decomposed with a multitaper frequency transformation in the 5-40 Hz interval with 2 Hz bin width. The results of the frequency transforms are used to construct a coherence network for each trial, which is then rescaled so that its eigenvalues lie between -1 and +1. After rescaling, on a separate instance of the dataset, the networks are subjected to a network deconvolution procedure as explained in [@feizi13network]. Finally, an Elastic Net [@zou05regularization] linear regression using the Lasso [@tibshirani96regression] in two phases with the mixed $\ell 1\ell 2$ algorithm [@demol09elastic; @mosci10solving] was applied, resulting in a final dataset of 252 covariance networks on 274 nodes, equally distributed between the labels "right" and "left". A suite of Support Vector Machines from *mlpy* <http://mlpy.fbk.eu> [@albanese12mlpy] with different $\textrm{HIM}_\xi$ kernels was tested, together with the linear kernel L-SVM, Random Forest RF [@breiman01random] and Elastic-Net EN as baselines, on a set of 100 Monte Carlo resamplings of stratified training (84+84 networks) and test (42+42 networks) sets of both the deconvolved and the original dataset, yielding the performances shown in Tab. \[tab:acc\].
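As a sketch of how a distance matrix can feed a precomputed-kernel SVM, one common choice is a Gaussian-type map $K = \exp(-D^2/2\sigma^2)$; the transform is an assumption here (the exact distance-to-kernel construction used with $\textrm{HIM}_\xi$ is not restated in this section), and scikit-learn replaces *mlpy* purely for illustration:

```python
import numpy as np
from sklearn.svm import SVC

def distance_to_kernel(D, sigma=1.0):
    """Turn a pairwise distance matrix into a Gaussian-type kernel matrix.
    The choice of transform and of sigma are assumptions for illustration."""
    return np.exp(-D ** 2 / (2 * sigma ** 2))

# toy surrogate: 6 'networks' summarized by a scalar feature each, two classes
x = np.array([0.0, 0.1, 0.2, 1.0, 1.1, 1.2])
y = np.array([0, 0, 0, 1, 1, 1])
D = np.abs(x[:, None] - x[None, :])          # pairwise distances
K = distance_to_kernel(D)                    # precomputed kernel matrix
clf = SVC(kernel="precomputed").fit(K, y)    # SVM on the kernel, not on features
pred = clf.predict(K)                        # kernel rows vs. training points
```

With a real HIM matrix, rows of the test-vs-train kernel block replace `K` at prediction time.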
Non-Deconvolved Deconvolved
------------------------------ ----------------- -------------------
L-SVM 0.65 (0.02) 0.72 (0.02)
$\textrm{HIM}_0$-SVM 0.67 (0.03) **0.74 (0.03)**
$\textrm{HIM}_{+\infty}$-SVM 0.56 (0.04) 0.48 (0.05)
$\textrm{HIM}$-SVM 0.61 (0.04) 0.63 (0.04)
RF 0.71 (0.01) 0.70 (0.02)
EN 0.71 (0.03) **0.74 (0.03)**
: Average Classification Accuracies for Deconvolved and Non-Deconvolved 11 Hz Networks, standard errors in brackets. As a baseline, the authors of [@kia13discrete] reach 0.67 accuracy using the Elastic Net with summary statistics of spatio-temporal activations and 0.73 using a 2-D DCT basis.\
\[tab:acc\]
As a general consideration, the deconvolution procedure helps improve classification. The best accuracy is reached by the kernel with only the Hamming component ($\textrm{HIM}_0$), which performs better than the baseline methods, while the Ipsen-Mikhailov component ($\textrm{HIM}_{+\infty}$) performs very poorly on both versions of the dataset. All intermediate values of $\xi$ (including $\xi=1$, reported in the table) give decreasing performance for increasing values of $\xi$, implying that the topological features of the graph are not useful for classification in this task, possibly because of the symmetric nature of the task.
The results obtained with this off-the-shelf method are comparable with the range of performances obtained by far more complex and properly targeted approaches [@bahramisharif10covert; @signoretto12classification; @kia13discrete], representing a promising starting point for an effective use of the HIM kernel, for instance coupled with other graph kernels or with feature selection techniques. An extended version of this example can be found in [@furlanello13sparse].
Conclusions {#sec:conclusion}
===========
We introduced $\textrm{HIM}_{\xi}$, a novel family of distances between graphs on the same nodes, possibly directed and weighted, aimed at combining the local and the global aspects of network comparison, *i.e.*, the difference between matching vertices and the difference in spectral structure. After presenting definitions and properties, we provided a range of applications in several fields, from functional genomics to economics, to show the usefulness of the proposed solution. In particular, we highlighted the effectiveness of the HIM metric when used as a kernel function for classification purposes, *e.g.*, in Support Vector Machines, applied to heterogeneous data in diverse areas. A final comment on computational feasibility: the costly part of computing the HIM distance is the extraction of the spectrum of the Laplacian matrices of the two compared graphs. This task is CPU intensive and requires a fair amount of RAM, but allows for wide parallelization; nonetheless, huge graphs should be dealt with on HPC facilities. As an example, the largest graphs we compared (using a Python implementation based on the NumPy library) have about 40,000 nodes: on a workstation with 48 Intel Xeon E5649 CPUs at 2.53GHz and 48 GB of RAM we were able to run 4 parallel processes, which took about 36 hours to compute the mutual distances between a set of 45 networks, for a total of 990 comparisons.
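For reference, the two building blocks mentioned above can be sketched as follows: the Laplacian spectrum extraction (the costly step) and the glocal combination of a Hamming component $H$ and an Ipsen-Mikhailov component $IM$, here written as $\textrm{HIM}_\xi = \sqrt{H^2+\xi\, IM^2}/\sqrt{1+\xi}$ so that the value stays in $[0,1]$; the normalization factor is our reading of the family's definition:

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the graph Laplacian L = D - A: the CPU-intensive
    step in computing the Ipsen-Mikhailov component."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.eigvalsh(lap)          # ascending eigenvalues

def him(h, im, xi=1.0):
    """Glocal combination of the Hamming (h) and Ipsen-Mikhailov (im)
    components; the 1/sqrt(1+xi) factor keeps the result in [0, 1]."""
    return np.sqrt(h ** 2 + xi * im ** 2) / np.sqrt(1.0 + xi)

# complete graph K_4: spectrum (0, N, ..., N) as in the appendix, with N=4
k4 = np.ones((4, 4)) - np.eye(4)
spec = laplacian_spectrum(k4)
```

The $K_4$ spectrum $(0, 4, 4, 4)$ matches the $\textrm{spec}(L(\mathcal{F}_N))$ formula of the appendix.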
Uniqueness of $\pmb{\overline{\gamma}}$ {#sec:appendix}
=======================================
Fix the number $N$ of nodes, and consider the two extremal networks $\mathcal{E}_N$ and $\mathcal{F}_N$, whose Laplacian spectrum is respectively $$\textrm{spec}(L(\mathcal{E}_N)) = ( \underbrace{0,\cdots,0}_N)
\quad\textrm{and}\quad
\textrm{spec}(L(\mathcal{F}_N)) = ( 0,\underbrace{N,\cdots,N}_{N-1})\ ,$$ so that $\omega_i=0$ for the empty network and $\omega_i=\sqrt{N}$ for the fully connected network, for $i=1,\ldots,N-1$.
The Lorentz distribution for the empty network is thus $$\begin{split}
\rho_{\mathcal{E}_N}(\omega,\gamma) &= K\sum_{i=1}^{N-1} \frac{\gamma}{\gamma^2+(\omega-\omega_i)^2} \\
&= \frac{K\gamma (N-1)}{\gamma^2+\omega^2}\ ,
\end{split}$$ where $K$ can be computed as $$\begin{split}
K &= \frac{1}{\displaystyle{\int_0^{+\infty} \frac{\gamma (N-1)}{\gamma^2+\omega^2} \textrm{d}\omega}} \\
&= \frac{1}{(N-1)\left[ \arctan\left(\frac{\omega}{\gamma}\right)\right]_0^{+\infty}} \\
&= \frac{1}{\displaystyle{\frac{\pi}{2}}(N-1)} \\
&= \frac{2}{(N-1)\pi}\ ,
\end{split}$$ so that $$\label{eq:rhoE}
\begin{split}
\rho_{\mathcal{E}_N}(\omega,\gamma) &= \frac{K\gamma (N-1)}{\gamma^2+\omega^2} \\
&= \frac{2\gamma}{\pi(\gamma^2+\omega^2)}\ .
\end{split}$$
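The normalization can be checked numerically: $\rho_{\mathcal{E}_N}$ should integrate to 1 on $[0,+\infty)$ for any $\gamma>0$. A sketch with SciPy's quadrature (the choice $\gamma=0.4$ is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def rho_empty(omega, gamma):
    """Lorentzian density of the empty network E_N from the derivation
    above: 2*gamma / (pi * (gamma^2 + omega^2)), independent of N."""
    return 2.0 * gamma / (np.pi * (gamma ** 2 + omega ** 2))

# integral of the density over [0, +inf) should equal 1
val, err = quad(rho_empty, 0, np.inf, args=(0.4,))
```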
For the fully connected network we have $$\begin{split}
\rho_{\mathcal{F}_N}(\omega,\gamma) &= K\sum_{i=1}^{N-1} \frac{\gamma}{\gamma^2+(\omega-\omega_i)^2} \\
&= K\sum_{i=1}^{N-1} \frac{\gamma}{\gamma^2+(\omega-\sqrt{N})^2} \\
&= \frac{\gamma K (N-1) }{\gamma^2+\omega^2+N-2\omega\sqrt{N}}\ ,
\end{split}$$ where $K$ is $$\begin{split}
K &= \frac{1}{\gamma (N-1) \displaystyle{\int_0^{+\infty} \frac{\textrm{d}\omega}{\gamma^2+\omega^2+N-2\omega\sqrt{N}}}} \\
&= \frac{1}{\frac{\gamma (N-1)}{\gamma} \left[ \arctan\left(\frac{\omega-\sqrt{N}}{\gamma}\right)\right]_0^{+\infty}} \\
&= \frac{1}{(N-1)\left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right)}\ ,
\end{split}$$ so that $$\begin{split}
\rho_{\mathcal{F}_N}(\omega,\gamma) &= \frac{\gamma K (N-1) }{\gamma^2+\omega^2+N-2\omega\sqrt{N}}\\
&= \frac{1}{(N-1)\left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right)} \cdot \frac{\gamma (N-1) }{\gamma^2+\omega^2+N-2\omega\sqrt{N}}\\
&= \frac{\gamma}{ \left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right) \left(\gamma^2+\omega^2+N-2\omega\sqrt{N}\right)} \ .
\end{split}$$
Thus, we expand Eq. \[eq:gamma\_implicit\] as follows: $$\label{eq:gamma_explicit1}
\begin{split}
1 &= \epsilon_\gamma(\mathcal{E}_N, \mathcal{F}_N) \\
&= \sqrt{\int_0^\infty \left(\rho_{\mathcal{E}_N}(\omega,\gamma)-\rho_{ \mathcal{F}_N }(\omega,\gamma)\right)^2 \textrm{d}\omega}\\
&= \sqrt{\displaystyle{\int_0^\infty \left(
\frac{2\gamma}{\pi(\gamma^2+\omega^2)} -
\frac{\gamma}{ \left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right) \left(\gamma^2+\omega^2+N-2\omega\sqrt{N}\right)}
\right)^2 \textrm{d}\omega}
}
\\
&= \sqrt{
\displaystyle{ \int_0^\infty A^2 \textrm{d}\omega} +
\displaystyle{ \int_0^\infty B^2 \textrm{d}\omega} -
2\displaystyle{ \int_0^\infty AB \textrm{d}\omega}
}\ ,
\end{split}$$ where $$\begin{split}
A & = \frac{2\gamma}{\pi(\gamma^2+\omega^2)} \\
B &= \frac{\gamma}{ \left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right) \left(\gamma^2+\omega^2+N-2\omega\sqrt{N}\right)}\ .
\end{split}$$ The three terms in Eq. \[eq:gamma\_explicit1\] can be expanded as follows: $$\label{eq:A}
\begin{split}
\displaystyle{\int_0^{+\infty}} A^2 \textrm{d}\omega &= \displaystyle{\int_0^{+\infty}} \left(\frac{2\gamma}{\pi(\gamma^2+\omega^2)}\right)^2 \textrm{d}\omega \\
&= \frac{4\gamma^2}{\pi^2} \displaystyle{\int_0^{+\infty}} \frac{\textrm{d}\omega}{(\gamma^2+\omega^2)^2} \\
&= \frac{4\gamma^2}{\pi^2} \frac{1}{2\gamma^3} \left[ \frac{\gamma\omega}{\gamma^2+\omega^2} + \arctan\left(\frac{\omega}{\gamma} \right) \right]_0^{+\infty} \\
&= \frac{2}{\gamma\pi^2}\left[\frac{\pi}{2}\right] \\
&= \frac{1}{\pi\gamma}\ ;
\end{split}$$
$$\label{eq:B}
\begin{split}
\displaystyle{\int_0^{+\infty}} B^2 \textrm{d}\omega &= \displaystyle{\int_0^{+\infty}} \left( \frac{\gamma}{ \left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right) \left(\gamma^2+\omega^2+N-2\omega\sqrt{N}\right)} \right)^2 \textrm{d}\omega \\
&= \displaystyle{\int_0^{+\infty}} \frac{\gamma^2}{ \left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right)^2 \left(\gamma^2+\omega^2+N-2\omega\sqrt{N}\right)^2} \textrm{d}\omega \\
&= \frac{\gamma^2}{\left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right)^2} \displaystyle{\int_0^{+\infty} \frac{\textrm{d}\omega}{\left(\gamma^2+\omega^2+N-2\omega\sqrt{N}\right)^2 } } \\
&= \frac{\gamma^2}{2\gamma^3\left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right)^2} \left[ \frac{\gamma(\omega-\sqrt{N})}{\gamma^2+(\omega-\sqrt{N})^2} + \arctan\left( \frac{\omega-\sqrt{N}}{\gamma} \right) \right]_0^{+\infty} \\
&= \frac{1}{2\gamma\left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right)^2} \left( \frac{\pi}{2} + \frac{\gamma\sqrt{N}}{\gamma^2+N} + \arctan\left( \frac{\sqrt{N}}{\gamma} \right) \right) \ ;
\end{split}$$
$$\label{eq:AB}
\begin{split}
-2\displaystyle{\int_0^{+\infty}} AB \textrm{d}\omega &= -2 \displaystyle{\int_0^{+\infty}} \frac{2\gamma}{\pi(\gamma^2+\omega^2)} \frac{\gamma}{ \left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right) \left(\gamma^2+\omega^2+N-2\omega\sqrt{N}\right)} \textrm{d}\omega \\
&= \frac{-2\cdot\gamma\cdot 2\gamma}{\pi\left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right) } \displaystyle{\int_0^{+\infty}} \frac{\textrm{d}\omega}{ (\gamma^2+\omega^2) \left(\gamma^2+\omega^2+N-2\omega\sqrt{N}\right) } \\
&= \frac{-4\gamma}{ \left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right) \pi ( 4\gamma^2+N) } \left[
\frac{\gamma}{\sqrt{N}} \log\frac{\gamma^2+\omega^2}{\gamma^2+\omega^2+N-2\omega\sqrt{N}} + \right. \\
&\phantom{=} \left. \arctan\left( \frac{\omega-\sqrt{N}}{\gamma}\right) + \arctan\left( \frac{\omega}{\gamma} \right)
\right]_0^{+\infty}\\
&= \frac{-4\gamma}{ \left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right) \pi ( 4\gamma^2+N) } \left[
\frac{\pi}{2} +\frac{\pi}{2} - \frac{\gamma}{\sqrt{N}}
\log\frac{\gamma^2}{\gamma^2+N} + \right. \\
&\phantom{=} \left. \arctan\left( \frac{\sqrt{N}}{\gamma}\right)
\right]\ .
\end{split}$$
Since $\epsilon_\gamma\geq 0$, squaring both sides and plugging Eqs. \[eq:A\],\[eq:B\],\[eq:AB\] into Eq. \[eq:gamma\_explicit1\], we obtain: $$\begin{split}
1 &= \epsilon^2_\gamma(\mathcal{E}_N, \mathcal{F}_N) \\
&= \frac{1}{\pi\gamma} + \frac{1}{2\gamma\left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right)^2} \left( \frac{\pi}{2} + \frac{\gamma\sqrt{N}}{\gamma^2+N} + \arctan\left( \frac{\sqrt{N}}{\gamma} \right) \right) - \\
&\phantom{=} \frac{4\gamma}{ \left( \frac{\pi}{2} + \arctan\left(\frac{\sqrt{N}}{\gamma}\right)\right) \pi ( 4\gamma^2+N) } \left[
\pi - \frac{\gamma}{\sqrt{N}}
\log\frac{\gamma^2}{\gamma^2+N} + \arctan\left( \frac{\sqrt{N}}{\gamma}\right) \right] \ .
\end{split}$$
Consider now the function $f(N,\gamma)=\epsilon_\gamma(\mathcal{E}_N, \mathcal{F}_N) -1$: for a fixed value of $N$, it is a monotonically decreasing function of $\gamma$, so Eq. \[eq:gamma\_implicit\] has a unique solution $\overline{\gamma}$. In Fig. \[fig:gammaN\] (a) and (b) we display the situation for $N$=5, 10 and 10000.
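The unique root $\overline{\gamma}$ can also be located numerically without the closed form, by integrating the squared difference of the two Lorentzian densities directly and root-finding on $f$; a sketch with SciPy (the bracketing interval and tolerances are our choices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def f(gamma, n):
    """f(N, gamma) = epsilon_gamma(E_N, F_N) - 1, computed by numerically
    integrating the squared difference of the densities of the empty and
    fully connected networks on n nodes (see the derivation above)."""
    k_f = 1.0 / (np.pi / 2 + np.arctan(np.sqrt(n) / gamma))
    rho_e = lambda w: 2 * gamma / (np.pi * (gamma ** 2 + w ** 2))
    rho_f = lambda w: k_f * gamma / (gamma ** 2 + (w - np.sqrt(n)) ** 2)
    integrand = lambda w: (rho_e(w) - rho_f(w)) ** 2
    val, _ = quad(integrand, 0, np.inf, limit=200)
    return np.sqrt(val) - 1.0

# f diverges for gamma -> 0+ and tends to -1 for large gamma,
# so the unique root lies inside the bracket
gamma_bar = brentq(lambda g: f(g, 10), 0.05, 5.0)
```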
![(a) Behaviour of $f(\gamma,N)$ for $N$=5, 10 and 10000, in the interval $\gamma\in (0,200]$ and (b) zoomed in the interval $\gamma\in [0.35,0.5]$ with the solutions of Eq. \[eq:gamma\_implicit\].[]{data-label="fig:gammaN"}](./f_gamma_N.pdf "fig:"){width="40.00000%"}
![(a) Behaviour of $f(\gamma,N)$ for $N$=5, 10 and 10000, in the interval $\gamma\in (0,200]$ and (b) zoomed in the interval $\gamma\in [0.35,0.5]$ with the solutions of Eq. \[eq:gamma\_implicit\].[]{data-label="fig:gammaN"}](./f_gamma_N_zoom.pdf "fig:"){width="40.00000%"}
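Numerically, the monotonicity of $f$ in $\gamma$ means that $\overline{\gamma}$ can be found by simple bisection once a sign change is bracketed. The Python sketch below illustrates the procedure with a placeholder decreasing function standing in for the explicit expression of $\epsilon_\gamma(\mathcal{E}_N, \mathcal{F}_N)-1$ derived above:

```python
def bisect_root(f, lo, hi, tol=1e-9):
    """Return the unique root of a monotonically decreasing f on [lo, hi]."""
    assert f(lo) > 0 > f(hi), "the root must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid  # root lies to the right of mid
        else:
            hi = mid  # root lies to the left of mid
    return 0.5 * (lo + hi)

# Placeholder for f(N, gamma) = eps_gamma - 1: any decreasing function
# with a bracketed sign change is handled the same way.
f = lambda g: 1.0 / (2.0 * g) - 1.0  # root at gamma = 0.5
print(bisect_root(f, 0.1, 10.0))
```

Since $f$ is strictly decreasing, the bracketing condition $f(\text{lo})>0>f(\text{hi})$ guarantees exactly one root in the interval.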
Uniqueness of $\pmb{\overline{\gamma}^\uparrow}$ {#sec:appendixb}
================================================
The spectra of the laplacian matrices of the two extremal graphs $\hat{\mathcal{E}}_N^\uparrow$ and $\hat{\mathcal{F}}_N^\uparrow$ are now $$\textrm{spec}(L(\hat{\mathcal{E}}_N^\uparrow)) = ( \underbrace{0,\cdots,0}_{2N})
\quad\textrm{and}\quad
\textrm{spec}(L(\hat{\mathcal{F}}_N^\uparrow)) = ( 0,\underbrace{N-2,\cdots,N-2}_{N-1},\underbrace{N,\cdots,N}_{N-1},2N-2)\ .$$
It follows that $$K_{\hat{\mathcal{E}}_N^\uparrow} = \frac{2}{(2N-1)\pi}$$ and $$K_{\hat{\mathcal{F}}_N^\uparrow} = \frac{1}{ (2N-1)\frac{\pi}{2} + (N-1)\left(\arctan\frac{\sqrt{N-2}}{\gamma} + \arctan\frac{\sqrt{N}}{\gamma}\right) +\arctan\frac{\sqrt{2N-2}}{\gamma}} \ .$$
Thus the equation $$\epsilon_{\gamma}(\hat{\mathcal{E}}^\uparrow,\hat{\mathcal{F}}^\uparrow) = 1$$ (whose solution is the normalizing factor $\overline{\gamma}^\uparrow$) reads as follows: $$1 = \sqrt{
\int_0^{+\infty}
\left[
\frac{2\gamma}{\gamma^2+\omega^2} -
\frac{
\gamma
\left(
\frac{N-1}{\gamma^2+(\omega-\sqrt{N-2})^2} +
\frac{N-1}{\gamma^2+(\omega-\sqrt{N})^2} +
\frac{1}{\gamma^2+(\omega-\sqrt{2N-2})^2}
\right)
}
{
(2N-1)\frac{\pi}{2} + (N-1)\left(\arctan\frac{\sqrt{N-2}}{\gamma} + \arctan\frac{\sqrt{N}}{\gamma}\right) +\arctan\frac{\sqrt{2N-2}}{\gamma}
}
\right]^2
\textrm{d}\omega
}\ .
\label{eq:gammahat}$$
We now introduce some shorthand: for $T,U\in\mathbb{R}$, define $$\int_0^{+\infty}{ \frac{\textrm{d}\omega}{(\gamma^2+(\omega-\sqrt{T})^2)(\gamma^2+(\omega-\sqrt{U})^2)}} =
\begin{cases}
M(T) & \text{if $T=U$,}
\\
L(T,U) & \text{if $T\not= U$}\ .
\end{cases}$$ Then, $$M(T) = \frac{
\frac{1}{2}\left(
\gamma^2\arctan\frac{\sqrt{T}}{\gamma} + T \arctan\frac{\sqrt{T}}{\gamma} + \gamma\sqrt{T}
\right)
}{
\gamma^5+T\gamma^3
}
+\frac{\pi}{4\gamma^3},$$ and $$L(T,U) =
\frac{
-\log\left(\gamma^{2} + U\right)+ \log\left(\gamma^{2} + T\right)
}{{\left(4 \, \gamma^{2} + T + 3 \, U\right)} \sqrt{T} - {\left(4 \, \gamma^{2} + 3 \, T + U\right)} \sqrt{U}} +
\frac{\pi+ \arctan\left(\frac{\sqrt{T}}{\gamma}\right) + \arctan\left(\frac{\sqrt{U}}{\gamma}\right)}
{4 \, \gamma^{3} + T \gamma - 2 \, \sqrt{T} \sqrt{U} \gamma + U \gamma} \ .$$ To shorten notation further, define $$Z = \frac{2\gamma}{\pi}\ ,\quad W=\gamma (N-1) K_{\hat{\mathcal{F}}_N^\uparrow}\ ,\quad W' = \frac{W}{N-1}\ .$$
With these definitions, Eq. \[eq:gammahat\] becomes $$\begin{split}
1 &= Z^2 M(0) + W^2 M(N-2) + W^2 M(N) + W'^2 M(2N-2) \\
&\phantom{=}- 2ZW L(0,N-2) - 2ZWL(0,N) -2 ZW' L(0,2N-2) \\
&\phantom{=}+ 2 W^2 L(N-2,N) +2 WW' L(N-2,2N-2) +2WW' L(N,2N-2)\ .
\end{split}
\label{eq:gammahatexp}$$
As in the undirected case, for each $N$ Eq. \[eq:gammahatexp\] has a unique solution $\overline{\gamma}^\uparrow$, whose value is quite close to $\overline{\gamma}$, as shown in Table \[tab:gammas\].
[cc]{}
------- ----------- -----------
5 0.4272836 0.3866861
10 0.4517012 0.4300291
50 0.4752742 0.4704579
100 0.4777976 0.4753463
500 0.4787492 0.4782538
1000 0.4785596 0.4783119
10000 0.4779060 0.4778813
------- ----------- -----------
|
Q:
How do I store the final result of a Promise in a variable in JavaScript?
I'm a JavaScript beginner (day three). I want to store the result of a Promise in a variable; I've been researching this since yesterday, but I can't figure out how.
[What I want to do]
Display the list (array) of project files saved in a folder side by side in the browser.
[What I'm stuck on]
I want to store the value obtained by an async function (the array of projects) in a variable, but I couldn't.
When I console.log it, it shows Array(2) and I can see the contents, but when I actually try to access an element (for example gli[0]), it prints undefined. Checking its length also gives 0.
It seemed strange that I could see the contents but not retrieve them. Storing it in a global variable gave the same result.
[What I tried / researched]
- I read that rather than storing the async function's result in a variable, it's better to do the processing inside the async function, so I moved all the processing inside it. The processing ran, but the images were not displayed. → failed
(Maybe a conflict with the overlay?)
I don't yet fully understand the concept of Promises: even though I can access the result with .then, no matter how much I researched, I couldn't understand why that result can't be stored in a variable.
Is wanting to store the result of a Promise in a variable and use it outside the function simply the wrong way to use Promises?
And if there is a correct way, how should I write it?
I'd be grateful if someone could explain.
[Code]
<script>
async function get_list(){
let li=await eel.projectlist()();// returns a list object (stored in li)
var txt =""
for (let i = 0; i < li.length; i++){
gli.push(li[i])
}
return li
}
window.onload = function() {
gli=[];
get_list().then(value => console.log(value));
console.log(gli)
for (let i = 0; i < gli.length; i++) {
console.log(gli[i])
var txt = txt+`<div class="col-lg-3 col-md-6 col-sm-6 work"> <a href="images/work-8.jpg" class="work-box"> <img src="images/work-8.jpg">
<div class="overlay">
<div class="overlay-caption">
<h5>Project Name</h5>
<p>${gli[i]}</p>
</div>
</div>
<!-- overlay -->
</a> </div>`;}
document.getElementById("message").innerHTML =txt;
}
</script>
A:
I think the confusion in your implementation comes from not yet having a full grasp of synchronous vs. asynchronous processing in JavaScript. First, here is a rewritten version of the code (which should work):
async function get_list() {
// the resolved value is available as the first argument of .then() after get_list() is called
return await eel.projectlist()();
}
window.onload = function () {
get_list().then(function (gli) {
var txt = "";
for (let i = 0; i < gli.length; i++) {
txt =
txt +
`<div class="col-lg-3 col-md-6 col-sm-6 work"> <a href="images/work-8.jpg" class="work-box"> <img src="images/work-8.jpg">
<div class="overlay">
<div class="overlay-caption">
<h5>Project Name</h5>
<p>${gli[i]}</p>
</div>
</div>
<!-- overlay -->
</a> </div>`;
}
document.getElementById("message").innerHTML = txt;
});
};
To get a value out of a Promise, you need to chain onto it. In this question, the get_list function returns a Promise, so to get the value you chain .then() and work with the value inside the callback.
get_list().then(function (value) {
console.log(value); // the value is available through the Promise chain
})
Next, because Promises are processed asynchronously, the code is not necessarily executed in the order in which it was written.
get_list().then(function () {
console.log("hello from promise chain!");
})
console.log("hello from out of promise chain!")
When you run this, the logs appear in the following order:
"hello from out of promise chain!"
"hello from promise chain!"
A Promise's .then() callback is scheduled by the JavaScript runtime to run only after the current synchronous code has finished, so the synchronous log always appears first here. The key point is that by writing your processing inside the Promise chain's callback, you are guaranteed to be working with the completed result of the asynchronous operation.
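The same pattern can also be written with async/await, which is syntax over the same Promise chain. Below is a self-contained sketch; the function names are illustrative, and the asynchronous source is simulated with a plain Promise where the question's eel.projectlist()() would otherwise go:

```javascript
// Stand-in for the question's eel.projectlist()(): resolves with an array later.
function projectlist() {
  return new Promise((resolve) =>
    setTimeout(() => resolve(["proj-a", "proj-b"]), 10)
  );
}

async function render() {
  const gli = await projectlist(); // pauses until the Promise resolves
  return gli.map((name) => `<p>${name}</p>`).join("");
}

// The result is still only usable inside the chain (or another async function).
render().then((html) => console.log(html)); // logs "<p>proj-a</p><p>proj-b</p>"
```

Note that render() itself still returns a Promise: async/await does not let you escape the chain, it only makes the code inside it read sequentially.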
If anything in this answer is unclear, spend a little more time studying Promises.
Reference:
JavaScript Promise book: https://azu.github.io/promises-book/
|
Isotopic separation of [(14)N]- and [(15)N]aniline by capillary electrophoresis using surfactant-controlled reversed electroosmotic flow.
Separation of isotopically labeled [(14)N]- and [(15)N]aniline was achieved using capillary electrophoresis based on the isotopic effect on pK(a). The effects of the buffer co-ion, pH, and electroosmotic mobility on the resolution are investigated in this paper. Electroosmotic flow (EOF) was controlled using the zwitterionic surfactant Rewoteric AM CAS U as a buffer additive. The resultant EOF was anodic (reversed) and low in magnitude (0.6 × 10(-4) cm(2)/(V·s)). The resolution of [(14)N]- and [(15)N]aniline was 1.22. Addition of a cationic surfactant, cetyltrimethylammonium bromide, to the zwitterionic surfactant increased the magnitude of the anodic EOF. This EOF improved the resolution to 1.33 based on mobility counterbalance. |
3.5 tennis in the Greater Boston area
Hey, it's that time of the year again. I'm looking for a hitting partner to get back into the groove. I have a flexible schedule and mostly play in Brighton, Brookline or Cambridge but can drive anywhere within reason.
I'm a 3.5-4.0 player who can play at the Babson College tennis courts in Wellesley. I don't have a vehicle, so I can't get to any other courts unfortunately. I'm available to play anytime this weekend and most of next week.
I think I'm a 3.0 or 3.5 and work in Cambridge usually from 8 to 5 and can play around the Boston or Cambridge area. I have played at Tufts University in Medford. I can meet you and play if there's parking around the area. Let me know. |
Cannabinoid receptor antagonism and inverse agonism in response to SR141716A on cAMP production in human and rat brain.
The effects of cannabinoid drugs on cAMP production were examined in mammalian brain. The cannabinoid receptor agonist (R)-(+)-[2,3-dihydro-5-methyl-3-[(4-morpholinyl)methyl]pyrrolo[1,2,3,-d,e-1,4-benzoxazin-6-yl]-(1-naphthalenyl) methanone (WIN55,212-2) decreased forskolin-induced cAMP accumulation in a concentration-dependent manner (10(-8)-10(-5) M) in membranes from several rat and human brain regions, this effect being antagonized by 10(-5) M N-(piperidin-1-yl)-5-(4-chlorophenyl)-1-(2,4-dichlorophenyl)-4-methyl-1H-pyrazole-3-carboxamide (SR141716A). Furthermore, high micromolar concentrations of SR141716A evoked a dose-dependent increase in basal cAMP in rat cerebellum and cortex, as well as in human frontal cortex. This effect was antagonized by WIN55,212-2 and abolished by N-ethylmaleimide, consistent with the involvement of cannabinoid CB(1) receptors through the activation of G(i/o) proteins. These results suggest a ligand-independent activity for cannabinoid CB(1) receptor signaling cascade in mammalian brain. |
The novel bi-level vacuum-actuated test fixture disclosed in United States utility patent application Ser. No. 586,010, invented by Golder et al and assigned to the same assignee as the instant invention, incorporated herein by reference, includes an electronic circuit device receiving face that is mounted for movement relative to a fixed probe support plate in such a way as to define a vacuum chamber therebetween. The probe support plate has mounted thereto a plurality of spring-loaded probes having ends that are constrained to lie on a first level defining an in-circuit testing array, and has mounted thereto a plurality of spring-loaded probes having ends that are constrained to lie on a second level different from the first level defining a functional testing array. A plurality of coil springs are mounted between and abutting the probe support plate and the electronic circuit board receiving face, and a plurality of spring-loaded buttons are mounted to the probe support plate. A bi-level vacuum source is operatively coupled to the vacuum chamber for selectively providing first and second preselected vacuum levels thereto. The first vacuum level is selected to have a magnitude greater than the combined resilient force provided by the plurality of coil springs but less than the resilient force of the coil springs and the spring-loaded buttons. The second preselected vacuum level is selected to have a magnitude greater than the combined resilient force of both the plurality of coil springs and of the spring-loaded buttons. Whenever the first, and lower, vacuum level is applied to the vacuum chamber, an electronic circuit device mounted to its receiving face is moved therewith into mechanical and electrical contact with the first plurality of spring-loaded probes as a result of a dynamic equilibrium condition established by the opposition between the vacuum pressure induced downward force and the upward combined spring force of the springs and spring-loaded buttons.
Whenever the full, and higher, vacuum level is applied to the vacuum chamber, the vacuum pressure induced force is sufficient to overcome the combined resilient force of the plurality of coil springs and spring-loaded buttons such that the electronic circuit device moves into contact with the second plurality of spring-loaded probes. |
#
# Copyright 2010 The Apache Software Foundation
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
module Shell
  module Commands
    class ShowTables < Command
      def help
        return <<-EOF
List all tables in wasp.
wasp> show tables
        EOF
      end

      def command(regex = ".*")
        now = Time.now
        formatter.header(["TABLE"])
        regex = /#{regex}/ unless regex.is_a?(Regexp)
        list = admin.show_tables.grep(regex)
        list.each do |table|
          formatter.row([table])
        end
        formatter.footer(now, list.size)
      end
    end
  end
end
|
Q:
Typescript extensions. Conditional Types
I was trying to use a conditional type, but it does not work as expected.
I expected the type abc to resolve to number, but it resolves to string.
Any help would be appreciated.
class TableIdClass {
public tableId?: string;
constructor(props: TableIdClass) {
const { tableId } = props;
this.tableId = `${Math.random()}`;
}
}
export class TableBase extends TableIdClass {
public createDate?: Date;
constructor(props: TableBase) {
super(props)
const { createDate } = props;
this.createDate = (createDate) ? createDate : new Date();
}
}
export class EntityBase extends TableBase {
public entityId?: string;
public entityName?: string;
public isActive?: boolean;
constructor(props: EntityBase) {
super(props)
const { entityId, entityName, isActive } = props;
this.entityId = entityId;
this.entityName = (entityName) ? entityName : '';
this.isActive = (typeof isActive === 'undefined') ? true : isActive;
}
};
class SomeClass extends TableIdClass {
constructor(prop: SomeClass) {
super(prop)
}
}
type abc = SomeClass extends EntityBase ? string : number; // Returns string.
A:
You should add a non-optional property to EntityBase, e.g.:
class EntityBase {
public isEntityBase = undefined
...
}
This is because TypeScript uses structural subtyping: it decides whether a type implements an interface by looking at the type's structure (the names and types of its properties). Since every property of EntityBase is optional, SomeClass structurally satisfies it, so SomeClass extends EntityBase is true and the conditional type takes the string branch; a required property breaks that relationship.
In order to not pollute the API of EntityBase, you could use Symbols:
const isEntityBase = Symbol()
class EntityBase {
public [isEntityBase] = undefined
}
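A minimal sketch of the same behaviour, with illustrative class names (not from the question):

```typescript
// With only optional properties, a type is structurally satisfied by almost
// anything, so the `extends` check is not discriminating.
class Loose {
  a?: string;
}
class Other {
  a?: string;
  b?: number;
}
type T1 = Other extends Loose ? "yes" : "no"; // "yes": Loose requires nothing

// A required marker property makes the relationship meaningful.
class Strict {
  readonly kind = "strict";
  a?: string;
}
type T2 = Other extends Strict ? "yes" : "no"; // "no": Other lacks `kind`

// These assignments only compile if the conditional types resolve as claimed.
const t1: T1 = "yes";
const t2: T2 = "no";
console.log(t1, t2);
```

The symbol-based variant in the answer works the same way: the unique symbol key is a required property that other classes cannot accidentally satisfy.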
|
Well Southee has been dropped. He had an amazing Under 19 World Cup but I never thought he would go on like Sharma has. I think he might do well in ODIs should he manage to get back in the side. People will say he is only 19 and has only played 4 Tests but he will have to wait for injuries to the other fast bowlers.
4 Tests 10 wickets at 42.30
Franklin is quite an interesting left arm bowler. Sometimes he goes for lots of runs and other times is amazing. Another leftie to join Khan, Singh, Pathan, Sidebottom, Vaas, Johnson, Tanvir.
The only bowlers still playing with strike rates of under 50.0, a must for any team. I have a feeling that Steyn's will rise when he faces Australia this month. Franklin and Asif stand out as they aren't as fast as the other four. Usually these types of bowlers go for more runs than a good Test fast bowler who would have an economy rate of under 3.0 but as a team you would want one as they can truly be called strike bowlers. I think an ideal for a bowler would be an average under 25.0, economy under 3.0 and a strike rate under 50 but those bowlers are few and far between. Asif seems to be the nearest but he has been out of cricket a while and when he comes back might have different stats after having used Nandrolone.
I was surprised to see Franklin has as many as 76 test wickets. Whilst his strike-rate is pretty good, he only has three 5-fers, against Australia, Bangladesh and the West Indies, three of the weaker nations in test cricket. If he has such a good strike-rate, should he not have more 5 wicket hauls?
In contrast Steyn has three times as many in just 6 more tests (27 compared to 21), Bond has more in 4 tests fewer, Akhtar had 4 times as many in little over half as many tests, Asif has more in half as many. Jones also only has three 5 wicket hauls, but again in fewer tests.
Good to see Franklin back in International cricket after all his injury problems. Southee will come again I'm sure. Time in domestic cricket will do him good in fact I'm sure a stint in England playing for a county would really help.
Well he has not done very well against two of the weaker sides you mention, Aus and West Indies. He has only played 21 Tests so it's a 5fer every 7 Tests at this early stage in his career, like another left armer Vaas, who has also taken one every 7 Tests. With Pollock it was one every 6 Tests but he still went on to take 421 wickets.
Pollock and Vaas were fine bowlers, but I wouldn't consider either strike bowlers. I don't think Franklin is in their class either, and I think his record benefits from a small sample size.
No I wouldn't call them strike bowlers either and I agree he is not in their class and won't have as long a career as they did. Just picking the first names I could think off here are the bowler's s/rs at their 21st Test and how they stand at present. I don't think Steyn and Franklin will keep their present strike rates just as Lee and Gillespie haven't. Donald and Akhtar got theirs under 50.0
The bowlers who have over 150 wickets and are under 30 are the spinners Harbhajan, Vettori and Kaneria. All could have another 10 years of international bowling ahead of them.
Murali could play another 4 years and as he averages about 62 wickets a year so he possibly could have 1000 wickets. I don't think anyone will go past him but then I thought no one would ever pass Walsh when he got his 500th wicket.
Only one fast bowler has managed to pass him, McGrath with 563. With all the cricket around these days no fast bowler will match McGrath's international career of 15 years.
Maybe Ntini and Vaas are hoping to make that 500 mark but the older they get the more chance of injuries and an up and coming bowler replacing them after a couple of poor series. Lee has said he wants to play another 4 years but he might not even be playing in the next match. |
[Progress and extensive meaning of mammalian target of rapamycin involved in restoration of nervous system injury].
To review the possible mechanisms of the mammalian target of rapamycin (mTOR) in the neuronal restoration process after nervous system injury. The related literature on mTOR in the restoration of nervous system injury was extensively reviewed and comprehensively analyzed. mTOR can integrate signals from extracellular stress and plays a critical role in the regulation of various cell biological processes, and thus contributes to the restoration of nervous system injury. Regulating the activity of the mTOR signaling pathway in different ways can contribute to the restoration of nervous system injury via different mechanisms, especially in stress-induced brain injury. mTOR may be a potential target for the neuronal restoration mechanism after nervous system injury. |
Predators congratulate defenseman Roman Josi (59) after his goal against the Senators during the first period at Bridgestone Arena Monday, Feb. 19, 2018 in Nashville, Tenn.(Photo: George Walker IV / Tennessean.com)
To those of you overwhelmed with concern over the Predators' recent funk, know that none exists inside of their dressing room.
Let the Predators' 5-2 victory against the Ottawa Senators on Monday calm those fears, at least for now.
Here are three observations from Monday's win:
Back to basics
The Predators spent a substantial amount of their time in the previous seven games — virtually half of it — playing from behind.
A major component of the Predators' successful formula this season has been their ability to seize a lead and rarely relinquish it.
They rediscovered that swagger Monday at the Senators' expense. Two goals in the first period — captain Roman Josi's one-timer on the power play and a splendid passing display finished off by forward Viktor Arvidsson — gave Nashville its first multi-goal lead in more than two weeks.
In a strange coincidence, the Senators scored both of their goals 91 seconds after the Predators took 2-0 and 3-1 leads. The Predators, however, didn't relent, looking more like themselves than they have in recent weeks.
Shots are on Arvidsson
Arvidsson reached 20 goals Monday with his two-goal performance, his first this season. He, Ryan Johansen and Filip Forsberg decidedly controlled possession in a return to form for the Predators' dynamic top forward line.
Arvidsson had a career-high nine shots Monday, tied for the second-most by a Predators player this season. He generated seven of those shots during the third period as he sought a hat trick.
In an encouraging development, Arvidsson's two goals and forward Craig Smith's in the third period all originated from within or around high-danger areas, something that the team has struggled to do with regularity.
Fisher getting closer to return
Mike Fisher, who played the first 675 games of his NHL career with the Senators, was a popular interview subject Monday morning.
He updated his progress as he works toward a return to the Predators' active roster.
"I'm definitely feeling closer," Fisher said. "Hopefully by March I'll be good to go. ... I've still got to get a lot more good, good skates in. I feel like my speed's pretty good. It's just getting endurance and game-shape stuff that I've been working at after practice." |
Top 10 Best Portable Air Conditioners with Remote Control In 2019
A good portable air conditioner makes a healthy living environment. It saves you from costly installation and makes relocation from one room to another a hassle-free task. Getting the best portable air conditioner needs you to take many different factors into consideration. From your room size to the number of operation modes, you just need to get everything right. That said, there are a number of portable air conditioners that have proven to be more efficient and cost-effective.
Below, we look at these outstanding models and what makes them a perfect option for your air conditioning needs. And remember, we’ve factored in expert opinions to give product recommendations that will be worth every penny spent. Sit tight as we review the best portable air conditioners worth putting into consideration.
10. Honeywell MN12CES 12,000 BTU Portable Air Conditioners
At 12,000 BTU, this 3-in-1 air conditioner makes a perfect match for large rooms measuring 450 square feet. Its remote controlled fan generates a maximum airflow of 163 CFM and can be set to 3 different speeds. The unit provides the convenience of an automatic evaporation system that eliminates the need for a bucket. It got a daily dehumidifying capacity of 70 pints and a low noise level of 53 dB. The integrated automatic shutoff timer can be set to a minimum of 1 hour and a maximum of 24 hours.
9. Honeywell MN10CESWW 10,000 BTU Air Conditioner
This air conditioner comes in handy for an average sized room. At 10,000 BTU, it cools areas measuring a maximum of 350 square feet. The 3-in-1 design lets it double as a dehumidifier or cooler. The dehumidifying function eliminates the need for any bucket, thanks to its automatic evaporation system that dehumidifies 70 pints in a 24-hour period. It got a 3-speed fan that generates a maximum airflow of 163 CFM. The air conditioner operates quietly at 55 dB.
8. Whynter Dual-Hose Portable Air Conditioner
Being more of a full package, this unit comes equipped with 3 operation modes. It’s designed to function as a cooler, dehumidifier and air conditioner. With it, you get the convenience of a digital thermostat sporting a temperature range of 61 to 89 degrees Fahrenheit. At 14,000 BTU, it cools large rooms sporting a maximum of 500 square feet. The unit has a dehumidifying capacity of 101 pints per day and comes equipped with a fan that can be set to 3 different speeds.
7. Whynter BTU Portable Air Conditioner (ARC-10WB)
This single-hose unit does well in relatively small spaces of up to 300 square feet. It sports a 3-in-1 design that has a conditioning, cooling and dehumidifying mode. An adjustable thermostat comes in handy to regulate the temperature in the range of 62 to 88 degrees Fahrenheit. A self-evaporating system gives you 55 pints per day of dehumidifying capacity.
6. Portable Air Conditioner with Remote Control
Here’s one of the largest units in Honeywell’s air conditioner range. This unit sports the precision of feather-touch controls for hassle-free operation. It’s designed to cool larger rooms measuring 550 square feet. With it, you get a dehumidifying capacity of 79.2 pints per day. And it’s all automated to eliminate the need for emptying. This unit has a 3-speed fan and operates at 55 dB.
5. Tripp Lite Portable Cooling Unit Air Conditioner
Unique as it’s portable, this unit is one of the most efficient in spot air cooling. It’s got 12,000 BTU of cooling power designed for spaces having a maximum area of 500 square feet. The spot cooling function lets you direct air to the place you need it most. This air conditioner is specially designed for IT environments and gives the convenience of a more compact design.
4. Cooling & Heating Portable Air Conditioner with Remote Control
Get both a heating and cooling function with this unit. It’s got a cooling and heating capacity of 14,000 BTU and 11,000 BTU respectively. It cools rooms up to 550 square feet and heats spaces of up to 200 square feet. And you don’t have to leave your seat to adjust the settings since the whole operation is remotely controlled.
3. Indoor Portable Evaporative Air Cooler with Remote Control
This is an energy efficient evaporative air cooler designed for small apartments up to 175 square feet. It’s one compact and a lightweight unit providing the convenience of natural cooling. The non-compressor system generates a powerful airflow of 300 CFM and can be set to 4 different speeds. Its dehumidifying mode is recommended for rooms that sport at least 60-percent humidity.
2. LG Electronics LP1414GXR Air Conditioner
Cool large rooms of up to 600 square feet with this 14,000 BTU air conditioner. The unit has an hourly dehumidifying capacity of 3.4 pints. A 3-speed fan lets you set the cooling speed to 3 different levels to fit the size and temperature of your room. And you get the convenience of a programmable timer that can be set to a maximum of 24 hours.
1. Honeywell MM14CHCS 14,000 BTU Portable Air Conditioner
Get a large capacity heating and cooling system with this 4-in-1 air conditioning unit. At 14,000 BTU cooling capacity and 13,000 BTU heating capacity, this unit cools and heats a maximum area of 550 square feet and 400 square feet respectively. It’s got one of the highest automatic dehumidifying capacities, at 95 pints per day. The integrated fan generates an airflow of 265 CFM. With this unit, you get a quiet operation of 54 dB.
Tuesday, 14 May 2013
A Loan Repaid
I first met Zaheer at Motiram's garage where he often
whiled away his time. He was unemployed like many other youths then. He helped me to get a chicken that I was desperately
looking for, as we were expecting guests for lunch.
Nazira, a sleepy nook in the late seventies,
was grappling with the requirements of the oil personnel who were posted here
from diverse Indian regions. Earlier it was a content little town evolving from
the many tea gardens that surrounded it. With the discovery of oil-fields
around it, it was only natural for the ONGC to set up a colony here. The
means of meeting the household needs were the co-operative store just
outside the colony gates and other small kiosks. In the evenings, a “haat” sprang up selling local produce of
vegetables, fruits, fish, poultry and eggs. The vendors’ cries mingled with the
smell of kerosene flames as the people peered at the wares and poked the fish
to check for freshness. This was the only time and place to stock up. That
should explain my desperation when I met Zaheer. For it is sacrilegious to
offer a meal to guests without the fish and the meat in any self-respecting
Assamese household.
Assam was in turmoil, with discontent
brewing like a bubbling cauldron. The Students’ Agitation was gaining momentum.
The youth across the state were swept up in its currents, just as the Brahmaputra
ruthlessly erodes chunks of land when in spate.
To them the outsiders were exploiters, looting the resources of the
place while the locals were left penniless.
I met him outside our colony
gate one evening.
“Hello Zaheer! How’s everything?”
He came up to me with an embarrassed look and gazed
straight at me, “Can you help me to get a job, dada?” I was taken aback.
“Why a job? Why don’t you do something on your own?”
“Dada, I don’t have the money to start on my own and
I cannot ask abba."
I discussed
Zaheer with my wife Moni. He seemed a nice lad to me. There was something
latent in him, restless with caged energy.
“Why don’t we help him?” said Moni quietly. “I’ve some savings, you know.” I called him
the next day.
“My wife and I thought about it, Zaheer. If you are
serious about it, we will give you a loan of fifteen hundred rupees which is my
wife’s savings actually. You can repay us once you make headway.”
He was
speechless. “You could start with a bakery since there isn’t one here. Things
like bread, cakes, biscuits… There would be a demand for them in the colony,”
I said.
Zaheer’s face lit up as he saw the idea taking
shape.
“I’ve a friend who has a bakery in Sibsagar. You
could begin by sourcing the products
from there,” I said.
“Yes, I can tie up to get the stuff by the early
morning State Transport bus,” said Zaheer, his eyes shining.
And so
began Zaheer’s shop T-fin. Every
morning his wares would arrive in a black tin box by the first bus from
Sibsagar. Initially these barely managed to cover the shelves that his
carpenter friend made. Breads, biscuits, puffs were suddenly available in
Nazira. People started trickling in, first out of curiosity and then out of
habit.
After a long
tenure at Nazira I was posted to Madras,
now renamed Chennai.
The next
time I met him was during our home town visit when he came to take us to
Nazira. "Zaheer, you have done well for yourself" I said."Allah has been kind, dada!" said he. I saw a T-fin, all spruced up and swanky being
handled by Bulbul, his brother. Next to it was an
electronics showroom flaunting gadgets from small transistors to televisions.
"But I have had my moments of doubts as well" said he with a smile. "I took up a job with the Accounts department in ONGC for a couple of years leaving the bakery with a manager, thinking of a secured future."This was news to me. "And now you are back to business again. Why?"" You know I had five sisters to be educated and married off. I needed money fast and the salary was not enough for this. When I finally decided to leave the job everyone thought I was mad. They dissuaded me, counselled me...But I knew what I wanted and how to get it. No, the salary was not going to tide me over. " Zaheer laughed. We were sitting in his office room catching up after a long time.
After leaving the job he plunged into business, building on what he had. Zaheer, I learnt, had soon forayed into the construction business. He built a reputation for the quality of his work and soon it was flourishing. But Assam in the 1990s had murky parallel goings-on, with surrendered ULFA militants demanding a fee for applying for tenders; once you got a tender, they expected a commission. This was eating into businessmen's profits.
"In such a situation, the quality of work would have to be compromised, for I couldn't work at a loss. I spent many sleepless nights. My reputation was at stake. That was when I decided to give it up. I gave up my flourishing construction business. It was tough, but there was no other way," said Zaheer with a grimace.
Just then the phone rang, and Zaheer excused himself to attend to it. Right from the beginning it had been a constant struggle, but Zaheer was a fighter. I looked out of his office at the showroom. He had a couple more branches now, and it was bristling with customers and salesmen.
"Your showrooms are also doing well," I said when he looked around from his call.
"Dada, I've realised that it is better to change route once you hit a dead end. So when I see a particular venture not doing well, or not giving the expected returns, I start something different," said Zaheer, "something that I believe I can give my best to. Come dada, let's go for lunch. Manju is waiting for us."
A young lad who was once scouring for employment now fed many homes. A reputed businessman, Zaheer never forgot his own humble beginnings. At a time when his contemporaries were fumbling to find their bearings, Zaheer had realised his calling. Every time I went back to Nazira, I saw him grow. He had long since paid back the loan. What I witnessed now was the interest. I couldn't have asked for a better repayment.
I wish to get my story published in Chicken Soup for the Indian Entrepreneurs Soul in association with BlogAdda.com
Q:
How Can I Cascade Radio Buttons In Javascript?
I'm new to programming and am tasked with creating a clickable tournament bracket. 8 teams, where you can pick the winners who will advance to the next round.
The problem I'm running into is when a person would want to revise picks in the first or second round after filling out the bracket.
For example in the first round the user picks:
Team1 v. Team8 --> #1 moves on
Team4 v. Team5 --> #4 moves on
Team3 v. Team6 --> #3 moves on
Team2 v. Team7 --> #2 moves on
In the Second Round let's say they pick:
Team1 v. Team4 --> #1 moves on
Team2 v. Team3 --> #2 moves on
In the Finals, they pick:
Team1 v. Team2 --> #1 wins.
Then, let's say the user changes their mind and picks Team8 to upset Team1 in the first round. Right now the code would still show Team1 as the winner and in the final game. How can I remove the text for the future rounds that currently shows Team1?
Attaching JSFiddle here: https://jsfiddle.net/a4hya2c7/5/ or see below code:
<!DOCTYPE html>
<html>
<head>
<title>Button</title>
</head>
<body>
<p>First Round</p>
<div id="round1">
<input type="radio" name="g1" id="i1" value="#1" onclick="changeText(this.value);"><label id="r1">#1</label>
<input type="radio" name="g1" id="i2" value="#8" onclick="changeText(this.value);"><label id="r2">#8</label></br>
<input type="radio" name="g2" id="i3" value="#4" onclick="changeText2(this.value);"><label id="r3">#4</label>
<input type="radio" name="g2" id="i4" value="#5" onclick="changeText2(this.value);"><label id="r4">#5</label></br>
<input type="radio" name="g3" id="i5" value="#3" onclick="changeText3(this.value);"><label id="r5">#3</label>
<input type="radio" name="g3" id="i6" value="#6" onclick="changeText3(this.value);"><label id="r6">#6</label></br>
<input type="radio" name="g4" id="i7" value="#7" onclick="changeText4(this.value);"><label id="r7">#7</label>
<input type="radio" name="g4" id="i8" value="#2" onclick="changeText4(this.value);"><label id="r8">#2</label></br>
</div>
</br>
<p>Second Round</p>
<div id="round2">
<input type="radio" name="g5" id="i9" onclick="changeText5(this.value);"><label id="r9"></label>
<input type="radio" name="g5" id="i10" onclick="changeText5(this.value);"><label id="r10"></label></br>
<input type="radio" name="g6" id="i11" onclick="changeText6(this.value);"><label id="r11"></label>
<input type="radio" name="g6" id="i12" onclick="changeText6(this.value);"><label id="r12"></label></br>
</div>
</br>
<p>Finals</p>
<div id="round3">
<input type="radio" name="g7" id="i13" onclick="changeText7(this.value);"><label id="r13"></label>
<input type="radio" name="g7" id="i14" onclick="changeText7(this.value);"><label id="r14"></label></br>
</div>
</br>
<p>Winner</p>
<div id="round4">
<label value="#8" id="r15"></label></br>
</div>
</br>
</body>
</html>
JS:
function changeText(value) {
document.getElementById('r9').innerHTML = value;
document.getElementById('i9').value = value;
}
function changeText2(value) {
document.getElementById('r10').innerHTML = value;
document.getElementById('i10').value = value;
}
function changeText3(value) {
document.getElementById('r11').innerHTML = value;
document.getElementById('i11').value = value;
}
function changeText4(value) {
document.getElementById('r12').innerHTML = value;
document.getElementById('i12').value = value;
}
function changeText5(value) {
document.getElementById('r13').innerHTML = value;
document.getElementById('i13').value = value;
}
function changeText6(value) {
document.getElementById('r14').innerHTML = value;
document.getElementById('i14').value = value;
}
function changeText7(value) {
document.getElementById('r15').innerHTML = value;
document.getElementById('r15').value = value;
}
A:
Firstly your <br> tags aren't correct as you can see here:
</div>
</br>
You are using a closing tag, but <br> is a void element and has no closing form. You can use the XHTML-style self-closing syntax:
<br/>
Or simply the plain HTML5 void element:
<br>
As for the JS itself, you can simplify it a lot if you use a data attribute, similar to the name attribute you are already using, to point to where the value must be put. With that change the code becomes a lot smaller and simpler:
const inputs = document.querySelectorAll("input[type=radio]");
for (let inp of inputs){
inp.addEventListener("change", function(){
let targetLabel = document.getElementById(inp.dataset.target);
targetLabel.previousSibling.value = inp.value;
targetLabel.innerHTML = inp.value;
});
}
<p>First Round</p>
<div id="round1">
<input type="radio" name="g1" id="i1" value="#1" data-target="r9"><label id="r1">#1</label>
<input type="radio" name="g1" id="i2" value="#8" data-target="r9"><label id="r2">#8</label><br>
<input type="radio" name="g2" id="i3" value="#4" data-target="r10"><label id="r3">#4</label>
<input type="radio" name="g2" id="i4" value="#5" data-target="r10"><label id="r4">#5</label><br>
<input type="radio" name="g3" id="i5" value="#3" data-target="r11"><label id="r5">#3</label>
<input type="radio" name="g3" id="i6" value="#6" data-target="r11"><label id="r6">#6</label><br>
<input type="radio" name="g4" id="i7" value="#7" data-target="r12"><label id="r7">#7</label>
<input type="radio" name="g4" id="i8" value="#2" data-target="r12"><label id="r8">#2</label><br>
</div>
<br>
<p>Second Round</p>
<div id="round2">
<input type="radio" name="g5" id="i9" data-target="r13"><label id="r9"></label>
<input type="radio" name="g5" id="i10" data-target="r13"><label id="r10"></label><br>
<input type="radio" name="g6" id="i11" data-target="r14"><label id="r11"></label>
<input type="radio" name="g6" id="i12" data-target="r14"><label id="r12"></label><br>
</div>
<br>
<p>Finals</p>
<div id="round3">
<input type="radio" name="g7" id="i13" data-target="r15"><label id="r13"></label>
<input type="radio" name="g7" id="i14" data-target="r15"><label id="r14"></label><br>
</div>
<br>
<p>Winner</p>
<div id="round4">
<label value="#8" id="r15"></label><br>
</div>
<br>
Note that the targets I set point directly to the labels:
<input type="radio" ... data-target="r9">
Then, to get to the corresponding radio, I used previousSibling (this works here because there is no whitespace between each input and its label; previousElementSibling would be more robust).
I used several functions that you may not know:
querySelectorAll - to get an array of elements matching the selector passed
for ... of to iterate on all inputs retrieved by the querySelectorAll
addEventListener - to set the event handler directly on Javascript and make the code cleaner
And with just those little lines of Javascript you have the same functionality, and most importantly, without code duplication.
Now if you want to cascade a change on the first radios to the lower ones, you can call the change event directly with:
const event = new Event('change');
element.dispatchEvent(event);
Do this whenever you see that the target element already has a value set. It works like a recursive call:
const inputs = document.querySelectorAll("input[type=radio]");
for (let inp of inputs){
inp.addEventListener("change", function(){
let targetLabel = document.getElementById(inp.dataset.target);
let targetRadio = targetLabel.previousSibling;
targetRadio.value = inp.value;
targetLabel.innerHTML = inp.value;
//only cascade if the target is a real radio element (the final label has none before it) and it's checked, so unselected radios don't cascade
if (targetRadio.hasAttribute && targetRadio.checked){
const nextTargetLabel = document.getElementById(targetRadio.dataset.target);
if (nextTargetLabel.innerHTML != ''){ //if it has a value then cascade
targetRadio.dispatchEvent(new Event('change'));
}
}
});
}
<p>First Round</p>
<div id="round1">
<input type="radio" name="g1" id="i1" value="#1" data-target="r9"><label id="r1">#1</label>
<input type="radio" name="g1" id="i2" value="#8" data-target="r9"><label id="r2">#8</label><br>
<input type="radio" name="g2" id="i3" value="#4" data-target="r10"><label id="r3">#4</label>
<input type="radio" name="g2" id="i4" value="#5" data-target="r10"><label id="r4">#5</label><br>
<input type="radio" name="g3" id="i5" value="#3" data-target="r11"><label id="r5">#3</label>
<input type="radio" name="g3" id="i6" value="#6" data-target="r11"><label id="r6">#6</label><br>
<input type="radio" name="g4" id="i7" value="#7" data-target="r12"><label id="r7">#7</label>
<input type="radio" name="g4" id="i8" value="#2" data-target="r12"><label id="r8">#2</label><br>
</div>
<br>
<p>Second Round</p>
<div id="round2">
<input type="radio" name="g5" id="i9" data-target="r13"><label id="r9"></label>
<input type="radio" name="g5" id="i10" data-target="r13"><label id="r10"></label><br>
<input type="radio" name="g6" id="i11" data-target="r14"><label id="r11"></label>
<input type="radio" name="g6" id="i12" data-target="r14"><label id="r12"></label><br>
</div>
<br>
<p>Finals</p>
<div id="round3">
<input type="radio" name="g7" id="i13" data-target="r15"><label id="r13"></label>
<input type="radio" name="g7" id="i14" data-target="r15"><label id="r14"></label><br>
</div>
<br>
<p>Winner</p>
<div id="round4">
<label value="#8" id="r15"></label><br>
</div>
<br>
Edit:
To clear the following radios instead of cascading the values, you can use a while loop, navigating through the targets until you reach the end:
const inputs = document.querySelectorAll("input[type=radio]");
for (let inp of inputs){
inp.addEventListener("change", function(){
let targetLabel = document.getElementById(inp.dataset.target);
let targetRadio = targetLabel.previousSibling;
targetLabel.innerHTML = inp.value;
targetRadio.value = inp.value;
//while there is a next target clear it
while (targetLabel.previousSibling.hasAttribute){
targetLabel = document.getElementById(targetRadio.dataset.target);
targetRadio = targetLabel.previousSibling;
targetRadio.checked = false;
targetLabel.innerHTML = '';
}
});
}
<p>First Round</p>
<div id="round1">
<input type="radio" name="g1" id="i1" value="#1" data-target="r9"><label id="r1">#1</label>
<input type="radio" name="g1" id="i2" value="#8" data-target="r9"><label id="r2">#8</label><br>
<input type="radio" name="g2" id="i3" value="#4" data-target="r10"><label id="r3">#4</label>
<input type="radio" name="g2" id="i4" value="#5" data-target="r10"><label id="r4">#5</label><br>
<input type="radio" name="g3" id="i5" value="#3" data-target="r11"><label id="r5">#3</label>
<input type="radio" name="g3" id="i6" value="#6" data-target="r11"><label id="r6">#6</label><br>
<input type="radio" name="g4" id="i7" value="#7" data-target="r12"><label id="r7">#7</label>
<input type="radio" name="g4" id="i8" value="#2" data-target="r12"><label id="r8">#2</label><br>
</div>
<br>
<p>Second Round</p>
<div id="round2">
<input type="radio" name="g5" id="i9" data-target="r13"><label id="r9"></label>
<input type="radio" name="g5" id="i10" data-target="r13"><label id="r10"></label><br>
<input type="radio" name="g6" id="i11" data-target="r14"><label id="r11"></label>
<input type="radio" name="g6" id="i12" data-target="r14"><label id="r12"></label><br>
</div>
<br>
<p>Finals</p>
<div id="round3">
<input type="radio" name="g7" id="i13" data-target="r15"><label id="r13"></label>
<input type="radio" name="g7" id="i14" data-target="r15"><label id="r14"></label><br>
</div>
<br>
<p>Winner</p>
<div id="round4">
<label value="#8" id="r15"></label><br>
</div>
<br>
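The clearing logic can also be reasoned about without the DOM: model the bracket as a chain of slots, where each slot feeds one target slot, and clear everything downstream of a changed pick. Here is a minimal DOM-free sketch of that idea (the slot names mirror the r9–r15 labels; this is an illustration, not a drop-in replacement for the snippet above):

```javascript
// Each slot points at the slot its winner feeds into (null at the end),
// mirroring the data-target chain r9 -> r13 -> r15 in the markup above.
const feeds = { r9: 'r13', r10: 'r13', r11: 'r14', r12: 'r14',
                r13: 'r15', r14: 'r15', r15: null };
const picks = { r9: '#1', r13: '#1', r15: '#1' }; // current selections

// Setting a new pick clears every slot downstream of the changed one.
function setPick(slot, value) {
  picks[slot] = value;
  let next = feeds[slot];
  while (next) {        // walk the chain, like the while loop above
    delete picks[next]; // blank the label / uncheck the radio
    next = feeds[next];
  }
}

setPick('r9', '#8');
console.log(picks); // → { r9: '#8' } — second round and final are cleared
```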
Q:
React Native : Why is "navigation" not being passed to my component?
I have been working on this problem for two days now and nothing on the web seems to be exactly what I am looking for.
I am attempting to implement a StackNavigator into my React Native app, but for some reason "navigation" is not being passed as a prop to the involved components. Therefore when I call this.props.navigation.navigate by pressing the Button, I get the error undefined is not an object (evaluating this.props.navigation.navigate).
I have logged the props several times and the props object is empty, so the issue is not a deconstruction-of-the-props-object issue like others who get this error have had, but the fact that the navigation prop is not there in the first place.
I've tried placing the navigator code in its own file and in the App.js file thinking that it was somehow called after the components are rendered, and therefore not getting a chance to pass the navigation prop in, but that didn't work either. I've also looked to see if it is part of the props in the componentDidMount event. Still not.
import React, { Component } from 'react'
import { Text, View, Button, StyleSheet, FlatList } from 'react-native'
import { StackNavigator } from 'react-navigation'
import { getDecks } from '../utils/api'
import NewDeckView from './NewDeckView'
import DeckListItem from './DeckListItem'
export default class DeckListView extends Component {
constructor(props){
super(props)
this.state = {
decks: []
}
}
componentDidMount(){
console.log('props now test',this.props)
getDecks()
.then( result => {
const parsedResult = JSON.parse(result);
const deckNames = Object.keys(parsedResult);
const deckObjects = [];
deckNames.forEach( deckName => {
parsedResult[deckName].key = parsedResult[deckName].title
deckObjects.push(parsedResult[deckName])
})
this.setState({
decks:deckObjects
})
} )
}
render(){
return (
<View style={styles.container}>
<Text style={styles.header}>Decks</Text>
<FlatList data={this.state.decks} renderItem={({item})=><DeckListItem title={item.title} noOfCards={item.questions?item.questions.length:0}/>} />
<Button styles={styles.button} title="New Deck" onPress={()=>{this.props.navigation.navigate('NewDeckView')}}/>
</View>
)
}
}
const styles = StyleSheet.create({
header:{
fontSize:30,
margin:20,
},
container:{
flex:1,
justifyContent:'flex-start',
alignItems:'center'
},
button:{
width:50
}
})
const Stack = StackNavigator({
DeckListView : {
screen: DeckListView,
},
NewDeckView: {
screen:NewDeckView,
}
})
A:
Like Vicky and Shubhnik Singh mentioned, you need to render the imported navigation stack in App.js like so:
import React from 'react';
import { Stack } from './navigator/navigator'
export default class App extends React.Component {
render() {
return <Stack/>
}
}
The navigator should look something like this and the first key in the object passed to StackNavigator will be rendered by default. In this case, it will be DeckListView.
import { StackNavigator } from 'react-navigation'
import DeckListView from '../components/DeckListView'
import NewDeckView from '../components/NewDeckView'
export const Stack = StackNavigator({
DeckListView : {
screen: DeckListView,
navigationOptions: {
headerTitle: 'Home',
},
},
NewDeckView: {
screen:NewDeckView,
navigationOptions: {
headerTitle: 'New Deck',
},
},
})
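As an aside, the "first key is rendered by default" behaviour relies on JavaScript preserving insertion order for string object keys, so the first entry in the route config is a stable choice for the initial screen. A small sketch (the screen values here are stand-in strings, not the real components):

```javascript
// Route config shaped like the StackNavigator argument above;
// the screen values are placeholders, not real React components.
const routeConfig = {
  DeckListView: { screen: 'DeckListView placeholder' },
  NewDeckView: { screen: 'NewDeckView placeholder' },
};

// With no initialRouteName set, a navigator falls back to the first key.
// For non-numeric string keys, Object.keys preserves insertion order.
const initialRoute = Object.keys(routeConfig)[0];
console.log(initialRoute); // → DeckListView
```

If you want a different initial screen without reordering the config, react-navigation also accepts an initialRouteName option in the second argument to StackNavigator.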
Thanks guys for the support! Somehow this wasn't clear for me in the documentation.
Q:
htaccess caching not working
I inputted the following into my ".htaccess" file in order to start caching the web content. According to Google Page Speed and YSlow the pages are still not cached. Are the modules wrong? Or is it that the apps are not showing the data correctly?
The site is running on Apache 2.0.
.htaccess (part with caching modules):
# Expire headers
<ifModule mod_expires.c>
ExpiresActive On
ExpiresDefault "access plus 1 seconds"
ExpiresByType image/x-icon "access plus 2592000 seconds"
ExpiresByType image/jpeg "access plus 2592000 seconds"
ExpiresByType image/png "access plus 2592000 seconds"
ExpiresByType image/gif "access plus 2592000 seconds"
ExpiresByType application/x-shockwave-flash "access plus 2592000 seconds"
ExpiresByType text/css "access plus 604800 seconds"
ExpiresByType text/javascript "access plus 216000 seconds"
ExpiresByType application/javascript "access plus 216000 seconds"
ExpiresByType application/x-javascript "access plus 216000 seconds"
ExpiresByType text/html "access plus 600 seconds"
ExpiresByType application/xhtml+xml "access plus 600 seconds"
</ifModule>
# Cache-Control Headers
<ifModule mod_headers.c>
#month
<filesMatch "\.(ico|jpe?g|png|gif|swf)$">
Header set Cache-Control "max-age=2592000, public"
</filesMatch>
#week
<filesMatch "\.(css|js)$">
Header set Cache-Control "max-age=604800, public"
</filesMatch>
#day
<filesMatch "\.(x?html?|php)$">
Header set Cache-Control "max-age=43200, private, must-revalidate"
</filesMatch>
</ifModule>
# END Cache-Control Headers
# Turn ETags Off
<ifModule mod_headers.c>
Header unset ETag
</ifModule>
FileETag None
# Remove Last-Modified Header
<ifModule mod_headers.c>
Header unset Last-Modified
</ifModule>
A:
Go into httpd.conf and look for the mod_expires line, it should not be commented out. Look for the mod_headers line and make sure it is not commented out.
Or (not for a critical app) there is a quick and dirty test: remove the <ifModule mod_expires.c> and </ifModule> lines but leave the directives between them, and do the same for <ifModule mod_headers.c>. If your server then fails with a 500 Internal Server Error, you're probably missing one or both of those modules and they are not enabled. If so, go into httpd.conf and enable what you need.
You can also test your site's response headers using a tool like REDbot. Simply pick a resource URL like one pointing to an image and paste it in the tool to see what headers get sent back along with some recommendations. Note that it follows the domain's robots.txt rules and will not check the resource if it is disallowed.
And like Gerben said, using the net tab in firefox, chrome dev tools, or some equivalent web developer tool helps see what headers are being sent and received.
You also don't need to set Cache-Control to public. And you don't need to set max-age yourself if you're also using the ExpiresByType directives, since mod_expires generates a matching Cache-Control max-age automatically.
For more info read this great tutorial: http://www.mnot.net/cache_docs/
And learn by example: checkout how it is done in the html5-boilerplate at https://github.com/h5bp/html5-boilerplate/blob/master/dist/.htaccess
For other popular server config examples like lighthttpd, Node.js, Nginx, etc. see:
https://github.com/h5bp/server-configs
Facebook Shutters Atlas Ad Server, Ending Its Assault On DoubleClick; Atlas To Live On As Measurement Pixel
On Friday, Facebook made the inevitable official by retiring the ad-serving component of Atlas, thereby making it primarily a people-based measurement pixel. The ad-serving capability will be phased out over the next couple of months so as not to be disruptive to users.
Facebook’s ad stack looks quite different today than it did during Advertising Week 2014, when it first rolled out the revamped Atlas ad server it acquired in 2013 from Microsoft. The company appeared ready to mount a challenge to Google's DoubleClick, an impression confirmed by its acquisition of sell-side video platform LiveRail.
Could Facebook put together a scaled, mature ad stack that would act as a check on Google's power? Turns out it couldn't.
Since then, Facebook scrapped plans to build a demand-side platform for Atlas in March, shuttered its video SSP, LiveRail, in April and shut down FBX, its desktop retargeter, on Nov. 1.
What’s left now is the Facebook Audience Network – basically an ad network for a walled garden, albeit a very lucrative one with a $1 billion run rate, based on Q4 performance – and campaign measurement via Atlas.
Ad tech at Facebook, in terms of being competitive with Google’s DoubleClick, is effectively dead.
In September, Facebook moved Atlas under its Marketing Sciences group led by Brad Smallwood, VP of measurement and insights. The combined teams will focus on developing better integrations between Atlas and Facebook’s measurement platforms.
Slimming Atlas down to focus solely on measurement is a logical move, said Erik Johnson, whose title will change from head of Atlas to head of client measurement.
“We see demand for measurement coupled with the fact that a greater and greater percentage of inventory is first-party served, especially on mobile,” Johnson said. “There’s a decreasing interest in third-party ad serving, which we saw coming, but it just happened much quicker than anticipated.”
It’s also hard to sell people on technology once that tech starts to get commoditized.
“People need a compelling reason to switch from one product to another,” Johnson said. “Clients could get all of the benefits of measurement without switching ad servers, so continuing to invest in building a product that a smaller and smaller percentage of our user base wanted felt like not the best place to focus our efforts.”
But it’s also true that as a platform with its origins in the desktop world, mobile and video ad serving were never Atlas’ forte to begin with, with some agencies and advertisers experiencing difficulty serving ads across formats and devices. It was clear by early this year that Atlas adoption was not tracking with goals. Facebook said at the time of the ad server relaunch that the product had 20 marketer customers. A year later it hadn't updated that figure.
Although Facebook is axing its ad-serving capability, it’s still committed to cross-device, Johnson said.
“You can’t help marketers understand how their media is performing and how to make business decisions without some kind of cross-device offering,” he said. “You need that data in order to help clients, and we’re not moving away from that at all. But where a lot of that data sits today doesn’t require an ad server. The way to make our position clear is to make measurement the lead on everything we do and to do that we need to remove the noise around ad serving.”
But it remains an open question whether Facebook will eventually cave to buy-side pressure and provide user-level attribution outside of the Facebook ecosystem. Johnson effectively said the industry shouldn’t hold its breath.
“Some people have asked for that, yes, but people ask for all kinds of things,” he said. “We’re focused on protecting the data that our end users entrust us with and we’re tight on that for now. But the future is an unknown for everyone."
1 Comment
The 'Facebook DSP' we shall never see. FB remains a 'walled garden': no DSP with transparent steering mechanisms, since FB - unlike Google - has too little influence on publishers (outside its own platform) - and might not be willing to share mobile best practices here. For publishers this is not the best news, in terms of independence. Header bidding enthusiasts will feel affirmed once more.
Q:
How did they make the stickies in this blog?
I'm wondering how the people at Panic made the stickies on their blog page!
http://www.panic.com/blog/
I got the 3D transformation, but I really can't understand how they did the moving shadow!
Any idea?
(Warning: WebKit browser needed)
A:
Just look at the source. They scaled the shadow up (vertically) by 2%.
#features ul li:hover div {
-webkit-transform: scaleY(1.02);
}
The origin and css transition was set in an earlier declaration.
#features ul li div { /* fake blank div included at the start of each out; it holds the shadow */
width: 225px;
height: 210px;
position: absolute;
top: 0;
background-repeat: no-repeat;
-webkit-transform-origin: 0 0;
-webkit-transition: -webkit-transform .4s ease;
}
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical">
<Button
android:id="@+id/btn_test_anim"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="Test Animation" />
<ListView
android:id="@+id/list_view_test"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:persistentDrawingCache="animation|scrolling" />
</LinearLayout>
#!/usr/bin/env python3
import datetime
import os

import appscript as closure


def main():
    print('Script run started at: {0}'.format(datetime.datetime.now()))
    # set general error handler
    closure.setup_exc_handler(None)
    closure.proceed_with_closure_execution()


if __name__ == "__main__":
    main()
Epithelial ovarian tumours of low malignant potential.
The available literature on, and the management of, epithelial tumours of low malignant potential (LMP) are reviewed. The criteria for a diagnosis of LMP at the University of the Witwatersrand are delineated in detail. Based on the records in the Ovarian Tumour Registry of this University, experience with 29 such tumours over 4 years is presented. Of these, 14 (48.3%) were of the serous variety, 12 were mucinous (41.4%), and 2 (6.9%) were mucinous-serous, the remaining 1 (3.4%) being endometrioid. LMP tumours accounted for 12.9% of proliferating epithelial ovarian tumours in black patients compared with 16.9% in white patients. Pelviperitoneal cytological washings for detection of malignant cells in patients with LMP tumours are mandatory.
/*--------------------------------*- C++ -*----------------------------------*\
| ========= | |
| \\ / F ield | OpenFOAM: The Open Source CFD Toolbox |
| \\ / O peration | Version: 2.3.0 |
| \\ / A nd | Web: www.OpenFOAM.org |
| \\/ M anipulation | |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      controlDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
libs ("libOpenFOAM.so" "libfieldFunctionObjects.so");
application pisoFoam;
startFrom startTime;
startTime 0;
stopAt endTime;
endTime 0.7;
deltaT 1e-4;
writeControl timeStep;
writeInterval 1000;
purgeWrite 0;
writeFormat binary;
writePrecision 6;
writeCompression compressed;
timeFormat general;
timePrecision 6;
runTimeModifiable true;
functions
{
    #include "readFields"
    #include "cuttingPlane"
    #include "streamLines"
    #include "forceCoeffs"
}
// ************************************************************************* //
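The write schedule implied by this controlDict can be worked out directly: 0.7 s of simulated time at a deltaT of 1e-4 is 7000 steps, and with writeControl set to timeStep a writeInterval of 1000 means a write every 0.1 s. A quick sketch (plain Python, independent of OpenFOAM itself):

```python
# Derive the write times implied by the controlDict above.
start_time = 0.0
end_time = 0.7
delta_t = 1e-4
write_interval = 1000  # in time steps, since writeControl is timeStep

n_steps = round((end_time - start_time) / delta_t)  # 7000 steps in total
write_times = [round(start_time + k * write_interval * delta_t, 6)
               for k in range(1, n_steps // write_interval + 1)]
print(write_times)  # [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
```

With purgeWrite 0, all seven time directories are kept on disk.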
|
Nov.28 (GMM) Formula One could be set to shed yet another team from the back of the grid ahead of the 2014 season.
Twelve teams contested last year's world championship, but struggling HRT succumbed at the end of the year.
Now, according to Germany's Auto Motor und Sport, the sport could be reduced to just 10 teams and 20 cars ahead of the 2014 season.
Respected correspondent Michael Schmidt said backmarker Marussia could merge with the financially struggling but far more established midfielder Sauber.
"(Marussia team owner) Andrei Cheglakov is apparently fed up with digging so deep into his own pockets only to be at the back of the field," said Schmidt.
"But he wants to stay in Formula One. First, he wanted to buy Toto Wolff's Williams shares," the German correspondent added.
"Bernie Ecclestone is said to have endorsed the deal, but Claire Williams rejected it because they want to remain independent.
"Now, the Russians apparently have Sauber in their sights," said Schmidt.
Meanwhile, RTL Nederland has reported that Marcel Boekhoorn, a Dutch businessman and billionaire, could be interested in buying into Force India.
Fascinatingly, Boekhoorn is Caterham driver Giedo van der Garde's father-in-law. |
This current bull market run remains the most hated rally we've ever seen. Our meetings with
consultants and observations from the media confirm this. In July 2009, at 900 on the S&P, our
quarterly letter stated "over the next several years, we could reasonably expect a move back
towards 1400 or more on the S&P 500." In October 2009 at 1069 on the S&P, our quarterly
letter stated "We think over the course of the next several years, it could move back towards the
2007 highs of 1550 or so." With those targets surpassed, we want to point out that the odds are
the market is in a secular bull trend and the longer term direction of the market has shifted from
sideways to upward with the recent breakout. Most market participants are not positioned for
such a move, which makes it even more likely in our opinion.
Something extraordinary just happened, and most market participants have missed it. It's never
happened in my investing career and has only happened once in the past 50 years. It has nothing
to do with the government shutdown, wrangling in Washington or the gloomy predictions you
hear from the TV pundits. The market decisively broke out of a 14 year trading range! That's
big news, because we think the market may be forecasting a better decade to come, as has
occurred after other breakouts like this. The chart below notes the four periods since 1900 where
stocks traded sideways for extended periods.
Chart 1: Dow Jones Industrial Average versus 10-Year Treasury Yield
After the market broke out the last two times in 1980 and 1954, it was the prelude to at least ten
good years in the market with annual percentage returns averaging in the mid-teens. The current
market rally is not being treated as a potential new secular bull, and investors seem to be
avoiding stocks, particularly US stocks. We believe the "Great Rotation" back to stocks is
beginning, and may accelerate as interest rates are likely to rise from here. Markets can always
correct, or even decline 20% or more in a year, but our sense is the market just signaled that the
secular sideways market is over and a rising trend is back in place. If this is the case, S&P 2500
could be a conservative goal over the next several years.
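The 2500 goal can be translated into an implied annualized price return. The starting level of roughly 1750 and the five-year horizon used below are my illustrative assumptions, not figures from the letter:

```python
# Implied compound annual growth rate (CAGR) to reach the S&P target.
# Assumed inputs for illustration: an index level near 1750 and a
# five-year horizon; neither number appears in the letter itself.
start_level = 1750.0
target_level = 2500.0
years = 5

cagr = (target_level / start_level) ** (1 / years) - 1
print(f"Implied price return: {cagr:.1%} per year")  # about 7.4% per year, before dividends
```

A longer horizon or higher starting level would lower the implied annual figure accordingly.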
Let's take a look at some prior instances where markets broke out and review what could
potentially occur. In the chart below, we present the market action during and after the Great
Depression. The market peaked out in 1929, and traded in a range for the 22 years between 1932
and 1954. During that time frame, policy errors aggravated the situation and extended the
depression as money supply decreased, trade between nations was discouraged and frequent
changes in Washington Policies left business leaders unwilling to invest. After the Depression
and World War II, investors were simply worn out. In 1954, the US was just a year removed
from the Korean War and the Cold War was just getting started. Despite these headwinds, the
market broke out in 1954 and went on to achieve 14% annual returns over the next ten years as
the economies of Europe and Japan rebuilt from the destruction of the war.
Chart 2: The S&P 500, 1927-1964
The other modern instance of the US market breaking out from a long term trading range
occurred after the weakness of the 1970's made way for the optimism of the 1980s. The next
chart illustrates this period. Following the 1973 OPEC oil embargo, the market bottomed in
1974 and traded sideways until 1980. During this period, the US economy suffered inflation,
recession (producing stagflation) and another oil crisis in 1979. Those were only the economic
issues, as political and social issues from the period included Watergate, a general decline in US
stature overseas and the Iranian hostage crisis. Despite these concerns the stock market broke
out in 1980. Despite the 1982 recession, the market went on over the ten years
from 1980 to 1990 to achieve nearly 17% annual returns.
Chart 3: The S&P 500, 1971-1991
All of this brings us to the current situation, presented in the chart below. We endured a second
bear market in ten years with the financial crisis of 2008 and the Great Recession of 2009. The
recovery took 4 years, aided by extraordinary fiscal and monetary stimulus. Now, we've seen a
fairly decisive breakout and most observers are not yet willing to say that it could be a
confirmation that the bull market we have seen since the end of the financial crisis is likely to
continue.
Chart 4: The S&P 500, 2007-Present
Nobody knew why markets were going to improve in 1954 or in 1980, but the market indicated
something was going to change. This market is indicating a potential improvement from here.
There may be catalysts emerging that would explain this, and several we are thinking about
are presented below.
Economically, several catalysts could be emerging, including:
A globally synchronized recovery.
Emerging market growth could continue for years.
Financially, several catalysts could be emerging, including:
Easy money with deflationary pressures leaves few alternatives to stocks.
Who knows what the future holds for grand bargains, entitlement reform or
business policies after future elections.
Investors voice many concerns about this scenario, with the largest being "the market has run for
almost 5 years now without a bear market, aren't we due for one?" That is a very reasonable
question. To answer this, we ask two questions. First, what are the excesses in the economy that
would lead to a dislocation that could drive a bear market? Second, has the yield curve inverted or
flattened out yet? Responding to the first question, the largest dislocation in the economy currently is the size and scope of the federal government following the 2008 financial crisis. As the attached chart shows, the federal deficit ballooned to over 10% of GDP after the crisis stimulus. In the wake of a modest economic recovery and the reinstatement of payroll taxes, it has contracted to less than 4.5% of GDP and is projected by the CBO to decline to 2.4% in 2015.
Chart 5 Source: ISI Group
Investor concerns about a potential bear market may be misplaced as well. We reviewed the
history of bear markets since 1980 and found a strong correlation between inverted yield curves
and bear markets as noted in the chart below.
The worst three bear markets of the past 30 years were preceded by short term rates rising above
long term rates. This makes sense, as bear markets are normally caused by Fed action designed to slow the economy and ease inflation concerns. We do not see this happening for quite some time to come. There is a legitimate concern that as the Fed tapers its bond purchases, long term rates will rise, which could disrupt the market. While that could happen, unless it is the precursor to a recession, we do not think a bear market is likely.
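The inversion signal described above reduces to a simple check: compare a short-term rate to a long-term rate and flag any period where the spread turns negative. A minimal sketch; the dates and rates below are illustrative placeholders, not historical data:

```python
# Flag yield-curve inversions: short-term rate above long-term rate.
# The observations are illustrative placeholders, not historical series.
observations = [
    ("period A", 5.3, 6.4),  # (label, short rate %, long rate %)
    ("period B", 6.3, 5.8),  # short above long -> inverted
    ("period C", 0.1, 2.6),
]

statuses = []
for label, short_rate, long_rate in observations:
    spread = long_rate - short_rate
    status = "INVERTED" if spread < 0 else "normal"
    statuses.append(status)
    print(f"{label}: spread {spread:+.1f} pct pts -> {status}")
```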
So, what could go right to make 2500 on the S&P a possibility? We see several potential
catalysts playing out both domestically and internationally. The first trend we would
point to is the availability of cheap energy from the US. Chart 7 illustrates that crude production from Texas has recently surpassed Persian Gulf imports. The availability of new energy sources in the US is a game changer, in our
opinion. It may change the geopolitical landscape if North America becomes self-sufficient in
energy resources. Between discoveries in Texas, North Dakota and the development of the Canadian Tar Sands, we might
accomplish this.
Chart 7
The second item we would point to is the potential for a synchronized global economic recovery.
Europe has emerged from recession, and we are not seeing signs of financial stress in their
markets. While most observers are skeptical, we would point out they were skeptical of the US
potential for recovery in 2009 as well. Since then, we have seen a steady, albeit slow, economic recovery and excellent
market performance. Chart 8 shows the growth of foreign versus domestic profits for S&P 500
companies. With foreign sales approaching 40% of the total, we think this growth could continue especially if
you see an uptick in growth from Europe and the emerging markets. Our sense is this is underappreciated as most pundits
seem to focus on negative aspects of the global economy and conveniently ignore that the European recession is over, Japan is stimulating
and Emerging Markets are still growing.
Chart 8
We believe there is pent up demand both in the US and overseas for consumer durables. Autos
and housing have been stronger performers in the US than many other industries. We are
starting to see signs of improved demand in Europe for autos. In the UK, the housing market is
strong. In the Emerging Markets, China has recently become the largest auto market worldwide,
and growth in other consumer products and infrastructure continues.
As emerging markets gain clout, they are working to improve living standards for their citizens. Chart 9, from The Economist, shows that EMs account for over 80% of the global population and most international foreign exchange reserves, yet only about 30% of global consumer spending. They should produce more GDP than the developed markets within
the decade. Improving living standards in EMs could play a role in a better outlook over the coming decade.
Chart 9
Undoubtedly, there are concerns to be addressed. Many investors worry about US government
debt levels and how sustainable they are at current levels. The Federal Reserve's bond purchases
have changed the dynamics of the bond market. Investors are worried that ending these
purchases over the next few years could disrupt the economy and financial markets.
Additionally, the implementation of the Affordable Care Act will have unforeseen and
unintended consequences. The labor market is improving, but many question whether that is
because of an underlying improvement or simply fewer people participating in it. We are not
trying to dismiss these concerns, but simply saying the market is implying that some solution is
likely to address these issues over the next few years. How these play out is anyone's guess.
Still, we think the S&P 500 is telling us that something better than consensus expectations is likely to occur. Many of the concerns we note are being discussed in detail in the media, which should mean that they are discounted in the market's valuation. What if something goes right? We can
and will experience cyclical bear markets in the next 10 years, but they should be seen as buying
opportunities within a market that is trending upward over that time. Time will tell, but we think
the long secular bear market has ended, and worldwide markets are on a better footing for the
next decade to come.
Past performance does not provide any guarantee of future performance, and one should not rely on the composite
performance as an indication of future performance. Investment return and principal value of an investment will fluctuate
so that the value of the account may be worth more or less than the original invested cost. |
This service is designed to allow HPFF users to alert the staff about inappropriate reviews.
Review:
Gaiapet says:HUGE LOVE! Yayness! They are so perfect together! I love when they are together and happy. It makes me happy! I love Jane's hair in the chapter image. My hair rarely gets that curly. :( Wonky quilt! Jane finds out what book her mother is reading! I thought that it was a little odd that she made a joke right after finding out, since she made such a big point of saying that she wanted to know what it was in an earlier chapter. But I guess that's just how Jane deals with emotion. I just wish there was a bit more. Incidentally, Emma is the only book by Jane Austen that I enjoy reading. I love the plots and movies and characters in the other ones, but I just find them dull. Jane Austen wrote Emma as a heroine that probably only she would love, but I love her. But hey, I love Libby and Amanda too. Mr. P said Dapper. Another point in his box! I'm not sure if that made sense. Plus I'm not actually keeping score. But he pretty much rocks. DOGER! I have made a decision. Doger. I love both Liam and Doger for pretty much the same reasons, but I have yet to see Liam act awkwardly. Doger does. And I do love an awkward duck. Awkward ducks are just so freaking adorable! Adorkable! Haha. Red wine. Jane's legs. If I had known that the pillow on the couch would play such an important role in Jane's life I would have mentioned it when it came up earlier. Better late than never. Oliver's pillow! The gladiator makes an appearance! I love that costume and where it takes me in my mind. Very pretty picture. It should be the chapter image for "The Last Costume". That or Libby in her bunny costume... Haha! That meal sounds wicked good! Pork and mushroom sauce. Yum. Too bad about the green beans though. In my mind it will be broccoli. Have a recipe by any chance? I love jealous Oliver. He is just so vulnerable and un-Olivery when he is jealous. It's sweet. I also love how Jane throws a spoon at him. He deserves it.
It could mean my pleasures were insane, but half the world (like Oliver) could understand them and to the other half it meant I was daft for even considering it.
- Interesting concept. I like it. Better than the dirty analysis.
Oi, Roger, Oliver and I are having lunch and I’m dressed in a tiny skirt because he’ll think it’s sexy. Oh, by the way I still have feelings for him and I’m going over there with the hopes that he’ll return them.
Bugger.
- Oh, Jane. I just felt so bad for her when I read that. But at least she admits it to herself, even if she later denies it.
“How do you know it won’t happen again?”
“Because I let you go once,” he said, squeezing my fingers. “I’m not letting you go again.”
- Sweet. Not Lee's speech in the garden sweet, but sweet. That made me cry, this just makes me go "awww". But Oliver isn't much of a romantic compared to Lee. Still, very sweet.
“Oh what in blazes does that mean?” I rolled over again, thinking about it. “I can’t understand Oliver’s pleasures? That sounds dirty. Eugh, this is rubbish.”
- Haha. Poor Jane. Motherly advice can be very confusing.
“Where did the food go anyway?” I asked, pulling open the door. “I thought since you got together with Lou we were supposed to look like we actually ate regular food.”
He made a face. “I got sick of the healthy stuff.”
- Mr.P! I guess being tricky is just too difficult. Big love to this interaction. And every interaction between Jane and Mr.P
I don’t know how well it’s going to pan out but I asked her a very stupid question (where the kitchens were) and she looked at me like I was insane, but then I thanked her later and she looked at me like I was less than insane. That’s a start, right? I’m going to see her at a Magpies fundraiser this coming weekend so maybe I’ll ask her to dance. Unless her date is a burly bloke, then no can do.
- DOGER! My love! My chosen one! But not Harry Potter chosen one. More like Ester was the King's chosen one and then they fell in love. Loved the letter! Love the tone of it and what he wrote! Made me laugh quite a few times.
Oliver leapt out from the hallway, standing in the light from the window, and I gasped. Literally, I nearly choked on nothing at all.
- Haha. Seems to happen a lot in this story... I would do the same thing though.
I couldn’t help it, I stared. My jaw fell lopsided again. A conversation on the Quidditch pitch came briefly back to me. The things I would do to him if I saw him in that costume. There was a bit of drool. I could feel it. My face was hot.
So. Never mind about that whole telling him off situation.
- Hahahahahahahahahahaha! At last he used his powers for good!
He broke out red silk napkins and I placed one in my lap, making a joke about how if I spilled the red wine it wouldn’t matter.
Then I did spill it and I was right. It didn’t matter. My lap was wet though.
- You have no idea how funny this is to me. I literally laughed for five minutes after reading it. Or a minute. Basically I did the exact same thing once. Except it was grape juice instead of wine. Hahahahahaha!
Love the next few chapters and this one. I mean, come on, Brownies, Doger, AND Oliver Wood! AMAZING!
Author's Response: Hi again!
My hair won't curl like that if I tried to pay it off with salon products. It just won't. It's straight, but not even that pretty shiny straight. It's like that whole when it gets wet it curls a bit, but it's more like wonky waves because it's layered. Anyway, going to end the immense paragraph about my hair texture now.
You are completely right about that little quip of a joke put in during the whole Emma thing. That is exactly how she deals with stuff like that. She puts off her thoughts onto something else and goes right back to humor as a parachute.
I love a lot of Jane Austen, though I must confess I haven't read everything. I really love Pride and Prejudice and Emma. I need to see all of their movies. I've only seen P&P and Sense and Sensibility, both of which I watch quite often.
I'm glad you finally decided between Roger and Liam! If Liam had more screen time I know you would get to see his awkwardness and more of his personality, but unfortunately there just isn't enough plotties to get him in all the time. :)
I'd use those as chapter images, but wouldn't be allowed to use the second.
I love how it gets compared to the Lee in the garden speech! One of my personal faves!!
I'm glad you enjoyed Roger's letter. I love writing the letters from him, which is probably why I kept him coming back. Honestly, originally I didn't have him planned to appear at ALL in this story. Then I thought maybe I could use him for a few other things. Then I realized his friendship powers with Jane. After writing a few letters and letting them play off each other in the restaurant, etc, I realized I needed him to stay for good. I adore him :)
haha I love how you laughed at that red wine thing. That comes from personal experience actually, so that's awesome. Except it was liquor instead of wine lol. |
Rochelle Fitzgerald
Radio Interview: CMA vs Zillow Estimate
Posted Sep 18, 2016
Per Zillow, a "Zestimate" is "Zillow's estimated market value for an individual home and is calculated for about 100 million homes nationwide. It is a starting point in determining a home's value and is not an official appraisal. The Zestimate is automatically computed daily based on millions of public and user-submitted data points."
How do real estate agents determine the value of your home? And which estimate valuation is most accurate? |
Q:
Views centered in parent below each other
I'm trying to build this view with the Layout extension. I experimented a bit, but can't figure it out.
This is my code so far:
import UIKit
import Material

class ViewController: UIViewController {
    private var nameField: TextField!
    private var emailField: ErrorTextField!
    private var passwordField: TextField!

    private let constant: CGFloat = 32

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = Color.indigo.base

        prepareNameField()
        preparePasswordField()
        prepareResignResponderButton()
    }

    /// Prepares the resign responder button.
    private func prepareResignResponderButton() {
        let btn = FlatButton(title: "Login", titleColor: Color.white)
        btn.addTarget(self, action: #selector(handleResignResponderButton(button:)), for: .touchUpInside)
        view.layout(btn).width(100).height(constant).right(0).top(8 * constant).horizontally(left: constant, right: constant)
    }

    /// Handle the resign responder button.
    @objc
    internal func handleResignResponderButton(button: UIButton) {
        nameField?.resignFirstResponder()
        passwordField?.resignFirstResponder()
    }

    private func prepareNameField() {
        nameField = TextField()
        nameField.placeholderNormalColor = Color.indigo.lighten4
        nameField.placeholderActiveColor = Color.white
        nameField.dividerNormalColor = Color.indigo.lighten4
        nameField.dividerActiveColor = Color.white
        nameField.isClearIconButtonEnabled = true
        nameField.textColor = Color.white
        nameField.placeholder = "Username"
        view.layout(nameField).top(4 * constant).horizontally(left: constant, right: constant)
    }

    private func preparePasswordField() {
        passwordField = TextField()
        passwordField.placeholderNormalColor = Color.indigo.lighten4
        passwordField.placeholderActiveColor = Color.white
        passwordField.dividerNormalColor = Color.indigo.lighten4
        passwordField.dividerActiveColor = Color.white
        passwordField.isClearIconButtonEnabled = true
        passwordField.textColor = Color.white
        passwordField.placeholder = "Password"
        passwordField.clearButtonMode = .whileEditing
        passwordField.isVisibilityIconButtonEnabled = true

        // Setting the visibilityIconButton color.
        passwordField.visibilityIconButton?.tintColor = Color.white.withAlphaComponent(passwordField.isSecureTextEntry ? 0.38 : 0.54)

        view.layout(passwordField).top(6 * constant).horizontally(left: constant, right: constant)
    }
}
I'm new to Swift, so if someone can explain how to accomplish this, that would be great.
A:
Here is an example with TextFields.
import UIKit
import Material

class ViewController: UIViewController {
    fileprivate var emailField: ErrorTextField!
    fileprivate var passwordField: TextField!

    /// A constant to layout the textFields.
    fileprivate let constant: CGFloat = 32

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = Color.grey.lighten5

        preparePasswordField()
        prepareEmailField()
        prepareResignResponderButton()
    }

    /// Prepares the resign responder button.
    fileprivate func prepareResignResponderButton() {
        let btn = RaisedButton(title: "Resign", titleColor: Color.blue.base)
        btn.addTarget(self, action: #selector(handleResignResponderButton(button:)), for: .touchUpInside)
        view.layout(btn).width(100).height(constant).centerVertically(offset: emailField.height / 2 + 60).right(20)
    }

    /// Handle the resign responder button.
    @objc
    internal func handleResignResponderButton(button: UIButton) {
        emailField?.resignFirstResponder()
        passwordField?.resignFirstResponder()
    }
}

extension ViewController {
    fileprivate func prepareEmailField() {
        emailField = ErrorTextField()
        emailField.placeholder = "Email"
        emailField.detail = "Error, incorrect email"
        emailField.isClearIconButtonEnabled = true
        emailField.delegate = self

        // Center the email field above the password field.
        view.layout(emailField).center(offsetY: -passwordField.height - 60).left(20).right(20)
    }

    fileprivate func preparePasswordField() {
        passwordField = TextField()
        passwordField.placeholder = "Password"
        passwordField.detail = "At least 8 characters"
        passwordField.clearButtonMode = .whileEditing
        passwordField.isVisibilityIconButtonEnabled = true

        // Setting the visibilityIconButton color.
        passwordField.visibilityIconButton?.tintColor = Color.green.base.withAlphaComponent(passwordField.isSecureTextEntry ? 0.38 : 0.54)

        // Center the password field in the view.
        view.layout(passwordField).center().left(20).right(20)
    }
}

// Conform ViewController (rather than UIViewController itself) to TextFieldDelegate,
// so the conformance does not leak into every view controller in the app.
extension ViewController: TextFieldDelegate {
    public func textFieldDidEndEditing(_ textField: UITextField) {
        (textField as? ErrorTextField)?.isErrorRevealed = false
    }

    public func textFieldShouldClear(_ textField: UITextField) -> Bool {
        (textField as? ErrorTextField)?.isErrorRevealed = false
        return true
    }

    public func textField(_ textField: UITextField, shouldChangeCharactersIn range: NSRange, replacementString string: String) -> Bool {
        (textField as? ErrorTextField)?.isErrorRevealed = false
        return true
    }
}
|
Neuromagnetic signatures of syllable processing in fetuses and infants provide no evidence for habituation.
Habituation, as a basic form of learning, is characterized by decreasing amplitudes of neuronal reaction following repeated stimuli. Recent studies indicate that habituation to pure tones of different frequencies occurs in fetuses and infants. Neural processing of different syllables in fetuses and infants was investigated. An auditory habituation paradigm including two different sequences of syllables was presented to each subject. Each sequence consisted of eight syllables (sequence /ba/: 5× /ba/, 1× /bi/ (dishabituator), 2× /ba/; sequence /bi/: 5× /bi/, 1× /ba/ (dishabituator), 2× /bi/). Each subject was stimulated with 140 sequences. Neuromagnetic signatures of auditory-evoked responses (AER) were recorded by fetal magnetoencephalography (fMEG). Magnetic brain signals of N=30 fetuses (age: 28-39weeks of gestation) and N=28 infants (age: 0-3months) were recorded. Forty-two of the 60 fetal recordings and 29 of the 58 infant recordings were included in the final analysis. AERs were recorded and amplitudes were normalized to the amplitude of the first stimulus. In both fetuses and infants, the amplitudes of AERs were found not to decrease with repeated stimulation. In infants, however, amplitude of syllable 6 (dishabituator) was significantly increased compared to syllable 5 (p=0.026). Fetuses and infants showed AERs to syllables. Unlike fetuses, infants showed a discriminative neural response to syllables. Habituation was not observed in either fetuses or infants. These findings could be important for the investigation of early cognitive competencies and may help to gain a better understanding of language acquisition during child development. |
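The analysis described in the abstract (AER amplitudes normalized to the first stimulus of a sequence, then the dishabituator at syllable 6 compared with syllable 5) can be sketched as follows; the amplitude values are invented for illustration, not data from the study:

```python
# Normalize auditory-evoked response amplitudes to the first stimulus
# and compare the dishabituator (syllable 6) with the preceding syllable.
# The amplitudes are invented placeholders, not study data.
amplitudes = [12.0, 11.8, 12.1, 11.9, 11.5, 14.2, 12.0, 11.7]  # syllables 1-8

normalized = [a / amplitudes[0] for a in amplitudes]  # first stimulus -> 1.0
dishabituation = normalized[5] - normalized[4]  # syllable 6 minus syllable 5
print(f"Syllable 6 vs 5 (normalized): {dishabituation:+.3f}")
```

A positive difference here corresponds to the discriminative response the study reports in infants; habituation would instead show amplitudes declining across the repeated syllables.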
Archbishop Emeritus Desmond Tutu said Sunday that Israel's treatment of Palestinians reminds him of South African apartheid, and reiterated his support of the Boycott, Divestment and Sanctions movement.
"I have witnessed the systemic humiliation of Palestinian men, women and children by members of the Israeli security forces," he said in a statement. "Their humiliation is familiar to all black South Africans who were corralled and harassed and insulted and assaulted by the security forces of the apartheid government."
The former Anglican archbishop of Cape Town made the statement as the 10th annual Israel Apartheid Week opened Sunday in South Africa. The initiative, part of the international boycott, divestment and sanctions movement’s campaign against Israel, is being marked in 87 cities this year. Tutu visited Israel in 1989.
International pressure similar to the BDS movement led to the end of apartheid in South Africa, the statement said.
"In South Africa, we could not have achieved our democracy without the help of people around the world, who through the use of non-violent means, such as boycotts and divestment, encouraged their governments and other corporate actors to reverse decades-long support for the apartheid regime.
"The same issues of inequality and injustice today motivate the divestment movement trying to end Israel's decades-long occupation of Palestinian territory and the unfair and prejudicial treatment of the Palestinian people by the Israeli government ruling over them."
Tutu, who won the Nobel Peace Prize in 1984 for standing up against white-minority rule in South Africa, added that people who don't act against injustice are complicit in it.
''Those who turn a blind eye to injustice actually perpetuate injustice. If you are neutral in situations of injustice, you have chosen the side of the oppressor," he said. "It doesn't matter where we worship or live. We are members of one family, the human family, God's family."
Last May, the retired archbishop joined calls for UEFA to move the Under-21 European soccer championship from Israel because of the state's treatment of Palestinian sport. |
Q:
Visual Studio Code: how to change the background of a specific scope
I want to change the background of a specific scope (code.block) in VS Code, as I do in Sublime Text.
Block comments have a different background there, so they are easier to notice.
Although I made the necessary modifications to the relevant JSON file, and the foreground colors display correctly, the background color is always overridden by the editor's background. The background field (ff0000, marked in red) changes correctly, but the displayed background (1d1a18, marked in green) is the background of the editor.
I use the Material Dark Soda theme.
Does anyone know how to change this behavior?
A:
You can't. Here's the tracking issue: #3429.
|
adidas Nations Wrap-Up
Ready for something a little different? We’ll continue our extensive coverage of the Iguodala acquisition soon, but the long-awaited conclusion to the Dwight Howard saga presents a great opportunity to change gears a bit and recap the recently concluded 2012 adidas Nations.
As I mentioned in my earlier reports, this year's Nations had plenty of Denver Nuggets connections despite not being an official NBA event. I caught up with high-school senior Isaac Hamilton, Jordan's little brother and one of the top recruits in the Los Angeles area. On a more somber note, I also witnessed Arron Afflalo attend one of his last official functions as a representative of the Denver Nuggets. Here's the full rundown of these stories and my impressions of the talent showcased at the 2012 adidas Nations.
Isaac Hamilton forging his own path
At first glance, the younger Hamilton looks like a glimpse five years into Jordan’s past. Physically, he looks just like a mini-Jordan and has the same quick release and buttery smooth jump shot.
On the court however, Isaac’s crafty game is a stark contrast to Jordan’s more physical, athletic style of play. He does have good athleticism at 6-5 and 185 pounds, but he plays below the rim more often than Jordan and has a good feel for where to be in terms of running a team offense. He has a good handle and his passing skills are advanced for a high school player.
“I guess I’m more of an all-around type of player” said Isaac when asked to compare himself to Jordan. “I don’t really have to score the ball to impact the game. I think Jordan, he’s a better scorer and a lot taller and stronger than me — so that’s a slight advantage.”
I would say Isaac’s biggest strength right now is his pull-up jumper and his ability to quickly change directions, elevate and shoot over his defender. There are very few players at his level with the type of mid-range game Isaac has. He definitely needs to add strength and improve his overall feel for the game, most notably in terms of moving without the ball.
Isaac is one of the most heavily recruited high school players in Los Angeles, having received offers from UCLA, USC, Colorado, Louisville and a host of other top-flight programs. He told me he’s continuing to work on his overall skills and wants to develop his floor game as a point guard. When asked what it’s like going against his NBA brother Jordan, Isaac had a heartwarming, classic response.
“It’s fun, competitive. Jordan, whenever he comes back home — even if it’s an away game and they’re playing the Lakers — we play one-on-one. It’s always competitive. Sometimes we can’t finish because there’s either a fight or arguments, but it’s just fun.”
Andre Roberson scouting report
It’s no secret I like Roberson a lot, as I spent a ton of time getting acquainted with his game in my first two days of covering the camp. He’s a very solid NBA talent coming out of Colorado, a state which hasn’t been known for producing NBA-caliber basketball players.
Roberson burst onto the scene as an energetic, do-everything forward in his freshman year for Tad Boyle’s Buffaloes. Since then, he’s made himself into a dominant rebounder in the NCAA and one of the best all-around talents in the Pac-12.
Roberson may lack the strength and physical tools to play the four in the NBA, but his body is developing nicely and his underrated perimeter skills should allow him to play the three as well. At the adidas Nations, Roberson wasn’t featured in the pick-and-roll heavy NBA sets, but he somehow found his way to the ball with hustle, grit, and determination.
As Kenneth Faried showed last year, playing hard is a skill. Being able to sustain a high level of effort is something that can be developed and translated to NBA success. Roberson has that. He’s a tireless worker and often finds a way to make good things happen due to the energy he plays with on both ends of the floor.
Roberson gets almost all of his offense off back cuts, offensive rebounds and off-ball movement, but he does know how to shoot and can knock down an open jumper. Defense is where Roberson could be truly special. His long arms, excellent shot-blocking instincts and quick lateral movement provide all of the tools he needs to be a Kenyon Martin-type terror on the defensive end. He was far and away one of the best perimeter defenders I saw at the adidas Nations, able to apply solid ball pressure despite being a 6-7 post player.
Keep an eye on Roberson and the Buffs this year, as he’s sure to be on the 2013 NBA draft radar all season. He’d be a great fit in Denver with his tireless work ethic, ability to run the floor and defend multiple positions.
Aussie Aussie Aussie!
Australia has a proud sporting history and a long-standing tradition of professional basketball. For whatever reason, it’s produced hardly any quality NBA talent in the modern era. Andrew Bogut was supposed to be a star — which sort of happened — but outside of fringe NBA players Patty Mills and David Andersen, Australia hasn’t produced an exciting basketball prospect in quite some time.
I believe that’s about to change in the next few years with Dante Exum and Ben Simmons set to explode on the college basketball scene. Exum will be a high school senior next year and is already rumored to be attending the University of North Carolina in 2014. Ben Simmons, however, is just 15 years old and has dual citizenship in the USA. He could come over for a year or two of prep before hitting the college recruiting trail, where he’d no doubt be among the most coveted prospects in the country.
Dante Exum is a stud. Everything about the long, athletic 6-6 combo guard tells me he’s destined to be a big-time player. He gets to the basket so easily against high-schoolers it’s almost unfair. He’s also the best playmaker on his team and one of the more skilled all-around players at the entire camp. There may be better scorers out there, but not by much, and Exum does everything else at an elite level for his age. Betting on any one camper here to become a star is usually a losing proposition, but if I had to pick one, I’d put it all on Exum. He has the full package of tools to make it happen – smarts, athleticism, all-around skills and intangibles.
Ben Simmons is the other Aussie to watch. He’s a 15-year-old, 6-8 combo forward whose physical tools and all-around game have drawn early comparisons to a recent two-time MVP. Simmons told me he doesn’t like to compare himself to other players, but teammates and fellow writers likened his game to LeBron’s. He’s extremely young and already physically outclasses most everyone else in high school basketball. The smooth lefty has a great feel for running the floor, dunking with force and shooting with range. There are very few things he can’t do at this level. The true test will be to see if he can translate it to the college level against better competition.
Australians and worldwide hoops fans alike should keep an eye on these two intriguing high school prospects. If either played in the US, they might rank among the very top players in their class.
Afflalo leads the right way
As part of adidas’ efforts to provide guidance and learning resources to the college counselors, Arron Afflalo joined Alec Burks, Luc Mbah a Moute and others to act as NBA ambassadors for the event. Most of the other NBA guys showed up in street clothes, watched the games and mingled with the coaches and campers in attendance.
In typical Afflalo fashion, Arron went above and beyond the call of duty, choosing to get down and dirty in the actual scrimmages. He joined one of the undermanned college squads and led them to an impressive victory in the final scrimmage of the event. Afflalo played with his signature effort and unselfish demeanor, providing a prime example of how to lead by example and play the right way.
I talked to New York Knicks assistant Kenny Atkinson, who coached Afflalo’s team, about what that experience meant to the rest of the campers. “What was really cool is he didn’t come in with a cocky attitude. It was a very dignified, subtle leadership,” said Atkinson. “A lot of NBA guys would have come in and just started jacking shots. He fit in with everybody and then took over at the end. I don’t even know the kid and I love his personality.”
Players to watch
Here are a few guys, in no particular order, who stood out to me or caught my eye during the camp.
Ed Daniel, 6-7 PF, Junior at Murray State
Daniel wears his hair in a Ben Wallace-style afro and has a huge personality on and off the court. He’s a Kenneth Faried-style bruiser who could rise up draft boards this season. He’s a very physical player with great leaping ability and NBA athleticism.
Isaiah Austin, 7-0 C, Freshman at Baylor
He’s listed at 7-foot, but looked taller to me. He’s pretty skinny but could be a game changing force on defense. He’s athletic and very mobile for a 7-footer, but he’s somewhat clumsy and likely won’t do much on offense. I loved how hard he played throughout the camp, giving all-out effort every minute he was on the floor.
Noah Vonleh, 6-8 SF/PF, Class of 2014
Canadian Andrew Wiggins is widely assumed to be the best player in high-school basketball right now, but guys like Vonleh are proof he could have some competition down the road. Vonleh has the strength and athleticism of a 21-year-old and physically overpowered just about everyone at the adidas Nations. He also made the game-winning three to win the whole tournament and put himself on the map as a future top recruit and legit NBA prospect.
Zack LaVine, 6-3 SG, Class of 2013
Do you like dunks? Zack LaVine is an incredible leaper and one of the most stylish dunkers I’ve seen in person. He was pulling off 360s and Eastbay variations of all kinds with ease. Not only does he dunk with force, he really gets up and hangs in the air, just oozing with style. He’s known as a deadly scorer, and the athletic dunk machine has already committed to UCLA.
3 comments on “adidas Nations Wrap-Up”
I am a CU fan, but I still can’t understand why Andre Roberson is getting so much draft and potential lottery buzz. Yes, he is an extremely hard worker, but he has Corey Brewer’s body mixed with Reggie Evans’s game… Doesn’t sound like someone I would want to take in the lottery. Unless he improves his offense dramatically, I can’t ever see him being any better than a poor man’s Nicolas Batum.
His body needs work, but he really has gotten stronger since coming to college. Offensively his game is raw, but I wouldn’t go so far as to compare him to Reggie Evans. He can fill a role and defend at a high level. Because of his size and lack of a true position he probably doesn’t go in the lottery, but I still see a ton of NBA upside in his ability to play defense. The things he can do are rare, and he’s got a nice frame — with a decent strength coach he could probably bulk up a lot.
The Ultimatum Game has become a popular paradigm for investigating and elucidating the evolution of fairness[@b1]. In this simple game, two players, one acting as a proposer and the other as a responder, have to share a pie. The proposer suggests a split of the pie, and the responder can either accept it or not. If the responder accepts the offer, the deal is done. If the responder rejects the offer, neither player obtains anything. Apparently, a rational responder should accept any nonzero offer, or else he will end up with nothing, and thus a selfish proposer should always claim the large majority of the pie, which is known as the subgame perfect equilibrium in game theory[@b2]. This is also the observed outcome for the evolutionary Ultimatum Game in a well-mixed population[@b3]. However, a large body of empirical experiments shows that the majority of proposers offer 40% to 50% of the total sum, and about half of responders reject offers below 30%[@b1][@b4][@b5][@b6][@b7][@b8][@b9], which is obviously at odds with the above analytical reasoning. How, then, can we understand the emergence and persistence of fairness in a population of self-interested individuals?
Recently, considerable efforts have been made to explore the origins of this altruistic behaviour. Some studies have demonstrated that many people are not only concerned with their own benefits but are also influenced by the payoffs of others, which is usually considered in the definitions of utility functions[@b8][@b10][@b11][@b12][@b13]. Others have shown that the preference of people towards fairness may be due to repeated interactions in the Ultimatum Game[@b6][@b8][@b13][@b14]. In the context of evolutionary game theory[@b15], theoretical studies indicate that small group size[@b16], reputation[@b17], empathy[@b18], population structure[@b19][@b20][@b21][@b22][@b23][@b24][@b25][@b26][@b27][@b28][@b29] and heterogeneity[@b30][@b31] play a vital role in the evolution of fairness in the Ultimatum Game.
To our knowledge, an important issue, which has so far remained unexplored, is how the random allocation of pies affects the evolution of fairness in the Ultimatum Game. In hunter-gatherer societies[@b32], the Ultimatum Game can describe such a situation, where two individuals have to divide in advance the reward of a task which can be obtained only by joint effort, such as cooperative hunting, forming an alliance against another group member, or food sharing. Obviously, there is no reason that the sizes of rewards must be uniform. Let us take cooperative hunting as an example. It apparently cannot be guaranteed that the prey is always the same for each hunting activity. Actually, it seems more plausible to assume that the sizes of pies are subject to some kind of distribution, which motivates us to model and study the random allocation scheme of pies in the Ultimatum Game under the framework of evolutionary game theory (see **Model definition** in **Methods** section). Interestingly, we find that whenever individuals compete for stochastic sizes of pies (introduced by the random allocation scheme), evolution can lead to a fairer split, without the support of any additional evolutionary mechanisms. Our results thus demonstrate how randomness can be crucial for the emergence and maintenance of fairness.
Results
=======
In this report, we mainly focus on how the amplitude of fluctuation of the pies, Λ, influences the evolution of fairness in the Ultimatum Game. The strategy of a player is given by a vector *S* = \[*p*, *q*\], where *p* represents the offer level, i.e., the fraction of the pie offered by the player when acting as a proposer, and *q* indicates the acceptance threshold, i.e., the minimum fraction that the player accepts when acting as a responder.
We start by studying how the local random allocation scheme of pies affects the evolution of fairness in the spatial Ultimatum Game. It should be noticed that Λ = 0 recovers the original spatial Ultimatum Game[@b19], wherein the uniform allocation scheme of pies is adopted. With the increment of Λ, the allocated pies become increasingly stochastic. [Figure 1](#f1){ref-type="fig"} shows the results for the evolution of fairness across the whole applicable span of Λ. Comparing the results for the uniform allocation scheme (i.e., Λ = 0) with those for the local random allocation scheme (i.e., 0 \< Λ ≤ 1) reveals the impact of randomness on the evolution of fairness. For the uniform allocation scheme, the population evolves towards a state deviating from the game-theoretic prediction and equilibrates at nontrivial average offer and acceptance levels on spatial networks, which is similar to the positive effect of network structure on facilitating cooperation[@b33][@b34][@b35][@b36][@b37]. As Λ increases, both the average offer level and the average acceptance threshold monotonically increase, reaching their respective maxima at Λ = 1.
The time evolution of typical spatial strategy distributions for Λ = 0 and Λ = 1 is depicted in [Figs. 2(a) and 2(b)](#f2){ref-type="fig"}, respectively. Generally, the evolutionary process for the case of Λ = 0 can be characterized by two distinct dynamical phases: local aggregation (from *t* = 1 to *t* = 10) followed by global expansion (from *t* = 10 to *t* = 10000). Initially, self-incompatible strategies (i.e., strategies satisfying *p* \< *q*) go extinct within the first few time steps \[see *t* = 10 for Λ = 0 in [Fig. 2(a)](#f2){ref-type="fig"}\]. Such strategies obtain nothing when interacting with themselves. Consequently, they will disappear in a spatial world. On the other hand, self-compatible strategies (i.e., strategies satisfying *p* ≥ *q*), which can be roughly classified into two categories — generous strategies (i.e., strategies for which *p* is large while *q* is small) and quasiempathic strategies (i.e., strategies for which *p* and *q* are similar to each other) — gradually form spatial clusters in a self-organized manner \[see *t* = 10 for Λ = 0 in [Fig. 2(a)](#f2){ref-type="fig"}\]. Therefore, we can make the macroscopic observation that the average offer level of the population increases, while the average acceptance threshold decreases in this stage \[see *t* = 1 and *t* = 10 for Λ = 0 in [Fig. 2(a)](#f2){ref-type="fig"}\]. Subsequently, the system enters the global expansion phase. The more fair quasiempathic strategies can expand into the territories of the less fair quasiempathic and the generous ones in the form of spatial clusters.
We emphasize that theories related to spatial selection of cooperators in the prisoner\'s dilemma game[@b38] or the public goods game[@b39] do not help explain fairness in the Ultimatum Game at this stage, as a cluster of individuals with more fair quasiempathic strategies receives the same average payoff as a cluster of individuals with less fair quasiempathic strategies or generous ones. On the contrary, it is the performance of one strategy against other strategies that determines its own evolutionary fate. With these facts in mind, we can explain the above phenomenon by considering the following situation: a player with the more fair quasiempathic strategy *S*~2~ = \[*p*~2~, *q*~2~\] competes with another player with the less fair quasiempathic strategy *S*~1~ = \[*p*~1~, *q*~1~\]. Then the following two cases should be considered: (a) *p*~1~ ≥ *q*~2~, so that both proposals are accepted, and (b) *p*~1~ \< *q*~2~, so that only the proposal of *S*~2~ is accepted. The payoff difference between the player with *S*~2~ and the one with *S*~1~ is 2(*p*~1~ − *p*~2~) if condition (a) holds, and 1 − 2*p*~2~ if condition (b) holds. Obviously, it is advantageous for players to enhance their acceptance thresholds. Though it gives a good estimation of the local competition between players on spatial networks, especially if the connectivity is low, this simple analysis ignores the fact that the performance of a player depends not merely on one single interaction, but on the interactions with players in the whole neighborhood. From this perspective, players are tempted to lower their acceptance thresholds. Put differently, there is a tradeoff between rejecting unfair offers (achieved by increasing *q*) and making more successful splits (achieved by decreasing *q*) for players in structured populations.
As a result, we can observe a sharp increase of the average acceptance threshold of the population from a low level at *t* = 10 to a moderate level at *t* = 100, while the average offer level of the population increases only moderately in this stage \[see *t* = 10 and *t* = 100 for Λ = 0 in [Fig. 2(a)](#f2){ref-type="fig"}\]. Still, there are a few small residual clusters of generous strategies embedded in a spatial world of fair quasiempathic strategies at *t* = 100. The offer level of each player is roughly equal \[*σ~p~* ≈ 0.0328 at *t* = 100, see [Fig. 2(a)](#f2){ref-type="fig"}\]. On the other hand, the strategies surviving in the structured population are self-compatible \[see [Fig. 2(a)](#f2){ref-type="fig"}\]. Both factors lead to the result that the payoff of each player is approximately equal in the population. The strategy evolution is largely controlled by slowly coarsening dynamics, which is similar to the behaviour of the voter model[@b40]. Since the fraction of fair quasiempathic strategies is much higher than the fraction of generous ones, the fair quasiempathic strategies will eventually take over the whole population \[see *t* = 100 and *t* = 10000 for Λ = 0 in [Fig. 2(a)](#f2){ref-type="fig"}\]. Regarding the case of Λ = 1, we observe a very similar evolutionary process to that of Λ = 0 \[see [Fig. 2(b)](#f2){ref-type="fig"}\].
For the purpose of shedding light on the constructive impact of the local random allocation scheme of pies on the evolutionary success of fairness in our agent-based simulations, we turn to consider a mini spatial Ultimatum Game[@b17][@b19][@b41], in which only two strategies, i.e., *S*~1~ = \[*p*~1~, *q*~1~\] and *S*~2~ = \[*p*~2~, *q*~2~\], are present. It was previously reported that the fate of a random arrangement of two strategies on the two-dimensional grid relies on whether a 3 × 3 cluster of one strategy can spread or not[@b19][@b41]. We thus investigate how the invasion ability of a 3 × 3 *S*~2~ mutant cluster varies with Λ in a spatial world of players with strategy *S*~1~. In [Supplementary Note](#s1){ref-type="supplementary-material"}, we find that the parameter region under which the 3 × 3 *S*~2~ mutant cluster is expected to expand is enlarged when Λ increases (see [Supplementary Fig. S13](#s1){ref-type="supplementary-material"}). More intriguingly, there exists an intermediate value of Λ that leads to the most favorable condition for the evolution of fairness in the mini spatial Ultimatum Game. This observation reminds us of the coherence resonance phenomenon in dynamical systems, where noise can anticipate the behaviour of the system past a bifurcation point in a resonant manner[@b42][@b43][@b44][@b45][@b46]. Indeed, results presented in [Supplementary Fig. S13](#s1){ref-type="supplementary-material"} clearly show that there exists an optimal Λ for which the parameter region of expansion is maximal. As a matter of fact, the size of this parameter region can be taken to represent the constructive effects of noise on the system, and has a similar meaning as the signal-to-noise ratio in dynamical systems[@b42]. Then we can regard the phase transition as a bifurcation point of a dynamical system.
This conjecture can be strengthened by considering that the parameter *p*~2~ in the noisy regime truly acts as a bifurcation parameter, since increasing its value pushes the system further away from the transition line, which makes it increasingly difficult for noise to anticipate the behaviour of the system beyond the bifurcation (see [Supplementary Fig. S13](#s1){ref-type="supplementary-material"}). Extensive computer simulations also show the existence of intermediate optimum Λ in the full spatial Ultimatum Game (i.e., with its continuum of strategies) by expanding Λ beyond the reasonable range \[0, 1\], and thus verify our explanation (see [Supplementary Fig. S1](#s1){ref-type="supplementary-material"}).
[Figure 3](#f3){ref-type="fig"} shows how fairness evolves if the randomness of pies is extended from local to global in the spatial Ultimatum Game. The results obtained here are qualitatively identical to those for the case of the local random allocation scheme. Namely, as Λ increases, both the average offer level and the average acceptance threshold of the population increase. Together with the results obtained from [Fig. 1](#f1){ref-type="fig"}, one can thus conclude that the stochasticity in the sizes of pies facilitates the evolution of fairness in the spatial Ultimatum Game.
To check the universality of the drawn conclusions, we have tested various aspects of the model. Considering a well-mixed population, a small-world network, or a scale-free network does not affect our qualitative results (see [Supplementary Figs. S2, S3 and S4](#s1){ref-type="supplementary-material"}). Without the aid of "spatial reciprocity"[@b19], the well-mixed population evolves into a considerably unfair state (see [Supplementary Fig. S2](#s1){ref-type="supplementary-material"}). The degree of fairness increases compared with that observed on square lattices when individuals interact on small-world networks. Namely, the small-world effect can further enhance the level of fairness (compare [Figs. 1](#f1){ref-type="fig"} and [3](#f3){ref-type="fig"} with [Supplementary Fig. S3](#s1){ref-type="supplementary-material"}). In contrast, the heterogeneity of the degree distribution disfavors fairness, as the level of fairness achieved on scale-free networks is more modest in comparison with that observed on square lattices (compare [Figs. 1](#f1){ref-type="fig"} and [3](#f3){ref-type="fig"} with [Supplementary Fig. S4](#s1){ref-type="supplementary-material"}). This is in sharp contrast with other theoretical investigations showing that inhomogeneity of the networks can result in a remarkable boost in cooperation[@b37][@b47]. Altering the update rule (employing an asynchronous updating rule) results in the same qualitative outcomes (see [Supplementary Fig. S5](#s1){ref-type="supplementary-material"}). Applying overlapping generations (asynchronous updating) instead of non-overlapping generations (synchronous updating) can further elevate the degree of fairness (compare [Figs. 1](#f1){ref-type="fig"} and [3](#f3){ref-type="fig"} with [Supplementary Fig. S5](#s1){ref-type="supplementary-material"}). Moreover, the variation of noise or learning error does not affect our qualitative outcomes (see [Supplementary Figs. S6 and S7](#s1){ref-type="supplementary-material"}).
Moderately adjusting the noise as well as the learning error does not alter the conclusion that random allocation of pies promotes the evolution of fairness. As a further test of robustness, we investigate another variant of the Ultimatum Game, in which the game is played only once between two parties, and roles (proposer or responder) are randomly assigned to them. Again, we find qualitatively equivalent behaviour (see [Supplementary Fig. S8](#s1){ref-type="supplementary-material"}). Moreover, we also test another widely applied initial strategy distribution setup, in which the two components *p* and *q* of each individual\'s strategy vector \[*p*, *q*\] are initially picked in the interval \[0, 1\] randomly and independently. We show that such a change does not affect the generality of the reported results (see [Supplementary Fig. S9](#s1){ref-type="supplementary-material"}). In addition, when replacing the uniform distribution of pies with an exponential or power-law distribution, we find qualitatively the same results (see [Supplementary Fig. S10](#s1){ref-type="supplementary-material"}). All the above results indicate that the main conclusions are robust against a wide variety of perturbations of the model.
Discussion
==========
In sum, we have studied how random allocation of pies affects the evolution of fairness in the spatial Ultimatum Game. It was found that the evolution of fairness can be promoted if randomness is involved in the allocation of pies. To elucidate the underlying reason for the facilitation of fairness, we have analyzed a mini spatial Ultimatum Game, and found that the introduced randomness can favor fairness in a resonant manner, which is similar to coherence resonance in other dynamical systems[@b36][@b42]. This explanation is further supported by the observation of the resonant phenomenon in the full spatial Ultimatum Game. Moreover, we have demonstrated that the main findings are robust against numerous variations of the model, thus establishing the randomness of pies as a universal mechanism to promote the evolution of fairness.
Lastly, we would like to relate the present work to some other game-theoretical ones. By using stochastic evolutionary game theory, Rand *et al.*[@b48] studied the effect of randomness on the evolution of fairness in the Ultimatum Game. Interestingly, they found that natural selection favors fairness when selection is sufficiently weak or mutation is sufficiently high. However, such assumptions are not necessary in our study. Instead, we find that the randomness arising from the allocation of pies can promote the evolution of fairness in the Ultimatum Game even if selection is strong and mutation is low (see [Supplementary Fig. S2](#s1){ref-type="supplementary-material"}). Note that it is the errors in the social learning process and the finiteness of the populations that lead to the randomness in their study. Although the randomness originates from different sources in these two works, both of them lead to the conclusion that fairness has a better chance to triumph in a random world.
In fact, the effect of the size of pies on the origin of fair behavior has also attracted considerable interest from experimental economists. In one class of experimental designs[@b49][@b50][@b51], the size of the pie to be allocated decays over rounds until an agreement is reached. In our model, by contrast, the size is randomly changed during the evolutionary process, and we focus on the one-round Ultimatum Game. In another class[@b52][@b53], researchers investigated the Ultimatum Game with incomplete information, in which both parties have limited information regarding the game (e.g., only the proposer knows the size of the pie, while the responder is merely informed of the probability distribution of possible pie sizes when responding to the proposal[@b52]). In our study, however, players have no information about their co-players and no knowledge of the size of the pies. Our analysis shows that randomness of pies facilitates the evolution of fairness in the Ultimatum Game even in such an information-deficient situation.
Methods
=======
Model definition
----------------
The strategy of each player can be characterized by a vector *S* = \[*p*, *q*\]. The value of *p* denotes the fraction of the pie offered by the player when acting as a proposer, while the value of *q* indicates the acceptance threshold, i.e., the minimum fraction that the player accepts when acting as a responder. Each time step, every individual plays the Ultimatum Game with each of its neighbors, once in the role of proposer and once in the role of responder. Let *P*(*S~i~*, *S~j~*) be the payoff that player *i* with strategy *S~i~* = \[*p~i~*, *q~i~*\] gets from player *j* with strategy *S~j~* = \[*p~j~*, *q~j~*\]. Thus *P*(*S~i~*, *S~j~*) is the sum of what *i* earns in its two roles: *i* receives *R~i~*(1 − *p~i~*) if *p~i~* ≥ *q~j~* (and nothing otherwise), plus *R~j~p~j~* if *p~j~* ≥ *q~i~* (and nothing otherwise), where *R~i~* (*R~j~*) is the pie allocated to the Ultimatum Game in which player *i* (*j*) acts as a proposer, and *j* (*i*) as a responder. As far as we know, previous studies regarding the Ultimatum Game simply assume the uniform allocation scheme of pies, that is, the size of the pies is constant (i.e., *R~i~* = *R~j~* = 1). In this report, we would like to relax this assumption, and introduce randomness by considering a random allocation scheme of pies.
### Random allocation scheme
For random allocation scheme, pies are randomly allocated to the Ultimatum Games. As the first step to model the random allocation scheme and for the convenience of analysis, the pies *R~i~* and *R~j~*, which are split between player *i* and *j*, are simply assumed to be *R~i~* = 1 + *ξ* and *R~j~* = 1 − *ξ*, where *ξ* is a random variable and subject to uniform distribution ranging from −Λ to Λ \[see [Fig. 4(a)](#f4){ref-type="fig"}\]. As the total size of pie *R~i~* + *R~j~* = 2 to be split between *i* and *j* is constant, the randomness of the allocation of pies is merely local in this case. Therefore, we term this mode of random allocation scheme as local random allocation scheme. Later, we will also investigate the so called global random allocation scheme, wherein *R~i~* = 1 + *ξ~i~* and *R~j~* = 1 + *ξ~j~* \[see [Fig. 4(b)](#f4){ref-type="fig"}\]. Here *ξ~i~* and *ξ~j~* are independent random variables, and subject to uniform distribution ranging from −Λ to Λ. The parameter Λ determines the amplitude of undulation of the pies. For reasonability of the model, we set 0 ≤ Λ ≤ 1 for both local and global random allocation schemes, making sure that *R~i~* ≥ 0 and *R~j~* ≥ 0. It is important to note that all the expectations of the above random variables equal to zero, and thus there is no net contribution of both kinds of random allocation schemes to the total expected payoff of the population statistically.
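For concreteness, the two sampling schemes can be sketched as follows (a minimal illustration in Python; the function names are ours, not from the paper):

```python
import random

def local_allocation(lam):
    """Local scheme: a single draw xi enters the two pies with opposite
    signs, so R_i + R_j = 2 always holds (only the split fluctuates)."""
    xi = random.uniform(-lam, lam)
    return 1 + xi, 1 - xi

def global_allocation(lam):
    """Global scheme: two independent draws, so the total pie size
    itself fluctuates around 2."""
    return 1 + random.uniform(-lam, lam), 1 + random.uniform(-lam, lam)
```

For 0 ≤ Λ ≤ 1 both schemes keep each pie non-negative, and both draws have zero mean, so statistically neither scheme contributes to the total expected payoff of the population.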
Subsequent to the games, players consider updating their strategies. Each player accumulates the payoffs from all of its interactions, and then all players update their strategies synchronously. In particular, player *i* adopts the strategy *S~j~* of a randomly selected neighbor *j* with the probability *W* = 1/{1 + exp\[(*P~i~* − *P~j~*)/*K*\]}, where *P~i~* and *P~j~* are the payoffs of *i* and *j*, respectively. The parameter *K* quantifies the amplitude of noise[@b36]. As the strategies in the Ultimatum Game are continuous, it is almost impossible to imitate the strategy of the role model precisely. Thus we add a small perturbation to the process of strategy updating. Namely, after learning from *j*, the strategy of *i* becomes *S~i~* = \[*p~i~* + *ε*~1~, *q~i~* + *ε*~2~\], with *ε*~1~ and *ε*~2~ being randomly picked from the interval \[−*ε*, *ε*\]. Both the noise *K* and the learning error *ε* are used to create a "trembling hand" effect[@b2]. After these updating events have been performed for all of the individuals in the population, a new time step begins.
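Putting the pieces together, the pairwise payoff and the Fermi imitation rule described above can be sketched as follows (a simplified illustration; the function and argument names are ours):

```python
import math

def payoff(p_i, q_i, p_j, q_j, r_i=1.0, r_j=1.0):
    """Payoff player i earns against j: once as proposer with pie r_i
    (the deal succeeds if the offer p_i meets j's threshold q_j), and
    once as responder to j's offer from pie r_j."""
    total = 0.0
    if p_i >= q_j:        # i's proposal is accepted by j
        total += r_i * (1 - p_i)
    if p_j >= q_i:        # i accepts j's proposal
        total += r_j * p_j
    return total

def imitation_probability(payoff_i, payoff_j, k=0.1):
    """Fermi rule: probability that player i adopts j's strategy;
    k plays the role of the noise amplitude K."""
    return 1.0 / (1.0 + math.exp((payoff_i - payoff_j) / k))
```

For example, two fair players (*p* = *q* = 0.5) each earn the full half-pie in both roles, while a self-incompatible pair (*p* \< *q* on both sides) earns nothing, which is why such strategies die out quickly on the lattice.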
Model parameters settings
-------------------------
The simulation results were obtained by applying a square lattice with the *von Neumann* neighborhood (i.e., each site has 4 neighbors), which is of size *N* = 100 × 100 and with periodic boundary conditions. Based on rational self-interest, the two components *p* and *q* of each individual\'s strategy vector \[*p*, *q*\] are initially picked in the interval \[0, 0.5\] randomly and independently[@b17][@b26]. To evaluate the stationary state, we simulated the model for 3 × 10^4^ generations, calculated the mean value over the last 1 × 10^4^ generations, and averaged the results of 50 independent initial realizations. We confirm that runs for longer time periods did not affect the presented results.
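The von Neumann neighborhood with periodic boundaries used above can be expressed as a small helper (written here for illustration only):

```python
def von_neumann_neighbors(x, y, n=100):
    """The four nearest neighbors of site (x, y) on an n x n square
    lattice with periodic (toroidal) boundary conditions."""
    return [((x - 1) % n, y), ((x + 1) % n, y),
            (x, (y - 1) % n), (x, (y + 1) % n)]
```

The modulo operation wraps sites on the lattice edge around to the opposite side, so every site has exactly four neighbors.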
Author Contributions
====================
X.W., X.C. and L.W. conceived and designed the study and carried out the research. All authors contributed to writing the manuscript.
Supplementary Material {#s1}
======================
###### Supplementary Information
The authors are supported by the National 973 Program (2012CB821203) and the NSFC (61020106005, 61375120, 61203374 and 11161011).
{#f1}
{#f2}
{#f3}
{#f4}
|
Q:
How to release my view from stack?
In my application I'm using the following code to open my default screen:
AppDelegate *ptrDefaultScreen = (AppDelegate *)[[UIApplication sharedApplication]delegate];
[self.navigationController presentModalViewController:ptrDefaultScreen.FrmFlashScreenLink animated:YES];
But when I move to another screen after the default screen, the default screen still exists, even though I used [self dismissModalViewControllerAnimated:YES]; to dismiss the default screen from view.
Where am I going wrong? I want my default screen to be completely removed from view.
Is there any other way to show a default screen before the actual application?
Please help me out. Thanks in advance.
A:
The approach I use here is to make my default screen a subview of my main view. When I am done with it I either hide it, or removeFromSuperview it depending on memory constraints.
|
770 P.2d 1305 (1988)
Donald L. ANDERSON, Plaintiff-Appellee,
v.
Victor D. MOLITOR and Molitor Industries, Inc., Defendants-Appellants.
No. 87CA0480.
Colorado Court of Appeals, Div. III.
October 13, 1988.
Rehearing Denied December 8, 1988.
Certiorari Granted March 13, 1989.
*1306 Miller & Associates, P.C., James R. Miller, Netzorg & McKeever, P.C., J. Nicholas McKeever, Jr., Gordon W. Netzorg, Denver, for plaintiff-appellee.
Dickinson & Herrick-Stare, P.C., Gilbert A. Dickinson, Leonard M. Cooper, Denver, for defendants-appellants.
Certiorari Granted (Molitor) March 13, 1989.
CRISWELL, Judge.
Victor D. Molitor and Molitor Industries, Inc., defendants, appeal the trial court's denial of their C.R.C.P. 60(b) motion for relief from judgment. We affirm.
In mid-July 1986, the trial court entered a judgment on a jury verdict against defendants, who thereafter filed a timely motion for new trial under C.R.C.P. 59, claiming that the court had committed instructional error and had improperly excluded certain testimony during the course of the trial. However, because of various requests by the parties for extensions of time for the filing of legal memoranda, this motion was not denied by that court until November 17, 1986, a date well beyond the 60-day period provided by C.R.C.P. 59(j) for a trial court to dispose of such a motion. Thus, this motion was deemed to have been denied on October 6, 1986. Baum v. State Board for Community Colleges & Occupational Education, 715 P.2d 346 (Colo. App.1986).
As a result of this delay, the notice of appeal that defendants filed with this court was untimely, and, although they argued before us that the untimeliness of the notice was the result of "excusable neglect" and involved "unique circumstances" under Converse v. Zinke, 635 P.2d 882 (Colo. 1981), we rejected these arguments and dismissed their appeal. Anderson v. Molitor, 738 P.2d 402 (Colo.App.1987).
While that appeal was pending, however, defendants filed their unverified motion under C.R.C.P. 60(b) in the trial court, asking that court to vacate, and then immediately to re-enter, the judgment, so as to allow them to file a new notice of appeal with this court. After we entered our order of dismissal, but before the time set for the filing of any petition for rehearing had expired, the district court denied defendants' C.R.C.P. 60(b) motion, finding that they had failed to demonstrate any "excusable neglect" under C.R.C.P. 60(b)(1). It also concluded that "failure to timely file an appeal is not a sufficient ground to justify extraordinary relief from a judgment" under C.R.C.P. 60(b)(5).
Later, we denied defendants' petition for rehearing, the supreme court refused their request for a writ of certiorari, and our mandate, returning the cause to the trial court, issued.
I.
Without arguing the point, defendants initially suggest that the trial court may have lacked jurisdiction to act upon their C.R.C.P. 60(b) motion, because its order *1307 was entered while their appeal was still pending before this court and before this court's mandate was issued. We conclude, however, that the trial court had proper jurisdiction to deny the motion.
It is true that, as a general rule, once a proper appeal is filed with an appellate court, a trial court is without jurisdiction to enter any order that enlarges, diminishes, or changes the rights or obligations of the parties arising out of the judgment from which the appeal has been taken, absent an order of remand from the appellate court. Rivera v. Civil Service Commission, 34 Colo.App. 152, 529 P.2d 1347 (1974). See Schnier v. District Court, 696 P.2d 264 (Colo.1985). This does not mean that an appeal divests the trial court of jurisdiction over all matters that might arise, however.
In Rivera, this court held that a trial court did not have jurisdiction to modify the terms of a preliminary injunction while an appeal from that injunction was pending. In doing so, however, we adopted the rule, approved by several federal courts, that the trial court did retain jurisdiction to deny a motion to modify the injunction's terms; it is only if the trial court concludes that the motion has merit that an order of remand is required to be issued out of this court in order to re-invest the trial court with jurisdiction to grant relief from the injunction's terms.
It appears that a majority of the federal courts of appeal that have passed upon this issue have adopted a similar rule for the disposition of motions under Fed.R.Civ.P. 60(b) during the pendency of appeals. Under these decisions, the trial court retains jurisdiction to deny such motions, but an order of remand is required if such a motion is to be granted. Textile Banking Co., Inc. v. Rentschler, 657 F.2d 844 (7th Cir.1981). See generally Annot., 62 A.L.R. Fed. 165 (1983).
Since such a rule has already been adopted by this court for motions to modify injunctions, we now extend the ruling in Rivera to make it applicable to all C.R.C.P. 60(b) motions that request modification or vacation of the order or judgment being appealed. We hold, then, that a trial court continues to retain jurisdiction to consider and to deny such motions, but that it lacks jurisdiction to take any action that would modify or vacate the order or judgment, absent an order for partial remand entered by the appellate court. Thus, the trial court here retained jurisdiction to enter the order of denial about which defendants complain.
II.
Defendants assert that the trial court abused its discretion in denying their C.R.C.P. 60(b) motion to vacate. We disagree.
A C.R.C.P. 60(b) motion cannot be used to circumvent the operation of C.R.C.P. 59(j). Sandoval v. Trinidad Area Health Ass'n, Inc., 752 P.2d 1062 (Colo. App.1988); see Cavanaugh v. State Department of Social Services, 644 P.2d 1 (Colo.1982), appeal dismissed, 459 U.S. 1011, 103 S.Ct. 367, 74 L.Ed.2d 504 (1982). The sole exception that has been established is when the C.R.C.P. 60(b) motion is based upon "extraordinary circumstances" and involves "extreme situations." Canton Oil Corp. v. District Court, 731 P.2d 687 (Colo.1987) (juror misconduct not disclosed by record of trial may be basis for C.R.C.P. 60(b)(5) relief, even where new trial motion based on same misconduct was automatically denied under C.R.C.P. 59(j)).
Here, defendants did not file their C.R.C.P. 60(b) motion in order to obtain a new trial; their motion specifically requested the vacation, and then the immediate re-entry, of the judgment. The sole purpose of this motion was to relieve defendants from the effect of their prior untimely actions by having the trial court enter a new, but identical, judgment from which a new appeal could be taken. We conclude that, under such circumstances, the trial court was not required to consider the various *1308 factors that it normally would be charged with assessing under Buckmiller v. Safeway Stores, Inc., 727 P.2d 1112 (Colo.1986). Rather, to determine defendants' right to relief, it was required to decide whether the grounds asserted for such relief presented an "extreme situation" or a "unique circumstance" under Canton Oil v. District Court, supra.
Upon this issue, defendants' failure to perfect a timely appeal was not the type of "excusable neglect" that warrants relief under C.R.C.P. 60(b)(1), Cavanaugh v. Department of Social Services, supra, and they have failed to demonstrate any affirmative action by the trial court upon which they could have reasonably relied in not perfecting a timely appeal. Cf. Tyler v. Adams County Department of Social Services, 697 P.2d 29 (Colo.1985) (although the failure to file post-trial motion under C.R.C.P. 59(f) was jurisdictional defect to appeal, trial court did not err in granting relief from judgment on basis of excusable neglect where party seeking relief relied on trial court's affirmative, but improper, order dispensing with filing of post-trial motion). Further, the substantive grounds upon which they relied to justify the relief requested were all claims of rather pedestrian error during the course of the trial, and thus, they did not, as a matter of law, present the type of circumstance that would warrant relief under C.R.C.P. 60(b)(5); it was, by no means, a Canton Oil situation. Hence, the trial court properly denied defendants' motion.
III.
Defendants also contend that the trial court erred by ruling upon their motion before the time set by C.R.C.P. 121 § 1-15 for them to file a reply to plaintiff's memorandum in opposition to their motion. We agree that defendants should have been granted the right to file such a reply. However, since the materials before the trial court were entirely documentary, so that we are not bound by the trial court's findings and conclusions, Burks v. Verschuur, 35 Colo.App. 121, 532 P.2d 757 (1974), and since defendants have been given the right to present full argument, both written and oral, before this court, the trial court's action did not result in any ultimate prejudice to them. See C.A.R. 35(e); Denver Land & Security Co. v. Rosenfeld Construction Co., 19 Colo. 539, 36 P. 146 (1894).
ORDER AFFIRMED.
TURSI and JONES, JJ., concur.
|
About 80 train passengers had to be evacuated by firefighters after getting stuck on a bridge 50 feet above the ground.
It happened on the A Line, near East 78th Avenue and Gun Club Road, just north of Pena Boulevard, by the cell phone lot at Denver International Airport.
DFD airport rigs assisting RTD evacuate 80 train passengers stuck 50 feet above ground. — Denver Fire Dept. (@Denver_Fire) May 24, 2016
The passengers were stuck because a power outage stopped service on the A Line from Denver's Union Station to the airport Tuesday afternoon.
MORE | #TrainToThePlane down again: Power outage stops commuter train from Union Station to Denver airport
The passengers walked along the tracks and fire trucks were used to get them off the bridge. Buses were waiting to take them the rest of the way to the airport, RTD told Denver7.
|
Thirsty Thursday: Holgate Brewhouse Written by smith
Sitting just north of Melbourne in the appropriately woody Woodend, the Holgate Brewhouse (and the beer contained within) is an unexpected pleasure.
Presided over by the house-proud Paul and Natasha Holgate – whose name, as well as the bull's head on their labels, can be traced back to a coat of arms in medieval England – the converted hotel is a clear labour of love, where food, beer, ambience, beer and accommodation (if you want to drink lots of beer) all exist as parts of a whole.
But obviously, we're here for the beer and the Holgate selection is one that manages to be both surprising and familiar at the same time. The regular range spins from a straightforward German pilsner (the Pilsner) to an American pale (the Mt. Macedon) and a chocolate porter (the Temptress), while seasonal variants come in white ale, north English brown and Indian pale varieties, amongst others. But it's the limited releases where things really kick up a notch. Christmas ales, ancient spiced beers (the fantastically named Gruit Expectations) and mocha porters reign, but if you're really looking for a unique experience, it's hard to go past Beelzebub's Jewels, a 12 per cent Belgian quadrupel beer in a 750ml bottle that will set you back $70. It's pricey, sure, but it's also like no other beer you've ever tasted – ballsy, grape-tinged and a perfect stand-in for a bottle of champagne. |
Involvement of angiotensin converting enzyme in cerebral hypoperfusion induced anterograde memory impairment and cholinergic dysfunction in rats.
Forebrain cholinergic dysfunction is the hallmark of vascular dementia (VaD) and Alzheimer's dementia (AD) induced by cerebral hypoperfusion during aging. The aim of the present study is to evaluate the role of angiotensin converting enzyme (ACE) in cerebral hypoperfusion-induced dementia and cholinergic dysfunction. Chronic cerebral hypoperfusion (CHP) was induced by permanent bilateral common carotid artery occlusion (2VO) in rats. Chronic cerebral hypoperfusion resulted in anterograde memory impairment, revealed by the Morris water maze (MWM) and passive avoidance step-through (PA) tasks, which was significantly attenuated by the ACE inhibitor captopril. Cerebral hypoperfusion down-regulated the relative expression of the cholinergic muscarinic receptor (ChM-1r) and choline acetyltransferase (ChAT), and up-regulated angiotensin II type-1 receptor (AT-1) expression, in the hippocampus of the vehicle-treated CHP group on the 54th day post-hypoperfusion. The diminished numbers of presynaptic cholinergic neurons and pyramidal neurons were evident from the ChAT immunofluorescence and the hematoxylin and eosin (H&E) staining studies, respectively, in the hippocampal Cornu Ammonis 1 (CA1) region of vehicle-treated hypoperfused animals. Further, the lipid peroxidation level was also found to be elevated in the hippocampus of the vehicle-treated group. Our results demonstrated that continuous captopril treatment (50 mg/kg, i.p. twice daily) for 15 days mitigated the hypoperfusion-induced cholinergic hypofunction and neurodegeneration in the hippocampus. The present study robustly reveals that the angiotensinergic system plays a pivotal role in the progression of neuronal death and memory dysfunction during cerebral hypoperfusion. |
Veterans who have been discharged in the last 3 years are now eligible for in-state tuition rates at public schools in all 50 states.
On Veterans Day the White House announced that all 50 states are compliant with the Veterans Access, Choice, and Accountability Act that the President signed into law last August. The law mandates that all veterans and their eligible dependents must be charged the in-state tuition at public schools or the schools will lose GI Bill funding. This law applies to the Post-9/11 GI Bill, Montgomery GI Bill - Active Duty, and the GySgt John D. Fry Scholarship.
The law was originally slated to take effect on July 1 of this year, but due to slow action by some state legislatures VA Secretary Bob McDonald issued a waiver in May giving states until December 31 of this year to comply with the law. As of Veterans Day, the VA says that all 50 states, the District of Columbia, and territories are compliant with the law. Only the Northern Marianas Islands have been granted a waiver from the VA and intend to comply with the law at a later date.
This means that a veteran using the Post-9/11 GI Bill, their dependent using transferred benefits, or the orphan or a veteran who died on active duty will have their full tuition and fees paid at any public school in the United States or territories. There are no longer any time in residence requirements, or higher non-resident tuition charges for veterans or their dependents using the covered GI Bill programs.
Of course, as with any government program there are lots of exceptions to the rule:
This only applies to veterans who enroll in school within 3 years of discharge, or their dependents.
For Fry Scholarship recipients, they must enroll in school within 3 years of their parent's date of death.
Some totally online programs at public schools may charge the higher non-resident rate if the GI Bill recipient doesn't live in-state.
GI Bill recipients who originally enroll in school within 3 years of discharge, then stop using their GI Bill for at least a semester (not a summer semester), or transfer schools and lose more than 12 credit hours in the transfer, will lose eligibility if their second or subsequent enrollment is more than 3 years from the date of discharge.
GI Bill recipients who were originally within the 3-year time period when they started school before July 2, 2015, but are now past their 3-year eligibility, are not covered or "grandfathered" under this program; they will still have to pay the higher non-resident rate at their school, unless their school makes an exception.
This also doesn't apply to active duty service members or their dependents; the law only applies to veterans or their dependents.
Prior to this law, the Post-9/11 GI Bill only covered in-state tuition at public schools. Out-of-state, or non-resident, tuition can be more than $10,000 per year higher than in-state tuition.
Keep Up With Your Education Benefits
Whether you need a guide on how to use your GI Bill, want to take advantage of tuition assistance and scholarships, or get the lowdown on education benefits available for your family, Military.com can help. Sign up for a free Military.com membership to have education tips and benefits updates delivered directly to your inbox. |
Multi Car Business Insurance
At Esurance, we make it a breeze to insure more than one car on the same policy. In fact, we often reward you for it. We'll explain the Multi Car discount, which is available in most states, and detail how additional cars can impact your car insurance policy. As well as the possibility of a discount, the convenience of having a single contact point for all policies in the household could be an attractive benefit of multi car insurance. A comprehensive multi car policy could include any of the following as standard, or as an optional add-on: first car discount; courtesy car. But, if you do drive it with no insurance and get into an accident, you could be in a financial pickle. Instead, experts recommend insuring both vehicles and discussing a multi car discount with your insurer. Image: Chad Horwedel. Robyn Parets is a personal finance and business writer based in Boston. If you've got more than one car to insure at the same address, you could save yourself some money on every additional policy you take out with us. Think of it as a multi policy discount. Online you can buy a maximum of car policies per household, and over the phone cars per household. Some car insurance companies offer a discount only if the insured autos are in the same household and insured by related parties. Other insurers only require you to be at the same address and don't care if you're related or not. You can qualify for the discount mid-term if you place an additional car on your policy. Bundle your insurance policies and stay protected with a multi policy discount. You can save by choosing Nationwide to protect your home, vehicle and more. The multi car discount is probably one of the most common car insurance discounts. Getting the multi car discount is simple because it is basically fool-proof. If more than one vehicle is on a single policy, the multi car discount is automatically applied.
Multiple vehicles by design get the multi car discount. Whether you rely on your van or car to get you from A to B, or if it's a crucial part of your job, Aviva business vehicle insurance gives you the protection you need. Choose from either comprehensive or third party, fire and theft cover, and pick from a range of optional extras to tailor your cover to your business requirements. How does MultiCar Insurance work? Defaqto is a financial information business, helping financial institutions and consumers make better informed decisions. And of respondents (July–Dec) who gave us a best alternative price when getting a quote saved with Admiral MultiCar! Learn how to save money by extending your auto insurance coverage to two or more vehicles. Get a multi car insurance quote from USAA today and be on your way!
Households with more than one car could get cheaper premiums by opting for a multi car insurance policy. Compare quotes online with MoneySuperMarket. Multi car insurance lets you bundle all your car insurance together with a single insurer, either with multiple policies held together or a single policy for all. If you use your car for business purposes, then you will need a business car insurance policy. Use MoneySuperMarket to find business car insurance. Have more than one car in your household? Multi car insurance discounts from Direct Line help you save when you insure or more cars with us. Saved over with Admiral Multi Car Insurance. Voted by consumers as Best Car Insurance Provider, Personal Finance Awards. Get a quote now. Some insurers reduce their premiums if you insure more than one vehicle on the same policy or with the same provider. Compare insurance companies that could offer… Get Insurance in the Northern Territory from TIO. We cover car, home, business, travel, boat and more. Get a quick quote online now and save. Insure up to company cars, trucks or business vehicles with Allianz Australia. Get a quick and easy online business vehicle insurance quote now. Learn more about GEICO car insurance discounts, premium reductions and special programs that could save you money on your GEICO auto insurance premium. Compare more quote features than ever from top UK brands at Gocompare.com, where getting the right insurance deal is now even easier with Defaqto star ratings.
Clegg Gifford is the UK's leading Multi Car and Van Insurance broker. Contact us today for a non-obligating quote at market-beating rates.
|
Revolving Views At Movenpick
The elevator opened to the view of lights, twinkling in the dark – Nairobi by night shone in all its glitz and glamour, acknowledging the beauty that she is. The city beamed in the most remarkable way, we just had to stop and take it all in as we looked out of the floor to ceiling glass windows. From the skyscrapers in Upperhill to those in the CBD, it felt like we were on top of Nairobi.
“I can see us moving!” said one of my colleagues in excitement. Salome Jepkorir, the communication manager at The View Restaurant at Movenpick hotel, confirmed she was right: “The restaurant revolves 360 degrees in 80 minutes,” she informed us. We concentrated hard so we could notice the slow movement; after overcoming some slight disorientation, I took a seat facing the window.
At the View, Nairobi's newest upmarket culinary destination, the tasteful and minimalistic decor allows the main feature of the restaurant to stand out. The menu features Swiss dishes such as veal, barley soup, Zurich-style chicken and fondue specialities. Our team of five silently watched the constantly changing panorama down below. We were the ones revolving, yet it felt like it was the city that was presenting itself to us.
We started with a selection of hot and cold starters; tomato soup with small “Bufalina” mozzarella balls, cherry tomatoes, basil and grissini and the onion and bacon tart with crème fraîche both won us over. The soup was thick and warm, and the richness from the mozzarella toned down the tanginess that tomato soups sometimes have. The onion and bacon tart had wafer-thin slices of onion wrapped in crisp puff pastry revealing a fluffy bacon-filled middle, each bite providing a forkful of flavour.
Onion and Bacon Tart
When we arrived, we had a clear view of the CBD, but by the time the main course dishes were placed in front of us, The Oval in Westlands was in plain sight, and the restaurant's entrance, which was right behind us when we arrived, was now on a different side of the restaurant. The fine dining restaurant revolves in such a way that only the area where patrons sit revolves.
For the main course options, we had the strip loin steak and the pork medallion. The steak made a sizzling entrance on a hot plate with roasted cherry tomatoes and potato gratin on the side. Medium cooked, just how I like it, the steak was juicy, tender and oozing in flavour. The evenly browned potato gratin was silky with a heavily creamy profile.
Striploin Steak with Potato Gratin
Stuffed, but with a little room left over for dessert, we had warm almond savarin served with a raspberry sorbet, chocolate fondue served with fresh fruits, and the gluten-free Swiss carrot cake. The cake was soft and velvety and a truly delicious dessert.
Carrot Cake
Revolving restaurants are a popular find around the globe and it’s definitely nice to see Kenya offering a similar experience. If you’re looking for a special place to take your date The View provides an unforgettable rich and rare experience that you must try. |
The visual man-machine interface is constantly trying to improve the images for a wide range of applications: military, biomedical research, medical imaging, genetic manipulation, airport security, entertainment, videogames, computing, and other display systems.
Three-dimensional (3D) information is the key for achieving success in critical missions requiring realistic three-dimensional images, which provide reliable information to the user.
Stereoscopic vision systems are based on the human eye's ability to see the same object from two different perspectives (left and right). The brain merges both images, resulting in a depth and volume perception, which is then translated by the brain into distance, surface and volumes.
In the state-of-the-art, several attempts have been made in order to achieve 3D images; e.g., the following technologies have been used: red-blue polarization; vertical-horizontal polarization; multiplexed-image shutter glasses; 3D virtual reality systems; volumetric displays; and auto-stereoscopic displays.
All of the aforementioned technologies have presentation incompatibilities, collateral effects and a lack of compatibility with the current existing technology.
For example, red-blue polarization systems require, in order to be watched, a special projector and a large-size white screen; after a few minutes, collateral effects start appearing, such as headache, dizziness, and other symptoms associated with images displayed using a three-dimensional effect. This technology was used for a long time in cinema display systems but, due to the problems mentioned before, the system was eventually withdrawn from the market. Collateral symptoms are caused by the considerable difference in the content received by the left eye and the right eye (one eye receives blue-polarized information and the other receives red-polarized information), causing excessive stress on the optical nerve and the brain. In addition, two images are displayed simultaneously. In order to be watched, this technology requires an external screen and the use of polarized color glasses. If the user is not wearing red-blue glasses, the three-dimensional effect cannot be watched; instead, only double, blurry images are seen.
The horizontal-vertical polarization system merges two images taken by a stereoscopic camera with two lenses; the left and right images have a horizontal and vertical polarization, respectively. These systems are used in some new cinema theaters, such as Disney® and IMAX®3D theaters. This technology requires very expensive production systems and is restricted to a dedicated and selected audience, thus reducing the market and field of action. A special interest in the three-dimensional (3D) format has grown during the past three years; such is the case of Tom Hanks' productions and Titanic, which have been produced with 3D content by IMAX3D technology. However, this technology also results in collateral effects for the user after a few minutes of display, requires an external screen and uses polarized glasses; if the user is not wearing these glasses, only blurred images can be watched.
Systems using multiplexed-image shutting glasses technology toggle left and right images by blocking one of these images, so it cannot get to the corresponding eye for a short time. This blocking is synchronized with the image's display (in a monitor or TV set). If the user is not wearing the glasses, only blurred images are seen, and collateral effects become apparent after a few minutes. This technology is currently provided by (among others), BARCO SYSTEMS for Mercedes Benz®, Ford® and Boeing® companies, by providing a kind of “room” to create 3D images by multiplexing (shutter glasses) in order to produce their prototypes before they are assembled in the production line.
3D virtual reality systems (VR3D) are computer-based systems that create computer scenes that can interact with the user by means of position interfaces, such as data gloves and position detectors. The images are computer generated and use vector, polygons, and monocular depth reproduction based images in order to simulate depth and volume as calculated by software, but images are presented using a helmet as a displaying device, placed in front of the eyes; the user is immersed in a computer generated scene existing only in the computer and not in the real world. The name of this computer-generated scene is “Virtual Reality”. This system requires very expensive computers, such as SGI Oxygen® or SGI Onyx Computers®, which are out of reach of the common user. Serious games and simulations are created with this technology, which generates left-right sequences through the same VGA or video channel, the software includes specific instructions for toggling video images at on-screen display time at a 60 Hz frequency. The videogame software or program interacts directly with the graphics card.
There is a technology called I-O SYSTEMS, which displays multiplexed images on binocular screens by means of a left-right multiplexing system, toggling the images at an 80 to 100 Hz frequency; even then, flicker is perceived.
Only a few manufacturers, such as Perspectra Systems®, create volumetric display systems. These use the human eye's capability to retain an image for a few milliseconds, combined with the rotation of a display at very high speed; according to the viewing angle, the device shows the corresponding image by turning the pixels' color on and off, and because of the display's high-speed rotation the eye perceives a “floating image”. These systems are very expensive (the “sphere” costs approximately 50,000 USD) and require specific and adequate software and hardware. This technology is currently used in military applications.
Auto-stereoscopic displays are monitors with semi-cylindrical lines running from top to bottom, applied only to front and back images; this is not a real third dimension, but only a simulation in two perspective planes. Philips® is currently working on this three-dimensional technology, as is SEGA®, in order to gain a technological advantage. Results are very poor and there is a 50% resolution loss. The technology is not compatible with the present technological infrastructure and requires total replacement of the user's monitor; applications not specifically created for it are displayed blurred, making them incompatible with the current infrastructure. In order to watch a 3D image, the viewer needs to be placed at an approximate distance of 16″ (40.64 cm), which varies according to the monitor's size, and must look perpendicularly at the center of the screen, fixing his/her sight on a focal point beyond the real screen. With just a little deviation of the sight, or a change in the viewing angle, the three-dimensional effect is lost.
In the state of the art, there are several patents involved in the development of this technology, namely:
U.S. Pat. No. 6,593,929, issued on Jul. 15, 2003 and U.S. Pat. No. 6,556,197, issued on Apr. 29, 2003, granted to Timothy Van Hook, et al., refer to a low cost video game system which can model a three-dimensional world and project it on a two-dimensional screen. The images are based on interchangeable viewpoints in real-time by the user, by means of game controllers.
U.S. Pat. No. 6,591,019, issued on Jul. 8, 2003, granted to Claude Comair et al., uses a compression and decompression technique for matrix transformations in computer-generated 3D graphical systems. This technique consists in converting real-number matrices into integer matrices during the search for zeroes within the matrix. The compressed matrices occupy much less space in memory, and 3D animations can be decompressed efficiently in real time.
U.S. Pat. No. 6,542,971, issued on Apr. 1, 2003, granted to David Reed, provides a memory access system and a method which uses, instead of an auxiliary memory, a system with a memory space attached to a memory which writes and reads once the data input from one or more peripheral devices.
U.S. Pat. No. 6,492,987, issued on Dec. 10, 2002, granted to Stephen Morein, describes a method and device for processing the elements of the objects not represented. It starts by comparing the geometrical properties of at least one element of one object with representative geometric properties by a pixels group. During the representation of the elements of the object, a new representative geometric property is determined and is updated with a new value.
U.S. Pat. No. 6,456,290, issued on Sep. 24, 2002, granted to Vimal Parikh et al., provides a graphics system interface for application use and learning programs. Its features include a unique vertex representation which allows the graphics pipeline to retain the vertex status information; projection matrix and immersion buffer frame commands are also set.
Any videogame is a software program written in some computer language. Its objective is to simulate a non-existent world and take a player or user into that world. Most videogames focus on enhancing visual and manual dexterity, pattern analysis and decision making, in a competitive environment with increasing difficulty levels, and are presented in large scenarios with high artistic content. Most videogames are structured around a game engine as follows: the videogame sits on a game library with associated graphics and audio engines; the graphics engine contains the 2D source code and the 3D source code, and the audio engine contains the effects and music code. Every block of the game engine is executed cyclically in what is called a game loop, and each of these engines and libraries is in charge of different operations, for example:
Graphics engine: displays images in general
2D source code: static images, “backs” and “sprites” appearing in a videogame screen.
3D source code: dynamic, real-time vector handled images, processed as independent entities and with xyz coordinates within the computer-generated world.
Audio engine: sound playback
Effects code: when special events happen, such as explosions, crashes, jumps, etc.
Music code: background music usually played according to the videogame's ambience.
The execution of all these blocks in a cyclic way allows the validation of current positions, conditions and game metrics. As a result of this information the elements integrating the videogame are affected.
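The cyclic structure described above can be sketched as a minimal game loop. This is an illustrative sketch only: the class and method names below are invented for the example and are not taken from any real engine.

```python
# Illustrative sketch of the game-loop structure described above.
# Class and method names are invented for this example.

class GraphicsEngine:
    """Displays images: 2D code for backs/sprites, 3D code for xyz entities."""

    def draw_2d(self, sprites):
        # Static images ("backs" and "sprites") on the videogame screen.
        return [f"draw sprite {s}" for s in sprites]

    def draw_3d(self, entities):
        # Dynamic, vector-handled entities with xyz coordinates.
        return [f"draw entity at {xyz}" for xyz in entities]


class AudioEngine:
    """Plays back sound: effects for special events, music for ambience."""

    def play_effect(self, event):
        return f"play effect: {event}"

    def play_music(self, track):
        return f"play music: {track}"


def game_loop(frames, graphics, audio):
    """Execute every engine block cyclically, as the game loop does."""
    log = []
    for frame in range(frames):
        # Each pass validates positions, conditions and game metrics,
        # then lets each engine act on the updated state.
        log += graphics.draw_2d(["background"])
        log += graphics.draw_3d([(frame, 0, 0)])
        log.append(audio.play_music("ambience"))
    return log
```

Running two iterations of this loop produces three log entries per frame, one from each engine block.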
The difference between game programs created for game consoles and those created for computers is that the IBM PC was not originally designed for gaming. Ironically, many of the best games run on IBM PC-compatible technology. Compared with the videogames and processing capabilities of the present, the PCs of the past were completely archaic, and only through low-level programming (assembly language), making direct use of the computer's graphics card and speaker, were the first games created. However, the situation has changed. The processing power and graphics capabilities of present CPUs, as well as cards specially designed to accelerate graphics processing (GPUs), have evolved to such a degree that they far surpass the characteristics of the so-called supercomputers of the 1980s.
In 1996, a graphics acceleration system known as “hardware acceleration” was introduced, which included graphics processors capable of performing mathematical and matrix operations at high speed. This reduced the main CPU's load by means of card-specific communications and a programming language located in a layer called the “Hardware Abstraction Layer” (HAL). This layer allows the handling of data associated with real-time xyz coordinates by means of coordinate matrices and matrix mathematical operations, such as addition, scalar multiplication and floating-point matrix comparison. |
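As a rough illustration of the kind of coordinate-matrix operations described for the HAL (addition, scalar multiplication, floating-point comparison), here is a minimal sketch; the function names are invented for the example and do not correspond to any real HAL API.

```python
# Illustrative sketch of coordinate-matrix operations of the kind a HAL
# exposes for xyz data: addition, scalar multiplication, float comparison.
# Function names are invented for this example.

def mat_add(a, b):
    """Element-wise matrix addition."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def mat_scale(a, k):
    """Scalar multiplication of a matrix."""
    return [[k * x for x in row] for row in a]

def mat_close(a, b, eps=1e-9):
    """Floating-point matrix comparison within a tolerance."""
    return all(abs(x - y) <= eps
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb))

# Translate a set of xyz vertices by (1, 0, -2), then double the result.
verts = [[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]]
offset = [[1.0, 0.0, -2.0]] * len(verts)
moved = mat_scale(mat_add(verts, offset), 2.0)
```

In a real pipeline these operations run on the GPU in bulk; the point here is only the shape of the data and the three operations named in the text.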
featured book
Crusade
Destroyermen, Book II
Overview
Swept from the World War II Pacific into an alternate world, Lieutenant Commander Matthew Patrick Reddy and the crew of the USS Walker have allied with the peaceful Lemurians in their struggle against the warlike, reptilian Grik. But the greatest threat is yet to come. For the massive Japanese battleship that Walker was fleeing back in the Pacific also came through the rift, and it's in the hands of the Grik. |
ithout replacement from bddddbddddbd?
9/55
Calculate prob of sequence yb when two letters picked without replacement from {m: 8, w: 4, c: 2, b: 2, q: 3, y: 1}.
1/190
Two letters picked without replacement from vknhvhvvhqvhhv. What is prob of sequence kv?
3/91
What is prob of sequence ppk when three letters picked without replacement from {c: 5, u: 9, p: 3, k: 3}?
1/380
Two letters picked without replacement from ffffafadkkaakaafa. Give prob of sequence ka.
21/272
Four letters picked without replacement from {z: 2, q: 4, p: 1, n: 9}. Give prob of sequence zqnn.
6/455
What is prob of sequence vte when three letters picked without replacement from ceutcuctctv?
1/330
What is prob of sequence cc when two letters picked without replacement from {a: 1, c: 1, b: 1}?
0
Calculate prob of sequence tmmt when four letters picked without replacement from {t: 3, m: 2}.
1/10
What is prob of sequence nri when three letters picked without replacement from {i: 1, r: 1, n: 5, e: 1, a: 3}?
1/198
Calculate prob of sequence rrri when four letters picked without replacement from {r: 3, y: 1, i: 9, b: 1}.
9/4004
Calculate prob of sequence tp when two letters picked without replacement from {p: 2, t: 7, r: 1}.
7/45
Two letters picked without replacement from rri. What is prob of sequence ri?
1/3
Two letters picked without replacement from {x: 4, n: 2, y: 1, a: 1, z: 1}. Give prob of sequence na.
1/36
What is prob of sequence kki when three letters picked without replacement from kkkckkkkikk?
4/55
Four letters picked without replacement from vvnppnpnppnvvv. Give prob of sequence ppvp.
25/2002
Two letters picked without replacement from neqeqn. What is prob of sequence ne?
2/15
What is prob of sequence yt when two letters picked without replacement from tytytytyyytt?
3/11
Calculate prob of sequence gno when three letters picked without replacement from nggggogvnvov.
1/66
Two letters picked without replacement from {y: 1, b: 4, e: 1, j: 4, n: 2, g: 2}. What is prob of sequence ej?
2/91
Calculate prob of sequence wt when two letters picked without replacement from wqqntqtqtwt.
4/55
Two letters picked without replacement from tohhothhoonttoootoh. What is prob of sequence ot?
20/171
Calculate prob of sequence kka when three letters picked without replacement from {k: 2, f: 2, a: 2, w: 4}.
1/180
Three letters picked without replacement from ccccc. Give prob of sequence ccc.
1
Four letters picked without replacement from {o: 4, w: 6}. Give prob of sequence wwow.
2/21
What is prob of sequence pfp when three letters picked without replacement from {f: 9, p: 3}?
9/220
Two letters picked without replacement from ttt. What is prob of sequence tt?
1
Calculate prob of sequence tx when two letters picked without replacement from {i: 1, t: 3, f: 2, c: 2, x: 2, h: 2}.
1/22
Three letters picked without replacement from ooookekooooeooooeo. What is prob of sequence oee?
13/816
Calculate prob of sequence ooo when three letters picked without replacement from ooooeooeooeoeoooooo.
455/969
Three letters picked without replacement from wirwuuyaryywiuw. What is prob of sequence yuw?
6/455
Calculate prob of sequence vy when two letters picked without replacement from {v: 11, p: 3, y: 4}.
22/153
What is prob of sequence hda when three letters picked without replacement from hhfdffhhhaadfy?
5/546
Two letters picked without replacement from gpggggggggggggggggg. What is prob of sequence gp?
1/19
Four letters picked without replacement from {p: 6}. What is prob of sequence pppp?
1
Four letters picked without replacement from ffkkufkuukkkfnu. What is prob of sequence fknf?
1/455
What is prob of sequence uhh when three letters picked without replacement from huhuuhuuuh?
1/10
What is prob of sequence vvvq when four letters picked without replacement from {q: 3, v: 9, c: 4}?
9/260
Two letters picked without replacement from xxiiiiiiiiiixi. What is prob of sequence ix?
33/182
Four letters picked without replacement from vnunjdnvnvv. What is prob of sequence unvv?
1/165
Three letters picked without replacement from {a: 9, y: 7}. What is prob of sequence yay?
9/80
What is prob of sequence nbb when three letters picked without replacement from abnbnakann?
1/90
Three letters picked without replacement from {o: 5, f: 5}. Give prob of sequence off.
5/36
What is prob of sequence nv when two letters picked without replacement from {n: 1, v: 1, z: 4, f: 3, j: 4, o: 1}?
1/182
Two letters picked without replacement from {o: 3, k: 3, l: 10, c: 1, v: 1, n: 1}. What is prob of sequence vc?
1/342
Calculate prob of sequence xutx when four letters picked without replacement from {u: 1, t: 5, x: 5}.
5/396
Calculate prob of sequence rsdp when four letters picked without replacement from {d: 3, r: 2, p: 1, y: 6, s: 5, z: 2}.
5/15504
Two letters picked without replacement from {r: 2, v: 2, s: 3, j: 8}. Give prob of sequence vj.
8/105
Three letters picked without replacement from xssljuxlsxejxjl. What is prob of sequence jxu?
2/455
Three letters picked without replacement from {y: 2, b: 5, c: 8}. What is prob of sequence bcy?
8/273
Two letters picked without replacement from tctttcwfpctcpfwtct. Give prob of sequence pf.
2/153
Calculate prob of sequence vvuv when four letters picked without replacement from {v: 16, u: 2}.
14/153
What is prob of sequence ycu when three letters picked without replacement from ucyy?
1/12
What is prob of sequence gsk when three letters picked without replacement from {g: 1, s: 3, k: 3}?
3/70
What is prob of sequence ll when two letters picked without replacement from udrnulrlndnrd?
1/78
Calculate prob of sequence egew when four letters picked without replacement from gggwwweweg.
2/315
Four letters picked without replacement from {e: 8, n: 2, r: 4}. Give prob of sequence erre.
4/143
Calculate prob of sequence nni when three letters picked without replacement from ttinanom.
1/168
What is prob of sequence ip when two letters picked without replacement from iiiiiiiiiiiiiiipii?
1/18
What is prob of sequence wllw when four letters picked without replacement from wtlwtlwtwwtwtllw?
3/260
Two letters picked without replacement from {o: 6, x: 4}. What is prob of sequence ox?
4/15
Two letters picked without replacement from rrrrrr. What is prob of sequence rr?
1
Calculate prob of sequence dm when two letters picked without replacement from {m: 1, p: 2, j: 1, d: 1}.
1/20
Calculate prob of sequence uull when four letters picked without replacement from lluulul.
3/35
What is prob of sequence hff when three letters picked without replacement from hdedtiff?
1/168
Three letters picked without replacement from {e: 2, w: 10, f: 4}. Give prob of sequence efe.
1/420
Three letters picked without replacement from {r: 7, g: 2}. What is prob of sequence rgr?
1/6
Calculate prob of sequence qq when two letters picked without replacement from {q: 5, m: 5}.
2/9
Four letters picked without replacement from {z: 3, m: 8}. What is prob of sequence mzmz?
7/165
What is prob of sequence gg when two letters picked without replacement from {c: 2, g: 5}?
10/21
Calculate prob of sequence llv when three letters picked without replacement from vlsvnlv.
1/35
Four letters picked without replacement from {t: 5, z: 4, f: 2}. Give prob of sequence tztf.
2/99
Calculate prob of sequence sa when two letters picked without replacement from {x: 3, p: 1, a: 1, c: 4, s: 1, r: 5}.
1/210
Four letters picked without replacement from {v: 1, w: 5, a: 3, k: 1, o: 2, b: 3}. What is prob of sequence vobw?
1/1092
Four letters picked without replacement from dyddidyiyiyd. Give prob of sequence ddyi.
2/99
Calculate prob of sequence oolo when four letters picked without replacement from lolboolloollobllb.
2/119
Four letters picked without replacement from {w: 6, y: 1, v: 1, n: 2, f: 5}. Give prob of sequence wwyv.
1/1092
Calculate prob of sequence dh when two letters picked without replacement from {h: 16, p: 1, d: 3}.
12/95
Four letters picked without replacement from {j: 1, m: 3}. Give prob of sequence jmmm.
1/4
What is prob of sequence vviv when four letters picked without replacement from {v: 4, i: 2, a: 2}?
1/35
Calculate prob of sequence is when two letters picked without replacement from {e: 11, i: 6, s: 2}.
2/57
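All of the question-answer pairs above follow the same rule: multiply the conditional probability of each draw, decrementing the letter counts as letters are removed without replacement. A small sketch using exact arithmetic:

```python
from collections import Counter
from fractions import Fraction

def prob_sequence(letters, seq):
    """Probability of drawing the letters of `seq`, in order, without replacement."""
    counts = Counter(letters)
    total = sum(counts.values())
    p = Fraction(1)
    for letter in seq:
        p *= Fraction(counts[letter], total)  # zero once a letter runs out
        counts[letter] -= 1
        total -= 1
    return p

# Reproduces answers above: "yt" from tytytytyyytt gives 3/11,
# and "tp" from {p: 2, t: 7, r: 1} gives 7/45.
```

The `Counter` argument can be a string of letters or a dict of counts, matching the two question formats in the list.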
Three letters picked without replacemen |
Q:
Why is my Elasticsearch prefix query case-sensitive despite using lowercase filters on both index and search?
The Problem
I am working on an autocompleter using ElasticSearch 6.2.3. I would like my query results (a list of pages with a Name field) to be ordered using the following priority:
Prefix match at start of "Name" (Prefix query)
Any other exact (whole word) match within "Name" (Term query)
Fuzzy match (this is currently done on a different field to Name using a ngram tokenizer ... so I assume cannot be relevant to my problem but I would like to apply this on the Name field as well)
My Attempted Solution
I will be using a Bool/Should query consisting of three queries (corresponding to the three priorities above), using boost to define relative importance.
The issue I am having is with the Prefix query - it appears to not be lowercasing the search query despite my search analyzer having the lowercase filter. For example, the below query returns "Harry Potter" for 'harry' but returns zero results for 'Harry':
{ "query": { "prefix": { "Name.raw" : "Harry" } } }
I have verified using the _analyze API that both my analyzers do indeed lowercase the text "Harry" to "harry". Where am I going wrong?
From the ES documentation I understand I need to analyze the Name field in two different ways to enable use of both Prefix and Term queries:
using the "keyword" tokenizer to enable the Prefix query (I have applied this on a .raw field)
using a standard analyzer to enable the Term (I have applied this on the Name field)
I have checked duplicate questions such as this one but the answers have not helped
My mapping and settings are below
ES Index Mapping
{
"myIndex": {
"mappings": {
"pages": {
"properties": {
"Id": {},
"Name": {
"type": "text",
"fields": {
"raw": {
"type": "text",
"analyzer": "keywordAnalyzer",
"search_analyzer": "pageSearchAnalyzer"
}
},
"analyzer": "pageSearchAnalyzer"
},
"Tokens": {}, // Other fields not important for this question
}
}
}
}
}
ES Index Settings
{
"myIndex": {
"settings": {
"index": {
"analysis": {
"filter": {
"ngram": {
"type": "edgeNGram",
"min_gram": "2",
"max_gram": "15"
}
},
"analyzer": {
"keywordAnalyzer": {
"filter": [
"trim",
"lowercase",
"asciifolding"
],
"type": "custom",
"tokenizer": "keyword"
},
"pageSearchAnalyzer": {
"filter": [
"trim",
"lowercase",
"asciifolding"
],
"type": "custom",
"tokenizer": "standard"
},
"pageIndexAnalyzer": {
"filter": [
"trim",
"lowercase",
"asciifolding",
"ngram"
],
"type": "custom",
"tokenizer": "standard"
}
}
},
"number_of_replicas": "1",
"uuid": "l2AXoENGRqafm42OSWWTAg",
"version": {}
}
}
}
}
A:
Prefix queries don't analyze the search terms, so the text you pass in bypasses whatever would be used as the search analyzer (in your case, the configured search_analyzer: pageSearchAnalyzer). Harry is therefore evaluated as-is, directly against the keyword-tokenized, custom-filtered harry potter that the keywordAnalyzer produced at index time.
In your case here, you'll need to do one of a few different things:
Since you're using a lowercase filter on the field, you could just always use lowercase terms in your prefix query (using application-side lowercasing if necessary)
Run a match query against an edge_ngram-analyzed field instead of a prefix query like described in the ES search_analyzer docs
Here's an example of the latter:
1) Create the index w/ ngram analyzer and (recommended) standard search analyzer
PUT my_index
{
"settings": {
"index": {
"analysis": {
"filter": {
"ngram": {
"type": "edgeNGram",
"min_gram": "2",
"max_gram": "15"
}
},
"analyzer": {
"pageIndexAnalyzer": {
"filter": [
"trim",
"lowercase",
"asciifolding",
"ngram"
],
"type": "custom",
"tokenizer": "keyword"
}
}
}
}
},
"mappings": {
"pages": {
"properties": {
"name": {
"type": "text",
"fields": {
"ngram": {
"type": "text",
"analyzer": "pageIndexAnalyzer",
"search_analyzer": "standard"
}
}
}
}
}
}
}
2) Index some sample docs
POST my_index/pages/_bulk
{"index":{}}
{"name":"Harry Potter"}
{"index":{}}
{"name":"Hermione Granger"}
3) Run a match query against the ngram field
POST my_index/pages/_search
{
"query": {
"match": {
"name.ngram": {
"query": "Har",
"operator": "and"
}
}
}
}
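For completeness, option 1 above (application-side lowercasing before a prefix query) is just a one-liner. The sketch below builds the query body as a plain dict, reusing the Name.raw field and query shape from the question:

```python
# Option 1 from the answer: since the indexed terms are lowercased
# (keywordAnalyzer applies a lowercase filter) and prefix queries bypass
# analysis, lowercase the user's input application-side before querying.

def build_prefix_query(term):
    return {"query": {"prefix": {"Name.raw": term.lower()}}}
```

The resulting body can be sent to the _search endpoint as-is; "Harry" and "harry" then produce the same query.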
|
Front-back confusion resolution in three-dimensional sound localization using databases built with a dummy head.
Sound localization plays an important role in everyday life: it helps us separate sounds coming from different sources and thus acquire acoustic information. This paper describes an algorithm for localizing the position of a sound source as recorded by dummy-head microphones. The recorded signals are treated as basic and random signals within an imaginary round room. The goal of this research is to localize random signals produced from different positions using information about the basic signals. The method is based on identifying similarities between basic and random signals: it begins with an interaural time difference comparison and continues with further analysis of the differences in the signals' spectra. One of the main issues arising in sound localization is the problem of front-back confusion, and this paper shows how it was resolved by the use of reference signals. |
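As an illustrative toy version of the interaural time difference comparison mentioned in the abstract, the delay between the two ear signals can be estimated from the peak of their cross-correlation; the signals and the simple brute-force lag search below are made up for the example and are not the paper's algorithm.

```python
# Toy sketch of an interaural time difference (ITD) estimate: find the lag
# at which the right-ear signal best matches the left-ear signal by
# maximizing their cross-correlation. Illustrative only.

def itd_samples(left, right, max_lag):
    """Return the lag (in samples) at which `right` best matches `left`."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(left[i] * right[i + lag]
                    for i in range(len(left))
                    if 0 <= i + lag < len(right))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Right channel is the left channel delayed by 3 samples,
# i.e. the source is closer to the left ear.
left = [0.0, 1.0, 0.5, -0.2, 0.0, 0.0, 0.0, 0.0]
right = [0.0, 0.0, 0.0, 0.0, 1.0, 0.5, -0.2, 0.0]
```

Given a sampling rate, the lag in samples converts directly to a time difference, which is the quantity compared against the reference (basic) signals.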
Abstract
Objectives:
Acinetobacter baumannii is an opportunistic pathogen that causes serious infections in humans by colonizing medical devices. The capacity of this pathogen to persist in hospital settings could be due to its ability to form biofilms. In the present study we evaluated the effect of antibodies against one of the surface components of A. baumannii on in vitro biofilm formation.
Materials and Methods:
The 1113 bp fragment of the Bap (biofilm-associated protein) gene from the A. baumannii genome was amplified and cloned. The recombinant protein was expressed and purified, and used to raise antibodies in mice. Antibody titer was evaluated by ELISA. In vitro biofilm inhibition was evaluated using the mouse sera. |
Analysis: Dangers remain in Iraq and Afghanistan as U.S. operations wind down
Battlefield conditions remain a question as U.S. operations in Iraq and Afghanistan wind down
By Missy Ryan
WASHINGTON — American soldiers in Iraq are packing up military gear and shutting down bases as the United States races to remove all but a couple hundred troops by year’s end.
In Afghanistan, U.S. generals are scrambling to stretch a shrinking force to match enemy insurgents who remain dangerous and defiant after more than 10 years of war.
Plans to withdraw U.S. troops from Iraq by Dec. 31 and to steadily reduce the force in Afghanistan over the next three years reflect President Barack Obama’s determination to end the costly, bloody wars that defined the decade after the Sept. 11, 2001, attacks on the United States.
But do they also reflect battlefield conditions? For those who might suspect that military decisions are being made without sufficient attention to lingering risks on the ground and long-term security, the withdrawals are troubling.
“In some ways this has been done backward. We are looking at the resources we have and the political climate we face, and from that deriving the end state we want in Iraq and Afghanistan,” said Joshua Foust, a security analyst at the American Security Project, a nonpartisan think tank.
When Obama, who opposed the Iraq war from the outset, announced last month that he was abandoning efforts to secure a modest troop presence in Iraq after 2011, he repeated his view that the “tide of war” was receding.
He promised that the U.S. force that stood at 180,000 troops in Iraq and Afghanistan when he took office in 2009 would be halved by 2012. “Make no mistake: it will continue to go down,” he said.
Obama has promised responsible, conditions-based withdrawals. But his political opponents are quick to link his blueprint for ending the unpopular wars to his hopes for winning re-election in November 2012.
Obama also wants to cut spending — the Iraq war alone has cost U.S. taxpayers more than US$700-billion in purely military expenditures — and focus on the struggling U.S. economy.
Obama’s top military advisors voiced doubts in June when he announced his plan for withdrawing from Afghanistan the extra 33,000 troops he deployed there after a 2009 strategy review by the end of September 2012. They said they had initially sought a slower, less risky drawdown, but later backed Obama’s plan.
The White House has asked the Pentagon for initial recommendations for the U.S. troop presence in Afghanistan in 2014, a first step in planning the final U.S. drawdown there despite a bleak security outlook.
Despite intense deliberations about the pace of withdrawal, most foreign combat troops are expected to be gone by the time Afghan forces are due to take over responsibility for the country’s security at the end of 2014.
While U.S. and NATO soldiers have driven Taliban insurgents out of some southern strongholds, bloodshed continues to be fueled by militants’ safe havens in Pakistan. The United Nations says overall violence is at its worst since the war began in 2001.
Last week, a suicide car bomber killed 17 people in Kabul, including 13 troops and civilian employees of the NATO-led forces, the latest bold attack in the Afghan capital that deepened questions about NATO claims of progress.
’GRAVE CONCERNS’
Anthony Cordesman, a U.S. security expert at the Center for Strategic and International Studies think tank, said the Obama administration was continuing to debate what U.S. goals should be in Afghanistan after the U.S. raid that killed al-Qaeda leader Osama bin Laden in Pakistan in May.
Achievements like the killing of the al-Qaeda leader, many officials believe, have reduced the threat of an attack by the network on U.S. soil and gone a long way toward accomplishing a finite U.S. mission in the region. It’s not the responsibility of the United States — or within its power — to turn Afghanistan into a model, modern democracy, they say.
“We’re in Afghanistan because of 2001, not because this is a vital American interest,” Cordesman said. “We got rid of Osama bin Laden, but we can’t solve the Pakistan problem, we can’t solve the Taliban problem, we can’t solve the governance problem.”
While U.S. commanders dream of having a larger force to battle the Taliban and militant allies like the Haqqani network, they voice cautious confidence that they can achieve their narrowly defined goals in Afghanistan.
“We’ve been given an order … and our assessment is that yes we can do it,” a U.S. military official said on condition of anonymity. “Will there be spikes in violence? Yes. Will there be casualties and setbacks? Yes. Will we prevail and push through those things? I think so.”
“I don’t necessarily think we’re hell-bent for leather to pull troops out regardless of what’s happening,” the official said.
Come Jan. 1, after 8-1/2 years of fighting in Iraq and almost 4,500 U.S. soldiers killed, and in line with a deal negotiated under former President George W. Bush, the major U.S. military presence in that country will come to an end.
Fewer than 200 U.S. soldiers are expected to remain in Iraq, as part of a State Department task force responsible for military sales and, to some extent, for advising Iraq’s security forces.
Violence in Iraq has dropped dramatically. But the country remains unstable, haunted by the ghosts of a civil war that killed tens of thousands of civilians, and unable to settle major sectarian and ethnic conflicts impeding political progress and economic growth.
The withdrawal has worried U.S. conservatives who fear the United States is handing Iran influence over Iraq, a country that was supposed to anchor U.S. interests in the Middle East.
“It’s a fulfillment of a campaign promise by the president of the United States in 2008 to get out of Iraq. That’s all it is,” Senator John McCain, a top Republican, said this week.
Others say Iran’s influence in Iraq is overstated.
Osama al-Nujaifi, speaker of Iraq’s parliament, said the full U.S. drawdown is in Iraq’s interest, regardless of what it means for the United States. “Now there is no way but to depend on ourselves,” he said.
Security experts like Jeff Dressler of the Institute for the Study of War think tank warn against thinking that al-Qaeda has been disabled permanently in Iraq or Afghanistan and say the group may seek to recruit new supporters by proclaiming that it forced the United States out of both countries.
“There is nothing al-Qaeda would like more than capitalizing on the anarchy of a post-U.S. Iraq and Afghanistan,” Dressler said. |
Morning Docket: 04.10.17
* According to reports, Donald Trump is “obsessed” with his next possible Supreme Court nomination, and it seems like the president is trying to use their sons’ friendship to remain in Justice Kennedy’s good graces — after all, he’s banking on the high court’s swing justice to retire. [POLITICO]
* The new year has not been kind as far as employment in the legal profession is concerned. Per the Bureau of Labor Statistics, the legal sector took a beating in March, losing about 1,500 jobs. This is the third month in a row that the legal sector has lost jobs. Ouch. [Am Law Daily]
* Ajit Pai, the chairman of the Federal Communications Commission, is planning to repeal Obama-era landmark net neutrality rules in the hope of internet providers volunteering to maintain an open internet, and then binding them to compliance through their terms of service. Let’s see how well this works out… [Reuters]
* Remember Shon Hopwood, the bank robber who won a SCOTUS case as a jailhouse lawyer, went to law school, and clerked for the D.C. Circuit? He’s got a new job as a Georgetown Law prof. Talk about a remarkable career path. Congrats! [Seattle Times]
* “SCOTUS judge, feminist icon, Bubby. Notorious.” Believe it or not, Justice Ruth Bader Ginsburg won a March Madness bracket. Click the link to see what we mean. [Jewcy]
Source: Daily Dose of Law |
/*
* Copyright 2015 herd contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.finra.herd.service.helper;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.junit.After;
import org.junit.Assert;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.finra.herd.dao.S3Operations;
import org.finra.herd.model.ObjectNotFoundException;
import org.finra.herd.model.api.xml.BusinessObjectData;
import org.finra.herd.model.api.xml.BusinessObjectDataInvalidateUnregisteredRequest;
import org.finra.herd.model.api.xml.BusinessObjectDataInvalidateUnregisteredResponse;
import org.finra.herd.model.api.xml.StorageUnit;
import org.finra.herd.model.jpa.BusinessObjectDataEntity;
import org.finra.herd.model.jpa.BusinessObjectDataStatusEntity;
import org.finra.herd.model.jpa.BusinessObjectFormatEntity;
import org.finra.herd.model.jpa.StorageEntity;
import org.finra.herd.service.AbstractServiceTest;
public class BusinessObjectDataInvalidateUnregisteredHelperTest extends AbstractServiceTest
{
@Autowired
private S3Operations s3Operations;
@After
public void after()
{
s3Operations.rollback();
}
/**
* Test case where S3 and herd are in sync because there are no data in either S3 or herd. Expects no new registrations. This is a happy path where common
* response values are asserted.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataS30Herd0() throws Exception
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
NO_SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
try
{
businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call the API
BusinessObjectDataInvalidateUnregisteredResponse actualResponse =
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
// Make assertions
Assert.assertNotNull("response is null", actualResponse);
Assert.assertEquals("response namespace", request.getNamespace(), actualResponse.getNamespace());
Assert.assertEquals("response business object definition name", request.getBusinessObjectDefinitionName(),
actualResponse.getBusinessObjectDefinitionName());
Assert.assertEquals("response business object format usage", request.getBusinessObjectFormatUsage(), actualResponse.getBusinessObjectFormatUsage());
Assert.assertEquals("response business object format file type", request.getBusinessObjectFormatFileType(),
actualResponse.getBusinessObjectFormatFileType());
Assert.assertEquals("response business object format version", request.getBusinessObjectFormatVersion(), actualResponse.getBusinessObjectFormatVersion());
Assert.assertEquals("response partition value", request.getPartitionValue(), actualResponse.getPartitionValue());
Assert.assertEquals("response sub-partition values", request.getSubPartitionValues(), actualResponse.getSubPartitionValues());
Assert.assertEquals("response storage name", request.getStorageName(), actualResponse.getStorageName());
Assert.assertNotNull("response business object datas is null", actualResponse.getRegisteredBusinessObjectDataList());
Assert.assertEquals("response business object datas size", 0, actualResponse.getRegisteredBusinessObjectDataList().size());
}
/**
* Test case where herd and S3 are in sync because S3 has 1 object and herd has the corresponding data registered. Expects no new data registration.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataS31Herd1() throws Exception
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
NO_SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
try
{
BusinessObjectFormatEntity businessObjectFormatEntity = businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
businessObjectDataServiceTestHelper.createS3Object(businessObjectFormatEntity, request, 0);
createBusinessObjectDataEntityFromBusinessObjectDataInvalidateUnregisteredRequest(businessObjectFormatEntity, request, 0, true);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call the API
BusinessObjectDataInvalidateUnregisteredResponse actualResponse =
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
// Make assertions
Assert.assertNotNull("response business object datas is null", actualResponse.getRegisteredBusinessObjectDataList());
Assert.assertEquals("response business object datas size", 0, actualResponse.getRegisteredBusinessObjectDataList().size());
}
/**
* Test case where S3 has 1 object, and herd has no object registered. Expects one new registration in INVALID status.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataS31Herd0() throws Exception
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
NO_SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
// Given an object in S3
BusinessObjectFormatEntity businessObjectFormatEntity;
try
{
businessObjectFormatEntity = businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
businessObjectDataServiceTestHelper.createS3Object(businessObjectFormatEntity, request, 0);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call API
BusinessObjectDataInvalidateUnregisteredResponse actualResponse =
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
// Make assertions
Assert.assertNotNull("response business object datas is null", actualResponse.getRegisteredBusinessObjectDataList());
Assert.assertEquals("response business object datas size", 1, actualResponse.getRegisteredBusinessObjectDataList().size());
{
BusinessObjectData businessObjectData = actualResponse.getRegisteredBusinessObjectDataList().get(0);
Assert.assertEquals("response business object data[0] version", 0, businessObjectData.getVersion());
Assert.assertEquals("response business object data[0] status", BusinessObjectDataInvalidateUnregisteredHelper.UNREGISTERED_STATUS,
businessObjectData.getStatus());
Assert.assertNotNull("response business object data[0] storage units is null", businessObjectData.getStorageUnits());
Assert.assertEquals("response business object data[0] storage units size", 1, businessObjectData.getStorageUnits().size());
{
String expectedS3KeyPrefix = s3KeyPrefixHelper.buildS3KeyPrefix(S3_KEY_PREFIX_VELOCITY_TEMPLATE, businessObjectFormatEntity,
businessObjectDataHelper.createBusinessObjectDataKey(businessObjectData), STORAGE_NAME);
StorageUnit storageUnit = businessObjectData.getStorageUnits().get(0);
Assert.assertNotNull("response business object data[0] storage unit[0] storage directory is null", storageUnit.getStorageDirectory());
Assert.assertEquals("response business object data[0] storage unit[0] storage directory path", expectedS3KeyPrefix,
storageUnit.getStorageDirectory().getDirectoryPath());
}
}
}
/**
* Test case where S3 has 1 object, and herd has no object registered. The data has sub-partitions. Expects one new registration in INVALID status.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataS31Herd0WithSubPartitions() throws Exception
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
// Given an object in S3
BusinessObjectFormatEntity businessObjectFormatEntity;
try
{
businessObjectFormatEntity = businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
businessObjectDataServiceTestHelper.createS3Object(businessObjectFormatEntity, request, 0);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call API
BusinessObjectDataInvalidateUnregisteredResponse actualResponse =
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
// Make assertions
Assert.assertNotNull("response sub-partition values is null", actualResponse.getSubPartitionValues());
Assert.assertEquals("response sub-partition values", request.getSubPartitionValues(), actualResponse.getSubPartitionValues());
Assert.assertNotNull("response business object datas is null", actualResponse.getRegisteredBusinessObjectDataList());
Assert.assertEquals("response business object datas size", 1, actualResponse.getRegisteredBusinessObjectDataList().size());
{
BusinessObjectData businessObjectData = actualResponse.getRegisteredBusinessObjectDataList().get(0);
Assert.assertEquals("response business object data[0] version", 0, businessObjectData.getVersion());
Assert.assertEquals("response business object data[0] status", BusinessObjectDataInvalidateUnregisteredHelper.UNREGISTERED_STATUS,
businessObjectData.getStatus());
Assert.assertNotNull("response business object data[0] storage units is null", businessObjectData.getStorageUnits());
Assert.assertEquals("response business object data[0] storage units size", 1, businessObjectData.getStorageUnits().size());
{
String expectedS3KeyPrefix = s3KeyPrefixHelper.buildS3KeyPrefix(S3_KEY_PREFIX_VELOCITY_TEMPLATE, businessObjectFormatEntity,
businessObjectDataHelper.createBusinessObjectDataKey(businessObjectData), STORAGE_NAME);
StorageUnit storageUnit = businessObjectData.getStorageUnits().get(0);
Assert.assertNotNull("response business object data[0] storage unit[0] storage directory is null", storageUnit.getStorageDirectory());
Assert.assertEquals("response business object data[0] storage unit[0] storage directory path", expectedS3KeyPrefix,
storageUnit.getStorageDirectory().getDirectoryPath());
}
}
}
/**
* Test case where S3 has 2 objects, and herd has 1 object registered. Expects one new registration in INVALID status.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataS32Herd1() throws Exception
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
NO_SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
// Given 1 business object data registered
// Given 2 S3 objects
BusinessObjectFormatEntity businessObjectFormatEntity;
try
{
businessObjectFormatEntity = businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
createBusinessObjectDataEntityFromBusinessObjectDataInvalidateUnregisteredRequest(businessObjectFormatEntity, request, 0, true);
businessObjectDataServiceTestHelper.createS3Object(businessObjectFormatEntity, request, 0);
businessObjectDataServiceTestHelper.createS3Object(businessObjectFormatEntity, request, 1);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call API
BusinessObjectDataInvalidateUnregisteredResponse actualResponse =
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
// Make assertions
Assert.assertNotNull("response business object datas is null", actualResponse.getRegisteredBusinessObjectDataList());
Assert.assertEquals("response business object datas size", 1, actualResponse.getRegisteredBusinessObjectDataList().size());
{
BusinessObjectData businessObjectData = actualResponse.getRegisteredBusinessObjectDataList().get(0);
Assert.assertEquals("response business object data[0] version", 1, businessObjectData.getVersion());
Assert.assertEquals("response business object data[0] status", BusinessObjectDataInvalidateUnregisteredHelper.UNREGISTERED_STATUS,
businessObjectData.getStatus());
Assert.assertNotNull("response business object data[0] storage units is null", businessObjectData.getStorageUnits());
Assert.assertEquals("response business object data[0] storage units size", 1, businessObjectData.getStorageUnits().size());
{
String expectedS3KeyPrefix = s3KeyPrefixHelper.buildS3KeyPrefix(S3_KEY_PREFIX_VELOCITY_TEMPLATE, businessObjectFormatEntity,
businessObjectDataHelper.createBusinessObjectDataKey(businessObjectData), STORAGE_NAME);
StorageUnit storageUnit = businessObjectData.getStorageUnits().get(0);
Assert.assertNotNull("response business object data[0] storage unit[0] storage directory is null", storageUnit.getStorageDirectory());
Assert.assertEquals("response business object data[0] storage unit[0] storage directory path", expectedS3KeyPrefix,
storageUnit.getStorageDirectory().getDirectoryPath());
}
}
}
/**
* Test case where S3 has 2 objects, but herd has no object registered. Expects 2 new registrations in INVALID status.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataS32Herd0() throws Exception
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
NO_SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
// Given 2 S3 objects
BusinessObjectFormatEntity businessObjectFormatEntity;
try
{
businessObjectFormatEntity = businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
businessObjectDataServiceTestHelper.createS3Object(businessObjectFormatEntity, request, 0);
businessObjectDataServiceTestHelper.createS3Object(businessObjectFormatEntity, request, 1);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call API
BusinessObjectDataInvalidateUnregisteredResponse actualResponse =
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
// Make assertions
Assert.assertNotNull("response business object datas is null", actualResponse.getRegisteredBusinessObjectDataList());
Assert.assertEquals("response business object datas size", 2, actualResponse.getRegisteredBusinessObjectDataList().size());
// Assert first data registered
{
BusinessObjectData businessObjectData = actualResponse.getRegisteredBusinessObjectDataList().get(0);
Assert.assertEquals("response business object data[0] version", 0, businessObjectData.getVersion());
Assert.assertEquals("response business object data[0] status", BusinessObjectDataInvalidateUnregisteredHelper.UNREGISTERED_STATUS,
businessObjectData.getStatus());
Assert.assertNotNull("response business object data[0] storage units is null", businessObjectData.getStorageUnits());
Assert.assertEquals("response business object data[0] storage units size", 1, businessObjectData.getStorageUnits().size());
{
String expectedS3KeyPrefix = s3KeyPrefixHelper.buildS3KeyPrefix(S3_KEY_PREFIX_VELOCITY_TEMPLATE, businessObjectFormatEntity,
businessObjectDataHelper.createBusinessObjectDataKey(businessObjectData), STORAGE_NAME);
StorageUnit storageUnit = businessObjectData.getStorageUnits().get(0);
Assert.assertNotNull("response business object data[0] storage unit[0] storage directory is null", storageUnit.getStorageDirectory());
Assert.assertEquals("response business object data[0] storage unit[0] storage directory path", expectedS3KeyPrefix,
storageUnit.getStorageDirectory().getDirectoryPath());
}
}
// Assert second data registered
{
BusinessObjectData businessObjectData = actualResponse.getRegisteredBusinessObjectDataList().get(1);
Assert.assertEquals("response business object data[1] version", 1, businessObjectData.getVersion());
Assert.assertEquals("response business object data[1] status", BusinessObjectDataInvalidateUnregisteredHelper.UNREGISTERED_STATUS,
businessObjectData.getStatus());
Assert.assertNotNull("response business object data[1] storage units is null", businessObjectData.getStorageUnits());
Assert.assertEquals("response business object data[1] storage units size", 1, businessObjectData.getStorageUnits().size());
{
String expectedS3KeyPrefix = s3KeyPrefixHelper.buildS3KeyPrefix(S3_KEY_PREFIX_VELOCITY_TEMPLATE, businessObjectFormatEntity,
businessObjectDataHelper.createBusinessObjectDataKey(businessObjectData), STORAGE_NAME);
StorageUnit storageUnit = businessObjectData.getStorageUnits().get(0);
Assert.assertNotNull("response business object data[1] storage unit[0] storage directory is null", storageUnit.getStorageDirectory());
Assert.assertEquals("response business object data[1] storage unit[0] storage directory path", expectedS3KeyPrefix,
storageUnit.getStorageDirectory().getDirectoryPath());
}
}
}
/**
* Test case where S3 has 1 object, and herd has no object registered. The S3 object exists under version 1, leaving a gap at version 0. Expects no new
* registrations since the API does not consider S3 objects after a gap.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataS31Herd0WithGap()
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
// Given an object in S3
BusinessObjectFormatEntity businessObjectFormatEntity;
try
{
businessObjectFormatEntity = businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
businessObjectDataServiceTestHelper.createS3Object(businessObjectFormatEntity, request, 1);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call API
BusinessObjectDataInvalidateUnregisteredResponse actualResponse =
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
// Make assertions
Assert.assertNotNull("response business object datas is null", actualResponse.getRegisteredBusinessObjectDataList());
Assert.assertEquals("response business object datas size", 0, actualResponse.getRegisteredBusinessObjectDataList().size());
}
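// The version-gap behavior exercised by the test above can be sketched as follows. This is an
// illustrative sketch only; countVersionsToRegister is a hypothetical helper, not part of the herd codebase.

```java
// Illustrative sketch (hypothetical helper, not the herd implementation): candidate
// versions are scanned in order starting after the latest registered version, and the
// scan stops at the first version with no S3 object, so objects after a gap are ignored.
class VersionGapSketch
{
    static int countVersionsToRegister(int latestRegisteredVersion, java.util.Set<Integer> s3Versions)
    {
        int count = 0;
        // With nothing registered (latest = -1) and only version 1 in S3, the scan
        // stops immediately at the missing version 0 and nothing is registered.
        for (int version = latestRegisteredVersion + 1; s3Versions.contains(version); version++)
        {
            count++;
        }
        return count;
    }
}
```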
/**
* The prefix search for S3 objects should match prefix directories, not sub-strings. For example:
* - If an S3 object exists with key "c/b/aa/test.txt"
* - If a search for prefix "c/b/a" is executed
* - The S3 object should NOT match, since "c/b/a" is a string prefix of the key, but not a prefix directory.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataS3PrefixWithSlash() throws Exception
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
NO_SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
// Given an object in S3
BusinessObjectFormatEntity businessObjectFormatEntity;
try
{
businessObjectFormatEntity = businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
request.setPartitionValue("AA"); // Create an S3 object whose key contains the request's partition value as a substring
businessObjectDataServiceTestHelper.createS3Object(businessObjectFormatEntity, request, 0);
request.setPartitionValue("A"); // Send the request with the substring partition value
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call API
BusinessObjectDataInvalidateUnregisteredResponse actualResponse =
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
// Make assertions, expect no data updates since nothing should match
Assert.assertNotNull("response business object datas is null", actualResponse.getRegisteredBusinessObjectDataList());
Assert.assertEquals("response business object datas size", 0, actualResponse.getRegisteredBusinessObjectDataList().size());
}
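// The prefix-directory matching behavior exercised by the test above can be sketched as follows.
// This is an illustrative sketch only; matchesDirectoryPrefix is a hypothetical helper, not part of the herd API.

```java
// Illustrative sketch: treat the search prefix as a directory by appending a trailing
// slash before comparing, so a key only matches on whole path components.
class DirectoryPrefixSketch
{
    static boolean matchesDirectoryPrefix(String s3Key, String prefix)
    {
        // "c/b/a" becomes "c/b/a/", which "c/b/aa/test.txt" does not start with.
        String directoryPrefix = prefix.endsWith("/") ? prefix : prefix + "/";
        return s3Key.startsWith(directoryPrefix);
    }
}
```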
/**
* Asserts that namespace requiredness validation is working.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataValidationNamespaceRequired()
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
NO_SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
request.setNamespace(BLANK_TEXT);
// Given a business object format
try
{
businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call the API
try
{
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
Assert.fail("expected an IllegalArgumentException, but no exception was thrown");
}
catch (Exception e)
{
Assert.assertEquals("thrown exception type", IllegalArgumentException.class, e.getClass());
Assert.assertEquals("thrown exception message", "The namespace is required", e.getMessage());
}
}
/**
* Asserts that business object definition name requiredness validation is working.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataValidationBusinessObjectDefinitionNameRequired()
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BLANK_TEXT, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION,
PARTITION_VALUE, NO_SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
try
{
businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call the API
try
{
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
Assert.fail("expected an IllegalArgumentException, but no exception was thrown");
}
catch (Exception e)
{
Assert.assertEquals("thrown exception type", IllegalArgumentException.class, e.getClass());
Assert.assertEquals("thrown exception message", "The business object definition name is required", e.getMessage());
}
}
/**
* Asserts that business object format usage requiredness validation is working.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataValidationBusinessObjectFormatUsageRequired()
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, BLANK_TEXT, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
NO_SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
try
{
businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call the API
try
{
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
Assert.fail("expected an IllegalArgumentException, but no exception was thrown");
}
catch (Exception e)
{
Assert.assertEquals("thrown exception type", IllegalArgumentException.class, e.getClass());
Assert.assertEquals("thrown exception message", "The business object format usage is required", e.getMessage());
}
}
/**
* The business object format must exist for this API to work.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataValidationBusinessObjectFormatMustExist()
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
NO_SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
try
{
businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Modify a parameter specific to a format to reference a format that does not exist
request.setBusinessObjectFormatFileType("DOES_NOT_EXIST");
// Call the API
try
{
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
Assert.fail("expected an ObjectNotFoundException, but no exception was thrown");
}
catch (Exception e)
{
Assert.assertEquals("thrown exception type", ObjectNotFoundException.class, e.getClass());
Assert.assertEquals("thrown exception message",
"Business object format with namespace \"" + request.getNamespace() + "\", business object definition name \"" +
request.getBusinessObjectDefinitionName() + "\", format usage \"" + request.getBusinessObjectFormatUsage() + "\", format file type \"" +
request.getBusinessObjectFormatFileType() + "\", and format version \"" + request.getBusinessObjectFormatVersion() + "\" doesn't exist.",
e.getMessage());
}
}
/**
* Asserts that business object format file type requiredness validation is working.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataValidationBusinessObjectFormatFileTypeRequired()
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, BLANK_TEXT, FORMAT_VERSION, PARTITION_VALUE,
NO_SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
try
{
businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call the API
try
{
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
Assert.fail("expected an IllegalArgumentException, but no exception was thrown");
}
catch (Exception e)
{
Assert.assertEquals("thrown exception type", IllegalArgumentException.class, e.getClass());
Assert.assertEquals("thrown exception message", "The business object format file type is required", e.getMessage());
}
}
/**
* Asserts that business object format version requiredness validation is working.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataValidationBusinessObjectFormatVersionRequired()
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
NO_SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
try
{
businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Nullify the version only after the format is created so that the format setup above succeeds.
request.setBusinessObjectFormatVersion(null);
// Call the API
try
{
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
Assert.fail("expected an IllegalArgumentException, but no exception was thrown");
}
catch (Exception e)
{
Assert.assertEquals("thrown exception type", IllegalArgumentException.class, e.getClass());
Assert.assertEquals("thrown exception message", "The business object format version is required", e.getMessage());
}
}
/**
* Asserts that business object format version non-negative validation is working.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataValidationBusinessObjectFormatVersionNegative()
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, -1, PARTITION_VALUE,
NO_SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
try
{
businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call the API
try
{
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
Assert.fail("expected an IllegalArgumentException, but no exception was thrown");
}
catch (Exception e)
{
Assert.assertEquals("thrown exception type", IllegalArgumentException.class, e.getClass());
Assert.assertEquals("thrown exception message", "The business object format version must be greater than or equal to 0", e.getMessage());
}
}
/**
* Asserts that partition value requiredness validation is working.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataValidationPartitionValueRequired()
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, BLANK_TEXT,
NO_SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
try
{
businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call the API
try
{
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
Assert.fail("expected an IllegalArgumentException, but no exception was thrown");
}
catch (Exception e)
{
Assert.assertEquals("thrown exception type", IllegalArgumentException.class, e.getClass());
Assert.assertEquals("thrown exception message", "The partition value is required", e.getMessage());
}
}
/**
* Asserts that storage name requiredness validation is working.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataValidationStorageNameRequired()
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
NO_SUBPARTITION_VALUES, BLANK_TEXT);
// Given a business object format
try
{
businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call the API
try
{
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
Assert.fail("expected an IllegalArgumentException, but no exception was thrown");
}
catch (Exception e)
{
Assert.assertEquals("thrown exception type", IllegalArgumentException.class, e.getClass());
Assert.assertEquals("thrown exception message", "The storage name is required", e.getMessage());
}
}
/**
* Storage must exist for this API to work.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataValidationStorageMustExist()
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
NO_SUBPARTITION_VALUES, "DOES_NOT_EXIST");
// Given a business object format
try
{
businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call the API
try
{
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
Assert.fail("expected an ObjectNotFoundException, but no exception was thrown");
}
catch (Exception e)
{
Assert.assertEquals("thrown exception type", ObjectNotFoundException.class, e.getClass());
Assert.assertEquals("thrown exception message", "Storage with name \"" + request.getStorageName() + "\" doesn't exist.", e.getMessage());
}
}
/**
* Storage is found, but the storage platform is not S3. This API only works for the S3 storage platform since it requires an S3 key prefix.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataValidationStoragePlatformMustBeS3()
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
NO_SUBPARTITION_VALUES, STORAGE_NAME);
// Given a business object format
try
{
storageDaoTestHelper.createStorageEntity(request.getStorageName(), "NOT_S3");
businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call the API
try
{
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
Assert.fail("expected an IllegalArgumentException, but no exception was thrown");
}
catch (Exception e)
{
Assert.assertEquals("thrown exception type", IllegalArgumentException.class, e.getClass());
Assert.assertEquals("thrown exception message", "The specified storage '" + request.getStorageName() + "' is not an S3 storage platform.",
e.getMessage());
}
}
/**
* If sub-partition values are given, they must not be blank.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataValidationSubPartitionValueNotBlank()
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
Arrays.asList(BLANK_TEXT), StorageEntity.MANAGED_STORAGE);
// Given a business object format
try
{
businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// Call the API
try
{
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
Assert.fail("expected an IllegalArgumentException, but no exception was thrown");
}
catch (Exception e)
{
Assert.assertEquals("thrown exception type", IllegalArgumentException.class, e.getClass());
Assert.assertEquals("thrown exception message", "The sub-partition value [0] must not be blank", e.getMessage());
}
}
/**
* Asserts that values are trimmed before the request is processed.
*/
@Test
public void testInvalidateUnregisteredBusinessObjectDataTrim() throws Exception
{
BusinessObjectDataInvalidateUnregisteredRequest request =
new BusinessObjectDataInvalidateUnregisteredRequest(NAMESPACE, BDEF_NAME, FORMAT_USAGE_CODE, FORMAT_FILE_TYPE_CODE, FORMAT_VERSION, PARTITION_VALUE,
SUBPARTITION_VALUES, StorageEntity.MANAGED_STORAGE);
// Given a business object format
try
{
businessObjectFormatServiceTestHelper.createBusinessObjectFormat(request);
}
catch (Exception e)
{
throw new IllegalArgumentException("Test failed during setup. Most likely setup or developer error.", e);
}
// pad string values with white spaces
request.setNamespace(BLANK_TEXT + request.getNamespace() + BLANK_TEXT);
request.setBusinessObjectDefinitionName(BLANK_TEXT + request.getBusinessObjectDefinitionName() + BLANK_TEXT);
request.setBusinessObjectFormatFileType(BLANK_TEXT + request.getBusinessObjectFormatFileType() + BLANK_TEXT);
request.setBusinessObjectFormatUsage(BLANK_TEXT + request.getBusinessObjectFormatUsage() + BLANK_TEXT);
request.setPartitionValue(BLANK_TEXT + request.getPartitionValue() + BLANK_TEXT);
List<String> paddedSubPartitionValues = new ArrayList<>();
for (String subPartitionValue : request.getSubPartitionValues())
{
paddedSubPartitionValues.add(BLANK_TEXT + subPartitionValue + BLANK_TEXT);
}
request.setSubPartitionValues(paddedSubPartitionValues);
// Call the API
BusinessObjectDataInvalidateUnregisteredResponse actualResponse =
businessObjectDataInvalidateUnregisteredHelper.invalidateUnregisteredBusinessObjectData(request);
// Make assertions
/*
* Note: The API will modify the request to now contain the trimmed value.
*/
Assert.assertNotNull("response is null", actualResponse);
Assert.assertEquals("response namespace", request.getNamespace(), actualResponse.getNamespace());
Assert.assertEquals("response business object definition name", request.getBusinessObjectDefinitionName(),
actualResponse.getBusinessObjectDefinitionName());
Assert.assertEquals("response business object format usage", request.getBusinessObjectFormatUsage(), actualResponse.getBusinessObjectFormatUsage());
Assert.assertEquals("response business object format file type", request.getBusinessObjectFormatFileType(),
actualResponse.getBusinessObjectFormatFileType());
Assert
.assertEquals("response business object format version", request.getBusinessObjectFormatVersion(), actualResponse.getBusinessObjectFormatVersion());
Assert.assertEquals("response partition value", request.getPartitionValue(), actualResponse.getPartitionValue());
Assert.assertEquals("response sub-partition values", request.getSubPartitionValues(), actualResponse.getSubPartitionValues());
Assert.assertEquals("response storage name", request.getStorageName(), actualResponse.getStorageName());
Assert.assertNotNull("response business object datas is null", actualResponse.getRegisteredBusinessObjectDataList());
Assert.assertEquals("response business object datas size", 0, actualResponse.getRegisteredBusinessObjectDataList().size());
}
/**
* Creates and persists a business object data entity per specified parameters.
*
* @param businessObjectFormatEntity the business object format entity
* @param request the business object data invalidate unregistered request that contains the business object data key
* @param businessObjectDataVersion the business object data version
* @param latestVersion specifies if this business object data is the latest version
*
* @return the business object data entity
*/
private BusinessObjectDataEntity createBusinessObjectDataEntityFromBusinessObjectDataInvalidateUnregisteredRequest(
BusinessObjectFormatEntity businessObjectFormatEntity, BusinessObjectDataInvalidateUnregisteredRequest request, int businessObjectDataVersion,
boolean latestVersion)
{
BusinessObjectDataEntity businessObjectDataEntity = new BusinessObjectDataEntity();
businessObjectDataEntity.setBusinessObjectFormat(businessObjectFormatEntity);
businessObjectDataEntity.setPartitionValue(request.getPartitionValue());
businessObjectDataEntity.setPartitionValue2(herdCollectionHelper.safeGet(request.getSubPartitionValues(), 0));
businessObjectDataEntity.setPartitionValue3(herdCollectionHelper.safeGet(request.getSubPartitionValues(), 1));
businessObjectDataEntity.setPartitionValue4(herdCollectionHelper.safeGet(request.getSubPartitionValues(), 2));
businessObjectDataEntity.setPartitionValue5(herdCollectionHelper.safeGet(request.getSubPartitionValues(), 3));
businessObjectDataEntity.setVersion(businessObjectDataVersion);
businessObjectDataEntity.setLatestVersion(latestVersion);
businessObjectDataEntity.setStatus(businessObjectDataStatusDao.getBusinessObjectDataStatusByCode(BusinessObjectDataStatusEntity.VALID));
return businessObjectDataDao.saveAndRefresh(businessObjectDataEntity);
}
}
|
Q:
Android: layout animation like Inshorts news app
I want a swipe up and down effect like the news apps Inshorts, Hike News and Murmur,
where the whole layout slides smoothly up/down.
Check the apps at these links: Inshorts and Murmur.
I tried this code...
public class VerticalViewPager extends ViewPager {
public VerticalViewPager(Context context) {
super(context);
init();
}
public VerticalViewPager(Context context, AttributeSet attrs) {
super(context, attrs);
init();
}
private void init() {
// The majority of the magic happens here
setPageTransformer(true, new VerticalPageTransformer());
// The easiest way to get rid of the overscroll drawing that happens on the left and right
setOverScrollMode(OVER_SCROLL_NEVER);
}
private class VerticalPageTransformer implements PageTransformer {
@SuppressLint("NewApi")
@Override
public void transformPage(View view, float position) {
if (position < -1) { // [-Infinity,-1)
// This page is way off-screen to the left.
view.setAlpha(0);
} else if (position <= 1) { // [-1,1]
view.setAlpha(1);
// Counteract the default slide transition
view.setTranslationX(view.getWidth() * -position);
//set Y position to swipe in from top
float yPosition = position * view.getHeight();
view.setTranslationY(yPosition);
} else { // (1,+Infinity]
// This page is way off-screen to the right.
view.setAlpha(0);
}
}
}
/**
* Swaps the X and Y coordinates of your touch event.
*/
private MotionEvent swapXY(MotionEvent ev) {
float width = getWidth();
float height = getHeight();
float newX = (ev.getY() / height) * width;
float newY = (ev.getX() / width) * height;
ev.setLocation(newX, newY);
return ev;
}
@Override
public boolean onInterceptTouchEvent(MotionEvent ev){
boolean intercepted = super.onInterceptTouchEvent(swapXY(ev));
swapXY(ev); // return touch coordinates to original reference frame for any child views
return intercepted;
}
@Override
public boolean onTouchEvent(MotionEvent ev) {
return super.onTouchEvent(swapXY(ev));
}
}
In MainActivity.java
VerticalViewPager Pager2;
PagerAdapter adapter;
String[] articleTitle;
String[] articleName;
String[] articleDiscription;
In onCreate():
Pager2=(VerticalViewPager)findViewById(R.id.pager);
// Pass results to ViewPagerAdapter Class
adapter = new ViewPagerAdapter(getActivity(), articleTitle, articleName, articleDiscription, btnBack,articleImage);
// Binds the Adapter to the ViewPager
Pager2.setAdapter(adapter);
activity_main.xml
<com.example.flipnews.VerticalViewPager
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:id="@+id/pager"
/>
With my code a simple up-down swipe works, like this link.
But I want to create a better animation effect like the above-mentioned apps,
or like the phone's built-in photo gallery effect.
Thanks in advance.
A:
I found a solution after a lot of research; I hope it's helpful for others.
Tip: set the view pager's background color to black for a better swipe effect.
private static class VerticalPageTransformer implements PageTransformer {
private static float MIN_SCALE = 0.75f;
public void transformPage(View view, float position) {
int pageWidth = view.getWidth();
if (position < -1) { // [-Infinity,-1)
// This page is way off-screen to the left.
view.setAlpha(0);
} else if (position <= 0) { // [-1,0]
// Use the default slide transition when moving to the left page
view.setAlpha(1);
//view.setTranslationX(1);
view.setScaleX(1);
view.setScaleY(1);
float yPosition = position * view.getHeight();
view.setTranslationY(yPosition);
view.setTranslationX(-1 * view.getWidth() * position);
} else if (position <= 1) { // (0,1]
// Fade the page out.
view.setAlpha(1 - position);
view.setTranslationX(-1 * view.getWidth() * position);
// Scale the page down (between MIN_SCALE and 1)
float scaleFactor = MIN_SCALE
+ (1 - MIN_SCALE) * (1 - Math.abs(position));
view.setScaleX(scaleFactor);
view.setScaleY(scaleFactor);
} else { // (1,+Infinity]
// This page is way off-screen to the right.
view.setAlpha(0);
}
}
}
private MotionEvent swapXY(MotionEvent ev) {
float width = getWidth();
float height = getHeight();
float newX = (ev.getY() / height) * width;
float newY = (ev.getX() / width) * height;
ev.setLocation(newX, newY);
return ev;
}
@Override
public boolean onInterceptTouchEvent(MotionEvent ev){
boolean intercepted = super.onInterceptTouchEvent(swapXY(ev));
swapXY(ev); // return touch coordinates to original reference frame for any child views
return intercepted;
}
@Override
public boolean onTouchEvent(MotionEvent ev) {
return super.onTouchEvent(swapXY(ev));
}
}
|
Early detection of adverse drug events within population-based health networks: application of sequential testing methods.
Active surveillance of population-based health networks may improve the timeliness of detection of adverse drug events (ADEs). Active monitoring requires sequential analysis methods. Our objectives were to (1) evaluate the utility of automated healthcare claims data for near real-time drug adverse event surveillance and (2) identify key methodological issues related to the use of healthcare claims data for real-time drug safety surveillance. We assessed the ability to detect ADEs using historical data from nine health plans involved in the HMO Research Network's Center for Education and Research on Therapeutics (CERT). Analyses were performed using a maximized sequential probability ratio test (maxSPRT). Five drug-event pairs representing known associations with an ADE and two pairs representing 'negative controls' were analyzed. Statistically significant (p < 0.05) signals of excess risk were found in four of the five drug-event pairs representing known associations; no signals were found for the negative controls. Signals were detected between 13 and 39 months after the start of surveillance. There was substantial variation in the number of exposed and expected events at signal detection. Prospective, periodic evaluation of routinely collected data can provide population-based estimates of medication-related adverse event rates to support routine, timely post-marketing surveillance for selected ADEs. |
The proposed work examines the factors that influence the course of interracial interactions. Research on intergroup relations has highlighted the importance of intergroup anxiety in determining people's responses toward outgroup members. Drawing upon previous theorizing from both the prejudice and social anxiety literatures, the proposed work offers a model of the antecedents and implications of anxiety in interracial interactions. Specifically, it is argued that intergroup anxiety results from a lack of positive previous experiences with outgroup members, negative expectations about the course of interracial interactions, and a lack of personal motivation to respond without bias toward outgroup members. Further, this anxiety is posited to result in heightened hostility toward outgroup members and a desire to avoid interacting with outgroup members, particularly among people who are not personally motivated to respond without prejudice. Intergroup anxiety creates stress in people's lives and can result in avoidant coping styles, both of which have negative implications for people's psychological well-being. Avoidance of outgroup members can lead to feelings of social isolation among those who are avoided, which has negative psychological and health-related implications. In addition, if anxiety results in hostility toward outgroup members, this hostility could be stressful for outgroup members and threaten their self-esteem. The proposed work tests the causal relationships in the model using a longitudinal survey approach and a laboratory experiment, with the aim of developing strategies and interventions to reduce anxiety and the concomitant avoidance and hostility in interracial interactions. |
Q:
operator + and float argument
I have a curious issue with templates. I am trying to implement a basic addition between
a template class and a float/double/int type. It is very basic, but if I do:
template<class T>
class toto{
T a;
};
template<class T>
toto<T> operator+(toto<T> const&, T&){
std::cout << "hello " <<std::endl;
}
int main(){
toto<float> t;
toto<float> d = t +2.3;
}
It will not compile because 2.3 is treated as a double, so it does not match the signature. I could use a second template parameter for my operator+, as in:
template<class T, class D>
toto<T> operator+(toto<T> const&, D&){
std::cout << "hello " <<std::endl;
}
It compiles and executes correctly, but it is too dangerous: D can be anything. An alternative is to create a different signature for each of float, double and int (O_O). Boost::enable_if seems to be my solution, but in the doc I read:
template <class T>
T foo(T t,typename enable_if<boost::is_arithmetic<T> >::type* dummy = 0);
Applying this method to the operator does not work, because the compiler complains that default arguments are forbidden on overloaded operators.
Any suggestions ?
Cheers,
++t
A:
Use a non-deduced context for the second parameter, and take it by const reference so that rvalues are allowed.
template <typename T> struct identity {using type = T;};
template <typename T>
using identity_t = typename identity<T>::type;
template<class T>
toto<T> operator+(toto<T> const& lhs, identity_t<T> const& /*rhs*/)
{
    std::cout << "hello " << std::endl;
    return lhs; // a return statement was missing in the original snippet
}
A non-deduced context causes deduction to ignore the call argument for that parameter, since the template parameters cannot be deduced from it. In some scenarios, as here, that is desirable because inconsistent deductions are no longer possible. In other words, the type of the second parameter depends entirely on the first argument of the call, not on the second (which may be implicitly converted).
toto<float> d = t + 2.3;
Should now compile, Demo.
|
Mum's the Word (Dr Forlen) Caïn Bates
Silence: a jewel worn by the wise, a prison of gold or of steel, a luxury within everyone's reach, enriched by thinkers and impoverished by stupidity. Silence is a game offered to restless children, imposed on punished children and suggested to chatterboxes. Silence is a wall that can be as hard to build as to tear down.
Nana, 12 years old, sociopath. Mute for four years, a vow of silence. Lost her mother in a fire. No symptoms of social unease diagnosed; helpful and willing by nature. No longer in school.
Victoria, 15 years old, selective mutism. Suffers from mutism in the presence of strangers, particularly in the company of other girls. Growing shyness and social anxiety. Recently admitted to a socialization center.
Rosalie, 6 years old, no disorder diagnosed. Found locked in the cellar of her parents' home; needle marks on her arms, numerous bruises on her lower limbs and face, traces of light strangulation dating approximately from the day she was found, sutured lips and the beginnings of fusion of the zygomatic walls. Taken into care and underwent reconstructive surgeries. Primitive socialization with one of the nurses; uncooperative in the exercises but well on the way to recovery.
Jack, 48 years old, manic-depressive. Reported missing since the police intervention at his home. Gagged, traces of light torture at the ankles, missing phalanges, broken lower jaw, cuts on the lips (the fishing line gave way). Subject very prone to cowardice; confessions recorded and handed over to the authorities; a restaurateur's quote valuing his meat at 400 dollars. Little Rosalie's main therapy. Has just finished digging his own grave.
Patrick, 27 years old, cannibal. Accepts bribes, has called off the search, thanked me for dinner. |
Q:
1 Channel / grey images for object detection using deep learning/cnn
I have been working on object detection for some time. All the models I have seen use RGB only as input (and if we do not have 3 channels, we copy the data from one channel to the others).
Do we have any deep learning models where we can feed only 1 channel as input, like the face/eye detectors (LBPHFaceRecognizer) in OpenCV?
Basically, I am looking for deep neural networks that are simple and computationally less demanding for cases where we have only 1 channel, from sources like thermal images, infrared cameras, ToF cameras, radars, etc.
A:
If we use AutoML/neural architecture search for object detection, the resulting network can be efficient for various inputs (including grey images); MobileNetV3 and NASNet are good examples of networks found by neural architecture search.
|
Q:
Can I use react-native-i18n library with react-native-web-boilerplate
Can I use the react-native-i18n library with react-native-web-boilerplate?
react-native-web-boilerplate is a library (description here) with which you can build an app in React that runs on desktop, mobile and web platforms.
I tried my luck with the react-native-i18n library, but the react-native-i18n import fails.
The RNI18n object is returned as undefined, so I can't get the current locale or language.
import I18n from 'react-native-i18n'
Has anyone tried using react-native-i18n in a cross platform app?
Or is there any other better way you would like to suggest.
A:
You can inject a constant RNI18n into react-native's NativeModules, because react-native-i18n depends on it. Then make sure it executes before the other modules, like below:
import { NativeModules } from 'react-native'
const languages = [navigator.language]
NativeModules.RNI18n = {
languages,
getLanguages: () => Promise.resolve(languages)
}
const App = require('./src').default // If it's ES6 module
require preserves the order of execution; if you import the other modules with an import statement, they may run first and the injection will be invalid.
|
/********************************************************************************
* Copyright (c) 2020 Cedalo AG
*
* This program and the accompanying materials are made available under the
* terms of the Eclipse Public License 2.0 which is available at
* http://www.eclipse.org/legal/epl-2.0.
*
* SPDX-License-Identifier: EPL-2.0
*
********************************************************************************/
const MongoDBConnection = require('./mongoDB/MongoDBConnection');
module.exports = class RepositoryManager {
static init({
graphRepository,
machineRepository,
streamRepositoryLegacy,
backupRestoreManager,
configurationRepository
}) {
RepositoryManager.graphRepository = graphRepository;
RepositoryManager.machineRepository = machineRepository;
RepositoryManager.streamRepositoryLegacy = streamRepositoryLegacy;
RepositoryManager.backupRestoreManager = backupRestoreManager;
RepositoryManager.configurationRepository = configurationRepository;
}
static async populateDatabases(initJSON) {
if (initJSON) {
try {
const machines = initJSON.machines;
if (machines) {
// eslint-disable-next-line
for (const machineContainer of machines) {
try {
const { graph, machine } = machineContainer;
machine.scope = { id: 'root' };
// eslint-disable-next-line
await RepositoryManager.graphRepository.saveGraph(graph);
// eslint-disable-next-line
await RepositoryManager.machineRepository.saveMachine(machine);
} catch (error) {
// ignore machine
}
}
}
const streams = initJSON.streams;
if (streams) {
// eslint-disable-next-line
for (const stream of streams) {
try {
// TODO: replace with stream repository proxy
stream.scope = { id: 'root' };
// eslint-disable-next-line
await RepositoryManager.streamRepositoryLegacy.saveConfiguration(stream);
} catch (error) {
// ignore stream
}
}
}
} catch (error) {
// console.error(error);
}
}
}
static async connectAll(existingConnection) {
const connection = existingConnection || (await MongoDBConnection.create());
const db = connection.db();
Object.values(RepositoryManager)
.filter((repository) => repository && repository.connect)
.forEach((repositoryWithConnect) => {
// FIXME: should provide a method
repositoryWithConnect.db = db;
});
}
static setupAllIndicies() {
return Promise.all(
Object.values(RepositoryManager)
.filter((repository) => repository && repository.setupIndicies && repository.db)
.map((repository) => repository.setupIndicies())
);
}
static async backup(config) {
if (RepositoryManager.backupRestoreManager) {
return RepositoryManager.backupRestoreManager.backup(config);
}
return null;
}
static async restore(config) {
if (RepositoryManager.backupRestoreManager) {
return RepositoryManager.backupRestoreManager.restore(config);
}
return null;
}
};
|
We discovered that a gp120-CD4 covalently bonded complex presents a specific subset of cryptic epitopes on gp120 and/or CD4 not present on the uncomplexed molecules. These complexes elicited neutralizing antibodies with novel specificities and are thus useful in vaccines and immunotherapy against HIV infection. In addition, the complexes or antibodies thereto can be used in immunological tests for HIV infection.
Neutralizing antibodies are considered to be essential for protection against many viral infections including those caused by retroviruses. Since the initial reports of neutralizing antibodies in HIV-infected individuals, it has become increasingly clear that high levels of these antibodies in serum correlate with better clinical outcome (3-5). These studies suggested that the identification of epitopes that elicit high titer neutralizing antibodies would be essential for vaccine development against HIV infection. The primary receptor for the human immunodeficiency virus type 1 (HIV-1) is the CD4 molecule, found predominantly on the surface of T-lymphocytes. The binding of HIV-1 to CD4 occurs via the major viral envelope glycoprotein gp120 and initiates the viral infection process.
Current strategies for developing vaccines against infection by the human immunodeficiency virus have focused on eliciting antibodies against the viral envelope glycoprotein gp120 or its cell surface receptor CD4. Purified gp120 typically elicits type specific neutralizing antibodies that are reactive against epitopes that vary among virus isolates. This characteristic has hindered the use of gp120 as a vaccine.
CD4 has also been considered as a major candidate for development of a vaccine against HIV-1. Recent studies have demonstrated that sCD4 elicits HIV neutralizing antibodies in animals and prevents the spread of infection in SIV-infected rhesus monkeys (1). However, autoantibodies to CD4 may themselves create immune abnormalities in the immunized host if they interfere with normal T-cell functions. Neutralizing antibodies against gp120 are elicited in vivo in HIV-1-infected individuals and can be elicited in vitro using purified envelope glycoprotein. However, gp120 contains five hypervariable regions one of which, the V3 domain, is the principal neutralizing epitope. Hypervariability of this epitope among strains is a major obstacle for the generation of neutralizing antibodies effective against diverse strains of HIV-1. For these reasons it has been believed that vaccine strategies using either purified CD4 or gp120 present several disadvantages.
We have overcome the shortcomings of type specific anti gp120 antibodies and antibodies against CD4 by raising anti-HIV-1 neutralizing antibodies using as the immunogen a complex of gp120 chemically coupled to either soluble CD4 or to the mannose-specific lectin, succinyl concanavalin A (SC). We have found that these compounds induce similar conformational changes in gp120. The complexed gp120 appears to undergo a conformational change that presents an array of epitopes that were hidden on the uncomplexed glycoprotein (2). A portion of such epitopes elicit group-specific neutralizing antibodies, which are not strain limited like the type specific antibodies discussed above. We have discovered that the covalently bonded CD4-gp120 complexes are useful for raising neutralizing antibodies against various isolates of HIV-1 and against HIV-2.
The major research effort in the development of subunit vaccines against HIV has been directed toward the envelope glycoprotein of the virus. Inoculation of gp160 or gp120 into animals elicits neutralizing antibodies against HIV (3, 4). The principal neutralizing epitope on gp120 has been located between amino acids 306 and 326 in the third variable domain (V3 loop) of the protein (4). This epitope has been extensively studied by using both polyclonal and monoclonal antibodies (3, 4). In most cases antibodies directed to this region neutralize HIV-1 in an isolate specific manner although there is evidence that a weakly neutralizing species of anti-V3 loop antibodies can cross-react with diverse isolates (8). The reason for type specificity of anti-V3 loop antibodies is the extensive sequence variability among various isolates. Prolonged culturing of HIV-infected cells with type specific anti-V3 loop antibodies induces escape mutants resistant to neutralization (9).
In addition to the V3 loop, other neutralizing epitopes encompassing genetically conserved regions of the envelope have been identified (10, 11). However, immunization against these epitopes elicits polyclonal antisera with low neutralizing titers (12). For example, the CD4 binding region of gp120, encompassing a conserved region, elicits neutralizing antibodies against diverse isolates (13). However, this region is not normally an immunodominant epitope on the glycoprotein.
The interaction of gp120 with CD4 has been studied in considerable detail and regions of the molecules involved in complex formation have been determined (14-16). There are now several lines of evidence that interactions with CD4 induce conformational changes in gp120. First, recombinant soluble CD4 (sCD4) binding to gp120 increases the susceptibility of the V3 loop to monoclonal antibody binding and to digestion by exogenous proteinase (2). Second, sCD4 binding results in the dissociation of gp120 from the virus (17, 18). These conformational changes in gp120 are thought to facilitate the processes of virus attachment and fusion with the host cell membrane (2). Immunization with soluble CD4 and recombinant gp120, complexed by their natural affinity but not covalently bonded, resulted in the production of anti CD4 antibodies (31). Several murine monoclonal antibodies have been developed by immunization with mixtures of recombinant gp120 and sCD4 (31, 32). Antibodies raised in these studies were not strictly complex-specific and reacted with free gp120 or CD4; the neutralizing antibodies reacted with free sCD4, although they displayed various degrees of enhanced reactivity in the presence of gp120. The complexes used in these studies were unstable and comprised noncovalently bound gp120 and CD4.
A variety of N-linked carbohydrate structures of high mannose, complex and hybrid types present on the gp120 molecule may also play a role in the interaction of gp120 with host cell membranes (19-21). Indeed, a carbohydrate-mediated reactivity of gp120 has already been demonstrated with a serum lectin, known as mannose-binding protein, which has also been shown to inhibit HIV-1 infection of CD4+ cells (22). An additional carbohydrate-mediated interaction of gp120 has been shown with the endocytosis receptor of human macrophage membranes (21). It has been postulated that high affinity binding of accessible mannose residues on gp120 to the macrophage membrane may lead to virus uptake by the macrophage (21).
Recombinant soluble CD4 has been shown to inhibit HIV infection in vitro, mainly by competing with cell surface CD4. This observation has led to the possibility of using sCD4 for the therapy of HIV-infected individuals (23, 24). In addition, sCD4 has been used as an immunogen to block viral infection in animals. Treatment of SIVMAC-infected rhesus monkeys with sCD4 elicited not only an antibody response to human CD4 but also to monkey CD4. Coincident with the generation of such immunological responses, SIV could not be isolated from the PBL and bone marrow macrophages of these animals (1). A recent study with chimpanzees also demonstrated that human CD4 elicited anti-self CD4 antibody that inhibited HIV infection in vitro (25). Although immunization with sCD4 may be beneficial in blocking HIV infection, circulating antibody that recognizes self antigen may induce immune abnormality and dysfunction by binding to uninfected CD4+ cells. Nevertheless in theory anti-CD4 antibodies could be effective in blocking HIV infection provided they can disrupt virus attachment and entry without interfering with normal CD4 function. Ideally these antibodies should recognize CD4 epitopes that are present only after interaction with gp120.
We discovered that gp120-CD4 complex formation induces a specific subset of cryptic epitopes on gp120 and/or CD4 not present on the uncomplexed molecules. These epitopes elicit neutralizing antibodies with novel specificities and are thus useful in vaccines and/or immunotherapy of patients infected with HIV. In addition, the antibodies or the complexes can be used in immunological tests for HIV infection. We have demonstrated that the lectin, SC, mediates changes in the structure of gp120 in a manner similar to that mediated by CD4. The binding of SC to gp120 is another mechanism for inducing novel epitopes on the viral glycoprotein.
We used chemically-coupled gp120-CD4 complexes as immunogens for raising neutralizing antibodies. We found that gp120-CD4 complexes possess novel epitopes that elicit neutralizing antibodies. Coupling with SC caused perturbation in the gp120 conformation which in turn unmasked cryptic neutralizing epitopes on gp120. |
1. Field of the Invention
This invention relates to an improved process for producing carbon fibers from pitch which has been transformed, in part, to a liquid crystal or so-called "mesophase" state. More particularly, this invention relates to an improved process for producing carbon fibers from pitch of this type wherein the mesophase content is produced in substantially shorter periods of time than heretofore possible, at a given temperature, by passing an inert gas through the pitch during formation of the mesophase.
2. Description of the Prior Art
As a result of the rapidly expanding growth of the aircraft, space and missile industries in recent years, a need was created for materials exhibiting a unique and extraordinary combination of physical properties. Thus materials characterized by high strength and stiffness, and at the same time of light weight, were required for use in such applications as the fabrication of aircraft structures, re-entry vehicles, and space vehicles, as well as in the preparation of marine deep-submergence pressure vessels and like structures. Existing technology was incapable of supplying such materials and the search to satisfy this need centered about the fabrication of composite articles.
One of the most promising materials suggested for use in composite form was high-strength, high-modulus carbon textiles, which were introduced into the marketplace at the very time this rapid growth in the aircraft, space and missile industries was occurring. Such textiles have been incorporated in both plastic and metal matrices to produce composites having extraordinarily high strength-to-weight and modulus-to-weight ratios and other exceptional properties. However, the high cost of producing the high-strength, high-modulus carbon textiles employed in such composites has been a major deterrent to their widespread use, in spite of the remarkable properties exhibited by such composites.
One recently proposed method of providing high-modulus, high-strength carbon fibers at low cost is described in copending application Ser. No. 338,147, entitled "High Modulus, High Strength Carbon Fibers Produced From Mesophase Pitch". Such method comprises first spinning a carbonaceous fiber from a carbonaceous pitch which has been transformed, in part, to a liquid crystal or so-called mesophase state, then thermosetting the fiber so produced by heating the fiber in an oxygen-containing atmosphere for a time sufficient to render it infusible, and finally carbonizing the thermoset fiber by heating in an inert atmosphere to a temperature sufficiently elevated to remove hydrogen and other volatiles and produce a substantially all-carbon fiber. The carbon fibers produced in this manner have a highly oriented structure characterized by the presence of carbon crystallites preferentially aligned parallel to the fiber axis, and are graphitizable materials which when heated to graphitizing temperatures develop the three-dimensional order characteristic of polycrystalline graphite and graphitic-like properties associated therewith, such as high density and low electrical resistivity. At all stages of their development from the as-drawn condition to the graphitized state, the fibers are characterized by the presence of large oriented elongated graphitizable domains preferentially aligned parallel to the fiber axis.
When natural or synthetic pitches having an aromatic base are heated under quiescent conditions at a temperature of about 350°C-500°C, either at constant temperature or with gradually increasing temperature, small insoluble liquid spheres begin to appear in the pitch and gradually increase in size as heating is continued. When examined by electron diffraction and polarized light techniques, these spheres are shown to consist of layers of oriented molecules aligned in the same direction. As these spheres continue to grow with further heating, they come in contact with one another and gradually coalesce to produce larger masses of aligned layers. As coalescence continues, domains of aligned molecules much larger than those of the original spheres are formed. These domains come together to form a bulk mesophase wherein the transition from one oriented domain to another sometimes occurs smoothly and continuously through gradually curving lamellae and sometimes through more sharply curving lamellae. The differences in orientation between the domains create a complex array of polarized light extinction contours in the bulk mesophase corresponding to various types of linear discontinuity in molecular alignment. The ultimate size of the oriented domains produced is dependent upon the viscosity, and the rate of increase of the viscosity, of the mesophase from which they are formed, which, in turn, are dependent upon the particular pitch and the heating rate. In certain pitches, domains having sizes in excess of two hundred microns up to in excess of one thousand microns are produced. In other pitches, the viscosity of the mesophase is such that only limited coalescence and structural rearrangement of layers occur, so that the ultimate domain size does not exceed one hundred microns.
The highly oriented, optically anisotropic, insoluble material produced by treating pitches in this manner has been given the term "mesophase", and pitches containing such material are known as "mesophase pitches". Such pitches, when heated above their softening points, are mixtures of two essentially immiscible liquids, one the optically anisotropic, oriented mesophase portion, and the other the isotropic non-mesophase portion. The term mesophase is derived from the Greek "mesos" or "intermediate" and indicates the pseudo-crystalline nature of this highly-oriented, optically anisotropic material.
Carbonaceous pitches having a mesophase content of from about 40 per cent by weight to about 90 per cent by weight are suitable for spinning into fibers which can subsequently be converted by heat treatment into carbon fibers having a high Young's modulus of elasticity and high tensile strength. In order to obtain the desired fibers from such pitch, however, it is not only necessary that such amount of mesophase be present, but also that it form, under quiescent conditions, a homogeneous bulk mesophase having large coalesced domains, i.e., domains of aligned molecules in excess of two hundred microns up to in excess of one thousand microns in size. Pitches which form stringy bulk mesophase under quiescent conditions, having small oriented domains, rather than large coalesced domains, are unsuitable. Such pitches form mesophase having a high viscosity which undergoes only limited coalescence, insufficient to produce large coalesced domains having sizes in excess of two hundred microns. Instead, small oriented domains of mesophase agglomerate to produce clumps or stringy masses wherein the ultimate domain size does not exceed one hundred microns. Certain pitches which polymerize very rapidly are of this type. Likewise, pitches which do not form a homogeneous bulk mesophase are unsuitable. The latter phenomenon is caused by the presence of infusible solids (which are either present in the original pitch or which develop on heating) which are enveloped by the coalescing mesophase and serve to interrupt the homogeneity and uniformity of the coalesced domains, and the boundaries between them.
Another requirement is that the pitch be non-thixotropic under the conditions employed in the spinning of the pitch into fibers, i.e., it must exhibit a Newtonian or plastic flow behavior so that the flow is uniform and well behaved. When such pitches are heated to a temperature where they exhibit a viscosity of from about 10 poises to about 200 poises, uniform fibers may be readily spun therefrom. Pitches, on the other hand, which do not exhibit Newtonian or plastic flow behavior at the temperature of spinning, do not permit uniform fibers to be spun therefrom which can be converted by further heat treatment into carbon fibers having a high Young's modulus of elasticity and high tensile strength.
Carbonaceous pitches having a mesophase content of from about 40 per cent by weight to about 90 per cent by weight can be produced in accordance with known techniques, as disclosed in aforementioned copending application Ser. No. 338,147, by heating a carbonaceous pitch in an inert atmosphere at a temperature above about 350°C for a time sufficient to produce the desired quantity of mesophase. By an inert atmosphere is meant an atmosphere which does not react with the pitch under the heating conditions employed, such as nitrogen, argon, xenon, helium, and the like. The heating period required to produce the desired mesophase content varies with the particular pitch and temperature employed, with longer heating periods required at lower temperatures than at higher temperatures. At 350°C, the minimum temperature generally required to produce mesophase, at least one week of heating is usually necessary to produce a mesophase content of about 40 per cent. At temperatures of from about 400°C to 450°C, conversion to mesophase proceeds more rapidly, and a 50 per cent mesophase content can usually be produced at such temperatures within about 1-40 hours. Such temperatures are generally employed for this reason. Temperatures above about 500°C are undesirable, and heating at such temperatures should not be employed for more than about 5 minutes, to avoid conversion of the pitch to coke.
Although the time required to produce a mesophase pitch having a given mesophase content is reduced as the temperature of preparation rises, it has been found that heating at elevated temperatures adversely affects the rheological properties of the pitch by altering the molecular weight distribution of both the mesophase and non-mesophase portions of the pitch. Thus, heating at elevated temperatures tends to increase the amount of high molecular weight molecules in the mesophase portion of the pitch. At the same time, heating at such temperatures also results in an increased amount of low molecular weight molecules in the non-mesophase portion of the pitch. As a result, mesophase pitches of a given mesophase content prepared at elevated temperatures in relatively short periods of time have been found to have a higher average molecular weight in the mesophase portion of the pitch and a lower average molecular weight in the non-mesophase portion of the pitch, than mesophase pitches of like mesophase content prepared at more moderate temperatures over more extended periods. This wider molecular weight distribution has been found to have an adverse effect on the rheology and spinnability of the pitch, evidently because of a low degree of compatibility between the very high molecular weight fraction of the mesophase portion of the pitch and the very low molecular weight fraction of the non-mesophase portion of the pitch. The very high molecular weight material in the mesophase portion of the pitch can only be adequately plasticized at very high temperatures where the tendency of the very low molecular weight molecules in the non-mesophase portion of the pitch to volatilize is greatly increased. 
As a result, when such pitches are heated to a temperature where they have a viscosity suitable for spinning and attempts are made to produce fibers therefrom, excessive expulsion of volatiles occurs which greatly interferes with the processability of the pitch into fibers of small and uniform diameter. For these reasons, means have been sought for shortening the time required to produce mesophase pitch at relatively moderate temperatures of preparation where more favorable rheological properties are imparted to the pitch.
COLUMBUS (WCMH)–Franklin County Prosecutor Ron O’Brien announced Wednesday that a man has been sentenced to 15 years in prison for raping two women in 2014.
Gerardo Hernandez-Carrera, 20, was sentenced after pleading guilty to raping a woman on two separate occasions, on September 9 and September 22, 2014.
In the September 9 rape, O’Brien said, the victim was picked up in the area of Highland and Sullivant Avenue, was driven to a secluded area, and was then raped. She escaped and flagged down a passerby for help.
In the September 22 attack, the victim was picked up in the area of Sullivant and Hague Avenues, was driven to a secluded area, and was beaten and raped. A security guard patrolling a nearby apartment complex heard her pleas for help and directed law enforcement to the area.
Two of his co-defendants, Bogar Diaz, age 16 at the time, and David Pablo, age 17 at the time, have both been bound over to Common Pleas Court to be tried as adults.
Q:
Is there a way to bundle a binary file (such as chromedriver) with a single file app/exe compiled with Pyinstaller?
As noted in the answer to my question here, setting the path to chromedriver in binaries in the Pyinstaller spec file (binaries=[('/usr/bin/chromedriver', './selenium/webdriver')]) didn’t have an effect (unless it was set incorrectly). That is, chromedriver is accessed as long as it’s in the PATH (/usr/bin in this case). My question regards the possibility of bundling chromedriver in the background so that it doesn’t have to be manually installed on another machine.
A:
I successfully bundled chromedriver with pyinstaller (although, unfortunately, my virus scanner flagged it after I ran the exe, but that's another problem).
I guess your problem is that you do not give the correct path to the webdriver in the script (using the keyword executable_path). Also, I included the chromedriver as a data file, although I'm not sure if that makes a difference.
Here is my example.
sel_ex.py:
from selenium import webdriver
import os, sys, inspect # http://stackoverflow.com/questions/279237/import-a-module-from-a-relative-path
current_folder = os.path.realpath(os.path.abspath(os.path.split(inspect.getfile(inspect.currentframe() ))[0]))
def init_driver():
chromedriver = os.path.join(current_folder,"chromedriver.exe")
# via this way, you explicitly let Chrome know where to find
# the webdriver.
driver = webdriver.Chrome(executable_path = chromedriver)
return driver
if __name__ == "__main__":
driver = init_driver()
driver.get("http://www.imdb.com/")
sel_ex.spec:
....
binaries=[],
datas=[("chromedriver.exe",".")],
....
In this way, the chromedriver was stored in the main folder, although it should not matter where it is stored, as long as the script points to the correct path through the keyword executable_path.
Disclaimers:
- I did not use the one-file settings, but that shouldn't make a difference.
- My OS is Windows.
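One caveat worth knowing for one-file builds: PyInstaller unpacks files listed in datas into a temporary directory exposed at run time as sys._MEIPASS, so a path built from the script's own folder will miss them. A small helper (a sketch, not part of the answer above; the chromedriver.exe name matches the spec example) resolves the path in both the development and bundled cases:

```python
import os
import sys

def resource_path(relative_path):
    """Return an absolute path to a bundled data file.

    In a PyInstaller one-file build the archive is unpacked to a
    temporary folder exposed as sys._MEIPASS; outside PyInstaller
    we fall back to the current working directory.
    """
    base = getattr(sys, "_MEIPASS", os.path.abspath("."))
    return os.path.join(base, relative_path)

# e.g. webdriver.Chrome(executable_path=resource_path("chromedriver.exe"))
```

With this, the same script works whether it runs from source, from a one-folder build, or from a one-file exe.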
Diagnostic value of resting tricuspid regurgitation velocity and right ventricular ejection flow parameters for the detection of exercise induced pulmonary arterial hypertension.
Our objectives were to evaluate resting tricuspid regurgitation velocity (TRV) and right ventricular outflow tract velocity curve (RVOTvc) profiles as markers for development of exercise induced pulmonary arterial hypertension (ExPHT). ExPHT is an elusive cause of dyspnea and fatigue. When present, Doppler echocardiography can detect and quantify elevated pulmonary pressure. However, the characteristics and diagnostic value of resting TRV and RVOTvc indices in patients with ExPHT have not been fully addressed. The study population consisted of 52 subjects (mean age 40.5 +/- 10.9, range 22-68 years) and was divided into three subsets as follows: 1. Patients (n = 22) with overt pulmonary hypertension (PHT); 2. Patients (n = 8) with ExPHT; 3. Healthy, asymptomatic volunteers (n = 22). RVOTvc indices included: mean and peak velocity; systolic velocity time integral (VTI); velocity time integral at peak velocity (VTImax); acceleration time; and ejection time. TRV was used as an index of pulmonary artery systolic pressure. There were significant differences between normal subjects and ExPHT patients for TRV, acceleration time, and VTImax. TRV and VTImax were predictive of ExPHT in a logistic regression model. (1) Patients with ExPHT have distinct Doppler velocity patterns suggesting the presence of a compromised pulmonary vascular bed even with normal pulmonary pressure at rest. (2) TRV and RVOTvc indices have potential diagnostic value in the early detection of ExPHT.
Q:
Excel Calculate Date Differences between Orders Per Customer
I have a dataset with Account Names, Order IDs and Close Dates. I would like to see the number of days between orders, per customer; however, Excel will not allow me to sort by date in my Pivot Table values, so my Calculated Field does not work properly.
See example below to illustrate
Account Name Close Date Date Diff
Alice
74hde72hrg 29/01/2017
ery3yrtyhgf 29/01/2017 0
fdg5rrg3tg3 18/05/2018 474
fgj465df35y 26/05/2017 -357
h6hdh54y4 19/04/2018 328
rfhbswreyg 18/07/2018 90
Bob
436yrefg5y 19/04/2018
43grey43v 10/05/2017 -344
54ufhg54y 12/07/2017 63
sdg3vrf4f4 10/05/2017 -63
Jimmy
547feg4gsfd 20/07/2018
dfh5heafh5 11/01/2018 -190
fh35qhrdah 16/01/2018 5
fha4yfdhg3j 11/01/2018 -5
fhjwq54jrd5 20/07/2018 190
g53qyhry35 11/01/2018 -190
j655hrhg315 20/07/2018 190
Note that the Dates are not in order, and I cannot find how to put them in order in the Pivot Table so that the calculated Field is accurate.
I can do the Difference calculation in the Raw data, however it will also show me the difference between dates associated to adjacent Account Names, which I don't want.
Any Ideas?
A:
Convert your
Account Name Close Date Date Diff
Alice
74hde72hrg 29/01/2017
ery3yrtyhgf 29/01/2017 0
fdg5rrg3tg3 18/05/2018 474
fgj465df35y 26/05/2017 -357
h6hdh54y4 19/04/2018 328
rfhbswreyg 18/07/2018 90
Bob
436yrefg5y 19/04/2018
43grey43v 10/05/2017 -344
54ufhg54y 12/07/2017 63
sdg3vrf4f4 10/05/2017 -63
Jimmy
547feg4gsfd 20/07/2018
dfh5heafh5 11/01/2018 -190
fh35qhrdah 16/01/2018 5
fha4yfdhg3j 11/01/2018 -5
fhjwq54jrd5 20/07/2018 190
g53qyhry35 11/01/2018 -190
j655hrhg315 20/07/2018 190
to
Owner Account Name Close Date Date Diff
Alice 74hde72hrg 29/01/2017
Alice ery3yrtyhgf 29/01/2017 0
Alice fdg5rrg3tg3 18/05/2018 474
Alice fgj465df35y 26/05/2017 -357
Alice h6hdh54y4 19/04/2018 328
Alice rfhbswreyg 18/07/2018 90
Bob 436yrefg5y 19/04/2018
Bob 43grey43v 10/05/2017 -344
Bob 54ufhg54y 12/07/2017 63
Bob sdg3vrf4f4 10/05/2017 -63
Jimmy 547feg4gsfd 20/07/2018
Jimmy dfh5heafh5 11/01/2018 -190
Jimmy fh35qhrdah 16/01/2018 5
Jimmy fha4yfdhg3j 11/01/2018 -5
Jimmy fhjwq54jrd5 20/07/2018 190
Jimmy g53qyhry35 11/01/2018 -190
Jimmy j655hrhg315 20/07/2018 190
This table can be easily sorted - by "Owner" and then by "Close Date".
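Outside Excel, the same two steps (fill the owner down onto each order row, then difference the dates within each owner only) can be sketched with pandas; this is an illustration, not part of the answer above, and the sample rows mirror a slice of the table:

```python
import pandas as pd

# Raw rows as exported: owner rows carry only the name, order rows carry IDs and dates.
raw = pd.DataFrame({
    "Account Name": ["Alice", "74hde72hrg", "ery3yrtyhgf", "fdg5rrg3tg3",
                     "Bob", "436yrefg5y", "43grey43v"],
    "Close Date": [None, "29/01/2017", "29/01/2017", "18/05/2018",
                   None, "19/04/2018", "10/05/2017"],
})
raw["Close Date"] = pd.to_datetime(raw["Close Date"], dayfirst=True)

# Step 1: where there is no date it is an owner header row; fill that
# name down onto the order rows, then drop the header rows themselves.
raw["Owner"] = raw["Account Name"].where(raw["Close Date"].isna()).ffill()
orders = raw.dropna(subset=["Close Date"]).copy()

# Step 2: sort within each owner and difference dates per group,
# so adjacent customers never bleed into each other's gaps.
orders = orders.sort_values(["Owner", "Close Date"])
orders["Date Diff"] = orders.groupby("Owner")["Close Date"].diff().dt.days
```

The groupby keeps the first order of each customer blank (no previous order to compare against), and the gaps come out non-negative because each customer is sorted by date first.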
Q:
Enable or disable a 'Table' (or any other control) when the selection of a DropDownList changes
I have a question: how can I enable or disable a 'Table' when value1 or value2 is selected?
That is, if I select Value1 the table is shown; if I select Value2 instead, it is hidden.
This is my DropDown:
<asp:DropDownList ID="ddownDesType" runat="server" css="input-form-edit"
CssClass="browser-default custom-select"
AppendDataBoundItems="true" AutoPostBack="false">
<asp:ListItem Value="-1">Seleccionar Tipo Desviacion</asp:ListItem>
I understand there is an 'OnSelectedIndexChanged' event that belongs to the asp.net control, but I don't know if that is the correct way.
I don't know whether it is possible with Javascript.
I have been trying with some of the controls' events but have had no success so far.
A:
You can do it with javascript using the onchange event: find your dropdown by its id ddownDesType, read its value, and show or hide the table depending on that value.
document.getElementById("seleccion").onchange = function() { Mostrar(this.value) };
function Mostrar(val) {
  var x = document.getElementById("tablaEjemplo");
  if (val === "1") {
    x.style.visibility = "visible";
  } else if (val === "2") {
    x.style.visibility = "hidden";
  } else {
    x.style.visibility = "hidden";
  }
}
table, th, td {
border: 1px solid black;
border-collapse: collapse;
}
<table id="tablaEjemplo" >
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>Dato1</td>
<td>Dato2</td>
<td>Dato3</td>
</tr>
</table>
<hr>
<select id="seleccion">
<option value="0">Seleccione un valor</option>
<option value="1">Mostrar</option>
<option value="2">Ocultar</option>
</select>
This would run on the client side. I don't know the details in VB, but by adding an EventHandler you could achieve it from the server side.
<?xml version="1.0"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"
xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<modelVersion>4.0.0</modelVersion>
<!-- Can't use <parent> element in pom because it causes plexus
dependency recursion. Defining <dependencyManagement>
resolves this.
-->
<parent>
<groupId>org.jboss</groupId>
<artifactId>jboss-parent</artifactId>
<version>35</version>
<relativePath/>
</parent>
<groupId>org.jboss.resteasy</groupId>
<artifactId>arquillian-utils</artifactId>
<version>4.6.0-SNAPSHOT</version>
<name>RESTEasy Main testsuite: Arquillian utils</name>
<packaging>jar</packaging>
<properties>
<version.resteasy.testsuite>${project.version}</version.resteasy.testsuite>
</properties>
<profiles>
<profile>
<id>arquillian.managed</id>
<activation>
<property>
<name>arquillian.managed</name>
</property>
<activeByDefault>true</activeByDefault>
</activation>
<dependencies>
<dependency>
<groupId>org.wildfly.arquillian</groupId>
<artifactId>wildfly-arquillian-container-managed</artifactId>
</dependency>
</dependencies>
</profile>
</profiles>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-dependencies</artifactId>
<version>${project.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<!-- arquillian general -->
<dependency>
<groupId>org.jboss.shrinkwrap.resolver</groupId>
<artifactId>shrinkwrap-resolver-depchain</artifactId>
<type>pom</type>
</dependency>
<dependency>
<groupId>org.jboss.arquillian.junit</groupId>
<artifactId>arquillian-junit-container</artifactId>
<scope>compile</scope>
</dependency>
<!-- END OF arquillian general -->
<dependency>
<groupId>com.io7m.xom</groupId>
<artifactId>xom</artifactId>
</dependency>
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-core-spi</artifactId>
<version>${version.resteasy.testsuite}</version>
</dependency>
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-client</artifactId>
<version>${version.resteasy.testsuite}</version>
</dependency>
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-validator-provider</artifactId>
<version>${version.resteasy.testsuite}</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-api</artifactId>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
</dependency>
<dependency>
<groupId>org.wildfly.extras.creaper</groupId>
<artifactId>creaper-core</artifactId>
<exclusions>
<exclusion>
<groupId>org.jboss.as</groupId>
<artifactId>jboss-as-controller-client</artifactId>
</exclusion>
<exclusion>
<groupId>org.jboss.as</groupId>
<artifactId>jboss-as-cli</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.wildfly.extras.creaper</groupId>
<artifactId>creaper-commands</artifactId>
<exclusions>
<exclusion>
<groupId>org.wildfly</groupId>
<artifactId>wildfly-patching</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.wildfly.core</groupId>
<artifactId>wildfly-cli</artifactId>
</dependency>
<dependency>
<groupId>org.apache.maven</groupId>
<artifactId>maven-aether-provider</artifactId>
<exclusions>
<exclusion>
<groupId>org.sonatype.aether</groupId>
<artifactId>aether-impl</artifactId>
</exclusion>
<exclusion>
<groupId>org.sonatype.aether</groupId>
<artifactId>aether-spi</artifactId>
</exclusion>
<exclusion>
<groupId>org.sonatype.aether</groupId>
<artifactId>aether-util</artifactId>
</exclusion>
<exclusion>
<groupId>org.sonatype.aether</groupId>
<artifactId>aether-api</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.eclipse.aether</groupId>
<artifactId>aether-api</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.aether</groupId>
<artifactId>aether-spi</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.aether</groupId>
<artifactId>aether-util</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.aether</groupId>
<artifactId>aether-impl</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.aether</groupId>
<artifactId>aether-connector-basic</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.aether</groupId>
<artifactId>aether-transport-file</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.aether</groupId>
<artifactId>aether-transport-http</artifactId>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-deploy-plugin</artifactId>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
<plugin>
<artifactId>maven-javadoc-plugin</artifactId>
<configuration>
<source>8</source>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-install-plugin</artifactId>
<configuration>
<skip>false</skip>
</configuration>
</plugin>
</plugins>
</build>
</project>
Wintry Feel: Hawkish Fed, Rising Producer Prices Give Market a Chill
The market starts Friday with pressure expected, due in part to the Fed’s expectations of more rate hikes and to a wholesale inflation report that showed prices climbing more than analysts had expected.
Key Takeaways
Market tone looks weak to start the day as investors seem focused on further rate hikes
Europe, Asia both fall across the board, with hawkish Fed language a possible factor
Earnings news has been mixed over the last day, with retail earnings ahead next week
(Friday Market Open) As the first snowflakes of winter fall in parts of the country Friday, the stock market also has a chilly tone to start the day. Concerns about rising rates continue to weigh on sentiment, and so does a higher-than-expected read on U.S. inflation.
Some investors seem to be seizing on what looked like an unremarkable aspect of the Fed’s statement yesterday, one that was unchanged from the previous meeting. The Fed said it expects “further, gradual increases” in rates as the economy continues to thrive. It’s a bit hard to understand any panic about these words, because they didn’t tell investors anything that most didn’t already know. Still, perhaps some people had hoped for a more dovish tone after October’s market thrashing and the Fed didn’t deliver. As it is, markets fell across Europe and Asia overnight, and pressure might spill over into U.S. trading.
Speaking of Europe, there’s more drama there as the European Commission tangled with Italy over the Italian government’s budget forecasts, which the EC said looked too optimistic on deficits. Moving west, debate raged about whether a Brexit deal might be getting close, and the U.K. government is holding meetings on the issue this weekend, media reports said. A Brexit breakthrough, if it comes, might give European markets a boost. But it’s unclear how close it might be. Asian markets are on pace to fall for the sixth week out of the last seven.
Back home, a slew of Fed speakers march out to the microphones today, so consider watching for some of their reflections. Michigan sentiment is also due this morning. Additionally, in a report that might keep the Fed on its toes, producer prices shot up by 0.6% in October—the biggest monthly climb in six years and way above analyst estimates of 0.2%. Higher gas, machinery, and equipment prices all played a part. However, it’s important to note that core wholesale inflation, which strips out volatile energy and food prices, rose a little less at 0.5%, and prices over the last year are up 2.9%, below the peak seen last summer. With oil now in a tailspin (see more below), inflationary worries might be lifting.
In earnings news late Thursday, Disney (DIS) delivered a strong quarter, posting earnings per share of $1.48 and revenue of $14.31 billion. Third-party consensus was for $1.34 and $13.73 billion. Both parks and resorts and the company’s studio-entertainment segments performed well, and the company predicted an accelerated timeline for its takeover of major assets of 21st Century Fox Inc. (FOX). According to DIS, the acquisition could close “meaningfully earlier” than the mid-2019 date it had previously projected, The Wall Street Journal reported. Shares of DIS climbed 1% in pre-market trading.
The news didn’t look so hot over at Activision Blizzard (ATVI), where shares crumbled 10% in the pre-market hours after the company missed analysts’ earnings expectations and saw active users fall.
Fed Reflection
Thursday saw stocks move a bit lower as the Fed held rates steady. Some of the pressure also might have reflected profit taking after a massive rally Wednesday that possibly got a bit overdone. Also, the Fed stuck to its position of advocating gradual rate hikes, meaning no sign of any end to the steady drip-drip of higher rates (this is the part that seems to be hurting sentiment early Friday). The Fed’s decision likely didn’t come as a surprise to many, but futures prices still predict around a 76% chance of a fourth 2018 rate hike by the end of the year. Fed Chair Jerome Powell is scheduled to make some public comments next week, which could give investors more insight into the Fed’s current thinking.
The Fed’s statement Thursday didn’t evolve much from its last meeting in September. As it did then, the Fed noted “economic activity has been rising at a strong rate,” and “job gains have been strong.” It added that the unemployment rate has dropped, and that household spending has “continued to grow strongly.”
Fed Sees Business Investment Moderating
The one significant change was around business investment, and that follows what looked like a slowdown in that category in the Q3 gross domestic product (GDP) report issued last month. “Growth of business fixed investment has moderated from its rapid pace earlier in the year,” the Fed’s statement now reads. Back in September, the Fed’s statement said that the category had “grown strongly.”
On the inflation front, there was no change to the Fed’s September prediction that price increases would remain near its 2% target over the next 12 months both for overall inflation and core inflation that strips out food and energy prices.
The Fed arguably finds itself in a tough place as it tries to keep the U.S. economy from overheating while facing pressure not to push the dollar so high through tighter monetary policy that it hurts foreign countries and keeps their consumers from being able to afford U.S. products. Remember, the Fed has a dual mandate of stable prices and maximum employment. There are signs that a stronger dollar, along with recent trade battles, might be hurting U.S. company outlooks.
In Treasuries, the 10-year benchmark yield rose slightly to 3.23% after the Fed statement Thursday, just a few basis points below last month’s highs. It was at 3.21% early Friday.
Energy Loses Steam After Oil Craters
Energy shares took the biggest hit Thursday, falling more than 2% as crude oil entered a bear market (see more below). The dollar index is also back on the climb, moving above 96.60 Thursday after falling below 96 earlier this week. Hawkish Fed policy tends to support a strong dollar, and that could be a possible stress factor for some sectors like tech, industrials, and materials with big foreign markets. Also, the dollar is once again climbing vs. the Chinese yuan, with the psychological 7 yuan to the dollar level not far away.
Financials did appear to get a boost from the Fed’s continued optimistic tone regarding the economy, and topped the sector leaderboard Thursday. In fact, financials are outpacing the broader S&P 500 Index (SPX) over the last week. That’s kind of a change of pace, but it’s not clear if it can hold up. Bank stocks have declined about 7.5% so far this year, and just haven’t been able to get going despite this anticipated higher rate environment.
The stock market is out of correction, but the crude market turned bearish Thursday. U.S. futures closed below $61 a barrel on Thursday, down more than 20% from highs above $76 last month. Generally, a 20% drop from high close to low close defines a bear market. By early Friday, the front-month U.S. contract had fallen to $59.80, the first time a front-month contract has fallen under $60 since March 8. There hasn’t been a close below $60 in nearly nine months, so perhaps that’s worth watching later today. Thursday’s losses were the ninth consecutive lower session for U.S. crude futures, with oil taking a beating from U.S. production hitting record highs and Saudi Arabia and Russia also gearing up output.
It is too soon to tell the long-term impact of all this on the energy sector. One bad month probably isn’t going to make that big a difference, but extended periods of low oil prices can sometimes hinder energy sector earnings growth, as investors might remember from the oil bear market of 2015-2016. At that point, crude fell as low as $26 a barrel in the U.S., and many oil companies saw dramatic plunges in their earnings. It’s way too soon to worry about that now, and it’s also unlikely crude could fall to such bargain-basement levels, especially considering how healthy the U.S. economy is now compared to then.
Figure 1: Extra Cash for Holiday Shoppers? This one-month chart comparing the consumer discretionary sector (candlestick) to the energy sector (purple line) shows that discretionary stocks are starting to rally back from October lows even as energy continues to wallow near recent lows. Cheaper oil is hurting the energy sector, but might end up helping discretionary by putting a little extra gas savings in holiday shoppers’ pockets. Data Source: S&P Dow Jones Indices. Chart source: The thinkorswim® platform from TD Ameritrade. For illustrative purposes only. Past performance does not guarantee future results.
Lower Gas Costs Might Help Shoppers: Consumer discretionary has had its rally hat on lately ahead of holiday shopping season, and gas prices going back below $3 a gallon across much of the country over the last week probably can’t hurt this sector. Lower gas prices might have a psychological impact, making shoppers feel like they can spend a bit more on gifts if they’re saving at the pump (see chart above).
Stay tuned this weekend for an OPEC meeting. Some analysts think OPEC members might start sending signals soon about lowering output in the face of this price pullback, but that’s not assured. Others say Saudi Arabia may be under political pressure to keep the oil flowing after last month’s diplomatic dispute with the U.S. over a missing journalist.
Tech Recovery Beyond the Headlines: The FAANGs get a lot of attention, and all five of those closely watched stocks are up significantly from their October lows. Some have even posted double-digit gains. However, if you’re looking for the best performing tech stocks since October’s depths, you won’t find the FAANGs among them. Instead, tech leaders as of Thursday afternoon included semiconductor firm Advanced Micro Devices (AMD), up 34% from its October low, followed by a 30% gain from cybersecurity company Symantec (SYMC), and a 24% rise from Xilinx (XLNX), a supplier of programmable logic devices, CNBC noted. What this might help show is how varied the tech sector is beyond big names like Apple (AAPL), IBM (IBM) and Microsoft (MSFT).
From a high-level view, tech still trails the broader S&P 500 (SPX) over the last month. This partly reflects how so much selling zeroed in on tech shares during October’s shakeout. For the year so far, tech shares are still up more than 12%, compared with less than 6% for the SPX, but tech’s rise this year is way short of the better than 30% gains of 2017. Consider keeping an eye out for a couple of major tech earnings next week from Cisco (CSCO) and Nvidia (NVDA), which could give insight into the latest macro developments in those two companies’ respective industries.
Retail Stocks Perk Up Ahead of Holidays, Earnings: Black Friday is just two weeks away (less if you factor in that many stores actually throw open the doors late on Thanksgiving), and retail earnings season is also fast approaching. Next week brings Home Depot (HD) and Macy’s (M), along with Wal-Mart (WMT), Nordstrom (JWN), and J.C. Penney (JCP). So what do investors seem to think about how holiday shopping season might shape up? Judging from company share performance, the answer seems to be a thumbs up. Many big retailers have been outpacing the broader S&P 500 (SPX) since posting their October lows. As of midday Thursday, Macy’s shares were up 20% since the recent low last month; Amazon (AMZN) was up 20%, and Kohl’s (KSS) was up 18%. WMT didn’t seem to be affected much by the market sell-off in October, and is up 13% over the last month. The SPX is up about 8% from its October low.
On the whole, consumer discretionary stocks are slightly outpacing the SPX since early October. The outlier might be Apple (AAPL), which is actually a tech stock but obviously can have a huge impact on shopping season. The stock has made it back a bit after falling below $200 a share last week, but remains a long way from recent highs of around $230 as investors continue to debate what the company’s holiday quarter guidance and decision to stop reporting iPhone unit sales might mean moving forward.
Check out all of our upcoming Webcasts or watch one of the many archived ones, covering a wide range of topics from market commentary to portfolio planning basics to trading strategies for active investors. No matter your experience level, there’s something for everybody.
Looking to stay on top of the markets? Check out the TD Ameritrade Network, which is live programming that brings you market news and helps you hone your trading knowledge.
The TD Ameritrade Network is brought to you by TD Ameritrade Media Productions Company. TD Ameritrade Media Productions Company and TD Ameritrade, Inc. are separate but affiliated subsidiaries of TD Ameritrade Holding Corporation.
TD Ameritrade and all third parties mentioned are separate and unaffiliated companies, and are not responsible for each other’s policies or services.
Inclusion of specific security names in this commentary does not constitute a recommendation from TD Ameritrade to buy, sell, or hold.
Market volatility, volume, and system availability may delay account access and trade executions.
Past performance of a security or strategy does not guarantee future results or success.
Options are not suitable for all investors as the special risks inherent to options trading may expose investors to potentially rapid and substantial losses. Options trading subject to TD Ameritrade review and approval. Please read Characteristics and Risks of Standardized Options before investing in options.
Supporting documentation for any claims, comparisons, statistics, or other technical data will be supplied upon request.
The information is not intended to be investment advice or construed as a recommendation or endorsement of any particular investment or investment strategy, and is for illustrative purposes only. Be sure to understand all risks involved with each strategy, including commission costs, before attempting to place any trade. Clients must consider all relevant risk factors, including their own personal financial situations, before trading.
This is not an offer or solicitation in any jurisdiction where we are not authorized to do business or where such offer or solicitation would be contrary to the local laws and regulations of that jurisdiction, including, but not limited to persons residing in Australia, Canada, Hong Kong, Japan, Saudi Arabia, Singapore, UK, and the countries of the European Union.
Oncocytic cystadenoma of the parotid gland with prominent signet-ring cell features.
A case of a distinctive benign cystadenoma of the parotid gland composed of several different morphological components is presented. The most conspicuous morphological component, and the largest part of the neoplasm, was represented by solid sheets of oncocytic cells surrounded by a myoepithelial cell layer. Most oncocytic cells possessed large intracytoplasmic vacuoles with the nuclei displaced towards the periphery, imparting a striking signet-ring cell appearance. The size of the intracytoplasmic vacuoles ranged from 4 to 50 µm. Immunohistochemically, these signet-ring cells lacked immunoreactivity for S-100 protein and cytokeratin but stained strongly for antimitochondrial antibody 113-1. The present case illustrates an unusual, hitherto undescribed morphological feature of benign oncocytic cystadenoma of the parotid gland.
---
abstract: 'We consider one copy of a quantum system prepared in one of two orthogonal pure states, entangled or otherwise, and distributed between any number of parties. We demonstrate that it is possible to identify which of these two states the system is in by means of local operations and classical communication alone. The protocol we outline is both completely reliable and completely general - it will correctly distinguish any two orthogonal states 100% of the time.'
author:
- Jonathan Walgate
- 'Anthony J. Short'
- Lucien Hardy
- Vlatko Vedral
title: Local Distinguishability of Multipartite Orthogonal Quantum States
---
Introduction {#sec:Intro}
============
Pure quantum states may only be perfectly distinguished from one another when they are orthogonal. That is, a state $\left| \psi \right\rangle $ may be reliably distinguished from another, $\left| \phi \right\rangle $, only if $\left\langle
\psi |\phi \right\rangle =0$. We will show that if $\left\langle
\psi |\phi \right\rangle =0$ for given $\left| \psi\right\rangle$ and $\left| \phi\right\rangle$, then $\left| \psi \right\rangle $ may always be distinguished from $\left| \phi \right\rangle $ by means of local operations and classical communication (LOCC). This may be surprising, since quantum systems can encode information that may only be extracted by analyzing the system *as a whole*. This well-known phenomenon - entanglement - forms the basis of many recently proposed quantum schemes, such as cryptography [@crypt1; @crypt2; @crypt3], computation [@comp] and enhanced communication [@comm]. A tempting interpretation is that “entangled information” can only be uncovered using global measurements upon the system as a whole. But this is not the case - in our very general situation local measurements, sequentially dependent upon classically communicated prior measurement results, suffice to identify orthogonal entangled quantum states.
Schemes for distinguishing between a set of quantum states, both pure and mixed, have been considered by various authors [@helstrom; @holevo; @fuchs; @sausage; @gottesman; @koashi]. Closely related to the present paper is the work of Bennett et al. [@sausage], who showed that there exist sets of orthogonal product states that cannot be distinguished by LOCC.
Alice and Bob each hold part of a quantum system, which occupies one of two possible orthogonal quantum states $\left|
\psi \right\rangle $ and $\left|
\phi \right\rangle $. Alice and Bob know the precise form of $\left| \psi \right\rangle $ and $\left| \phi
\right\rangle $, but have no idea which of these possible states they actually possess: they will have to perform some measurements to find out. A global measurement would suffice, but alas Alice and Bob cannot afford to meet up. Fortunately for them, they are on speaking terms, as one phone call is all they require. This situation, LOCC, is of primary relevance to most applications of entanglement.
The strategy Alice and Bob adopt is simple. They can always find a basis in which the two orthogonal states can be represented $$\begin{aligned}
\left| \psi \right\rangle &=&\left| 1\right\rangle _{A'}\left| \eta
_{1}\right\rangle _{B}+\cdots +\left| l\right\rangle _{A'}\left|
\eta _{l}\right\rangle _{B} \label{final} \\ \left| \phi
\right\rangle &=&\left| 1\right\rangle _{A'}\left| \eta
_{1}^{\perp }\right\rangle _{B}+\cdots +\left| l\right\rangle
_{A'}\left| \eta _{l}^{\perp }\right\rangle _{B} \nonumber\end{aligned}$$ where { $\left| i\right\rangle_{A'}$ for $i=1$ to $l$} form some orthogonal basis set for Alice, $\{ \left| \eta _{1}\right\rangle_{B} ,\cdots ,\left| \eta _{l}\right\rangle_{B}\}$ are not normalized, and $\left| \eta _{i}^{\perp
}\right\rangle _{B}$ is orthogonal to $\left| \eta _{i}\right\rangle_{B}$. Alice simply measures her part of the system in such a basis, and communicates the result, $i$, to Bob. Bob then has an easy task - he may distinguish locally between $\left| \eta
_{i}\right\rangle _{B}$ and $\left| \eta _{i}^{\perp
}\right\rangle _{B}$ and thereby know which state he and Alice shared to begin with.
Matrix Representation of Possible States {#sec:2}
========================================
Alice and Bob start out knowing the precise form of two states that might correspond to their shared quantum system. These two possible states, $\left| \psi \right\rangle $ and $\left| \phi
\right\rangle $, are orthogonal, so that $\left\langle \psi |\phi
\right\rangle =0$. We can represent them in the following, entirely general way: $$\begin{aligned}
\left| \psi \right\rangle &=&\left| 1\right\rangle _{A}\left| \eta
_{1}\right\rangle _{B}+\cdots +\left| n\right\rangle _{A}\left|
\eta _{n}\right\rangle _{B} \label{initial} \\ \left| \phi
\right\rangle &=&\left| 1\right\rangle _{A}\left| \nu
_{1}\right\rangle _{B}+\cdots +\left| n\right\rangle _{A}\left|
\nu _{n}\right\rangle _{B} \nonumber\end{aligned}$$ where $\{ \left| 1\right\rangle _{A},\cdots ,\left| n\right\rangle _{A}\}$ form an orthonormal basis set for Alice, and the vectors $\{\left| \eta
_{1}\right\rangle _{B},\cdots ,\left| \eta
_{n}\right\rangle _{B}\}$ and $\{\left| \nu
_{1}\right\rangle _{B},\cdots ,\left| \nu
_{n}\right\rangle _{B}\}$ are not normalized and also not necessarily orthogonal. Alice and Bob can express the vectors $\{\left| \eta
_{1}\right\rangle _{B},\cdots ,\left| \eta
_{n}\right\rangle _{B}\}$ and $\{\left| \nu
_{1}\right\rangle _{B},\cdots ,\left| \nu
_{n}\right\rangle _{B}\}$ as a superposition of a set of arbitrary basis vectors in Bob’s space $$\left| \eta_{i}\right\rangle _{B}=\sum_{j} F_{ij} \left| j\right\rangle _{B}
\, , \;\;
\left| \nu_{i}\right\rangle _{B}=\sum_{j} G_{ij} \left| j\right\rangle _{B}
\label{A,B matrices}$$ where the elements $F_{ij}$ and $G_{ij}$ form two $n\times m$ matrices $F$ and $G$. These matrices preserve all the information Alice and Bob hold about states $\left| \psi\right\rangle$ and $\left| \phi\right\rangle$. Because of the way they are constructed, the matrix $FG^{\dagger}$ takes the following form: $$FG^{\dagger}=\left(
\begin{array}{ccc}
\langle \nu _{1}|\eta _{1}\rangle & \cdots & \langle \nu _{1}|\eta
_{n}\rangle \\ \vdots & \ddots & \vdots \\ \langle \nu _{n}|\eta
_{1}\rangle & \cdots & \langle \nu _{n}|\eta _{n}\rangle
\end{array}
\right) \label{AB}$$ We can see this is the case by inspection, because each element $\{FG^{\dagger}\}_{ij}=\sum_{k}F_{ik}G_{jk}^{\ast }$ is, by (\[A,B matrices\]), an overlap of one of the $\left| \nu \right\rangle _{B}$ vectors with one of the $\left| \eta \right\rangle _{B}$ vectors. The matrix $FG^{\dagger}$ encapsulates a great deal of significant information for Alice and Bob about the relationship between the states $\left| \psi\right\rangle$ and $\left| \phi\right\rangle$. Since we know by the conditions of the problem that $\langle \phi |\psi \rangle =0$, we know that $$\langle \phi |\psi \rangle =\sum_{i=1}^{n} \langle \nu _{i}|\eta _{i}\rangle = {\rm Trace}(FG^{\dagger})=0 \label{orthogonality}$$ But the $FG^{\dagger}$ matrix holds more information than the simple fact of the states’ orthogonality. It also encodes the key to distinguishing between these two possible states. Alice plans to distinguish $\left| \psi\right\rangle $ and $\left| \phi\right\rangle$ by finding some basis - any basis - in which she can describe her part such that the states $%
\left| \psi\right\rangle $ and $\left| \phi\right\rangle $ take the more restricted form of (\[final\]). Alice must choose her $\{ \left| 1\right\rangle _{A},\cdots ,\left| n\right\rangle _{A}\}$ basis carefully such that no matter what result $\left| i\right\rangle _{A}$ she obtains, Bob can surely distinguish between his possible states. This means that for all $i$, $%
\left| \nu _{i}\right\rangle $ must be orthogonal to $\left| \eta _{i}\right\rangle $. Thus we can write down our distinguishability criterion: $$\forall i \qquad \langle \nu _{i}|\eta _{i}\rangle =0
\label{distinguishability}$$ In other words, in our matrix representation, we require the diagonal elements of $FG^{\dagger}$ to be zero. Alice can alter the form of $FG^{\dagger}$ by changing the basis in which she describes and measures her system. She has a great deal of choice in this regard: any orthogonal basis set spanning her space will provide a description of form (\[initial\]), and thus some matrix $FG^{\dagger}$ of form (\[AB\]). When she changes her orthonormal basis set, this changes the form of the matrices $F$ and $G$, and thus changes the form of $FG^{\dagger}$. In fact, unitary transformations of Alice’s measurement basis map to the conjugate unitary transformations upon $FG^{\dagger}$.
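As a numerical sanity check of the trace condition (\[orthogonality\]), the following NumPy sketch (illustrative only, not part of the original argument) builds a random orthogonal pair $\left| \psi\right\rangle$, $\left| \phi\right\rangle$, reads off $F$ and $G$ as their coefficient matrices in the product basis, and confirms that ${\rm Trace}(FG^{\dagger})$ vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4  # dimensions of Alice's and Bob's spaces

# Two random states; a Gram-Schmidt step makes |phi> orthogonal to |psi>.
psi = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
phi = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
psi /= np.linalg.norm(psi)
phi -= np.vdot(psi, phi) * psi
phi /= np.linalg.norm(phi)

# The coefficient matrices in the product basis are exactly F and G:
# psi_{ij} = F_{ij} is the amplitude of |i>_A |j>_B, likewise phi and G.
F, G = psi, phi

overlap = np.trace(F @ G.conj().T)
```

Here $\left\langle \phi |\psi \right\rangle =\sum_{ij}G_{ij}^{\ast }F_{ij}={\rm Trace}(FG^{\dagger})$, so `overlap` should be zero up to rounding error.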
A unitary transformation $U^{A}$ upon Alice’s measurement basis will transform the matrix $FG^{\dagger}$ to $U^{A\ast }(FG^{\dagger})U^{A\ast \dagger }$.
Proof: From (\[initial\]), $\left| \psi\right\rangle =\sum_{i}\left| i\right\rangle _{A}\left| \eta
_{i}\right\rangle _{B}$. Alice’s unitary transformation acts thus: $\left| i\right\rangle _{A}=\sum_{j}U_{ij}^{A\dagger }\left|
j^{\prime }\right\rangle _{A}$. From (\[A,B matrices\]) it follows that, in Alice’s new basis $\{\left| 0^{\prime }\right\rangle _{A},\cdots ,\left| n^{\prime }\right\rangle
_{A}\}$: $$\left| \psi\right\rangle =\sum_{ijk}U_{ij}^{A\dagger }\left| j^{\prime }\right\rangle
_{A}F _{ik}\left| k\right\rangle _{B}$$ For true generality, we consider that Bob might assist Alice by unitarily rotating his basis by $U^{B}$. We therefore write $\left| k\right\rangle _{B}=\sum_{l}U_{kl}^{B\dagger }\left| l^{\prime }\right\rangle _{B}$, giving $\left| \psi\right\rangle =\sum_{ijkl}U_{ij}^{A\dagger }\left| j^{\prime }\right\rangle _{A}F _{ik}U_{kl}^{B\dagger }\left| l^{\prime }\right\rangle _{B}$. Since $U_{ij}^{A\dagger }=U_{ji}^{A\ast } $, we can rewrite this as $$\psi =\sum_{ijkl}\left| j^{\prime }\right\rangle _{A}\left|
l^{\prime }\right\rangle _{B}U_{ji}^{A\ast }F
_{ik}U_{kl}^{B\dagger }.$$ By analogy with (\[initial\]) and (\[A,B matrices\]), this means that in the new basis of description, we have a new matrix $F^{\prime }$ where $F _{jl}^{\prime }=\sum_{ik}U_{ji}^{A\ast }F
_{ik}U_{kl}^{B\dagger }$. Under unitary basis rotations by Alice and Bob, our matrices $F$ and $G$ undergo the curious transformations $$F^{\prime }=U^{A\ast }FU^{B\dagger }\mbox{}\mbox{}\mbox{},\mbox{}\mbox{}\mbox{} G^{\prime
}=U^{A\ast }GU^{B\dagger } \label{A,B transformations}$$ This means that the object of our interest, the $FG^{\dagger}$ matrix encoding information about the relationship *between* the states, will transform as $$\begin{aligned}
F^{\prime }G^{\prime \dagger }=(U^{A\ast }FU^{B\dagger })\left(
U^{A\ast }GU^{B\dagger }\right) ^{\dagger } \nonumber \\ =U^{A\ast }FU^{B\dagger
}U^{B}G^{\dagger }U^{A\ast \dagger } \nonumber \\ =U^{A\ast }(FG^{\dagger
})U^{A\ast \dagger } \qquad \Box \label{AB transformation}\end{aligned}$$ Bob’s unitary rotation $U^{B}$ drops out, as rotations in his basis will not affect the overlaps $\langle \nu _{i}|\eta _{j}\rangle$ that make up $FG^{\dagger}$.
If $U^{A}$ is unitary, then so is $U^{A\ast }$. Alice can find a basis of form (\[final\]), and thereby satisfy our distinguishability criterion (\[distinguishability\]) , *if and only if* there exists a unitary matrix $%
U=U^{A\ast }$ such that $U(FG^{\dagger})U^{\dagger }$ is a “zerodiagonal” matrix. (A matrix whose diagonal elements are all zero.) A proof that such a unitary matrix always exists constitutes a proof that two orthogonal quantum states can always be distinguished.
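The transformation law (\[AB transformation\]), and in particular the disappearance of Bob's rotation, is easy to verify numerically. The following sketch (an illustration with random matrices; `random_unitary` is a helper defined here, not part of the paper) checks that $F^{\prime }G^{\prime \dagger }=U^{A\ast }(FG^{\dagger})U^{A\ast \dagger }$ for an arbitrary $U^{B}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 4

def random_unitary(d, rng):
    """Random unitary from a QR decomposition, with column phases fixed."""
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

F = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
G = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
UA = random_unitary(n, rng)
UB = random_unitary(m, rng)

# Basis rotations: F' = U^{A*} F U^{B dagger}, G' = U^{A*} G U^{B dagger}
Fp = UA.conj() @ F @ UB.conj().T
Gp = UA.conj() @ G @ UB.conj().T

lhs = Fp @ Gp.conj().T
# U^{A* dagger} is just the transpose of U^A, so the right-hand side is:
rhs = UA.conj() @ (F @ G.conj().T) @ UA.T
```

The two sides agree to machine precision: $U^{B}$ cancels between $U^{B\dagger }$ and $U^{B}$, exactly as in (\[AB transformation\]).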
Matrix proof of $\left| \psi\right\rangle ,\left| \phi\right\rangle $ distinguishability {#sec:3}
========================================================================================
Unitary transformations upon Alice’s measurement basis translate into (conjugated) unitary transformations upon her specific $%
FG^{\dagger}$ matrix. If she can find a unitary rotation that converts this matrix into zerodiagonal form, she can ensure Bob will be able to distinguish between states $\left| \psi\right\rangle $ and $\left| \phi\right\rangle $.
We first prove such a rotation always exists in the two-dimensional case, and then show how Alice may use a finite sequence of such $2\times2$ transformations to zerodiagonalize any traceless $n\times n$ matrix.
Two-dimensional case {#sec:3.1}
--------------------
Let $M$ be the wholly general $2\times 2$ matrix $\left(
\begin{array}{cc}
x & y \\ z & t
\end{array}
\right) $. There exists a $2\times 2$ unitary matrix $U$ such that the diagonal elements of $UMU^{\dagger }$ are equal.
Proof: Let $U=\left(
\begin{array}{cc}
\cos \theta & \sin \theta e^{i\omega
} \\ \sin \theta e^{-i\omega
} & -\cos \theta
\end{array} \right)$.
We need the diagonal elements of $UMU^{\dagger }$ to be equal. This gives us the condition: $$(x-t)\cos 2\theta +\sin 2\theta (ye^{-i\omega }+ze^{i\omega })=0$$ The real and imaginary parts of this equation can be solved for the angles $\omega$ and $\theta$: $$\tan \omega =\frac{{\rm Im}(x-t){\rm Re}(z+y)-{\rm Re}(x-t){\rm Im}(z+y)}{%
{\rm Re}(x-t){\rm Re}(z-y)+{\rm Im}(x-t){\rm Im}(z-y)} \label{omega}$$ $$\tan 2\theta =\frac{{\rm Re}(x-t)}{{\rm Re}(z+y)\cos \omega -{\rm Im}(z-y)\sin \omega}
\label{theta}$$ The RHS of (\[omega\]) is always real, and thus there will always be an angle $\omega $ that satisfies the equation. Given a definite $\omega $, we can always solve (\[theta\]) for a definite $\theta $ for the same reason. Thus for any $2\times 2$ matrix $M$, there exists a $2\times 2$ unitary matrix that “equidiagonalizes” it. (Equalizes all its diagonal elements.) This completes the proof $\Box$.
This mathematical result applies directly to the case in which Alice's system is two-dimensional. Since the $\left| \psi\right\rangle$ and $\left| \phi\right\rangle$ states are orthogonal, the corresponding $FG^{\dagger}$ matrix is traceless, in which case equidiagonalization constitutes zerodiagonalization. Equations (\[omega\]) and (\[theta\]) therefore always pick out a specific unitary transformation that will zerodiagonalize $FG^{\dagger}$. By measuring in that basis, Alice and Bob can always distinguish between the two possible orthogonal states of their system.
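Equations (\[omega\]) and (\[theta\]) translate into a short routine. The sketch below is an illustrative NumPy implementation (the function name is ours; `arctan2` is used to fix the branch of the arctangent, which the closed-form tangents leave ambiguous). It builds $U$ for a traceless $M$ and checks that $UMU^{\dagger }$ is zerodiagonal:

```python
import numpy as np

def equidiagonalizing_unitary(M):
    """Return a 2x2 unitary U of the Sec. 3.1 form such that
    U M U^dagger has equal diagonal elements."""
    x, y = M[0, 0], M[0, 1]
    z, t = M[1, 0], M[1, 1]
    A = x - t
    # Choose omega so that B = y e^{-iw} + z e^{iw} lies on a common
    # line with A in the complex plane (this is Eq. (omega)).
    num = A.imag * (z + y).real - A.real * (z + y).imag
    den = A.real * (z - y).real + A.imag * (z - y).imag
    omega = np.arctan2(num, den)
    B = y * np.exp(-1j * omega) + z * np.exp(1j * omega)
    # Solve A cos(2 theta) + B sin(2 theta) = 0 along that common line.
    u = A if abs(A) > 1e-12 else (B if abs(B) > 1e-12 else 1.0)
    u = u / abs(u)
    a = (np.conj(u) * A).real
    b = (np.conj(u) * B).real
    theta = 0.5 * np.arctan2(-a, b)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s * np.exp(1j * omega)],
                     [s * np.exp(-1j * omega), -c]])

rng = np.random.default_rng(2)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
M[1, 1] = -M[0, 0]  # traceless, as FG^dagger must be
U = equidiagonalizing_unitary(M)
D = U @ M @ U.conj().T
```

Because $M$ is traceless, equal diagonal elements means both are zero, so `D` has a zero diagonal up to rounding error.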
$2^k$ dimensional case {#sec:3.2}
-----------------------
We want to consider all situations of greater dimensionality than 2, but we first concentrate on situations where Alice’s Hilbert space has $2^k$ dimensions (where $k$ is some positive integer). The $FG^{\dagger}$ matrix has the same dimensionality, and will have $2^k \times 2^k$ elements. Note that while this particular class of $FG^{\dagger}$ matrices - those of dimension $2^{k}$ - may seem limited, it includes all quantum states comprising sets of qubits. In such cases, Alice can adopt a simple strategy to equidiagonalize this potentially huge matrix in a relatively small number of steps. We know from the result of Sec. \[sec:3.1\] above that Alice may unitarily rotate any two diagonal elements in her $FG^{\dagger} $ matrix so that they become equal. By grouping the diagonal elements into $2^{k-1} $ pairs, and equidiagonalizing each pair, she can create $2^{k-1} $ equal pairs.
Both elements of an equal pair can then be individually made equal to the elements of another equal pair, using only two $2
\times 2$ unitary transformations. Thereby, Alice can create $2^{k-2}$ “quartets” of equal diagonal elements with just $2^{k-1} $ further $2\times 2$ unitary transformations. By repeating this process $k$ times, Alice will set all the diagonal elements exactly equal. If her $FG^{\dagger}$ matrix has $2^{k}$ diagonal elements, then $k\cdot
2^{k-1}$ elementary operations will serve to equidiagonalize it. This satisfies Alice’s requirements: since she knows that her physical $FG^{\dagger}$ matrix is traceless, she knows that all the diagonal elements $\langle \nu _{i}|\eta _{i}\rangle$ will be thereby set to zero. Therefore Alice and Bob can distinguish the two orthogonal states. Of course, Alice need not physically enact each and every separate $2\times 2$ unitary transformation. A single $2^k \times 2^k$ unitary transformation will represent the product of all these rotations, and finding this one transformation that equidiagonalizes $FG^{\dagger}$ in one shot is a perfectly tractable problem for Alice to solve.
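The pairing strategy can be sketched directly in code. In the following illustration (our own construction, not from the paper; `equalize_pair` repeats the Sec. \[sec:3.1\] rotation embedded in an $n\times n$ identity, so that the sketch is self-contained), a traceless $2^{k}\times 2^{k}$ matrix is zerodiagonalized by $k$ rounds of pairwise equalizations:

```python
import numpy as np

def equalize_pair(M, i, j):
    """2x2 unitary, embedded in the identity, that makes diagonal
    entries i and j of U M U^dagger equal (the Sec. 3.1 rotation)."""
    x, y, z, t = M[i, i], M[i, j], M[j, i], M[j, j]
    A = x - t
    num = A.imag * (z + y).real - A.real * (z + y).imag
    den = A.real * (z - y).real + A.imag * (z - y).imag
    w = np.arctan2(num, den)
    B = y * np.exp(-1j * w) + z * np.exp(1j * w)
    u = A if abs(A) > 1e-12 else (B if abs(B) > 1e-12 else 1.0)
    u = u / abs(u)
    th = 0.5 * np.arctan2(-(np.conj(u) * A).real, (np.conj(u) * B).real)
    U = np.eye(M.shape[0], dtype=complex)
    c, s = np.cos(th), np.sin(th)
    U[i, i], U[i, j] = c, s * np.exp(1j * w)
    U[j, i], U[j, j] = s * np.exp(-1j * w), -c
    return U

def zero_diagonalize(M):
    """k rounds of 2^{k-1} pairwise equalizations, as in Sec. 3.2,
    for a traceless matrix of size 2^k x 2^k."""
    n = M.shape[0]
    total = np.eye(n, dtype=complex)
    step = 1
    while step < n:
        for base in range(0, n, 2 * step):
            for off in range(step):
                U = equalize_pair(M, base + off, base + step + off)
                M = U @ M @ U.conj().T
                total = U @ total
        step *= 2
    return total, M

rng = np.random.default_rng(3)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
M -= np.trace(M) / 4 * np.eye(4)  # traceless, like FG^dagger
U, D = zero_diagonalize(M)
```

Each pairwise rotation preserves the sum of the two diagonal entries it touches, so after $k$ rounds every diagonal element equals the trace divided by $2^{k}$, which is zero.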
General case {#sec:3.3}
------------
The matrix $FG^{\dagger}$ will not, in general, be of size $2^{k}\times 2^{k}$. Alice may nevertheless devise an approach that is guaranteed to yield state equations of form (\[final\]). She needs to be inventive. Her favored tactic so far - a sequence of pair-wise equalizations - will converge upon the desired unitary matrix only in the infinite limit. She can find a more elegant method, however. The $2^{k}$ dimensional case is unproblematic, so if Alice can *enlarge* $FG^{\dagger}$ such that it achieves a dimensionality of a power of two, she can solve her problem.
Such an enlargement represents an expansion of Alice’s quantum system into a Hilbert space of greater dimension. She must perform a SWAP operation to transfer the state of her original quantum system $\mathcal{H}_{n}^{A}$ described by (\[initial\]) to an $n$-dimensional subspace of a larger space, $\mathcal{H}_{l}^{A'}$, where $l\geq n$ and $l=2^{k}$ for some integer $k$: $$\begin{aligned}
\left| i\right\rangle_{A} \left|j \right\rangle_{A'} &\Longrightarrow& \left|j \right\rangle_{A} \left|i
\right\rangle_{A'} \;\;\mbox{when}\;\; i,j=1 \;\mbox{to}\; n \\
\left| i\right\rangle_{A} \left|j \right\rangle_{A'} &\Longrightarrow& \left|i \right\rangle_{A} \left|j
\right\rangle_{A'} \;\;\mbox{otherwise} \nonumber\end{aligned}$$ Since the size of $FG^{\dagger}$ is simply equal to the number of orthonormal vectors in Alice’s measurement basis, this operation expands it to size $l \times
l$. In her new basis, $\{ \left| 1\right\rangle _{A'},\cdots ,\left| l\right\rangle _{A'}\}$, Alice describes the two possible states (\[initial\]) thus: $$\begin{aligned}
\left| \psi \right\rangle &=&\left| 1\right\rangle _{A'}\left| \eta
_{1}'\right\rangle _{B}+\cdots +\left| l\right\rangle _{A'}\left|
\eta _{l}'\right\rangle _{B} \\ \left| \phi
\right\rangle &=&\left| 1\right\rangle _{A'}\left| \nu
_{1}'\right\rangle _{B}+\cdots +\left| l\right\rangle _{A'}\left|
\nu _{l}'\right\rangle _{B} \nonumber\end{aligned}$$ Here, $\left| \eta_{i}'\right\rangle _{B}$ and $\left| \nu
_{i}'\right\rangle _{B}$ are new unnormalized vectors, but remain describable in Bob’s original basis $\{ \left| 1\right\rangle _{B},\cdots ,\left| m\right\rangle _{B}\}$. Now that her system has a convenient number of dimensions, Alice proceeds as in Sec. \[sec:3.2\]. She will obtain and perform a measurement guaranteeing that Bob possesses one of two orthogonal states.
SWAP operations like these are physically unproblematic, and do not in any way degrade the entangled information Alice shares with Bob. One physical realization of this procedure requires just one ancillary qubit. Alice introduces this qubit “$Z$”, known to be in state $\left| 0 \right\rangle _{Z}
$ to her system, giving her state equations of form: $$\begin{aligned}
\left| \psi \right\rangle = \left| 10\right\rangle _{AZ}\left| \eta _{1}\right\rangle _{B}+\cdots
+\left| n0\right\rangle _{AZ}\left| \eta
_{n}\right\rangle _{B}\\+\left| 11\right\rangle _{AZ}\left| \eta _{n+1}\right\rangle _{B}+\cdots
+\left| n1\right\rangle _{AZ}\left| \eta
_{2n}\right\rangle _{B} \nonumber\end{aligned}$$ Since qubit $Z$ is in state $\left| 0\right\rangle
_{Z}$, we know all the unnormalized vectors $\left| \eta
_{n+i}\right\rangle _{B}$ have zero amplitude. This gives rise to the rather lop-sided $FG^{\dagger}$ matrix, wherein $\{FG^{\dagger}\}_{ij}=0$ everywhere that either $i>n$ or $j>n$. With this $FG^{\dagger} $ matrix, Alice’s problems are over. Between the numbers $n$ and $2n$ there lies a power of two. Thus there is a sub-matrix of $FG^{\dagger} $ that includes all $n$ non-zero terms, and just enough zero-valued terms to round things out to the most convenient dimensionality. Alice can find unitary manipulations on this sub-matrix that transform it, (and thereby simultaneously transform the $FG^{\dagger} $ matrix as a whole) into zero-diagonal form. She simply follows the procedure outlined in Sec. \[sec:3.2\], obtaining a finite sequence of unitary transformations that, taken together, represent a single rotation of her measurement basis.
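In terms of the coefficient matrices, the enlargement is just zero-padding: the new basis states carry no amplitude, so $F$ and $G$ simply gain rows of zeros. A small sketch (with a hypothetical helper name `pad_rows`, and toy matrices for illustration):

```python
import numpy as np

def pad_rows(F):
    """Zero-pad a coefficient matrix to the next power-of-two number
    of rows, mirroring the SWAP into the larger Hilbert space."""
    n, m = F.shape
    l = 1 << max(n - 1, 0).bit_length()  # smallest power of two >= n
    Fp = np.zeros((l, m), dtype=complex)
    Fp[:n] = F
    return Fp

# Toy 3-row matrices: 3 lies between 2 and 4, so padding gives 4 rows.
F = np.arange(6, dtype=complex).reshape(3, 2)
G = np.arange(6, 12, dtype=complex).reshape(3, 2)
Fp, Gp = pad_rows(F), pad_rows(G)
```

Padding leaves every overlap, and in particular ${\rm Trace}(FG^{\dagger})$, untouched, while the row count becomes the convenient power of two.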
This unlikely procedure is surprisingly efficient for distinguishing $\left| \psi\right\rangle$ and $\left| \phi\right\rangle$. No matter what the dimensionality of the problem, there is a solution after a finite number of steps: a number of steps equal to $\frac{1}{2}\, l \log_{2}l$, where $l$ is the expanded dimensionality. Through the use of this SWAP operation Alice can always accomplish perfect distinguishability with minimal effort.
Further Generalizations
=======================
Multipartite states {#sec:4.1}
-------------------
We have considered only the bipartite case thus far, but the strategy used by Alice and Bob can also be deployed by any number of people. States of tripartite form, for instance: $$\begin{aligned}
\left| \psi \right\rangle &=&\left| \alpha _{0}\right\rangle
_{A}\left| \beta_{0}\right\rangle _{B}\left| \gamma
_{0}\right\rangle _{C}+\cdots +\left| \alpha _{n}\right\rangle _{A}\left|
\beta_{n}\right\rangle _{B}\left| \gamma _{n}\right\rangle _{C}
\\
\left| \phi \right\rangle &=&\left| \alpha _{0}^{\prime
}\right\rangle _{A}\left| \beta _{0}^{\prime }\right\rangle
_{B}\left| \gamma _{0}^{\prime }\right\rangle _{C}+\cdots +\left| \alpha _{n}^{\prime }\right\rangle _{A}\left|
\beta _{n}^{\prime }\right\rangle _{B}\left| \gamma _{n}^{\prime
}\right\rangle _{C} \nonumber\end{aligned}$$ can, when Alice swaps into a larger Hilbert space, easily be represented thus: $$\begin{aligned}
\left| \psi \right\rangle &=&\left| 0\right\rangle _{A'}\left|
\Gamma _{0}\right\rangle _{BC}+\cdots +\left|
l\right\rangle _{A'}\left| \Gamma _{l}\right\rangle _{BC} \\
\left| \phi \right\rangle &=&\left| 0\right\rangle _{A'}\left|
\Gamma _{0}^{\perp }\right\rangle _{BC}+\cdots
+\left| l\right\rangle _{A'}\left| \Gamma _{l}^{\perp
}\right\rangle _{BC} \nonumber\end{aligned}$$ Alice simply behaves as before, and leaves Bob and Claire to distinguish between the resulting bipartite orthogonal states. The problem collapses to its original formulation, which we have already solved. If $n$ people share the quantum system, performing a series of $n-2$ such measurements will cascade their problem down to the bipartite case. We can conclude that two orthogonal states of any quantum system, shared in any proportion between any number of separated parties, can be perfectly distinguished.
Multiple possible states {#sec:4.2}
------------------------
Our procedure distinguishes perfectly between two orthogonal states, $\left| \psi \right\rangle $ and $\left| \phi
\right\rangle $. What if Alice and Bob must distinguish between more than two orthogonal states? In general, this will not be possible so long as Alice and Bob share only one copy of their state. Whichever bases they perform sequential measurements in, their binary outcomes cannot perfectly distinguish between more than two possibilities.
It is natural to quantify Alice and Bob’s situation by asking *how many* copies of their state they require to perfectly distinguish between it and the other possibilities. A detailed analysis of this problem is beyond the scope of this paper. Nevertheless, our basic procedure places an upper bound on the number of copies required. $n$ possible orthogonal states can be distinguished perfectly with $n-1$ copies.
Let us denote the possible states $\left| \psi _{i}\right\rangle
$. Alice and Bob simply act on their first copy as if they were distinguishing $
\left| \psi _{0}\right\rangle $ and $\left| \psi_{1}\right\rangle
$. If the state they share happens to be either $
\left| \psi _{0}\right\rangle $ or $\left| \psi _{1}\right\rangle
$, then their measurement result will be a definite verdict in favour of one or the other possibility. If they share instead some other $\left| \psi
_{i}\right\rangle$, since $\left\langle
\psi_{i} |\psi_{j} \right\rangle =\delta_{ij}$, Alice and Bob’s measurement will randomly decide upon $ \left| \psi
_{0}\right\rangle $ some of the time, and will seem to measure $\left| \psi _{1}\right\rangle$ otherwise. A positive measurement for $ \left| \psi _{0}\right\rangle $ is no guarantee of Alice and Bob sharing that state, for all the other states (barring $\left|
\psi _{1}\right\rangle$) sometimes produce that result. What a verdict for $ \left| \psi _{0}\right\rangle $ does show is that Alice and Bob definitely do not share $\left| \psi
_{1}\right\rangle$, which they would have detected with certainty.
Proceeding in this way, Alice and Bob can always use a single copy of their state to exclude one possibility. After $n-1$ such operations, they will have excluded $n-1$ states, and can thus distinguish between $n$ possibilities. This represents an upper bound on the number of copies required for state discrimination. Note that there are certainly sets of orthogonal states that can be distinguished using fewer than $n-1$ copies; an example is the set of four Bell states, for which two copies suffice.
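The copy-counting argument can be checked with a small classical simulation. The sketch below (plain Python; the state labels and function names are illustrative, not part of the protocol) abstracts each copy's measurement as a binary verdict that is certain when the shared state is one of the two tested candidates and random otherwise. The invariant is that a verdict never excludes the true state, so after $n-1$ copies only the true state remains.

```python
import random

def measure_copy(true_state, a, b):
    """Binary verdict between candidates a and b: certain when the shared
    state is a or b, and random when it is some other orthogonal state."""
    if true_state == a:
        return a
    if true_state == b:
        return b
    return random.choice([a, b])

def identify(true_state, candidates):
    """Exclude one candidate per copy; returns (identified state, copies used)."""
    remaining = list(candidates)
    copies_used = 0
    while len(remaining) > 1:
        a, b = remaining[0], remaining[1]
        verdict = measure_copy(true_state, a, b)
        copies_used += 1
        # A verdict for one candidate excludes the other with certainty.
        remaining.remove(a if verdict == b else b)
    return remaining[0], copies_used
```

For four orthogonal states the loop always terminates after three copies with the correct identification, matching the $n-1$ bound, regardless of the random verdicts along the way.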
Conclusion {#sec:5}
==========
We have proved that any two orthogonal quantum states shared between any number of parties may be perfectly distinguished by local operations and classical communication. Since orthogonal states are the only perfectly distinguishable states, this means that all pairs of distinguishable states are distinguishable with LOCC - global measurements are never required. Whether non-orthogonal states may also be optimally distinguished in this way remains an open question.
We would like to acknowledge the support of Hewlett-Packard. JW and AJS thank the UK Engineering and Physical Sciences Research Council, and LH thanks the Royal Society, for funding this research.
IN THE COURT OF CRIMINAL APPEALS OF TENNESSEE
AT JACKSON
Assigned on Briefs May 5, 2015
STATE OF TENNESSEE v. CALVIN COE
Appeal from the Circuit Court for Tipton County
No. 7802 Joseph H. Walker III, Judge
No. W2014-01854-CCA-R3-CD - Filed July 1, 2015
Appellant stands convicted of driving under the influence of an intoxicant, fourth offense,
and driving on a cancelled, suspended, or revoked license, second offense. The trial court
sentenced appellant to an effective eighteen-month sentence, suspended to supervised
probation after serving 150 days in confinement. On appeal, appellant argues that the
trial court violated the Tennessee Rules of Evidence and appellant's Equal Protection
rights by limiting appellant's cross-examination of Officer Norris regarding any racial
bias or any disciplinary action the police department levied against Officer Norris due to
racially-biased language. Following our review of the parties' briefs, the record, and the
applicable law, we affirm the judgments of the trial court.
Tenn. R. App. P. 3 Appeal as of Right; Judgments of the Circuit Court Affirmed
ROGER A. PAGE, J., delivered the opinion of the court, in which NORMA MCGEE OGLE
and ROBERT W. WEDEMEYER, JJ., joined.
Virginia M. Crutcher, Atoka, Tennessee, for the appellant, Calvin Coe.
Herbert H. Slatery III, Attorney General and Reporter; Caitlin Smith, Assistant Attorney
General; D. Michael Dunavant, District Attorney General; and James Walter Freeland,
Jr., and Jason Randall Poyner, Assistant District Attorneys General, for the appellee,
State of Tennessee.
OPINION
This case arose from the traffic stop of appellant in the early morning hours of
March 16, 2013, and the subsequent detention of appellant for blood-alcohol testing
related to charges of driving under the influence of an intoxicant (“DUI”). Appellant was
later indicted for DUI per se; DUI; DUI, fourth offense; driving on a cancelled,
suspended, or revoked license; and driving on a cancelled, suspended, or revoked license,
third offense. Appellant's trial on these charges began on August 18, 2014. The trial
court bifurcated the trial. First, the jury heard evidence regarding the facts of the March
16 stop. Second, appellant conceded that he had prior convictions for DUI and driving
on a cancelled, suspended, or revoked license.
I. Facts from Trial
On March 16, 2013, Covington Police Department Officer Billy Norris was
patrolling in an area that had “disruptions” around the time that the bars in that area
closed. Officer Norris was driving south on Highway 51 North when appellant, leaving a
bar in his car, entered the roadway in front of Officer Norris, causing Officer Norris to
slow down and change lanes to avoid a collision. Officer Norris observed appellant turn
right onto Ervin Lane and noticed that appellant crossed the center line several times.
Officer Norris effectuated a stop. After encountering appellant, Officer Norris smelled
the odor of alcohol on appellant's person. Appellant admitted that he had been drinking
earlier in the day, but he asserted that he had not consumed alcohol that night. Officer
Norris searched appellant's car during the stop, finding an unopened bottle of Crown
Royal whisky. Appellant also admitted that his driver's license was suspended, which
Officer Norris verified through dispatch. Appellant was unsteady while standing and
performed “poorly” on the walk-and-turn test by not following instructions and by
stopping during the test. Appellant also did not perform the one-leg stand test “as
requested.” Officer Norris arrested appellant at 3:04 a.m. After appellant signed the
implied consent form, Officer Norris transported appellant to the hospital to have blood
drawn; the duration of the drive was approximately twelve minutes. The blood sample
was submitted to the Tennessee Bureau of Investigation (“TBI”) for testing. The results
of the testing showed that appellant had a blood-alcohol level of 0.26.
A jury found appellant guilty of DUI, DUI per se, and driving on a cancelled,
suspended, or revoked license. Appellant then pleaded guilty to DUI, fourth offense, and
driving on a cancelled, suspended, or revoked license, second offense.1 The trial court
merged the DUI convictions and merged the driving on a cancelled, suspended, or
revoked license convictions. Appellant stands convicted of DUI, fourth offense, and
driving on a cancelled, suspended, or revoked license, second offense. The trial court
sentenced appellant to eighteen months, suspended to supervised probation after serving
150 days in incarceration, for the DUI conviction with a concurrent eleven-month-and-
twenty-nine-day sentence for the driving on a cancelled, suspended, or revoked license
1
The trial court found that appellant had two prior convictions for driving on a cancelled, suspended, or
revoked license but that one of the offenses was over ten years old. See Tenn. Code Ann. § 55-50-
504(a)(2). The State conceded that the offense should not be considered. Therefore, appellant stands
convicted of driving on a cancelled, suspended, or revoked license, second offense, rather than third
offense.
conviction, also to be suspended to probation after appellant serves 150 days in
confinement.
II. Analysis
Appellant argues that the trial court violated the Tennessee Rules of Evidence and
appellant's Equal Protection rights by limiting appellant's cross-examination of Officer
Norris regarding any racial bias or any disciplinary action the police department levied
against Officer Norris due to racially-biased language. The State responds that appellant
waived this issue by failing to file a motion for new trial and that the trial court did not
commit plain error. In his reply brief, appellant argues that he is entitled to plain error
review.
Appellant's argument rests solely on a series of questions that occurred during
Officer Norris's cross-examination. The contested colloquy took place as follows:
Q: Okay. Did you ever have any disciplinary actions when you were at the
police department?
[Prosecutor]: Your Honor, may we approach?
THE COURT: The objection will be sustained.
[Defense Counsel]: It's just going to his credibility, Your Honor.
THE COURT: The objection will be sustained.
[Defense Counsel]: Okay.
Q: So you never -- did you ever get in trouble while you were there at the
police department?
A: Yes, ma'am.
[Prosecutor]: Your Honor, I object.
THE COURT: The objection will be sustained.
Q: So, did you ever -- did you like being a police officer?
A: I enjoyed it, yes, ma'am.
Q: You enjoyed it. Did you want to leave?
A: Yes, ma'am.
[Prosecutor]: Your Honor, can we approach?
THE COURT: Yes, sir, you can approach.
[A bench conference ensued].
[Prosecutor]: Your Honor, she is simply asking the same question, and the
same objection has been sustained twice now, different ways. And now if
she's got something to get to that's relevant some way, that's fine. But if
it's not going to be relevant to his credibility, I'd ask that counsel move on.
....
[Defense Counsel]: I think he's made some racial remarks that I think are
relevant, while he was on duty.
THE COURT: The objection will be sustained. Go to another subject,
please.
[Defense Counsel]: Okay.
[The bench conference concluded].
THE COURT: Okay. Ladies and gentlemen, the Court sustained the
objection because the questions had nothing to do with this case. Okay.
Go to another question, please.
[Defense Counsel]: Thank you, Your Honor.
While appellant objected at trial, appellant failed to file a motion for new trial.
Tennessee Rule of Appellate Procedure 3(e) states, “[I]n all cases tried by a jury, no issue
presented for review shall be predicated upon error in the admission or exclusion of
evidence . . . unless the same was specifically stated in a motion for a new trial; otherwise
such issues will be treated as waived.” Tennessee Rule of Appellate Procedure 13(b)
explains:
Review generally will extend only to those issues presented for review. The
appellate court shall also consider whether the trial and appellate court have
jurisdiction over the subject matter, whether or not presented for review,
and may in its discretion consider other issues in order, among other
reasons: (1) to prevent needless litigation, (2) to prevent injury to the
interests of the public, and (3) to prevent prejudice to the judicial process.
Furthermore, the Tennessee Rules of Appellate Procedure provide:
A final judgment from which relief is available and otherwise appropriate
shall not be set aside unless, considering the whole record, error involving a
substantial right more probably than not affected the judgment or would
result in prejudice to the judicial process. When necessary to do substantial
justice, an appellate court may consider an error that has affected the
substantial rights of a party at any time, even though the error was not
raised in the motion for a new trial or assigned as error on appeal.
Tenn. R. App. P. 36(b). This type of review is referred to as plain error review.
The accepted test for plain error review requires that:
(a) the record must clearly establish what occurred in the trial court;
(b) a clear and unequivocal rule of law must have been breached;
(c) a substantial right of the accused must have been adversely affected;
(d) the accused did not waive the issue for tactical reasons; and
(e) consideration of the error is “necessary to do substantial justice.”
State v. Smith, 24 S.W.3d 274, 282 (Tenn. 2000) (quoting State v. Adkisson, 899 S.W.2d
626, 641-42 (Tenn. Crim. App. 1994)). To rise to the level of “plain error,” an error
“‘must [have been] of such a great magnitude that it probably changed the outcome of the
trial.’” Adkisson, 899 S.W.2d at 642 (quoting United States v. Kerley, 838 F.2d 932, 937
(7th Cir. 1988)). All five factors must be established by the record before a court will
find plain error. Smith, 24 S.W.3d at 282. Complete consideration of all the factors is
not necessary when clearly at least one of the factors cannot be established by the record.
Appellant has not proven plain error. We conclude that consideration of the issue
is not necessary to do substantial justice. There was an overwhelming amount of
evidence against appellant at trial. In addition to the arresting officer's testimony that
appellant pulled out in front of him, swerved in the road, smelled of alcohol, failed the
field sobriety tests, and admitted drinking earlier in the day, the jury viewed the video of
the arrest and heard evidence from a Tennessee Bureau of Investigation forensic scientist
that appellant's blood-alcohol level was 0.26. We also note that appellant failed to
provide a proffer of proof for the record to determine exactly what Officer Norris's
responses would have been. Therefore, even though appellant asserts that Officer Norris
had been disciplined for using racially-biased language, there is nothing in the record to
support this claim. Based on the record as a whole, appellant has failed to prove that if he
had been allowed to impeach Officer Norris, the outcome of the trial would have been
different. Appellant is not entitled to plain error review.
CONCLUSION
Based on the parties' briefs, the record, and the applicable law, we affirm the
judgments of the trial court.
_________________________________
ROGER A. PAGE, JUDGE
Q:
Room @Relation returning only the first item of the table
I am using Room in Android for my database, and I am also using @Relation (based on this: https://developer.android.com/reference/android/arch/persistence/room/Relation.html)
to fetch data from related entities that have a one-to-many relationship via a ForeignKey.
What I am trying to get is a List of roomAreaNames from the RoomArea entity using @Relation. The code compiles without errors; the issue is that the @Relation only gives me back a List of size 1 (only the first object from the table) rather than the full list.
Tables:
@Entity(
tableName = "buildings_table",
indices = [Index("contract_id")],
foreignKeys = [
ForeignKey(
entity = Contract::class,
parentColumns = ["contract_id"],
childColumns = ["contract_id"],
onDelete = ForeignKey.CASCADE)]
)
data class Building(
@PrimaryKey(autoGenerate = true) @ColumnInfo(name = "building_id")
val buildingId: Long = 0L,
@ColumnInfo(name = "contract_id")
val contractId: Long,
@ColumnInfo(name = "building_name")
val buildingName: String)
@Entity(
tableName = "floors_table",
indices = [Index("building_id")],
foreignKeys = [
ForeignKey(
entity = Building::class,
parentColumns = ["building_id"],
childColumns = ["building_id"],
onDelete = ForeignKey.CASCADE)]
)
data class Floor(
@PrimaryKey(autoGenerate = true) @ColumnInfo(name = "floor_id")
val floor_id: Long = 0L,
@ColumnInfo(name = "building_id")
val buildingId: Long,
@ColumnInfo(name = "level")
val level: Int
)
@Entity(
tableName = "rooms_area_table",
indices = [Index("floor_id")],
foreignKeys = [
ForeignKey(
entity = Floor::class,
parentColumns = ["floor_id"],
childColumns = ["floor_id"],
onDelete = ForeignKey.CASCADE)]
)
data class RoomArea(
@PrimaryKey(autoGenerate = true) @ColumnInfo(name = "room_area_id")
val roomAreaId: Long = 0L,
@ColumnInfo(name = "floor_id")
val floorId: Long,
@ColumnInfo(name = "room_area_name")
val roomAreaName: String
)
Dao Query:
@Transaction
@Query("SELECT * FROM buildings_table WHERE contract_id = :contractId")
fun getItemsAuditBuilding(contractId: Long): LiveData<List<ItemAuditBuilding>>
Here is the @Relation (giving me a list of size 1 only); I need all the roomAreas related to the buildingId:
data class ItemAuditBuilding(
@Embedded val building: Building,
@Relation(
parentColumn = "building_id",
entityColumn = "room_area_id",
entity = RoomArea::class,
projection = ["room_area_name"]
)
var roomAreas: List<String>
)
Thank you.
A:
I hope this helps anyone trying to work with @Relation in Room. I based the answer to my question on the Android Dev Summit 2019. Here is the link to the part where Florina speaks about Room: https://www.youtube.com/watch?v=_aJsh6P00c0. Based on the video, I made the following changes:
I added a Junction table:
@Entity(
primaryKeys = ["building_id", "room_area_id"],
indices = [Index("room_area_id")]
)
data class RoomAreaWithBuilding(
@ColumnInfo(name = "building_id")
val buildingId: Long,
@ColumnInfo(name = "room_area_id")
val roomAreaId: Long
)
Then I modified the ItemAuditBuilding class
data class ItemAuditBuilding(
@Embedded val building: Building,
@Relation(
parentColumn = "building_id",
entity = RoomArea::class,
entityColumn = "room_area_id",
projection = ["room_area_name"],
associateBy = Junction(RoomAreaWithBuilding::class)
)
val roomAreas: List<String>
)
Note: For this code to work you will need to be on Room 2.2 or later, as associateBy is a new feature. Enjoy!
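Conceptually, the Junction tells Room to resolve the relation in two steps: first collect the room_area_ids mapped to the building in the junction table, then load the projected room_area_name values for those ids. As a rough sketch of that lookup (plain Kotlin over hypothetical in-memory rows, not the code Room actually generates):

```kotlin
// Hypothetical in-memory stand-ins for the junction and entity tables.
data class JunctionRow(val buildingId: Long, val roomAreaId: Long)
data class RoomAreaRow(val roomAreaId: Long, val roomAreaName: String)

// Two-step lookup: junction rows first, then projection of the matching rows.
fun roomAreasFor(
    buildingId: Long,
    junction: List<JunctionRow>,
    roomAreas: List<RoomAreaRow>
): List<String> {
    val ids = junction
        .filter { it.buildingId == buildingId }
        .map { it.roomAreaId }
        .toSet()
    return roomAreas
        .filter { it.roomAreaId in ids }
        .map { it.roomAreaName }
}
```

Because every (building_id, room_area_id) pair in the junction is considered, the result is the full list, rather than the single row matched when building_id was compared directly against room_area_id.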
Oil Gas Fuel Water Tube Boiler Use For Textile Mills
use of boiler in textile industry | Sitong Wood Biomass
use of boiler in textile industry bangladesh assignment. use of boiler in textile industry bangladesh we CFBC Boiler and injury, are an acceptable part of the textile industry in Bangladesh. May 6, 2015 We have sold horizontal oil fired boiler to many industries in Bangladesh, besides uses of boilers in textile industry | Industrial Biomass
Textile Hot Water Vacuum Boiler - ketelpatrouille.be
textile hot water vacuum tube boiler supplier in Laos. Types of Boiler Machine Used in Textile Mills. List of boiler machine for Textile Mills and Apparel Industry is closed vessel in which water or other liquid is heated, steam or vapor is generated, steam is super heated, or any combination thereof, under pressure or vacuum, for use external to itself, by the direct application of energy
Textile Gas Hot Water Atmospheric Pressure Water
SZS series gas-fired hot water boiler WNS series gas-fired hot water boiler The horizontal drum is arranged in a longitudinal direction with three return water-fired tube boilers, and the water-cooled wall tubes are densely arranged on both sides of the furnace; two types of atmospheric pressure and pressure, users can use according to
boiler for palm oil mill production | Industrial Gas
Two sets of the 10 ton capacity steam gas fired boilers use for textile industry in Bangladesh. thermal efficiency is 92%. The 360,0000 Kcal biomass fired thermal oil boiler use for plywood plant in Semarang, Indonesia. the thermal efficiency is 85%. This series boiler is vertical type once through structure water tube oil gas fired
why boiler in use in textile - hieroglyphs.in
use of boiler in textile industry bangladesh assignment. why boiler use in textile. Textile Industry Boiler--The textile plant steam boiler provides heat for the dyeing and drying of yard goods. The fuels used for the production of thermal energy in general are diesel oil, heavy oil, LPG, coal, natural gas and solid fuels such as . Get a Quote
textile fuel vapor vacuum water tube boiler
textile fuel pressure water tube boiler price - ismdr.in. List of boiler machine for Textile Mills and Apparel Industry is closed vessel in which water or other liquid is heated, steam or vapor is generated, steam is superheated, or any combination thereof, under pressure or vacuum, for use external to itself, by the direct application of
textile mill use industrial steam boiler
Alibaba.com offers 5,380 textile industry steam boiler products. About 99% of these are Boilers. A wide variety of textile industry steam boiler options are available to you, such as pressure, usage, and type. Textile Mill Use Industrial Oil Gas Water Tube Steam Boiler Price. Get a Quote
what boiler for textile mills - ratskeller-marienberg.de
boiler in textile mill - pisakinder.de. Best Industrial Oil Gas Textile Boiler Solid Fuel Boiler. For example, the textile mills use boilers to cook the slurry, and the printing & dyeing plants will use boilers to heat and dry textile
This Halloween we have a harrowing tale to share with you.
Relive the terrible curse of the mad Dr. Jamison Junkenstein, his ominous master, and the four Wanderers who sought to bury them once and for all. Gray Shuko brings horror to life in The Return of Junkenstein with his chilling illustrations of revenge from beyond the grave.
Doom descended upon Adlersbrunn. Dr. Jamison Junkenstein's lust for revenge had spilled into every street and engulfed the village in a sea of terror. Yet as the town seemed lost, they appeared. Four Wanderers who had traveled from distant lands to vanquish the darkness. When their grim work was done, the doctor's mad laughter haunted the village no longer.
The Wanderers vanished as suddenly as they had appeared. The stories of their valor would live on, but the peace they had brought to Adlersbrunn would not…
Dr. Junkenstein was but the pawn of a greater power: the one known as the Witch of the Wilds. She would not abandon her fallen servant…not while his debt to her remained unpaid. Her forbidden magic breathed the spark of life back into Dr. Junkenstein's cold heart.
Death had not slaked his thirst for chaos, nor had it dimmed his devious mind. He labored to remake his infernal army mightier and more terrible than ever before.
The noble Lord of Adlersbrunn was helpless to stop Dr. Junkenstein's rampage. The only hope he had to save his village lay far beyond its walls…
The winds carried his plea far and wide, first to a legendary Viking craftsman who had fought beside the Lord of Adlersbrunn in days long past. He could not ignore the call of an old friend, no more than he could resist spilling the blood of a new foe.
On and on the ravens flew, even to the misty lake where the Countess dwelled. It was said that she felt no warmth, no cold, no joy, and no sorrow. The only thing that stirred in her heart was an unending hunger, but whether that was what moved her to seek out Adlersbrunn, none could say.
Stranger still were the Monk and his apprentice, the Swordsman. Where they had met and why they had agreed to travel together are tales for another day. But it is said that they answered to a foreboding presence—a force beyond mortal understanding.
Across land and sea, by foot and by hoof, they came. Four they were in number like the Wanderers of old. Trust would not come easy, but they would need it to survive the horrors the Witch had in store.
For she had at her side a faithful servant named the Reaper, and she had claimed a monstrous new ally: the Summoner, who wielded the power of an ancient dragon. Bound to the Witch’s will by pacts forged in blood, they were called to the battle and pledged not to rest until they had destroyed Adlersbrunn once and for all.
And so the endless night began…
Overwatch Halloween Terror has arrived. Grab your friends and stand ready to fight against the darkness!
To learn more about our in-game festivities, click here.
In-situ immune profile of polymorphic vs. macular Indian Post Kala-azar dermal leishmaniasis.
Post Kala-azar Dermal Leishmaniasis (PKDL), a sequel of apparently cured Visceral Leishmaniasis, presents in South Asia with papulonodular (polymorphic) or hypomelanotic (macular) lesions. To date, the polymorphic variant has been considered predominant, constituting 85-90% of cases. However, following active-case surveillance, the proportion of macular PKDL has increased substantially to nearly 50%, necessitating an in-depth analysis of this variant. Accordingly, this study aimed to delineate the cellular infiltrate in macular vis-à-vis polymorphic PKDL. To study the overall histopathology, hematoxylin and eosin staining was performed on lesional sections, and phenotyping by immunohistochemistry was done in terms of dendritic cells (CD1a), macrophages (CD68), HLA-DR, T-cells (CD8, CD4), B-cells (CD20) and Ki67, along with assessment of the status of the circulating homing markers CCL2, CCL7 and CXCL13. In polymorphic cases (n = 20), the cellular infiltration was substantial, whereas in macular lesions (n = 20) it was mild and patchy with relative sparing of the reticular dermis. Although parasite DNA was identified in both variants by ITS-1 PCR, the parasite load was significantly higher in the polymorphic variant, and Leishman-Donovan bodies were notably minimal in macular cases. Both variants demonstrated a decrease in CD1a+ dendritic cells, HLA-DR expression and CD4+ T-cells. In macular cases, the proportions of CD68+ macrophages, CD8+ T-cells and CD20+ B-cells were 4.6-fold, 17.0-fold and 1.6-fold lower than in polymorphic cases. The absence of Ki67 positivity and increased levels of chemoattractants suggested dermal homing of these cellular subsets. Taken together, as compared to the polymorphic variant, patients with macular PKDL demonstrated a lower parasite load along with a lesser degree of cellular infiltration, suggesting differences in host-pathogen interactions, which in turn can impact their disease-transmitting potential and responses to chemotherapy.
Stationary mounted film projectors in cinemas are always located at a predetermined distance from the projection screen in the theater. Focusing the projected image, i.e. adjusting the optics of the projector, should, in theory, be a non-recurring task, but experience has shown that in practice re-focusing has to be carried out from time to time. For example, deposits from the surface of the film can build up in the film path of the projector, which results directly in blurred images, and furthermore the layer structure of the film can vary.
The development within the cinema business has led to a situation where an operator nowadays has to supervise 8-10 projectors in different auditoria. Quite naturally, he is then unable to devote much time and attention to the focusing of the various projectors, which can be inconvenient to the audience, since sharpness is a determining factor in the impression and experience of the picture shown. Also, it can be difficult to find the correct focus setting swiftly and, in addition, in certain picture sequences it can be hard to judge the proper sharpness of the picture from the projection room. The projectionist often focuses with the guidance of the translated subtitles that appear in foreign films along the lower edge of the picture. This means that sharpness becomes poorer where it is needed most, namely within the part of the screen where the actors' faces mostly appear, i.e. the central area of the screen, at a distance from the top edge corresponding to about one third of the screen height.
In practice there is thus a need for simple and correct focusing means, and several solutions to the focusing problem have been proposed earlier. For example, the Swiss Patent Specification No. 484 443 describes an apparatus for focusing projected images. The basis of this solution is the fact that a sharp image exhibits greater contrasts between light and dark portions, whereas a blurred image includes grey tones, i.e. the contrast effect is eliminated. However, it is difficult in practice, in particular where cinematographic pictures are concerned, to utilize grey-tone conditions to provide focusing parameters. Several specific steps have to be taken in order that normal fluctuations in image intensity be eliminated and in order that the two measuring signals be obtained which are to form the basis of a correcting signal.
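The contrast principle underlying that apparatus can be sketched numerically. The snippet below is only an illustration (plain Python; the pixel values and the function name `contrast_score` are invented for the example, not taken from the patent): it scores focus as the variance of pixel intensities, so an image with hard light/dark transitions scores higher than a grey-toned, defocused version of the same scene.

```python
def contrast_score(pixels):
    """Return the variance of pixel intensities; higher variance means
    stronger light/dark contrast, i.e. a sharper image."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum((p - mean) ** 2 for p in flat) / len(flat)

# A hard black/white checker (sharp) vs. the same scene washed into greys (blurred).
sharp = [[0, 255], [255, 0]]
blurred = [[96, 160], [160, 96]]

assert contrast_score(sharp) > contrast_score(blurred)
```

A focusing servo built on this idea would adjust the optics in the direction that increases the score and stop at the maximum.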
SUNY Downstate Health Sciences University
Office of Communications & Marketing
SUNY Downstate’s Dr. Joseph P. Merlino Named a Fellow Ambassador by the New York Academy
of Medicine
Brooklyn, NY – SUNY Downstate Medical Center’s Vice President for Faculty Affairs
and Professional Development Joseph P. Merlino, MD, MPA, has been named to the Fellows
Ambassador Program of the New York Academy of Medicine. Dr. Merlino, who is also professor
of psychiatry at SUNY Downstate, was among seven persons chosen this year from the
Academy’s prestigious membership of more than 2,000 experts from across the professions
affecting health.
The Fellows Ambassador Program was established in 2015 to increase the direct engagement
of Fellows with the research and policy staff of the Academy, and provide the public
with access to the wealth of knowledge that the Academy’s Fellows possess through
public communication and media interviews. The program offers several ambassador positions
each year through an application process open to the Academy’s Fellows.
“We are thrilled to continue the Fellows Ambassador program after a successful inaugural
year,” said Academy President Jo Ivey Boufford, MD. “The Fellows are the foundation
on which the Academy was built and the Fellows Ambassador program provides a unique
opportunity for the distinguished health professionals in our fellowship to share
their expertise and experience with the public.”
The Ambassadors were selected by the Academy based on their interest and ability as
spokespersons for their field of expertise, as well as for their ability to address
broad reaching topics in the news today such as urban health, prevention, and health
disparities. In addition to being available to media for comment and interviews, the
2016-17 Ambassadors will author commentaries, blog posts, and op-eds. The program’s
goal is to develop a critical mass of Fellows prepared to work with the media and
help the Academy become a valuable resource for media seeking health expertise to
inform the public.
Dr. Merlino has been a member of the SUNY Downstate faculty since 2009 and is an expert
in psychiatry and psychotherapy, disaster psychiatry, and physician and medical student
mental health. As founding vice president for faculty affairs and professional development,
he supports Downstate faculty members in their mission of training future generations
of physicians, nurses, allied health professionals, scientists, and public health
practitioners. Dr. Merlino also holds appointments in the College of Health Related
Professions and School of Public Health at Downstate.
About The New York Academy of Medicine
The New York Academy of Medicine advances solutions that promote the health and well-being
of people in cities worldwide. Established in 1847, The New York Academy of Medicine
continues to address the health challenges facing New York City and the world’s rapidly
growing urban populations. The Academy accomplishes this through its Institute for
Urban Health, home of interdisciplinary research, evaluation, policy and program initiatives;
its world class historical medical library and its public programming in history,
the humanities, and the arts; and its Fellows program, a network of more than 2,000
experts elected by their peers from across the professions affecting health. The Academy’s
current priorities are healthy aging, disease prevention, and eliminating health disparities.
About The New York Academy of Medicine Fellows
The Academy’s prestigious Fellows program, the foundation on which the Academy was
established in 1847, includes more than 2,000 individuals, elected by their peers,
from across the professions affecting health. Working collaboratively across disciplines
and specialties, in a tradition of honor and service, the Fellows are organized into
18 diverse sections and workgroups that address clinical and population health issues
facing individuals and communities in New York City and cities around the world.
###
About SUNY Downstate Medical Center
SUNY Downstate Medical Center, founded in 1860, was the first medical school in the
United States to bring teaching out of the lecture hall and to the patient’s bedside.
A center of innovation and excellence in research and clinical service delivery, SUNY
Downstate Medical Center comprises a College of Medicine, College of Nursing, School
of Health Professions, a School of Graduate Studies, School of Public Health, University
Hospital of Brooklyn, and a multifaceted biotechnology initiative including the Downstate
Biotechnology Incubator and BioBAT for early-stage and more mature companies, respectively.
SUNY Downstate ranks twelfth nationally in the number of alumni who are on the faculty
of American medical schools. More physicians practicing in New York City have graduated
from SUNY Downstate than from any other medical school.
Transnasal insertion of percutaneous endoscopic gastrostomy in a patient with intermaxillary fixation: case report.
We describe successful placement of a percutaneous endoscopic gastrostomy via the transnasal approach in a patient who required intermaxillary fixation for an open mandible fracture and enteral nutrition for chronic respiratory failure and traumatic brain injury. This method may be useful in other cases where the transoral route is not available for endoscopic insertion of enterostomy tubes.
//
// Generated by class-dump 3.5 (64 bit) (Debug version compiled Jul 6 2018 12:02:43).
//
// class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2015 by Steve Nygard.
//
#import <SketchCloudKit/SCKObject.h>
@class NSString, SCKPaginatedShares;
@interface SCKProject : SCKObject
{
    NSString *_shortID;
    NSString *_name;
    SCKPaginatedShares *_paginatedShares;
}
- (void).cxx_destruct;
@property(readonly, nonatomic) SCKPaginatedShares *paginatedShares; // @synthesize paginatedShares=_paginatedShares;
@property(readonly, copy, nonatomic) NSString *name; // @synthesize name=_name;
@property(readonly, copy, nonatomic) NSString *shortID; // @synthesize shortID=_shortID;
- (id)description;
- (id)dictionaryRepresentation;
- (id)initWithDictionary:(id)arg1;
- (id)initWithObjectID:(id)arg1 shortID:(id)arg2;
@end
A new Fox Business poll of the Wisconsin GOP presidential primary shows Ted Cruz leading Donald Trump 42 percent to 32 percent, with John Kasich in third place at 19 percent. Cruz leads Trump among nearly every demographic group except for independents:
Among just those who say they will "definitely" vote, Cruz's lead over Trump widens to 46-33 percent, and Kasich gets 16 percent.
There is a big gender gap. Women back Cruz over Trump by a 19-point margin (46-27 percent). The two candidates are much closer among men: Cruz gets 40 percent to Trump's 35 percent.
Cruz's advantage over the real estate mogul also comes from self-described "very" conservative voters, who give him a 36-point lead (61 percent Cruz vs. 25 percent Trump). White evangelical Christians voting in the GOP primary prefer Cruz over Trump by 49-28 percent. Trump has beaten Cruz among this key voting bloc in more than 10 contests so far, according to the Fox News exit poll.
Cruz is ahead of Trump among those with a college degree (42-30 percent) as well as those without a degree (44-34 percent). Independents can vote in Wisconsin's open primary -- and are more inclined to back Trump (37 percent) than Cruz (26 percent) or Kasich (26 percent).
A poll by the Marquette University Law School, widely considered the gold standard pollster in Wisconsin, also showed Cruz up 10 points over Trump yesterday. But two other recent polls have shown Cruz leading Trump by just 1 point. Cruz leads Trump by 3.8 points in the RealClearPolitics average of Wisconsin polls.
---
abstract: 'We compute the concentrations of five transition elements (Cr, Fe, Co, Ni and Zn) acquired via condensation and implantation in supernova presolar grains (Silicon Carbide Type X) from the time they condense till the end of the free expansion (or pre-Sedov) phase. We consider relative velocities of these elements with respect to the grains as they condense and evolve at temperatures $\le$ 2000 K, use zonal nucleosynthesis yields for three core collapse supernova models - 15 M~$\odot$~, 20 M~$\odot$~ and 25 M~$\odot$~ - and the ion-target simulator SDTrimSP to model their implantation into the grains. Simulations from SDTrimSP show that maximal implantation in the core of the grain is possible, contrary to previous studies. Among the available models, we find that the 15 M~$\odot$~ model best explains the measured concentrations of SiC X grains obtained from the Murchison meteorite. For grains where measured concentrations of Fe and Ni are $\ga$ 300 ppm, we find the implantation fraction to be $\la$ 0.25 for the most probable differential zonal velocities in this phase, which implies that condensation dominates over implantation. We show that radioactive corrections and mixing from the innermost Ni and Si zones are required to explain the excess Ni (condensed as well as implanted) in these grains. This mixing also simultaneously explains the relative abundances of Co and Ni with respect to Fe. The model developed can be used to predict the concentrations of all other elements in various presolar grains condensed in supernova ejecta, and the predictions can be compared with measured concentrations in grains found in meteorites.'
author:
- 'Kuljeet K. Marhas'
- Piyush Sharda
bibliography:
- 'references.bib'
title: 'Transition Elements in supernova Presolar Grains: condensation vs. implantation'
---
Introduction {#s:intro}
============
Supernovae (SN) are rare astronomical events (\~3 per century in a typical galaxy) which provide an enormous wealth of information on stellar evolution and the nucleosynthesis of elements. Studying these singular events has been quite challenging, given the large scales of energy (10$^{51}$ erg), mass (\~10-200 M~$\odot$~) and temperature (\~10$^9$ K) involved in supernova physics. As the matter moves outwards after an SN explosion, the expanding envelopes of stellar ejecta cool adiabatically. Eventually, the condensation of solid grains of sizes ranging from nanometers to a few tens of microns [@1982AdSpR...2...13G; @1988mess.book..984N] takes place. These grains, which are found in meteorites, are termed presolar grains, owing to their time of formation, which precedes that of the solar system [@1987Natur.326..160L; @1990Natur.345..238A]. At times, they are also referred to as circumstellar grains, and the ones condensing around supernovae have been termed X grains [@1990Natur.345..238A; @1992ApJ...394L..43A] or SUNOCONS [@1975ApJ...199..765C; @1975ApJ...198..151C; @2002ApJ...578L..83C]. These grains not only carry imprints of the nucleosynthesis environment within the star but also provide an insight into post-explosion ejecta evolution and related processes that go on in the supernovae environment (SNe).
Of the various minerals (like nitrides, oxides and carbides) condensing in the SNe, Silicon Carbide (SiC) has been studied most extensively in the laboratory for its morphology, along with elemental and isotopic compositions [@1998AREPS..26..147Z; @2010ApJ...719.1370H; @2014LPICo1800.5051H; @2017LPI....48.2331L]. The SiC X grains from supernovae constitute \~1% of the total presolar SiC grains and are characterized by higher ${^{12}\textnormal{C}}/{^{13}\textnormal{C}}$ and lower ${^{14}\textnormal{N}}/{^{15}\textnormal{N}}$ ratios than solar abundances [@2005ChEG...65...93L]. In fact, a few of them contain high ${^{26}\textnormal{Al}}/{^{27}\textnormal{Al}}$ ratios (up to 0.6), and all of them are consistently endowed with $^{28}$Si enrichment. Supernova genesis of these grains is strongly supported by traces of $^{44}$Ti and $^{49}$V produced during explosive nucleosynthesis, inferred from excesses of their decay products $^{44}$Ca and $^{49}$Ti respectively [@1996ApJ...462L..31N; @2002ApJ...576L..69H; @2014AIPC.1594..307A], suggesting these grains condense around type II core collapse supernovae (CCSN) [@1992ApJ...394L..43A], although the possibility of finding X type SiC grains in SN Ia cannot be ruled out.
The pre-explosion SN structure can be associated with chemically distinct zones marked by the most abundant element(s), *viz.*, $^{56}$Ni, $^{28}$Si, $^{20}$Ne, $^{16}$O, $^{12}$C, $^{4}$He and $^{1}$H, in the order of successive hydrostatic burning stages [@1995Metic..30..325M]. The innermost shells are rich in transition elements, since the high binding energy of Fe stops further fusion. Post explosion, neutron sources (like some radioactive elements) and the most abundant elements in each of these zones transcend into regions with variable abundances of certain isotopes, which are the diagnostic signature of a supernova. Extensive mixing occurs due to Rayleigh-Taylor (RT) instabilities at the zonal boundaries, due to which material from the interior is mixed with the outermost envelopes, as predicted by simulations and confirmed by laboratory measurements [@2002ApJ...564..896D; @2009ApJ...696..749K]. Similarly, H from the outermost zone gets mixed with He and other zones in the interior in a series of reverse mixing stages because of Richtmyer-Meshkov instabilities [@1992ApJ...400..222B; @1999ApJ...511..335K]. Isotopic ratios of elements can provide information on the nucleosynthesis pathways, evolution and mixing occurring at the peripheries of various zones.
While [@1998Sci...281.1165V] proposed the high sensitivity of elemental abundances to grain size as evidence for ion implantation, others proposed the existence of a negative correlation between elemental concentration and grain size. To understand and relate the isotopic signatures observed in presolar grains with post-supernova RT mixing, we have constructed a model to simulate trace ion implantation in the SNe. Specifically, the simulations study high energy Cr, Fe, Co, Ni and Zn from the SN explosion interacting with 1 $\micron$ and 5 $\micron$ SiC grains. The model proposed can eventually be applied to all isotopes; we choose to begin with transition elements because their implantation has not been studied in detail, unlike the implantation of noble gases and low mass elements like Li and B. In fact, some predictions for the rare earth elements have also been made, see, for e.g., [@2006ApJ...647..676Y]. The idea is to disentangle ion implantation from direct condensation and compare with observations from laboratory studies. Incorporating RT mixing and differential zonal velocities makes the model highly versatile, because one can vary these parameters to match measured concentrations, and the resulting parameter set can give a handle on the physical conditions present in the grain surroundings at the time of implantation, as well as on the mass of the progenitor in which these grains must have condensed.
We study backscattering, implantation, transmission and sputtering (hereafter, BITS) processes to check the total implantation of these isotopes in a spherical SiC grain. In section \[s:sec2\], we talk about supernovae nucleosynthesis (which leads to the production of heavy ions in question) and condensation of SiC in SNe. Section \[s:implan\] discusses the theory of transition ion implantation in presolar grains and section \[s:sdtrimsp\] describes how the ion target simulator SDTrimSP is set up. We present all the results and calculations in section \[s:disc\] - section \[s:disc1\] gives a summary of BITS processes and section \[s:disc2\] lays down all the calculations by taking up the example of Cr. In the last subsection (\[s:disc3\]), we compare our calculated concentrations with the ones measured in the laboratory, improve them by adding appropriate corrections and discuss reasons for similarities and discrepancies. Finally, we summarize our results in section \[s:summ\]. In Appendix \[s:append\], we show the derivation of model used for studying ion transmission through the grain.
NUCLEOSYNTHESIS AND CONDENSATION WITHIN SNe {#s:sec2}
===========================================
Nucleosynthesis in CCSN has been studied in detail over the last couple of decades after major breakthroughs in computational astrophysics. Although 3D hydrodynamic models have also been developed, they are still subject to scientific scrutiny, as the explosion mechanism is not properly understood. To explore the implantation of trace elements into presolar grains, it is vital to analyze their evolution after the explosion. We use the zonal nucleosynthesis yields provided by one of the hydrodynamic models, along with the surrounding conditions, to predict the amount of transition ion implantation in grains condensed in the ejecta.
There are numerous models which lay down nucleosynthesis yields from CCSN explosions. In this work, we utilize the zonal yield sets from [@2016ApJ...821...38S] (hereafter, S16), which used the modified 1D hydrodynamic code KEPLER[^1] along with P-HOTB. P-HOTB stands for Prometheus-Hot Bubble and was used to study core collapse, whereas KEPLER was used to evolve the star along the zero age main sequence (ZAMS) and calculate nucleosynthesis yields and light curves [@1978ApJ...225.1021W]. For models which exploded, isotopic yields were generated post explosion. The zonal yields were obtained for three particular models - 15.2 M~$\odot$~, 20.1 M~$\odot$~ and 25.2 M~$\odot$~ - \~200 seconds after the explosion, before any mixing could take place[^2]. Although all models in the S16 dataset assume solar metallicity and do not take into account the effects of rotation, they can account for: 1.) detailed neutrino transport calculations using an improved explosion mechanism as compared to [@2002ApJ...576..323R] and [@2007PhR...442..269W]; 2.) a central engine which considers matter inside the collapsed core, unlike certain other models which investigated only the matter exterior to the central engines used; and 3.) unlike previous nucleosynthesis models, none of the models[^3] used here is exploded by injecting artificial energy, because: a.) models below 15 M~$\odot$~ almost always explode, b.) models in the 20-30 M~$\odot$~ range rarely explode and c.) most models above 30 M~$\odot$~ implode and become black holes (see Figure 14 in S16 for the probability of explosion of different progenitor masses). In fact, the few models above 30 M~$\odot$~ in which explosion does take place explode because their cores are ripped apart by winds to sizes comparable to \~15 M~$\odot$~.
The decimals in these models might seem bizarre; the reason is that the authors have tried to explode all possible progenitor masses in steps of 0.1 M~$\odot$~ between 12-30 M~$\odot$~; however, the 15.0, 15.1, 20.0, 25.0 and 25.1 M~$\odot$~ models imploded in their simulations. This apparently small change in progenitor mass, which leads to an altogether different end scenario, is due to small but significant variations in the progenitor compactness [@2011ApJ...730...70O] rather than the central engine characteristics [@2015ApJ...801...90P]. This effect is more pronounced near progenitor masses of $\sim$20 M~$\odot$~ because the carbon burning stage changes to the radiative pathway from a convective mechanism. In fact, it has been recently shown that two similar progenitors with identical masses but slightly different input physics can lead to totally different scenarios [@2017arXiv171003243S]. Thus, it is not unusual for such stark differences to show up between two similar progenitor stars. Throughout this paper, we frequently approximate the 15.2, 20.1 and 25.2 M~$\odot$~ models to 15, 20 and 25 M~$\odot$~ for the sake of simplicity.
SiC condensation could have taken place either in the inner shell or in the outer He shells where zonal C/O > 1 [@1979GeCoA..43.1455L; @1997AIPC..402..391L]. The inner shell where $^{4}$He is thought to be the most abundant isotope is negligible in size as compared to its neighboring shells, for the three models in consideration. In fact, we only observe a clear $^{4}$He dominated shell in the 25.2 M~$\odot$~ explosion model. Nevertheless, in the inner region, SiC condensation can be more prevalent in large progenitor mass models (like the 25.2 M~$\odot$~ model) having more energetic explosions, provided the temperature is brought down to \~2000 K within a few hundreds of days because at T > 2000 K, SiC is assumed to be stable only in its molecular form and does not condense to form a grain [@2013ApJ...776..107S]. The SiC grains formed in the inner region (if any) are, however, prone to destruction due to excessive sputtering by ions present in the nearby O rich regions or due to scattering by high energy particles. Moreover, TiC and SiS have been found to be more stable condensates in the inner regions where C/O <1, rather than Graphite or SiC [@2001GeCoA..65..469E; @2013ApJ...776..107S]. Thus, we do not study SiC condensation in the inner regions and focus only on the outer He rich zone. However, [@1999Sci...283.1290C] have argued against the limiting criterion of C/O >1 by proposing the formation of large (up to few $\micron$) C grains in the ejecta interior if CO molecules are destroyed by radioactive nuclei early on and C atoms are freed. Notwithstanding such evidence for the formation of SiC, we have considered SiC condensation in an environment where C/O >1. These regions can be identified in the mass fraction plot against the mass of the star, as shown in Figure \[fig:fig1\] for a 15.2 M~$\odot$~ star to be between 3.2-4.0 M~$\odot$~. 
Similarly, we obtain the C/O >1 regions as 5.0-6.1 M~$\odot$~ and 7.2-8.2 M~$\odot$~ for the 20.1 M~$\odot$~ and 25.2 M~$\odot$~ models respectively. Typically quoted condensation temperatures for SiC in these zones are $\le$ 2000 K, which are similar to those predicted for the solar neighborhood in spite of the lower pressure present in SNe, due to the depletion of hydrogen (except for the outer H shell).
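Operationally, identifying the C/O > 1 interval is a simple scan over the zonal mass fractions. A minimal sketch of this selection, using illustrative arrays rather than the actual S16 yield tables:

```python
# Identify the enclosed-mass interval where the zonal C/O ratio (by
# number) exceeds unity, given mass fractions X(12C) and X(16O)
# tabulated against enclosed mass. The arrays below are illustrative,
# not the actual S16 yields.

def co_ratio_region(mass_coord, x_c12, x_o16):
    """Return (m_min, m_max) of zones with number ratio C/O > 1, or None."""
    selected = []
    for m, xc, xo in zip(mass_coord, x_c12, x_o16):
        # number ratio: n_C/n_O = (X_C / 12) / (X_O / 16)
        if xo > 0 and (xc / 12.0) / (xo / 16.0) > 1.0:
            selected.append(m)
    if not selected:
        return None
    return min(selected), max(selected)

# toy profile loosely mimicking a C/O-rich He shell between 3.2-4.0 Msun
mass = [2.8, 3.0, 3.2, 3.5, 4.0, 4.2]
xc   = [0.01, 0.05, 0.20, 0.25, 0.18, 0.02]
xo   = [0.40, 0.30, 0.10, 0.08, 0.10, 0.30]
print(co_ratio_region(mass, xc, xo))  # -> (3.2, 4.0)
```

The factor 16/12 matters: a zone with equal C and O mass fractions already has C/O > 1 by number.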
\[fig:fig1\] Mass fractions plotted against the interior mass of the star for the 15.2 M~$\odot$~ model. {width="7in" height="5in"}
Observations of expanding envelopes of supernova ejecta suggest grain condensation can occur as early as 300-600 days [@1989ApJ...344..325K; @1997AIPC..402..317W; @2003ApJ...598..785N]. A recent work by Stephen et al. (GeCoA 2017, in press) highlights a similar condensation time for Sr, Zr and Ba isotopes found in presolar SiC X grains. On the other hand, a much delayed formation (later than 1100 days) has been proposed for the outer region after explosion for a 15 M~$\odot$~ stellar model, owing to 1.) the presence of He$^+$ (the presence of which will not let the temperature decrease below 2000 K, see also @1990ApJ...358..262L) for the first 1000 days after explosion and 2.) more efficient rates of condensation for carbon dust than SiC [@2009ApJ...703..642C]. Further, these authors also predict that SiC condensation starts as late as \~1740 days for a homogeneous ejecta model for a 19 M~$\odot$~ star. An earlier condensation (\~900 days) can be achieved if one takes into account the clumpy model, which says that the ejecta is no longer homogeneous after a few hundred days and is separated into spherical clumps owing to the finger-like projections generated due to RT instabilities [@1992ApJ...392..118C].
After the explosion, the immediate major source of heat present is in the form of high energy radioactive elements, and condensation can start quite early in regions where such materials are scarce. In the inner regions, the subsequent presence of $\gamma$ rays and Compton electrons [@0004-637X-562-1-480; @0004-637X-638-1-234] is detected, which ionize all material they encounter; hence, the ejecta attains very high temperatures (\~10$^6$ K or more). Additionally, the presence of UV radiation due to the degradation of $\gamma$ rays can also cause the destruction of grains, although its effect is not as pronounced as that of Compton electrons [@2009ApJ...703..642C]. The reverse shock, while traveling inward, also tends to heat the ejecta and reaches the remnant core by the time the free expansion phase (also called the pre-Sedov phase, @1990ApJ...356..549S)[^4] ends. Its effect on SiC condensation is not clearly understood.
C$\,$>$\,$O could also lead to some of the C being tied up as CO, thus delaying the formation of SiC, unless radioactive ions like $^{56}$Co can dissociate CO and free C atoms over a timescale of months, as suggested by [@0004-637X-562-1-480; @2013ApJ...769...38Y]. However, the radioactive ions carry heat with them; thus, their presence could increase the temperatures to more than 2000 K [@1994ApJS...92..527N; @0004-637X-842-1-13]. In O rich regions, SiO formation is more efficient and thus preferred over SiC; however, its total mass strongly decreases from its initial value at 200 days to that at 1500 days [@2013ApJ...776..107S]. It is also believed that SiO is the starting molecule in the formation of SiC. This directly supports an often quoted larger value of Si/C: if Si >> C, SiC condensation can be straightforward, because even after some Si atoms are locked in by O in the form of SiO, enough Si remains to condense into SiC. A recent study by @2017ApJ...843...57D shows a direct dominance of radiative formation of SiC over SiO in the outer He rich zones. This is also confirmed by ALMA observations of SN1987A [@2017MNRAS.469.3347M], where the authors find only 10% of the total Si synthesized in SNe to be locked up in SiO. Additionally, the authors also report a deficiency of SiS as compared to theoretical models [@2013ApJ...776..107S], whose over-production could decrease the availability of Si for SiC.
Overall, condensation of SiC primarily depends on the temperature of the ejecta and the concentration of the key species. The grain formation rate first increases as the temperature cools down, and then decreases as the concentration of key species decreases. To demonstrate our calculations, we consider SiC condensation as late as 1700 days after explosion, in line with the recently proposed kinetic model of SiC formation by [@2017ApJ...843...57D]. By this time, the temperatures in the He zones are of the order of a few hundred K [@2013ApJ...776..107S]. We call this delayed condensation because other grain species are believed to undergo condensation at earlier times. We urge the reader to go through [@2013ApJ...776..107S] for a full zonal sequence of condensates produced in the supernova ejecta.
ION IMPLANTATION: TRANSITION ELEMENTS IN SiC GRAINS {#s:implan}
===================================================
[@2003ApJ...594..312D] have described the reverse shock mechanisms which propel the grain outwards in the ejecta. This model provides grain velocities of the order of a few hundreds of km s$^{-1}$ during the first few years, wherein they take the grain velocity to be 60% of the shock velocity. Similarly, [@2006ApJ...648..435N] suggest a value of 75%. This is due to the deceleration of the ejecta when it collides with the reverse shock and the formation of a contact discontinuity [@1999ApJS..120..299T]. Considering these values, we have carried out our simulations for ion velocities in the range 1000-6000 $\mathrm{km}\,\mathrm{s}^{-1}$.
We focus on these grain sizes because such grains are the most likely to survive sputtering in the SNe (and later on in the interstellar medium (ISM)). Grains smaller than these sizes are highly prone to destruction within the SNe itself [@1994ApJ...433..797J; @1996ApJ...469..740J]. Additionally, the high velocity implantation we study in this work will be lower in smaller grains, since more ions will be able to transmit through them. In fact, it has been shown that grains smaller than \~0.05 $\micron$ are destroyed due to excessive sputtering, whereas \~0.05-0.2 $\micron$ sized grains are trapped into a dense shell, making it impossible for them to be ejected into the ISM [@1995GeCoA..59.1411A; @2007ApJ...666..955N]. Moreover, $\micron$ sized SiC grains condensed in SNe have been proposed to survive the SN shocks and get ejected into the ambient ISM [@2004ApJ...614..796S]. Their longer lifetime as compared to graphite is another reason put forth for explaining SiC in $\micron$ sized presolar grains. Some smaller grains are also believed to have coagulated into larger grains, which can ensure their survival, especially due to charge separation between the smaller and larger grains and subsequent coagulation [@1990ApJ...361..155H]. The majority of SiC grains chemically separated in the laboratory have a size of \~0.1-10 $\micron$. Lastly, we note that our choice of grain sizes is based on the comparison of our calculated concentrations with those measured through NanoSIMS by [@2008ApJ...689..622M], where the grain sizes were of the order of a few $\micron$. Though grains as large as 50 $\micron$ have also been found in meteorites, believed to originate from red giant stars (e.g., [@2009PASA...26..278G]), SN grains have been reported to have smaller diameters, see, for e.g., @2013GeCoA.120..628A [@2016ApJ...820..140L]. The largest supernova SiC grain found to date is the famous Bonanza grain, with a size of roughly 30 $\micron$ [@2011LPI....42.1070Z].
Isotopes of Cr, Fe, Co, Ni and Zn relevant to this study are all created during the explosive burning of Si. Material heated to 5 billion K experiences nuclear statistical equilibrium (NSE) [@1994Metic..29Q.503M]. For a 15.2 M~$\odot$~ star, this temperature is achieved within a radius of around 3670 km, which encloses about 2.02 M~$\odot$~ (S16). In general, Cr is one of the products of Si burning, whereas Co, Ni and Zn form during the alpha-rich freeze out phase. Si burns via a series of photodisintegration reactions, producing alpha particles, which then react with the quasi-equilibrium group (QSE) above $^{28}$Si to form Fe group elements [@2017arXiv170106786W]. During explosive burning, the nuclear burning timescale and the hydrodynamic timescale eventually become comparable, so the ejecta cools and expands before the alpha particles released from the initial photodisintegration manage to get captured (thus called alpha-rich freeze out). These alpha particles eventually get assembled into heavier nuclei on a hydrodynamic timescale, to produce elements like Ni and Zn [@2002abcd...74.1015S] within seconds of the explosion[^5]. Beyond 2.02 M~$\odot$~, the O rich shell is present, where, at temperatures ranging from 3-4 billion K, elements like Ca are produced. Fe is mostly produced by SNe Ia[^6].
All these ions, which are produced in the innermost shells, travel with velocities of the order of a few thousands of $\mathrm{km}\,\mathrm{s}^{-1}$. Grains condensing around the stellar envelopes provide a surface for these ions to stick on - analogous to cool balls in the middle of hot gas (A. Sarangi, *private communication*). Since the grain area is much larger than the size of particles in the gas, the rate of radiation of the grains is very large as compared to that of the gas particles. We discuss this transport (mixing) in detail in section \[s:disc3\].
ION TARGET SIMULATOR SETUP {#s:sdtrimsp}
==========================
TRIM (Transport of Ions in Matter) is a program in the SRIM (Stopping and Ranges of Ions in Matter) package, developed by [@2002srim...74.1015Z; @2010NIMPB.268.1818Z]. TRIM was primarily used for studying implantation and backscattering in the field of materials science. It is now known that sputtering yields generated from SDTrimSP (developed by [@2011srim...74.1015M]; SD stands for Static-Dynamic, reflecting the fact that SDTrimSP can also work with dynamic targets, where the composition of the target changes as more ions are incident on it) fit experimental data better than those from TRIM. SDTrimSP also offers a wide range of choices of input parameters, like target temperature, choice of interaction potential and multiple bombardment of ions with varying velocities. A significant difference between TRIM and SDTrimSP arises because TRIM does not take into account the inelastic energy losses of the ions. We report calculations using implantation fractions obtained from SDTrimSP (Version 5.07). We obtain implantation data for relative velocities ranging from 1000-6000 $\mathrm{km}\,\mathrm{s}^{-1}$ and perform a total of 6400 iterations per ion. This choice is motivated by 1.) Vikram100 HPC processing capabilities and 2.) negligible change in statistics for iterations $\ge$ 3200.
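Ion-target simulators of this kind are driven by incident energies rather than velocities, so each relative velocity in the 1000-6000 $\mathrm{km}\,\mathrm{s}^{-1}$ range corresponds to a kinetic energy per ion via $E = \frac{1}{2}mv^2$. A minimal non-relativistic conversion sketch (a helper for illustration only, not the SDTrimSP input format):

```python
# Convert a non-relativistic ion velocity to kinetic energy in eV,
# E = (1/2) m v^2, with the ion mass given in atomic mass units.
# Illustrative helper; the actual SDTrimSP input files are not shown.

AMU_KG = 1.66053906660e-27   # atomic mass unit in kg
EV_J   = 1.602176634e-19     # electron volt in joules

def kinetic_energy_ev(mass_amu, v_km_s):
    v = v_km_s * 1.0e3                       # km/s -> m/s
    return 0.5 * mass_amu * AMU_KG * v * v / EV_J

# 56Fe across the velocity range used in the text (1000-6000 km/s)
for v in (1000, 6000):
    print(f"56Fe at {v} km/s: {kinetic_energy_ev(56, v) / 1e6:.2f} MeV")
```

For $^{56}$Fe this gives roughly 0.3 MeV at 1000 $\mathrm{km}\,\mathrm{s}^{-1}$ and about 10 MeV at 6000 $\mathrm{km}\,\mathrm{s}^{-1}$, which sets the energy scale of the simulations.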
TRIM and SDTrimSP only work for planar targets, so we extend their results to a spherical target to analyze SiC grains, because a sphere is the simplest structure we can assume the grains possess, and spherical surfaces tend to be the most stable owing to their minimal surface area. The approximation we use also accounts for grain irradiation from all directions.
Of the BITS processes, backscattering does not need a different geometrical model, since the ion does not interact with the grain’s interior. Also, backscattered ions interact with the grain’s outer surface for a very short time, causing only surface erosion, which can be neglected when compared with sputtering due to other incident ions. For spherical modeling of implantation profiles, we follow the method developed by [@vyvsinka2009depth]. This model gives different weights to ions incident at various angles getting implanted into the grain at diverse depths, by considering the sphere as a regular polyhedron whose number of sides is decided by the number of bins of incident angles used. The weight is thus a product of the depth profile and the surface area of the cross section of the grain for different angles (see Figure 2 in their paper).
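One reading of this angular weighting can be sketched as follows: for a uniformly irradiated sphere, the fraction of the geometric cross section struck at local incidence angles in $[\theta_0, \theta_1]$ is $\sin^2\theta_1 - \sin^2\theta_0$, so each planar depth profile is weighted by its annular area fraction. This is an assumption-laden simplification of the binned scheme, with illustrative profiles rather than SDTrimSP output:

```python
# Combine planar depth profiles simulated at binned incidence angles
# into a single profile for a uniformly irradiated sphere. A ray at
# impact parameter b = R*sin(theta) hits at local incidence theta, so
# the area fraction of the angle bin [t0, t1] is sin^2(t1) - sin^2(t0).
# Sketch of the weighting only; profiles are illustrative.
import math

def spherical_profile(planar_profiles, angle_edges_deg):
    """planar_profiles[i]: depth histogram for angles in bin i."""
    n_depth = len(planar_profiles[0])
    combined = [0.0] * n_depth
    for i, profile in enumerate(planar_profiles):
        t0 = math.radians(angle_edges_deg[i])
        t1 = math.radians(angle_edges_deg[i + 1])
        w = math.sin(t1) ** 2 - math.sin(t0) ** 2   # area fraction
        for d in range(n_depth):
            combined[d] += w * profile[d]
    return combined

# two angle bins (0-45 and 45-90 degrees), three depth bins
profiles = [[0.1, 0.6, 0.3],   # near-normal: deeper implantation
            [0.7, 0.2, 0.1]]   # oblique: shallow implantation
print(spherical_profile(profiles, [0, 45, 90]))
```

The weights sum to one when the bins span 0-90$^{\circ}$, so a normalized set of planar profiles yields a normalized spherical profile.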
For spherical modeling of transmission and sputtering profiles, we follow a different approach, whose framework and calculations are derived in Appendix \[s:append\]. We develop it to be consistent with the content of the output data files generated by SDTrimSP. The output file provides the final position of the ion before leaving the target. Taking the x axis as the reference, we treat the distances covered in the y and z directions separately and apply the model described in Appendix \[s:append\] on both axes. This model assumes that ions travel in a straight path (not necessarily parallel to the x axis) inside the grain until they are ejected. The weights are the ratio of the extra length traversed in planar targets to the total length traversed in planar targets. This extra length is the distance the ions would not be traveling if the target were spherical, since they would have been transmitted at a shorter distance. The model takes as input the last known coordinate of the ion near the grain surface and the angle at the time of ejection. Then, assuming a straight line trajectory, it backtraces the ion to its place of origin (entrance). Based on the extra distance the ion had to travel for the planar surface and the decrement in its kinetic energy inside the grain, for each ion (in the output file), the number of ions that could have been transmitted had the surface been a sphere is calculated. The straight line approximation is valid in the velocity interval where transmission dominates, since at these velocities, the ion cruises through the grain in an approximately linear trajectory. We confirm this by tracking ion trajectories throughout the target and find them to be approximately linear. This provides reasonably good results, considering the weights are applied to each ejected species individually, which is different from the model described for implantation, where weights are applied in chunks decided by the bin size.
Following the notation given in Appendix \[s:append\], we note that $L_p/L_{sph} > 1$, where $L_p$ is the length traversed in the planar target and $L_{sph}$ is the projected length traversed in the sphere. We assume that transmission in 1D is inversely proportional to the distance traveled and also depends on the energy of the incident ion, since an ion with relatively low kinetic energy may still be transmitted if the angle of incidence is high.
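A minimal sketch of this path-length bookkeeping follows; the function and variable names are our own, and the full treatment, including the energy dependence, is in Appendix \[s:append\].

```python
import numpy as np

def chord_length(p, d, R):
    """Length of the chord a straight-line trajectory cuts through a
    sphere of radius R centred at the origin; p is any point on the
    line, d its direction.  Returns 0 if the line misses the sphere."""
    d = d / np.linalg.norm(d)
    b = np.dot(p, d)                          # solve |p + t*d|^2 = R^2 for t
    disc = b * b - (np.dot(p, p) - R**2)
    return 2.0 * np.sqrt(disc) if disc > 0.0 else 0.0

def transmission_weight(p_exit, d_exit, R, L_planar):
    """Fraction of the planar path length a spherical grain of radius R
    would have spared the ion: (extra length) / (total planar length)."""
    L_sph = chord_length(p_exit, d_exit, R)
    return max(L_planar - L_sph, 0.0) / L_planar
```

For example, an ion crossing a sphere of radius 0.5 at an impact parameter of 0.4 traverses a chord of length 0.6, so a planar target of unit path length overestimates the traversal by 40%.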
Sputtering yields are highly sensitive to surface binding energies (hereafter SBE) and lattice binding energies. It is common practice to use the heat of sublimation as the SBE; however, the results then fail to match experimental data (obtained at energies considerably lower than those discussed in this paper), especially for strongly electronegative elements such as O and C, since strong ionic bonds can form between atoms in the top layer and those in the bulk [@WITTMAACK201237; @MUTZKE2008872]. We instead use the model developed by [@2005ApSS..239..273K], which calculates the SBE from weighted contributions of ionic and covalent bonds. All analysis is carried out without relativistic effects, which are negligible at the velocities in question. In subsequent sections, we frequently use the terms ‘larger’ and ‘smaller’ grain to refer to the 5 and 1$\micron$ grains, respectively.
DISCUSSION {#s:disc}
==========
BITS Processes[^7] {#s:disc1}
------------------
Using typically quoted shock and ion velocities in young supernovae (e.g., [@1987ApJ...315L.135K; @2005ApJ...619..839C; @diehl2013astrophysics]), we adopt an upper bound of 6000 $\mathrm{km}\,\mathrm{s}^{-1}$ for the implantation of transition-element ions in SiC. At relative velocities higher than 6000 $\mathrm{km}\,\mathrm{s}^{-1}$, \~99.8% of these ions are transmitted through the grain. The remaining \~0.2% that are implanted arrive mostly at extremely oblique angles of incidence, which are mathematically possible but physically rare compared with equatorial ion bombardment. At relative velocities lower than 1000 $\mathrm{km}\,\mathrm{s}^{-1}$, most ions are backscattered or implanted into the uppermost layers of the grain, where they have a high probability of being lost to sputtering and erosion. We observe that the contribution of extremely oblique angles (> 75$^{\circ}$) to the implantation fraction is quite low ($\la$ 1% for velocities < 3000 $\mathrm{km}\,\mathrm{s}^{-1}$ and $\la$ 0.2% for velocities > 3000 $\mathrm{km}\,\mathrm{s}^{-1}$). The fraction of backscattered ions decreases as the incident velocity is increased towards 6000 $\mathrm{km}\,\mathrm{s}^{-1}$. Ions backscattered at oblique angles lose at most \~36% of their kinetic energy, while those backscattered at around 45$^{\circ}$ lose at most 7%. Most runs produced no backscattering at normal incidence.
The low sputtering yields of Si and C by transition elements at velocities outside this range rule out any significant impact on the grain. At 6000 $\mathrm{km}\,\mathrm{s}^{-1}$, we find a sputtering yield of \~0.06 for C and \~0.08 for Si per incident transition ion, which is negligible compared with the yields obtained in the lower energy range (\~2-10 per incident ion). Maximum sputtering is observed at oblique incident angles, since more momentum transfer can take place at such angles and more atoms can be knocked out of the surface. Maximum damage to the grain is caused in the range of velocities showing maximum interaction (or equivalently, implantation). Sputtering yields can increase by a factor of \~28 as the angle of incidence is changed from 0$^{\circ}$ to 85$^{\circ}$. For example, taking a value of 20 Si atoms sputtered per incident ion (as observed in a run for Cr ions with a relative velocity of 1000 $\mathrm{km}\,\mathrm{s}^{-1}$ at an 85$^{\circ}$ angle of incidence at 300 K), we find that a total of 10$^{-8}$% of the grain’s Si atoms are knocked out (assuming the process goes on for a few hundred years), which is still too low to cause any significant changes. Also, for velocities near 1000 $\mathrm{km}\,\mathrm{s}^{-1}$, the sputtering yields were higher at elevated temperatures than at room temperature, a straightforward consequence of more atoms being knocked out of a hotter target. C has a higher sputtering yield than Si at T $\le$ 800 K and vice-versa, irrespective of the angle of incidence, which can be attributed to the fact that lighter atoms have a higher cross section for interacting with the collision cascade and are thus more easily sputtered (see review by [@SMENTKOWSKI20001] and references therein).
We note that the sputtering effects obtained from simulations only consider sputtering by a single ion incident on the grain and its associated collision cascades, whereas in reality other fast moving ions can hit the grain simultaneously. For example, the sputtering yield of O atoms on SiC is proposed to be unity [@1994ApJ...431..321T], which would lead to significant destruction of the grain surfaces (because the abundance of O is high in nearby shells), enough to completely wipe the grains out. On the contrary, [@2003ApJ...598..785N] proposed a recycling scenario wherein the top 14% of the surface is recycled multiple times while the grain stays in the SNe. This also explains the seemingly low concentrations of O atoms found in SiC grains[^8]. Although SiC grains $\ga$ 0.1 $\micron$ can survive thermal sputtering in the remnant, they are prone to destruction by non-thermal sputtering by He$^+$ (Ar$^+$ and Ne$^+$ destroy SiO and other oxides formed in O rich zones in a similar manner) present in the ejecta. However, as shown in these studies, the sputtering yields of He, D, H and non-thermal material on SiC are lower than that of O by at least an order of magnitude, and SiC grains > 0.1$\micron$ can survive this destruction. Thus, we assume a 10% surface destruction of the grains (*i.e.,* loss of 10% of the grain’s surface area) leading to the loss of ions implanted in the top 10% of the grain surface, which is also consistent with the 6-8% surface erosion for $\micron$ sized C, Fe and Mg$_2$SiO$_4$ grains proposed by [@2007ApJ...666..955N].
\[fig:fig2\] {width="1.0\linewidth"}
Transmission is the dominant BITS process at ion velocities > 3000 $\mathrm{km}\,\mathrm{s}^{-1}$. At such velocities, the interaction time of the ion with the grain is very short and the ion affects the spatial arrangement of target atoms only at highly oblique angles ($\ge$ 75$^{\circ}$). Based on our model, we find a large increment in the number of transmitted atoms (up to 40% in certain cases) compared with planar surfaces. Transmission is initially high at normal incidence and is almost zero beyond 70$^{\circ}$. As the incident velocity of the ions is increased, transmission starts at higher angles of incidence. Figure \[fig:fig2\] summarizes the fraction of ions transmitted or backscattered against incident ion velocity, for different combinations of temperature and grain size. The trends shown are consistent for all the ions in question. We see a comparatively low fraction ($\la$10%) of ions backscattered and transmitted from the grain when ion velocities are < 3000 $\mathrm{km}\,\mathrm{s}^{-1}$. However, this shoots up to \~50% for velocities near 3000 $\mathrm{km}\,\mathrm{s}^{-1}$ and reaches nearly unity for velocities \~6000 $\mathrm{km}\,\mathrm{s}^{-1}$. The 5$\micron$ grain, on the other hand, hardly shows any transmission or backscattering, and almost all incident ions are implanted. Thus, we conclude that there is a certain range of ion velocities ($\sim$ 1000-2000 $\mathrm{km}\,\mathrm{s}^{-1}$) where implantation is dominant. At velocities below this range, most ions are backscattered, whereas at higher velocities most of them are transmitted.
We find that BITS processes are very sensitive to ion velocity and grain size, especially for the smaller (\~1$\micron$) grain. For velocities $\la$ 2500 $\mathrm{km}\,\mathrm{s}^{-1}$, more than 90% of the ions are implanted in a 1$\micron$ grain; for velocities between 2500-4000 $\mathrm{km}\,\mathrm{s}^{-1}$ and above 4000 $\mathrm{km}\,\mathrm{s}^{-1}$, 50-80% and 90-97% of ions, respectively, are either transmitted or backscattered. We also see a drop of 7-87% in the implantation fraction at temperatures > 800 K for Cr and Zn ions, while the implantation fractions of the other three elements (Fe, Co, Ni) remain independent of temperature. For Zn, this can be attributed to its volatility (the boiling point of Zn is 1180 K[^9]), while for Cr it can possibly be attributed to the formation of certain unstable complexes of chromium and carbon which could evaporate at higher temperatures. Such formation is highly favored if some oxygen is available as well, so that CO-Cr complexes can be produced [@2013arXiv1308.4924S]. For Cr, this bias at high temperatures could also simply be due to Cr losing its outermost electron in the *4s$^1$* shell in the simulations. This would not be true in an SN, where highly ionized Cr would be present. The difference is then reflected in the final concentrations estimated for these elements. For the 5 $\micron$ grain, the implantation fraction remains constant for all temperatures $\le$ 2000 K [^10].
Simulations also predict changes in the spatial arrangement of atoms in the smaller grain’s core due to ion implantation, especially when the population of ions implanted near the center of the grain is highest, as shown in Figure \[fig:fig3\], which is reproduced from [@2017LPI....48.1490S]. For the larger grain, core implantation could not be achieved even at velocities \~6000 $\mathrm{km}\,\mathrm{s}^{-1}$. The effect on the core, although small, is significant and addresses the possibility of core contamination due to implantation in presolar grains. This is contrary to an often quoted assumption in which core implantation is ruled out [@2003ApJ...594..312D; @2008ApJ...689..622M]. Ions implanted in the core have a higher chance of survival, and the signatures of smaller grains with sufficient core implantation can be preserved if they are embedded into bigger grains as subgrains. Thus, if an enhanced abundance of trace elements is obtained in X type grains with progenitor masses > 20 M~$\odot$~, a plausible explanation is the inclusion of subgrains within larger grains while the latter were still condensing. These subgrains may contain an enhanced abundance of trace elements condensed or implanted in their cores, which could survive after the subgrains condense into a bigger grain. Indeed, recent analysis has found FeS and TiC subgrains in presolar grains [@2015PhDT........10G; @2016ApJ...825...88H].
\[fig:fig3\] {width="1.0\linewidth"}
Concentration Calculations {#s:disc2}
--------------------------
We follow the work done by [@2003ApJ...594..312D] for calculating the concentrations of Cr, Fe, Ni, Co and Zn at different temperatures and velocities. The number of possible interactions that ions of a particular species in a column of uniform cross section (same as that of the grain) can have with the grain at a given time after explosion is described by:
$$\label{eq:1}
\mathit{N_z^A} = \frac{N_A \sigma}{\mu_z} \int\limits_{m_0}^{M} \frac{X(m) dm}{4 \pi r^2(m)}$$
where $N_A$ is Avogadro’s number, $\mu_z$ is the atomic weight of the element, $\sigma$ is the grain cross section, $r$ is the radial coordinate, $X(m)$ denotes the mass fraction of the isotope as a function of mass coordinate, $m_0$ denotes the mass coordinate at $r_0$, the grain condensation radius, and $M$ is the total ejecta mass. Equation \[eq:1\] does not consider the expansion of the ejecta with time. Hence, we introduce an additional term: the ratio of the initial to final volume of the zones where implantation can take place is derived and used as a factor to estimate the possible number of interactions at a later time $t$.
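The quadrature in equation \[eq:1\] can be sketched as below. The mass fraction, density profile and their numerical values here are illustrative placeholders, not the S16 yields used in the actual calculation; only $\mu_z$, the grain cross section and the condensation radius follow the text.

```python
import numpy as np

# Illustrative quadrature of equation (1):
#   N_z^A = (N_A * sigma / mu_z) * Int_{m0}^{M} X(m) dm / (4 pi r(m)^2)
# X is taken constant and r(m) follows a toy uniform-density profile.
N_A = 6.022e23                   # Avogadro's number [mol^-1]
mu_z = 52.0                      # atomic weight of 52Cr [g mol^-1]
sigma = np.pi * (0.5e-4)**2      # cross section of a 1 micron grain [cm^2]
Msun = 1.989e33                  # [g]
m0, M = 3.95 * Msun, 12.58 * Msun
r0 = 1.295e11                    # condensation radius [cm]
X = 1.0e-6                       # toy constant mass fraction
rho = 1.0e-12                    # toy uniform ejecta density [g cm^-3]

m = np.linspace(m0, M, 200001)
r = (r0**3 + 3.0 * (m - m0) / (4.0 * np.pi * rho))**(1.0 / 3.0)
integrand = X / (4.0 * np.pi * r**2)
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(m))
N_z = N_A * sigma / mu_z * integral
```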
Young supernovae have shock velocities of the order of a few thousand $\mathrm{km}\,\mathrm{s}^{-1}$ [@1991ApJ...375..652S; @2017ApJ...837L...7B; @2017ApJ...840..112S], and the differential zonal velocities can be taken to be a substantial fraction (\~60% or more) of the primary shock velocity because we consider the two outermost zones of the ejecta (the He and H zones, respectively). For an estimate of the final zonal widths, we consider a uniformly expanding ejecta with zones moving outward at four possible differential zonal velocities ($\Delta$v) = 500, 1000, 2000 and 3000 $\mathrm{km}\,\mathrm{s}^{-1}$[^11]. We use the zonal nucleosynthesis yield model sets from S16 and apply the volume ratio factor to estimate the number of possible interactions at times as late as a few hundred years (which marks the end of the free expansion phase). To find the implanted concentration in ppm, we multiply this by the implantation fraction obtained using SDTrimSP at various velocities.
For a typical explosion energy of $1.327\times10^{51}$ ergs for a 15.2 M~$\odot$~ star, the ejecta mass reported in S16 is 12.58 M~$\odot$~, for which we obtain a radius of \~4.54 pc and an end of the free expansion phase at \~431 years. Similarly, for the 20.1 M~$\odot$~ and 25.2 M~$\odot$~ supernovae, the end of this first phase comes out at \~466 and \~458 years, respectively. Although the progenitor masses of the two heavier stars differ by \~25%, their free expansion lifetimes are similar, perhaps because stellar winds reduce the heavier star (prior to explosion) to the same size as the lighter one. We also observe that the ejecta mass thrown out by both stars is the same (\~15 M~$\odot$~), which reinforces this argument.
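As a rough cross-check of these numbers, under our own simplifying assumptions (a uniform ambient medium with $n_\mathrm{H}$ = 1 cm$^{-3}$ and mean molecular weight 1.4, and a constant shock speed of 10$^4$ $\mathrm{km}\,\mathrm{s}^{-1}$, none of which are specified above), the free expansion phase ends roughly when the swept-up mass equals the ejecta mass:

```python
import numpy as np

# Rough reconstruction of the free-expansion numbers; the ambient
# density and shock speed below are our assumptions, not values from
# the text.
Msun, m_H, pc = 1.989e33, 1.673e-24, 3.086e18   # cgs constants
M_ej = 12.58 * Msun                             # ejecta mass from S16
rho_ism = 1.4 * 1.0 * m_H                       # ambient density [g cm^-3]
R = (3.0 * M_ej / (4.0 * np.pi * rho_ism))**(1.0 / 3.0)
v_s = 1.0e9                                     # shock speed [cm s^-1]
t_yr = R / v_s / 3.156e7
# R/pc comes out near 4.4 and t_yr near 430, comparable to the quoted
# ~4.54 pc and ~431 yr.
```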
We describe here the calculations for $^{52}$Cr at various velocities and temperatures, for 1$\micron$ and 5$\micron$ SiC grains in the 15.2 M~$\odot$~, 20.1 M~$\odot$~ and 25.2 M~$\odot$~ models. From the data for 15.2 M~$\odot$~, we observe that for a condensation radius of $1.295\times10^6$ km (mid-point of the He zone), $X(m)$ is constant for $^{52}$Cr. The ppm concentration (by weight) is given by:
$$\label{eq:2}
\mathit{I_c(ppm) = 10^6 I_i N^A_z \mu_z \frac{d^3_M - d^3_{m_0}}{{(t \Delta v)}^3\frac{4 \pi}{3}r^3_{SiC}\rho_{SiC}}}$$
where, $I_i$ is the implantation fraction of ions and is a function of ion velocity (relative to grain), grain cross section, grain density and temperature. From a numerical integration of equation \[eq:1\] with $m_0$ = 3.95 and $M$ = 12.58, for a 1$\micron$ grain, we get $N^A_z$ = 1.67$\times10^9$. Thus, for a differential zonal velocity ($\Delta$v) of 2000 $\mathrm{km}\,\mathrm{s}^{-1}$ between zones 499 (mid-point of He zone) and 950 (last zone) in S16 zonal yield sets, at a time $t\,$(days) after explosion, the ppm concentration (by weight) is given by:
$$\mathit{I_c (ppm) = \frac{\beta I_i}{t^3 (days)}}$$
where, $\beta$ is a constant. Assuming grain condensation is fully achieved by $t$=1700 days [@2017ApJ...843...57D] and the free expansion phase ends around 430 years, we get a ppm concentration of \~0.62 for $^{52}$Cr implanted in a 1$\micron$ grain when its velocity is 1000 $\mathrm{km}\,\mathrm{s}^{-1}$.
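Because of the $t^{-3}$ dilution, implantation accumulated shortly after grain condensation dominates the total. A quick illustration (our own, not part of the calculation pipeline):

```python
# Integrate 1/t^3 from condensation (t1 = 1700 d) to the end of free
# expansion (~430 yr) and ask how much accrues within the first
# doubling of t.
t1 = 1700.0                               # days, condensation complete
t2 = 430.0 * 365.25                       # days, end of free expansion
total = 0.5 * (t1**-2 - t2**-2)           # integral of t^-3 over [t1, t2]
early = 0.5 * (t1**-2 - (2.0 * t1)**-2)   # same integral, stopped at 2*t1
share = early / total                     # ~0.75: three quarters by 2*t1
```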
So far, the mass fraction has been assumed to be constant at $t$ > 200 seconds. However, two competing mechanisms can alter this mass fraction: the addition of mass to the outer zones as the ejecta sweeps up material from the ambient ISM, and the addition of mass from zones interior to the condensation zone through RT instabilities (mixing), which arise when a lighter fluid pushes against a heavier fluid (discussed in detail in section \[s:disc3\]). The amount of mixing remains a largely unsettled question; to begin with, we consider a 1% mixing between the He/C and He/N zones (as estimated from NanoSIMS analysis of presolar grains by [@2008ApJ...689..622M]), *i.e.*, we add 1% of the zonal yields of the He/C zone to the He/N zone while calculating implanted concentrations. The mass fraction of $^{52}$Cr can thereby increase to around double the value of $X(m)$ used in this calculation (a \~83% increase in the case of $^{52}$Cr implanted into SiC condensed in a 15.2M~$\odot$~ SNe). The free expansion phase ends when the SN ejecta has swept up a mass $\sim\,12.6$ M~$\odot$~. It can be assumed without loss of generality that the swept-up mass which affects the mass fraction of the ions in question is at least 60% of the total mass of the expanding supernova (e.g., the swept-up mass of Tycho’s supernova remnant (SNR) present in the outermost regions is \~53% of the total mass of the SNR, as derived by @1983ApJ...266..287S). However, this factor is suppressed when mixing from inner regions is taken into account, as we explain in section \[s:disc3\]. It can also be safely assumed that there was no production of Cr before the explosion, so the swept-up mass does not contain any significant quantity of the isotope[^12]. Thus, the mass fraction decreases by a factor $$\label{eq:4}
\Delta X(m) = \frac{12.58-3.95}{(12.58-3.95) + (0.6\times12.58)} = 53.35\%$$
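The arithmetic of equation \[eq:4\] can be checked directly:

```python
# Dilution of the mass fraction by swept-up, Cr-free material taken
# to be 60% of the ejecta mass (equation 4); masses in solar units.
M_ej, m0, f_swept = 12.58, 3.95, 0.60
dX = (M_ej - m0) / ((M_ej - m0) + f_swept * M_ej)   # ~0.533
```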
Overall, we get a \~30% rise in the mass fraction of Cr, which gives a corrected value of \~0.80 ppm. To account for grain destruction at later times in the ejecta and the ISM, we assume the loss of the top 10% of the grain surface, as noted in section \[s:disc1\]. Using the depth profiles obtained through the spherical grain approximation for each implanted ion, we find that in this particular case \~59% of the implanted ions penetrate to more than 10% of the grain’s depth. The surface-erosion-corrected concentration thus obtained is \~0.79 ppm. This concentration still lacks mixing from the innermost zones and additional contributions from radioactive nuclei, which we correct for in section \[s:disc3\]. Interestingly, for the larger grain, most of the implanted ions lie in layers deeper than the $\sim$10% erosion threshold we work with and are hence preserved during surface erosion.
Comparison With Laboratory Measurements {#s:disc3}
---------------------------------------
The calculated concentrations of transition ions implanted in the grains vary over 1-3 orders of magnitude as ion velocities and differential zonal velocities are varied between 1000-6000 and 500-4000 $\mathrm{km}\,\mathrm{s}^{-1}$, respectively. However, not all of the calculated values correspond to physically plausible initial conditions, so we flag those parameter sets which yield erroneously high concentrations.
Laboratory measurements of these concentrations include contributions from both processes: condensation and implantation[^13]. The estimated concentrations via condensation and implantation can be matched against those measured in the grains. The X type SiC grains from the Murchison meteorite analyzed by @2008ApJ...689..622M have a mean size of \~2.5$\micron$, with 90% of the sizes lying in the range 1.8-3.7$\micron$ [@1994ApJ...430..870H; @ZINNER20074786], whereas those analyzed by [@2000MPS...35.1157H] have sizes in the range 0.5-1.5 $\micron$. To compare the concentrations measured by [@2008ApJ...689..622M] with our calculations, we require the implantation fraction for each incident element. We approximate it by taking the geometric mean of the resultant implantation fractions for the two grain sizes we study.
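For instance, with hypothetical implantation fractions for the two simulated sizes (the values below are placeholders, not simulation results), the effective value we use is simply:

```python
import math

# Effective implantation fraction for a ~2.5 micron grain as the
# geometric mean of the two simulated sizes; both inputs are
# hypothetical placeholders.
f_1um, f_5um = 0.60, 0.95
f_eff = math.sqrt(f_1um * f_5um)
```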
Laboratory-based ppm concentrations of Fe and Ni range from a few tens to a few thousands for SiC X grains found in the Murchison meteorite, while those of Co are mostly a few tens of ppm. The 15 M~$\odot$~ supernova model is believed to explain these abundances better than the 25 M~$\odot$~ model, as stated by [@2008ApJ...689..622M]; however, the models these authors used were taken from [@2002ApJ...576..323R], which have since been improved upon in S16. Concentrations of Fe in the smaller SiC X grains studied by [@2000MPS...35.1157H] lie in the range \~100-1000 ppm. Our calculated values for implanted Fe are 0-2 orders of magnitude lower (on average[^14]) for ion velocities in 1000-3000 $\mathrm{km}\,\mathrm{s}^{-1}$ and differential zonal velocities between 1000-3000 $\mathrm{km}\,\mathrm{s}^{-1}$. We are not aware of any measured concentrations of Zn in presolar SiC X grains. Though Cr concentrations have been measured to be $\sim$1 ppm in SiC X grains (measured by [@2008LPI....39.2135K], as reported in @2009IJMSp.288...36L), we cannot utilize them since they do not belong to the same grains as those analyzed by [@2008ApJ...689..622M]. Hence, we leave these elements out of the comparison, but we do propagate the effects of the various mixing criteria through their calculated concentrations.
Contrary to the excess Ni found in type X presolar SiC grains (<Fe/Ni> = 0.78 (0.14, 3.34) for the SiC X grains analyzed by [@2008ApJ...689..622M]), we find Fe/Ni >1. Ion implantation was ruled out as a probable cause of this excess by [@2008ApJ...689..622M] because the excesses were distributed all across the grain instead of being localized in the outermost regions, and core implantation was not taken into account. This motivates us to also consider mixing from the innermost regions (Si/S and Ni zones), which are rich in Ni; contributions from these regions can possibly explain the observed excess of Ni.
It has been shown that the concentrations of certain isotopes of Si, Ti and Ca obtained in laboratory measurements of carbide (graphite and SiC) grains can only be explained if there is intense mixing between the inner Si/S zones and the outer He zones [@1999ApJ...510..325T; @2002ApJ...576L..69H], and the same has been postulated to explain the excess Ni obtained in these measurements [@2012ApJ...758...59S]. This can happen through Si rich jets originating from the Si zone in the interior and cutting across the O rich zones, throwing material from the inner regions all the way out to the He and H zones. The presence of Si rich jets owing to an asymmetric explosion [@1999ApJ...524L.107K] has often been reported ([@2004ApJ...615L.117H; @2017MNRAS.468.1226G] and references therein), which supports the theory of mixing from the innermost zones to the sites of carbide grain condensation in the outer zones [@2006ApJ...647L..37L]. These jets also cause an $\alpha$ rich freeze-out behind the energetic shock, which is essential for the production of the transition elements in question [@1997ApJ...486.1026N; @2000ApJS..127..141N]. Their presence would also constrain mixing from the intermediate O rich zones to lower values, which is necessary to limit the amount of oxygen available in the grain surroundings so that C/O >1 is preserved, O is locked up in CO, and oxide formation is suppressed. A homogeneous mixing from all zones is thus not preferred for explaining the observed elemental abundances in presolar grains [@2004ASPC..309..265H].
3D simulations of mixing in the ejecta performed by [@2010ApJ...714.1371H] predict the formation of ‘bullets’ (clumps) of Z >8 elements (called Ni rich bullets in their paper), some of which are fast enough to overtake the O rich bullets and reach the outer He and H zones within the first 10000 seconds of the explosion. The same has been observed in SN1987A [@1989ApJ...341L..63A]. The decay of radioactive $^{56}$Ni can take place in the innermost shell, which moves the slowest; as the ejecta cools adiabatically, $\gamma$-rays from this decay cause local heating which sends a pressure wave outward, giving rise to the conditions necessary for RT mixing. 3D simulations of CCSN predict mixing to cease by \~10$^5$ seconds for a 15 M~$\odot$~ star [@2010ApJ...723..353J].
Keeping these studies in mind, we consider a 1% contribution (through mixing) of Ni from the Si/S zone and add contributions from those radioactive ions which may travel to the outer zones along with the Ni and Si rich ‘bullets’[^15]. Specifically, we trace 1.) $^{52}$Ni, $^{52}$Fe and $^{52}$Mn for $^{52}$Cr, 2.) $^{56}$Co and $^{56}$Ni for $^{56}$Fe and 3.) $^{59}$Ni for $^{59}$Co[^16]. Amongst these, we neglect $^{52}$Fe, $^{52}$Mn and $^{52}$Ni because their zonal contributions are negligible compared with those of their end products in the zones of interest. The half-life of $^{59}$Ni is > 10$^4$ years, which implies that only 0.4% of $^{59}$Ni has decayed into $^{59}$Co by the end of the free expansion phase [@RUHM1994227]. Thus, we use a 1% fraction of the decayed 0.4% of $^{59}$Ni from the innermost Ni zone and maintain the same fraction for all other elemental yields taken from this zone, so that zonal mixing is the same for all elements and the calculations are unbiased in every zone (*i.e.*, elemental fractionation is not favored by ion implantation).
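The 0.4% figure follows directly from the decay law, taking the $^{59}$Ni half-life as \~7.6$\times$10$^4$ yr and \~430 yr for the end of the free expansion phase:

```python
import math

# Fraction of 59Ni decayed to 59Co by the end of free expansion.
t_half, t = 7.6e4, 430.0                             # years
frac = 1.0 - math.exp(-math.log(2.0) * t / t_half)   # ~0.004, i.e. ~0.4%
```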
Consequently, we find that the concentrations of Ni in a 1$\micron$ grain can increase by 19$\times$, 14$\times$ and 8$\times$ for the 15.2, 20.1 and 25.2 M~$\odot$~ models respectively, with respect to the concentrations calculated before this mixing is taken into account. Similarly, the concentrations of Co increase by \~2.5$\times$ for all three models, whereas the concentrations of Cr increase by 5.2$\times$, 2.3$\times$ and 1.8$\times$ respectively. Zn remains unaffected by mixing from the interior. The additional concentration for Ni is solely from mixing, whereas the increments in the concentrations of Cr and Co come from mixing as well as radioactivity corrections. Contributions to $^{56}$Fe from $^{56}$Co do not lead to significant increments. The production of $^{56}$Ni has been the focus of all supernova nucleosynthesis models and is a key factor in uncovering the mysteries of supernova explosions (S16, @2017arXiv170404780S), but its contribution to the ppm concentration of $^{56}$Fe makes Fe/Ni high ($\ge\,$1), whereas Fe/Ni <1 has been measured in \~73% of all the SiC X grains analyzed by [@2008ApJ...689..622M]. Thus, we consider two scenarios: one where we refrain from adding significant contributions from $^{56}$Ni to our calculated concentrations of $^{56}$Fe, and one where we take them into account. The former is motivated by 3D simulations of supernova explosions which predict that most of the mass of $^{56}$Ni resides in two big clumps moving in opposite directions. This is also in concurrence with the structure of SN1987A observed 23 years after the explosion by [@2016ApJ...833..147L], where the authors note that although the 3D simulation models the SNe environment a few hundred seconds after the explosion, the overall structure and spatial distribution of $^{56}$Ni should hardly change at subsequent times. 
Similarly, the presence of bipolar $^{56}$Ni jets has been detected in SN 2013ej, which is strong evidence for an inhomogeneous and clumpy distribution of this isotope [@2017MNRAS.472.5004U]. We thus assume that the site of SiC condensation is away from these high velocity clumps of $^{56}$Ni. In any case, most of these high velocity (4000-6500 $\mathrm{km}\,\mathrm{s}^{-1}$) $^{56}$Ni ions would simply traverse the grain without significant implantation. For the latter scenario, where we assume that the SiC grains form and subsequently move near the $^{56}$Ni clumps, we assume a 0.001% mixing of this isotope for the production of $^{56}$Fe, in line with the work of [@1998MNRAS.299..150F], where this amount of mixing of $^{56}$Ni from the innermost Ni zone to the outer He zones was used to reproduce the observed He I line in SN1995V.
[|c c c c c c|]{}\
Fe/Cr&66.0&....&14-26&44-76& 41-70\
&&&18-34&49-86&45-77\
Fe/Co&362.9&31 (3-80)& 32-84&76-200& 58,153\
&&&41-112&89-141&70-187\
Fe/Ni&17.8&0.8 (0.1- 3.3)&0.90-1.03&1.43-1.61&1.6-1.8\
&&&1.24-1.34&1.51-1.68&1.74-1.97\
Fe/Zn&690.9&....&790-834& 2098-2215&1282-1353\
&&&1017-1073&2342-2517&1425-1657\
\
Fe/Cr&66.0&....&15-29&46-83& 43-74\
&&&21-39&53-111&49-96\
Fe/Co&362.9&31 (3-80)& 35-92&81-227&62-161\
&&&47-129&94-231&69-185\
Fe/Ni&17.8&0.8 (0.1-3.3)&0.40-0.50&0.77-0.87&0.91-1.03\
&&&0.63-0.69&0.84-0.99&1.04-1.13\
Fe/Zn&690.9&....&869-917&2218-2435&1371-1467\
&&&1170-1234&2584-2738&1574-1790\
\[tab:tab1\]
However, despite the above additions from inner zones and the radioactivity corrections, we fail to cover the whole range of measured Fe/Ni and Fe/Co ratios. Experimenting further with different mixing sets, we find that a 2% mixing from the Si/S zone (instead of the 1% considered so far, keeping the other mixing contributions constant) simultaneously generates the desired abundances of Ni and Co (relative to Fe) to a certain extent. To reach the lowest Fe/Ni ratios reported in [@2008ApJ...689..622M], a higher contribution ($\ge$ 4%) is required from the Si/S zone, because in this zone $^{58}$Ni is still in excess while $^{56}$Ni is highly depleted relative to its value in the innermost Ni zone. Thus, we arrive at our final calculated values of condensed as well as implanted concentrations for the species of interest, for a SiC X grain formed in the He zone, by assuming 1.) 0.004% and 0.001% contributions for $^{59}$Ni and $^{56}$Ni (respectively), with 2% and 1% mixing from the Ni, Si/S and He/C zones and 2.) 0.004% and 0.001% contributions for $^{59}$Ni and $^{56}$Ni (respectively), with 4% and 1% mixing from the Ni, Si/S and He/C zones. We summarize the relative abundances obtained by modeling implantation$+$condensation in Table \[tab:tab1\] for the two scenarios put forth. The difference between the two scenarios can be attributed to SiC grains condensing and evolving near or far from the Si rich ejecta present in the outermost layers. However, since the condensation of SiC is highly favorable if Si rich clumps (ejected outwards from inner Si rich regions; see Figure \[fig:fig1\], where Si/C >> 1) are present, scenario 2 seems more probable. Thus, mixing from inner zones can also explain the high isotopic abundances of Si in SiC X grains. If the percentage of mixing is doubled in either the Ni or the He zones, implantation alone overproduces the elemental abundances, leaving no room for condensation. 
On the other hand, the concentrations decrease by 40% when mixing in the He/C zone is taken to be 0.5%, which can be considered a lower threshold, since any smaller value would not produce enough abundance through implantation in the grains.
During the first few hundred years after the explosion, the differential zonal velocity between the He and H zones is of the order of a few thousand $\mathrm{km}\,\mathrm{s}^{-1}$. Thus, the majority of implantation during the free expansion phase should occur when $\Delta$v > 1000 $\mathrm{km}\,\mathrm{s}^{-1}$. Differential zonal velocities higher than those considered in our work are possible, but they would not persist for long (compared with the timeline of a few hundred years we use), and their contribution to the implanted concentrations would be small compared with the ones considered in our calculations. We also consider a case with $\Delta$v = 500 $\mathrm{km}\,\mathrm{s}^{-1}$; however, the concentrations we calculate are then higher than those measured in more than 80% of the grains, which leads us to reject this set in most comparisons. With this view, we find a probable range for the fraction of abundances implanted in the SiC X grains for which ppm concentrations of Fe, Co and Ni have been measured by [@2000MPS...35.1157H] and [@2008ApJ...689..622M].
In Tables \[tab:tab21\] and \[tab:tab22\], we present the maximum fraction of these elements which can come from implantation when the zonal mixing from the Si/S zone is 2% and 4% respectively, taking the geometric mean of the implanted concentrations we find for the two grain sizes (since the average size reported in analyses of SiC X grains is \~2.5 $\micron$). We reject certain sets which show an implantation fraction > 1 for all combinations of parameters, and we label such cases ‘NP’ (Not Possible). For grains which show lower concentrations, we assume they were ejected into the ISM earlier than others. By ‘early’ ejection, we mean that the grain gets out of the reach of high velocity ions in the shocked ISM earlier than its expected ejection time into the ambient ISM. This ‘early’ ejection scenario is possible if the SiC grain condenses near Si rich ejecta clumps moving outwards at high velocities, because such clumps can cross the forward shock and move ahead of it, essentially imitating an early ejection. As a matter of fact, Si rich clumps have been observed moving ahead of the forward shock in the Vela CCSN. However, this mechanism requires stark density contrasts between the clump and its surroundings. Also, the time it takes for the clump to overtake the forward shock is not known with certainty (see also @1988LNP...316.....K [@1995Natur.373..587A]). Another way early ejection could be achieved is through a strong shock wave which accelerates the dust but not the gas around it; however, the origin of such a shock wave remains unclear. Thus, for such grains we only consider implantation at the earliest epochs, when the differential zonal velocities were highest. To explain higher concentrations, we subsequently include contributions from lower differential zonal velocity sets, assuming that these grains spent a longer time in the SNe.
As seen from Tables \[tab:tab21\] and \[tab:tab22\], the implantation fraction predicted for these elements covers the whole range from 0-1 depending on the physical conditions present in the SNe. If we take implantation to contribute no more than 60% (the highest predicted implantation fraction for heavy elements, @2004ApJ...607..611V) of the total concentration of transition elements found in the grain, it would imply that most of the ions are implanted in the free expansion phase, when $\Delta$v is still a few thousand $\mathrm{km}\,\mathrm{s}^{-1}$. Since the shock velocities are of the order of a few thousand $\mathrm{km}\,\mathrm{s}^{-1}$ in this phase, this argument supports using $\Delta$v > 1000 $\mathrm{km}\,\mathrm{s}^{-1}$ as the most probable differential zonal velocities, because at one end of this range we have nothing but the shock velocity (since we deal with the outermost layers of the ejecta). Moreover, we observe an almost universal steep decline in implantation fraction as the zonal velocity moves from 1000 to 2000 $\mathrm{km}\,\mathrm{s}^{-1}$. Since this decline is not continuous, it becomes straightforward to demarcate a maximum possible implantation fraction for the elements we study in SiC X grains.
Thus, if we only take into account $\Delta$v > 1000 $\mathrm{km}\,\mathrm{s}^{-1}$, the implantation fraction we obtain is $\la$ 0.25 for grains condensed in the 15 M~$\odot$~ model where measured concentrations of Fe and Ni are $\ga$ 300 ppm. For lower concentrations of Fe and Ni, this fraction could reach as high as \~60%, while for measured concentrations $\ga$ 1000 ppm it drops below 0.1, implying that condensation is the dominant process of the two unless the grain did not spend much time in the SNe, as postulated earlier. The model also predicts that if these SiC grains were synthesized in heavier stars ($\ge$ 20 M~$\odot$~), they would have spent less effective time in the SNe or the zonal velocities would have been higher in the free expansion phase. Although we assume fixed zonal velocities throughout this phase, more accurate concentrations could be obtained by taking appropriate fractions of each zonal velocity yield. In most cases, for identical conditions, the implantation fraction of Ni exceeds that of Fe, which makes sense because Ni is more volatile than Fe, so the condensation fraction of Fe should be higher if the same amounts of Fe and Ni condense with the grain.
The concentrations of the other two elements, Cr and Zn, are relatively lower: of the order of 1 and 0.1 ppm for the 15 M~$\odot$~ model, 10 and 0.3 ppm for the 20 M~$\odot$~ model, and 40 and 3 ppm for the 25 M~$\odot$~ model. A substantial amount of the Zn found in SiC X grains should come from implantation rather than condensation, because its volatile nature makes it difficult to co-condense with the grain. On the other hand, Cr is refractory, so a substantial amount of it can also come from condensation. These elements will be discussed in detail in a future work, when measured concentrations from other X type grains become available. We also leave calculations for Ti and V (whose ppm concentrations have been measured in presolar grains by @2001LPI....32.2192K [@2002LPI....33.2056K]) to a future work.
[|c| c c c c c | c| c c c c c|]{} \[tab:tab21\] Fe&$\le$ 50&3000&75&NP&NP&Co&$\le$ 20&1000&58&$\ge$ 81&NP\
&&&96&NP&NP&&&2000&7&24&$\ge$ 43\
&50-150&2000&62&NP&NP&&&3000&2&7&30\
&&&78&NP&NP&&25-70&500&$\ge$56&NP&NP\
&50-150&3000&14&$\ge$ 78&NP&&&1000&17&55&NP\
&&&32&$\ge$85&NP&&&2000&2&7&29\
&150-310&2000&40&NP&NP&&&3000&0.6&2&9\
&&&52&NP&NP&&200-220&500&42&$\ge$ 59&NP\
&&3000&12&93&NP&&&1000&0.5&17&73\
&&&15&$\ge$59&NP&&&2000&0.6&2&9\
&310-640&1000&$\ge$ 64&NP&NP&&&3000&0.1&0.6&3\
&&&NP&NP&NP&Ni&$\le$ 100&2000& $\ge$ 52&NP&NP\
&&2000&20&$\ge$ 62&NP&&&3000&37&$\ge$ 77&NP\
&&&25&NP&NP&&100-250&2000&50&NP&NP\
&&3000&6&45&NP&&&3000&15&73&$\ge$59\
&&&7&70&NP&&250-600&1000&70&NP&NP\
&900-1100&1000&91&NP&NP&&&2000&21&$\ge$ 43&NP\
&&&$\ge$ 48&NP&$\ge$50&&&3000&6&31&89\
&&2000&11&89&NP&&650-900&1000&$\ge$ 47&NP&NP\
&&&14&$\ge$56&NP&&&2000&14&68&$\ge$ 84\
&&3000&3&26&85&&&3000&4&20&60\
&&&4&41&NP&&950-1200&1000&83&NP&NP\
&1100-1800&1000&56&NP&NP&&&2000&10&81&$\ge$ 63\
&&&72&NP&NP&&&3000&3&24&45\
&&2000&7&54&$\ge$71&&1300-1700&1000&59&NP&NP\
&&&9&85&NP&&&2000&7&57&44\
&&3000&2&16&52&&&3000&2&17&32\
&&&3&25&75&&2000-2500&1000&40&NP&NP\
&2000-3000&1000&33&NP&NP&&&2000&5&39&71\
&&&43&NP&NP&&&3000&1&12&21\
&&2000&4&32&$\ge$43&&\~3000&1000&33&NP&NP\
&&&5&51&62&&&2000&4&32&60\
&&3000&1&10&31&&&3000&1&10&18\
&&&2&15&45&&\~3300&1000&30&NP&NP\
&\~3500&1000&29&NP&NP&&&2000&4&29&54\
&&&37&NP&NP&&&3000&1&9&16\
&&2000&5&28&90&&\~4500&500&$\ge$ 74&NP&NP\
&&&5&44&53&&&1000&22&$\ge$ 70&NP\
&&3000&1&8&27&&&2000&3&22&40\
&&&1&13&39&&&3000&0.8&6&12\
&\~4500&500&$\ge$ 73&NP&NP&&\~5400&500&$\ge$ 62&NP&NP\
&&&$\ge$ 93&NP&NP&&&1000&18&$\ge$ 59&NP\
&&1000&22&$\ge$ 70&NP&&&2000&2&18&33\
&&&29&NP&NP&&&3000&0.6&5&10\
&&2000&3&22&70&&&&&&\
&&&4&34&41&&&&&&\
&&3000&0.8&6&21&&&&&&\
&&&1&10&30&&&&&&\
[|c| c c c c c|]{} \[tab:tab22\] Ni&$\le$ 100&3000& $\ge$ 35&NP&NP\
&100-250&2000&$\ge$47&NP&NP\
&&3000&33&96&NP\
&250-600&2000&47&86&NP\
&&3000&14&55&NP\
&650-900&2000&31&83&$\ge$ 84\
&&3000&9&36&$\ge$\
&950-1200&1000&$\ge$79&NP&NP\
&&2000&23&$\ge$53&NP\
&&3000&7&48&93\
&1300-1700&1000&$\ge$56&NP&NP\
&&2000&16&78&$\ge$69\
&&3000&5&31&50\
&2000-2500&1000&90&NP&NP\
&&2000&11&62&$\ge$47\
&&3000&3&29&34\
&\~3000&1000&75&NP&NP\
&&2000&9&74&93\
&&3000&3&27&28\
&\~3300&1000&68&NP&NP\
&&2000&9&67&85\
&&3000&3&39&25\
&\~4500&1000&50&NP&NP\
&&2000&6&47&62\
&&3000&2&21&19\
&\~5400&1000&42&NP&NP\
&&2000&5&51&52\
&&3000&2&47&16\
A similar analysis to the one we perform in this work was recently carried out by [@KODOLANYI2017], where the authors measured concentrations of Fe and Ni isotopes in SiC X grains obtained from the KJD [@1994GeCoA..58..459A] and Mur2012B [@2014LPI....45.1031H] grain separates of the Murchison meteorite. Although they find Fe/Ni concentrations in different SiC X grains to vary over two orders of magnitude (from 0.36 to 37.6), their measurements are uncertain due to possible contamination from multiple phases during sample preparation, because the grains they analyzed were smaller and their diameters were comparable to the beam diameter of the desorption laser used. Keeping this in mind, the authors further discussed the concentrations of only three particular grains which they expected to be least affected by contamination (see section 4.2 in their paper). The first of these belonged to the KJD mount, while the other two belonged to the Mur2012B mount. The sizes of these three grains are between $\sim$0.5-1.0 $\micron$, similar to the ones we have simulated. For these three grains, the authors attempted to establish links with CCSN nucleosynthesis models through a variety of mixtures of elements from different zones. The nucleosynthesis models they used were from [@2002ApJ...576..323R] and [@2015ApJ...808L..43P], where the latter model included ingestion of abundances from the outermost H zone. However, for two of the three grains they were not able to reproduce the desired abundances of all the isotopes while maintaining C/O >1 with either of these models. Comparing the Fe/Ni ratios they find for the three grains (0.36, 1.26 and 1.24 respectively), we immediately see that we are able to reproduce the measured Fe/Ni ratios using the same nucleosynthesis model (15 M~$\odot$~) and mixing criteria we have considered for the grains analyzed in [@2008ApJ...689..622M] (see Table \[tab:tab1\]).
This is encouraging, and expected, because the grains belonged to the same meteorite; it is thus logical to argue that grains embedded in different parts of the same meteorite originated from the same CCSN.
Summary {#s:summ}
=======
We have developed a theoretical model to estimate the fractions of transition elements condensed and implanted in SiC X grains and compared them with the concentrations obtained from SiC X grains found in the Murchison meteorite. For this calculation, we analyzed ion-grain interactions over various sets of relative velocities, zonal velocities and temperatures using the ion-target simulator SDTrimSP. We use the nucleosynthesis zonal yield sets generated by S16 for 15, 20 and 25 M~$\odot$~ stellar models. We also take into account the temporal expansion of the ejecta through the free expansion phase and apply appropriate radioactive-decay corrections in our analytical calculations. The model is fairly versatile due to its two degrees of freedom (namely, ion velocity and differential zonal velocity) and can be applied to calculate implanted concentrations of all other elements (and their isotopes) in X grains condensed in SNe for all progenitor masses. Our main conclusions are as follows:
1. Backscattering is only effective at lower ion velocities (v < 1000 $\mathrm{km}\,\mathrm{s}^{-1}$) and highly oblique angles; only $\la$ 6% of the incident ions are backscattered over the range of velocities we consider in this work. Using our geometric model described in Appendix \[s:append\], we find that transmission dominates implantation for ion velocities > 4000 $\mathrm{km}\,\mathrm{s}^{-1}$ for SiC grains of size 1$\micron$. Maximum transmission is $\le$ 10% for velocities $\le$ 2500 $\mathrm{km}\,\mathrm{s}^{-1}$, after which it shoots up to > 50%, reaching almost unity for v > 4000 $\mathrm{km}\,\mathrm{s}^{-1}$. We also confirm that sputtering of Si and C atoms by transition elements is largely ineffective.
2. While the implantation of Fe, Co and Ni remains fairly independent of temperature (provided T $\le$ 2000 K), it decreases by more than half at T > 800 K for Cr and Zn. For Zn, this can be attributed to its volatility, while for Cr it is possibly due to the simulations using non-ionized Cr or to the formation of certain Cr-C complexes which can escape. The implantation fraction also decreases by half as relative velocities cross the 3000 $\mathrm{km}\,\mathrm{s}^{-1}$ threshold, where transmission becomes dominant. Almost all of the Zn concentration found in SiC X grains can be attributed to implantation, since Zn is volatile and any quantity co-condensed with SiC during grain condensation is highly likely to evaporate. Maximum ppm concentrations of Zn predicted to be implanted in a 1$\micron$ grain are \~1, 3 and 13 for the 15, 20 and 25 M~$\odot$~ models respectively. We also find that for a 5$\micron$ grain, even though fewer ions are implanted owing to its larger size, fewer ions are also lost to surface erosion, which is another factor leading to a higher implanted concentration in the larger grain. This happens because most ions are implanted into a 'penultimate' layer of the grain, at a depth below the outermost surface that gets eroded.
3. We also establish that transition-ion implantation into the core of the grain is quite possible for a suitable range of velocities, increasing the chances of survival of transition elements in the grains. This contradicts the assumptions made so far about the negligible impact of ion implantation in the grain's core and supports the hypothesis that smaller grains (rich in certain elements) later become embedded into bigger grains as sub-grains. Implantation is inhomogeneous and localized, as opposed to condensation, which is fairly homogeneous; however, this localization is more difficult to observe than previously thought, because all grain regions are within the reach of implantation, especially for smaller grains ($\la$ 1$\micron$). Hence, the measured concentrations, which vary over 3 orders of magnitude, reflect non-negligible contributions from varied exposures to ion implantation scenarios in the SNe.
4. We find that the observed relative abundances (due to implantation and condensation) of Fe, Co and Ni can only be explained by considering mixing from the innermost Ni and Si/S zones and are best matched by the concentrations calculated for the 15 M~$\odot$~ model. The mixtures we use agree well with the measured concentrations of Fe and Ni in SiC X grains from different mounts of the Murchison meteorite. Additionally, the S16 model confirms that such mixing can also explain the isotopic abundance of Si in SiC X grains. This is consistent with 3D simulations and subsequent observations of SNRs, which conclude that mixing begins as early as a few tens of seconds after explosion and ceases near \~10$^{5-6}$ seconds. We work with two sets of mixtures from the Ni, Si/S, He/C and He/N zones and follow two scenarios (for each mixture) for the role of $^{56}$Ni in the implantation of $^{56}$Fe, owing to its inhomogeneous distribution in the form of clumps. Mixing from the Si/S zone plays the most important role in calculating the concentrations of Ni and Co. Additionally, mixing from the He/C zone can be neither as high as 5% nor lower than 0.5% if the measured concentrations are to be explained. The concentrations of the other two elements, Cr and Zn, are relatively lower (of the order of 1 and 0.1 ppm for the 15 M~$\odot$~ model, 10 and 0.3 ppm for the 20 M~$\odot$~ model and 40 and 3 ppm for the 25 M~$\odot$~ model).
5. For grains where measured concentrations of Fe and Ni are $\ga$ 300 ppm, the implantation fraction is $\la$ 0.25 and condensation dominates implantation, whereas for other grains the implantation fraction can reach as high as \~0.6. The implantation fraction of Ni exceeds that of Fe, possibly because Ni is more volatile and hence has a higher chance of evaporating after condensation.
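The radioactive-decay corrections invoked above stem from the $^{56}$Ni $\rightarrow$ $^{56}$Co $\rightarrow$ $^{56}$Fe chain. A minimal Bateman-equation sketch of such a correction (half-lives from standard nuclear data; this is an illustration, not our production code):

```python
import math

# Half-lives in days for the 56Ni -> 56Co -> 56Fe chain (standard
# nuclear data); 56Fe is stable.
T_NI56, T_CO56 = 6.075, 77.24
L1, L2 = math.log(2.0) / T_NI56, math.log(2.0) / T_CO56

def chain(n0, t):
    """Bateman solution: atoms of 56Ni, 56Co and 56Fe at time t (days),
    starting from n0 atoms of pure 56Ni at t = 0."""
    n_ni = n0 * math.exp(-L1 * t)
    n_co = n0 * L1 / (L2 - L1) * (math.exp(-L1 * t) - math.exp(-L2 * t))
    n_fe = n0 - n_ni - n_co   # stable end product; the total is conserved
    return n_ni, n_co, n_fe
```

After a few years virtually all of the initial $^{56}$Ni has decayed to $^{56}$Fe, which is why implanted $^{56}$Fe must carry such a correction.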
The free expansion phase is the period of maximum activity in the newborn expanding ejecta; however, the amount of implantation beyond this period (where the temperature-distance equations become highly non-linear) must be investigated to account for changes in the grain structure that may dominate those set during this phase. Such changes may be unlikely, since the grains might develop protective layers of ice/organics or become embedded into larger grains, and the decrease in particle density due to the volumetric expansion of the ejecta lowers the probability of grain-ion interactions. In any case, studies of galactic chemical evolution should help resolve these questions.
*Acknowledgments* We are grateful to the anonymous referee whose comments helped improve this work. We acknowledge the utilization of Vikram 100 HPC Supercomputing Facility at PRL, Ahmedabad for computations related to SDTrimSP. We are indebted to Andreas Mutzke for his guidance while using SDTrimSP, Tuguldur Sukhbold for discussions on nucleosynthesis yields and Arkaprabha Sarangi for discussions on SiC condensation.
P.S. would like to thank Sanjukta Dhar and Akarsh Relhan for running simulations on SDTrimSP. The project was funded under grant SERB-WE (964), Science and Engineering Research Board, Govt. of India.
Geometrical Considerations for ion Transmission through the grain {#s:append}
=================================================================
Here, we describe an approximate geometrical model to calculate the fraction of ions transmitted through a spherical grain. Simulations in TRIM and SDTrimSP assume planar target surfaces, whereas we assume the grains to be spherical. The model backtraces the linear trajectory of a transmitted ion to a sphere inscribed in the planar target and finds the extra length such an ion has to travel in the planar target compared to a spherical one (see section \[s:implan\] for a discussion on the validity of linear trajectories). We perform this calculation of the extra length for every transmitted ion and assign each one a weight based on the ratio of the extra length to the total length traversed. The weights are calculated separately for each dimension and then multiplied. For example, if the weight for a transmitted ion comes out as 1.05, it implies that 1.05 ions would have been transmitted had the surface been spherical.
Let us call the length $\overline{AC}$ = $S_y$ (for the y direction) and $S_z$ (for the z direction). A total of four cases are developed for each axis, such that each case in the region where y or z > 0 (called positive cases) has a corresponding case in the region where y or z < 0 (called negative cases). We note that $\theta \leq 135^{\circ}$ for positive cases, otherwise the ion path would not trace back to the sphere. Similarly, $45\,<\,\theta\,<\,180$ for negative cases. The y and z values are limited to $\pm$R. We only discuss the first case, as illustrated in Figure \[fig:append\]; the other three cases follow suit and can be worked out in a straightforward manner. For this case, $0\,<\,\theta\,<\,90$, $\theta\,>\,\beta$, z (or y) $\leq$ R. Following the notation in Figure \[fig:append\] and using the law of cosines, the following equations can be derived for $\Delta$OAC:
$$\angle OCA = \theta-\beta$$
$$\mathit{\cos(\theta-\beta) = \frac{\overline{AC}^2+\overline{OC}^2-\overline{OA}^2}{2\,\overline{AC}\cdot\overline{OC}}}$$
Since, $\overline{OC} = \sqrt{R^2+z^2}$ and $\overline{OA} = R$,
$$\mathit{\cos(\theta-\beta) = \frac{\overline{AC}^2+z^2}{2\,\overline{AC}\cdot\sqrt{R^2+z^2}}}$$
$$\mathit{\overline{AC} = \cos(\theta-\beta)\sqrt{R^2+z^2}\pm\sqrt{(R^2+z^2)\,\cos^2(\theta-\beta)-z^2}}$$
The positive solution is to be discarded since $\beta > 0$ for our model. Thus,
$$\mathit{\overline{AC} = \cos(\theta-\beta)\sqrt{R^2+z^2}-\sqrt{(R^2+z^2)\,\cos^2(\theta-\beta)-z^2}}$$
where, $\overline{AC}$ is the extra length. Then, the weight is given by:
$$\mathit{W=\frac{L_{py}}{L_{py}-S_y}\frac{L_{pz}}{L_{pz}-S_z}}$$
where, $L_{pi}$ is the length traversed in the planar target, measured from the center in the $i^{th}$ direction.
There are a few cases (especially at very oblique angles of incidence) wherein the particle gets transmitted because it crosses the boundary of the target in only the y or z direction. In such cases, the weights are computed as:
$$\mathit{W=\frac{L_{p\,(y,z)}}{L_{p\,(y,z)}-S_{\,(y,z)}}}$$
where, the subscript $(y,z)$ implies either $y$ or $z$.
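The case-1 extra length and the resulting per-ion weight can be sketched as below (a minimal illustration of the equations above; the function names and the zero return for a negative discriminant, i.e. a trajectory that does not trace back to the sphere, are our assumptions):

```python
import math

def extra_length(R, z, theta, beta):
    """Extra path length S = AC (case 1) that an ion travels in a planar
    target compared to a sphere of radius R; z is the exit offset and
    theta, beta are the angles of Figure [fig:append], in radians."""
    c = math.cos(theta - beta)
    disc = (R * R + z * z) * c * c - z * z
    if disc < 0.0:
        return 0.0  # assumption: trajectory does not trace back to the sphere
    return c * math.sqrt(R * R + z * z) - math.sqrt(disc)

def transmission_weight(L_py, L_pz, S_y, S_z):
    """Weight W for a transmitted ion: per-axis planar path lengths
    divided by the sphere-corrected lengths, then multiplied."""
    return (L_py / (L_py - S_y)) * (L_pz / (L_pz - S_z))
```

A weight of 1.05, for instance, means that 1.05 ions would have been transmitted had the target been spherical, as described in the text.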
\[fig:append\] {width="0.4\linewidth"}
[^1]: https://2sn.org/kepler/doc/Introduction.html
[^2]: T. Sukhbold, *Private Communication*
[^3]: Each model has a particular progenitor mass.
[^4]: The term ’free expansion phase’ refers to the initial few hundred years after a SN explosion, where the ejecta moves outwards with negligible deceleration. It is also called the pre-Sedov phase. This term is not really accurate because a reverse shock already starts traveling inwards during this phase which can heat the material it encounters to X-ray emitting temperatures.
[^5]: 2D simulations of core collapse supernova find a higher yield of Zn as compared to S16, see @2005ApJ...623..325P [@2017arXiv170106786W]
[^6]: Fe yields given in S16 models are calibrated to their upper bounds calculated in P-HOTB in S16 since it is underproduced in massive stars
[^7]: All the discussion in this subsection is with respect to a 1$\micron$ grain, unless stated otherwise.
[^8]: Other possible explanations proposed by @2003ApJ...598..785N are the formation of O$_2$ or CO and diffusion through the grain
[^9]: *Source:* Royal Society of Chemistry, www.rsc.org
[^10]: This observation was noted by running simulations at T = 1600 K and T = 2000 K whose results overlapped with the trends at T = 1200 K as shown in Figure \[fig:fig2\].
[^11]: The notion of a uniformly expanding ejecta will especially not hold if the shocked material is running into denser ambient material, like molecular clouds, or is shocked by the primary shock, cooled adiabatically, and re$-$shocked by multiple reverse shocks (Sharda et. al. 2017, in preparation).
[^12]: The same assumption is valid for the other four ions because the contribution to their yields by the winds before explosion is negligible compared to their abundances produced after explosion (S16).
[^13]: While laboratory measurements from other instruments may contain errors due to sample contamination [@2006ApSS..252.7117H; @2008LPI....39.2135K], those from NanoSIMS are able to avoid this and have been shown to be quite accurate
[^14]: This average is the mean of concentrations measured by [@2008ApJ...689..622M].
[^15]: This is unlike the algorithm used by @2007ApJ...666.1048Y to explain observed isotopic ratios (of same elements) in SiC X grains by manipulating mixtures from different zones according to each individual grain.
[^16]: See National Nuclear Data Centre, Brookhaven National Laboratory’s website for details on decay times and decay cascades: https://www.nndc.bnl.gov/ensdf/
|
subroutine eomccsd_1prdm(rtdb,omega,d_f1,d_v2,
1 k_f1_offset,k_v2_offset,
1 d_t1,d_t2,d_x1,d_x2,d_y1,d_y2,
1 k_t1_offset,k_t2_offset,
2 k_x1_offset,k_x2_offset,
3 k_y1_offset,k_y2_offset)
implicit none
#include "global.fh"
#include "mafdecls.fh"
#include "util.fh"
#include "rtdb.fh"
#include "errquit.fh"
#include "tce.fh"
#include "tce_main.fh"
#include "stdio.fh"
integer rtdb
double precision omega
integer d_f1,d_v2
integer k_f1_offset,k_v2_offset
integer d_t1,d_t2,d_x1,d_x2,d_y1,d_y2
integer k_t1_offset,k_t2_offset
integer k_x1_offset,k_x2_offset
integer k_y1_offset,k_y2_offset
c
integer d_e,l_e_offset,k_e_offset,size_e
double precision r0
integer d_d0,l_d0_offset,k_d0_offset,size_d0
double precision denominator
integer d_x0,l_x0_offset,k_x0_offset,size_x0
c
character*256 filename
logical nodezero
integer d_hh,l_hh_offset,k_hh_offset,size_hh
integer d_hp,l_hp_offset,k_hp_offset,size_hp
integer d_ph,l_ph_offset,k_ph_offset,size_ph
integer d_pp,l_pp_offset,k_pp_offset,size_pp
integer dim_rdm_ao,l_rdm_ao,k_rdm_ao
integer dim_mo_h,l_mo_h,k_mo_h,l_mo_h_tmp,k_mo_h_tmp
integer dim_mo_p,l_mo_p,k_mo_p,l_mo_p_tmp,k_mo_p_tmp
integer dim_mu_h,l_mu_h,k_mu_h
integer dim_mu_p,l_mu_p,k_mu_p
integer dim_rdm_mo_hh,l_rdm_mo_hh,k_rdm_mo_hh
integer dim_rdm_mo_hp,l_rdm_mo_hp,k_rdm_mo_hp
integer dim_rdm_mo_ph,l_rdm_mo_ph,k_rdm_mo_ph
integer dim_rdm_mo_pp,l_rdm_mo_pp,k_rdm_mo_pp
integer nh,np,i,j,particle,hole
logical ao_rdm_write
external ao_rdm_write
c
c
nodezero=(ga_nodeid().eq.0)
c
c calculate R0
c
call tce_filename('e',filename)
call tce_e_offset(l_e_offset,k_e_offset,size_e)
call createfile(filename,d_e,size_e)
call tce_zero(d_e,size_e)
call nr0(d_f1,d_e,d_t1,d_v2,d_x1,d_x2,k_f1_offset,
1 k_e_offset,k_t1_offset,k_v2_offset,
2 k_x1_offset,k_x2_offset)
call reconcilefile(d_e,1)
call get_block(d_e,r0,1,0)
if(dabs(omega).gt.1.0d-7) then
r0 = r0/omega
else
call errquit('eomccsd_1prdm: r0 is infinity',0,ma_err)
end if
call deletefile(d_e)
if(.not.ma_pop_stack(l_e_offset))
1 call errquit('eomccsd_1prdm: ma problem',1,ma_err)
c
c calculate denominator
c
call tce_filename('d_d0',filename)
call tce_e_offset(l_d0_offset,k_d0_offset,size_d0)
call createfile(filename,d_d0,size_d0)
call tce_zero(d_d0,size_d0)
call eomccsd_denominator(d_d0,d_x1,d_x2,d_y1,d_y2,
1 k_d0_offset,k_x1_offset,
2 k_x2_offset,k_y1_offset,k_y2_offset)
call reconcilefile(d_d0,size_d0)
call get_block(d_d0,denominator,1,0)
call deletefile(d_d0)
if(.not.ma_pop_stack(l_d0_offset))
1 call errquit('eomccsd_1prdm: ma problem',2,ma_err)
c
c put r0 value to d_x0
c
call tce_filename('d_x0',filename)
call tce_e_offset(l_x0_offset,k_x0_offset,size_x0)
call createfile(filename,d_x0,size_x0)
call tce_zero(d_x0,size_x0)
call put_block(d_x0,r0,1,0)
c
c allocate memory for 1PRDM with AO basis
c
dim_rdm_ao=nbf*nbf
if(.not.ma_push_get(mt_dbl,dim_rdm_ao,'rdm_ao',
1 l_rdm_ao,k_rdm_ao))
2 call errquit('eomccsd_1prdm: ma problem',1,ma_err)
do i=1,dim_rdm_ao
dbl_mb(k_rdm_ao+i-1)=0.d0
enddo
c
c allocate memory for MOs
c
nh=nocc(1)+nocc(ipol)
np=nmo(1)+nmo(ipol)-nh
dim_mo_h=nh*nbf
dim_mo_p=np*nbf
if(.not.ma_push_get(mt_dbl,dim_mo_h,'mo_h',
1 l_mo_h,k_mo_h))
2 call errquit('eomccsd_1prdm: ma problem',2,ma_err)
do i=1,dim_mo_h
dbl_mb(k_mo_h+i-1)=0.d0
enddo
if(.not.ma_push_get(mt_dbl,dim_mo_p,'mo_p',
1 l_mo_p,k_mo_p))
2 call errquit('eomccsd_1prdm: ma problem',3,ma_err)
do i=1,dim_mo_p
dbl_mb(k_mo_p+i-1)=0.d0
enddo
c
c get the MOs from GA and sort them by column index
c
if(.not.ma_push_get(mt_dbl,dim_mo_h,'mo_h_tmp',
1 l_mo_h_tmp,k_mo_h_tmp))
2 call errquit('eomccsd_1prdm: ma problem',4,ma_err)
do i=1,dim_mo_h
dbl_mb(k_mo_h_tmp+i-1)=0.d0
enddo
c
c hole alpha
c
do hole=1,nocc(1)
i=2*hole-1
call ga_get(g_movecs(1),1,nbf,hole,hole,
1 dbl_mb(k_mo_h_tmp+(i-1)*nbf),nbf)
enddo
c
c hole beta
c
do hole=1,nocc(ipol)
i=2*hole
call ga_get(g_movecs(ipol),1,nbf,hole,hole,
1 dbl_mb(k_mo_h_tmp+(i-1)*nbf),nbf)
enddo
c
c make them sorted according to column index
c
do i=1,nh
do j=1,nbf
dbl_mb(k_mo_h+(j-1)*nh+(i-1))=
1 dbl_mb(k_mo_h_tmp+(i-1)*nbf+(j-1))
enddo
enddo
if(.not.ma_pop_stack(l_mo_h_tmp))
1 call errquit('eomccsd_1prdm: ma problem',5,ma_err)
c
c particle alpha
c
if(.not.ma_push_get(mt_dbl,dim_mo_p,'mo_p_tmp',
1 l_mo_p_tmp,k_mo_p_tmp))
2 call errquit('eomccsd_1prdm: ma problem',6, ma_err)
do i=1,dim_mo_p
dbl_mb(k_mo_p_tmp+i-1)=0.d0
enddo
c
do particle=nocc(1)+1, nmo(1)
i=2*particle-1-nh
call ga_get(g_movecs(1),1,nbf,particle,particle,
1 dbl_mb(k_mo_p_tmp+(i-1)*nbf),nbf)
enddo
c
c particle beta
c
do particle=nocc(ipol)+1,nmo(ipol)
i=2*particle-nh
call ga_get(g_movecs(ipol),1,nbf,particle,particle,
1 dbl_mb(k_mo_p_tmp+(i-1)*nbf),nbf)
enddo
c
c make them sorted according to column index
c
do i=1,np
do j=1,nbf
dbl_mb(k_mo_p+(j-1)*np+(i-1))=
1 dbl_mb(k_mo_p_tmp+(i-1)*nbf+(j-1))
enddo
enddo
if(.not.ma_pop_stack(l_mo_p_tmp))
1 call errquit('eomccsd_1prdm: ma problem',7,ma_err)
c
c allocate memory for intermediates
c
dim_mu_h = nbf*nh
dim_mu_p = nbf*np
if(.not.ma_push_get(mt_dbl,dim_mu_h,'mu_h',
1 l_mu_h,k_mu_h))
2 call errquit('eomccsd_1prdm: ma problem',8, ma_err)
do i=1,dim_mu_h
dbl_mb(k_mu_h+i-1)=0.d0
enddo
if(.not.ma_push_get(mt_dbl,dim_mu_p,'mu_p',
1 l_mu_p,k_mu_p))
2 call errquit('eomccsd_1prdm: ma problem',9, ma_err)
do i=1, dim_mu_p
dbl_mb(k_mu_p+i-1)=0.d0
enddo
c
c-eomccsd_1prdm_hh
c
c allocate memory for hh block
dim_rdm_mo_hh = nh*nh
if(.not.ma_push_get(mt_dbl,dim_rdm_mo_hh,'rdm_mo_hh',
1 l_rdm_mo_hh,k_rdm_mo_hh))
2 call errquit('eomccsd_1prdm: ma problem',10,ma_err)
do i=1,dim_rdm_mo_hh
dbl_mb(k_rdm_mo_hh+i-1)=0.d0
enddo
c
call tce_filename('hh',filename)
call tce_dens_hh_offset(l_hh_offset,k_hh_offset,size_hh)
call createfile(filename,d_hh,size_hh)
call dratoga(d_x1)
call dratoga(d_x2)
call dratoga(d_y1)
call dratoga(d_y2)
call eomccsd_1prdm_hh(d_hh,d_t1,d_t2,d_x0,d_x1,d_x2,d_y1,d_y2,
1 k_hh_offset,k_t1_offset,k_t2_offset,k_x0_offset,
2 k_x1_offset,k_x2_offset,
3 k_y1_offset,k_y2_offset)
call reconcilefile(d_hh,size_hh)
call gatodra(d_y2)
call gatodra(d_y1)
call gatodra(d_x2)
call gatodra(d_x1)
call get_mo_rdm_hh(d_hh,k_hh_offset,k_rdm_mo_hh,denominator)
call deletefile(d_hh)
if (.not.ma_pop_stack(l_hh_offset))
1 call errquit("eomccsd_1prdm: ma problem",11,ma_err)
c
c do the matrix multiplication
c
call dgemm('t','n',nbf,nh,nh,1.d0,dbl_mb(k_mo_h),
1 nh,dbl_mb(k_rdm_mo_hh),nh,
2 0.d0,dbl_mb(k_mu_h),nbf)
call dgemm('n','n',nbf,nbf,nh,1.0d0,dbl_mb(k_mu_h),nbf,
1 dbl_mb(k_mo_h),nh,0.d0, dbl_mb(k_rdm_ao),nbf)
c
c release hh block memory
c
if(.not.ma_pop_stack(l_rdm_mo_hh))
1 call errquit('eomccsd_1prdm: ma problem',12,ma_err)
do i=1,dim_mu_h
dbl_mb(k_mu_h+i-1)=0.d0
enddo
c
c eomccsd_1prdm_hp
c
c
c allocate memory for hp block
c
dim_rdm_mo_hp = nh*np
if(.not.ma_push_get(mt_dbl,dim_rdm_mo_hp,'rdm_mo_hp',
1 l_rdm_mo_hp,k_rdm_mo_hp))
2 call errquit('eomccsd_1prdm: ma problem',13,ma_err)
do i=1,dim_rdm_mo_hp
dbl_mb(k_rdm_mo_hp+i-1)=0.d0
enddo
c
c-eomccsd_1prdm_hp
c
call tce_filename('hp',filename)
call tce_dens_hp_offset(l_hp_offset,k_hp_offset,size_hp)
call createfile(filename,d_hp,size_hp)
call dratoga(d_x1)
call dratoga(d_x2)
call dratoga(d_y1)
call dratoga(d_y2)
call eomccsd_1prdm_hp(d_hp,d_x0,d_x1,d_y1,d_y2,
1 k_hp_offset,k_x0_offset,k_x1_offset,
2 k_y1_offset,k_y2_offset)
call reconcilefile(d_hp,size_hp)
call gatodra(d_y2)
call gatodra(d_y1)
call gatodra(d_x2)
call gatodra(d_x1)
call get_mo_rdm_hp(d_hp,k_hp_offset,k_rdm_mo_hp,denominator)
call deletefile(d_hp)
if (.not.ma_pop_stack(l_hp_offset))
1 call errquit("eomccsd_1prdm: ma problem",14,ma_err)
c
c do the matrix multiplication
c
call dgemm('t','n',nbf,np,nh,1.0d0,dbl_mb(k_mo_h),
1 nh,dbl_mb(k_rdm_mo_hp),nh,
2 0.d0,dbl_mb(k_mu_p),nbf)
call dgemm('n','n',nbf,nbf,np,1.0d0,dbl_mb(k_mu_p),nbf,
1 dbl_mb(k_mo_p),np,1.0d0, dbl_mb(k_rdm_ao),nbf)
c
c release hp block memory
c
if(.not.ma_pop_stack(l_rdm_mo_hp))
1 call errquit('eomccsd_1prdm: ma problem',15,ma_err)
do i=1,dim_mu_p
dbl_mb(k_mu_p+i-1)=0.d0
enddo
c
c eomccsd_1prdm_ph
c
c allocate memory for ph block
c
dim_rdm_mo_ph=np*nh
if(.not.ma_push_get(mt_dbl,dim_rdm_mo_ph,'rdm_mo_ph',
1 l_rdm_mo_ph, k_rdm_mo_ph))
2 call errquit('eomccsd_1prdm: ma problem',91,ma_err)
do i=1,dim_rdm_mo_ph
dbl_mb(k_rdm_mo_ph+i-1)=0.d0
enddo
call tce_filename('ph',filename)
call tce_dens_ph_offset(l_ph_offset,k_ph_offset,
1 size_ph)
call createfile(filename,d_ph,size_ph)
call dratoga(d_x1)
call dratoga(d_x2)
call dratoga(d_y1)
call dratoga(d_y2)
call eomccsd_1prdm_ph(d_ph,d_t1,d_t2,d_x0,d_x1,d_x2,d_y1,d_y2,
1 k_ph_offset,k_t1_offset,k_t2_offset,
2 k_x0_offset,k_x1_offset,k_x2_offset,
2 k_y1_offset,k_y2_offset)
call reconcilefile(d_ph,size_ph)
call gatodra(d_y2)
call gatodra(d_y1)
call gatodra(d_x2)
call gatodra(d_x1)
call get_mo_rdm_ph(d_ph,k_ph_offset,k_rdm_mo_ph,denominator)
call deletefile(d_ph)
if(.not.ma_pop_stack(l_ph_offset))
1 call errquit("eomccsd_1prdm: ma problem",16,ma_err)
c
c
c do the matrix multiplication
c
call dgemm('t','n',nbf,nh,np,1.0d0,dbl_mb(k_mo_p),
1 np,dbl_mb(k_rdm_mo_ph),np,
2 0.d0,dbl_mb(k_mu_h),nbf)
call dgemm('n','n',nbf,nbf,nh,1.0d0,dbl_mb(k_mu_h),nbf,
1 dbl_mb(k_mo_h),nh,1.0d0, dbl_mb(k_rdm_ao),nbf)
c
c release memory for ph block
c
if(.not.ma_pop_stack(l_rdm_mo_ph))
1 call errquit('eomccsd_1prdm: ma problem',17,ma_err)
do i=1,dim_mu_h
dbl_mb(k_mu_h+i-1)=0.d0
enddo
c
c-eomccsd_1prdm_pp
c
c allocate memory for pp block
c
dim_rdm_mo_pp=np*np
if(.not.ma_push_get(mt_dbl,dim_rdm_mo_pp,'rdm_mo_pp',
1 l_rdm_mo_pp,k_rdm_mo_pp))
2 call errquit('eomccsd_1prdm: ma problem',18,ma_err)
do i=1,dim_rdm_mo_pp
dbl_mb(k_rdm_mo_pp+i-1)=0.d0
enddo
call tce_filename('pp',filename)
call tce_dens_pp_offset(l_pp_offset,k_pp_offset,size_pp)
call createfile(filename,d_pp,size_pp)
call dratoga(d_x1)
call dratoga(d_x2)
call dratoga(d_y1)
call dratoga(d_y2)
call eomccsd_1prdm_pp(d_pp,d_t1,d_t2,d_x0,d_x1,d_x2,
1 d_y1,d_y2,k_pp_offset,k_t1_offset,k_t2_offset,
2 k_x0_offset,k_x1_offset,k_x2_offset,
3 k_y1_offset,k_y2_offset)
call reconcilefile(d_pp,size_pp)
call gatodra(d_y2)
call gatodra(d_y1)
call gatodra(d_x2)
call gatodra(d_x1)
call get_mo_rdm_pp(d_pp,k_pp_offset,k_rdm_mo_pp,denominator)
call deletefile(d_pp)
if (.not.ma_pop_stack(l_pp_offset))
1 call errquit("eomccsd_1prdm: ma problem",19,ma_err)
call dgemm('t','n',nbf,np,np,1.0d0,dbl_mb(k_mo_p),
1 np,dbl_mb(k_rdm_mo_pp),np,
2 0.d0,dbl_mb(k_mu_p),nbf)
call dgemm('n','n',nbf,nbf,np,1.0d0,dbl_mb(k_mu_p),nbf,
1 dbl_mb(k_mo_p),np,1.0d0, dbl_mb(k_rdm_ao),nbf)
c
c
if(nodezero.and.util_print('densmat',print_high)) then
write(luout,*) '==================================='
write(luout,*) 'Debug information of density matrix'
write(luout,*) '==================================='
do i=1,nbf
do j=1,nbf
if(abs(dbl_mb(k_rdm_ao+(i-1)+(j-1)*nbf)).gt.1.d-8)
1 write(luout,'(i5,i5,f20.16)') i,j,
1 (dbl_mb(k_rdm_ao+(i-1)+(j-1)*nbf)+
1 dbl_mb(k_rdm_ao+(j-1)+(i-1)*nbf))/2.d0
enddo
enddo
write(luout,*) '==================================='
write(luout,*) ' End of debug information '
write(luout,*) '==================================='
endif
c
c release the memory for pp block
c
if(.not.ma_pop_stack(l_rdm_mo_pp))
1 call errquit('eomccsd_1prdm: ma problem',20,ma_err)
c
c release the memory for intermediates
c
if(.not.ma_pop_stack(l_mu_p))
1 call errquit('eomccsd_1prdm: ma problem',21,ma_err)
if(.not.ma_pop_stack(l_mu_h))
1 call errquit('eomccsd_1prdm: ma problem',22,ma_err)
c
c release the memory for MOs
c
if(.not.ma_pop_stack(l_mo_p))
1 call errquit('eomccsd_1prdm: ma problem',23,ma_err)
if(.not.ma_pop_stack(l_mo_h))
1 call errquit('eomccsd_1prdm: ma problem',24,ma_err)
c
c dump the ao rdm to a file
c
if (.not.rtdb_cget(rtdb,'tce:file_densmat',1,filename))
1 call errquit('eomccsd_1prdm: rtdb_cget failed - file_densmat',
1 0,RTDB_ERR)
if(.not.ao_rdm_write(filename,k_rdm_ao))
1 call errquit('eomccsd_1prdm: disk problem',1,disk_err)
if(.not.ma_pop_stack(l_rdm_ao))
1 call errquit('eomccsd_1prdm: ma problem',03,ma_err)
c
c
c
call deletefile(d_x0)
if(.not.ma_pop_stack(l_x0_offset))
1 call errquit('eomccsd_1prdm: ma problem',04,ma_err)
end
c $Id$
|
framework module TTLoadTime {
umbrella header "TTLoadTime.h"
export *
module * { export * }
}
|