Q: How to style a chat bubble in the classic iPhone style using CSS only I am trying to create an HTML page that looks similar to the Messages thread view on Android and iPhone devices. Here is what I have coded. CSS styles:
<style type='text/css'>
.triangle-right { position:relative; padding:15px; color:#fff; background:#075698; background:-webkit-gradient(linear, 0 0, 0 100%, from(#2e88c4), to(#075698)); background:-moz-linear-gradient(#2e88c4, #075698); background:-o-linear-gradient(#2e88c4, #075698); background:linear-gradient(#2e88c4, #075698); -webkit-border-radius:10px; -moz-border-radius:10px; border-radius:10px; }
.triangle-right.top { background:-webkit-gradient(linear, 0 0, 0 100%, from(#075698), to(#2e88c4)); background:-moz-linear-gradient(#075698, #2e88c4); background:-o-linear-gradient(#075698, #2e88c4); background:linear-gradient(#075698, #2e88c4); }
.triangle-right.left { margin-left:10px; background:#075698; }
.triangle-right.right { margin-right:10px; background:#075698; }
.triangle-right:after { content:''; position:absolute; bottom:-20px; left:50px; border-width:20px 0 0 20px; border-style:solid; border-color:#075698 transparent; display:block; width:0; }
.triangle-right.top:after { top:-20px; right:50px; bottom:auto; left:auto; border-width:20px 20px 0 0; border-color:transparent #075698; }
.triangle-right.left:after { top:16px; left:-15px; bottom:auto; border-width:0 15px 15px 0; border-color:transparent #E8E177; }
.triangle-right.right:after { top:16px; right:-15px; bottom:auto; left:auto; border-width:0 0 15px 15px; border-color:transparent #8EC3E2; }
.triangle { width: 0; height: 0; border-left: 50px solid transparent; border-right: 100px solid transparent; border-bottom: 50px solid #fc2e5a; }
</style>
I tried changing some values in
.triangle-right.left:after { top:16px; left:-15px; bottom:auto; border-width:0 15px 15px 0; border-color:transparent #E8E177; }
.triangle-right.right:after { top:16px; right:-15px; bottom:auto; left:auto; border-width:0 0 15px 15px; border-color:transparent #8EC3E2; }
but I am not getting the exact shapes desired. I need to construct the bubble in the following fashion. Can anyone help me? A: The HTML
<div class="chat">
  <div class="yours messages">
    <div class="message last">
      Hello, how's it going?
    </div>
  </div>
  <div class="mine messages">
    <div class="message">
      Great thanks!
    </div>
    <div class="message last">
      How about you?
    </div>
  </div>
</div>
The CSS
body { font-family: helvetica; display: flex; flex-direction: column; align-items: center; }
.chat { width: 300px; border: solid 1px #EEE; display: flex; flex-direction: column; padding: 10px; }
.messages { margin-top: 30px; display: flex; flex-direction: column; }
.message { border-radius: 20px; padding: 8px 15px; margin-top: 5px; margin-bottom: 5px; display: inline-block; }
.yours { align-items: flex-start; }
.yours .message { margin-right: 25%; background-color: #EEE; position: relative; }
.yours .message.last:before { content: ""; position: absolute; z-index: 0; bottom: 0; left: -7px; height: 20px; width: 20px; background: #EEE; border-bottom-right-radius: 15px; }
.yours .message.last:after { content: ""; position: absolute; z-index: 1; bottom: 0; left: -10px; width: 10px; height: 20px; background: white; border-bottom-right-radius: 10px; }
.mine { align-items: flex-end; }
.mine .message { color: white; margin-left: 25%; background: rgb(0, 120, 254); position: relative; }
.mine .message.last:before { content: ""; position: absolute; z-index: 0; bottom: 0; right: -8px; height: 20px; width: 20px; background: rgb(0, 120, 254); border-bottom-left-radius: 15px; }
.mine .message.last:after { content: ""; position: absolute; z-index: 1; bottom: 0; right: -10px; width: 10px; height: 20px; background: white; border-bottom-left-radius: 10px; }
https://codepen.io/swards/pen/gxQmbj A: Try this code for a thread view of messages.
<div class="messages scroll">
  <div class="item blue">
    <div class="arrow"></div>
    <div class="text">
      Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus ut diam quis dolor mollis tristique. Suspendisse vestibulum convallis felis vitae facilisis. Praesent eu nisi vestibulum erat.
    </div>
    <div class="date">09.02.2013, 21:04</div>
  </div>
</div>
CSS styles:
/* messages */
.body .content .block .messages { position: relative; }
.body .content .block .messages .item { width: 90%; padding: 5px; position: relative; margin: 10px 0px 0px; float: left; }
.body .content .block .messages .item.out { float: right; margin: 10px 0px 10px; }
.body .content .block .messages .item .arrow { border-color: transparent transparent #009AD7 #009AD7; border-style: solid; border-width: 5px; width: 0px; height: 0px; position: absolute; left: 10px; top: -10px; }
.body .content .block .messages .item.out .arrow { left: auto; top: auto; right: 10px; bottom: -10px; border-color: #005683 #005683 transparent transparent; }
.body .content .block .messages .item .text { font-size: 12px; color: #FFF; line-height: 13px; }
.body .content .block .messages .item .date { font-size: 12px; color: #FFF; text-align: right; opacity: 0.6; filter: alpha(opacity=60); line-height: 13px; }
/* eof messages */
Thanks, Kamalakannan.M
Survival after segmentectomy and wedge resection in stage I non-small-cell lung cancer. Although lobectomy is considered the standard surgical treatment for stage IA non-small-cell lung cancer (NSCLC), wedge resection or segmentectomy are frequently performed on patients who are not lobectomy candidates. The objective of this study was to compare survival among patients with stage IA NSCLC, who are undergoing these procedures. Using the Surveillance, Epidemiology and End Results registry, we identified 3525 patients. We used logistic regression to determine propensity scores for patients undergoing segmentectomy, based on the patient's preoperative characteristics. Overall and lung cancer-specific survival of patients treated with wedge resection versus segmentectomy was compared after adjusting, stratifying, or matching patients based on propensity score. Overall, 704 patients (20%) underwent segmentectomy. Analyses, adjusting for propensity scores, showed that segmentectomy was associated with significant improvement in overall (hazard ratio: 0.80, 95% confidence interval: 0.69-0.93) and lung cancer-specific survival (hazard ratio: 0.72, 95% confidence interval: 0.59-0.88) compared with wedge resection. Similar results were obtained when stratifying and matching by propensity score and when limiting analysis to patients with tumors sized less than or equal to 2 cm, or aged 70 years or younger. These results suggest that segmentectomy should be the preferred technique for limited resection of patients with stage IA NSCLC. The study findings should be confirmed in prospective studies.
Related Items Articles It's not every day you get a politician ready with numbers to counter a dismal poll, but there was the Liberal leader looking as fiery as Jon Gerrard gets, girding for a fight. "I don't believe that five per cent," Gerrard retorted Tuesday when asked about the CJOB/Viewpoints telephone survey of 579 Manitobans on their voting intentions. The poll noted an impressive undecided figure, about 19 per cent of respondents. But five per cent for a party looks deadly, no? "It just doesn't fit with a lot of other things we're hearing," he told the Free Press editorial board. He noted Viewpoints is a credible company, but it is a firm with NDP ties -- it's run by former premier Gary Doer's wife Ginny Devine. Liberal communications director David Shorr, new to the campaign game, was more helpful. In fact, he was irrepressibly chatty about what the Liberals are picking up at the doorsteps and in their phone polls -- nothing nearly as scientific as the non-random CP/Environics online poll of 1,000, also out Tuesday, where respondents were recruited and compensated, right? Opinion polling today leaves much to creative interpretation. You gotta love a neophyte's unbridled enthusiasm. Shorr eagerly shared the good news flowing into campaign headquarters: Liberal Paul Hesse, no stranger to the campaign trail, is five points behind the NDP's Jennifer Howard in Fort Rouge, where the boundary has been redrawn. Burrows, wide open with the resignation of the NDP's Doug Martindale, is getting the Liberals absolutely giddy with anticipation. They're also excited about Logan, a new riding, where NDP minister Flor Marcelino is running, having lost her Wellington riding in the 2008 redistribution. In Tyndall Park, the Liberals are optimistic with candidate Roldan Sevillano, hand-picked by one of the few federal Liberals to get elected west of, well, anywhere I guess, Kevin Lamoureux. 
Sevillano has Lamoureux's well-greased election machinery working for him in a brand new riding where the NDP are running Marcelino's brother-in-law Ted Marcelino and the former NDPer Cris Aglugub is reborn as a Tory. Meanwhile, in Minto -- what? Justice Minister Andrew Swan in trouble after seven years in the legislature? Really? Like I said, there's some unbridled enthusiasm stoking wildfires here. But without a doubt, some of these ridings are up for the pickin', due to either resignations or redistribution. The NDP is facing challenges from the Tories in south Winnipeg, swinging east/northeast, and central/northwest from the Liberals. Shorr said the party thinks it's hit 18 per cent support in Winnipeg, maybe 15 per cent outside of the city. He tempered that after seeing Environics/Canadian Press found provincial support for the Liberals at 10 per cent in a survey with a massive undecided cohort and that also sees the Tories slightly ahead of the NDP. In a province where the right-of-centre Tories and left-of-centre NDP are increasingly crowding the centre, how is it the Liberals, famously centre-of-centre, can muster 18 per cent support? A couple of things to reflect upon: First, I missed the NDP surge federally this spring, not entirely but enough to throw water on any instinct to speculate on election returns. Second, the federal NDP capitalized in areas of Canada where the electorate was feeling poorly served by the choices at hand. Shorr doesn't think the Tories' bewildering promise to run Manitoba into deficit until 2018 has played into Liberal favour. Some ridings traditionally represented by the NDP are simply feeling neglected, he said. Further, he said campaign fundraising hasn't been so good since the Carstairs era. Sharon Carstairs ran her wildly successful 1988 campaign on a relative shoestring compared to the campaigns of 1990 and 1995. 
The Liberals are predicting a caucus of four seats -- Gerrard, the lone MLA since Lamoureux bolted for Ottawa, is running hard to hold onto River Heights, eight per cent ahead, he believes, of Tory Marty Morantz. So there you have it, a little meat on Gerrard's bold and early prediction that the next legislature will see a minority government and his party will hold the balance of power. The conditions of this election campaign are vastly different from 1988, when Carstairs' team was elected with balance of power. A wild idea, coming out of 12 years of uninterrupted NDP reign, yes? And I thought the Tories' fiscal policy this election had doom written all over it. Maybe it's more of a crap shoot -- in an election almost entirely void of electricity comes a hint and a hope we might be up for a little entertainment. You can comment on most stories on winnipegfreepress.com. You can also agree or disagree with other comments. All you need to do is be a Winnipeg Free Press print or e-edition subscriber to join the conversation and give your feedback.
/*
 * Asqatasun - Automated webpage assessment
 * Copyright (C) 2008-2019 Asqatasun.org
 *
 * This file is part of Asqatasun.
 *
 * Asqatasun is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Affero General Public License as
 * published by the Free Software Foundation, either version 3 of the
 * License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Affero General Public License for more details.
 *
 * You should have received a copy of the GNU Affero General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 *
 * Contact us by mail: asqatasun AT asqatasun DOT org
 */
package org.asqatasun.entity.audit;

import java.io.Serializable;
import javax.persistence.*;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlTransient;
import org.codehaus.jackson.annotate.JsonIgnore;
import org.codehaus.jackson.annotate.JsonSubTypes;
import org.codehaus.jackson.annotate.JsonTypeInfo;

/**
 *
 * @author jkowalczyk
 */
@Entity
@Table(name = "EVIDENCE_ELEMENT")
@XmlRootElement
public class EvidenceElementImpl implements EvidenceElement, Serializable {

    private static final long serialVersionUID = 5494394934902604527L;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "Id_Evidence_Element")
    private Long id;

    @ManyToOne
    @JoinColumn(name = "EVIDENCE_Id_Evidence")
    private EvidenceImpl evidence;

    @Column(name = "Element_Value", nullable = false, length = 16777215)
    private String value;

    @ManyToOne
    @JoinColumn(name = "PROCESS_REMARK_Id_Process_Remark")
    @JsonIgnore
    private ProcessRemarkImpl processRemark;

    public EvidenceElementImpl() {
        super();
    }

    public EvidenceElementImpl(String value) {
        super();
        this.value = value;
    }

    @Override
    public Long getId() {
        return id;
    }

    @XmlTransient
    @Override
    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.WRAPPER_OBJECT)
    @JsonSubTypes({
        @JsonSubTypes.Type(value = org.asqatasun.entity.audit.EvidenceImpl.class, name = "Evidence")})
    public Evidence getEvidence() {
        return (Evidence) evidence;
    }

    @Override
    public String getValue() {
        return value;
    }

    @Override
    public void setId(Long id) {
        this.id = id;
    }

    @Override
    public void setEvidence(Evidence evidence) {
        this.evidence = (EvidenceImpl) evidence;
    }

    @Override
    public void setValue(String value) {
        this.value = value;
    }

    @XmlTransient
    @Override
    public ProcessRemark getProcessRemark() {
        return processRemark;
    }

    @Override
    public void setProcessRemark(ProcessRemark processRemark) {
        if (processRemark instanceof ProcessRemarkImpl) {
            this.processRemark = (ProcessRemarkImpl) processRemark;
        }
    }
}
American culture venerates choice, but choice may not be the key to happiness and health, according to a new study in the Journal of Consumer Research. "Americans live in a political, social, and historical context that advances personal freedom, choice, and self-determination above all else," write authors Hazel Rose Markus (Stanford University) and Barry Schwartz (Swarthmore College). "Contemporary psychology has proliferated this emphasis on choice and self-determination as the key to healthy psychological functioning." The authors point out that this emphasis on choice and freedom is not universal. "The picture presented by a half-century of research may present an accurate picture of the psychological importance of choice, freedom, and autonomy among middle-class, college-educated Americans, but this is a picture that leaves about 95 percent of the world's population outside its frame," the authors write. The authors reviewed a body of research on the cultural ideas surrounding choice. They found that among non-Western cultures and among working-class Westerners, freedom and choice are less important or mean something different than they do for the university-educated people who have participated in psychological research on choice. "And even what counts as a 'choice' may be different for non-Westerners than it is for Westerners," the authors write. "Moreover, the enormous opportunity for growth and self-advancement that flows from unlimited freedom of choice may diminish rather than enhance subjective well-being." People can become paralyzed by unlimited choice, and find less satisfaction with their decisions. Choice can also foster a lack of empathy, the authors found, because it can focus people on their own preferences and on themselves at the expense of the preferences of others and of society as a whole.
"We cannot assume that choice, as understood by educated, affluent Westerners, is a universal aspiration, and that the provision of choice will necessarily foster freedom and well-being," the authors write. "Even in contexts where choice can foster freedom, empowerment, and independence, it is not an unalloyed good. Choice can also produce a numbing uncertainty, depression, and selfishness." ### Hazel Rose Markus and Barry Schwartz. "Does Choice Mean Freedom and Well Being?" Journal of Consumer Research: August 2010. A preprint of this article (to be officially published online soon) can be found at http://journals.uchicago.edu/jcr. Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.
package com.iota.iri.service.milestone.impl;

import com.iota.iri.controllers.MilestoneViewModel;
import com.iota.iri.service.milestone.MilestoneException;
import com.iota.iri.service.milestone.MilestoneRepairer;
import com.iota.iri.service.milestone.MilestoneService;

/**
 * Creates a {@link MilestoneRepairer} service to fix corrupted milestone objects.
 */
public class MilestoneRepairerImpl implements MilestoneRepairer {

    /**
     * A {@link MilestoneService} instance for repairing corrupted milestones.
     */
    private MilestoneService milestoneService;

    /**
     * Holds the milestone index of the milestone that caused the repair logic to get started.
     */
    private int errorCausingMilestoneIndex = Integer.MAX_VALUE;

    /**
     * Counter for the backoff repair strategy (see {@link #repairCorruptedMilestone(MilestoneViewModel)}).
     */
    private int repairBackoffCounter = 0;

    /**
     * Constructor for a {@link MilestoneRepairer} to be used for resetting corrupted milestone objects.
     *
     * @param milestoneService A {@link MilestoneService} instance to reset corrupted milestones
     */
    public MilestoneRepairerImpl(MilestoneService milestoneService) {
        this.milestoneService = milestoneService;
    }

    /**
     * {@inheritDoc}
     *
     * <p>
     * We simply use the {@link #repairBackoffCounter} as an indicator if a repair routine is running.
     * </p>
     */
    @Override
    public boolean isRepairRunning() {
        return repairBackoffCounter != 0;
    }

    /**
     * {@inheritDoc}
     */
    @Override
    public boolean isRepairSuccessful(MilestoneViewModel processedMilestone) {
        return processedMilestone.index() > errorCausingMilestoneIndex;
    }

    /**
     * {@inheritDoc}
     */
    @Override
    public void stopRepair() {
        repairBackoffCounter = 0;
        errorCausingMilestoneIndex = Integer.MAX_VALUE;
    }

    /**
     * {@inheritDoc}
     */
    @Override
    public void repairCorruptedMilestone(MilestoneViewModel errorCausingMilestone) throws MilestoneException {
        if (repairBackoffCounter++ == 0) {
            errorCausingMilestoneIndex = errorCausingMilestone.index();
        }

        for (int i = errorCausingMilestone.index(); i > errorCausingMilestone.index() - repairBackoffCounter; i--) {
            milestoneService.resetCorruptedMilestone(i);
        }
    }
}
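The repair loop above widens its reset window by one milestone on each consecutive failure at the same milestone. A standalone sketch (a hypothetical class, not part of IRI) of which indexes would be reset on a given attempt:

```java
import java.util.ArrayList;
import java.util.List;

// Standalone sketch of the widening-window strategy in
// repairCorruptedMilestone(): on the n-th consecutive failure at the same
// milestone, the n milestones ending at the error-causing index are reset.
public class RepairWindowSketch {

    // Pure helper: which milestone indexes get reset on the given attempt,
    // with the same loop bounds as repairCorruptedMilestone() but the
    // service call replaced by recording the index.
    static List<Integer> resetsForAttempt(int errorCausingIndex, int repairBackoffCounter) {
        List<Integer> resets = new ArrayList<>();
        for (int i = errorCausingIndex; i > errorCausingIndex - repairBackoffCounter; i--) {
            resets.add(i);
        }
        return resets;
    }

    public static void main(String[] args) {
        System.out.println(resetsForAttempt(100, 1)); // first attempt: [100]
        System.out.println(resetsForAttempt(100, 3)); // third attempt: [100, 99, 98]
    }
}
```

Once `isRepairSuccessful(...)` observes a milestone past the error-causing index, `stopRepair()` zeroes the counter again, so the window never grows without bound.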
UAVs and fantasy flights Unmanned aerial vehicles made an appearance at the Sea Air Space Expo. Navy Secretary Ray Mabus said the next UAV milestone will be deployment on a carrier. // Photos by Joshua Stewart Unmanned aerial vehicles aren’t brand-spanking new to naval aviation; UAVs the Navy uses include the rotary-wing Fire Scout and the RQ-2A Pioneer. But the next big thing in unmanned flight at sea will be an aircraft that can take off from and land on a carrier. Several companies are in the process of making that happen. The big names in flight are displaying wares that they hope will become the backbone of the Navy’s collection of UAVs at the Sea Air Space Expo at the Gaylord National Resort Hotel and Convention Center in National Harbor, Md. Look at the pictures above. At left is Northrop Grumman’s X-47B, a UAV that made its first flight in February. The one on the right is Boeing’s X-45C. The one in the center, while not labeled, is almost certainly the X-47B — it’s at the Huntington Ingalls Industries display; HII is a Northrop Grumman spin-off, and the airframe has the same shape as the X-47B UCAS. It’s tough to tell in this picture, but it’s shown positioned on the flight deck of the carrier Gerald R. Ford, a ship being built in Newport News, Va. The HII picture merits another look because, well, the whole thing is a mock-up of what may someday be the face of unmanned naval aviation on the flight deck of a non-existent ship. Sitting near the hypothetical UAV is an F-35C Lightning II joint strike fighter. In case you’re keeping track, that’s an aircraft that has not yet joined the fleet, sitting on the flight deck of a carrier that’s under construction, next to another plane that’s currently being tested. Like flying cars and undersea bubble cities, it’s all fantasy, for now at least. Navy Secretary Ray Mabus emphasized during his luncheon speech Monday that unmanned craft will play a prominent role in the Navy’s future.
“Over the next decade, we will move aggressively to develop a family of unmanned systems including underwater systems that will be able to operate for extended periods of time in support of our ships, our expeditionary units and our special warfare teams, and a low-observable, carrier-based intelligence surveillance reconnaissance strike unmanned air system,” he said.
Leeway Chair Maximum freedom in a minimal footprint Leeway Chair Leeway Seating maximizes freedom of movement in a minimal footprint. Leveraging Geiger’s strength in woodcraft, designer Keiji Takeuchi gave these side chairs a crescent-shaped, cantilevered backrest that allows people to move naturally as they sit to collaborate or socialize. “If people have any hesitation choosing my chair over something else, that means something is missing.”
‘06 video shows killer whale trying to drown trainer Newly released video, which was shot in 2006 and is now being used as evidence in an investigation, shows a female orca dragging a trainer deep underwater as he struggles for his life. The trainer survived. TODAY’s Hoda Kotb reports.
“You are still bringing 700 people into the court and fingerprinting them, and that is 700 people that still need their records expunged. All of these cases are still being withdrawn.” When Pittsburgh City Council passed a marijuana-decriminalization ordinance in December 2015, it was seen as a victory on many fronts. Local governments would see fiscal savings thanks to fewer resources being used to arrest and prosecute individuals. Progress was promised in reducing the disproportionate arrests of black people in the city. And, hopefully, vulnerable populations would avoid entering the criminal-justice system for marijuana possession in small amounts. “From a social perspective, it will really help a lot of young men and women’s lives from being destroyed or caught in sort of the hamster wheel of prosecution through governmental means,” Pittsburgh City Councilor Daniel Lavelle told City Paper in 2015. But after a decrease in marijuana-possession arrests in 2016, those numbers actually jumped significantly in 2017. Chris Goldstein of marijuana-advocacy group Philly NORML compiled statistics from the Pennsylvania Crime Reporting System over the past few years. Goldstein counted the arrests filed under Pennsylvania statute 18F, which signifies misdemeanor possession of less than 30 grams of marijuana. In 2016, the first year of decriminalization in Pittsburgh, marijuana-possession arrests dropped to 494 for the year (down by 160). But in 2017, marijuana-possession arrests increased to 772. In fact, 2017 arrests for possessing less than 30 grams of marijuana even surpassed the 2015 totals by 118 arrests (before a decriminalization ordinance was in place but recognized as necessary by city officials). Without comprehensive data available, Pittsburgh marijuana-reform advocates aren’t sure exactly why the arrests rose in 2017. 
However, they say some actions left the door open for officers to ignore the ordinance, which could be a factor, along with others, in the rise of arrests. Advocates are also upset that the ordinance has had virtually no effect on shrinking the disproportionate gap in which black and white people are arrested for marijuana possession. Pittsburgh officials say they’re aware of the increase and are looking at steps to meet their commitments to decriminalize small amounts of marijuana. But advocates say arrests will only drop again if changes are made to the ordinance and attitudes are altered among Pittsburgh police officers. And in an age of conflicting law-enforcement priorities handed down from federal, state and local officials, that may be a difficult feat. CP photo by Jake Mysliwczyk Patrick Nightingale Patrick Nightingale, of marijuana-advocacy group Pittsburgh NORML, isn’t happy about the increase in arrests. He says the decriminalization ordinance, which essentially issues tickets and a small fine to violators, was used about 200 times in 2017. But Nightingale says that number should be higher, considering most of the misdemeanor marijuana-possession charges end up being withdrawn anyway. “You are still bringing 700 people into the court and fingerprinting them, and that is 700 people that still need their records expunged,” says Nightingale. “All of these cases are still being withdrawn. Why are these people getting fingerprinted?” Nightingale is also upset that possession arrests are still overwhelmingly affecting African Americans in Pittsburgh. Out of the 772 people arrested on misdemeanor marijuana-possession charges in 2017, 551 of them were black. That means in a city where African Americans make up just 24 percent of the population, black residents made up 71 percent of these marijuana arrests. That percentage has seen virtually no annual change since 2013. 
In February, Nightingale met with Pittsburgh officials and high-ranking police officers to discuss the increase in arrests. He says officers complained that marijuana consumption in public has increased due to a presumption that marijuana is now quasi-legal. Nightingale says this claim may have some merit, given that some people might not fully understand the decriminalization ordinance, as well as the proliferation of headlines about the start of Pennsylvania’s medical-marijuana program and California’s new legal recreational cannabis status. But Nightingale thinks the increase in arrests has more to do with a flaw in the ordinance and a memo sent to Pittsburgh police officers. Nightingale says Pittsburgh Deputy Chief Thomas Stangrecki issued a memo advising officers that they had the discretion to use the decriminalization ordinance. However, the memo told officers they “may” follow the ordinance but stopped short of telling them that they “shall” use the ordinance. The ordinance also states: “This Chapter shall not be construed to supersede any existing Pennsylvania or Federal law.” State and federal law classify marijuana possession as a misdemeanor. “It is a policy, it is not a law, and the policy is weakly worded,” says Nightingale. “All I want to do is change the word from ‘may’ to ‘shall.’” Nightingale says the Pittsburgh ordinance was modeled after Philadelphia’s decriminalization ordinance, except for some minor changes, like using the word “may” instead of “shall.” But he says that makes a big difference. Philadelphia’s marijuana-possession arrests have dramatically dropped since the city passed its decriminalization ordinance in 2014. Dan Gilman, chief of staff for Pittsburgh Mayor Bill Peduto, says Peduto still deeply believes in the decriminalization of small amounts of marijuana. In fact, Peduto announced in a tweet on May 11 that he supports Pennsylvania legalizing and taxing recreational marijuana.
Gilman doesn’t downplay the marijuana-arrest numbers, but he says those figures could be a bit misleading since some marijuana-possession arrests could be tied to more serious crimes. However, Gilman says officials are working closely with the law department to make possible adjustments to the policy and should be meeting this week. “It is something very much on the front burner and once we get through that, then training will reflect the new policy,” says Gilman. Gilman says he “would have hoped to see the numbers drop by now” but notes the decriminalization ordinance is not a “silver bullet” solution for fixing the issues surrounding marijuana, including the racial gap in those arrests. Gilman says he has faith in the ability of Pittsburgh Police officers to adjust to these figures. Gilman also says the police department’s ongoing implicit-bias training should continue to tackle the racial-disparity gap in all arrests. Jesse Wozniak is a West Virginia University criminology professor who advocated for Pittsburgh’s decriminalization ordinance as part of the local Alliance for Police Accountability. He’s not surprised that marijuana-possession arrests grew last year because he says the ordinance’s language isn’t strong enough. “Without some teeth in the ordinance, it probably won’t make a difference,” says Wozniak. And without a mandate in the policy, Wozniak says Pittsburgh is reliant on officers to choose to enforce the ordinance when confronting people possessing marijuana. He says some city officers have embraced the ordinance, but understands that conflicting law-enforcement priorities are sending mixed messages. “There are some officers that have really taken [the decriminalization ordinance] to heart,” says Wozniak. “But, in the current political environment, you also have rhetoric and policies of [U.S. Attorney General] Jeff Sessions.” Sessions is a well-known opponent of all things marijuana.
In January, Sessions announced that federal prosecutors can decide for themselves whether to press cases against growers, sellers or users for violating federal law, including in states where the drug is legal. Wozniak says it’s not too far-fetched to believe Pittsburgh police officers could be following Sessions’ lead. Either way, Wozniak says police culture doesn’t change quickly and Pittsburgh officers still have the ability to issue misdemeanor marijuana-possession arrests without breaking city rules. He says stricter local policies are needed to combat the culture of police officers and priorities laid out by people like Sessions. “Without strict language,” says Wozniak, “then it just goes back to the way it was before.”
[Characteristics of the U wave on the electrocardiogram of patients with type 1 diabetes mellitus without clinical signs of cardiac damage]. ECGs recorded at increased amplification (1 mV = 50 mm) in 137 patients with diabetes mellitus without clinical signs of cardiac damage and in 66 healthy subjects revealed that the incidence of a positive U wave was lower in the patients than in the healthy subjects. In the healthy subjects, the negative U wave was found in leads aVR, III, aVL, and V1. In the patients with diabetes mellitus, negative and diphasic U waves were found in all the ECG leads. The amplitude of the positive U wave in leads V1 and V2 was significantly lower in the patients than in the healthy subjects. The findings are regarded as manifestations of metabolic and dystrophic myocardial changes of diabetic origin.
Social Communications | Author | Educator | Coach Like nature, you are either growing, or dying Trees really are amazing. It seems impossible that something so big and strong can grow from such a small seed. I think because it takes so many years for a tree to grow, we often miss the magnitude of the amazing transformation that is happening. If we had a super-sped-up time-lapse of a tree's growth, I bet we'd start appreciating its incredible transformation much sooner. Trees are great for metaphors and analogies; they are used to help explain many of the most famous stories ever told. I'm using them in this piece to illustrate a valuable piece of knowledge about personal growth that I picked up recently at a seminar in Melbourne held by Joe Pane, a successful business and life coach, and that I found fantastically simple to grasp. Joe said, "Like everything in nature, you are either growing, or dying." This really resonated with me as someone interested in personal development and in passing on valuable and inspirational lessons to others. But it also immediately brought people in my life to mind. Like a tree, you are either growing – expanding your horizons, learning new things, adding to your skills, bringing value to people, taking action… living. Or, like a tree, you are dying – shrinking into yourself, no longer learning new things, not expanding but withering, losing your skills and your desire to move… dying. Like a tree, your growth may initially be impossible for others to see externally, but if you continue to live, learn, expand and take action, the growth will happen, and internally you will be thriving, until one day others too will see you as the mighty tree you have grown into.
package com.luojilab.share.bean;

import com.luojilab.share.core.AbsShareBean;

/**
 * <p><b>Package:</b> com.luojilab.share.bean </p>
 * <p><b>Project:</b> jimu-sample-project </p>
 * <p><b>Classname:</b> AppShareBean </p>
 * <p><b>Description:</b> This demo shows how to handle the common case where we do not
 * want to sink certain classes into a shared lower-level module. </p>
 * Created by leobert on 2018/7/6.
 */
public class AppShareBean extends AbsShareBean {
    private String content;

    public AppShareBean(int shareVia, String content) {
        super(shareVia);
        this.content = content;
    }

    @Override
    protected String getShareContent() {
        return content;
    }
}
Q: filter option in list view fragment activity Hi friends, I want to add a search option to a ListView in a fragment activity, but I cannot access the ListView adapter from addTextChangedListener. I could not find an answer on the internet. The data loading part works properly; I want to filter the data by code. Can you please help me fix this?

public class FindPeopleFragment extends Fragment implements AdapterView.OnItemClickListener {

    private static final String TAG_CONTACTS = "contacts";
    private static final String TAG_ID = "GUID";
    private static final String TAG_NAME = "GUNO";
    private static final String TAG_Code = "ShipNo";
    private static final String TAG_ADDRESS = "Description";
    private static final int CAMERA_CAPTURE_IMAGE_REQUEST_CODE = 100;
    private static final int CAMERA_CAPTURE_VIDEO_REQUEST_CODE = 200;
    public static final int MEDIA_TYPE_IMAGE = 1;
    public static final int MEDIA_TYPE_VIDEO = 2;
    private static final String IMAGE_DIRECTORY_NAME = "Hello Camera";
    // private static final int RESULT_OK = 1;
    // private static final int RESULT_CANCELED = 2;
    private Uri fileUri; // file url to store image/video
    ImageView imgPreview;
    private VideoView videoPreview;
    private String baid = "";
    private PopupWindow pwindo;
    Button btndeliver;
    Button btnreject;
    Button btncannot;
    Button btnother;
    Button btnClosePopup;
    EditText inputSearch;
    // Button btnCallPopup;
    // Button btnwtsappPopup;
    public String pnumb;
    public String CusMobileNo;

    // contacts JSONArray
    JSONArray contacts = null;

    // Hashmap for ListView
    ArrayList<HashMap<String, String>> contactList;

    public FindPeopleFragment() {}

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        if (android.os.Build.VERSION.SDK_INT > 9) {
            StrictMode.ThreadPolicy policy = new StrictMode.ThreadPolicy.Builder().permitAll().build();
            StrictMode.setThreadPolicy(policy);
        }
        Movie movie = new Movie();
        baid = movie.getGUID();
        View rootView = inflater.inflate(R.layout.fragment_find_people, container, false);
        contactList = new ArrayList<HashMap<String, String>>();
        WebServiceCaller webServiceCaller = new WebServiceCaller();
        String Result = webServiceCaller.mydespatchDettails(baid);
        try {
            JSONObject Jasonobject = new JSONObject(Result);
            JSONArray Jarray = Jasonobject.getJSONArray("PendingDespatch");
            for (int i = 0; i < Jarray.length(); i++) {
                JSONObject json_data = Jarray.getJSONObject(i);
                String id = json_data.getString(TAG_ID);
                String name = json_data.getString(TAG_NAME);
                String code = json_data.getString(TAG_Code);
                String address = json_data.getString(TAG_ADDRESS);

                HashMap<String, String> contact = new HashMap<String, String>();
                // adding each child node to HashMap key => value
                contact.put(TAG_ID, id);
                contact.put(TAG_NAME, name);
                contact.put(TAG_Code, code);
                contact.put(TAG_ADDRESS, address);
                // adding contact to contact list
                contactList.add(contact);
            }

            ListView list = (ListView) rootView.findViewById(R.id.list);
            final ListAdapter adapter = new SimpleAdapter(getActivity(), contactList,
                    R.layout.list_item,
                    new String[] { TAG_NAME, TAG_Code, TAG_ID, TAG_ADDRESS },
                    new int[] { R.id.code, R.id.names, R.id.city, R.id.address });
            list.setAdapter(adapter);
            list.setOnItemClickListener(this);
            // list.setListAdapter(adapter);

            inputSearch = (EditText) rootView.findViewById(R.id.inputSearch);
            inputSearch.addTextChangedListener(new TextWatcher() {
                @Override
                public void beforeTextChanged(CharSequence cs, int start, int count, int after) {
                    System.out.println("Text [" + cs + "]");
                    FindPeopleFragment.this.adapter.getFilter().filter(cs);
                    // ***This is the problem: I cannot access the adapter here. I want to search by code.***
                    // FindPeopleFragment.TAG_Code..getFilter().filter(cs);
                }

                @Override
                public void onTextChanged(CharSequence s, int start, int before, int count) {
                }

                @Override
                public void afterTextChanged(Editable s) {
                }
            });
        } catch (JSONException e) {
            e.printStackTrace();
        }
        return rootView;
    }
}

A: How about changing this line:
FindPeopleFragment.this.adapter.getFilter().filter(cs);

into:

adapter.getFilter().filter(cs);

Since adapter is a final local variable of onCreateView, the anonymous TextWatcher can reference it directly by its simple name; FindPeopleFragment.this.adapter looks for a field of that name, which does not exist. Also declare the variable as SimpleAdapter rather than ListAdapter: the ListAdapter interface does not declare getFilter(), whereas SimpleAdapter implements Filterable. Alternatively, consider using an AutoCompleteTextView for your search feature. AutoCompleteTextView documentation Example
Trying to balance the budget in short order, says Gantefoer, is worrisome. “If you’re faster than that, I worry about the medicine killing the patient and it’s going to be too hard for the people to bear, and that is not necessary if you take a long vision for the province,” he says. Many of the ideas being floated by the province to combat the deficit are aimed at the public sector. Wage freezes, salary rollbacks, layoffs and forced unpaid days off are, as Wall and Doherty put it, “on the table.” Charles Smith is an associate professor of political science at the University of Saskatchewan’s St. Thomas More College. He says there are different areas the province could look at to keep the pain of an austerity budget limited, but he would avoid looking to cut $1.2 billion in spending all in one go just to balance the books. “You don’t actually cut and burn during downturns, because that will inevitably make the situation worse,” he says. Smith believes Saskatchewan’s current government did not plan well for a downturn that, in this cyclical economy, was unavoidable. Now, he says, the response is ideological: Saskatchewan’s government has done its best to avoid tax increases and, because of that, has been left with the option of cutting. “This is a government that is very comfortable taking on or challenging the public sector,” he says, noting the province’s relationship with unions in the province. Jason Childs, an economist at the University of Regina, says the province was basing its budget on price forecasts that ended up being wrong, because natural resources didn’t recover as expected. “Realistically, there is going to have to be some pain somewhere. Cuts are going to have to be part of the solution,” he says, adding that tax increases should also be considered to bring in more revenue for the province. dfraser@postmedia.com Twitter.com/dcfraser
Q: Customize Shrine gem JSON response I'm using the shrine gem in my Rails app for file uploading. I want to integrate this gem with the fineuploader front-end library to enhance the user experience while uploading files. I've integrated it to the extent that I'm able to upload files through the fineuploader front-end, via the shrine server-side code, to my S3 bucket. Now, on a successful upload I receive a 200 status code with a JSON response which looks something like the following:

{"id":"4a4191c6c43f54c0a1eb2cf482fb3543.PNG","storage":"cache","metadata":{"filename":"IMG_0105.PNG","size":114333,"mime_type":"image/png","width":640,"height":1136}}

But fineuploader expects a success property in the JSON response, with a value of true, in order to consider the response successful. So I need to modify this 200 status JSON response to insert this success property. For this, I asked the author of the shrine gem, and he advised me to use this code in the Shrine initializer file:

class FineUploaderResponse
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    if status == 200
      data = JSON.parse(body[0])
      data["success"] = true
      body[0] = data.to_json
    end
    [status, headers, body]
  end
end

Shrine::UploadEndpoint.use FineUploaderResponse

Unfortunately, this code is not working, and in fact with this code in place fineuploader throws the following error in the console:

Error when attempting to parse xhr response text (Unexpected end of JSON input)

Please advise me how I need to modify this code to insert the success property with a valid JSON response. A: After you change the body, you need to update the Content-Length header, or the browser will cut the response off.
If you do this, it will work flawlessly:

class FineUploaderResponse
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    if status == 200
      data = JSON.parse(body[0])
      data['success'] = true
      body[0] = data.to_json
      # Now let's update the header with the new Content-Length
      # (a byte count, passed as a string)
      headers['Content-Length'] = body[0].bytesize.to_s
    end
    [status, headers, body]
  end
end

Shrine::UploadEndpoint.use FineUploaderResponse
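As a sanity check outside of Rails, the middleware's body rewrite and header update can be exercised against a stand-in app that returns a bare Rack triple. Everything below is a sketch: FakeUploadApp and its payload are invented for illustration and are not part of Shrine.

```ruby
require 'json'

# Stand-in for Shrine's upload endpoint: returns a Rack triple whose
# body is upload metadata serialized as JSON.
class FakeUploadApp
  def call(_env)
    payload = { 'id' => 'abc.PNG', 'storage' => 'cache' }.to_json
    [200, { 'Content-Length' => payload.bytesize.to_s }, [payload]]
  end
end

# The middleware from the answer: inject success=true into the JSON
# body and recompute Content-Length (in bytes) to match.
class FineUploaderResponse
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    if status == 200
      data = JSON.parse(body[0])
      data['success'] = true
      body[0] = data.to_json
      headers['Content-Length'] = body[0].bytesize.to_s
    end
    [status, headers, body]
  end
end

status, headers, body = FineUploaderResponse.new(FakeUploadApp.new).call({})
puts body[0]                   # JSON now carries "success":true
puts headers['Content-Length'] # matches the new body's byte size
```

Without the Content-Length update, the browser truncates the response at the original length, which is exactly the "Unexpected end of JSON input" symptom fineuploader reports.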
Open API An open API (often referred to as a public API) is a publicly available application programming interface that provides developers with programmatic access to a proprietary software application or web service. APIs are sets of requirements that govern how one application can communicate and interact with another. APIs can also allow developers to access certain internal functions of a program, although this is not typically the case for web APIs. In the simplest terms, an API allows one piece of software to interact with another piece of software, whether within a single computer via a mechanism provided by the operating system or over an internal or external TCP/IP-based or non-TCP/IP-based network. In the late 2010s, many APIs are provided by organisations for access with HTTP. APIs may be used by both developers inside the organisation that published the API or by any developers outside that organisation who wish to register for access to the interface. Characteristics Open APIs have three main characteristics: They are available for use by developers and other users with relatively few restrictions. Restrictions might include the necessity to register with the service providing the API. They are typically backed by open data. Open data is freely available for everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control. An Open API may be free to use but the publisher may limit how the API data can be used. They are based on an open standard. Open API versus private API Private API A private API is an interface that opens parts of an organisation’s backend data and application functionality for use by developers working within (or contractors working for) that organization. Private APIs are only exposed to internal developers therefore the API publishers have total control over what and how applications are developed. 
Private APIs offer substantial benefits with regards to internal collaboration. Using a private API across an organisation allows for greater shared awareness of the internal data models. As the developers are working for (or contracted by) one organisation, communication will be more direct and therefore they should be able to work more cohesively as a group. Private APIs can significantly diminish the development time needed to manipulate and build internal systems that maximise productivity and create customer-facing applications that improve market reach and add value to existing offerings. Open API In contrast to a private API, an open API is publicly available for all developers to access. They allow developers, outside of an organisation's workforce, to access backend data that can then be used to enhance their own applications. Open APIs can significantly increase revenue without the business having to invest in hiring new developers making them a very profitable software application. However, it is important to remember that opening back end information to the public can create a range of security and management challenges. For example, publishing open APIs can make it harder for organisations to control the experience end users have with their information assets. Open API publishers cannot assume client apps built on their APIs will offer a good user experience. Furthermore, they cannot fully ensure that client apps maintain the look and feel of their corporate branding. Open APIs in business Open APIs can be used by businesses seeking to leverage the ever-growing community of freelancing developers who have the ability to create innovative applications that add value to their core business. Open APIs are favoured in the business sphere as they simultaneously increase the production of new ideas without investing directly in development efforts. 
Businesses often tailor their APIs to target specific developer audiences that they feel will be most effective in creating valuable new applications. However, an API can significantly diminish an application's functionality if it is overloaded with features. For example, Yahoo's open search API allows developers to integrate Yahoo search into their own software applications. The addition of this API provides search functionality to the developer's application whilst also increasing search traffic for Yahoo's search engine hence benefitting both parties. With respect to Facebook and Twitter, we can see how third parties have enriched these services with their own code. For example, the ability to create an account on an external site/app using your Facebook credentials is made possible using Facebook's open API. Many large technology firms, such as Twitter, LinkedIn and Facebook, allow the use of their service by third parties and competitors. Open APIs on the Web With the rise in prominence of HTML5 and Web 2.0, the modern browsing experience has become interactive and dynamic and this has, in part, been accelerated through the use of open APIs. Some open APIs fetch data from the database behind a website and these are called Web APIs. For example, Google's YouTube API allows developers to integrate YouTube into their applications by providing the capability to search for videos, retrieve standard feeds, and see related content. Web APIs are used for exchanging information with a website either by receiving or by sending data. When a web API fetches data from a website, the application makes a carefully constructed HTTP request to the server the site is stored on. The server then sends data back in a format your application expects (if you requested data) or incorporates your changes to the website (if you sent data). 
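A minimal illustration of such a carefully constructed request, using Ruby's standard library. The endpoint and parameter names below are invented for the example and do not belong to any real API:

```ruby
require 'uri'
require 'net/http'

# Build the request URL: a base endpoint plus URL-encoded query parameters.
base = URI('https://api.example.com/v1/videos')
base.query = URI.encode_www_form(q: 'open api', max_results: 5)

# Construct the GET request and state the representation we expect back.
request = Net::HTTP::Get.new(base)
request['Accept'] = 'application/json'

puts base.to_s
# Actually sending it (kept commented out so the sketch stays offline):
# response = Net::HTTP.start(base.host, base.port, use_ssl: true) do |http|
#   http.request(request)
# end
```

The server parses the query string, runs the search, and returns a JSON document the client then decodes; sending data back works the same way with a POST or PUT request carrying a body.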
See also OpenAPI Specification Application enablement Open system (computing) Mashup (web application hybrid) Webhook
1. Field of the Invention This invention relates to an arm motion support apparatus that helps persons having arm-related motor function disabilities to perform volitional arm movements and functional exercises. 2. Description of the Prior Art Devices that have been proposed or are commercially available for supporting arm movement by persons having disabilities related to the motor functions of the arms include non-motorized arm suspension devices which employ springs, powered devices that use lines (cords or wires) or the like to move an arm up and down, and crane arrangements with seven degrees of freedom that use horizontal manipulators and lines for vertical movement. However, the drawback of these conventional apparatuses is that they do not provide a large degree of control of arm movement using a simple apparatus and simple control. Apparatuses that are simple do not provide satisfactory control of movement, while those that do provide satisfactory control are highly complex. Also, with respect to powered devices for aiding autonomous arm movement and functional exercises, when a mechanically driven manipulator is fastened to a patient's arm, the patient has the unpleasant feeling of being restrained by a machine. Thus, there is a need for an apparatus that eliminates such unpleasant feelings and is at the same time lighter, and in which full regard is given to considerations of safety when mistakes are made in movements. An object of the present invention is to provide an apparatus for supporting arm movement that is structurally light and simple, does not cause patients to feel restrained, can control arm movements with a large degree of freedom and can effectively help patients to make everyday arm movements under their own volition, and to perform functional exercises.
The family of a former University of Manitoba student convicted of aiding a terror plot in Afghanistan has written to a U.S. court pleading for leniency ahead of his sentencing on March 7. A jury convicted Muhanad Al Farekh in a New York court last September of providing support to terrorists and other charges related to a 2009 explosion at a U.S. military base in Afghanistan. The minimum sentence is seven years, but the U.S. government is asking the court to impose the maximum penalty of life in prison. "I have known Muhanad for the entirety of his life, he is a wonderful person, loving, caring, kind," his grandmother wrote to Federal District Court Judge Brian Cogan. The grandmother, who lives in Winnipeg, wrote that her grandson was an active volunteer in the community, has a "good heart inside" and "continues to have unwavering support from his family." Letters from family members in support of Al Farekh were filed in the New York court this week. In this video filed in court, Al Farekh is seen in a Winnipeg apartment watching a video that was published by the Islamic Army in Iraq. Al Farekh, 32, had moved to Winnipeg in 2003 to live with his grandmother and uncle in preparation for his university education. Convicted on 9 charges He and two other University of Manitoba students, Ferid Imam and Maiwand Yar, left Winnipeg for Pakistan in March 2007. Court heard evidence that they went to join al-Qaeda. In the months before their departure, court was told, the three men watched video recordings encouraging violent jihad, listened to jihadist lectures and talked about their support for violent Islamist extremism. While Imam and Yar seemed to have disappeared, the Texas-born Al Farekh resurfaced. U.S. authorities say he climbed to a high-ranking position in the al-Qaeda operation. Before Al Farekh was captured, the U.S.
administration debated whether to kill him in a drone attack in Pakistan, according to a 2015 New York Times report, which would have been a rare and controversial move against an American citizen. Al Farekh was arrested in Pakistan in 2014 and transferred to the custody of U.S. authorities, who took him to New York in April 2015 to face trial. He was convicted on all nine charges — which included providing support to terrorists, conspiracy to bomb a government facility and use of explosives — in connection with an attack on an American military base in Afghanistan. Bomb would have been 'catastrophic' On Jan. 19, 2009, two vehicles loaded with explosives lumbered toward Forward Operating Base Chapman in Afghanistan's Khost province where, court heard, some 90 Americans were working. The first driver detonated his load outside the gates and died in the explosion. Court video showing the aftermath of the January 2009 attack on a U.S. military base in Afghanistan. The second truck became lodged in the crater from that explosion and did not detonate. That driver was shot dead as he tried to escape. Technicians dismantled the bomb and gathered evidence from it. The second bomb was much bigger — 3,400 kilograms — and court heard evidence it would have had a "catastrophic" effect, killing many people, if it had entered the base and exploded as planned. One American soldier was injured in the attack, along with several Afghan citizens, including a pregnant woman who got a piece of shrapnel lodged in her back. Armed with fingerprint evidence from the packing tape used on the bombs, the prosecution tied Al Farekh to the plot. The jury heard there were 18 prints that were a match to Al Farekh, but the defence lawyer argued the fingerprint evidence involved "guesswork." Defence seeks 'hope and a future' The defence told court Al Farekh was "an innocent man falsely accused" of crimes he didn't commit.
In its submission to the court on sentencing, defence lawyers argued none of the offences Al Farekh was convicted of call for penalties beyond seven years. They said the sentence should be "sufficient but not greater than necessary," and that a sentence of lifetime incarceration for the 32-year-old who has no criminal history would be "far greater than necessary." "We urge the court to impose a sentence that leaves this individual human being with hope and a future and a chance to reunite with his large and loving extended family," said the defence sentencing memorandum filed this week. The memorandum noted that although there were injuries from the attack in Afghanistan, there were no deaths. The submission noted Al Farekh has been held in solitary confinement for nearly three years. 'More than what the jury concluded' Al Farekh has submitted to the court "that he does not believe in violence for any purpose, including religion-based violence." "Mr. Al Farekh is more than what the jury concluded from the evidence," the defence submission said. This FBI video recreation entered into evidence shows an explosion with just one fifth of the force of the bomb that failed to go off outside Forward Operating Base Chapman in January of 2009. "He is also a brother, a son, a grandson, a nephew and a friend. As the many letters submitted on his behalf attest, he is a person with love of knowledge and learning who is also loving, generous, kind, funny, and loved by many people. He has never been known to be violent. He has a life ahead of him and people to care for him when released from confinement." In his letter to the judge, Al Farekh's father, Mahmoud Al Farekh, wrote that he believed his son travelled to Pakistan in 2007 to go to university. He revealed that he met him there and that they toured the campus and met with an admissions officer. Al Farekh's mother wrote a letter referring to her son's period of incarceration.
"I was given the chance to see my son only once and the thought of the kind of suffering my son must be going through in his solitary confinement is unbearably painful." Uncle calls him 'a fun person' Al Farekh's uncle wrote to the judge, "Muhanad is a fun person, loves life and is full of it, who always kept a grin that made him shine." He said Muhanad is "an amazing person, sincere and kind." The prosecution argued, "The sentence imposed should send a message to all would-be terrorists that if they conspire to train and fight, and if they support al-Qaeda's call to murder Americans, they will be caught, prosecuted, and then imprisoned for life." Prosecutors pointed to seven cases in which conviction on similar charges in U.S. courts resulted in life sentences. ​​Got a tip for the CBC News I-Team? Email iteam@cbc.ca or call the confidential tip line at 204-788-3744.
Q: WCF vs. Web service vs. Sockets: which to choose? I have two related questions about Web services: (1) I'm currently writing a set of applications, and it occurred to me that maybe I'm not using the right tool for the job. Here is the spec: There are many Windows servers behind different VPNs and firewalls. Each of the servers has a Windows service running that reports various information about it to a centralized server, via a Web service, both of which I've written and have access to. So I'm both the producer and the consumer, and I'm staying on the same platform (.NET). Maybe a web service isn't the way to go? I'm using one purely because it's easy to write and deploy, and I'm the most comfortable with them. Should I really be using WCF for this? (2) In the web service, I'm creating a State object to represent the state of the server, and sending it as a parameter. However, adding a service reference creates a proxy of the State class. It seems hacky to copy the properties of the State object to the proxy and then send the proxy. Should I just replace the proxy class with the real class in the auto-generated code (i.e., include a reference to the State class instead)? A: By "web services" I assume you mean ASMX? I would go with WCF if possible, simply because you lose nothing but gain lots of flexibility. You could, for example, switch from XML-over-HTTP to Binary-over-TCP through a simple config change.
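That switch is purely declarative. A sketch of what the two alternatives look like in app.config; the service name, addresses and contract here are placeholder values, not taken from the question:

```xml
<system.serviceModel>
  <services>
    <service name="MyApp.StateReportingService">
      <!-- XML over HTTP -->
      <endpoint address="http://central-server/state"
                binding="basicHttpBinding"
                contract="MyApp.IStateReporting" />
      <!-- Binary over TCP: swap the endpoint for this one instead -->
      <endpoint address="net.tcp://central-server:8523/state"
                binding="netTcpBinding"
                contract="MyApp.IStateReporting" />
    </service>
  </services>
</system.serviceModel>
```

The service and client code stay untouched; only the binding and address change.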
GrandViewResearch.com has announced the addition of "Global Antimicrobial Coatings Market Analysis And Segment Forecasts To 2020" Market Research report to their Database. Global antimicrobial coatings market is expected to reach USD 4,520.3 million by 2020, according to a new study by Grand View Research Inc. Growing demand for medical device coatings is expected to remain a key market driver over the next six years. In addition increasing market penetration of indoor air quality products, mainly in the U.S. is also expected to have a positive impact on the market over the forecast period. Stringent regulatory scenario, primarily in Europe and U.S. on account of increasing health concerns and the issues and costs associated with product registration is expected to remain a key challenge for the industry participants over the next six years. Additionally, volatile prices of silver and other raw materials are also expected to have a dampening effect on market profitability. The report "Antimicrobial Coatings Market Analysis And Segment Forecasts To 2020," is available now to Grand View Research customers at grandviewresearch.com/industry-analysis/antimicrobial-coatings-market Inquiry Before Buying @ grandviewresearch.com/inquiry/328 Further key findings from the study suggest: 1. Global antimicrobial coating market volume was estimated at 310.3 kilo tons in 2013 and is expected to reach 589.8 kilo tons by 2020, growing at a CAGR of 9.8% from 2014 to 2020. 2. Surface modification coatings dominated the global market and as the leading product segment, accounted for 53.9% of total market volume in 2013. Global revenue for antimicrobial powder coatings is expected to reach USD 2,213.2 million by 2020, growing at a CAGR of 13.2% from 2014 to 2020. 3. Indoor air quality emerged as the leading application market for antimicrobial coatings and accounted for 26.9% of total volume in 2013. 
Global antimicrobial coating demand for medical applications is expected to reach 151.6 kilo tons by 2020, growing at a CAGR of 10.2% from 2014 to 2020. 4. North America dominated the global market and accounted for 39% of total market volume in 2013. North America, along with being the largest market, is also expected to be the fastest growing market for antimicrobial coatings, at an estimated CAGR of 10.6% from 2014 to 2020. European market revenue, on the other hand, is expected to reach USD 1,043.9 million by 2020. 5. The market remains fairly consolidated, with the top four participants including AkzoNobel NV, Sherwin-Williams, Dow Microbial Control and Diamond Vogel accounting for over 40% of global demand in 2013. Request Sample of this Report @ grandviewresearch.com/request/328 For the purpose of this study, Grand View Research has segmented the antimicrobial coatings market on the basis of product, application and region: Bitumen Market Analysis And Segment Forecasts To 2020 ( grandviewresearch.com/industry-analysis/bitumen-market ) The global market for bitumen is expected to reach USD 95.77 billion by 2020, according to a new study by Grand View Research, Inc. Bitumen is primarily used in road construction activities, and increased road development in the high-growth markets of India, China and Brazil is expected to be a key driver for the growth of the market. Roadway constructions were the major consumers of bitumen in 2013, accounting for over 80 million tons of global consumption. Other key applications include waterproofing, insulation and adhesives. Dental Equipment Market Analysis And Segment Forecasts To 2020 ( grandviewresearch.com/industry-analysis/dental-equipment-market ) The global dental equipment market is expected to reach USD 8,453.7 million by 2020, according to a new study by Grand View Research Inc.
Growing demand for dental tourism in emerging Asian markets, increasing adoption rates of advanced technologies such as CAD/CAM enabling expedited manufacturing of dental prosthetics, and the presence of robust reimbursement frameworks in North America are expected to be primary growth drivers for the market over the next six years. Increasing prevalence of dental disorders in developed markets with high patient awareness levels, such as North America and Europe, is expected to have a positive impact on market growth over the forecast period. About Grand View Research Grand View Research, Inc. is a market research and consulting company that provides off-the-shelf, customized research reports and consulting services. To help clients make informed business decisions, we offer market intelligence studies ensuring relevant and fact-based research across a range of industries, from technology to chemicals, materials and energy. With a deep-seated understanding of varied business environments, Grand View Research provides strategic objective insights. For more information, visit grandviewresearch.com
Dehradun (Uttarakhand) [India], Dec 23 (ANI/NewsVoir): 'Hello Uttarakhand' is a public utility mobile app, available on the Google Play Store for Android-based smartphones, which works as a multilingual translation facility for foreign and Indian tourists to communicate with locals who only understand regional languages. The state of Uttarakhand has three main regional languages, namely Garhwali, Jaunsari, and Kumaoni. The main objective is to minimize the language barrier that people face in communicating with each other. Foreign nationals can use this app to translate into French, English, German, Chinese, Japanese, Russian, Italian, Spanish, Swedish and many more languages. Indian tourists can also use the app to translate into Hindi or English. 'Hello Uttarakhand' is a technology-enabled community development service aimed at improving the socio-economic conditions of people living in Himalayan states like Uttarakhand. This app has been developed by Dehradun-based IT solution expert, data scientist and social entrepreneur Akash Sharma. Akash has previously developed other successful Android-based mobile apps, such as the 'Uttarakhand Police App', which has had over one lakh downloads so far, making it Uttarakhand's number one mobile app. This app was developed in sync with the Government's initiative of promoting technology in the state. He has also worked on developing regional gaming apps called 'Pithoo Batti' and 'Bagh Bakri', which are also available on the Google Play Store. "My team has been working on this app for over two years now and it has been a very challenging and difficult journey. We have conducted extensive research with over 100 people to collect and feed regional language data into the app. We are hoping that the app helps in boosting the tourism industry in Uttarakhand," said Akash Sharma, Founder and Developer. Akash Sharma is also actively working as an aid to the PR departments of government agencies. In his free time, Akash enjoys traveling and especially trekking.
This story is provided by NewsVoir. ANI will not be responsible in any way for the content of this article. (ANI/NewsVoir).
/* SPDX-License-Identifier: BSD-3-Clause
 * Copyright(c) 2017 Cavium, Inc
 */

#ifndef CTX_H
#define CTX_H

#ifdef __cplusplus
extern "C" {
#endif

/*
 * CPU context registers
 */
struct ctx {
	void *sp;  /* 0 */
	void *fp;  /* 8 */
	void *lr;  /* 16 */

	/* Callee Saved Generic Registers */
	void *r19; /* 24 */
	void *r20; /* 32 */
	void *r21; /* 40 */
	void *r22; /* 48 */
	void *r23; /* 56 */
	void *r24; /* 64 */
	void *r25; /* 72 */
	void *r26; /* 80 */
	void *r27; /* 88 */
	void *r28; /* 96 */

	/*
	 * Callee Saved SIMD Registers. Only the bottom 64 bits
	 * of these registers need to be saved.
	 */
	void *v8;  /* 104 */
	void *v9;  /* 112 */
	void *v10; /* 120 */
	void *v11; /* 128 */
	void *v12; /* 136 */
	void *v13; /* 144 */
	void *v14; /* 152 */
	void *v15; /* 160 */
};

void ctx_switch(struct ctx *new_ctx, struct ctx *curr_ctx);

#ifdef __cplusplus
}
#endif

#endif /* CTX_H */
North Korea has funded its development of nuclear weapons by stealing money from financial institutions around the world via state-sponsored hacks, top cybersecurity experts warned. In a 58-page report released Monday, leading Russian cybersecurity firm Kaspersky revealed that Pyongyang utilized a secret government program called Lazarus to electronically remove funds from banks in 18 countries, according to CNN. North Korea had previously been suspected by researchers of being behind several major thefts, including one last year in which up to $81 million was stolen from Bangladesh's central bank account in New York, as well as other attempted heists in Ecuador, the Philippines and Vietnam. Kaspersky reportedly supplied evidence that Pyongyang was also directly responsible for hacks in over a dozen other nations and that the cash was likely used to fund North Korea's nuclear weapons program. Other nations affected by North Korea's digital robberies included Costa Rica, Ethiopia, Gabon, India, Indonesia, Iraq, Kenya, Malaysia, Nigeria, Poland, Taiwan, Thailand, and Uruguay, the report said. Kaspersky said the addresses used by the attackers were carefully concealed by routing their signals through countries such as France, South Korea and Taiwan, but one fateful error allowed researchers to detect the North Korean signal, according to United Press International. North Korea, which has been led by Kim Jong Un since his father's death in 2011, has suffered years of economic sanctions since openly pursuing nuclear weapons in spite of U.N. Security Council resolutions. Pyongyang has routinely threatened to use the full extent of its nuclear arsenal in response to perceived hostilities by the U.S. and Washington's regional allies such as South Korea and Japan.
Defense experts have estimated North Korea to possess about 10 nuclear warheads, but have doubted its ability to attach them to long-range missiles capable of reaching the U.S. North Korea has launched a number of missiles recently, indicating it was working on such intercontinental ballistic missile (ICBM) technology. Tensions in the Asia-Pacific region have been heightened since President Donald Trump said he would take a tougher stance on Pyongyang than his predecessor, former President Barack Obama, and since a series of military exercises held last month between the U.S. and South Korea not far from North Korean territory.
Q: How can I efficiently dedupe a very large number of records? I'm working on a database with about 600,000 records, of which approximately 200,000 are believed to be duplicates. Has anyone got experience/advice for doing dedupe at scale? I imagine that we'll need to export and do some sort of external dedupe. Has anyone successfully used the datamade dedupe with Civi? I know that the "merge" API got some love a couple months ago - will any of that help me?

A: We did a lot of work on the dedupe processes in CiviCRM, and in 4.7 it's a much easier process to de-dupe large databases than it used to be. It was presented by Eileen at this year's London CiviCon; it's worth a look before you try to do it outside of CiviCRM, with all the pitfalls that brings with it!
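If you do end up exporting for an external pass, the usual first step is "blocking": group records by a normalized key so you only compare candidates within a block, rather than all 600,000 × 600,000 pairs. A minimal sketch of that idea follows; the field names ("id", "email") are hypothetical and not the Civi export schema, and real-world dedupe would add more keys (phonetic name codes, postcode, etc.):

```python
from collections import defaultdict

def normalize_email(email):
    """Cheap normalization: lowercase and strip surrounding whitespace."""
    return email.strip().lower()

def block_candidates(records):
    """Group records that share a normalized email; every block with
    more than one record is a candidate duplicate set to review."""
    blocks = defaultdict(list)
    for rec in records:
        key = normalize_email(rec["email"])
        if key:
            blocks[key].append(rec["id"])
    return {key: ids for key, ids in blocks.items() if len(ids) > 1}

records = [
    {"id": 1, "email": "Ada@example.org "},
    {"id": 2, "email": "ada@example.org"},
    {"id": 3, "email": "grace@example.org"},
]
print(block_candidates(records))  # {'ada@example.org': [1, 2]}
```

Tools like datamade's dedupe automate the harder part (learning fuzzy similarity between records within blocks), but even this naive keying usually collapses the obvious exact-match duplicates before you invest in anything fancier.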
Qua'tarian "Mira mar" Class Light Strike Cruiser The Mira mar Class was developed after the Qua'tarians took over the Terrains. The Terrains had an unfinished light cruiser for their fleet and offered up the plans for the design. The Empire then tried to make a science ship from the design in 2245, but the only prototype was destroyed in a terrible accident in the Ren'jah system by Commander Kodos. When the design was about to be scrapped, it was brought back in 2267 and updated to be used as an effective strike cruiser, since its weapon arcs were very tight to front and rear with great overall coverage against fighters. However, due to the FTL engines being very external (for higher combat speed) and the defective sensor array (one hit to the area behind the bridge causes a power surge and blows the whole array up), the design was abandoned by 2278 with only 128 ships in service. The IQS Mira mar was the most famous, commanded by the famous and traitorous Captain Nehrig, who died in combat around 2241 in the "Ve'po'Bah" or "Great Traitor" Incident, where Ivan Rankoff punished her by death for betraying him and the empire on three different occasions. The lighter-colored ship on the left is the earlier refitted variant from 2256, with the darker ship being the final refit of 2268-2278.
The campground Right on the Barbarossa beach, the campground is equipped with sites for tents, caravans and campers of all sizes. The sites are separated by hedges, with ample natural shade; all of the sites have free electric and water hook-up. The excellent bathroom and shower facilities have been recently renovated and are equipped with free hot water for showers, wash basins and sinks for washing dishes and cooking utensils, HK facilities, baby changing rooms and token-operated washers and dryers. The campground offers guests a TV room, an ironing room, shared refrigerators and freezers, free Wi-Fi, a playground for children, basketball and volleyball courts, a bocce court and a ping pong table. The Reception area provides the following services: a safebox for valuables, a cell phone charging station, and assistance and advice for excursions on the Island of Elba and to the other islands of the Tuscan Archipelago. Advance booking is recommended.
Why Have A Fanniversary? “You have three minutes to celebrate the 45th anniversary of the Haunted Mansion. Ready, go!” If you were given such a challenge, we’re sure that you’d be able to fill that time rather quickly because of the 999 things you know about the classic Disney attraction. These are the types of challenges that D23, the Official Disney Fan Club, faced when putting together one of their staple events. “The Lion King” celebrates its 20th anniversary this year! In addition to the ever-popular D23 Expo that happens every other year, D23 puts on more personal and focused events throughout the year that still celebrate Disney as a whole by looking at the pieces. One of those events, called the Fanniversary, celebrates Disney through exactly what its name contains: fans and anniversaries. With the entire Walt Disney Company as their resource, D23 created an event that would not only cater to Disney fans, but also share the lesser-known things of the Company by looking back at some of the things that made it. “The great thing about Fanniversary is whether you’re a Disney fan of the studios, television, or theme parks from ages 6 to 66 and beyond, it gave us a way to engage and entertain an audience in a personal, more meaningful way,” said Jeffrey Epstein, a D23 spokesperson. They are able to do that through a 105-minute program that will contain never-before-seen pictures, exclusive videos, and personal stories surrounding the Disney things that are celebrating landmark anniversaries this year.
We have Disney Fanniversaries so we can “geek out.” From very notable items like the 45th anniversary of the Haunted Mansion attraction at Disneyland park to lesser-known (perhaps even forgotten) items like the 15th anniversary of the Disney Channel Original Movie ‘Zenon: Girl of the 21st Century’, Billy Stanek, this year’s show writer, found that his favorite part of putting together the event was celebrating the things that “don’t necessarily have celebrations built around them.” To give all these things a fitting tribute, Billy sifted through tons of material and created what he promised to be an event that’ll make veteran and new Disney fans “geek out”. There will be never-before-seen pictures, exclusive video interviews with some of the Disney Legends and luminaries who put their own stamp on the Disney dream, and even more engagement that is sure to deepen your love for Disney. That love has also deepened for those who’ve been involved with putting together the event at D23, including Jeffrey and Billy. “It really completely changes when you get in front of the fans,” Billy said. “Bringing things I love to other people with like minds – that’s the best part.” Tickets are now on sale for the 2014 D23 Fanniversary events to all D23 members for all tour dates and stops, which will start on August 9 in Burbank, Calif. Any remaining tickets will then be offered to the general public for purchase beginning July 9.
Trammell, a third-term House member who won his race last year with about 52% of the vote, said he hasn’t thought much about the calls for his replacement. He represents House District 132, which is considered a toss-up for Democrats, if not Republican-leaning. Voters in Trammell’s district supported Republican Brian Kemp in the governor’s race last year with about 51% of the vote. A little more than 50% of voters in the district supported the GOP’s Donald Trump in the 2016 presidential race. “It will be an election as usual in terms of the way we prepare for it,” Trammell said. “We do the same thing every two years — take our record to the voters.” But Cole Muzio, the executive director of the Family Policy Alliance of Georgia, said he doesn’t think Trammell’s record will hold up with his constituents. He said Trammell is out of touch with his “conservative-ish” district. The group held a press conference in Trammell’s district in Hogansville on Monday that Muzio said served to “put (Trammell) on notice” that anti-abortion voters are organizing to vote him out. “This is focused on an individual who led the charge against (HB 481),” Muzio said. “He tried every technical maneuver he could, and he pulled together the opposition to the bill (during the House debate).” The group has targeted 12 lawmakers who voted against HB 481, including two Republicans. Most on the list represent metro Atlanta districts. Muzio said the group is working with other anti-abortion organizations to recruit candidates, raise money and campaign against abortion rights supporters who seek office. Georgia Democrats have consistently vowed to challenge Republican lawmakers who voted for HB 481. Groups such as Planned Parenthood and the Georgia WIN List, and state Democratic groups have begun fundraising and candidate recruitment campaigns. Several women have announced their intent to run for office against Republicans next year over the anti-abortion votes. 
“Democrats are going to continue to make gains in the Legislature because of the social policy that the Republicans have passed,” Trammell said. “We expect to pick up more seats in 2020.” Trammell, who’s served as the House Democratic leader since 2017, is a rare white rural Democrat in the chamber. He won his race last year by 749 votes against a Republican candidate whom he accused of not living in the district. The office of then-Secretary of State Kemp declined to investigate the claim. Stay on top of what’s happening in Georgia government and politics at www.ajc.com/politics.
Wednesday, August 10, 2011 The longer I work in early childhood education, the more convinced I become that the single most important thing we can do for the young children we serve is to build connections and relationships that foster children's emotional well-being and the ability to interact positively with others. This is not to say that cognitive development doesn't "matter," but rather that in the absence of a strong sense of social and emotional competence, the "ABC's and 123's" just won't take you very far. Unfortunately, as early childhood teachers, many of us are not prepared to address this critical aspect of children's development, especially in the face of the many challenging behaviors that children can display as they grow and learn. With the generous support of the St. Louis Mental Health Board, CDCA has been working for the past five years to address this critical need through our Social-Emotional Early Childhood (SEEC) Project. SEEC is a year-long process that begins with classroom teaching teams attending four full-day classes on supporting children's social and emotional development. These classes stress the importance of building supports from the bottom up-- focusing first on relationships, teacher beliefs and attitudes, then on classroom environment, social-emotional teaching strategies and finally, individual intervention plans for children with greater needs. Friday, April 15, 2011 This is a private event at the Magic House, only open to the families that register. Tickets are only $6.00. We have had so much fun at this event the past few years and would love for you to join us this year. Click here to register and bring as many of your friends and family as possible. What's your idea of a perfect play date for a child? We want to know... Tuesday, April 5, 2011 CDCA has proudly supported children and families in the community for the past 40 years. Click here to help us to continue to bring these services to your community. 
If you would like to donate your time, CDCA is always looking for volunteers. Call us at 314-531-1412, ext. 19 if you are interested or leave us a note here. Monday, April 4, 2011 The Outdoor Classroom workshop we held on Saturday, April 2, 2011 was the first outdoor workshop held by CDCA. It was held at Kids International Child Care Center in Ellisville, Mo. 25 participants came and participated in an outdoor classroom setting. We worked on creating learning centers outdoors for over an hour. Here are some things that the groups included in their feedback reports. Please join me and others in continuing our discussion about using the outdoor classroom and learning through and from Nature! Here is a list of what participants said were new activities and ideas that they learned about through the workshop.
1) Nature touchy-feely box
2) Live insect boxes/containers
3) Windy day scarves used to mimic the movement of trees and grasses
4) Using large and small tree cookies as percussion instruments
5) Making Grab and Go bags with instruments and props to use outdoors
6) Different uses for materials I already have in my center
7) Working with tree cookies cut in halves and fourths as fractions
9) Planning for outdoor experiences will lead to children spending more time outdoors
10) Using natural materials from the outdoors instead of plastic stuff
11) Including a bird watching station
12) Having children sketch plants
13) Measuring things outside
14) The Tree activity (look, move, build and draw)
15) Creating an Outdoor Play policy
16) Using outdoor materials in art projects (like sticks) and doing them outdoors
Again, thanks to everyone who shared a new idea or activity they learned. Click here to see what participants learned in this valuable workshop by going to our YouTube Channel. Look for more posts related to The Outdoor Classroom. I hope that many of you were inspired and create all of those wonderful spaces and activities that you listed.
Check out this link for the latest Grow & Learn Family Education workshops CDCA is offering. Would these trainings be of value to you as a parent? If so, let us know. If not, we want to know why, we value your feedback!
Q: How to push down main menu while expanding nested menu using Flex CSS? I have a two-level menu (main and nested). The menu appears as a Flex column on small screen sizes, but when I hover over the main menu the sub menu appears on top of the main menu. How can I push down the main menu to make the required room for the sub menu while expanding?

#main-menu ul {
  display: flex;
  flex-direction: row;
  background-color: #F2F3F4;
  padding: 0;
}
#main-menu ul li {
  position: relative;
  flex: 1 0 auto;
  text-align: left;
}
#main-menu li ul {
  display: none;
  width: 100%;
  position: absolute;
}
#main-menu ul ul {
  left: 0;
  top: 10;
}
#main-menu ul li:hover ul,
#main-menu li ul li:hover ul {
  display: flex;
  flex-direction: column;
}
#main-menu ul a {
  display: block;
  padding: 10px;
}
#main-menu ul a:hover {
  background-color: #B2BABB;
}
@media only screen and (max-width: 768px) {
  #main-menu ul {
    flex-direction: column;
  }
}

<nav id="main-menu">
  <ul>
    <li><a href="#">Item 1</a>
      <ul id="sub-menu">
        <li><a href="#">Item 1.1</a></li>
        <li><a href="#">Item 1.2</a></li>
        <li><a href="#">Item 1.3</a></li>
      </ul>
    </li>
    <li><a href="#">Item 2</a></li>
    <li><a href="#">Item 3</a></li>
    <li><a href="#">Item 4</a></li>
  </ul>
</nav>

A: That overlap is caused by the inner ul being set to position: absolute, which simply means the ul is taken out of flow and doesn't affect any other elements.
#main-menu li ul {
  display: none;
  width: 100%;
  position: absolute; /* this needs to be changed */
}

Simply add a new rule in the media query that overrides the previous setting on smaller screens:

@media only screen and (max-width: 768px) {
  #main-menu ul {
    flex-direction: column;
  }
  #main-menu li ul { /* added */
    position: relative;
  }
}

Stack snippet

#main-menu ul {
  display: flex;
  flex-direction: row;
  background-color: #F2F3F4;
  padding: 0;
}
#main-menu ul li {
  position: relative;
  flex: 1 0 auto;
  text-align: left;
}
#main-menu li ul {
  display: none;
  position: absolute;
  left: 0;
  top: 100%;
  width: 100%;
}
#main-menu ul li:hover ul,
#main-menu li ul li:hover ul {
  display: flex;
  flex-direction: column;
}
#main-menu ul a {
  display: block;
  padding: 10px;
}
#main-menu ul a:hover {
  background-color: #B2BABB;
}
@media only screen and (max-width: 768px) {
  #main-menu ul {
    flex-direction: column;
  }
  #main-menu li ul {
    position: relative;
  }
}

<nav id="main-menu">
  <ul>
    <li><a href="#">Item 1</a>
      <ul id="sub-menu">
        <li><a href="#">Item 1.1</a></li>
        <li><a href="#">Item 1.2</a></li>
        <li><a href="#">Item 1.3</a></li>
      </ul>
    </li>
    <li><a href="#">Item 2</a></li>
    <li><a href="#">Item 3</a></li>
    <li><a href="#">Item 4</a></li>
  </ul>
</nav>

You also had these 2 rules,

#main-menu li ul {
  display: none;
  width: 100%;
  position: absolute;
}
#main-menu ul ul {
  left: 0;
  top: 10; /* invalid value, needs a unit (if not a "0") */
}

which I merged into one and corrected the top: 10 property/value.

#main-menu li ul {
  display: none;
  position: absolute;
  left: 0;
  top: 100%; /* changed so it starts at the bottom of its parent */
  width: 100%;
}
958 S.W.2d 740 (1997) STATE of Tennessee, Appellant, v. Billy O. WINNINGHAM, Appellee. Supreme Court of Tennessee, at Nashville. December 29, 1997. *741 John Knox Walkup, Attorney General and Reporter, Michael E. Moore, Solicitor General, Daryl J. Brand, Assistant Attorney General, Nashville, William E. Gibson, District Attorney General, Anthony W. Huddleston, Assistant District Attorney General, Livingston, for Appellant. Phillips M. Smalling, Byrdstown, for Appellee. OPINION BIRCH, Justice. Billy O. Winningham, the appellee, was adjudicated in contempt of court for having violated an order of protection issued at the request of his estranged wife. The contemptuous conduct alleged included setting the fire that burned down his wife's house.[1] This same conduct also served as the basis for an arson indictment later returned against him. *742 The trial court, upon the appellee's motion, dismissed the indictment on double jeopardy grounds; the Court of Criminal Appeals affirmed that judgment. We granted the State's application for review under Rule 11, Tenn. R.App. P., in order to determine whether the double jeopardy provisions of the United States and Tennessee Constitutions bar a subsequent criminal prosecution when the conduct underlying the charge in the indictment also served as the evidentiary basis for an earlier contempt conviction. Because arson and contempt are, in the context presented, significantly different offenses under double jeopardy analyses, we find no double jeopardy violation here and reverse the judgment of the Court of Criminal Appeals. I The protective order in question was entered on October 15, 1993, by the Circuit Court of Pickett County in the matter of Mary S. Winningham v. Billy O. Winningham. It provided: the respondent is enjoined from coming about petitioner [Ms. Winningham] for any purpose and specifically from abusing, threatening to abuse petitioner, or committing any acts of violence upon petitioner upon penalty of contempt. 
On November 19, 1993, Ms. Winningham's house burned, and the appellee was arrested and incarcerated the same day on a contempt charge for violation of the protective order. On November 23, 1993, the trial court held a hearing on the contempt charge and found the appellee guilty of civil and criminal contempt. The trial court delineated the factual basis for its ruling: The proof in this case satisfies the Court both by a preponderance of the evidence for civil contempt and beyond a reasonable doubt for criminal contempt that the defendant did in fact violate this order. I'm satisfied that the proof, by both direct and circumstantial evidence, indicates that the defendant threatened Ms. Winningham's life on the telephone, that he came around there, that he came back onto the back porch and cut the wires. I'm satisfied that by direct and circumstantial evidence that he came back to the property and set the fire that led to this house being burned down. The Court finds in this case that the aggrieved party has suffered damages in the burning of her home and in the shooting of her car, both of which in the Court's opinion, and the Court finds both by a preponderance of the evidence and beyond a reasonable doubt, was at the hand of the defendant. The trial court imposed punishment for both civil contempt and criminal contempt, pursuant to Tenn. Code Ann. §§ 36-3-610 (1991) and 29-9-105 (1980).[2] The order of civil contempt was vacated on January 24, 1994. As of that date, the appellee had been incarcerated longer than the maximum sentence allowable for criminal contempt under Tenn. Code Ann. § 29-9-103 (Supp. 1993).[3] On January 3, 1994, the appellee was indicted for arson in the alleged burning of Ms. Winningham's house. The Criminal Court of Pickett County found that the trial court's prior contempt judgment was based on the same facts upon which the arson indictment had been grounded. 
Consequently, the court dismissed the arson indictment on double jeopardy grounds, and the Court of Criminal Appeals affirmed the dismissal. II Because this appeal presents a question of law, our review is de novo with no *743 presumption of correctness. State v. Davis, 940 S.W.2d 558, 561 (Tenn. 1997). The Double Jeopardy Clause of the Fifth Amendment to the United States Constitution, applicable to the states through the Fourteenth Amendment, provides that no person shall "be subject for the same offense to be twice put in jeopardy of life or limb... ." Article 1, § 10 of the Tennessee Constitution provides that "no person shall, for the same offence, be twice put in jeopardy of life or limb." As we have stated many times, three fundamental principles underlie double jeopardy: (1) protection against a second prosecution after an acquittal; (2) protection against a second prosecution after conviction; and (3) protection against multiple punishments for the same offense. State v. Denton, 938 S.W.2d 373, 378 (Tenn. 1996) (citing, among others, North Carolina v. Pearce, 395 U.S. 711, 717, 89 S.Ct. 2072, 2076, 23 L.Ed.2d 656, 664-65 (1969)). Under the Tennessee Constitution, this Court inquires further than do federal courts in determining whether a defendant has been unconstitutionally subjected to double prosecution for the same conduct. According to Denton, 938 S.W.2d at 381, resolution of a double jeopardy issue requires the following: (1) a Blockburger analysis of the statutory offenses; (2) an analysis, guided by the principles of Duchac [v. State, 505 S.W.2d 237 (Tenn. 1973)], of the evidence used to prove the offenses; (3) a consideration of whether there were multiple victims or discrete acts; and (4) a comparison of the purposes of the respective statutes. None of these steps is determinative; rather the results of each must be weighed and considered in relation to each other. 
A Thus, we begin with the first Denton factor, an analysis under the test established in Blockburger v. United States, 284 U.S. 299, 304, 52 S.Ct. 180, 182, 76 L.Ed. 306, 309 (1932).[4] In the context of both double punishment and double prosecution cases, the subject offenses must survive the Blockburger "same-elements" test in order to satisfy the requirements of the Double Jeopardy Clause. United States v. Dixon, 509 U.S. 688, 696, 113 S.Ct. 2849, 2856, 125 L.Ed.2d 556, 568 (1993). This test asks "whether each offense contains an element not contained in the other; if not, they are the `same offence' and double jeopardy bars additional punishment and successive prosecution." Id.[5] Dixon included appeals by Alvin Dixon and Michael Foster; their cases were consolidated on appeal. Dixon was released under an order which specified that the commission of "any criminal offense" could subject him to prosecution for contempt of court. While on bond under the release order, he was indicted for a felony drug offense. This indictment triggered Dixon's conviction for criminal contempt of court for violation of the release order. The other defendant, Foster, consented to a protection order obtained by his estranged *744 wife. This order required that he not "molest, assault, or in any manner threaten or physically abuse" his wife. Alleging several episodes of assaults and threats, his wife filed three motions to have Foster held in contempt. The court held a hearing and found him guilty of criminal contempt for violation of the order. Subsequently, he was indicted on three counts of threatening to injure, one count of simple assault, and one count of assault with intent to kill. All five counts were based on episodes for which he was either acquitted or convicted in the previous contempt hearing. 
On appeal, each defendant contended that prosecution under his respective indictment constituted a second prosecution for the same offense — the first having been the contempt conviction. This procedure, they asserted, violated double jeopardy principles. Id. at 691-93, 113 S.Ct. at 2853-55, 125 L.Ed.2d at 564-66. A majority of the Dixon Court disagreed about the application of the Blockburger test to the facts described above. As a result, not one of the five separate approaches in Dixon gained support sufficient to constitute a majority view.[6] Nevertheless, in the matter under review, the Court of Criminal Appeals adopted Justice Scalia's approach to the Blockburger test and concluded that the arson indictment violated double jeopardy principles. Under Scalia's approach, the language of a court order may, but does not always, "incorporate" statutory offenses into the order. If an offense is deemed to have been "incorporated," then application of double jeopardy principles would permit but one prosecution, which could be for either the contempt of court or the incorporated offense — whichever one was first prosecuted. The rationale is that when the underlying offense is incorporated into the order, it becomes an element of contempt. As an element of contempt, the underlying offense involved must be included in the Blockburger analysis. In effect, the underlying offense becomes a lesser-included offense of contempt. Consequently, the underlying offense does not have an element not contained in the contempt offense, and subsequent prosecution for the underlying offense violates double jeopardy principles under Blockburger. Id. at 697-98, 113 S.Ct. at 2856-57, 125 L.Ed.2d at 569-70. As a result of the "incorporation" approach, the outcome of a case would necessarily depend on the language of the order at issue, and the consequence of such dependence is not easily predictable.
According to Justice Scalia, the language of the protective order prohibiting Foster from assaulting or threatening his wife did not incorporate all criminal statutes concerning assaults and threats. Id. at 700-02, 113 S.Ct. at 2858-59, 125 L.Ed.2d at 570-72. In contrast, the language of the order prohibiting Dixon from committing criminal offenses did incorporate all criminal statutes. Id. at 698, 113 S.Ct. at 2857, 125 L.Ed.2d at 569. The result of this approach is not one that is readily predictable or consistent. The protective order issued against Winningham enjoined him from "committing any acts of violence upon petitioner." This language varies slightly from the language of the orders issued against Foster and Dixon. Consequently, application of the "incorporation" approach to the protection order in the instant case is unworkable. The problem with the "incorporation" approach is that no matter how carefully protective orders may be crafted, they may nevertheless incorporate the elements of a criminal offense and thereby unwittingly bar subsequent prosecution *745 for the underlying offense — a result certainly not intended. We find that Chief Justice Rehnquist's application of Blockburger is better-reasoned and more easily adaptable to Tennessee case law. Under this approach, protection orders[7] do not implicitly incorporate the statutory elements of any crime into the offense of contempt. The Blockburger test focuses not on the terms of the particular order involved, but on the statutory elements of contempt in the ordinary sense. Further, the underlying criminal offense is not viewed as a lesser-included offense because it is not necessarily included within the statutory elements of contempt. Id. at 716-20, 113 S.Ct. at 2867-68, 125 L.Ed.2d at 579-82 (Rehnquist, C.J., concurring and dissenting). Current Tennessee case law parallels Rehnquist's approach to the double jeopardy issue created by criminal prosecution following contempt proceedings. 
See State v. Wyche, 914 S.W.2d 558, 560-61 (Tenn. Crim. App. 1995); State v. Sammons, 656 S.W.2d 862, 866-69 (Tenn. Crim. App. 1982). In Sammons, the defendant violated an order awarding custody of his daughter to his former wife. The violations included his having abducted the daughter several times; he was cited for contempt of court for this conduct. Subsequently, he was indicted on charges of kidnaping and burglary based on the same conduct which resulted in his contempt conviction. Id. at 864-66. Because of a procedural irregularity, the court was unable to determine whether a double jeopardy violation had occurred in that case. Id. at 866. Nevertheless, the court proceeded to find that under Blockburger there would have been no double jeopardy bar to the subsequent prosecutions. Id. at 868. The court based its finding on the principle that contempt and kidnaping statutes serve entirely different purposes: The purposes of the general statutes authorizing a court to punish for abuse of its processes and those creating and prescribing punishment for various indictable offenses are so entirely different, and designed to accomplish such wholly different purposes, that we do not find any violation of constitutional principles in imposing punishment upon an offender under both sets of statutes. Id. at 867 (quoting Maples v. State, 565 S.W.2d 202 (Tenn. 1978)). This reasoning represents the prevailing view, as the Sammons court explained: The traditional view has long been that "former jeopardy cannot be invoked on the ground the same act is punishable both as a contempt of court and as a crime." The reason underlying the rule is a recognition that the two offenses are not the same for constitutional purposes. Thus, the courts have concluded, "the fact that an act constituting a contempt is also criminal and punishable by indictment or other method of criminal prosecution does not deprive the outraged court from punishing the contempt." Id. at 868 (citations omitted).
Furthermore, whether the same conduct can be subject to multiple punishment is a matter of legislative intent, and the legislature clearly intended that the kidnaping statute and the contempt statute address totally separate and independent concerns. Id. at 869. Applying Justice Rehnquist's approach to the instant case, we find that the arson indictment does not violate federal double jeopardy principles. Tennessee Code Annotated § 29-9-102(3) (1980) provides the elements of contempt: (1) willful disobedience or resistance and (2) to any lawful writ, process, order, rule, decree, or command of said courts. The statutory elements of arson, however, are (1) the knowing damage of any structure by means of a fire or explosion, and (2) without the consent of all persons who have a possessory, proprietary, or security interest therein, or (3) with intent to damage the structure to collect insurance or for any unlawful purpose. Tenn. Code Ann. § 39-14-301(a) (1991). Clearly, both statutes contain elements which the other does *746 not; in fact, they have no common elements. Thus, application of the Blockburger test strongly suggests that the legislature intended to impose separate punishment for each of these offenses. With this conclusion, the analysis under the Double Jeopardy Clause of the United States Constitution is now complete, and the arson indictment withstands federal constitutional scrutiny. B Continuing the inquiry under the Double Jeopardy Clause of the Tennessee Constitution, the next step is the Duchac analysis of the evidence used to prove each offense. If the same evidence is not required to prove each offense, "then the fact that both charges relate to, and grow out of, one transaction, does not make a single offense where two are defined by the statutes." Denton, 938 S.W.2d at 380 (quoting Duchac v. State, 505 S.W.2d 237, 239 (Tenn. 1973)). 
The particular facts underlying each case must be examined to determine whether one conviction will bar the other. Id. (quoting Duchac, 505 S.W.2d at 240). In Denton, because defendant Denton's conduct consisted of a single attack on a single victim, this Court found that the charges of aggravated assault and attempted voluntary manslaughter necessarily relied on the same evidence. Thus, application of Duchac indicated that the two offenses were the same for double jeopardy purposes. Id. at 382. In the cause before us, evidence of the following conduct formed the grounds for contempt: threats to Ms. Winningham's life, trespass upon her property, shots fired at her car, and the setting of the fire that destroyed her house. The house-burning incident served also as the grounds for the arson indictment. Thus, in order to prove arson, the State must rely on evidence which necessarily includes some of the same evidence used to establish the appellee's conduct as contemptuous. We are mindful that evidence in addition to the arsonous conduct supported the contempt conviction. However, the various acts upon which the contempt conviction was based are, for purposes of a Duchac analysis, inseparable. We cannot ascertain whether any one of the factual findings, including the finding that the appellee burned his wife's house, was truly necessary to establish contempt, and we decline to speculate. In sum, the application of Duchac principles suggests that the two offenses in the case under review are the same for double jeopardy purposes. C We now turn to Denton's third double jeopardy factor, the consideration of whether there were different victims or discrete acts. The charges of contempt and arson both involve the same act of burning a house. However, the contempt conviction was also based on other discrete acts, such as threats and trespass. Second, different victims are involved. In general terms, criminal conduct offends the State as the sovereign. 
Also offended by arson would be the owner of the structure and, perhaps, the community-at-large. In contrast, "'[t]he proceeding in contempt is for an offense against the court as an organ of public justice, and not for violation of the criminal law.'" Sammons, 656 S.W.2d at 868 (quoting State v. Howell, 80 Conn. 668, 69 A. 1057, 1058 (1908)) (emphasis added). Thus, the court and the judicial process are "victims" of the act of contempt. The fact that different victims are involved suggests that separate prosecutions would not violate double jeopardy principles under the Tennessee Constitution. D The fourth and final step under Denton requires an analysis of the purposes sought to be accomplished by the enactment of each of the two statutes. Here, the arson statute and the contempt statute serve vastly dissimilar purposes. Obviously, the prohibition against arson is intended to deter the destruction of property and the endangerment of human life. In marked contrast, the offense of contempt of court has as its purposes the maintenance of the integrity of court orders and the vindication of the court's authority. Dixon, 509 U.S. at 742, 113 S.Ct. at 2880, 125 L.Ed.2d at 597-98 (Blackmun, J., concurring and dissenting); Sammons, 656 S.W.2d at 869. So essential is *747 this purpose to the proper functioning of the court that even erroneous orders must be obeyed. Id. The fact that the two statutes serve vastly different purposes suggests that separate prosecutions would not violate double jeopardy principles under our state constitution. III To summarize, through our analyses under Denton we have found both similarities and significant differences between the crime of contempt and arson, as presented in the context of this case. In the final analysis, we conclude that the Denton factors weigh in favor of allowing the prosecution for arson to follow the appellee's contempt conviction. 
Concededly, because the contempt conviction and arson indictment both involve the same act of burning Ms. Winningham's house, some of the same evidence used to prove contempt may also be used to prove arson. This merely underscores the similarity of the two offenses under Duchac. However, the vast differences in the elements of each statute, the victims of each statute, and the purposes of each statute demonstrate the legislature's intent to allow separate punishment for both arson and contempt. Therefore, we hold that the prosecution for arson, in the context of the facts and circumstances here presented, does not violate the Double Jeopardy Clause of the Tennessee Constitution. In conclusion, neither the Double Jeopardy Clause of the United States Constitution nor that of the Tennessee Constitution bars separate proceedings and punishments for contempt and the substantive offense underlying the contempt. The judgment of the Court of Criminal Appeals is reversed, and the indictment for arson is reinstated. Costs of this cause are taxed against the appellee, for which execution may issue if necessary. ANDERSON, C.J., DROWOTA, REID and HOLDER, JJ., concur. NOTES [1] Other conduct supporting the contempt conviction included: threats to Ms. Winningham's life, trespass upon her property, and shots fired at her car. [2] Tennessee Code Annotated § 36-3-610 (1991) provides: "Upon violation of the order of protection ... the court may hold the defendant in civil or criminal contempt and punish him in accordance with the law." Tennessee Code Annotated § 29-9-105 (1980) provides: "If the contempt consists in the performance of a forbidden act, the person may be imprisoned until the act is rectified by placing matters and person in statu quo [sic], or by the payment of damages." [3] Tennessee Code Annotated § 29-9-103 (Supp. 1993) provides: (a) The punishment for contempt may be by fine or by imprisonment, or both. 
(b) Where not otherwise specially provided, the circuit, chancery, and appellate courts are limited to a fine of fifty dollars ($50.00), and imprisonment not exceeding ten (10) days... . [4] Our Blockburger analysis is guided by United States v. Dixon, 509 U.S. 688, 113 S.Ct. 2849, 125 L.Ed.2d 556 (1993). In Dixon, the United States Supreme Court held that double jeopardy protection attaches to nonsummary criminal contempt proceedings in the same way it attaches to other criminal prosecutions. Nonsummary contempt proceedings address contemptuous conduct occurring outside of the court's presence. As such, the contempt hearing is usually conducted before a different judge at a date subsequent to the conduct. In contrast, summary contempt refers to misbehavior occurring in the presence of the court, which the court addresses immediately. This court has previously held that imposing two punishments for the same offense through summary contempt and a criminal prosecution does not violate double jeopardy principles. Maples v. State, 565 S.W.2d 202, 203 (Tenn. 1978). The instant case involves nonsummary contempt proceedings. [5] Prior to Dixon, the United States Supreme Court also included the "same-conduct" test in the double jeopardy analysis: if, to establish an essential element of an offense, the government will prove conduct that constitutes another offense for which the defendant has already been prosecuted, the second prosecution violates double jeopardy. Grady v. Corbin, 495 U.S. 508, 510, 110 S.Ct. 2084, 2087, 109 L.Ed.2d 548, 557 (1990). The Dixon majority, however, explicitly overruled Grady, leaving Blockburger as the sole measure of federal double jeopardy violations. Dixon, 509 U.S. at 704, 113 S.Ct. at 2860, 125 L.Ed.2d at 573. Thus, courts are no longer required to determine whether both prosecutions were based on the same underlying conduct. 
[6] Justice Scalia, joined by Justice Kennedy, proffered the first approach, a Blockburger analysis modified to fit the context of a contempt proceeding followed by a prosecution for the underlying substantive offense. Chief Justice Rehnquist, joined by Justice O'Connor and Justice Thomas, proffered the second approach, a traditional Blockburger analysis. The third opinion, written by Justice White, and the fifth opinion, written by Justice Souter, are arguably the least viable approaches in light of the majority decision to overrule Grady. Both utilized a Grady-type analysis of the conduct at issue, rather than focusing on the statutory elements of each offense, and concluded that all the subsequent prosecutions violated double jeopardy. Writing the fourth opinion separately, Justice Blackmun found no double jeopardy violations without actually applying Blockburger. [7] Defendant Dixon was actually subject to a release order. Dixon, 509 U.S. at 698, 113 S.Ct. at 2857, 125 L.Ed.2d at 565.
Dementia, lower respiratory tract infection, and long-term mortality. To examine long-term mortality and its determinants in nursing home residents with dementia diagnosed with a lower respiratory tract infection (LRI). US (Missouri) nursing home residents (541) and Dutch residents (403) with dementia who were treated with antibiotics for an LRI. Prospective studies of nursing home-acquired LRI in the US (Missouri) and in the Netherlands. Measurements included demographics, indicators of acute illness, general health condition, intake problems, and comorbid disease. Six-month mortality rates were calculated and Cox proportional hazards models were developed for mortality up to 2 years after diagnosis. Six-month mortality was 48.8% among Dutch residents and 36.4% among US residents. After multivariable adjustment, Dutch nationality was not associated with higher long-term mortality. Variables most strongly associated with long-term mortality were activity of daily living dependency and male gender. Other variables associated with outcome were diverse: respiratory difficulty, age, dehydration, congestive heart failure, decreased alertness, decubitus ulcers, Parkinson disease, weight loss/poor nutrition, and pulse rate. LRI is followed by substantial mortality in the months after diagnosis, indicating high frailty of nursing home residents with dementia who develop LRI. A variety of patient characteristics, including many not directly related to LRI, were consistently associated with long-term mortality in two cohorts with differing illness severity. The results are relevant for informing families, evaluating poor long-term survival in the context of care and treatment, and balancing the potential burdens and benefits of care.
Manila tornado

The Manila tornado occurred at 4:30 pm Philippine Time on August 14, 2016. This was only the second time in recorded history that a tornado struck Manila, the capital of the Philippines. The tornado affected the cities of Manila and Quezon City, and its most destructive impact was in the southern part of Metro Manila.

Supply
The water and electricity supply of the affected barangays in Quezon City was shut down because of the tornado. Several electric lines were also damaged by the storm.

Aftermath
More than 200 houses were damaged and two people were injured. The tornado formed at 4:25 pm in the Baseco Compound, Tondo.

See also
2016 Philippine southwest monsoon floods

References

Category:Tornadoes of 2016 Category:Natural disasters in the Philippines Category:2016 disasters in the Philippines Category:2016 meteorology Category:History of Manila Category:August 2016 events in Asia Category:Tornadoes in the Philippines
Q: Gzip compress/uncompress a long char array

I need to compress a large byte array. I'm already using the Crypto++ library in the application, so having the compression/decompression part in the same library would be great. This little test works as expected:

string test = "bleachbleachtestingbiatchbleach123123bleachbleachtestingb.....more";

string compress(string input) {
    string result("");
    CryptoPP::StringSource(input, true, new CryptoPP::Gzip(new CryptoPP::StringSink(result), 1));
    return result;
}

string decompress(string _input) {
    string _result("");
    CryptoPP::StringSource(_input, true, new CryptoPP::Gunzip(new CryptoPP::StringSink(_result), 1));
    return _result;
}

int main() {
    string compressed = compress(test);
    string decompressed = decompress(compressed);
    cout << "original size :" << test.length() << endl;
    cout << "compressed size :" << compressed.length() << endl;
    cout << "decompressed size :" << decompressed.length() << endl;
    system("PAUSE");
}

I need to compress something like this:

unsigned char long_array[194506] {
    0x00,0x00,0x02,0x00,0x00,0x04,0x00,0x00,0x00,
    0x01,0x00,0x02,0x00,0x00,0x04,0x02,0x00,0x04,
    0x04,0x00,0x02,0x00,0x01,0x04,0x02,0x00,0x04,
    0x01,0x00,0x02,0x02,0x00,0x04,0x02,0x00,0x00,
    0x03,0x00,0x02,0x00,0x00,0x04,0x01,0x00,0x04,
    ....
};

I tried to use long_array as a const char * and as byte, then fed it to the compress function. It seems to be compressed, but the decompressed result has a size of 4 and is clearly incomplete. Maybe it's too long. How could I rewrite those compress/uncompress functions to work with that byte array? Thank you all. :)

A: I tried to use the array as const char * and as byte then feed it to the compress function, it seems to be compressed but the decompressed one has a size of 4, and it's clearly incomplete.

Use the alternate StringSource constructor that takes a pointer and a length. It will be immune to embedded NULLs. 
CryptoPP::StringSource ss(long_array, sizeof(long_array), true,
    new CryptoPP::Gzip(
        new CryptoPP::StringSink(result), 1));

Or, you can use:

Gzip zipper(new StringSink(result), 1);
zipper.Put(long_array, sizeof(long_array));
zipper.MessageEnd();

Crypto++ added an ArraySource at 5.6. You can use it too (but it's really a typedef for a StringSource):

CryptoPP::ArraySource as(long_array, sizeof(long_array), true,
    new CryptoPP::Gzip(
        new CryptoPP::StringSink(result), 1));

The 1 that is used as an argument to Gzip is a deflate level. 1 is one of the lowest compression levels. You might consider using 9 or Gzip::MAX_DEFLATE_LEVEL (which is 9). The default log2 window size is the max size, so there's no need to turn any knobs on it:

Gzip zipper(new StringSink(result), Gzip::MAX_DEFLATE_LEVEL);

You should also name your declarations. I've seen GCC generate bad code when using anonymous declarations. Finally, avoid naming a variable array: under using namespace std it can collide with std::array in C++11 code.
The present invention relates to a semiconductor device and a method for producing the same that has a built-in integrated circuit section used for information communication equipment or electronic equipment for offices and allows a high-density packaging provided with wires or electrodes that connect the semiconductor integrated circuit section to the terminals of external equipment. Recently, with compactness, high density and high functionality of electronic equipment, compactness and high density have been required for semiconductor devices. To satisfy this need, a technique to form CSP (chip size package) within semiconductor wafers has come to be used (Japanese Laid-Open Patent Publication No. 8-102466). The CSP formed within a semiconductor wafer is called a wafer level CSP even after a semiconductor wafer is divided into chips. Hereinafter, a conventional semiconductor device and a production method thereof will be described in detail in reference with the accompanying drawings. FIG. 5 is a cross-sectional view of a conventional semiconductor device, more specifically, a conventional wafer level CSP. As shown in FIG. 5, in the conventional wafer level CSP, a plurality of element electrodes 101 that are electrically connected to semiconductor elements are formed on a semiconductor wafer 100 in which the semiconductor elements are arranged in respective semiconductor chip forming regions (not shown). The surface of the semiconductor wafer 100 is covered with a passivation film 102 in which a plurality of openings 102a are arranged in order to expose the element electrodes 101. On the passivation film 102, a plurality of Cu wires 103 that are connected to the element electrodes 101 via the openings 102a are formed. The surface of each of the Cu wires 103 is covered with a Ni-plated layer 104. On the passivation film 102, a cover coating film (protective film) 105 is formed so as to cover the Cu wires 103 as well as Ni-plated layer 104. 
In the cover coating film 105, a plurality of openings 105a are formed so as to expose a plurality of external electrodes 106 that are formed of a portion of the Cu wires 103 (including the Ni-plated layer 104) and are two-dimensionally arranged. A plurality of solder bumps 107 connected to the external electrodes 106 via the openings 105a are formed immediately above the external electrodes 106 as external electrode terminals. The outline of a method for producing the conventional wafer level CSP is as follows. First, a passivation film 102 is formed by spin-coating on the whole surface of the semiconductor wafer 100 provided with semiconductor elements and a plurality of element electrodes 101 electrically connected to the semiconductor elements in respective semiconductor chip forming regions. Then, a plurality of openings 102a is formed in the passivation film 102 so as to expose the element electrodes 101 by well-known techniques of photolithography and etching. Next, a plurality of Cu wires 103 are formed on the semiconductor wafer 100 via the passivation film 102 so as to extend within the inner portion of respective semiconductor chip forming regions and to be connected to the element electrodes 101 via the openings 102a. Thereafter, a Ni-plated layer 104 is formed on the Cu wires 103 by electroless plating. Then, a cover coating film 105 is formed so as to cover the Cu wires 103, and then a plurality of openings 105a are formed on the cover coating film 105 in order to expose a plurality of external electrodes 106 that are formed of a portion of the Cu wires 103 and arranged two-dimensionally by well-known techniques of photolithography and etching. Thereafter, a plurality of solder bumps 107 that are connected to the external electrodes 106 via the openings 105a are formed immediately above the external electrodes 106 as external electrode terminals. 
As described above, according to the wafer level CSP that is a conventional semiconductor device, the external electrodes 106 that are connected to the respective element electrodes 101 can be arranged two-dimensionally regardless of the arrangement of the element electrodes 101, so that a compact semiconductor device can be produced, and therefore, equipment such as information communication equipment can also be made small in size. However, in the conventional semiconductor device, there exists a resistance in the wires connecting the element electrodes to the external electrodes (for example, Cu wires) in addition to a resistance in the wires connecting the semiconductor elements to the element electrodes (for example, Al wires). Because of this resistance, signal delay is increased, causing the problem that high-speed transmission of signals between the semiconductor device and external equipment becomes difficult. Therefore, with the foregoing in mind, it is an object of the present invention to provide a semiconductor device that allows high-speed transmission of signals between the semiconductor device and external equipment while compactness is achieved. 
In order to achieve the above object, a semiconductor device of the present invention includes a semiconductor substrate provided with at least one semiconductor element, a first element electrode and a second element electrode formed on the semiconductor substrate and connected electrically to the semiconductor element, an insulating film formed so as to cover the first element electrode and the second element electrode, a first opening formed on the insulating film and exposing at least one portion of the first element electrode, a second opening formed on the insulating film and exposing at least one portion of the second element electrode, a first external electrode formed immediately above the first element electrode and connected to the first element electrode via the first opening, a second external electrode formed on the insulating film and a connecting wire formed on the insulating film and having one end connected to the second element electrode via the second opening and the other end connected to the second external electrode. The semiconductor device of the present invention includes a first external electrode formed immediately above the first element electrode and connected to the first element electrode. Therefore, the first element electrode and the first external electrode are connected without a wire, so that the resistance between the first element electrode and the first external electrode can be reduced and signal delay can be decreased. Thus, high-speed transmission of signals between the semiconductor device and external equipment becomes possible. The semiconductor device of the present invention includes a second external electrode formed on the insulating film on the semiconductor substrate and a connecting wire formed on the insulating film and having one end connected to the second element electrode and the other end connected to the second external electrode. 
Therefore, regardless of the arrangement of the second element electrodes, the second external electrodes electrically connected to the second element electrodes can be arranged two-dimensionally, so that it is possible to provide multiple external electrode terminals in a small area. As a result, it becomes possible to realize a compact semiconductor device that is capable of including multiple pins. Furthermore, according to the semiconductor device of the present invention, the first external electrode, the second external electrode and the connecting wire can be formed easily by patterning a conductive film formed on the semiconductor substrate to integrally form the first external electrode, the second external electrode and the connecting wire. Therefore, manufacturing cost can be reduced. In the semiconductor device of the present invention, the semiconductor substrate may be a semiconductor wafer or a chip obtained by dividing a semiconductor wafer. In the semiconductor device of the present invention, it is preferable that the insulating film is formed of elastic insulating material. According to the semiconductor device as described above, in the case where the semiconductor device is mounted on a motherboard, even if the heating or cooling of the semiconductor device causes stress in the connection between the semiconductor device and the motherboard because of the difference in thermal expansion coefficient between the semiconductor device and the motherboard, the stress is reduced by the insulating film formed of elastic material, that is, the elastic layer. As a result, the possibility that the conductive pattern such as the external electrode or the connecting wire is disconnected is decreased, so that a highly reliable wiring structure can be realized. 
In the semiconductor device of the present invention, it is preferable that each wall surface of the first opening and the second opening, or at least the portions near the upper end and near the lower end of the wall surface have an inclination of less than 90° with respect to the surface of the semiconductor substrate. According to the semiconductor device as described above, the conductive pattern such as the external electrode or the connecting wire never straddles a sharp step, so that the conductive pattern is easily formed and hardly disconnected. In the semiconductor device of the present invention, it is preferable that the semiconductor device further includes a pair of third element electrodes formed on the semiconductor substrate and electrically connected to the semiconductor elements, a pair of third openings formed on the insulating film and exposing at least one portion of each of the pair of third element electrodes and a coil formed on the insulating film and having ends, each of which is connected to a corresponding third element electrode of the pair via a corresponding third opening of the pair. In the semiconductor device as described above, a coil with high L (inductance) value that has been difficult to form by the conventional semiconductor process can be realized by patterning the conductive film formed on the semiconductor substrate to form the coil. Therefore, semiconductor elements for high frequency can also be attained. 
It is preferable that the semiconductor device of the present invention further includes a protective film formed so as to cover the first external electrode, the second external electrode and the connecting wire and having the property of repelling a conductive material, a fourth opening formed on the protective film and exposing at least one portion of the first external electrode, a fifth opening formed on the protective film and exposing at least one portion of the second external electrode, a first external electrode terminal formed immediately above the first external electrode and connected to the first external electrode via the fourth opening and a second external electrode terminal formed immediately above the second external electrode and connected to the second external electrode via the fifth opening. According to the semiconductor device as described above, when mounting the semiconductor device on the motherboard, unfavorable electrical short-circuit is prevented between the first external electrodes, the second external electrodes or the connecting wires and wirings or electrodes of the motherboard, and the semiconductor device can be reliably mounted on the motherboard. In the case where the semiconductor device includes the first external electrode terminals and the second external electrode terminals, it is possible to use metallic balls, conductive bumps or a portion of each of the first external electrodes and the second external electrodes as the first and second external electrode terminals. However, in any case, it is preferable that the junctions of the first external electrodes and the first external electrode terminals are covered with the protective film. 
In the semiconductor device of the present invention, it is preferable to further include a passivation film covering the surface of the semiconductor substrate except the first element electrode and the second element electrode and that the insulating film is formed above the passivation film. According to the semiconductor device as described above, the reliability of the semiconductor device can be improved. In the case where the passivation film is included, it is preferable that the semiconductor device further includes a pair of third element electrodes formed on the semiconductor substrate and electrically connected to the semiconductor elements and a coil formed on the passivation film and having ends, each of which is connected to a corresponding third element electrode of the pair, and that the insulating film covers the coil. According to the semiconductor device as described above, a coil with high L value that has been difficult to form by the conventional semiconductor process can be realized by patterning the conductive film formed on the semiconductor substrate to form the coil, so that the semiconductor elements for high frequency can be attained. 
A method for producing a semiconductor device according to the present invention includes a first step of forming on a semiconductor substrate on which at least one semiconductor element is provided, a first element electrode and a second element electrode electrically connected to the semiconductor element, a second step of forming an insulating film so as to cover the first element electrode and the second element electrode, a third step of forming a first opening for exposing at least one portion of the first element electrode and a second opening for exposing at least one portion of the second element electrode by selectively removing an upper portion of each of the first element electrode and the second element electrode in the insulating film and a fourth step of forming a conductive film on the insulating film so as to fill up the first opening and the second opening and patterning the conductive film, thereby forming a first external electrode connected to the first element electrode via the first opening immediately above the first element electrode, and forming a second external electrode and a connecting wire having one end connected to the second element electrode via the second opening and the other end connected to the second external electrode on the insulating film. According to the method for producing a semiconductor device of the present invention, the first external electrode connected to the first element electrode is formed immediately above the first element electrode. Therefore, the first element electrode and the first external electrode are connected without a wire, so that the resistance between the first element electrode and the first external electrode can be reduced and signal delay can be decreased, so that high-speed transmission of signals between the semiconductor device and external equipment becomes possible. 
According to the method for producing a semiconductor device of the present invention, the second external electrode and the connecting wire having one end connected to the second element electrode and the other end connected to the second external electrode are formed on the insulating film on the semiconductor substrate. Therefore, regardless of the arrangement of the second element electrodes, the second external electrodes electrically connected to the second element electrodes can be arranged two-dimensionally, so that it is possible to arrange multiple external electrode terminals in a small area. As a result, it becomes possible to realize a compact semiconductor device that is capable of including multiple pins. Furthermore, according to the method for producing a semiconductor device of the present invention, the first external electrode, the second external electrode and the connecting wire are formed integrally by patterning a conductive film formed on the semiconductor substrate. Therefore, the first external electrode, the second external electrode and the connecting wire can be formed easily and thus manufacturing cost can be reduced. In the method for producing a semiconductor device of the present invention, it is preferable that the semiconductor substrate is a semiconductor wafer, and the method further includes a step of dividing the semiconductor wafer into chips after the fourth step. According to the method as described above, since the external electrodes, the connecting wires or the like can be formed collectively in respective semiconductor chip forming regions of the semiconductor wafer, manufacturing cost can be greatly reduced. In the method for producing a semiconductor device of the present invention, it is also possible that the semiconductor substrate is a chip obtained by dividing the semiconductor wafer. 
In the method for producing a semiconductor device of the present invention, it is preferable that the insulating film is made of elastic insulating material. According to the method as described above, in the case where the semiconductor device is mounted on a motherboard, even if the heating or cooling of the semiconductor device causes stress in the connection between the semiconductor device and the motherboard because of the difference in thermal expansion coefficient between the semiconductor device and the motherboard, the stress is reduced by the insulating film made of elastic material, that is, the elastic layer. As a result, the possibility that the conductive pattern such as the external electrode or the connecting wire is disconnected is decreased, so that a highly reliable wiring structure can be realized. In the method for producing a semiconductor device of the present invention, it is preferable that the third step includes a step of forming each wall surface of the first opening and the second opening, or at least the portions near the upper end and near the lower end of the wall surface so as to have an inclination of less than 90° with respect to the surface of the semiconductor substrate. According to the method as described above, the conductive pattern such as the external electrode or the connecting wire never straddles a sharp step, so that the conductive pattern is easily formed and hardly disconnected. 
In the method for producing a semiconductor device of the present invention, it is preferable that the first step includes a step of forming a pair of third element electrodes electrically connected to the semiconductor elements on the semiconductor substrate, the third step includes a step of forming a pair of third openings for exposing at least one portion of each of the pair of third element electrodes by selectively removing an upper portion of the pair of third element electrodes in the insulating film, and the fourth step includes a step of forming a coil having ends, each of which is connected to a corresponding third element electrode of the pair via a corresponding third opening of the pair, on the insulating film by patterning the conductive film. In the method as described above, a coil with high L value that has been difficult to form by the conventional semiconductor process can be realized. Therefore, semiconductor elements for high frequency can be attained. In the method for producing a semiconductor device of the present invention, it is preferable that the method includes a fifth step of forming a protective film having a property of repelling a conductive material so as to cover the first external electrode, the second external electrode and the connecting wire and then selectively removing an upper part of each of the first external electrode and the second external electrode in the protective film to form a fourth opening for exposing at least one portion of the first external electrode and a fifth opening for exposing at least one portion of the second external electrode, after the fourth step. 
According to the method as described above, when mounting the semiconductor device on the motherboard, unfavorable electrical short-circuiting is prevented between the first external electrodes, the second external electrodes or the connecting wires and the wirings or electrodes of the motherboard, and the connection can be easily performed between the first external electrodes or the second external electrodes and the wirings or the electrodes of the motherboard with a connecting member such as solder. In the case where the method includes the fifth step, it is preferable that the fifth step includes a step of forming a first external electrode terminal connected to the first external electrode via the fourth opening immediately above the first external electrode and forming a second external electrode terminal connected to the second external electrode via the fifth opening immediately above the second external electrode. According to the method as described above, the semiconductor device can be mounted on the motherboard very easily. In the method for producing a semiconductor device of the present invention, it is preferable that the first step includes a step of forming a pair of third element electrodes electrically connected to the semiconductor element on the semiconductor substrate, that the method includes, between the first step and the second step, a step of forming a passivation film covering the surface of the semiconductor substrate except the first element electrode, the second element electrode and the pair of third element electrodes, and then forming a coil having ends, each of which is connected to a corresponding third element electrode of the pair, on the passivation film, and that the insulating film covers the passivation film and the coil. According to the method as described above, the reliability of the semiconductor device can be improved further.
A coil with high L value that has been difficult to form by the conventional semiconductor process can be realized by patterning the conductive pattern formed on the semiconductor substrate to form the coil, so that the semiconductor elements for high frequency can be attained.
LONDON (Reuters) - British Foreign Secretary Boris Johnson said on Tuesday he saw no reason to cancel Donald Trump’s state visit to Britain after the U.S. president criticized Mayor Sadiq Khan’s response to the London Bridge killings. Prime Minister Theresa May called Trump’s comments “wrong.” Trump has lambasted Khan on Twitter, accusing him of making a “pathetic excuse” for saying Londoners should not be alarmed by the sight of additional police on the streets of the British capital after Saturday’s attack that killed seven people. “The invitation has been issued and accepted and I see no reason to change that, but as far as what Sadiq Khan has said about the reassurances he’s offered the people of London, I think he was entirely right to speak in the way he did,” Johnson said in a BBC radio interview when asked whether Trump’s state visit should be canceled. No date has been set for the visit, which was agreed during May’s visit to Washington in January and seen as a sign of her desire to maintain good ties with Britain’s traditional close ally as Trump began his presidency. The Conservative prime minister has said Khan is doing a good job, echoing public sentiment across London. On Tuesday, May told a political rally in response to a question about Trump’s tweets, “I think Donald Trump was wrong in the things that he has said about Sadiq Khan.” Trump and Khan, the son of Pakistani immigrants and the first Muslim elected as London’s mayor, have been at odds since Khan denounced as “ignorant” Trump’s campaign pledge to impose a temporary ban on Muslims entering the United States. Since taking office on Jan. 20, Trump has ordered temporary travel restrictions on people from several Muslim-majority countries, although the ban is currently held up by federal courts.
Asked on Tuesday about the London visit, White House spokesman Sean Spicer said only that Trump intended to go and that “he appreciates her majesty’s gracious invitation.” Asked on Monday evening if he would like Trump’s visit to be called off, Khan, a member of Britain’s opposition Labour party, said his position remained the same. “I don’t think we should roll out the red carpet to the president of the USA in the circumstances where his policies go against everything we stand for,” Khan told Channel 4 News. ‘TRASH TALK’ Tim Farron, leader of the opposition Liberal Democrats, also has urged May to cancel the visit, saying Trump was insulting Britain’s values “at a time of introspection and mourning.” Former Democratic U.S. presidential candidate Hillary Clinton, defeated by Trump last November, praised Khan’s performance in dealing with the attacks. Speaking at a fundraising event on Monday, she did not name Trump but said it was “not the time to lash out, to incite fear and use trash talk and terror for political gain,” the Washington Examiner reported. Deputy White House spokeswoman Sarah Sanders told reporters on Monday that she did not think it was correct to characterize Trump’s tweets as “picking a fight” with Khan. Asked if Trump was attacking the mayor because he is Muslim, Sanders replied: “Not at all. And I think to suggest something like that is utterly ridiculous.” Trump’s oldest son, Donald Trump Jr., defended his father. “Every time he puts something out there he gets criticized by the media. All day, every day,” Trump Jr. said in an interview with ABC’s “Good Morning America” broadcast on Tuesday. “And guess what, he’s been proven right about it, every time. We keep saying, ‘It’s going to be great’ and ‘Hold fast,’ ‘We’re going to keep calm and carry on.’ Maybe we have to keep calm and actually do something,” he said. 
He was referring to a World War Two-era slogan of resilience, to “keep calm and carry on”, that Britons have echoed following the London attack. British author J.K. Rowling said on Tuesday that if a state visit did go ahead, Trump’s tweets related to the attack should be enlarged and shown wherever he goes. “I’d rather he didn’t come, but if he does, I’d like his vile Tweets juxtaposed against whatever he’s been coaxed to read off an autocue,” Rowling, celebrated for her Harry Potter books and a frequent critic of Trump, wrote on Twitter.
The City of Sherman, Texas

Named after a decorated hero of the Texas Revolution, General Sidney Sherman, the city of Sherman is now home to a population nearing 40,000. Sherman has deep roots and a rich history that leaves history buffs clamoring to learn more about this long-standing gem of North Texas when visiting the Sherman Museum. Designated as the county seat of Grayson County, Sherman was incorporated by law as a Texas town in 1850. Sherman was best known as a main thoroughfare for travelers and locals heading to various cities via the Red River and Butterfield Trails.

Sherman, Texas Is Thriving and Industrious

Today, Sherman is a fast-growing city of budding professionals and entrepreneurs with families. Sherman provides a high quality of life for locals through a range of businesses and companies: Sunny Delight, Kaiser Aluminum, Globitech, Fisher Controls International, Presco Products, Texas Instruments, MEMC, and Tyson Foods, just to name a few. In addition to being a central hub for good-paying jobs, Sherman is also home to excellent schools at every level, including Grayson County Junior College and the prestigious Austin College, the oldest college or university in Texas operating under its original charter.

Sherman, Texas Must-See Attractions

Known as a weekend attraction for travelers in the Dallas metropolis and surrounding areas, Sherman retains an old-town, down-home presence around the city square, yet is rapidly growing beyond its small-town roots with shopping outlets and other amenities geared toward families and visitors. Some of Sherman's most notable attractions for locals, visitors, tourists, and weekenders are listed below. Come visit Sherman, Texas and live the experience!
Sherman Bearcats Football Field of Dreams

One of the most notable attractions, visible even to passersby driving through the heart of Sherman on Highway 75, whether traveling north or south, can be seen on a Friday night when the mighty Sherman Bearcats take to the football field under the big lights at Bearcat Stadium.

Copyright 2012-2015. All rights reserved. ShermanFootball.net. This site is produced and maintained by Brownstone Strategies, and is not officially sanctioned, affiliated with, or supported by the Sherman Football Booster Club, Sherman Independent School District, Sherman ISD, or Sherman High School. Neither Sherman High School nor Sherman ISD is responsible for the content of this web site or the content of links external to this web site.
<?xml version="1.0" encoding="UTF-8"?> <!-- YUI 3 Gallery Component Build File --> <project name="AutoComplete List Group" default="local"> <description>AutoComplete List Group Build File</description> <property file="build.properties" /> <import file="${builddir}/3.x/bootstrap.xml" description="Default Build Properties and Targets" /> </project>
E-cadherin is a WT1 target gene. The WT1 tumor suppressor gene encodes a transcription factor that can activate and repress gene expression. Transcriptional targets relevant for the growth suppression functions of WT1 are poorly understood. We found that mesenchymal NIH 3T3 fibroblasts stably expressing WT1 exhibit growth suppression and features of epithelial differentiation including up-regulation of E-cadherin mRNA. Acute expression of WT1 in NIH 3T3 fibroblasts after retroviral infection induced murine E-cadherin expression. In transient transfection experiments, the human and murine E-cadherin promoters were activated by co-expression of WT1. E-cadherin promoter activity was increased in cells overexpressing WT1 and was blocked by a dominant negative form of WT1. WT1 activated the murine E-cadherin promoter through a conserved GC-rich sequence similar to an EGR-1 binding site as well as through a CAAT box sequence. WT1 produced in vitro or derived from nuclear extracts bound to the WT1-response element within the murine E-cadherin promoter, but not the CAAT box. E-cadherin, a gene important in epithelial differentiation and neoplastic transformation, represents a downstream target gene that links the roles of WT1 in differentiation and growth control.
The Daily: SEC Ramps Up Enforcement, 60% of Smart Contracts Are Dormant

In Saturday’s edition of The Daily, we take a look at the SEC’s annual report, which reveals where cryptocurrencies sit on their radar. We also consider the fate of dormant smart contracts, 60% of which have never seen use according to a new report. All that plus the reaction to Coinbase’s latest token listing, which hasn’t pleased everyone.

SEC Zeroes in on Cryptocurrency Scams

The U.S. Securities and Exchange Commission (SEC) has released its annual report, and the 45-page document has plenty to say about cryptocurrency. Initial coin offerings (ICOs) are referenced more than 30 times in the report, which notes: “In the past year, the Division has opened dozens of investigations involving ICOs and digital assets, many of which were ongoing at the close of FY [financial year] 2018.” The report also explains that the SEC isn’t just looking at ICOs, but at other potential scams being perpetrated within the cryptocurrency space. It finishes: The Division also has recommended that the Commission use its trading suspension authority to prevent investors from being harmed by possible scams … the Commission suspended trading in the stock of over a dozen publicly traded issuers because of questions concerning, among other things, the accuracy of assertions regarding their investments in ICOs and operation of cryptocurrency platforms. The SEC’s report surfaced just as it emerged that another celebrity is facing a lawsuit over their promotion of a dubious ICO. Clifford Joseph Harris Jr., better known as T.I., is being charged over his involvement with “flik token,” which investors were promised would increase by 25,000 percent. It didn’t.
60% of Smart Contracts Have Never Been Used

Researchers at Northeastern University and the University of Maryland have pored over the code governing Ethereum smart contracts and emerged with some interesting findings. Of the 1.2 million smart contracts they examined, the vast majority were clones or extremely similar to one another. As a result, they found there to be fewer than 6,000 unique smart contract “clusters.” The danger with such widespread reuse of code, as the researchers pointed out, is that vulnerabilities are likely to spread far and wide throughout the ecosystem. There is one saving grace, however, that might limit the fallout from a widespread bug: around 60% of all Ethereum smart contracts have never been interacted with. These “ghost contracts” remain dormant, deprived of users willing to spend the gas required to trigger them.

New Coinbase Listing – Brave or Foolish?

On Nov. 2, Coinbase Pro announced that the latest token to be added to its exchange would be BAT, the advertising rewards-based currency used within the Brave browser. While the news was hailed in some quarters, not least among BAT bagholders, not everyone was impressed. “Welp, it’s official,” tweeted Dan Elitzer. “Coinbase’s lawyers are comfortable with listing digital Chuck-E-Cheese tokens. The Howey Test is clearly out the window.” Jackson Palmer, meanwhile, picked holes in Brave’s integration of BAT, and suggested the browser would operate more effectively without the token:
Looking ahead to the 2020 campaign, Bill Maher made a call on his Friday night HBO broadcast for former Sen. Al Franken to step back into the political arena and run for the Democratic presidential nomination. In front of an eclectic and stellar panel of guests, including former Obama adviser and current CNN host David Axelrod, New York Times (and former Salon) columnist Michelle Goldberg, former Rep. Charlie Dent, R-Pa., and superstar comedian Jim Carrey, Maher’s traditional closing soliloquy on "Real Time" called on Democrats to find the one thing that is Trump’s Kryptonite. Maher then offered his theory on what gets under Trump’s skin the most: “being made fun of.” The host then circled back to the infamous 2011 White House correspondents’ dinner where President Barack Obama roasted Trump for more than five minutes in front of a gala crowd while the future president “seethed,” in Maher’s words. There has been much speculation in hindsight about whether that public humiliation drove Trump to run for president in 2016, after numerous false alarms. “We need someone who can shred Trump like a stand-up [comic] that takes down a heckler. Trump is a heckler, and to fight him, we need a comedian,” Maher asserted. After teasing the audience by appearing to suggest he might be that comedian, Maher switched gears, pronouncing that it would be great if former comedian-turned-senator Al Franken “got back into the game,” to roaring approval from his live audience. Maher directly addressed the sexual harassment allegations that led Franken to resign from the Senate in the fall of 2017. He argued that Americans overreact to controversy, citing a curious range of events from 9/11 to bird flu to the Janet Jackson wardrobe malfunction at the Super Bowl halftime show, while suggesting that the accusations against Franken ranked among the worst examples of this tendency.
Maher reminded his audience of the two events that damaged Franken the most: a photograph taken during a USO tour where he made a gesture of grabbing his co-star’s breasts while she slept (for which Franken apologized) and a claim by radio host and frequent Sean Hannity guest Leeann Tweeden that Franken had forcibly kissed and groped her, which Franken denied. Although the senator soon resigned, Maher pointed out that he never said “I did it.” Maher said he believed Franken’s denial, adding that while most women’s allegations during the #MeToo movement have been truthful, women had not “completely lost their ability to lie in 2017.” At that point during Maher’s monologue there was a brief and tense exchange with Michelle Goldberg. She interjected from off-camera to say that the allegations against Franken also included “a lot of ass-grabbing.” Maher retorted to Goldberg, perhaps in infelicitous language, that “it isn’t quite your place,” saying that she'd had her time on the show and this was his. He continued by acknowledging the allegations against Franken made by seven other women, including one in which Franken allegedly asked a woman to join him in the bathroom. Speaking of his longtime friend, Maher finally said, “That is not Al Franken.” A visibly emotional Maher concluded by stating, “We can have #MeToo and Al Franken – they are not mutually exclusive. It is time to get Al Franken off the bench to do what he does better than any other Democrat – taking down right-wing blowhards. I want to see Al Franken debate Donald Trump. And, by the way, so do you.”
A multi-scale model of the coronary circulation applied to investigate transmural myocardial flow. Distribution of blood flow in myocardium is a key determinant of the localization and severity of myocardial ischemia under impaired coronary perfusion conditions. Previous studies have extensively demonstrated the transmural difference of ischemic vulnerability. However, it remains incompletely understood how transmural myocardial flow is regulated under in vivo conditions. In the present study, a computational model of the coronary circulation was developed to quantitatively evaluate the sensitivity of transmural flow distribution to various cardiovascular and hemodynamic factors. The model was further incorporated with the flow autoregulatory mechanism to simulate the regulation of myocardial flow in the presence of coronary artery stenosis. Numerical tests demonstrated that heart rate (HR), intramyocardial tissue pressure (Pim ), and coronary perfusion pressure (Pper ) were the major determinant factors for transmural flow distribution (evaluated by the subendocardial-to-subepicardial (endo/epi) flow ratio) and that the flow autoregulatory mechanism played an important compensatory role in preserving subendocardial perfusion against reduced Pper . Further analysis for HR variation-induced hemodynamic changes revealed that the rise in endo/epi flow ratio accompanying HR decrease was attributable not only to the prolongation of cardiac diastole relative to systole, but more predominantly to the fall in Pim . Moreover, it was found that Pim and Pper interfered with each other with respect to their influence on transmural flow distribution. These results demonstrate the interactive effects of various cardiovascular and hemodynamic factors on transmural myocardial flow, highlighting the importance of taking into account patient-specific conditions in the explanation of clinical observations.
Alterations in cell proliferation related gene expressions in gastric cancer. Gastric cancer remains the fourth most prevalent cancer and the second leading cause of cancer-related death in the world. The predominant form of gastric cancer is adenocarcinoma, which originates from glandular epithelium of the gastric mucosa. The major risk factors for gastric cancer include diet, individual genetic variation, and, most importantly, infection with Helicobacter pylori (H. pylori). Certain strains of H. pylori assisted by some of its virulence factors seem to play a critical role in gastric cancer development. Several of these H. pylori virulence factors, which influence cellular proliferation signaling, have been identified. In addition, changes in the expression of several cell proliferation regulating genes accompany or cause the progression of gastric cancer. These changes include modifications of cell cycle regulators, oncogene activation, tumor suppressor inactivation, and miRNA profile alterations. Many of these changes result from H. pylori infection, although their impact on the cellular proliferation system underlying gastric cancer development has not yet been fully elucidated. We review certain features of gastric cancer, the role of H. pylori infection in its etiology and pathogenesis, and gene expression changes during gastric carcinogenesis.
The Bodum So Long line pairs the convenience of drinking from a tumbler with the benefits of a wine glass. You are still able to swirl the wine to release and enjoy the bouquet as well as control the flow of wine onto the palate. Made of borosilicate, the glass will not cloud over years of…
Q: What's wrong with my usage of grep? I'm executing the following command: echo "ze2s hihi" | tr ' ' '\n' | grep 'h*' but instead of getting hihi in the output I'm getting this: ze2s hihi What's wrong? A: What you want is: echo "ze2s hihi" | tr ' ' '\n' | grep 'h.*' With "h*" you are asking to match any number of h's in a sequence, including 0 h's, which ze2s matches. Or maybe you just want to match anything which contains an h: echo "ze2s hihi" | tr ' ' '\n' | grep 'h'
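The difference can be checked directly. A minimal sketch, assuming a POSIX-compatible grep:

```shell
# 'h*' means "zero or more h's", and the empty string satisfies that,
# so every line matches. Requiring a literal 'h' filters as intended:
printf 'ze2s\nhihi\n' | grep 'h*'   # prints both lines
printf 'ze2s\nhihi\n' | grep 'h'    # prints only: hihi
```

The same reasoning explains why `h.*` also works in the accepted answer: it requires one literal `h` before the `.*`.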
Syringes prefilled with a liquid drug (generally referred to as “prefilled syringes”) are used as medical syringes. The prefilled syringes are advantageous because of their handling ease without the need for transferring the liquid drug into the syringe. Further, transfer of a wrong liquid drug into the syringe is advantageously prevented. Therefore, the prefilled syringes are increasingly used in recent years. Unlike conventional syringes into which a liquid drug sucked up from a vial or other container is transferred immediately before use, the prefilled syringes are each required to serve as a container which is kept in contact with the liquid drug for a long period of time. Such a syringe typically includes a syringe body, a plunger reciprocally movable in the syringe body, and a gasket attached to a distal end of the plunger. The gasket to be used for the syringe is generally made of a crosslinked rubber. It is known that the crosslinked rubber contains various crosslinking components, and these crosslinking components and their thermally decomposed products are liable to migrate into the liquid drug when the liquid drug is kept in contact with the syringe. It is also known that these migrating components adversely influence the efficacy and the stability of some liquid drugs. The gasket is required to be smoothly slidable when the syringe is used. In general, the gasket made of the crosslinked rubber is poor in slidability. To cope with this, it is a general practice to apply silicone oil to an inner surface of the syringe body. However, it is known that the silicone oil adversely influences the efficacy and the stability of some liquid drugs. From this viewpoint, a product of so-called “laminated gasket” including a rubber gasket body having a surface laminated with a film having excellent slidability is often used for the medical syringe.
Since the surface of the rubber gasket body of the laminated gasket is covered with the highly slidable film, it is possible to prevent the components of the crosslinked rubber from migrating into the liquid drug, and to ensure the slidability even without the use of the silicone oil.
1969927.83s in days? 22.800090625 What is 19/4 of a kilometer in meters? 4750 Convert 96.59593um to millimeters. 0.09659593 How many litres are there in 9.223553 millilitres? 0.009223553 How many seconds are there in 1/54 of a week? 11200 Convert 181.9176 kilometers to centimeters. 18191760 How many microseconds are there in 0.2941723 hours? 1059020280 How many litres are there in 69.84132ml? 0.06984132 How many nanograms are there in 51/5 of a microgram? 10200 Convert 535744.1km to nanometers. 535744100000000000 How many minutes are there in 42/5 of a day? 12096 How many millilitres are there in 3/10 of a litre? 300 How many kilograms are there in 52.48708g? 0.05248708 What is 27/5 of a tonne in kilograms? 5400 Convert 141397.38us to minutes. 0.002356623 What is 0.3627771ml in litres? 0.0003627771 What is 0.8549008 weeks in seconds? 517044.00384 What is 61864.8mg in kilograms? 0.0618648 What is five quarters of a litre in millilitres? 1250 How many years are there in twenty-seven halves of a millennium? 13500 How many millimeters are there in 158012.8m? 158012800 Convert 7.192447 tonnes to milligrams. 7192447000 What is three tenths of a meter in centimeters? 30 How many litres are there in 8823.385ml? 8.823385 How many grams are there in one twentieth of a kilogram? 50 How many micrometers are there in 1945.883nm? 1.945883 What is 3/40 of a second in milliseconds? 75 What is 3/8 of a centimeter in micrometers? 3750 What is 3/25 of a week in seconds? 72576 What is twenty-seven halves of a millennium in years? 13500 How many milligrams are there in 5/8 of a gram? 625 Convert 0.0169228ml to litres. 0.0000169228 What is one twentieth of a litre in millilitres? 50 What is nine tenths of a millennium in months? 10800 What is 0.0846805 centuries in millennia? 0.00846805 How many kilograms are there in fifteen quarters of a tonne? 3750 How many micrograms are there in thirty-three fifths of a milligram? 6600 How many milligrams are there in 3/50 of a kilogram? 
60000 How many millilitres are there in eighteen fifths of a litre? 3600 How many meters are there in eleven halves of a kilometer? 5500 Convert 0.8580401 meters to kilometers. 0.0008580401 How many grams are there in 15/4 of a kilogram? 3750 How many milliseconds are there in 5/6 of a minute? 50000 What is 1/10 of a millimeter in nanometers? 100000 How many millennia are there in 1.900023 months? 0.00015833525 What is five eighths of a century in months? 750 What is four fifteenths of a week in minutes? 2688 Convert 88.41355 milligrams to tonnes. 0.00000008841355 Convert 167.6007s to minutes. 2.793345 What is 2/15 of a millennium in months? 1600 How many minutes are there in 1996.3809ms? 0.033273015 Convert 55.17663 millennia to centuries. 551.7663 How many milligrams are there in one tenth of a kilogram? 100000 Convert 334.2017ug to tonnes. 0.0000000003342017 How many centuries are there in 588506.8 decades? 58850.68 What is fourty-four fifths of a hour in seconds? 31680 How many decades are there in sixty-nine halves of a millennium? 3450 How many years are there in 1/10 of a decade? 1 How many millilitres are there in 924155.6l? 924155600 What is 29/2 of a hour in seconds? 52200 What is three fifths of a millennium in years? 600 What is 5/4 of a tonne in kilograms? 1250 What is one twentieth of a millimeter in nanometers? 50000 What is 9538.384nm in millimeters? 0.009538384 Convert 8895.555l to millilitres. 8895555 What is fifty-three fifths of a microgram in nanograms? 10600 Convert 4.3164625 days to weeks. 0.6166375 How many milliseconds are there in 2934.984 microseconds? 2.934984 How many millennia are there in 395713.3 years? 395.7133 What is 9/10 of a litre in millilitres? 900 Convert 7.924813 centuries to months. 9509.7756 Convert 672.0792 micrograms to nanograms. 672079.2 How many millilitres are there in fourty-seven quarters of a litre? 11750 How many centimeters are there in seventeen quarters of a meter? 425 How many grams are there in 6.806139ug? 
0.000006806139 How many kilograms are there in 3/8 of a tonne? 375 How many hours are there in 21/2 of a week? 1764 What is nine tenths of a litre in millilitres? 900 What is 14/9 of a hour in seconds? 5600 What is 26/5 of a century in months? 6240 Convert 0.4616955 litres to millilitres. 461.6955 What is 37463.61ml in litres? 37.46361 How many millilitres are there in 0.7292164 litres? 729.2164 What is 7/8 of a litre in millilitres? 875 Convert 3.882415nm to meters. 0.000000003882415 What is 2912.07l in millilitres? 2912070 What is 57139.84 litres in millilitres? 57139840 How many centimeters are there in 3/10 of a meter? 30 How many months are there in 74/3 of a century? 29600 Convert 46.20041 centuries to months. 55440.492 What is 11/4 of a week in hours? 462 Convert 9.629073 millilitres to litres. 0.009629073 Convert 0.509571 tonnes to micrograms. 509571000000 What is 51.15507 hours in nanoseconds? 184158252000000 What is 765299.8 centimeters in nanometers? 7652998000000 Convert 294153.9 litres to millilitres. 294153900 How many minutes are there in 18/7 of a week? 25920 What is fifteen quarters of a litre in millilitres? 3750 What is 8/15 of a decade in months? 64 Convert 20894.142 months to decades. 174.11785 How many months are there in 55/2 of a year? 330 How many millilitres are there in 15/4 of a litre? 3750 What is one quarter of a meter in millimeters? 250 What is 379952.2 centuries in years? 37995220 How many nanometers are there in 73/5 of a micrometer? 14600 How many decades are there in five quarters of a millennium? 125 How many decades are there in 97.30433 millennia? 9730.433 How many nanometers are there in 269.5552 micrometers? 269555.2 Convert 948.0933 millilitres to litres. 0.9480933 Convert 2.077956 millilitres to litres. 0.002077956 What is 27/4 of a century in years? 675 How many nanometers are there in eighteen fifths of a micrometer? 3600 What is 4852.62l in millilitres? 4852620 What is 0.9519676 micrograms in milligrams? 
0.0009519676 What is five quarters of a century in years? 125 What is seven halves of a litre in millilitres? 3500 How many weeks are there in 58.3836876 minutes? 0.0057920325 What is one twentieth of a litre in millilitres? 50 How many litres are there in 0.1663239ml? 0.0001663239 What is thirty-seven fifths of a millimeter in micrometers? 7400 How many seconds are there in 7/2 of a hour? 12600 How many nanoseconds are there in 37/5 of a microsecond? 7400 How many nanoseconds are there in 11/4 of a microsecond? 2750 What is 616986.3g in kilograms? 616.9863 What is 98.19923us in nanoseconds? 98199.23 Convert 43.88568 centuries to millennia. 4.388568 How many centimeters are there in 9/10 of a kilometer? 90000 Convert 187935.3ml to litres. 187.9353 Convert 0.7160817 grams to nanograms. 716081700 How many micrograms are there in 22995.35 tonnes? 22995350000000000 How many seconds are there in thirteen fifths of a minute? 156 What is 724.9415 milliseconds in seconds? 0.7249415 What is 31318.18 decades in months? 3758181.6 What is 92852.06ng in micrograms? 92.85206 How many millimeters are there in 5359.052 meters? 5359052 How many millilitres are there in 7/2 of a litre? 3500 How many milliseconds are there in 8/25 of a minute? 19200 Convert 46.63904 weeks to nanoseconds. 28207291392000000 Convert 48202.67 litres to millilitres. 48202670 How many centuries are there in 434.4229 years? 4.344229 How many grams are there in 43100.19mg? 43.10019 What is 1/4 of a microsecond in nanoseconds? 250 Convert 55564.93 nanograms to grams. 0.00005556493 What is 3/32 of a millimeter in nanometers? 93750 Convert 9.20985 kilometers to centimeters. 920985 What is 1/20 of a millennium in years? 50 How many years are there in eighteen fifths of a decade? 36 How many months are there in three eighths of a millennium? 4500 What is 7/6 of a millennium in months? 14000 How many millilitres are there in twenty-five quarters of a litre? 
6250 How many millilitres are there in fifteen quarters of a litre? 3750 How many seconds are there in two ninths of an hour? 800 Convert 49.10355l to millilitres. 49103.55 Convert 0.885684mm to kilometers. 0.000000885684 What is 24.807042 minutes in days? 0.0172271125 How many mi
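Every item above is a single multiplicative unit conversion. A minimal Python sketch of the pattern (the factor table and function name are mine, purely illustrative, not part of the source data):

```python
# Single-factor unit conversions, mirroring the Q&A items above.
# The table and names are illustrative, not from the source.

FACTORS = {
    ("tonne", "kilogram"): 1000,
    ("week", "hour"): 7 * 24,
    ("hour", "second"): 3600,
    ("litre", "millilitre"): 1000,
    ("century", "month"): 100 * 12,
}

def convert(value, src, dst):
    """Convert `value` from unit `src` to unit `dst` via the factor table."""
    return value * FACTORS[(src, dst)]

# "How many kilograms are there in 3/8 of a tonne?" -> 375.0
print(convert(3 / 8, "tonne", "kilogram"))
# "How many hours are there in 21/2 of a week?" -> 1764.0
print(convert(21 / 2, "week", "hour"))
```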
This invention relates to signal coupling apparatus and more particularly to antenna signal coupling apparatus which is shielded to confine a predetermined signal and to exclude all other signals. In general, a complete radio system includes apparatus for transmitting an output signal in the form of electromagnetic energy, and receiving apparatus for capturing a portion of the transmitted electromagnetic energy and processing the energy to reproduce the original signal. Structural members found in both the transmitter and receiver of the system are a transmitting antenna for emitting electromagnetic energy, and a receiving antenna that is suitably matched to the characteristics of the radiated electromagnetic energy to capture a usable portion thereof. An inevitable requirement of the radio system is eventual repair and adjustment. Moreover, the system requires periodic tests to determine if its performance ratings are achieved. The significance of maintenance and performance tests is of considerable importance in applications where the radio system is used as a navigation aid. It is even more important to be aware of the operating characteristics of the radio system if it is used primarily in emergency applications. In this regard, the transmitter may be an Emergency Locator Transmitter (ELT) of a type used aboard aircraft. Since the ELT is relied upon to transmit a distress signal for a downed aircraft, it is imperative that the quantitative and qualitative characteristics of the signal be known. Similarly, in the case of radio receiving apparatus used in search and rescue operations, the receiving characteristics of the equipment must be known if search patterns are to be conducted effectively. Known VHF receivers in search and rescue aircraft, operating on 121.5 MHz, can detect and home in on an ELT at close to line-of-sight limits. In view of the normal operational altitudes of such aircraft, these limits can be well in excess of 100 nautical miles.
It is apparent, therefore, that a most important consideration in maintaining optimum conditions in search and rescue operations resides in providing that all components of the VHF homing system are functioning properly. In particular, it must be ascertained that the receiver meets its sensitivity specification, that the amplifier, detector, indicator and phasing network that comprise a switched-cardioid homing system are performing as they should, and that no faults exist in the antenna elements and associated feedlines. During periodic maintenance and performance tests, the equipment is removed from the aircraft and is bench-tested in a workshop environment. Such testing is very time consuming since it involves removal and reinstallation of the equipment and furthermore does not cover all components of the system and their interfacing elements. Accordingly, a serious flaw occurs in the bench-testing procedure, i.e., that while the tested components may work satisfactorily in the workshop, there is no guarantee that the system will perform according to rated specifications after reinstallation. Moreover, it is possible that malfunctions may even be introduced after the testing procedure because of a faulty reinstallation. A search crew may therefore engage in a mission under the false assumption that their electronic gear capability has not been reduced or destroyed by defective equipment. Since the object of search and rescue operations is to save human lives, which object may be defeated by the known deficiencies in pre-flight testing of radio gear, there is an obvious requirement for a simple pre-flight diagnostic procedure and apparatus to rapidly check the overall aircraft homing system in situ while the aircraft is in the hangar or on the tarmac. If a fault is found, repairs or replacement can be made immediately and the equipment immediately retested, or an alternate aircraft can be put into service.
Moreover, since in search and rescue time is frequently of the essence, if only moderate degradation of system performance is found, it is possible that the aircraft would still be flown but with modified search tactics, such as reduced track spacing, in order to compensate for the reduced capabilities of the homing system.
History The Camellia Symphony–A Brief History What is now the Camellia Symphony had a humble beginning in the fall of 1961 when Dick Surryhne (leader of the Sacramento Banjo Band for many years) began gathering a group of orchestral musicians to meet and play classical music “just for the fun of it.” Mel Wesleder led the group in these early days, and Zygmunt Darzell was the concertmaster. Dick served as General Manager. They played every Tuesday evening at Encina High School. On several occasions odd combinations of instruments showed up. Once 17 trombonists and 2 violinists turned out. The group’s first public appearance was on May 31, 1962 at the San Juan School District’s Adult Education commencement exercises. They called themselves The Potluck Symphony. Later that summer they played at the State Fair. On September 1, 1962 the orchestra set up a formal organization with a Board of Directors. The members were Ken Trigger, Fay Swan, George Paras, Rev. L. T. Morse, and Dan Backman. The board appointed Zygmunt Darzell as the Musical Director. Maestro Darzell conducted the orchestra through the 1968-1969 season. Concerts were generally held in the San Juan High School Auditorium. There, on January 4, 1963, the orchestra first used the name North Area Community Symphony. The 1963-1964 season offered a subscription price of $5.00 for the three concerts and $1.00 for students. The Orchestra grew from 27 musicians to 75 in less than a year. In August of 1963 a North Area Symphony Guild was formed. The Guild’s purpose was to advance the education of the community in symphonic music through encouragement and assistance to the Symphony. Its many activities to stimulate ticket sales included wine tasting, gourmet food tasting, ”Tea With Lemon” teas, “Olivera Street North” programs, variety shows, and much more. The orchestra became entirely self-supporting. Its only other income came from actual ticket sales.
By February 1, 1968 the orchestra had changed its name to Camellia Symphony, and its concert held that night in Memorial Auditorium was considered part of the Camellia Festival held annually by the City of Sacramento, which is still known as the Camellia City. Other activities of the annual festival included a parade, a floral exhibit with judging and prizes for Camellia plants and floral arrangements, a folk dance festival, and a formal ball to introduce the year’s local Debutantes. Another Camellia Symphony concert at Memorial Auditorium featured the famous Metropolitan Opera tenor Jan Peerce as soloist. Over the years CSO has changed to reflect the personalities and abilities of its conductors. Each conductor has contributed to the orchestra’s development. Walter Kerfoot had been a guest conductor in 1969 and was appointed Music Director in 1970. The season was expanded to four concerts and these were held on Tuesday evenings at Sacramento City College. Season Tickets cost $5.00 for adults, $2.50 for students. Walter also added the free “Pop Concerts” on the lawn at American River College, and in 1974 began the Mothers Day Concerts at the Sunrise Mall, which continued for over two decades. He engaged local soloists and invited many of the community’s choral groups from large choirs to barbershop quartets, and also a local jazz band, to participate in the orchestra’s concerts. The orchestra was rehearsing in the Band room at American River College, and some concerts were performed there. The Camellia Symphony’s first Young Artist Competition for local area musicians had been held in 1966, but had not been held again until 1971. From that time through 1986 each year’s winner was featured as a soloist in one of the main concerts. Dr. Daniel Kingman became the Conductor of the Camellia Symphony in 1979. He made it his mission to program seldom performed American music as well as recently composed music, including some of his own.
Under his direction the Camellia Symphony won its first ASCAP award. Composers Norman Dello Joio and William Dawson made cross-country trips to be present at the concerts programming their works. Dan also continued the tradition of the previous conductors in making use of local talent. A commercial recording of Dan’s opera “The Hills of Mexico,” which was given its world premiere by the orchestra, won the prestigious “INDIE” award for the best classical release of 1986. The Camellia Symphony also had produced fully staged operas, the most noteworthy of which was the production of Moussorgsky’s “The Fair At Sorochinsk” in a version never before heard outside of Russia. The performance received notice in Opera News, the leading opera journal. The 1987-1988 season found the orchestra rehearsing and playing in the auditorium of Hiram Johnson High School. This season, Camellia’s Silver Anniversary, coincided with the 200th anniversary of the United States Constitution. A concert planned for February 27, 1988 coincided with another unique event in Sacramento – the only Northern California showing of the exhibition “The Harlem Renaissance: Art of Black America.” The Camellia organization sought and received a grant to commission and present Duke Ellington’s complete “Black, Brown and Beige” as arranged for jazz band and orchestra. Randall Keith Horton, who worked closely with Ellington himself and the Ellington Band during the last years of Duke Ellington’s life, was the arranger. The project had the enthusiastic approval of Mercer Ellington, Duke Ellington’s son. In Mercer Ellington’s view it represented an interpretation of the work which Ellington himself envisioned but which was never realized during his lifetime. The desirability of such a version of the complete work, not performed in its entirety since the 1940s, was supported by letters from the eminent jazz writers Gunther Schuller and Gary Giddins.
The Camellia Symphony thus had the opportunity to perform the first complete performance of “Black, Brown and Beige” in almost half a century, and the world premiere of an expanded version for symphony orchestra and 16-piece jazz band. Three times Dan arranged concert exchanges with conductors of other Community Symphonies, one of whom was also a composer, and one of his compositions was included in one of our concerts. On opening night of the 1990 – 1991 season Nan Washburn made her debut as the Camellia Symphony Orchestra’s new Music Director and Conductor. On the flyer promoting the 28th Season we found out that Ms. Washburn was already the holder of seven ASCAP awards for adventuresome programs and that she was quickly establishing herself as one of the most imaginative and dynamic conductors in Northern California. She, too, had been instrumental in performing contemporary American music, but critics had hailed her as bringing “the joy of discovery” in her presentation of often forgotten musical treasures. Nan was also noted for her sensitive interpretation of the works of women composers. She had received the New York Women Composers Award for Distinguished Service to Contemporary Music and she led the Camellia Symphony to five more ASCAP Awards. By the end of her term Camellia was performing at Westminster Presbyterian Church. The 1996 – 1997 season featured guest conductors who were finalists in our search for a new conductor. The winning finalist was Eugene Castillo. Under his direction the orchestra continued its programming of adventuresome music. During Eugene’s tenure the orchestra received five ASCAP Awards. An outstanding event was a concert celebrating California’s Sesquicentennial, and The Millennium. A project proposing a work for orchestra by a Chinese composer and including a group playing ancient Chinese instruments was submitted and received funding. The Chinese Community and the Coalition of History Associations of the county were highly supportive.
Camellia Symphony Association received the Arts and Business Council Award for an Arts Organization with a budget under $125,000 at the beginning of the 2001 – 2002 season. At the end of the 2003 – 2004 season Eugene left Sacramento to begin his duties as the Director of the Philippine Philharmonic Orchestra. During the 2003 – 2004 season, a new conductor search was held, and Dr. Allan Pollack was selected as Camellia Symphony’s Music Director and Conductor. The concert venue moved to the historic Veterans Memorial Auditorium in downtown Sacramento where the audience doubled in size over the years. Most recently CSO moved to a new intimate and acoustically sound venue, The Center at Twenty-three Hundred. Under Maestro Pollack’s leadership CSO has developed into one of the finest orchestras in the region with programming that includes four season concerts and four free family concerts as well as ongoing collaborations with Camerata California, The Strauss Festival, St John’s Lutheran Church and many others. CSO continues to give back to its community and provide a musical forum for musicians who play for the sheer love of the music. For our 50th Anniversary Season the Camellia Symphony begins a new era of classical music with the hiring of a new music director and conductor, Dr. Christian Baldini and the start of a new performance venue, Sacramento City College: Performing Arts Center. We’d love for you to join us for the next 50 years—they should be amazing. 1997 – 2004 Eugene Castillo, Conductor, Five more ASCAP Awards for “Adventuresome Programming,” some of it in celebration of California’s Sesquicentennial, The Millennium, and our 40th Anniversary. Camellia Symphony Association received the Arts and Business Council Award for an Arts Organization with a budget under $125,000 in the beginning of the 2001 – 2002 season. 2004 – 2012 Dr.
Allan Pollack took the baton in our new venue, The Veterans Memorial Auditorium, and continued the pre-concert lectures, which now follow a short performance by the Camellia Juniors and a silent auction that early-arriving audience members can attend. 2012 – Dr. Christian Baldini takes the baton and begins a new era of classical music for the Camellia Symphony Orchestra with a new performance venue at the Sacramento City College: Performing Arts Center.
This invention relates generally to thermoelectric coolers and temperature sensors and more particularly concerns a circuit for a thermoelectric cooler and a temperature sensor which is able to monitor and provide accurate temperature control of the thermoelectric cooler. A single beam laser diode assembly has a single diode and usually, in a scanning system, the diode is driven by a train of image pixel information. The pixel information is used to drive the diode and therefore stimulate laser flux emission where there is a white pixel in a write white system. In a write white system, a laser is turned on to create white space on a page. Intensity of the light beam is directly proportional to the output power of the laser. In order to keep the output power of the diode constant, the temperature of the diode should be kept at a constant level. However, due to the structure of the laser diode assembly, as the pixel information changes, which causes the diode to turn on and off, the temperature of the diode fluctuates, which in turn causes the output power of the diode and the intensity of the light beam to fluctuate. In a printing system, fluctuation in the intensity of light beams causes fluctuation in the exposure of a printed pixel. A multiple beam diode assembly has at least two diodes in close proximity on a common substrate. Each diode is driven by a separate train of image pixel information. Again, as the pixel information changes, the temperature of each diode fluctuates. However, in a multiple diode system, the changing temperature of a diode also causes a temperature fluctuation in adjacent diodes. The temperature fluctuations of the adjacent diodes cause the output power and the intensity of the light beams in those adjacent diodes also to fluctuate. A tri-level system may use one or more diodes with at least one diode operating at full on, full off, and partially on.
One example of an application using a single diode tri-level system is the printing of black and white documents with a highlight color. Tri-level systems suffer from the same heating effects both in the full on and the partially on modes of the laser. Accordingly, it is the primary aim of the invention to provide a method for quickly compensating for a variety of thermally induced effects. Further advantages of the invention will become apparent as the following description proceeds.
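The compensation the patent aims at is, at bottom, closed-loop temperature control: measure the diode temperature, compare it with a setpoint, and adjust the thermoelectric-cooler drive accordingly. The sketch below is only an illustrative software PI loop with an invented first-order thermal model and invented gains; the invention itself is a hardware circuit, and none of these names or numbers come from the source.

```python
# Illustrative PI control loop for a thermoelectric cooler (TEC).
# The thermal model, gains, and all names are invented for illustration;
# the patent describes a hardware circuit, not this software loop.

def simulate_tec(setpoint=25.0, ambient=40.0, steps=200,
                 kp=0.8, ki=0.05, dt=0.1):
    temp = ambient          # diode starts at ambient temperature
    integral = 0.0
    for _ in range(steps):
        error = temp - setpoint
        integral += error * dt
        drive = kp * error + ki * integral   # TEC drive (arbitrary units)
        # crude thermal model: the TEC removes heat in proportion to its
        # drive, while the environment leaks heat back toward ambient
        temp += (-drive + 0.1 * (ambient - temp)) * dt
    return temp

final_temp = simulate_tec()   # settles close to the 25.0 degree setpoint
```

The proportional term reacts to the instantaneous error; the integral term slowly cancels the steady heat leak from the environment, which a purely proportional controller would leave as a constant offset.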
/* * Copyright (C) 2015 Apple Inc. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS'' * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF * THE POSSIBILITY OF SUCH DAMAGE. 
*/

#include "config.h"

#if WK_HAVE_C_SPI

#include "PlatformUtilities.h"
#include <wtf/HashMap.h>

namespace TestWebKitAPI {

TEST(WebKit2, WKRetainPtr)
{
    WKRetainPtr<WKStringRef> string1 = adoptWK(WKStringCreateWithUTF8CString("a"));
    WKRetainPtr<WKStringRef> string2 = adoptWK(WKStringCreateWithUTF8CString("a"));
    WKRetainPtr<WKStringRef> string3 = adoptWK(WKStringCreateWithUTF8CString("a"));
    WKRetainPtr<WKStringRef> string4 = adoptWK(WKStringCreateWithUTF8CString("a"));

    HashMap<WKRetainPtr<WKStringRef>, int> map;
    map.set(string2, 2);
    map.set(string1, 1);

    // WKRetainPtr keys hash by pointer identity, so strings with equal
    // content but different underlying objects are distinct keys.
    EXPECT_TRUE(map.contains(string1));
    EXPECT_TRUE(map.contains(string2));
    EXPECT_FALSE(map.contains(string3));
    EXPECT_FALSE(map.contains(string4));

    EXPECT_EQ(1, map.get(string1));
    EXPECT_EQ(2, map.get(string2));
}

} // namespace TestWebKitAPI

#endif
Q: Java - Insert values from Array of type Float, into a New Array of type float by index I have 2 Arrays and an ArrayList float[] a = new float [1000]; // contains 1000 float values float[] b = new float [1000]; // contains 1000 float values ArrayList<Float> c = new ArrayList<Float>(); // contains unique list of float values from array a I wish to perform the following for (int i=0; i<a.length; i++) { b[c.indexOf(c.get(i)]++; } Essentially, i am wanting to go through the length of a, find where the first value from C is found, then insert that into a new array b. However i am returned with an index out of bound error, or incompatible type error expected float() found int(). I have also experiment with wrappers when defining float, due to primitive types. Any help would be great. A: The increment operator is not the problem here: in Java, ++ works on float as well as on int. The snippet has two real issues. First, b[c.indexOf(c.get(i)]++; is missing a closing parenthesis; it should read b[c.indexOf(c.get(i))]++;. Second, the loop runs i from 0 to a.length - 1 but calls c.get(i), and c holds only the unique values, so it is shorter than a; that is where the IndexOutOfBoundsException comes from. If the goal is to count how often each unique value occurs, look up the element of a rather than of c: for (int i = 0; i < a.length; i++) { b[c.indexOf(a[i])]++; } Here c.indexOf(a[i]) returns an int, so it is a valid array index, and b[...]++ increments the float count at that position.
/* * Copyright (C) 2018 Apple Inc. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS'' * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF * THE POSSIBILITY OF SUCH DAMAGE. 
*/

#pragma once

#include "GenericTaskQueue.h"

namespace WebCore {

template <typename T>
class DeferrableTask : public CanMakeWeakPtr<DeferrableTask<T>> {
public:
    DeferrableTask()
        : m_dispatcher()
    {
    }

    DeferrableTask(T& t)
        : m_dispatcher(&t)
    {
    }

    typedef WTF::Function<void ()> TaskFunction;

    void scheduleTask(TaskFunction&& task)
    {
        if (m_isClosed)
            return;

        // Only one task may be pending at a time; scheduling replaces any
        // previously scheduled, not-yet-run task.
        cancelTask();

        m_pendingTask = true;
        m_dispatcher.postTask([weakThis = makeWeakPtr(*this), task = WTFMove(task)] {
            if (!weakThis)
                return;
            ASSERT(weakThis->m_pendingTask);
            weakThis->m_pendingTask = false;
            task();
        });
    }

    void close()
    {
        cancelTask();
        m_isClosed = true;
    }

    void cancelTask()
    {
        // Revoking the weak pointers turns any already-posted lambda into a no-op.
        CanMakeWeakPtr<DeferrableTask<T>>::weakPtrFactory().revokeAll();
        m_pendingTask = false;
    }

    bool hasPendingTask() const { return m_pendingTask; }

private:
    TaskDispatcher<T> m_dispatcher;
    bool m_pendingTask { false };
    bool m_isClosed { false };
};

}
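The scheduling discipline of DeferrableTask (at most one pending task, rescheduling cancels the previous one, close() makes the object permanently inert) can be mimicked in a few lines of Python. This is a behavioural sketch with invented names, not WebKit code; the real class additionally defers execution through a TaskDispatcher and weak pointers.

```python
# Behavioural sketch of the DeferrableTask pattern; names are mine, and
# run_pending() stands in for the real dispatcher draining its queue.

class DeferrableTask:
    def __init__(self):
        self._pending = None      # the currently scheduled callable, if any
        self._closed = False

    def schedule(self, task):
        if self._closed:
            return
        self._pending = task      # replaces any previously scheduled task

    def cancel(self):
        self._pending = None

    def close(self):
        self.cancel()
        self._closed = True

    def has_pending(self):
        return self._pending is not None

    def run_pending(self):
        """Stand-in for the dispatcher running the posted task."""
        task, self._pending = self._pending, None
        if task:
            task()

ran = []
d = DeferrableTask()
d.schedule(lambda: ran.append("first"))
d.schedule(lambda: ran.append("second"))  # cancels "first"
d.run_pending()                            # only "second" runs
```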
The Colorado Avalanche have reportedly added some much needed bulk to their blue-line group, acquiring defenseman Brad Stuart from the San Jose Sharks in a trade, according to TSN's Pierre LeBrun. Stuart has one year and $3.6 million remaining on his contract. In 61 games with the Sharks last season the stay-at-home defender managed three goals and 11 points. CSNBayArea.com's Kevin Kurz is reporting that the Sharks acquired a second-round pick in 2016 and a sixth-round pick in 2017 in the deal. Update: The trade is now official.
Q: Not able to automate loading of node at login Have .nvmrc in my home directory. .nvmrc has the line 'nvm use 4.2' But on login, getting message N/A: version "nvm use 4.2" is not yet installed But from CLI, when I run "nvm use 4.2", it is fine and says : Now using node v4.2.2 (npm v2.14.7) And I'm able to use node A: According to the documentation, you should just place the version number in the file: https://github.com/creationix/nvm#usage Try re-creating your .nvmrc file with only the version number (4.2) as the contents (omit the nvm use part): $ echo 4.2 > ~/.nvmrc
//
// This file was generated by the JavaTM Architecture for XML Binding(JAXB) Reference Implementation, vJAXB 2.1.10 in JDK 6
// See <a href="http://java.sun.com/xml/jaxb">http://java.sun.com/xml/jaxb</a>
// Any modifications to this file will be lost upon recompilation of the source schema.
// Generated on: 2010.03.02 at 03:10:45 PM EET
//

package org.w3._2005._08.addressing;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlAnyAttribute;
import javax.xml.bind.annotation.XmlAnyElement;
import javax.xml.bind.annotation.XmlType;
import javax.xml.namespace.QName;
import org.w3c.dom.Element;

/**
 * <p>Java class for MetadataType complex type.
 *
 * <p>The following schema fragment specifies the expected content contained within this class.
 *
 * <pre>
 * &lt;complexType name="MetadataType">
 *   &lt;complexContent>
 *     &lt;restriction base="{http://www.w3.org/2001/XMLSchema}anyType">
 *       &lt;sequence>
 *         &lt;any processContents='lax' maxOccurs="unbounded" minOccurs="0"/>
 *       &lt;/sequence>
 *       &lt;anyAttribute processContents='lax' namespace='##other'/>
 *     &lt;/restriction>
 *   &lt;/complexContent>
 * &lt;/complexType>
 * </pre>
 *
 */
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "MetadataType", propOrder = {
    "any"
})
public class MetadataType {

    @XmlAnyElement(lax = true)
    protected List<Object> any;
    @XmlAnyAttribute
    private Map<QName, String> otherAttributes = new HashMap<QName, String>();

    /**
     * Gets the value of the any property.
     *
     * <p>
     * This accessor method returns a reference to the live list,
     * not a snapshot. Therefore any modification you make to the
     * returned list will be present inside the JAXB object.
     * This is why there is not a <CODE>set</CODE> method for the any property.
     *
     * <p>
     * For example, to add a new item, do as follows:
     * <pre>
     *    getAny().add(newItem);
     * </pre>
     *
     * <p>
     * Objects of the following type(s) are allowed in the list
     * {@link Object }
     * {@link Element }
     *
     */
    public List<Object> getAny() {
        if (any == null) {
            any = new ArrayList<Object>();
        }
        return this.any;
    }

    /**
     * Gets a map that contains attributes that aren't bound to any typed property on this class.
     *
     * <p>
     * The map is keyed by the name of the attribute and
     * the value is the string value of the attribute.
     *
     * The map returned by this method is live, and you can add new attributes
     * by updating the map directly. Because of this design, there's no setter.
     *
     * @return
     *     always non-null
     */
    public Map<QName, String> getOtherAttributes() {
        return otherAttributes;
    }

}
Q: What to do with an old film camera? Over the years, I've progressively inherited 4 different cameras from my father, the last one is a DSLR, so the three others are pretty much unused now. I can't bring myself to throw them away, but I'm not sure that there's anything more useful to be done with them. Any ideas? For the curious, in order they are: Miranda MS-1N Chinon CE-4 Pentax MZ-6 And the DSLR is a Pentax *ist DS

A: Summary of Options (wiki)

- Keep for posterity. Personally, this is my favourite, because objects we think of as junk to be thrown away are really part of history. I put this into practice often. (My wife does not see things from quite such an historical perspective, however ;) (John Cavan, Dan Brody)
- Keep using to take pictures. Get some film and keep on using them. Enjoy the deep colors, high resolution, and all-round analogue awesomeness of a chemical camera! (lindes, user28077)
- Use lenses with a mirrorless body. (rackandboneman)
- Give to people who will use them. This is probably the most constructive and generous idea so far. It's definitely the one to follow if you haven't got the space to keep them for posterity, you don't need to sell them for cash and you don't have a project to use them in. (John Cavan)
- Sell on eBay. Like giving them away, but with a little bonus for you. :) (asalamon74)
- Salvage what you can for other projects. This wouldn't be my choice, because I am a complete klutz, and totally incapable of projects like this! ;) (Evan Krall)
  - Convert into a projector. (rackandboneman)
  - Parts for repair.

A: Well, if you can't part with them and you won't shoot film (you can share lenses between the *istD and the MZ-6, film isn't dead yet), then I guess you either box 'em up, put them on a shelf, or display them somewhere. However, one consideration for parting with them is there are often volunteer groups teaching poorer kids about photography that are grateful for any gear they can get, I've donated cameras and other equipment to such in the past.
Anyways, hopefully you're using the *istD, it still takes fine pictures. A: Hack one of them together with a slave flash, some ground glass, and a film positive, and project subversive messages onto popular tourist photography subjects. http://strobist.blogspot.com/2008/06/and-now-few-words-from-tourist-standing.html
---
abstract: 'The statefinder indices are employed to test the superfluid Chaplygin gas (SCG) model describing the dark sector of the universe. The model involves a Bose-Einstein condensate (BEC) as dark energy (DE) and an excited state above it as dark matter (DM). The condensate is assumed to have a negative pressure and is embodied as an exotic fluid with the Chaplygin equation of state. Excitations form the normal component of the superfluid. The statefinder diagrams show the discrimination between the SCG scenario and other models with the Chaplygin gas, and indicate a pronounced effect of the DM equation of state, and of an indirect interaction between the two dark components, on the statefinder trajectories and the current statefinder location.'
author:
- 'V.A. Popov'
title: Statefinder analysis of the superfluid Chaplygin gas model
---

*Department of General Relativity and Gravitation,\
Kazan Federal University,\
Kremlyovskaya st. 18, Kazan 420008, Russia*

Email address: vladipopov@mail.ru\

*Keywords:* accelerated expansion, Dark Energy, Dark Matter, relativistic superfluid, Chaplygin gas, statefinder

*PACS:* 95.36.+x, 95.35.+d, 98.80.-k, 98.80.Jk, 47.37.+q

Introduction
============

The energy content of the Universe is a fundamental issue in cosmology. Observational data, such as Type Ia Supernovae (SNIa) , Cosmic Microwave Background (CMB)  and Large Scale Structure [@SDSS], are evidence of an accelerating flat Friedmann-Robertson-Walker model, constituted of about 1/3 baryonic and dark matter and about 2/3 of a dark energy component. The essential feature of DE is that its pressure must be negative to reproduce the present accelerated cosmic expansion. There are a few candidates for DE incorporated in competing cosmological scenarios. The simplest DE model, the cosmological constant, is indeed the vacuum energy with the equation of state $p=-\rho$.
A number of models, such as quintessence [@Wetterich1], k-essence [@Armendariz1], phantom [@Caldwell1], etc., are based on scalar field theories. Braneworld models explain the acceleration through five-dimensional general relativity [@braneworld]. The Chaplygin gas model, also denoted as quartessence, exploits a negative-pressure fluid, whose pressure is inversely proportional to the energy density [@Kamenshchik1]. For a more detailed review of DE models and references see [@Copeland1]. Besides, there are models unifying DE and DM, including some kinds of scalar fields [@UniScFlds], the generalized Chaplygin gas (GCG) and the superfluid Chaplygin gas (SCG) [@SCG; @1]. In order to differentiate these various DE models, Sahni et al. [@Sahni-sf-intro] introduced a new geometrical diagnostic pair $\{r, s\}$, called the *statefinder*, which involves the third-order derivative of the scale factor with respect to time. Its important attribute is that the spatially flat $\Lambda$CDM model has a fixed point $\{r, s\}=\{1,0\}$. Departure of a DE model from this fixed point is a good way of establishing the ‘distance’ of this model from flat $\Lambda$CDM. The statefinder diagnostic has also been applied to several DE models [@Alam-sf; @sf-phantom; @Kam-sf-in-Chap; @sf-holo; @sf-interact; @sf-etc] to differentiate them from $\Lambda$CDM and from one another. In addition, the values of the statefinder pair can be extracted from data of SuperNova Acceleration Probe (SNAP) type experiments [@Sahni-sf-intro; @Alam-sf] to obtain constraints on the models. In this letter the statefinder diagnostic is applied to the SCG model developed in [@SCG; @1]. It represents the dark sector of the universe as a superfluid where the superfluid condensate is considered as DE and the normal component is interpreted as DM.
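For reference, the statefinder pair is built from the third derivative of the scale factor $a(t)$; the definitions below follow Sahni et al. and are standard, though they are not written out explicitly in this excerpt:

```latex
% H = \dot a/a is the Hubble parameter, q the deceleration parameter:
r \equiv \frac{\dddot{a}}{aH^{3}}\,, \qquad
s \equiv \frac{r-1}{3\left(q-\tfrac{1}{2}\right)}\,, \qquad
q \equiv -\frac{\ddot{a}}{aH^{2}}\,.
% For spatially flat \Lambda CDM, r = 1 and s = 0 at all times,
% which gives the fixed point \{r,s\} = \{1,0\} quoted in the text.
```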
The model is based on the action \[Action\] $$S=\int\left(-\frac{R}{16\pi G}+{\cal L}\right)\sqrt{-g}\;d^4x\,,$$ where the Lagrangian ${\cal L}$, associated with a generalized hydrodynamic pressure function, depends on only one variable if we consider the pure condensate, and on three variables when we include the excitation gas. To provide the accelerated expansion, the negative pressure of the superfluid background obeys Chaplygin’s equation of state. In Sec. \[Sec: SF dynamics\] and \[Sec: Universe with BEC\] the SCG model is briefly outlined. The statefinder evolution and the differentiation between the SCG model and other models with the Chaplygin gas are discussed in Sec. \[Sec: Statefinder\]. The metric signature $(+---)$ is adopted in this work. Relativistic superfluid dynamics {#Sec: SF dynamics} ================================ An efficient approach to the description of the excited state is two-fluid hydrodynamics. This theory does not depend on details of the microscopic structure of the quantum liquid and exploits effective macroscopic quantities. In the theory there exist two independent flows: the coherent motion of the ground state, called the superfluid component, and the normal component produced by the quasiparticle gas. For this reason it is necessary to increase the number of independent variables in the generalized pressure (\[PressureInCondensate\]) from one to three. They correspond to three scalar invariants which can be constructed from the pair of independent vectors, namely the superfluid $\mu_\alpha$ and thermal $\theta_\alpha$ momentum covectors, so that the general variation of the generalized pressure in a fixed background is \[dP1\] $$\delta P = \frac{\partial P}{\partial \mu_\alpha}\,\delta\mu_\alpha + \frac{\partial P}{\partial \theta_\alpha}\,\delta\theta_\alpha = n^\alpha\, \delta\mu_\alpha + s^\alpha\, \delta\theta_\alpha\,.$$ The coefficients $n^\alpha$ and $s^\alpha$ are to be interpreted as the particle number and entropy currents respectively. By virtue of its invariance the pressure is given as a function of three independent variables, $I_1=\frac{1}{2}\mu_\alpha \mu^\alpha,\,I_2=\mu_\alpha \theta^\alpha,\,I_3=\frac{1}{2}\theta_\alpha \theta^\alpha$.
Taking the derivatives of the pressure, one finds \[lin\] $$n^\alpha= \frac{\partial P}{\partial I_1}\,\mu^\alpha+ \frac{\partial P}{\partial I_2}\,\theta^\alpha\,, \qquad s^\alpha= \frac{\partial P}{\partial I_2}\,\mu^\alpha+ \frac{\partial P}{\partial I_3}\,\theta^\alpha\,.$$ Since the generalized pressure is the Lagrangian density in the action (\[Action\]), its variation with respect to the metric gives the energy-momentum tensor $$T_{\alpha\beta}=\frac{\partial P}{\partial I_1}\,\mu_\alpha\mu_\beta+ \frac{\partial P}{\partial I_2}\left(\mu_\alpha\theta_\beta+ \theta_\alpha\mu_\beta\right) +\frac{\partial P}{\partial I_3}\,\theta_\alpha\theta_\beta- Pg_{\alpha\beta}\,.$$ Instead of the thermal momentum $\theta_\alpha$ let us introduce an inverse temperature vector $\beta^\alpha=s^\alpha/(s^\beta\theta_\beta)$ which one uses as the independent vector together with the superfluid momentum $\mu_\alpha$ since they are comoving with the excitation gas and the condensate respectively. The corresponding unit 4-velocities are \[Velocities\] $$U^\alpha= \frac{\beta^\alpha}{\beta}\,, \qquad V^\alpha= \frac{\mu^\alpha}{\mu}\,.$$ In place of the scalars $I_1,\ I_2,\ I_3$ one uses three new invariants: a chemical potential $\mu=\sqrt{\mu^\beta \mu_\beta}$, the scalar $\gamma=V_\alpha U^\alpha$ associated with the relative motion of the components, and the inverse temperature with respect to the reference frame comoving with the normal component $\beta=\sqrt{\beta^\beta \beta_\beta}\,$. Using (\[lin\]) and (\[Velocities\]) the energy-momentum tensor and the particle number current are readily represented as $$n^\alpha = n_{\textrm{c}}V^\alpha+ n_{\textrm{n}}U^\alpha\,, \ \[sfPN\]$$ $$T_{\alpha\beta} = \mu n_{\textrm{c}} V_\alpha V_\beta+ W_{\textrm{n}} U_\alpha U_\beta- P g_{\alpha\beta}\,. \ \[sfEMT\]$$ The ground state is described by the generalized hydrodynamic pressure function depending only on $\mu$. We will consider the condensate with the generalized pressure function in the form \[PressureInCondensate\] $$P(\mu)=p_{\textrm{c}}=-\sqrt{A}\,\sqrt{1-\frac{\mu^2}{M^2}}\,.$$ It leads to the following particle and energy densities $$n_{\textrm{c}}= \frac{\sqrt{A}\,\mu}{M^2\sqrt{1-\mu^2/M^2}}\,,\qquad \rho_{\textrm{c}}= \frac{\sqrt{A}}{\sqrt{1-\mu^2/M^2}}\,.$$ It is easy to see that if one eliminates the chemical potential $\mu$ one can obtain \[ChaplyginEOS\] $$n_{\textrm{c}}= \frac{\sqrt{\rho_{\textrm{c}}^2-A}}{M}\,,\qquad p_{\textrm{c}}=- \frac{A}{\rho_{\textrm{c}}}\,,$$ and the adiabatic speed of sound \[SoundSpeed\] $$c_{\textrm{s}}^2=\frac{dp_{\textrm{c}}}{d\rho_{\textrm{c}}}= \frac{A}{\rho_{\textrm{c}}^2}\,.$$ The equation of state (\[ChaplyginEOS\]) is precisely that of the Chaplygin gas suggested by Kamenshchik et al. [@Kamenshchik1] as an alternative to quintessence and developed by a number of authors for the description of the dark sector of the universe.
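As a quick numerical sanity check (an illustrative sketch, not part of the paper): for an *isolated* Chaplygin fluid, as in the original setting of Kamenshchik et al. and without the normal-component exchange present in the SCG model, the equation of state $p=-A/\rho$ integrates the continuity equation to $\rho(a)=\sqrt{A+Ba^{-6}}$. The constants `A` and `B` below are arbitrary illustrative values.

```python
import math

# Isolated Chaplygin fluid (no exchange with a normal component).
# Claim to check: rho(a) = sqrt(A + B*a**-6) solves
#   d(rho)/da + 3*(rho + p)/a = 0   with   p = -A/rho.
A, B = 1.0, 0.5  # arbitrary illustrative constants

def rho(a):
    return math.sqrt(A + B * a ** -6)

def p(a):
    return -A / rho(a)  # Chaplygin equation of state

residuals = []
for a in (0.5, 1.0, 2.0):
    h = 1e-6
    drho = (rho(a + h) - rho(a - h)) / (2 * h)  # numerical d(rho)/da
    residuals.append(drho + 3.0 * (rho(a) + p(a)) / a)
print(residuals)  # each entry ~ 0
```

The same check also confirms that $p\,\rho=-A$ identically and that the adiabatic sound speed squared $A/\rho^2$ stays below unity whenever $\rho>\sqrt A$.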
In contrast to these works, where the pressure of the Chaplygin gas is formed by both DE and DM, this model implies that the equation of state (\[ChaplyginEOS\]) concerns only the BEC, which is interpreted as DE. Note that the generalized pressure (\[PressureInCondensate\]) can be obtained from the Lagrangian \[LagrangianBasic\] $${\cal L} = \partial_\alpha\phi^* \partial^\alpha\phi - M^2\left( \phi\phi^* + \frac{A}{4M^4\,\phi\phi^*} \right)$$ for a complex scalar field $\phi$ in the WKB-approximation [@SCG; @1]. An interesting aspect of the pressure function (\[PressureInCondensate\]) is that it is a hydrodynamical representation of the generalized Born-Infeld Lagrangian $${\cal L}_{\textrm{gBI}} = -A^{\frac{1}{1+\alpha}} \left[1-\left(g^{\mu\nu}\theta_{,\mu}\theta_{,\nu}\right)^{\frac{1+\alpha}{2\alpha}}\right]^{\frac{\alpha}{1+\alpha}}$$ describing a (3+1)-dimensional brane universe with the scalar field $\theta$ in a (4+1)-dimensional bulk. More detailed information about the excited state can be derived from the statistical description of the elementary excitations. The quasiparticle energy spectrum has a significant nonlinear dispersion at high energy, and therefore a completely relativistic description has been carried out only for low-energy excitations, phonons [@Carter2; @Popov1]. Based on the relativistic kinetic theory of the phonon gas [@Popov1] one can in particular obtain \[EOSnn\] $$\mu n_{\textrm{n}}=\left(1-c_{\textrm{s}}^2\right)W_{\textrm{n}}$$ when phonons prevail over the other sorts of quasiparticles. Let us assume that the generalized pressure function is separated as follows: \[AnsatzSeparatedCondensate\] $$P(\mu,\gamma,\beta)=p_{\textrm{c}}(\mu)+p_{\textrm{n}}(\mu,\gamma,\beta)$$ and \[PnPhonon\] $$p_{\textrm{n}}(\mu,\gamma,\beta)= \frac{C(\mu,\gamma)}{\beta^{\,\polytrop+1}}\,,$$ which is inspired by the equilibrium pressure for the phonon gas [@Carter2] corresponding to $\polytrop=3$. Eq. (\[PnPhonon\]) leads to the barotropic equation of state \[AnsatzPn\] $$p_{\textrm{n}}=\frac{W_{\textrm{n}}}{\polytrop+1}\,.$$ Moreover, the ansatz (\[AnsatzSeparatedCondensate\]) and (\[PnPhonon\]) proves to be convenient for a number of reasons. The equation of state (\[PnPhonon\]) makes it possible to avoid a detailed consideration of the full quasiparticle spectrum. Manipulating the sole parameter $\polytrop$, one can simulate the behavior of the normal component as DM. It also simplifies the following study of the cosmic evolution.
This ansatz keeps the condensate self-dependent, i.e. eqs. (\[ChaplyginEOS\]) and (\[SoundSpeed\]) remain valid for the condensate in the framework of two-fluid dynamics and allow one to naturally divide the total energy density into DE and DM fractions. We restrict our consideration to the equation of state (\[AnsatzPn\]) for the normal component lying between the dust one and the stiff one. It is evident from eq. (\[AnsatzPn\]) that this constraint implies $\polytrop\ge 1$. Universe with SCG {#Sec: Universe with BEC} ================= The cosmic medium is now regarded as matter which is partly in the BEC state, and its particle number current and energy-momentum tensor have the form (\[sfPN\]) and (\[sfEMT\]), where the superfluid background obeys the equation of state (\[ChaplyginEOS\]) and the excited state is described by the relations (\[EOSnn\]) and (\[AnsatzPn\]). Let us consider a homogeneous and isotropic spatially flat universe. In this case the superfluid and normal velocities are equal and thus $\gamma=1$. Einstein’s equations then reduce to \[EinsteinEqs\] $$3\,\frac{\dot a^2}{a^2} = 8\pi G\, \rho_{\textrm{tot}}\,, \qquad -6\,\frac{\ddot a}{a} = 8\pi G\left(3p_{\textrm{tot}}+\rho_{\textrm{tot}}\right)\,,$$ where $\rho_{\textrm{tot}}$ consists of the condensate density $\rho_{\textrm{c}}$ and the normal one $\rho_{\textrm{n}}=W_{\textrm{n}}-p_{\textrm{n}}$, which are interpretable as the DE and DM densities respectively, and $p_{\textrm{tot}}=p_{\textrm{c}}+p_{\textrm{n}}=P$. In accordance with the integrability conditions of the Einstein equations we require local energy-momentum conservation $\nabla_\mu T^{\mu\nu}=0$, which yields \[EnergyConservation\] $$\dot\rho_{\textrm{tot}} + 3\,\frac{\dot a}{a}\left(p_{\textrm{tot}}+\rho_{\textrm{tot}}\right)=0\,.$$ The interaction between DE and DM is implicitly included in equation (\[EnergyConservation\]) and also in the particle number conservation $\nabla_\mu n^\mu = 0$, which leads to \[ParticleConservation\] $$\dot n_{\textrm{tot}} + 3\,\frac{\dot a}{a}\,n_{\textrm{tot}}=0 \quad\Longrightarrow\quad n_{\textrm{c}}+n_{\textrm{n}} = \frac{n_0}{a^3}\,,$$ where $n_0$ is an integration constant. Taking into account the expressions (\[EOSnn\]), (\[AnsatzPn\]) with $\gamma=1$ and (\[ParticleConservation\]), eqs.
(\[EinsteinEqs\]) and (\[EnergyConservation\]) are reduced to the following two dimensionless equations: $$3\,\frac{\dot a^2}{a^2} = \rho_{\text{c}} + \frac{\polytrop}{\polytrop+1}\, \frac{\rho_{\text{c}}}{\sqrt{\rho_{\text{c}}^2-1}} \left( \frac{k}{a^3} - \sqrt{\rho_{\text{c}}^2-1} \right), \ \[Eq1\]$$ $$3\,\frac{\dot a}{a} \left(\frac{k \rho_{\text{c}}}{a^3 \sqrt{\rho_{\text{c}}^2-1}} - \frac{\polytrop+1}{\rho_{\text{c}}}\right) + \dot\rho_{\text{c}} \left(1 - \frac{\polytrop\, k}{a^3 \left(\rho_{\text{c}}^2-1\right)^{3/2}} \right)=0\,, \ \[Eq2\]$$ where the notation $\rho_\text{c}$ is now used for the dimensionless energy density $\rho_\text{c}/\sqrt A$, and similarly $\rho_\text{n}$ will be used for $\rho_\text{n}/\sqrt A$, etc. The dimensionless time variable $t'$ is connected with real time $t$ as $t'=\sqrt{8\pi G A^{1/2}}\,t$ and $k=n_0/\sqrt{\lambda}$. In the formal limit $\polytrop\to\infty$ eqs. (\[Eq1\]) and (\[Eq2\]) are solved analytically. As is obvious from (\[AnsatzPn\]), the quasiparticle pressure is then neglected and DM behaves as dust-like matter. In this case eq. (\[Eq2\]) yields the condensate energy density in the form \[rho\_c:dust\] $$\rho_{\textrm{c}} = \sqrt{1+\frac{k^2}{\left(a^3+\kappa_0\right)^2}}\,,$$ and the DM energy density is governed by the law \[rho\_n:dust\] $$\rho_{\textrm{n}} = \frac{\kappa_0}{a^3}\, \sqrt{1+\frac{k^2}{\left(a^3+\kappa_0\right)^2}}\,.$$ It is clear from (\[rho\_c:dust\]) and (\[rho\_n:dust\]) that the integration constant $\kappa_0$ is the current ratio between the DM and DE energy densities. At the early stage (i.e. for small $a$) the total energy density is approximated by $\rho_{\textrm{tot}}\propto a^{-3}$, which corresponds to a universe dominated by dust-like matter. The same behavior is a feature of the Chaplygin gas [@Kamenshchik1], but even though in this model the condensate has the same equation of state, such dependence is due to the normal component. At the late stage (i.e. for large $a$) $\rho_{\textrm{tot}}\to 1$. Separating now the DE and DM contributions one finds the subleading terms are $$\rho_{\textrm{c}} \sim 1+\frac{k^2}{2}\,a^{-6}\,, \qquad \rho_{\textrm{n}} \sim \frac{\kappa_0}{a^3}\,,$$ whereas the scale factor time evolution corresponds to de Sitter spacetime, namely, $a\propto e^{t'/\sqrt{3}}$. ![The ratio of the energy density to the critical density for both components of SCG as a function of the redshift $z$ for $k=0.2$ and the current value of $\rho_\text{c}(t_0)=1.01$.
The quantity $\polytrop$ varies as 1 (dot-dashed line), 5 (dashed line) and 25 (solid line).[]{data-label="FigOmega"}](OmegaEvol){width=".5\textwidth"} When $\polytrop$ has a finite value, the asymptotic behavior of $\rho_\text{c}$ is the same as in the case with pressureless DM, while $\rho_\text{n}\propto a^{-3(1-1/\polytrop)}$ for small $a$, and $\rho_\text{n}\propto a^{-3(1+1/\polytrop)}$ for large $a$. As before, the universe falls within the de Sitter phase in the far future. At the intermediate stage eqs. (\[Eq1\]) and (\[Eq2\]) are solved numerically. The figure \[FigOmega\] depicts the evolution of the normalized energy densities $\Omega_{\rm c}$ and $\Omega_{\rm n}$ of DE and DM respectively. The curves are plotted for different values of $\polytrop$ and fixed $k$ and current value of $\rho_\text{c}$. The latter is close to 1 to provide the correspondence with the current observational value of the DE fraction $\Omega_{\rm c}\approx 0.7$. Photometric observations of Type Ia supernovae attest that the recent cosmological acceleration commenced at $0.3<z_\text{T}<1$ [@Wang1]. To satisfy this condition the quantity $\polytrop$ has to be large, $\polytrop \geq 20$. This implies a lower effective sound speed[^1] for the normal component, evolving as $c_\text{s}/\sqrt\polytrop$. In this case the properties of the normal component are close to CDM. Note that in superfluid helium a lower second sound speed is provided by quasiparticles from the nonlinear part of the energy spectrum (such as rotons). To develop a more realistic model, a wide quasiparticle spectrum should be taken into account. In the context of the pure phonon consideration they are not taken into account and their influence is simulated with a large value of $\polytrop$. Statefinder diagnostic {#Sec: Statefinder} ====================== In this section we focus our attention on the statefinder diagnostic of the SCG model.
The parameter pair $\{ r,s \}$ called the “statefinder” was introduced by Sahni et al. [@Sahni-sf-intro] in order to differentiate between competing cosmological scenarios involving DE. The statefinder test is a geometrical one based on the expansion of the scale factor $a(t)$ near the present time $t_0$: $$a(t)=1 + H_0(t-t_0) - \frac{q_0}{2}H_0^2(t-t_0)^2 + \frac{r_0}{6}H_0^3(t-t_0)^3 + \dots\,,$$ where $a(t_0)=1$ and $H_0,\ q_0,\ r_0$ are the current values of the Hubble constant $H=\dot a/a$, the deceleration factor $q=-\ddot a/aH^2$ and the former statefinder index $r=\dddot a/aH^3$ respectively. The latter index $s$ is the combination of $r$ and $q$: $s=(r-1)/3(q-1/2)$. Since the different cosmological models exhibit qualitatively different trajectories in the $r-s$, $q-r$ or $q-s$ planes, the statefinder diagnostic is a good tool to distinguish them. The remarkable property of the pair $\{ r,s \}$ is that $\Lambda$CDM corresponds to the fixed point $\{ r,s \}=\{ 1,0 \}$. In fact, the statefinder diagnostic has been successfully used to test a number of models such as the cosmological constant, the quintessence [@Alam-sf], the phantom [@sf-phantom], the Chaplygin gas [@Kam-sf-in-Chap; @Alam-sf], the holographic dark energy models [@sf-holo], the interacting dark energy models [@sf-interact], etc. [@sf-etc]. On the other hand, the statefinder indices can be estimated from SNAP-type experiments [@Sahni-sf-intro; @Alam-sf] to examine DE models against the observational data. In what follows we will calculate the statefinder parameters for the SCG model and plot the evolution trajectories in the statefinder planes. The deceleration factor and the statefinder pair can also be expressed as $$q = \frac12\left( 1+\frac{3p}{\rho} \right), \ \[qPress\]$$ $$r = 1 + \frac{9\left(\rho+p\right)}{2\rho}\,\frac{\dot p}{\dot\rho}\,, \ \[rPress\]$$ $$s = \frac{\rho+p}{p}\,\frac{\dot p}{\dot\rho}\,, \ \[sPress\]$$ where $\rho$ and $p$ are the total energy density and pressure, and the overdot denotes the derivative with respect to time.
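The fixed-point property $\{r,s\}=\{1,0\}$ of flat $\Lambda$CDM can be verified directly from the definitions above. The following sketch (a hypothetical numerical check, in units $H_0=1$ with $\Omega_{\rm m}=0.3$) differentiates the exact $\Lambda$CDM scale factor $a(t)\propto\sinh^{2/3}\bigl(\tfrac32\sqrt{\Omega_\Lambda}\,t\bigr)$ by finite differences:

```python
import math

# Flat LambdaCDM scale factor in units H0 = 1 (illustrative check only):
#   a(t) = (Om/OL)**(1/3) * sinh(1.5*sqrt(OL)*t)**(2/3)
OM, OL = 0.3, 0.7

def a(t):
    return (OM / OL) ** (1.0 / 3.0) * math.sinh(1.5 * math.sqrt(OL) * t) ** (2.0 / 3.0)

def statefinder(t, h=1e-3):
    # central finite differences for a', a'', a'''
    a0 = a(t)
    a1 = (a(t + h) - a(t - h)) / (2 * h)
    a2 = (a(t + h) - 2 * a0 + a(t - h)) / h ** 2
    a3 = (a(t + 2 * h) - 2 * a(t + h) + 2 * a(t - h) - a(t - 2 * h)) / (2 * h ** 3)
    H = a1 / a0
    q = -a2 / (a0 * H ** 2)          # deceleration factor
    r = a3 / (a0 * H ** 3)           # first statefinder index
    s = (r - 1.0) / (3.0 * (q - 0.5))
    return q, r, s

# the present epoch t0 is where a(t0) = 1
t0 = math.asinh(math.sqrt(OL / OM)) / (1.5 * math.sqrt(OL))
q0, r0, s0 = statefinder(t0)
print(q0, r0, s0)  # expect q0 ~ -0.55, r0 ~ 1, s0 ~ 0
```

The same routine applied to any other $a(t)$ traces out the model's trajectory in the $s-r$ and $q-r$ planes.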
Using the dimensionless energy density and pressure, the expressions (\[rPress\]) and (\[sPress\]) can be rewritten as $$r = 1 - \frac{3\sqrt3\,\dot p_{\text{tot}}}{2\,\rho_{\text{tot}}^{3/2}}\,, \ \[rSCGgen\]$$ $$s = - \frac{\dot p_{\text{tot}}}{\sqrt3\,p_{\text{tot}}\sqrt{\rho_{\text{tot}}}}\,, \ \[sSCGgen\]$$ where the overdot now denotes the derivative with respect to the dimensionless time. It is because the SCG model uses the unified conservation laws for DM and DE, and the DM pressure is non-zero in general, that the expressions (\[rSCGgen\]) and (\[sSCGgen\]) are best suited to calculate the statefinder indices, to analyze the impact of the SCG parameters on the statefinder location and to reveal the difference in the statefinder evolution for some models with the Chaplygin gas. First we consider the special case $\polytrop\to\infty$, when the normal component is pressureless and the DE and DM energy densities evolve according to the expressions (\[rho\_c:dust\]) and (\[rho\_n:dust\]) respectively. In this case one can obtain the explicit dependence $r(s)$, but in view of its complexity it is expressed as follows: $$q = \frac12\left( 1 - \frac{3a^3\sqrt{\rho_{\text{c}}^2-1}}{k\rho_{\text{c}}^2} \right),$$ $$r = 1 + \frac{9a^6\left(\rho_{\text{c}}^2-1\right)^2}{2k^2\rho_{\text{c}}^4}\,,$$ $$s = - \frac{a^3\left(\rho_{\text{c}}^2-1\right)^{3/2}}{k\rho_{\text{c}}^2}\,,$$ where the scale factor $a$ appears as a natural parameter. The value of $k$ is directly related to the scale factor corresponding to the transition from the deceleration to the acceleration. It is agreed that the transition occurs at $0.3<z_\text{T}<1$ [@Wang1], resulting in the restriction $k<0.652$ for the dust-like normal component. ![The statefinder evolution diagrams for the SCG model with the dust-like normal component. The quantity $k$ varies as 0.652 (solid line), 0.488 (dashed line) and 0.345 (dotted line). Dots mark the current values of the statefinder parameters and arrows show the evolution direction of the statefinder trajectories. The star denotes the $\Lambda$CDM location.[]{data-label="FigDust"}](srEvol "fig:"){width=".45\textwidth"}![The statefinder evolution diagrams for the SCG model with the dust-like normal component. The quantity $k$ varies as 0.652 (solid line), 0.488 (dashed line) and 0.345 (dotted line).
Dots mark the current values of the statefinder parameters and arrows show the evolution direction of the statefinder trajectories. The star denotes the $\Lambda$CDM location.[]{data-label="FigDust"}](qrEvol "fig:"){width=".45\textwidth"} In the figure \[FigDust\] we plot the evolution trajectories in the $s-r$ and $q-r$ planes assuming the current DE fraction $\Omega_{\Lambda 0}=0.7$ and varying $k$ as 0.652, 0.488 and 0.345, which correspond to $z_\text{T}=$0.3, 0.4 and 0.5 respectively. The trajectory in the $s-r$ plane begins and ends at the same point, corresponding to $\Lambda$CDM. This is a feature of the SCG model with the pressureless DM component. It does not take into account the radiation, and therefore the total pressure is the negative DE pressure and $q < 1/2$ throughout the whole universe evolution. The value $k=0$ corresponds to the transition redshift of the $\Lambda$CDM scenario $z_{\text{T}}=(\kappa_0/2)^{-1/3}-1\approx 0.671$. In this case the loop in the figure \[FigDust\] degenerates into the fixed point {0,1} and the SCG model coincides with $\Lambda$CDM. When $\polytrop$ is finite, the pressure of DM is positive and the expressions (\[rSCGgen\]) and (\[sSCGgen\]) are directly used to calculate the statefinder evolution based on the numerical solution of eqs. (\[Eq1\]) and (\[Eq2\]). In the figure \[FigSoft\] we plot the evolution trajectories $r(s)$ and $r(q)$ for various values of the quantity $\polytrop$. At early times the DM pressure and energy density exceed the DE ones and ensure that the total pressure is positive. At the present stage of DE dominance the universe expands with acceleration driven by the negative total pressure. Between these regimes there is a moment of time when the negative pressure of DE is balanced by the positive pressure of DM. At this point the total pressure is zero and $s\to\infty$.
In fact, this point exists in the universe evolution even when DM is pressureless, if we take into account the whole energy content of the universe. Moreover, the most considerable contribution to the positive pressure at this stage is given by the radiation. The non-zero DM pressure only shifts the moment of time when $p_\text{tot}=0$. Trajectories in the figure \[FigSoft\] are shown after this moment to focus attention on the problem of the recent accelerated expansion of the universe. Another quantity in the SCG model, $k$, realizes the indirect interaction between the two components. It is clear from eqs. (\[Eq1\]) and (\[Eq2\]) that varying $k$ can be counterbalanced by a rescaling of the scale factor $a$, leaving the equations invariant. However, for a fixed ratio between the DM and DE energy densities different $k$ correspond to different trajectories. Although the equations can be solved for any $k$, it is restricted by the observational estimations of the transition redshift $z_\text{T}$ [@Wang1]. For the current DE content $\Omega_{\Lambda 0}=0.7$ the value of $k$ does not exceed 0.652, determined by the limiting case of the pressureless normal component. The figure \[FigSoft\] depicts the evolution curves for $k=0.2$; varying $k$ gives similar plots which are different only in a quantitative sense. ![The statefinder evolution diagrams for the SCG model for $k=0.2$ and different values of $\polytrop=1,\ 5,\ 25,\ 50,\ 150$ (the solid lines from top to bottom). The trajectories for the model of the Chaplygin gas with CDM (the dashed lines correspond to $\kappa=0.5,\ 1,\ 5$ from top to bottom) and the GCG model (the dotted lines correspond to $\alpha=1,\ 0.5,\ 0.05$ from top to bottom) are added for comparison. Dots mark the current values of the statefinder parameters and arrows show the evolution direction of the statefinder trajectories.
The star denotes the $\Lambda$CDM location.[]{data-label="FigSoft"}](srEvolGen "fig:"){width=".45\textwidth"}![The statefinder evolution diagrams for the SCG model for $k=0.2$ and different values of $\polytrop=1,\ 5,\ 25,\ 50,\ 150$ (the solid lines from top to bottom). The trajectories for the model of the Chaplygin gas with CDM (the dashed lines correspond to $\kappa=0.5,\ 1,\ 5$ from top to bottom) and the GCG model (the dotted lines correspond to $\alpha=1,\ 0.5,\ 0.05$ from top to bottom) are added for comparison. Dots mark the current values of the statefinder parameters and arrows show the evolution direction of the statefinder trajectories. The star denotes the $\Lambda$CDM location.[]{data-label="FigSoft"}](qrEvolGen "fig:"){width=".45\textwidth"} The figure \[FigSoft\] also contains the evolution trajectories for two alternative models with the Chaplygin gas, to study the differences in their statefinder evolution. The former describes the universe with DE obeying the Chaplygin equation of state $p_\Lambda=-A/\rho_\Lambda\,$, and CDM. This is a two-component model without interaction between its parts, where the energy densities of the Chaplygin gas and CDM evolve according to \[KamenRho\] $$\rho_{\Lambda}= \sqrt{A+Ba^{-6}}\,, \qquad \rho_{\text{m}}=Ca^{-3}\,.$$ The statefinder diagnostic of this model was carried out in [@Kam-sf-in-Chap; @Alam-sf]. Substituting (\[KamenRho\]) into (\[qPress\])–(\[sPress\]) one can obtain the explicit dependence $$r(s)=1- \frac{9s(1+s)}{2\left(1+\kappa\sqrt{-s}\right)}\,, \qquad q(s)=\frac12\left( 1- \frac{3(1+s)}{1+\kappa\sqrt{-s}} \right),$$ where $\kappa=C/\sqrt B$ is the ratio between the CDM and Chaplygin gas energy densities at the beginning of the cosmological evolution. The latter model is the generalized Chaplygin gas (GCG) with the equation of state $p=-A/\rho^\alpha\, ,\ (0\leq\alpha\leq 1)$, whose energy density evolves according to $$\rho=\left( A+Ba^{-3(1+\alpha)} \right)^{1/(1+\alpha)}\,.$$ It is also suggested that the energy density $\rho$ consists of both vacuum and matter contributions. This favors the use of the GCG model for a DE and DM unification.
The statefinder parameters have the explicit dependence $$r(s)=1- \frac{9s(\alpha+s)}{2\alpha}\,, \qquad q(s)=\frac12\left( 1- \frac{3(\alpha+s)}{\alpha} \right).$$ It is obvious from the figure \[FigSoft\] that the evolution trajectories are distinct for all three models and that they converge to the same point in the far future. Note that the models become hard to distinguish at the late stage in the $s-r$ diagram, while they remain quite different in the $q-r$ plane. ![The current statefinder locations. The diagrams show the current positions for the Chaplygin gas model with CDM (circles), the GCG model (squares) and the SCG model (triangles). The star denotes the $\Lambda$CDM location. The areas bordered by the rectangles are enlarged to resolve the values near the $\Lambda$CDM location (star).[]{data-label="FigChaplygin"}](srPres2 "fig:"){width=".43\textwidth"}![The current statefinder locations. The diagrams show the current positions for the Chaplygin gas model with CDM (circles), the GCG model (squares) and the SCG model (triangles). The star denotes the $\Lambda$CDM location. The areas bordered by the rectangles are enlarged to resolve the values near the $\Lambda$CDM location (star).[]{data-label="FigChaplygin"}](qrPres2 "fig:"){width=".47\textwidth"} Nevertheless, these trajectories are not of determining significance in themselves. The current statefinder values are of primary importance to differentiate cosmological scenarios from $\Lambda$CDM and to impose restrictions on the models. The figure \[FigChaplygin\] shows the current statefinder locations for the models with the Chaplygin gas assuming that the current DE density $\Omega_{\Lambda 0}=0.7$. In the model with the pure Chaplygin gas it is the sole parameter $\kappa$ that determines the statefinder evolution trajectory and fixes the modern statefinder values. We start with the value of $\kappa=3/7$, which coincides with the current ratio between the DM and DE energy densities; it leads to the modern statefinder values being located far from the $\Lambda$CDM point.
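The current location of the Chaplygin-gas-with-CDM model can be evaluated directly from the component densities (\[KamenRho\]) and the general expressions (\[qPress\])–(\[sPress\]), using $\dot p/\dot\rho=(dp/da)/(d\rho/da)$. A sketch follows; the constants `A`, `B`, `C` are hypothetical values chosen only so that $\rho_{\rm tot}(a{=}1)=1$ and $\Omega_{\Lambda 0}=0.7$, not fitted parameters from the paper.

```python
import math

# Chaplygin DE rho_L = sqrt(A + B*a**-6) plus non-interacting CDM rho_m = C*a**-3.
# Hypothetical normalization: today (a = 1) rho_tot = 1 and Omega_L0 = 0.7.
A, B, C = 0.40, 0.09, 0.30

def statefinder_chaplygin_cdm(a):
    rho_L = math.sqrt(A + B * a ** -6)
    rho_m = C * a ** -3
    rho = rho_L + rho_m
    p = -A / rho_L                        # only the Chaplygin component carries pressure
    drho_L = -3.0 * B * a ** -7 / rho_L   # d(rho_L)/da
    dp = (A / rho_L ** 2) * drho_L        # dp/da
    drho = drho_L - 3.0 * C * a ** -4     # d(rho_tot)/da
    dpdrho = dp / drho                    # equals dot-p / dot-rho
    q = 0.5 * (1.0 + 3.0 * p / rho)
    r = 1.0 + 4.5 * (rho + p) / rho * dpdrho
    s = (rho + p) / p * dpdrho
    return q, r, s

q0, r0, s0 = statefinder_chaplygin_cdm(1.0)
print(q0, r0, s0)  # today's location: r0 > 1 and s0 < 0, off the LCDM point {1, 0}
```

For these illustrative constants the current point lies at $r_0\approx1.47$, $s_0\approx-0.18$, visibly away from the $\Lambda$CDM fixed point, in line with the $\kappa=3/7$-type cases discussed above.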
It is easy to see that already for $\kappa=5$ the modern location is fairly close to $\Lambda$CDM. In the GCG model one uses the decomposition of the energy density into two components \[splitGCG\] $$\rho=\rho_{\Lambda}+\rho_{\text{m}}\,, \qquad p= -\rho_{\Lambda}\,,$$ proposed in [@Bento2] to fix the current statefinder position. The figure \[FigChaplygin\] shows that it approaches the $\Lambda$CDM point as $\alpha$ tends to zero. The current values for the SCG model are given for fixed $k=0.2$ and different $\polytrop$. As one would expect, they are far from the $\Lambda$CDM point, as well as from the other models concerned, when $\polytrop$ is small, and tend to the $\Lambda$CDM fixed point when $\polytrop$ increases. The $s-r$ diagram demonstrates very close trajectories for all three models near the $\Lambda$CDM point. In contrast, the $q-r$ diagram allows one to differentiate between these models. In the GCG model the current deceleration factor is the same as in $\Lambda$CDM for any $\alpha$ owing to the selected decomposition (\[splitGCG\]). It implies that today’s statefinder locations for different $\alpha$ lie along the vertical line $q=\left( q_0 \right)_{\Lambda\text{CDM}}=-0.55$ in the $q-r$ diagram. In the model with the pure Chaplygin gas $$q_0=\left( q_0 \right)_{\Lambda\text{CDM}} - \frac32\,\Omega_{\Lambda 0}\, s_0$$ and it is to be found resting on the parabola to the right of the vertical line. When $\polytrop$ increases, the current statefinder location line in the SCG model comes close to the parabola, since large $\polytrop$ means that the normal component behaves like CDM. Decreasing $k$ implies an attenuation of the interrelation between the components of SCG, which also leads to the approach of the SCG and pure Chaplygin gas models and to a further degeneration of SCG into $\Lambda$CDM. ![The integrated statefinder locations. The diagrams show the integrated statefinder indices defined by (\[sfInegr\]) for the Chaplygin gas model with CDM (parabolas) for different $\kappa$ and the SCG model for different $\polytrop$ at $k=0.2$.
The dashed lines correspond to $z_\text{max}=1$, the dot-dashed lines correspond to $z_\text{max}=2$, and the dotted lines contain the current locations. The values for $\kappa=1$ (circles) and 5 (squares), and $\polytrop=20$ (triangles) and 150 (rhombuses) are marked out for detailed comparison. The stars denote the $\Lambda$CDM locations.[]{data-label="FigSFbar"}](Figsrbar "fig:"){width=".45\textwidth"}![The integrated statefinder locations. The diagrams show the integrated statefinder indices defined by (\[sfInegr\]) for the Chaplygin gas model with CDM (parabolas) for different $\kappa$ and the SCG model for different $\polytrop$ at $k=0.2$. The dashed lines correspond to $z_\text{max}=1$, the dot-dashed lines correspond to $z_\text{max}=2$, and the dotted lines contain the current locations. The values for $\kappa=1$ (circles) and 5 (squares), and $\polytrop=20$ (triangles) and 150 (rhombuses) are marked out for detailed comparison. The stars denote the $\Lambda$CDM locations.[]{data-label="FigSFbar"}](Figqrbar "fig:"){width=".45\textwidth"} In order to distinguish these models with confidence we use the following integrated quantities \[sfInegr\] $$\bar q = \frac{1}{z_\text{max}}\int_0^{z_\text{max}} q\, dz\,, \qquad \bar r = \frac{1}{z_\text{max}}\int_0^{z_\text{max}} r\, dz\,, \qquad \bar s = \frac{1}{z_\text{max}}\int_0^{z_\text{max}} s\, dz\,,$$ introduced in [@Alam-sf] to take into account the previous DE evolution. The figure \[FigSFbar\] depicts the lines passing through the points associated with the pairs $\{ \bar s , \bar r \}$ and $\{ \bar q , \bar r \}$ for different values of $\kappa$ in the pure Chaplygin gas model and for different $\polytrop$ in the SCG model. The current location lines are added for comparison. It is apparent that certain values of $z_\text{max}$ considerably separate the models even though their current statefinder positions are almost indistinguishable. It is clear that in general the opposite situation can also take place. Because of this ambiguity the trend of the $z_\text{max}$-dependence should be specially considered.
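For a flat $\Lambda$CDM baseline the averages (\[sfInegr\]) are trivial for $\bar r$ and $\bar s$ (they stay at 1 and 0 at all redshifts), while $\bar q$ requires a quadrature. A hypothetical numerical sketch (units $H_0=1$, $\Omega_{\rm m}=0.3$, $z_\text{max}=1$):

```python
# Averaged statefinders of eq. (sfInegr) for flat LambdaCDM (illustrative sketch).
OM, OL = 0.3, 0.7

def q_lcdm(z):
    rho = OM * (1.0 + z) ** 3 + OL        # total dimensionless density
    return 0.5 * (1.0 - 3.0 * OL / rho)   # p = -OL (cosmological constant)

def averaged(f, zmax, n=1000):
    # trapezoidal rule for (1/zmax) * integral_0^zmax f(z) dz
    h = zmax / n
    total = 0.5 * (f(0.0) + f(zmax)) + sum(f(i * h) for i in range(1, n))
    return total * h / zmax

qbar = averaged(q_lcdm, 1.0)
rbar, sbar = 1.0, 0.0  # exact for LambdaCDM: r(z) = 1 and s(z) = 0 at every z
print(qbar, rbar, sbar)
```

Replacing `q_lcdm` by the $q(z)$ of a competing model (e.g. the Chaplygin gas with CDM) and varying `zmax` reproduces the kind of $z_\text{max}$-dependent separation discussed below.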
The figure \[FigSFzmax\] shows the $\bar s - \bar r$ and $\bar q - \bar r$ diagrams governed by $z_\text{max}$. The magnitudes of the quantities (\[sfInegr\]) are less than the maximal corresponding statefinder values in the range $[0,z_\text{max}]$, and the evolution curves in the figure \[FigSoft\] are quite smooth; therefore the integrated statefinder trajectories are similar to the evolution ones, and their efficiency could be developed in different ways. Since the SCG trajectories in the $\bar q - \bar r$ plane for large $\polytrop$ become closer to the $\Lambda$CDM line at greater $z_\text{max}$, increasing $z_\text{max}$ is not as reasonable as improving the statistical accuracy through a larger number of SNIa in the observational redshift range. On the contrary, the difference between the SCG model and $\Lambda$CDM in the $\bar s - \bar r$ diagram is enhanced when $z_\text{max}$ increases. It is primarily caused by a growing magnitude of the parameter $s$ in the SCG model up to the instant when $p_\text{tot}=0$, while $\Lambda$CDM is represented as the fixed point $\{0,1\}$ as in the current statefinder diagram. This advantage is favorable for distinguishing the competing models with more confidence using observations of SNIa at higher redshifts. Similar estimations were carried out in [@Alam-sf], where the authors revealed that the discriminatory ability of the statefinders varies with redshift and showed that it improves when $q$, $r$ and $s$ in (\[sfInegr\]) are integrated over different redshift ranges. ![$z_\text{max}$-dependence of the integrated statefinder quantities. The dot-dashed lines correspond to the Chaplygin gas model with CDM for $\kappa=5$. The dashed lines correspond to the SCG model for $\polytrop=20$ (long dashes) and 150 (short dashes), and the dotted lines represent the statefinder evolution trajectories for the same parameters. $\Lambda$CDM is shown as the star in the left panel and the horizontal solid line in the right one.
The values for $z_\text{max}=1$ (circles), 2 (squares) and 5 (rhombuses) are marked out for detailed comparison. []{data-label="FigSFzmax"}](srZmax1 "fig:"){width=".45\textwidth"}![$z_\text{max}$-dependence of the integrated statefinder quantities. The dot-dashed lines correspond to the Chaplygin gas model with CDM for $\kappa=5$. The dashed lines correspond to the SCG model for $\polytrop=20$ (long dashes) and 150 (short dashes), and the dotted lines represent the statefinder evolution trajectories for the same parameters. $\Lambda$CDM is shown as the star in the left panel and the horizontal solid line in the right one. The values for $z_\text{max}=1$ (circles), 2 (squares) and 5 (rhombuses) are marked out for detailed comparison. []{data-label="FigSFzmax"}](qrZmax1 "fig:"){width=".45\textwidth"} Conclusion ========== In this letter the SCG model is studied from the statefinder viewpoint. This model describes the dark sector of the universe as a matter that behaves as DE while it is in the ground state and as DM when it is in the excited state. Cosmological dynamics is described in the framework of the relativistic superfluid model; therefore the interaction between DE and DM is implicitly involved in the conservation laws (\[EnergyConservation\]) and (\[ParticleConservation\]). The condensate possesses the equation of state of the Chaplygin gas, but the universe evolution provided by this matter is different from the two-component model with the Chaplygin gas and CDM, as well as from the GCG model used for unifying DE and DM. The discrimination is clearly demonstrated in the statefinder evolution diagrams. The diagrams show that for a fixed ratio between the DM and DE energy densities two quantities determine the trajectory and the current statefinder location. The former, $\polytrop$, governing the DM equation of state, ought to be quite large to correspond to the cosmological observations.
It implies that the pressure of the normal component is small and it behaves like CDM. From the superfluid standpoint it means that the second sound speed is small too, and this inference should be taken into account for any (realistic or simulative) DM equation of state. The latter, $k$, interrelating the DM and DE, is restricted by the requirement that the universe commenced to accelerate before now. The limiting case of infinite $\polytrop$ and $k=0$ corresponds to $\Lambda$CDM and establishes the maximal value for the transition redshift $z_\text{T}=(z_\text{T})_{\Lambda\text{CDM}}=0.671$ in the SCG model. Near the $\Lambda$CDM fixed point the SCG and pure Chaplygin gas models are close together, but they can be separated if the earlier evolution is taken into account. It is found that a better evaluation could be achieved at lower redshifts for the parameters $\bar q$ and $\bar r$, and at higher redshifts for $\bar s$. The same inference was made in [@Alam-sf] on the basis of the statefinder analysis of a number of models. As is shown in [@Alam-sf], the observational data from the SNAP type experiments are in reasonably good agreement with $\Lambda$CDM and rule out the models whose current statefinder values are located far from the $\Lambda$CDM point. This is the reason that the effect of the previous DE evolution takes on great importance. It is hoped that the future high-z supernova observations will provide new data to clarify the essence of DE. [99]{} *Sov.Phys.JETP* **56** (1982) 923;\ [Wiley Easter, Bombay]{}[48]{}[1985]{}. [^1]: It is known as the second sound speed in the superfluid theory.
@page
@model WebAPI.IndexModel
@inject Microsoft.Extensions.Configuration.IConfiguration Configuration

<div class="text-center">
    <h1 class="display-4">CORS Test 1</h1>
    @{
        var host3 = Configuration["host3"];
        var theHost = HttpContext.Request.Host.Value;
        if (!host3.Contains(theHost) && !theHost.Contains("localhost"))
        {
            <text>Test from <a href="@host3">@host3</a>
                or <a href="https://localhost:5001">https://localhost:5001</a>
            </text>
        }
    }
</div>
<div>
    <span id='result'></span>
</div>
<ul>
    <li>
        <input type="button" value="Values"
               onclick="MyTestCors3('@Model.Host', '/api/values', 'GET')" />
    </li>
    <li>
        <input type="button" value="PUT test"
               onclick="MyTestCors3('@Model.Host', '/api/values/5', 'PUT')" />
    </li>
    <li>
        <input type="button" value="GetValues2 [DisableCors]"
               onclick="MyTestCors3('@Model.Host', '/api/values/GetValues2', 'GET')" />
    </li>
</ul>
<script src="~/js/MyJS.js"></script>
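The buttons call `MyTestCors3` from `MyJS.js` (not shown here). Whether the browser exposes each cross-origin response depends on the `Access-Control-Allow-Origin` header returned by the API. A minimal sketch of that decision in Python (a simplified model: preflight and credentialed-request rules are omitted, and the function name is ours, not part of the project):

```python
from typing import Optional

def cors_allowed(request_origin: str, allow_origin_header: Optional[str]) -> bool:
    """Decide whether a browser would expose a cross-origin response,
    judging only by the Access-Control-Allow-Origin response header.
    (Real browsers additionally apply preflight and credentials rules.)"""
    if allow_origin_header is None:
        return False  # header absent: response stays opaque to the page
    if allow_origin_header == "*":
        return True   # wildcard permits any origin
    return allow_origin_header == request_origin  # otherwise exact match

# The GET/PUT buttons above succeed only in the first two situations:
print(cors_allowed("https://localhost:5001", "*"))                       # True
print(cors_allowed("https://localhost:5001", "https://localhost:5001"))  # True
print(cors_allowed("https://localhost:5001", None))                      # False
```

The `[DisableCors]` endpoint corresponds to the `None` case: no header is emitted, so the browser typically blocks the page from reading the response even though the server processed the request.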
Gun crime surge in Victoria after state records 15 shootings in past week

Details of another five shootings, all believed to be connected, were released by police on Friday following the arrest and charging of seven men. The connected shootings were in Broadmeadows, Dallas and Thomastown last weekend. The incidents have brought the tally of shootings reported in Melbourne and Geelong since March 4 to 15. There were also two shootings - one in Geelong and another in Frankston - on Thursday night. Police believe most shootings in the past week are linked to feuding gangs, organised crime and methamphetamines. Deputy Commissioner Shane Patton told reporters on Thursday police were working with MPs on possible reforms to the penalties for gun offences in a bid to prevent weapons use and disrupt the supply chain.
In summer, Portland steadily expands its position as one of the food truck and food cart capitals of the world. The city's mobile cuisine extends from Belizean chicken and rice to Korean tacos to Maine lobster rolls, and wheels are becoming as central to our local menu as Cuisinarts. Then there's the St. Vincent de Paul Blue Bird. Monday noon, an old school bus painted the color of a welcoming sky was parked by the Town Center Station Apartments, across from Clackamas Town Center. The bus, with a compressed kitchen that might not qualify for a food truck reality TV series, served up waffles, oatmeal and fresh blueberries to a couple hundred kids and mothers, including a lot of folks who, when school is out, are what you might call available for lunch. "A lot of folks who don't get a summer lunch are getting one today," explained Charles Ashcraft, watching over two huge pots containing a classroom's worth of oatmeal. A few yards from the Blue Bird, a summer food line reached from the apartment complex's courtyard out to the street, and had been forming for more than an hour before distribution opened. The tables were stocked with produce – potatoes, squash, oranges – from the Oregon Food Bank and 1,800 pounds of granola from Bob's Red Mill. They were staffed by volunteers from GracePointe Church in Milwaukie and by some past and present clients now looking to help out. Sylvia Herrera, who also volunteers to help other Latino parents at Milwaukie High School, got some translation help from her daughter Alexandra, an entering freshman planning to be a flight attendant. "My mom says it helps deal with the hunger in children. It's more energy for them," voices Alexandra. "Especially in summer, they need a little help with the food." At a time when significant numbers of American children get half their calories in school – lunch, breakfast, snack – the summer school shutdown sharply interrupts nutrition for millions. 
And since families have single food budgets, the student cutoff ripples through the plates of parents and younger siblings. According to a study cited by Sen. Patty Murray, D-Wash., hunger rises 34.2 percent during summer for families with schoolchildren, even for family members a long way from math tests. Federal summer food programs, requiring a location and a way for kids to get there, reach only a small fraction of the millions of kids who normally get free or reduced-price school lunches. Increasingly, instead of waiting for kids to find their way to food in summer, hunger workers are exploring ways to bring the food to the kids. That's what brings the St. Vincent de Paul Blue Bird food bus, and the tables covered with cabbage and potatoes, to Town Center Station Apartments, home to a significant number of kids from the nearby schools. The program's been going on for five weeks this summer, with another week to go. According to Debra Mason, nutrition program director for the Clackamas Service Center, attendance has been rising by about 15 or 20 a week. "It gets pretty exciting," she says, "that we're actually feeding kids." Gesturing toward the lines moving past the tables, mothers and elderly clients stacking bags of onions and bread on strollers, sit-down walkers and motorized carts, Mason points out: "You see what people are carrying. Not a lot of these people want to carry that on three buses to get home." She hopes to expand the idea to other locations next summer, and it could also expand beyond Clackamas. Other programs are considering bringing the idea to apartment complexes in Portland. Monday, the Blue Bird, helped by a $35,000 grant from Walmart, is doing a thriving business at Town Center Station. Like other, more fashionable Portland food trucks, it changes its location daily, spending most of its time out in more rural parts of the metro area. "As you get farther out," explains Paul Kresek of St. 
Vincent de Paul, "the need diminishes slightly, but the resources drop off sharply." Inside the bus Monday, a mother reaches out her hands to her 4- and 6-year-old daughters, saying a blessing for them all before turning to the waffles and oatmeal. Another mother, sitting with her 10-year-old son who has a great deal to say about animals he's admired this summer, talks about the benefits of the program. "It's wonderful," she says, noting that it's her first visit. "Sometimes you have a lot of bills you have to spend. Whatever is left, you have to make it." And with a little help, make it through the summer. • David Sarasohn's column appears on Wednesdays and Sundays. He blogs at davidsarasohn.blogspot.com and can be reached at davidsarasohn50@gmail.com.
Side population cells isolated from human osteosarcoma are enriched with tumor-initiating cells. It has been shown that "side population (SP)" cells that exclude Hoechst 33342 dye are enriched with cancer stem cells in several tumors. In the present study we aimed to isolate and characterize SP cells from human primary osteosarcoma. Side population cells were detected in osteosarcoma samples. In vitro, SP cells regenerated both SP and non-SP fractions, and the clonogenicity of SP cells was higher than that of non-SP cells, as expected of stem cells. In vivo, SP cells exhibited heightened tumorigenicity, and only the SP fraction had the capacity to self-renew both in vitro and in vivo. Furthermore, SP cells exhibited increased multidrug resistance, and the RNA expression of ATP-binding cassette transporters was increased in the SP group. In addition, the "stemness" genes Oct-4 and Nanog were also upregulated in the SP group. However, the expression of other putative stem cell markers (CD44, CD117 and CD133) did not differ significantly between SP and non-SP cells for any individual marker. These findings suggest that SP cells derived from osteosarcoma are enriched with tumorigenic cells with stem-like properties and might be an ideal target for clinical therapy.
Maintenance of mydriasis with epinephrine during cataract surgery. The pupillary response to various doses of intraocular epinephrine (0.1 ml of 1:16,000, 1:32,000, 1:64,000, 1:80,000, or 1:96,000) was studied in 55 consecutive patients during extracapsular cataract surgery. The 1:16,000 epinephrine concentration provided a mean 0.74 mm increase in pupil diameter (range 0.0 to 1.7 mm) when administered to re-dilate the pupil after nucleus expression. The mean increase in pupil area with 1:16,000 epinephrine was 27% which greatly facilitated removal of lens cortex in most cases. However, 25% of all pupils failed to dilate with epinephrine 1:16,000. The other concentrations provided essentially the same mydriasis as the 1:16,000 concentration. Pupils smaller than 6 mm dilated more easily than pupils larger than 6 mm. Iris color, age, or sex had no significant effect on the mydriatic response. It is concluded that an extremely dilute concentration of epinephrine (i.e., 1:96,000 or less) may be effective in maintaining mydriasis during cataract surgery.
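The reported mean area increase is consistent with the mean diameter increase, since pupil area scales with the square of the diameter. A quick check in Python (the 6 mm baseline diameter is our illustrative assumption; the abstract reports only the mean 0.74 mm diameter gain and the 27% mean area increase):

```python
def pupil_area_increase_pct(d_before_mm: float, d_gain_mm: float) -> float:
    """Percent increase in pupil area for a given diameter gain.
    A circular pupil has area pi*(d/2)**2, so area scales as d**2."""
    d_after_mm = d_before_mm + d_gain_mm
    return (d_after_mm ** 2 / d_before_mm ** 2 - 1.0) * 100.0

# A mean 0.74 mm gain on an assumed 6 mm pupil gives ~26%,
# in line with the reported mean 27% area increase.
print(round(pupil_area_increase_pct(6.0, 0.74), 1))  # 26.2
```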
Introduction {#emi12980-sec-0001} ============ The ability of a microbe to infect and cause harm (virulence) correlates with its multiplication rate within the host, itself a direct determinant of between‐host transmission success (Read, [1994](#emi12980-bib-0054){ref-type="ref"}; Lipsitch and Moxon, [1997](#emi12980-bib-0039){ref-type="ref"}). High virulence, however, may immobilize or cause the death of the host, impairing transmission to new hosts and hence pathogen fitness. Virulence has thus been theorized to hinge on a trade‐off balance with transmissibility and to be potentially costly to the pathogen (Anderson and May, [1981](#emi12980-bib-0002){ref-type="ref"}; Antia *et al*., [1994](#emi12980-bib-0003){ref-type="ref"}; Bull, [1994](#emi12980-bib-0013){ref-type="ref"}; Alizon *et al*., [2009](#emi12980-bib-0001){ref-type="ref"}). This relationship is easily intuited for microparasites depending on a live host for transmission (i.e. obligate pathogens) and is at the core of virulence theory (Bull and Lauring, [2014](#emi12980-bib-0014){ref-type="ref"}). However, whether microbial virulence also affects the performance of indirectly transmitted pathogens in the environment remains to be clarified and is largely neglected by evolutionary models. Virulence determinants have specifically evolved to confer an advantage within the host, and the gratuitous expression of microbial traits in a situation in which they are not required is known to carry fitness penalties (Nguyen *et al*., [1989](#emi12980-bib-0050){ref-type="ref"}; Eames and Kortemme, [2012](#emi12980-bib-0022){ref-type="ref"}). Despite the obvious potential significance for pathogen evolution, experimental information about the costs associated with unneeded virulence traits in a non‐host system is essentially lacking. 
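The trade-off reasoning sketched above is classically formalized, for an obligate, directly transmitted pathogen, through the basic reproduction number (following Anderson and May's framework; the functional form below is the standard textbook version, not an equation from this paper):

$$R_0(\alpha)=\frac{\beta(\alpha)\,S}{\mu+\alpha+\gamma},$$

where $\alpha$ is the parasite-induced host mortality (virulence), $\beta(\alpha)$ the transmission rate, $S$ the density of susceptible hosts, $\mu$ the background host mortality and $\gamma$ the recovery rate. If $\beta$ increases with $\alpha$ but saturates, $R_0$ is maximized at an intermediate virulence $\alpha^{*}$ satisfying $\beta'(\alpha^{*})=\beta(\alpha^{*})/(\mu+\alpha^{*}+\gamma)$, which is the trade-off balance referred to in the text.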
A number of studies with phytopathogens have examined the fitness costs of 'avirulence' gene mutations to virulence in susceptible plant populations without the matching resistance (R) gene (where the pathogen\'s avirulence/virulence gene is irrelevant) (Leach *et al*., [2001](#emi12980-bib-0037){ref-type="ref"}; Bahri *et al*., [2009](#emi12980-bib-0006){ref-type="ref"}; Huang *et al*., [2010](#emi12980-bib-0030){ref-type="ref"}; Montarry *et al*., [2010](#emi12980-bib-0048){ref-type="ref"}). These studies have generally measured the cost of virulence via the effects on within‐host fitness attributes (e.g. *in planta* multiplication, amount of disease symptoms or pathogen released from leaves) but not on saprophytic growth and survival (Sacristan and Garcia‐Arenal, [2008](#emi12980-bib-0059){ref-type="ref"}). In animal pathogens, a recent report on *Salmonella* addressed the cost of virulence factors in *in vitro* culture (Sturm *et al*., [2011](#emi12980-bib-0067){ref-type="ref"}). In this study, Sturm and colleagues showed that expression of the type III secretion system (TTSS)‐1 was associated with significant growth retardation. Gene deletion analysis suggested that the growth defect was at least in part attributable to TTSS‐1 virulence factor expression, although the possibility that it was also due to global, pleiotropic regulatory effects was not excluded (Sturm *et al*., [2011](#emi12980-bib-0067){ref-type="ref"}). *Listeria monocytogenes* is a prototypic facultative pathogen that can live both as a soil saprotroph or an intracellular parasite of animals and people (Vazquez‐Boland *et al*., [2001b](#emi12980-bib-0071){ref-type="ref"}; Freitag *et al*., [2009](#emi12980-bib-0024){ref-type="ref"}). 
Listerial virulence is conferred by a set of proteins that promote host cell invasion (internalins InlA and InlB), phagocytic vacuole escape (pore‐forming toxin Hly, phospholipases PlcA and PlcB, metalloprotease Mpl), cytosolic replication (sugar phosphate transporter Hpt) and actin‐based cell‐to‐cell spread (surface protein ActA, internalin InlC) (Cossart, [2011](#emi12980-bib-0018){ref-type="ref"}). The genes encoding these nine virulence factors are coordinately regulated by the transcriptional activator PrfA (Mengaud *et al*., [1991](#emi12980-bib-0043){ref-type="ref"}; Chakraborty *et al*., [1992](#emi12980-bib-0015){ref-type="ref"}) (Fig. [1](#emi12980-fig-0001){ref-type="fig"}). PrfA‐regulated genes are normally very weakly expressed outside the host but strongly induced during intracellular infection (Moors *et al*., [1999](#emi12980-bib-0049){ref-type="ref"}; Shetron‐Rama *et al*., [2002](#emi12980-bib-0064){ref-type="ref"}; Chatterjee *et al*., [2006](#emi12980-bib-0016){ref-type="ref"}; Joseph *et al*., [2006](#emi12980-bib-0032){ref-type="ref"}; Toledo‐Arana *et al*., [2009](#emi12980-bib-0069){ref-type="ref"}). This activation is thought to require PrfA to allosterically switch from its native, weakly active ('OFF') conformation to a highly active ('ON') state (Scortti *et al*., [2007](#emi12980-bib-0062){ref-type="ref"}; de las Heras *et al*., [2011](#emi12980-bib-0029){ref-type="ref"}) and is essential for *Listeria* virulence (Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}). Single amino acid substitutions that lock PrfA in an 'always‐ON' (PrfA\*) state have been identified (Ripio *et al*., [1997](#emi12980-bib-0056){ref-type="ref"}; Vega *et al*., [2004](#emi12980-bib-0072){ref-type="ref"}; Wong and Freitag, [2004](#emi12980-bib-9010){ref-type="ref"}). 
*Listeria monocytogenes* mutants carrying one such PrfA\* substitution, G145S, constitutively express the PrfA‐regulated genes *in vitro* to levels similar to the wild type during intracellular infection (Ripio *et al*., [1997](#emi12980-bib-0056){ref-type="ref"}; Vega *et al*., [2004](#emi12980-bib-0072){ref-type="ref"}; Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}). *prfA*\*^G145S^ mutants therefore provide a unique tool to investigate the cost of virulence traits in non‐host conditions. ![Schematic of *L* *. monocytogenes*  PrfA virulence regulon and ON--OFF PrfA switching. Dotted lines indicate relevant transcriptional units.](EMI-17-4566-g001){#emi12980-fig-0001} Taking advantage of the properties conferred by the *prfA*\* allele, we show that virulence gene activation imposes a significant burden on *L. monocytogenes* outside the host. We also show that this burden limits the survival and competitive ability of *L. monocytogenes* in soil. Our data provide the first formal demonstration that the virulence traits that make a microbe pathogenic entail a significant fitness cost. We also experimentally substantiate that a primary key role of virulence gene regulation systems in facultative pathogens is to neutralize the cost of virulence outside the host, thereby maximizing between‐host pathogen fitness in the environmental reservoir. Results {#emi12980-sec-0002} ======= When first identified in our laboratory (Ripio *et al*., [1996](#emi12980-bib-0055){ref-type="ref"}; [1997](#emi12980-bib-0056){ref-type="ref"}), we observed that *prfA\** mutants exhibited impaired growth in broth medium, suggesting a fitness defect (unpubl. data). The *prfA*\*‐associated growth reduction was also noted by others, although the effect was relatively minor compared with wild‐type *prfA* (*prfA* ^WT^) and was not statistically confirmed (Marr *et al*., [2006](#emi12980-bib-0042){ref-type="ref"}). More recently, *L. 
monocytogenes* bacteria carrying *prfA*\* alleles were found to have increased sensitivity to stress and a competitive disadvantage upon repeated passage in broth culture (Bruno and Freitag, [2010](#emi12980-bib-0012){ref-type="ref"}), although no growth defect in rich medium was directly observed in monoculture (Port and Freitag, [2007](#emi12980-bib-0053){ref-type="ref"}; Bruno and Freitag, [2010](#emi12980-bib-0012){ref-type="ref"}). The interpretation of these reports was complicated by possible regulatory interference of PrfA ON with listerial carbon nutrition/metabolism (Marr *et al*., [2006](#emi12980-bib-0042){ref-type="ref"}; Bruno and Freitag, [2010](#emi12980-bib-0012){ref-type="ref"}). Moreover, effects on fitness could have been obscured in these studies by the use of strains *trans*‐complemented with the *prfA* gene on a multicopy plasmid (Marr *et al*., [2006](#emi12980-bib-0042){ref-type="ref"}), or carrying enzymatic and antibiotic resistance cassettes under the control of a PrfA‐dependent promoter (Port and Freitag, [2007](#emi12980-bib-0053){ref-type="ref"}; Bruno and Freitag, [2010](#emi12980-bib-0012){ref-type="ref"}). Cost of PrfA activation *in vitro* {#emi12980-sec-0003} ---------------------------------- To avoid possible confounding effects due to the potential burden introduced by multicopy plasmids or reporter genes, we investigated the fitness consequences of PrfA regulon activation using a naturally occurring *prfA*\*^G145S^ strain (P14A) (Ripio *et al*., [1997](#emi12980-bib-0056){ref-type="ref"}) and an isogenic, unmarked *prfA* ^WT^ allelic exchange revertant thereof (P14^Rev^). The latter was obtained by double homologous recombination using fosfomycin to counterselect the original *prfA*\* genotype (see [*Experimental procedures*](#emi12980-sec-0009){ref-type="sec"}). 
This selection strategy is based on the ability of the listerial PrfA‐dependent sugar phosphate permease Hpt to confer susceptibility to fosfomycin when the PrfA system is activated (Scortti *et al*., [2006](#emi12980-bib-0061){ref-type="ref"}). Bacterial fitness was measured by determining the exponential growth rate (μ) and maximum growth yield (A) in brain--heart infusion (BHI) broth, a rich culture medium in which *Listeria* growth is optimal and wild‐type PrfA‐dependent gene expression is maximally downregulated at 37°C (Ripio *et al*., [1996](#emi12980-bib-0055){ref-type="ref"}; [1997](#emi12980-bib-0056){ref-type="ref"}; Shetron‐Rama *et al*., [2003](#emi12980-bib-0065){ref-type="ref"}). As controls, an isogenic in‐frame *prfA* deletant (Δ*prfA*) and the parent *prfA* ^WT^ strain of P14A (isolate P14) were also tested. P14A exhibited a clear growth defect in BHI, as evidenced by its significantly lower μ and A values (F~3,10~ = 8.07 *P* = .005 and 54.98 *P* \< .0001 respectively) (Fig. [2](#emi12980-fig-0002){ref-type="fig"}). Replacement of P14A\'s *prfA*\* allele by *prfA* ^WT^ (P14^Rev^) restored growth to wild‐type (P14) levels. On the other hand, the growth dynamics of P14 and P14^Rev^, both expressing a PrfA^WT^ protein, was identical to that of the Δ*prfA* strain lacking PrfA (Fig. [2](#emi12980-fig-0002){ref-type="fig"}). These data indicate (i) that the constitutively active PrfA\*^G145S^ protein, driving high ('*in vivo*' equivalent) levels of PrfA‐dependent gene expression in *in vitro* conditions (Ripio *et al*., [1997](#emi12980-bib-0056){ref-type="ref"}; Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}), significantly impairs *L. monocytogenes* fitness in rich medium; and (ii) that PrfA^WT^, associated with negligible levels of PrfA‐dependent gene expression *in vitro* (Ripio *et al*., [1997](#emi12980-bib-0056){ref-type="ref"}; Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}), has a neutral effect on *L. 
monocytogenes* performance. ![Growth in BHI of *L* *. monocytogenes*  P14A (*prf* *A*\*), isogenic P14^Rev^ (*prf* *A* ^WT^ allele replacement revertant) and Δ*prf* *A* derivatives of P14A, and the wild‐type parent strain P14. Mean ± SEM of four experiments.\ A. Growth curves.\ B. Growth rate (μ) and maximum growth (A) expressed in OD~600~ units. P14^Rev^ was used as the reference in post‐hoc multiple comparisons. Numbers indicate *P* values; ns, not significant.](EMI-17-4566-g002){#emi12980-fig-0002} PrfA\* does not impair *L* *. monocytogenes* fitness in infected host cells {#emi12980-sec-0004} --------------------------------------------------------------------------- Since PrfA‐regulated virulence determinants are unlikely to be necessary for extracellular growth *in vitro*, the fitness disadvantage observed with the *prfA*\* allele in BHI could reflect the burden typically associated with expressing dispensable gene products (Dong *et al*., [1995](#emi12980-bib-0021){ref-type="ref"}; Stoebel *et al*., [2008](#emi12980-bib-0066){ref-type="ref"}; Shachrai *et al*., [2010](#emi12980-bib-0063){ref-type="ref"}). If this explanation is correct, then no significant growth impairment is expected to occur in an infection setting, where bacterial fitness depends on the expression of virulence genes. To confirm this, we compared the behaviour of the *prfA*\* and *prfA* ^WT^ bacteria in intracellular proliferation assays in eukaryotic cell monolayers. P14A did not differ from P14^Rev^ (and P14) in intracellular growth in HeLa cells (F~2,3~ = 0.04 *P* = .9575) (Fig. [3](#emi12980-fig-0003){ref-type="fig"}). This result is in agreement with previous data showing that *prfA*\* and *prfA* ^WT^ *L. 
monocytogenes* have similar or comparable virulence *in vivo* in mice and in infected cells (Ripio *et al*., [1996](#emi12980-bib-0055){ref-type="ref"}; Shetron‐Rama *et al*., [2003](#emi12980-bib-0065){ref-type="ref"}; Bruno and Freitag, [2010](#emi12980-bib-0012){ref-type="ref"}; Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}). Thus, despite the significant growth defect observed *in vitro* in rich medium, the PrfA\* protein did not seem to impair *L. monocytogenes* fitness *in vivo* in a host system. This is consistent with the notion that PrfA\* is locked in the ON state presumably adopted by PrfA^WT^ *in vivo* during infection, resulting in similar levels of virulence gene expression for both *prfA*\* and *prfA* ^WT^ bacteria within host cells (de las Heras *et al*., [2011](#emi12980-bib-0029){ref-type="ref"}; Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}). ![Intracellular proliferation of *L* *. monocytogenes prf* *A*\* (strain P14A) and *prf* *A* ^WT^ (P14A isogenic wild‐type allele‐replacement revertant P14^Rev^ and parent strain P14) in human HeLa cells. Upper panel, intracellular colony forming units (cfu); lower panel, data expressed as normalized intracellular growth coefficient (IGC, see *Experimental procedures*). Mean ± SEM of three experiments.](EMI-17-4566-g003){#emi12980-fig-0003} The fitness cost is due to PrfA regulon components {#emi12980-sec-0005} -------------------------------------------------- The growth reduction associated with the *prfA*\* allele in nutrient‐rich BHI could be due to the cost of expressing unneeded virulence products, or alternatively to PrfA ON interfering with some listerial housekeeping function important for listerial growth, as previously suggested (Marr *et al*., [2006](#emi12980-bib-0042){ref-type="ref"}). To address this question, we constructed a P14A mutant lacking the entire PrfA regulon (ΔREG), i.e. 
*Listeria* pathogenicity island 1 encompassing the *prfA*, *plcA*, *hly*, *mpl*, *actA* and *plcB* genes (LIPI‐1), the internalin loci *inlAB* and *inlC*, and the organophosphate transporter gene *hpt* (also known as *uhpT*) (Fig. [1](#emi12980-fig-0001){ref-type="fig"}). ΔREG was complemented with either *prfA* ^WT^ (from P14) or *prfA*\*^G145S^ (from P14A) inserted in monocopy in a permissive site of the listerial chromosome using an integrative vector (pPL2) (Lauer *et al*., [2002](#emi12980-bib-0036){ref-type="ref"}; Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}). P14A Δ*prfA*, which possesses the entire PrfA regulon except the deleted *prfA* gene, was also complemented with the same *prfA* constructs as a control. Western blot analyses confirmed that the PrfA protein was correctly expressed in *prfA*‐complemented ΔREG and Δ*prfA* (Fig. [4](#emi12980-fig-0004){ref-type="fig"}A). They also confirmed that the *prfA*\* and *prfA* ^WT^ constructs induced, respectively, the expected high and low/undetectable expression levels of PrfA‐regulated products in BHI (Fig. [4](#emi12980-fig-0004){ref-type="fig"}B). ![Western immunoblot analysis.\ A. Detection of PrfA in cell extracts of Δ*prf* *A* and ΔREG bacteria complemented with *prf* *A* ^WT^ or *prf* *A*\* alleles. Protein loaded: 10 μg.\ B. Detection of selected PrfA‐dependent virulence factors in the cell extracts or culture supernatants of Δ*prf* *A* complemented with *prf* *A* ^WT^ or *prf* *A*\* alleles. The two arrows in PlcB indicate the unprocessed and mature form of the enzyme. Protein loaded per lane: 20 μg, 5 μg for Hly.](EMI-17-4566-g004){#emi12980-fig-0004} Complementation of Δ*prfA* with the *prfA*\* allele, but not *prfA* ^WT^ or empty vector, caused growth inhibition, with significant reduction in both μ and A (F~2,8~ = 8.17 *P* = .0117 and 34.04 *P* \< .0001 respectively) (Fig. [5](#emi12980-fig-0005){ref-type="fig"}A). 
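The μ and A values compared throughout were obtained from OD~600~ growth curves. As an illustration of how such parameters can be extracted (a minimal sketch; the study's own fitting procedure is given in its Experimental procedures and may differ, so a sliding-window log-linear fit for μ is assumed, with A taken as the peak OD):

```python
import math

def growth_parameters(times_h, od600, window=3):
    """Estimate the exponential growth rate mu (h^-1) and maximum yield A
    (OD600 units) from a growth curve: mu is the steepest least-squares
    slope of ln(OD) over a sliding window of readings; A is the peak OD."""
    log_od = [math.log(v) for v in od600]
    mu = 0.0
    for i in range(len(times_h) - window + 1):
        t = times_h[i:i + window]
        y = log_od[i:i + window]
        n = len(t)
        t_bar, y_bar = sum(t) / n, sum(y) / n
        slope = (sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y))
                 / sum((ti - t_bar) ** 2 for ti in t))
        mu = max(mu, slope)
    return mu, max(od600)

# Synthetic hourly readings: exponential growth at 0.8 h^-1 from OD 0.01,
# capped at a plateau of 1.2 (purely illustrative numbers).
times = list(range(9))
ods = [min(1.2, 0.01 * math.exp(0.8 * t)) for t in times]
mu, A = growth_parameters(times, ods)
```

On this synthetic curve `mu` recovers the simulated rate of 0.8 h^-1 and `A` the plateau of 1.2.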
This mirrored the previous data with the isogenic strains carrying the *prfA* gene in its native chromosomal location, confirming that the growth reduction was solely due to the activity of PrfA\*. In contrast, no significant differences were observed between the complemented ΔREG strains (μ *P* = .1397, A *P* = .9142) (Fig. [5](#emi12980-fig-0005){ref-type="fig"}B), or between these and Δ*prfA* complemented with *prfA* ^WT^ or empty vector (μ *P* = .4104, A *P* = .1719). These data show that the growth reduction caused by PrfA ON requires the presence of the PrfA‐dependent virulence genes on the listerial chromosome. ![Growth in BHI of (A) Δ*prf* *A* and (B) ΔREG, each complemented with *prf* *A* ^WT^, *prf* *A*\* or empty vector. Below, corresponding μ (growth rate) and A (maximum growth) values expressed in OD~600~ units; *prf* *A*\*‐complemented bacteria were used as the reference in post‐hoc multiple comparisons. Mean ± SEM of at least three experiments. Numbers indicate *P* values; ns, not significant. The Δ*prf* *A* and ΔREG growth curves, shown separately for clarity, were determined in the same set of experiments.](EMI-17-4566-g005){#emi12980-fig-0005} Partial PrfA regulon mutants in P14A were analysed to determine the contribution of specific PrfA‐regulated loci to the fitness loss. Deletion of the internalin genes *inlAB* and *inlC* or the *hpt* monocistron did not relieve the growth defect caused by PrfA\* ([Fig. S1](http://onlinelibrary.wiley.com/doi/10.1111/1462-2920.12980/suppinfo)). In contrast, deletion of LIPI‐1 rescued the growth defect in the presence of *prfA*\* (Fig. [6](#emi12980-fig-0006){ref-type="fig"}). Some recovery of the wild‐type phenotype was observed for single *hly* or *actA* deletion mutants within LIPI‐1, although the effect was not statistically significant ([Figs S2 and S3](http://onlinelibrary.wiley.com/doi/10.1111/1462-2920.12980/suppinfo)). 
Thus, the PrfA\*‐associated growth impairment is mainly attributable to LIPI‐1 and depends on the expression of several PrfA‐regulated genes. Together, our results are consistent with the growth reduction caused by PrfA ON being due to the burden associated with the expression of PrfA regulon virulence determinants. ![Growth in BHI of ΔLIPI‐1 complemented with *prf* *A* ^WT^, *prf* *A*\* or empty vector. Δ*prf* *A* bacteria complemented with *prf* *A* ^WT^, *prf* *A*\* or empty vector were used as a control.\ A. Growth curves.\ B. Corresponding μ (growth rate) and A (maximum growth) values expressed in OD~600~ units. Mean ± SEM of three experiments. Δ*prf* *A* complemented with *prf* *A*\* used as reference in post‐hoc multiple comparison. Numbers indicate *P* values; ns, not significant.](EMI-17-4566-g006){#emi12980-fig-0006} PrfA switch‐off is required for optimal fitness in soil {#emi12980-sec-0006} ------------------------------------------------------- We next sought to investigate the effect of PrfA activation on fitness in a non‐host model more closely approximating the conditions encountered by *L. monocytogenes* in nature. Soil rich in decaying plant matter is considered to be the main *Listeria* environmental reservoir (Weis and Seeliger, [1975](#emi12980-bib-0075){ref-type="ref"}; Vazquez‐Boland *et al*., [2001b](#emi12980-bib-0071){ref-type="ref"}; Freitag *et al*., [2009](#emi12980-bib-0024){ref-type="ref"}; Vivant *et al*., [2013](#emi12980-bib-0073){ref-type="ref"}) and was chosen for these experiments. Sterile topsoil of neutral pH was used to ensure optimal *L. monocytogenes* growth/survival (Botzler *et al*., [1974](#emi12980-bib-0010){ref-type="ref"}; McLaughlin *et al*., [2011](#emi12980-bib-0041){ref-type="ref"}; Locatelli *et al*., [2013](#emi12980-bib-0040){ref-type="ref"}; Vivant *et al*., [2013](#emi12980-bib-0073){ref-type="ref"}). 
P14A (*prfA*\*) and its isogenic P14^Rev^ (*prfA* ^WT^) and Δ*prfA* derivatives were inoculated in axenic microcosms at a dose of ≈ 6 × 10^6^ cfu g^−1^, and viable bacterial numbers in soil were regularly monitored for 17 days by plate counting. Although the pPL2 vector had previously demonstrated stable chromosomal integration in a variety of conditions (*in vitro* in culture media or *in vivo* in infected cells and mice) (Lauer *et al*., [2002](#emi12980-bib-0036){ref-type="ref"}; Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}; this study), the *prfA* ^WT^ and *prfA*\* pPL2 constructs (and control empty vector) were rapidly lost in soil by the complemented Δ*prfA* strain (within the first 48 h) and could not be used. P14A again showed significantly different behaviour (genotype × time points F~22,72~ = 5.02 *P* \< .0001; two‐way analysis of variance (ANOVA) with Tukey\'s post‐hoc multiple comparisons): after an initial population increase for the three strains, P14A counts steadily dropped from day 3, while P14^Rev^ and Δ*prfA* continued to grow until day 5, followed by stabilization until declining after day 11 (Fig. [7](#emi12980-fig-0007){ref-type="fig"}). Thus, consistent with our observations in rich medium, *prfA*\* bacteria also exhibited diminished fitness in soil compared with *prfA* ^WT^ and Δ*prfA* bacteria. ![Monoculture experiments in soil. Microcosms were seeded with ≈ 6 × 10^6^ cfu g^−1^ of *L* *. monocytogenes prf* *A*\* (P14A), *prf* *A* ^WT^ (P14^Rev^) or Δ*prf* *A*, and the bacterial population dynamics for each strain regularly monitored in soil by plate counting during static incubation at room temperature. See [*Experimental procedures*](#emi12980-sec-0009){ref-type="sec"} for details. Results expressed as mean cfu g^−1^ ± SEM of three replicates. The *prf* *A*\* and *prf* *A* ^WT^ alleles remained stable throughout the experiments (see [Fig.
S5](http://onlinelibrary.wiley.com/doi/10.1111/1462-2920.12980/suppinfo)).](EMI-17-4566-g007){#emi12980-fig-0007} Competition experiments {#emi12980-sec-0007} ----------------------- To internally control for possible inter‐sample variation in growth due to physicochemical/nutritional microenvironment heterogeneity in soil (Vivant *et al*., [2013](#emi12980-bib-0073){ref-type="ref"}), the strains were tested in mixed culture in the same soil microcosms. This approach also permits direct determination of the competitive ability and an estimate of the strength of selection acting against the less fit genotype (Lenski, [1992](#emi12980-bib-0038){ref-type="ref"}). Either *prfA*\* or *prfA* ^WT^ bacteria were co‐inoculated in a ≈ 1:1 ratio with Δ*prfA* used as a common reference. This allowed confirmation of the relative frequencies of the competing genotypes by polymerase chain reaction (PCR) screening of the specific deletion in Δ*prfA* (see [*Experimental procedures*](#emi12980-sec-0009){ref-type="sec"}). *prfA*\* bacteria were clearly outcompeted by Δ*prfA* after the first 24 h \[competitive index (CI) \< 1\] until their total disappearance by day 9 (Fig. [8](#emi12980-fig-0008){ref-type="fig"}A). In contrast, no differences in the relative fitness of *prfA* ^WT^ and Δ*prfA* genotypes (CI not significantly different from 1) were observed throughout the experiment (Fig. [8](#emi12980-fig-0008){ref-type="fig"}B). These data indicate that (i) the burden imposed by the activation of the PrfA virulence regulon compromises *L. monocytogenes* survival in soil, and (ii) the virulence‐associated fitness cost in soil is effectively compensated by the ON--OFF switchable PrfA regulator. ![Competition experiments in soil. (A) *prf* *A*\* (P14A) versus Δ*prf* *A*. (B) *prf* *A* ^WT^ (P14^Rev^) versus Δ*prf* *A*. Microcosms were inoculated with ≈ 10^7^ cfu g^−1^ of 1:1 mixes of the indicated *L. monocytogenes* strains. 
Left panels, bar charts: bar height indicates log total cfu g^−1^; black and grey areas within bars indicate the proportion of competing bacteria. Right panels, competitive index (CI). *P* values for statistically significant differences from the reference value 1 are indicated (see [*Experimental procedures*](#emi12980-sec-0009){ref-type="sec"}). Mean ± SEM of three replicates.](EMI-17-4566-g008){#emi12980-fig-0008} Discussion {#emi12980-sec-0008} ========== Microbial growth is a correlate of the fitness status of the prokaryotic cell and responds to the principle of cost--benefit optimality. To ensure maximal fitness, microbial cells need to optimize the allocation of limited resources to competing traits (Dekel and Alon, [2005](#emi12980-bib-0019){ref-type="ref"}; Molenaar *et al*., [2009](#emi12980-bib-0047){ref-type="ref"}; Berkhout *et al*., [2013](#emi12980-bib-0008){ref-type="ref"}). This is often achieved by coupling gene expression to beneficial processes under specific conditions, as classically illustrated by studies with the *lac* operon or antibiotic resistance determinants (Koch, [1983](#emi12980-bib-0035){ref-type="ref"}; Nguyen *et al*., [1989](#emi12980-bib-0050){ref-type="ref"}; Dekel and Alon, [2005](#emi12980-bib-0019){ref-type="ref"}; Stoebel *et al*., [2008](#emi12980-bib-0066){ref-type="ref"}; Eames and Kortemme, [2012](#emi12980-bib-0022){ref-type="ref"}). Here we analysed the fitness consequences of expressing virulence traits under conditions in which they are not directly beneficial, i.e. during saprophytic growth outside the host. Notwithstanding its undeniable potential significance in pathogen evolution and transmission dynamics, this question had been insufficiently investigated. Using *L. 
monocytogenes* and a mutant form of its master virulence regulator, PrfA\*^G145S^ (Ripio *et al*., [1997](#emi12980-bib-0056){ref-type="ref"}), which causes virulence genes to be constitutively expressed *in vitro* at the same high levels seen *in vivo* during infection (de las Heras *et al*., [2011](#emi12980-bib-0029){ref-type="ref"}; Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}), we demonstrate that virulence traits impose a significant burden on bacterial fitness. The fitness disadvantage was evident in extracellular conditions but not in infected cells, where the virulence products are indispensable, reflecting that, during infection, the burden associated with virulence factor synthesis is compensated by the beneficial effects on within‐host fitness. Using a soil model, we further show, for the first time, that the virulence‐associated fitness cost translates into significantly impaired bacterial survival in an environmental milieu relevant for pathogen transmission. PrfA\* had no effect on growth in the absence of the target PrfA regulon genes, indicating that the impaired performance was clearly linked to the expression of the virulence factors and not due to PrfA ON disturbing an unrelated housekeeping or metabolic pathway(s). A possible explanation is that some PrfA regulon product(s) might exert a direct inhibitory effect on *L. monocytogenes* via unknown mechanisms. Alternatively, and more plausibly, the PrfA\*‐associated growth deficiency may be the consequence of the gratuitous expression of unneeded PrfA regulon products. Indeed, growth reduction is the typical penalty observed when wasteful proteins are expressed by bacterial cells, known as the protein cost (Dong *et al*., [1995](#emi12980-bib-0021){ref-type="ref"}; Dekel and Alon, [2005](#emi12980-bib-0019){ref-type="ref"}; Stoebel *et al*., [2008](#emi12980-bib-0066){ref-type="ref"}; Shachrai *et al*., [2010](#emi12980-bib-0063){ref-type="ref"}). 
The growth deficiency was readily apparent in monoculture under resource‐replete conditions, indicating that the impact of PrfA regulon activation on *Listeria* fitness is substantial. LIPI‐1, which contains six of the nine PrfA‐regulated genes (Fig. [1](#emi12980-fig-0001){ref-type="fig"}), appeared to account for the entire burden. Growth rate (μ) and growth yield (A) were both impaired, as would be expected if rate‐limiting bacterial biosynthetic resources are diverted to virulence factor expression until a critical nutrient(s) is exhausted from the medium. Protein cost is a major driving force in the shaping of regulatory systems (Dekel and Alon, [2005](#emi12980-bib-0019){ref-type="ref"}; Babu and Aravind, [2006](#emi12980-bib-0005){ref-type="ref"}; Kalisky *et al*., [2007](#emi12980-bib-0034){ref-type="ref"}; Stoebel *et al*., [2008](#emi12980-bib-0066){ref-type="ref"}; Gao and Stock, [2013](#emi12980-bib-0026){ref-type="ref"}). The rapid elimination of the *prfA*\* genotype in the competition experiments in soil equates to a selection coefficient of about −0.33 d^−1^ (roughly a 33% difference in fitness measured over a day) (Lenski, [1992](#emi12980-bib-0038){ref-type="ref"}), indicating very strong selection against constitutive virulence gene expression in this environment. This selection is expected to be even greater in non‐sterile soil, where the presence of competing microbiota has been shown to significantly impair *L. monocytogenes* growth/survival (McLaughlin *et al*., [2011](#emi12980-bib-0041){ref-type="ref"}; Locatelli *et al*., [2013](#emi12980-bib-0040){ref-type="ref"}; Vivant *et al*., [2013](#emi12980-bib-0073){ref-type="ref"}). Whether bacteria expressed PrfA^WT^ or lacked the PrfA regulator altogether, no significant differences in *L. monocytogenes* fitness were observed in either rich medium or soil. 
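A selection coefficient of the kind quoted above can be estimated from competition plate counts via the change in the log ratio of competitor densities over time. The following is a minimal Python sketch in the spirit of Lenski (1992); the function name and the counts shown are illustrative, not the study's data:

```python
import math

def selection_coefficient(test_t0, ref_t0, test_tn, ref_tn, days):
    """Per-day selection coefficient acting on the test genotype,
    estimated from the change in the natural-log ratio of test to
    reference densities (cfu g^-1). s < 0 means selection against
    the test strain; s = -0.33 corresponds to roughly a 33% fitness
    difference per day."""
    return (math.log(test_tn / ref_tn) - math.log(test_t0 / ref_t0)) / days

# Hypothetical counts: a 1:1 start, with the test strain's ln ratio
# dropping by 2 over 6 days -> s = -1/3 per day.
s = selection_coefficient(1e6, 1e6, 1e7 * math.exp(-2), 1e7, 6)
```

Real estimates would average over replicates and, as in the competition experiments here, infer strain-specific counts from colony genotyping frequencies.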
The cost neutrality of PrfA^WT^ in the tested extracellular conditions therefore indicates that the acquisition of an ON--OFF switchable PrfA regulator has been critical in the evolution of *L. monocytogenes* as a facultative parasite. The instability in soil (but not in BHI or other conditions) of the chromosomally integrated pPL2 constructs indicates that PrfA^WT^, and indeed the empty complementation vector itself, imposed a burden. This implies that soil is a strongly selective environment for *L. monocytogenes* in which, despite PrfA‐dependent genes being downregulated (Piveteau *et al*., [2011](#emi12980-bib-0051){ref-type="ref"}), any leaky expression due to the basal activity of PrfA^WT^ in the OFF state (Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}) may be disadvantageous. Indeed, although not apparent in BHI, Δ*prfA* bacteria also exhibit some fitness advantage over *prfA* ^WT^ bacteria in certain circumstances (e.g. chemically defined medium; our unpublished observations). *Listeria monocytogenes* possesses other mechanisms in addition to ON--OFF PrfA switching to ensure that the PrfA regulon is effectively silenced outside the host. For example, an RNA thermoswitch prevents efficient *prfA* gene translation at environmental temperatures (≤ 30°C) (Johansson *et al*., [2002](#emi12980-bib-0031){ref-type="ref"}). Growth on cellobiose and other plant‐derived β‐glucosides, presumably abundant in the decaying vegetation‐rich soil habitat, also strongly represses PrfA‐regulated genes (Brehm *et al*., [1999](#emi12980-bib-0011){ref-type="ref"}). The existence of these redundant PrfA‐downregulating mechanisms is consistent with the notion that preventing any virulence‐related fitness loss is critically important for *L. monocytogenes* outside the host. Since dispensable genes tend to be readily eliminated from bacterial genomes (Cooper *et al*., [2001](#emi12980-bib-0017){ref-type="ref"}; Mira *et al*., [2001](#emi12980-bib-0046){ref-type="ref"}), *L. 
monocytogenes* is expected to lose the ability to express the PrfA regulon -- and indeed the PrfA regulon altogether -- during its existence as a free‐living organism. This appears to have occurred during evolution and is the presumed mechanism that gave rise to the obligate saprophytic species of the genus, typified by *Listeria innocua* (Vazquez‐Boland *et al*., [2001a](#emi12980-bib-0070){ref-type="ref"}; Schmid *et al*., [2005](#emi12980-bib-0060){ref-type="ref"}; Hain *et al*., [2006](#emi12980-bib-0028){ref-type="ref"}). Some strains of *Listeria seeligeri*, another non‐pathogenic species, still possess a partially conserved PrfA regulon undergoing gene decay processes (Vazquez‐Boland *et al*., [2001a](#emi12980-bib-0070){ref-type="ref"}; den Bakker *et al*., [2010](#emi12980-bib-0007){ref-type="ref"}). Similarly, spontaneous *prfA*‐disabling mutations are not uncommon among *L. monocytogenes* food isolates (Roche *et al*., [2005](#emi12980-bib-0058){ref-type="ref"}). This predicts a scenario of rapid decline and even extinction of the pathogenic *L. monocytogenes*, which is clearly not supported by this species\' known widespread distribution and epidemiology (Vazquez‐Boland *et al*., [2001b](#emi12980-bib-0071){ref-type="ref"}; Freitag *et al*., [2009](#emi12980-bib-0024){ref-type="ref"}). Arguably, therefore, virulence must somehow confer an evolutionary advantage on *L. monocytogenes*. The maintenance of the PrfA regulon may be positively selected in the environmental habitat for a number of reasons. For example, PrfA‐regulated virulence factors may promote survival by helping *Listeria* to evade predation by soil bacterivorous protozoa (Greub and Raoult, [2004](#emi12980-bib-0027){ref-type="ref"}). The PrfA regulon may also facilitate the subclinical colonization of the intestinal tract of animal hosts and subsequent fecal‐oral enrichment of virulent *L. 
monocytogenes* bacteria in the environment (Vazquez‐Boland *et al*., [2001b](#emi12980-bib-0071){ref-type="ref"}). While essential for within‐host microbial proliferation, virulence, if excessive, may also reduce the time during which the infected host remains viable and continues to produce pathogen offspring for transmission to new hosts. Based on this tenet, evolutionary theory posits that pathogen fitness is optimized through a trade‐off between virulence and transmission (Anderson and May, [1981](#emi12980-bib-0002){ref-type="ref"}; Antia *et al*., [1994](#emi12980-bib-0003){ref-type="ref"}; Bull, [1994](#emi12980-bib-0013){ref-type="ref"}; Bull and Lauring, [2014](#emi12980-bib-0014){ref-type="ref"}). This assumption, however, is host‐centric and based on direct host‐to‐host transmission models, neglecting that pathogens are also transmitted indirectly from environmental sources (Anderson and May, [1981](#emi12980-bib-0002){ref-type="ref"}; Roche *et al*., [2011](#emi12980-bib-0057){ref-type="ref"}; Mikonranta *et al*., [2012](#emi12980-bib-0045){ref-type="ref"}). Moreover, many pathogens, like *L. monocytogenes*, not only 'sit‐and‐wait' in the environment for new hosts (Walther and Ewald, [2004](#emi12980-bib-0074){ref-type="ref"}) but also reproduce as free‐living organisms (Merikanto *et al*., [2012](#emi12980-bib-0044){ref-type="ref"}). Here, using the facultative pathogen *L. monocytogenes*, we provide the first formal demonstration that virulence traits are intrinsically costly to the microbe, impairing pathogen proliferation outside the host. 
A significant implication is that, contrary to current belief (Bonhoeffer *et al*., [1996](#emi12980-bib-0009){ref-type="ref"}; Gandon, [1998](#emi12980-bib-0025){ref-type="ref"}; Walther and Ewald, [2004](#emi12980-bib-0074){ref-type="ref"}; Roche *et al*., [2011](#emi12980-bib-0057){ref-type="ref"}), the evolutionary dynamics of facultative pathogens that do not depend directly on a host for transmission are also constrained by a virulence‐transmission trade‐off. We suggest that this trade‐off has been a key determinant in the evolution of virulence regulation systems in facultative pathogens, as exemplified here by the *Listeria* PrfA switch. A deeper insight into how microbes control the costs of virulence both within and outside the host, and the incorporation of this knowledge into virulence theory, will be key to improving our understanding of pathogen ecology and the evolution of virulence.

Experimental procedures {#emi12980-sec-0009}
=======================

Bacteria, plasmids, media and reagents {#emi12980-sec-0010}
--------------------------------------

The strains and plasmids used are listed in Table [1](#emi12980-tbl-0001){ref-type="table-wrap"}. *Listeria monocytogenes* bacteria were all derived from the serovar 4b human isolate P14 (Ripio *et al*., [1996](#emi12980-bib-0055){ref-type="ref"}; [1997](#emi12980-bib-0056){ref-type="ref"}). *Listeria* and *Escherichia coli* were grown at 37°C in BHI (Difco‐BD) and Luria--Bertani (Sigma) media, respectively, supplemented with 1.5% agar (w/v) and/or antibiotics as appropriate. Chemicals and oligonucleotides were purchased from Sigma‐Aldrich unless stated otherwise.

###### Bacterial strains and plasmids used in this study

| Strain/plasmid | Genotype/description | Source (reference) | Internal strain collection no. |
| --- | --- | --- | --- |
| ***L. monocytogenes*** | | | |
| P14 | *prfA* ^WT^, wild‐type strain of serovar 4b, human clinical isolate | Our laboratory (Ripio *et al*., [1996](#emi12980-bib-0055){ref-type="ref"}; [1997](#emi12980-bib-0056){ref-type="ref"}) | PAM 14 |
| P14A | *prfA\** ^G145S^ isogenic derivative of P14 | Our laboratory (Ripio *et al*., [1996](#emi12980-bib-0055){ref-type="ref"}; [1997](#emi12980-bib-0056){ref-type="ref"}) | PAM 50 |
| P14^Rev^ | *prfA* ^WT^, allele exchange wild‐type revertant of P14A | This study | PAM 3757 |
| Δ*prfA* | In‐frame *prfA* deletion mutant of P14A | Our laboratory (Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}) | PAM 373 |
| Δ*prfA* (vector) | Δ*prfA*, PAM 373 complemented with pPL2 empty vector | Our laboratory (Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}) | PAM 3293 |
| Δ*prfA* (*prfA* ^WT^) | *prfA* ^WT^, PAM 373 complemented with pPL2prfAbc^WT^ | This study | PAM 3319 |
| Δ*prfA* (*prfA\**) | *prfA\** ^G145S^, PAM 373 complemented with pPL2prfAbc\* | This study | PAM 3320 |
| ΔREG | ΔLIPI‐1 Δ*inlAB* Δ*inlC* Δ*hpt*, PrfA regulon deletion mutant of P14A | This study | PAM 3691 |
| ΔREG (vector) | PAM 3691 complemented with pPL2 empty vector | This study | PAM 3734 |
| ΔREG (*prfA* ^WT^) | PAM 3691 complemented with pPL2prfAbc^WT^ | This study | PAM 3694 |
| ΔREG (*prfA\**) | PAM 3691 complemented with pPL2prfAbc\* | This study | PAM 3695 |
| ΔLIPI‐1 | Δ*prfA plcA hly mpl actA plcB*, LIPI‐1 deletion mutant of P14A | This study | PAM 3732 |
| ΔLIPI‐1 (vector) | PAM 3732 complemented with pPL2 empty vector | This study | PAM 3750 |
| ΔLIPI‐1 (*prfA* ^WT^) | PAM 3732 complemented with pPL2prfAbc^WT^ | This study | PAM 3751 |
| ΔLIPI‐1 (*prfA\**) | PAM 3732 complemented with pPL2prfAbc\* | This study | PAM 3752 |
| Δ*inlABC* | Δ*inlAB* Δ*inlC* in‐frame deletion mutant of P14A | Our laboratory (unpublished) | PAM 3657 |
| Δ*hpt* | Δ*hpt* in‐frame deletion mutant of P14A | Our laboratory (Scortti *et al*., [2006](#emi12980-bib-0061){ref-type="ref"}) | PAM 377 |
| Δ*hly* | Δ*hly* in‐frame deletion mutant of P14A | Our laboratory (Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}) | PAM 3730 |
| Δ*actA* | Δ*actA* in‐frame deletion mutant of P14A | Our laboratory (Suarez *et al*., [2001](#emi12980-bib-0068){ref-type="ref"}) | PAM 185 |
| ***E. coli*** | | | |
| DH5α | Cloning host strain | Our laboratory | |
| ***Plasmids*** | | | |
| pPL2 | Integrative vector for single‐copy gene complementation in *L. monocytogenes* | M. Loessner (Lauer *et al*., [2002](#emi12980-bib-0036){ref-type="ref"}) | |
| pMAD | Thermosensitive shuttle vector for allelic exchange in Gram‐positives | M. Debarbouille (Arnaud *et al*., [2004](#emi12980-bib-0004){ref-type="ref"}) | |
| pLSV1 | Thermosensitive shuttle vector for allelic exchange in Gram‐positives | J. Kreft (Wuenscher *et al*., [1991](#emi12980-bib-0076){ref-type="ref"}) | |
| pPL2prfAbc^WT^ | pPL2 inserted with PrfA‐autoregulated Δ*plcA*‐*prfA* ^WT^ bicistronic construct | This study | |
| pPL2prfAbc\* | pPL2 inserted with PrfA‐autoregulated Δ*plcA*‐*prfA*\*^G145S^ bicistronic construct | This study | |
| pLS5′ΔprfA^WT^ | pLSV1 inserted with a 5′‐truncated *prfA* ^WT^ used in P14^Rev^ construction | This study | |
| pMΔLIPI‐1 | pMAD inserted with recombinogenic construct for deletion of LIPI‐1 | This study | |
| pLSVΔhpt | pLSV1 inserted with recombinogenic construct for deletion of *hpt* | Our laboratory | |

© 2015 Society for Applied Microbiology and John Wiley & Sons Ltd

General DNA techniques {#emi12980-sec-0011}
----------------------

Chromosomal *Listeria* DNA was extracted and purified as previously described (Ripio *et al*., [1997](#emi12980-bib-0056){ref-type="ref"}). Plasmid DNA was extracted from *E. coli* using the Spin Miniprep kit from Qiagen and introduced into *L. monocytogenes* by electroporation (Ripio *et al*., [1997](#emi12980-bib-0056){ref-type="ref"}) using a Gene Pulser Xcell apparatus (Bio‐Rad). 
Polymerase chain reaction was carried out with Taq DNA polymerase (Biotools, Spain) for detection/mapping purposes or with high‐fidelity ProofStart DNA polymerase (Qiagen) for mutant construction or gene complementation. The PCR products were purified with the PCR purification kit from Qiagen and analysed by standard gel electrophoresis in 1.0% agarose (Biotools). DNA sequences were determined on both strands by Sanger sequencing. Restriction enzymes were used according to the manufacturer\'s instructions (New England Biolabs). *prfA* ^WT^ revertant from *prfA*\* {#emi12980-sec-0012} -------------------------------------------- P14^Rev^ was constructed by replacing the *prfA*\*^G145S^ allele of strain P14A with *prfA* ^WT^ following a procedure described in detail elsewhere (J. Monzó i Gil, PhD thesis, University of Bristol, UK, 2007). Briefly, primers PrfAalleI and PrfAalleII‐long ([Table S1](http://onlinelibrary.wiley.com/doi/10.1111/1462-2920.12980/suppinfo)), the latter with a SalI site, were used to amplify the *prfA* gene from wild‐type *L. monocytogenes* P14 (Table [1](#emi12980-tbl-0001){ref-type="table-wrap"}). The PCR product was digested with SalI and EcoRI (a naturally occurring internal site 25 bp downstream from the *prfA* start codon), and the resulting 5′ end‐truncated *prfA* fragment (which includes codon 145) was inserted into the thermosensitive shuttle vector pLSV1 (Wuenscher *et al*., [1991](#emi12980-bib-0076){ref-type="ref"}), giving rise to the allele replacement plasmid pLS5\'ΔprfA^WT^ (Table [1](#emi12980-tbl-0001){ref-type="table-wrap"}). After electroporation into P14A, integration of pLS5\'ΔprfA^WT^ by homologous recombination was selected at 42°C in BHI supplemented with 5 μg ml^−1^ erythromycin. 
A single cross‐over recombinant colony was subcultured at 37°C in BHI without erythromycin in the presence of 7.5 μg ml^−1^ fosfomycin (disodium salt) to counterselect against reconstitution of the original *prfA*\* allele of P14A in the second cross‐over event. This is possible thanks to the strictly PrfA‐dependent gene *hpt*, encoding the organophosphate permease Hpt, which mediates uptake of (and hence susceptibility to) fosfomycin in *L. monocytogenes* (minimal inhibitory concentration \> 256--512 μg ml^−1^ for *prfA* ^WT^, 2 μg ml^−1^ for *prfA*\*) (Scortti *et al*., [2006](#emi12980-bib-0061){ref-type="ref"}). The *prfA* ^WT^ genotype of P14^Rev^ was confirmed by DNA sequencing. P14^Rev^ exhibited the characteristic PrfA phenotype of wild‐type *L. monocytogenes* as determined by PrfA functional assays (see below and [Fig. S4](http://onlinelibrary.wiley.com/doi/10.1111/1462-2920.12980/suppinfo)). Deletion mutants and *prfA* complementation {#emi12980-sec-0013} ---------------------------------------------- Unmarked gene deletion mutants were constructed in *L. monocytogenes* P14A (Table [1](#emi12980-tbl-0001){ref-type="table-wrap"}) by allelic exchange using a thermosensitive shuttle vector. The in‐frame deletion mutants Δ*prfA*, Δ*hly*, Δ*actA*, Δ*hpt* and Δ*inlABC* were previously available in our laboratory (Table [1](#emi12980-tbl-0001){ref-type="table-wrap"}). For deleting LIPI‐1, DNA fragments of 893 bp and 684 bp corresponding to the chromosomal regions encompassing the *prfA* and *plcB* genes at each side of the pathogenicity island (see Fig. 
[1](#emi12980-fig-0001){ref-type="fig"}) were PCR‐amplified using primer pairs PrsF1/PrsR2 and PrsF3/PrsR4 ([Table S1](http://onlinelibrary.wiley.com/doi/10.1111/1462-2920.12980/suppinfo)), then fused together by splicing overlap extension PCR (Pogulis *et al*., [1996](#emi12980-bib-0052){ref-type="ref"}) using the complementary 3′ sequence tails carried by PrsR2 and PrsF3 and a second PCR reaction with PrsF1 and PrsR4. The EcoRI and BamHI sites carried by the latter primers ([Table S1](http://onlinelibrary.wiley.com/doi/10.1111/1462-2920.12980/suppinfo)) were used to insert the resulting 1577 bp PCR product into the pMAD vector (Arnaud *et al*., [2004](#emi12980-bib-0004){ref-type="ref"}), giving rise to the plasmid pMΔLIPI‐1 (Table [1](#emi12980-tbl-0001){ref-type="table-wrap"}). The ΔREG mutant was constructed by deleting LIPI‐1 and *hpt* from P14A Δ*inlABC* (Table [1](#emi12980-tbl-0001){ref-type="table-wrap"}). The *hpt* gene was deleted in frame using the pLSV1‐based pLSVΔhpt allele replacement plasmid (Table [1](#emi12980-tbl-0001){ref-type="table-wrap"}). After electroporation, the first and second recombinants were selected and checked by PCR mapping as previously described (Suarez *et al*., [2001](#emi12980-bib-0068){ref-type="ref"}). For *prfA* complementation, *prfA* ^WT^ and *prfA*\*^G145S^ from P14 and P14A, respectively, with all native promoters including the PrfA‐dependent *plcA* promoter that positively autoregulates *prfA* expression (Mengaud *et al*., [1991](#emi12980-bib-0043){ref-type="ref"}; Scortti *et al*., [2007](#emi12980-bib-0062){ref-type="ref"}) (see Fig. [1](#emi12980-fig-0001){ref-type="fig"}), were inserted in single copy in the *L. monocytogenes* chromosome using the integrative vector pPL2 (Lauer *et al*., [2002](#emi12980-bib-0036){ref-type="ref"}) as previously described (Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}). 
*prfA* constructs were generated by in‐frame deleting the *plcA* gene from the *plcA‐prfA* bicistron from either P14 or P14A by splicing overlap extension PCR using suitable primer combinations ([Table S1](http://onlinelibrary.wiley.com/doi/10.1111/1462-2920.12980/suppinfo)). After electroporation into Δ*prfA* or ΔREG, pPL2 integrants were selected in BHI plates containing 7.5 μg ml^−1^ chloramphenicol. All gene deletions were confirmed by PCR and DNA sequencing. Western immunoblotting {#emi12980-sec-0014} ---------------------- *Listeria* were grown in 10 ml BHI until OD~600~ ≈ 1.0--1.2 and the cultures (1 ml) were centrifuged at ∼ 7000 × *g* for 5 min at 4°C to separate the supernatant and the bacterial cells. The cell‐free supernatant was precipitated with 16% trichloroacetic acid overnight at 4°C. After centrifugation at 18 000 × *g* for 10 min at 4°C, the protein pellet was washed with acetone, dried, then re‐suspended in 2% SDS 6 M urea Tris‐HCl buffer and stored at −80°C. For cell‐associated proteins, the bacterial pellet was re‐suspended in cold lysis solution (50 mM NaH~2~PO~4~, 300 mM NaCl, pH 7.4) with protease inhibitor cocktail (Roche), transferred to Lysis Matrix B tubes containing 0.1 mm silica beads (Q‐Biogene) and homogenized in a FastPrep instrument (Q‐Biogene) (three cycles of 30 s at speed set to 6). Cell debris was removed by centrifugation at 12 000 × *g* for 20 min at 4°C and the supernatant stored at −80°C. After determining total protein concentration (colorimetric DC protein assay, Bio‐Rad), protein samples were separated by SDS‐PAGE using 4--12% NuPAGE Bis--Tris mini gels (Novex Life Technologies) and electro‐transferred to polyvinylidene difluoride membranes using a Mini‐Protean II cuvette. 
Membranes were blocked for 2 h with phosphate‐buffered saline pH 7.2 (PBS) containing 0.05% Tween 20 and 5% (w/v) skim milk, and incubated (1 h or overnight at room temperature) with appropriate primary (see below) and secondary (1:5000‐diluted anti‐rabbit and 1:2000‐diluted anti‐mouse, horseradish peroxidase‐conjugated) antibodies in the same solution. After washing, immunoreactive proteins were detected using Amersham\'s ECL chemiluminescent detection reagents (GE Healthcare). The following primary antibodies were used: PrfA rabbit polyclonal (Vega *et al*. [1998](#emi12980-bib-9015){ref-type="ref"}); PlcA and PlcB mouse monoclonals (J. Wehland, Braunschweig, Germany); Hly mouse monoclonal (T. Chakraborty, Giessen, Germany); InlA and InlB mouse monoclonals (P. Cossart, Paris, France); and InlC rabbit polyclonal (raised against an InlC‐specific peptide). Growth curves {#emi12980-sec-0015} ------------- Overnight BHI cultures were diluted 1:100 into fresh BHI and grown at 37°C with rotary shaking (200 r.p.m.) until OD~600~ ≈ 1.0. Bacteria were collected by centrifugation, washed twice in PBS and suspended in pre‐warmed BHI to give an OD~600~ of 0.05. Triplicate 200 μl aliquots of the bacterial suspensions were transferred to different positions of flat‐bottom 96‐well microplates (Costar). Plates were incubated at 37°C with shaking (200 r.p.m.) and bacterial growth was monitored by measuring the OD~600~ every 30 min in an automated plate reader (FluoStar Optima or Omega machines, BMG Labtech). Cultures were monitored by phase‐contrast microscopy to exclude bacterial clumping as a potential source of variation. The maximum growth rate during exponential growth (μ) and the maximum bacterial cell density reached during the growth curve (A) were estimated from spline fits of OD~600~ values using the [grofit]{.smallcaps} package in R (Kahm *et al*., [2010](#emi12980-bib-0033){ref-type="ref"}). 
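The spline‐based estimation of μ and A performed with [grofit]{.smallcaps} can be approximated in a few lines. The following is a simplified Python sketch, not the authors' actual pipeline (which additionally smooths the data and computes confidence intervals); the function name is illustrative, and background‐corrected OD~600~ readings with their time points are assumed:

```python
import numpy as np

def growth_parameters(t, od):
    """Estimate the maximum specific growth rate mu (per hour) and the
    maximum density A from an OD600 time series, via finite differences
    on ln(OD): the slope of ln(OD) at each time point is the specific
    growth rate, and its maximum approximates mu."""
    t = np.asarray(t, dtype=float)
    od = np.asarray(od, dtype=float)
    rates = np.gradient(np.log(od), t)  # d ln(OD)/dt at each point
    mu = float(rates.max())             # maximum specific growth rate
    A = float(od.max())                 # maximum density reached (yield)
    return mu, A
```

On a simulated logistic curve this recovers the underlying rate to within discretization error; a spline fit, as in grofit, would be less sensitive to measurement noise.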
Intracellular infection assay {#emi12980-sec-0016} ----------------------------- *Listeria monocytogenes* intracellular proliferation was tested in human epithelial HeLa cell monolayers using a gentamicin protection assay as previously described (Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}). Due to the constitutive activation of their PrfA‐regulated cell invasion determinants, *prfA*\* bacteria are more invasive than (broth‐grown) *prfA* ^WT^ bacteria (see Fig. [3](#emi12980-fig-0003){ref-type="fig"}, upper panel). Intracellular proliferation data were therefore normalized to the number of internalized *L. monocytogenes* bacteria using an intracellular growth coefficient calculated with the formula IGC = (IB~n~ − IB~0~) / IB~0~, where IB~n~ and IB~0~ are the intracellular bacterial numbers at any specific time point (*t* = n) and at *t* = 0, respectively (Deshayes *et al*., [2012](#emi12980-bib-0020){ref-type="ref"}). Soil experiments {#emi12980-sec-0017} ---------------- For each experiment, subsurface topsoil samples were collected within a depth of ≈ 10 cm from several locations of a residential garden in Edinburgh (UK). Soil was carefully mixed, sieved through 6 mm mesh to remove coarse particles and autoclaved (121°C, 15 min). The soil used had a pH of 7.23 (range 7.2--7.3) and an average moisture content of 25.3% (range 24.1--26.5%). The pH was measured in the liquid phase of a soil suspension prepared by vigorously stirring 25 g of soil in 50 ml distilled water. The water content was determined in 10 g samples by the oven‐dry method. Prior to the experiments, the soil was tested for the presence of antimicrobial or inhibitory activity against *L. monocytogenes* (P14A, P14^Rev^ and Δ*prfA*). For this, a soluble extract was prepared by suspending 50 g of soil in 50 ml distilled water. After mixing vigorously, the suspension was left to sediment for 20 min at room temperature and the supernatant filtered through 0.22 μm pore‐size membranes. 
No inhibition zones were observed in lawn cultures when drops of the soil filtrate were applied onto BHI plates seeded with the three test strains. Growth inhibition assays in liquid BHI culture also failed to detect inhibitory activity in the soil filtrate. For growth assays, sterile soil (≈ 450 g per experiment) was inoculated with (≈ 45 ml) twice‐washed *Listeria* cell suspensions in PBS and thoroughly homogenized for 5 min in a blender. Bacterial inocula were prepared from exponential BHI cultures as indicated above. Random samples were taken to confirm the uniform distribution of the inoculum. Microcosms (three per time point) were prepared by aseptically transferring ≈ 45 g of inoculated soil into Falcon tubes and incubated at room temperature in static conditions, without exposure to sunlight and at constant moisture. At the specified time points, two 1‐g soil samples per replicate were vigorously vortexed for 20 s with 1.5 ml diluent (PBS containing 0.05% trypsin and 0.9 mM EDTA (tetrasodium salt, dihydrate) to ensure optimal bacterial recovery) in 15 ml Falcon tubes, the suspension was allowed to settle for 5 min, and the supernatant was decimally diluted and plated for viable count determination. The relative frequencies of the competing strains were determined by analysing at least 50 randomly selected colonies by PrfA phenotyping (see below) and by PCR using primers PrfAalleI and PrfAalleII‐long ([Table S1](http://onlinelibrary.wiley.com/doi/10.1111/1462-2920.12980/suppinfo)) for detection of the Δ*prfA* deletion. The log cfu numbers for each strain inferred from their frequency data were used to calculate their competitive index using the formula CI = (test/reference log cfu ratio at *t* = n)/(test/reference log cfu ratio at *t* = 0). Strain characterization {#emi12980-sec-0018} ----------------------- The *prfA* genotype of the strains was confirmed by DNA sequencing and the corresponding phenotypes were systematically checked using PrfA functional assays. 
The latter are based on a panel of tests that detect the activity of the products of specific PrfA‐regulated genes used as natural reporters of PrfA activation status, namely: haemolysin activity (*hly* gene) in sheep blood agar (Biomérieux) ([Fig. S4](http://onlinelibrary.wiley.com/doi/10.1111/1462-2920.12980/suppinfo), left panel); phospholipase activity (*plcB* gene) in egg yolk BHI agar (Ripio *et al*., [1996](#emi12980-bib-0055){ref-type="ref"}; Vega *et al*., [2004](#emi12980-bib-0072){ref-type="ref"}) ([Fig. S4](http://onlinelibrary.wiley.com/doi/10.1111/1462-2920.12980/suppinfo), centre panel); and fosfomycin susceptibility (*hpt* gene) (Scortti *et al*., [2006](#emi12980-bib-0061){ref-type="ref"}). Phospholipase activity and fosfomycin susceptibility were also tested in charcoal (0.5% w/v)‐supplemented BHI plates (BHIC) to determine PrfA^WT^ activability (Ermolaeva *et al*., [2004](#emi12980-bib-0023){ref-type="ref"}; Scortti *et al*., [2006](#emi12980-bib-0061){ref-type="ref"}). Activated charcoal sequesters a diffusible PrfA repressor from the culture medium, leading to partial activation of PrfA‐dependent gene expression (Ermolaeva *et al*., [2004](#emi12980-bib-0023){ref-type="ref"}) (see [Fig. S4](http://onlinelibrary.wiley.com/doi/10.1111/1462-2920.12980/suppinfo), right panel). Using these tests, *L. monocytogenes prfA* ^WT^ is characterized by (i) weak haemolysis (confined to the area underneath the colonies), (ii) no PlcB activity and resistance to fosfomycin in BHI, and (iii) strong PlcB activity and susceptibility to fosfomycin in BHIC. *prfA*\* bacteria, in contrast, exhibit (i) strong haemolysis (a wide halo extending beyond the colonies), (ii) strong PlcB activity and fosfomycin susceptibility in BHI, and (iii) equally strong PlcB activity and fosfomycin susceptibility in BHIC. Δ*prfA* bacteria are phenotypically distinguishable from *prfA* ^WT^ bacteria since the former remain PlcB negative and resistant to fosfomycin in BHIC. 
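The competitive index defined under *Soil experiments*, CI = (test/reference log cfu ratio at *t* = n)/(test/reference log cfu ratio at *t* = 0), translates directly into code. A minimal Python sketch with hypothetical counts (the function name is illustrative):

```python
import math

def competitive_index(test_t0, ref_t0, test_tn, ref_tn):
    """Competitive index from cfu g^-1 counts, as defined in
    Experimental procedures: the test/reference log cfu ratio at
    t = n divided by the same ratio at t = 0.
    CI < 1 indicates the test strain is being outcompeted;
    CI not significantly different from 1 indicates neutrality."""
    return (math.log10(test_tn) / math.log10(ref_tn)) / \
           (math.log10(test_t0) / math.log10(ref_t0))

# Hypothetical 1:1 co-inoculation at 5e6 cfu g^-1 each; by t = n the
# test strain has fallen to 1e4 while the reference reached 1e7.
ci = competitive_index(5e6, 5e6, 1e4, 1e7)  # (4/7)/1, i.e. about 0.57
```

In the actual experiments, strain-specific counts entering this formula were inferred from the genotyping frequencies of at least 50 colonies per sample.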
Statistics {#emi12980-sec-0019} ---------- Growth parameters were analysed using one‐way ANOVA followed by Šidák post‐hoc multiple comparison tests unless otherwise stated. Two‐way ANOVA was used to compare intracellular proliferation data. One‐sample Student\'s *t*‐tests were used to determine if CI values differed significantly from 1 (the theoretical CI value if the ratio of the competing strains remains the same with respect to *t* = 0). P[rism]{.smallcaps} 6.0 (GraphPad, San Diego, CA) or M[initab]{.smallcaps} 16 (Minitab, State College, PA) statistical software was used. Supporting information ====================== ###### **Fig. S1.** Growth of Δ*inlABC* and Δ*hpt* compared with their parent *prfA*\* strain P14A and the isogenic *prfA* ^WT^ (P14^Rev^) and Δ*prfA* P14A derivatives in BHI. Mean ± SEM of four experiments. (A) Growth curves. (B) Corresponding μ (growth rate) and A (maximum growth) values. *prfA*\* strain P14A used as reference in post‐hoc multiple comparisons. Numbers indicate *P* values; ns, not significant. **Fig. S2.** Growth of the in‐frame Δ*hly* mutant compared with its parent *prfA*\* strain P14A and isogenic *prfA* ^WT^ (P14^Rev^) and Δ*prfA* derivatives in BHI. Mean ± SEM of at least three experiments. (A) Growth curves. (B) Corresponding μ (exponential growth rate) and A (maximum growth) values. *prfA*\* strain P14A used as reference in post‐hoc multiple comparisons. Numbers indicate *P* values; ns, not significant. **Fig. S3.** Growth of the in‐frame Δ*actA* mutant compared with its parent *prfA*\* strain P14A and isogenic *prfA* ^WT^ (P14^Rev^) and Δ*prfA* derivatives in BHI. Mean ± SEM of at least three experiments. (A) Growth curves. (B) Corresponding μ (exponential growth rate) and A (maximum growth) values. *prfA*\* strain P14A used as reference in post‐hoc multiple comparisons. Numbers indicate *P* values; ns, not significant. **Fig. S4.** PrfA phenotype testing. 
Typical phenotypes of *prfA*\* (P14A), *prfA* ^WT^ (P14^Rev^) and Δ*prfA* bacteria on sheep blood agar (left), egg yolk‐BHI agar (centre) and egg yolk‐BHI agar supplemented with 0.5% (w/v) activated charcoal (right). Note in *L. monocytogenes prfA* ^WT^ the typical activation of PrfA‐dependent expression in charcoal‐supplemented medium as revealed using the activity of the *plcB* gene (PlcB phospholipase) as a reporter (indicated by black triangle). See *Experimental procedures* for details. **Fig. S5.** Stability of PrfA phenotypes from P14A (*prfA*\*) and P14^Rev^ (*prfA* ^WT^) strains in soil. The PrfA phenotype of soil isolates was systematically checked using a battery of functional tests (see Experimental *procedures* and [Fig. S4](http://onlinelibrary.wiley.com/doi/10.1111/1462-2920.12980/suppinfo)). Example shown corresponds to haemolysin phenotype screening on sheep blood agar of *L. monocytogenes* P14A and P14^Rev^ colonies from the experiment in Fig. 7. Controls: streaks of the originally inoculated (1) P14A, (2) P14^Rev^ and (3) Δ*prfA* bacteria. **Table S1.** Main oligonucleotides used in this study. Relevant restriction sites are underlined; overlapping sequences for recombinant PCR are in lower case. ###### Click here for additional data file. We thank P. Cossart, T. Chakraborty and the late J. Wehland for the kind gift of antibodies to *Listeria* proteins. We also thank the undergraduate students J. Havenstein, M. Karpiyevich and A. Stanton for help with experiments or plasmid/strain constructions. This work was supported by the Wellcome Trust (Programme Grant WT074020MA to JAV‐B). VRKBS was the recipient of a Darwin Trust PhD studentship from the University of Edinburgh (UoE) and additional stipends from the Wellcome Trust‐Funded Programme Grant and UoE\'s Centre for Immunity, Infection & Evolution. Authors declare no conflict of interest.
0*d. Let q = 4 - 1. Let k(i) = -i**3 - 5*i**2 - 5*i + 1. Let a(o) = q*k(o) - 15*r(o). Find the first derivative of a(t) wrt t. -9*t**2 Find the third derivative of -5*f**3 - 28*f**2 + 15*f**3 + 56*f**2 wrt f. 60 Find the third derivative of 5*c**2 - 3*c**2 + c**2 - 11*c**2 + 5*c**5 wrt c. 300*c**2 Suppose 5*f = -4*b + 10, -b + 2*b = 0. Suppose -3*s = f*s - 15. Find the third derivative of 2*o**4 - o**4 - s*o**2 + o**4 + 0*o**2 wrt o. 48*o What is the derivative of 0*i**3 - 17 + i**3 - 16*i**3 wrt i? -45*i**2 Let r(q) = 5*q**3 - 3*q**2 + 3. Let z(o) = 11*o**3 - 5*o**2 + 7. Let k(w) = 7*r(w) - 3*z(w). Find the third derivative of k(s) wrt s. 12 Let i(a) be the third derivative of -a**4/6 + a**3/2 + 2*a**2. What is the derivative of i(l) wrt l? -4 Let z(k) = 9*k**3 + 2*k**2 - 6. Let n = 16 - 33. Let s(m) = 26*m**3 + 6*m**2 - 17. Let a(f) = n*z(f) + 6*s(f). What is the third derivative of a(v) wrt v? 18 Let m(k) be the first derivative of 17*k**7/7 - 7*k**3 - 8. What is the third derivative of m(i) wrt i? 2040*i**3 Let h = 3 - 1. Differentiate 1 + 0*l**h - 2 + l**2 + 2*l**2 wrt l. 6*l Let u = 11 + -9. Suppose 7*m = 4*m + 6. Find the third derivative of m*x**4 - x**u + 1 - 1 wrt x. 48*x Find the second derivative of -5*q**3 + 7*q**3 - 3*q + 4*q wrt q. 12*q Let x be 7 - (-1 - (-2)/1). Suppose 2*g = -g + x. Find the second derivative of -7*j**2 + 2*j**4 + 7*j**g - j wrt j. 24*j**2 Let s(h) be the first derivative of 32*h**3/3 + 41*h**2/2 - 24. What is the second derivative of s(p) wrt p? 64 Let f(r) be the third derivative of 3*r**6/8 + 23*r**3/6 - 4*r**2 + 5*r. What is the first derivative of f(z) wrt z? 135*z**2 Let j(s) be the second derivative of -3*s**5/4 - 3*s**3/2 - 27*s. Find the second derivative of j(n) wrt n. -90*n Let m(k) be the second derivative of -k**10/10080 + k**6/360 - k**4/4 - 4*k. Let o(b) be the third derivative of m(b). Find the second derivative of o(y) wrt y. -60*y**3 Let c(r) = -2*r - 2. Let m be c(-4). 
What is the third derivative of 3*a**2 + 28*a**m - 5*a**2 - 26*a**6 wrt a? 240*a**3 What is the third derivative of 3*j**4 + 23*j**2 - 10*j**2 + 23*j**5 - 3*j**4 wrt j? 1380*j**2 Let m(v) be the first derivative of v**5/10 + v**4/6 - 2*v - 1. Let t(g) be the first derivative of m(g). Find the third derivative of t(r) wrt r. 12 Let q(m) be the second derivative of -13*m**8/14 - m**4/3 + m**2 + 44*m. What is the third derivative of q(f) wrt f? -6240*f**3 Let u = -65 + 69. Let v(x) be the second derivative of 0 + 0*x**3 + 0*x**2 - 1/10*x**5 + 2*x - 1/4*x**u. What is the third derivative of v(z) wrt z? -12 Let f(n) be the third derivative of n**6/24 - 7*n**5/60 - 3*n**2. Find the third derivative of f(z) wrt z. 30 Let q(s) be the second derivative of 53*s**6/30 + 4*s**2 - 44*s. What is the first derivative of q(y) wrt y? 212*y**3 Let p(v) be the first derivative of -9*v**4 + 20*v + 39. Find the first derivative of p(k) wrt k. -108*k**2 Let m(x) be the second derivative of 0 + 0*x**6 + 0*x**7 + 1/14*x**8 + 0*x**5 - 2/3*x**4 + 0*x**2 + 8*x + 0*x**3. What is the third derivative of m(r) wrt r? 480*r**3 Let w(r) be the first derivative of -r**6/60 - 5*r**3/3 - 3*r**2/2 - 7. Let p(q) be the second derivative of w(q). What is the first derivative of p(l) wrt l? -6*l**2 Let o be 8/5*10/4. Suppose 2*z - 10 = -2*x + o*z, 17 = x - 5*z. What is the third derivative of 3*r**3 + r**x - r**3 - r**3 wrt r? 6 Differentiate 13*d**2 - 110*d + 110*d + 13 wrt d. 26*d What is the second derivative of 4*z - 3*z - 6 + 21 - 36*z**3 + 6*z**3 wrt z? -180*z Let y = 5 - 6. Let t be (-3)/(-1) - (y - -1). Find the second derivative of -2*x + 2*x + t*x**2 - x wrt x. 6 Let h(u) be the third derivative of -u**7/210 - 9*u**3/2 - 23*u**2. Differentiate h(y) wrt y. -4*y**3 Let k(g) = -18*g**3 - 14*g**2 - 7*g - 4. Let a(l) = -18*l**3 - 15*l**2 - 8*l - 5. Let v(o) = 4*a(o) - 5*k(o). What is the third derivative of v(h) wrt h? 108 Find the second derivative of -52*w**4 + 2*w - 5 + 10*w**4 - 12*w**4 wrt w. 
-648*w**2 Let k(l) = -42*l**3 + 5*l + 22. Let h(u) = -u**3 + u - 1. Let a(w) = 5*h(w) - k(w). What is the first derivative of a(i) wrt i? 111*i**2 Let q(x) be the second derivative of -2/3*x**3 + x + 1/30*x**6 + 0*x**4 + 0*x**2 + 0*x**5 + 0. Find the second derivative of q(c) wrt c. 12*c**2 Let o(z) = -3*z**2 - 5*z + 4. Let u(r) = -10*r**2 - 16*r + 11. Let b(a) = -11*o(a) + 4*u(a). What is the second derivative of b(y) wrt y? -14 Let y(s) = -s**3 + s**2 - s + 4. Let p be y(0). Suppose t + 3*t = 12. What is the third derivative of t*z**2 - z**2 - 2*z**p - 3*z**2 wrt z? -48*z Let k(i) be the first derivative of 10*i**2 + 6*i - 6. Differentiate k(l) wrt l. 20 Let v = 29 - 25. Suppose -k + 6 = 2*k. What is the third derivative of -2*q**2 + 0*q**4 - 2*q**k + 2*q**v wrt q? 48*q Let m = 5 - 4. Find the first derivative of m + 1 + 2*z**3 + 0*z**3 - 5 wrt z. 6*z**2 Find the third derivative of 1 - 293*p**3 + 305*p**3 - 26*p**2 - 1 wrt p. 72 Find the second derivative of -17*h**2 + 3*h**2 + 7*h + 7*h + 2*h**2 wrt h. -24 Find the second derivative of 16*r - 54*r**2 - 15*r**5 + 54*r**2 wrt r. -300*r**3 Let i(s) = s**5 - s**4 + s**3 - s + 1. Let y(d) = 4*d**5 - 6*d**4 + 6*d**3 - 8*d + 6. Let h(m) = 12*i(m) - 2*y(m). What is the second derivative of h(j) wrt j? 80*j**3 Let v(w) be the second derivative of w**8/560 + w**6/120 + w**3/6 + 2*w. Let l(g) be the second derivative of v(g). What is the third derivative of l(o) wrt o? 72*o Let j(f) = -f**2 - 3*f + 2. Suppose 4*g = g - 9. Let z be j(g). What is the second derivative of c + 2*c**2 + 2*c**z - c**2 - 2*c**2 wrt c? 2 Let f(y) be the second derivative of 7*y**6/30 + 2*y**4/3 + 5*y. What is the third derivative of f(t) wrt t? 168*t Let j(b) be the third derivative of -b**7/840 + b**6/180 + 2*b**3/3 + 3*b**2. Let m(n) be the first derivative of j(n). Find the third derivative of m(k) wrt k. -6 Let q(r) be the third derivative of r**7/30 - r**3 + 2*r**2. Find the first derivative of q(h) wrt h. 28*h**3 Let i(g) = 5*g**3 - g. 
Let l be i(1). Find the third derivative of -5*y**l + 2 - 2 + 3*y**2 + 8*y**4 wrt y. 72*y Suppose -4*n = -n - 6*n. Let p(k) be the second derivative of n*k**3 + 0*k**5 - k + 0 + 0*k**2 + 1/15*k**6 - 1/12*k**4. Find the third derivative of p(x) wrt x. 48*x Let p(f) = f + 12. Let s be p(-7). Find the third derivative of 4*o**s - 3*o**5 - 3*o**5 + o**5 + 3*o**2 wrt o. -60*o**2 Let t(n) be the first derivative of 9*n**5/5 + 20*n + 14. Differentiate t(d) wrt d. 36*d**3 Let t(u) be the first derivative of u**9/3024 - u**6/180 + 4*u**3/3 + 2. Let z(b) be the third derivative of t(b). Find the third derivative of z(d) wrt d. 60*d**2 Let s be 1/2*4/(-2). Let v be -2*(s/2 + -1). Find the third derivative of -2*l**2 + v*l**3 + 2 - 2 wrt l. 18 Let i(b) be the second derivative of -2/3*b**3 + 0*b**2 + 0 - 1/3*b**4 - 7*b. What is the second derivative of i(f) wrt f? -8 Let q(g) = 1. Let h(f) = f**3 - 10*f**2 - 5. Let s(k) = h(k) + 5*q(k). Find the third derivative of s(v) wrt v. 6 Let a(y) = y + 1. Let g(d) = -d**2 - 3*d + 5. Let k be g(-4). Let f be a(k). What is the second derivative of t + 0*t**5 + t**5 - f*t**5 wrt t? -20*t**3 Let l(d) = -d**4 - d**3 + d**2 + d. Let v(w) = 2*w**4 - 4*w**3 - 4*w**2 + 4*w. Let h(r) = 4*l(r) - v(r). Find the third derivative of h(k) wrt k. -144*k Suppose -z = 4*z - 20. Suppose -z*h = -11 - 1. What is the third derivative of -2*v**4 - h*v**2 + 3*v**4 + 4*v**2 wrt v? 24*v What is the second derivative of 3 + 1 + 2*a + 4 - 7 - 20*a**2 wrt a? -40 Let w(q) be the second derivative of 5*q**3/2 - 9*q**2/2 - 4*q. Let a(i) = 7*i - 4. Let b(h) = -7*a(h) + 3*w(h). What is the derivative of b(r) wrt r? -4 Let t(n) be the first derivative of n**8/28 - n**4/3 + 4*n + 3. Let m(x) be the first derivative of t(x). What is the third derivative of m(h) wrt h? 240*h**3 Let y(o) = o**3 - 5*o. Let b(f) = f. Let a(g) = 3*b(g) + y(g). Find the second derivative of a(c) wrt c. 6*c Let t(s) = -8*s**4 + s**2 - 3*s. Let w(o) = 7*o**4 + 2*o + 31 - 31 - o**2. 
Let i(f) = 2*t(f) + 3*w(f). Find the third derivative of i(r) wrt r. 120*r Let l(v) = -5*v + 7. Let b(c) = -4*c - 1. Let o(z) = -23*z - 5. Let m(w) = 34*b(w) - 6*o(w). Let g(f) = -3*l(f) - 7*m(f). What is the derivative of g(u) wrt u? 1 Let v(i) be the third d
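The polynomial-derivative answers above can be checked mechanically. A minimal sketch (not part of the original exercises) that represents a polynomial as a list of coefficients, where coeffs[i] multiplies x**i:

```python
def poly_derivative(coeffs, n=1):
    """Differentiate n times; coeffs[i] is the coefficient of x**i."""
    for _ in range(n):
        # Power rule: d/dx of c*x**i is i*c*x**(i-1); dropping index 0 shifts powers down.
        coeffs = [i * c for i, c in enumerate(coeffs)][1:] or [0]
    return coeffs

# Third derivative of -5*f**3 - 28*f**2 + 15*f**3 + 56*f**2 = 10*f**3 + 28*f**2
print(poly_derivative([0, 0, 28, 10], 3))       # -> [60], the constant 60 as above

# Third derivative of 5*c**2 - 3*c**2 + c**2 - 11*c**2 + 5*c**5 = -8*c**2 + 5*c**5
print(poly_derivative([0, 0, -8, 0, 0, 5], 3))  # -> [0, 0, 300], i.e. 300*c**2
```

Combining like terms first, as the worked answers do, keeps the coefficient lists short.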
59 Cal.App.4th 1041 (1997) METRIC MAN, INC., Plaintiff and Appellant, v. UNEMPLOYMENT INSURANCE APPEALS BOARD et al., Defendants and Respondents. Docket No. D026737. Court of Appeals of California, Fourth District, Division One. December 8, 1997. *1044 COUNSEL Berger, Kahn, Shafton, Moss, Figler, Simon & Gladstone, Carol P. Schaner and Steven H. Gentry for Plaintiff and Appellant. Daniel E. Lungren, Attorney General, Charlton D. Holland III, Assistant Attorney General, John H. Sanders and Susan A. Nelson, Deputy Attorneys General, for Defendants and Respondents. OPINION PRAGER, J.[*] Plaintiff Metric Man, Inc. (Metric Man), appeals a judgment denying its petition for a writ of mandate (Code Civ. Proc., § 1094.5) to compel defendant California Unemployment Insurance Appeals Board (UIAB) to set aside its decision to grant Donald Folk (Folk) unemployment insurance benefits. Folk worked as a traveling salesman for Metric Man. Metric Man contends the judgment of the superior court is not supported by the weight of the evidence because Folk failed to meet three of the seven requirements for employee status set forth in Unemployment Insurance Code[1] section 621. Specifically, Metric Man contends there was insufficient evidence to support the court's findings that: (1) Folk was required to work for Metric Man on a full-time basis; (2) Folk was required to perform services personally; and (3) Folk's sales were made predominately to wholesalers and retailers. Metric Man further contends the court failed to exercise its independent judgment over the administrative record. We affirm. *1045 FACTUAL AND PROCEDURAL BACKGROUND Metric Man is a distributor of nuts, bolts, fasteners, and a variety of other automotive products including specialty engine parts, lights, and chemicals. In May 1994, Metric Man hired Folk as a sales representative through a newspaper advertisement. 
Metric Man required Folk to sign a written contract which provided he was not an agent or legal representative of Metric Man. About four months later, Metric Man terminated the contract because Folk's sales failed to meet the company's expectations for the territory Folk was assigned. After he was terminated, Folk filed a claim for unemployment insurance benefits with the Employment Development Department (EDD). Metric Man contested the claim. After interviewing Folk, and Kevin Lolli, Metric Man's president, the EDD decided Folk was an employee under section 621 and that Metric Man's reserve account was subject to charges for unemployment benefits paid to Folk. Metric Man appealed the EDD's decision and an evidentiary hearing was held before an administrative law judge. The administrative law judge reversed the EDD's decision, finding Folk was ineligible for unemployment benefits because under the common law criteria for determining employee status, he was an independent contractor rather than an employee of Metric Man. Folk appealed the administrative law judge's ruling to the UIAB. Reversing the administrative law judge, the UIAB concluded that "under ... section 621, [Folk] was a traveling salesperson covered by the Unemployment Insurance Code as an employee, even though under common law principles he was an independent contractor." Metric Man filed a petition for writ of mandate in superior court seeking review of the UIAB's decision. The court denied the petition and this appeal followed. DISCUSSION I. Standard of Review (1) While the superior court exercises its independent judgment over the administrative record, our inquiry is limited to whether the superior court's findings are supported by substantial evidence. (Lacy v. California Unemployment Ins. Appeals Bd. (1971) 17 Cal. App.3d 1128, 1134 [95 Cal. Rptr. *1046 566].) 
We resolve all conflicts in evidence in favor of the superior court's findings and draw all legitimate and reasonable inferences to uphold those findings. (Ibid.) Further, "... we review the trial court's ruling, not the reasons given for it. If the ruling is correct, it will be affirmed even if it was reached by a mistaken line of reasoning. [Citation.]" (Oakdale Village Group v. Fong (1996) 43 Cal. App.4th 539, 547 [50 Cal. Rptr.2d 810].) II. Section 621 Section 621 defines the term "employee" for purposes of determining whether a claimant is entitled to unemployment compensation under the code.[2] Based on the express language of section 621, the EDD and UIAB properly used the following seven-part test to determine whether Folk was an employee under the statute. (2) Under that test, a salesperson qualifies as an employee only if: 1. The salesperson performs services on a full-time basis for the principal except for sideline sales activities performed on behalf of some other person; 2. The services consist of soliciting orders on behalf of and transmitting such orders to the principal; 3. The customers solicited are wholesalers, retailers, contractors, or operators of hotels, restaurants, or other similar establishments; 4. The orders solicited from such customers are for merchandise for resale or for supplies for use in the business operations of the customers; 5. The contract under which the salesperson performs contemplates that substantially all of the services are to be performed by him or her personally; *1047 6. The salesperson has no substantial investment in facilities used in connection with the performance of the services, other than in facilities for transportation; and 7. The service is performed as a part of a continuing relationship with the principal, not in a single transaction. (See In re Mission Furniture (1976) Cal. Unemp. Ins. App. Bd. Precedent Benefit Dec. No. P-T-329 (Mission Furniture).) A. 
Requirement of Full-time Engagement (3) Metric Man contends there is insufficient evidence to support the court's finding that Folk was required to work for Metric Man on a full-time basis. Section 621 does not require that the salesperson be contractually obligated to work full-time at soliciting qualifying orders; it requires only that the salesperson be "engaged upon a full-time basis" in such solicitation. (§ 621, subd. (c)(1)(B), italics added.) "Engaged" means "involved in activity" or "occupied." (Webster's New Collegiate Dict. (9th ed. 1989) p. 412.) Folk testified he worked an average of 10 hours a day, 5 days a week performing sales work for Metric Man. Folk's testimony constitutes substantial evidence that he was "engaged upon a full-time basis" in the solicitation of orders for Metric Man. The court did not err in finding Folk satisfied section 621's "full-time engagement" requirement. B. Requirement That Services Be Performed Personally (4a) Metric Man contends there is insufficient evidence to support the court's finding that Folk's contract with Metric Man contemplated Folk would personally perform substantially all of the services required under the contract. (5a) Because the Unemployment Insurance Code is remedial in nature, its provisions must be liberally construed to further its purpose of reducing the hardship of unemployment. (Gibson v. Unemployment Ins. Appeals Bd. (1973) 9 Cal.3d 494, 499 [108 Cal. Rptr. 1, 509 P.2d 945]; Tomlin v. Unemployment Ins. Appeals Bd. (1978) 82 Cal. App.3d 642, 646 [147 Cal. Rptr. 403].) "`Internal ambiguities and conflicts should be resolved to promote the objective exhibited by the entire plan.' [Citation.]" (Tomlin, supra, at p. 646.) (4b) The instant case presents an "ambiguity and conflict" as to whether the parties contemplated Folk would personally render services under the *1048 subject contract. 
The contract provided: "The Distributor [Folk] will use his best efforts to promote demand for and sale of the Company's products and will maintain adequate facilities and sales and personnel for the purpose." Although the requirement that Folk "maintain adequate ... personnel" suggests that persons other than Folk would be performing services under the contract, it could well have been the parties' mutual understanding that one full-time salesperson was "adequate personnel" for Folk's assigned territory at the time he entered into the contract. Considering the mandate to liberally construe the provisions of the Unemployment Insurance Code and resolve ambiguities and conflicts to promote the legislative objective of reducing the hardship of unemployment, we conclude the court reasonably found the subject contract contemplated Folk would personally perform substantially all of the services required under the contract. When Metric Man's representative Kevin Lolli was interviewed by the EDD, he stated that Folk satisfied all seven requirements for statutory employee status, including the requirement that services be performed personally. Folk testified it was his understanding when he signed the contract that he was expected to perform the sales work personally and would hire someone else to work for him only if he expanded his territory and his volume of sales increased enough to warrant it. The evidence indicated a single salesperson had previously worked Folk's territory, which further supports the finding the parties contemplated Folk alone would perform his obligations under the contract.[3] The court specifically found it was not economically feasible for Folk to hire any additional personnel in the performance of the contract.[4] The court cited evidence that despite Folk's full-time efforts, he was able to achieve only $8,000 per month in net sales, on which he was paid a 20 percent commission but no expenses. 
Further, there was evidence that Metric Man only expected Folk to achieve $10,000 per month in net sales. Had Folk met that expectation, his 20 percent commission minus expenses still would have been insufficient to enable him to hire anyone else to work for him. Viewed in light of these economic realities, the contract provision allowing Folk to *1049 hire additional personnel reflected a quixotic ideal rather than a realistic expectation of the parties. The court's reference to the provision as "illusory" was accurate. Finally, we note Metric Man hired Folk after he responded to its help-wanted advertisement in the newspaper, representing himself not as an independent contractor but as an applicant for a job. Folk testified he was required to sign the subject contract to get the job. Nothing in the record indicates Folk had any real bargaining power or choice as to the terms of the contract. (5b) As noted, the provisions of the Unemployment Insurance Code must be liberally construed and ambiguities regarding a claimant's entitlement to unemployment benefits should be resolved to further the code's objective of reducing the hardship of unemployment. Accordingly, a traveling salesperson's right to unemployment benefits should not be defeated by boilerplate contractual terms technically rendering the salesperson an independent contractor under section 621 when such terms are dictated by the employer and accepted by the salesperson without any meaningful choice. (Cf. S.G. Borello & Sons, Inc. v. Department of Industrial Relations (1989) 48 Cal.3d 341, 358-360 [256 Cal. Rptr. 543, 769 P.2d 399] [evidence that agricultural workers signed preprinted contracts providing they were not employees was insufficient to show they were independent contractors rather than employees under the Workers' Compensation Act because there was no indication they had any real choice of terms].) 
(4c) We conclude there is sufficient evidence supporting the court's finding that the parties' contract contemplated Folk would personally perform substantially all of the services. C. Requirement of Sales to Wholesalers and Retailers (6a) Metric Man next contends there is insufficient evidence to support the court's finding that the orders Folk solicited were predominantly from wholesalers and retailers. Metric Man relies heavily on Mission Furniture, supra, Cal. Unemp. Ins. App. Bd. Precedent Benefit Dec. No. P-T-329. Because the language of section 621 was taken verbatim from a federal statute in the Internal Revenue Code, the UIAB in Mission Furniture quoted two Internal Revenue Service rulings interpreting the language through various hypothetical examples of persons qualifying and not qualifying as employees under the federal statute. In one of the hypothetical examples unrelated to the facts of Mission Furniture, the salesperson sold automotive parts and rubber seal compound to automobile dealers, gasoline service stations, and automotive *1050 repair shops. The author stated that "`generally the operator of a repair shop or garage who has no [retail unit separate from the operator's regular business] is not a "retailer" or other customer of the type specified in [the statute].'" (Mission Furniture, supra, Cal. Unemp. Ins. App. Bd. Precedent Benefit Dec. No. P-T-329 at p. 14.) However, the author concluded the salesperson qualified as a statutory employee because he spent 80 percent of his working time soliciting orders from the requisite types of customers. (Ibid.) Metric Man urges us to follow the reasoning of this hypothetical example and conclude Folk was not a statutory employee because most of his customers were repair shops. We are not bound by tax precedent opinions of the UIAB, much less regulatory opinions of the Internal Revenue Service, as "the doctrine of stare decisis applies only to decisions of appellate courts...." (Fenske v. 
Board of Administration (1980) 103 Cal. App.3d 590, 596 [163 Cal. Rptr. 182].) We disagree with Mission Furniture to the extent it suggests that generally an automotive repair facility with no separate retail unit is not a "retailer" within the meaning of section 621. "Issues of statutory construction present questions of law, calling for independent review by an appellate court. [Citations.]" (Botello v. Shell Oil Co. (1991) 229 Cal. App.3d 1130, 1134 [280 Cal. Rptr. 535].) The term "retailer" is not defined in the Unemployment Insurance Code. (7) However, we can look to the definition and use of that term in other statutes as a guide to its intended meaning in section 621. (Quarterman v. Kefauver (1997) 55 Cal. App.4th 1366, 1371 [64 Cal. Rptr.2d 741]; Frediani v. Ota (1963) 215 Cal. App.2d 127, 133 [29 Cal. Rptr. 912].) Additionally, "[a] dictionary is a proper source to determine the usual and ordinary meaning of a word or phrase in a statute. [Citation.]" (E.W. Bliss Co. v. Superior Court (1989) 210 Cal. App.3d 1254, 1258, fn. 2 [258 Cal. Rptr. 783].) The definition of "retailer" in Revenue and Taxation Code section 6015 includes "[e]very seller who makes any retail sale or sales of tangible personal property...." (Rev. & Tax. Code, § 6015, subd. (a)(1).) Revenue and Taxation Code section 6007 defines "retail sale" as "a sale for any purpose other than resale in the regular course of business in the form of tangible property." The dictionary definition of "retail" is to sell "in small quantities directly to the ultimate consumer." (Webster's New Collegiate Dict., supra, at p. 1006.) (6b) Under these definitions, the operator of an automotive repair shop is clearly a retailer because the operator sells replacement parts, which are tangible property, in the regular course of business to ultimate consumers for *1051 purposes other than resale to a third party. 
Indeed, Business and Professions Code section 9884.8 requires an automotive repair dealer to list parts separately from service work on the customer's invoice and to separately list and total the prices and sales tax charged for the parts.[5] The policy underlying the Unemployment Insurance Act (Act) is to promote public and private enterprise by establishing "a system of unemployment insurance providing benefits for persons unemployed through no fault of their own, and to reduce involuntary unemployment and the suffering caused thereby to a minimum." (§ 100.) Section 621 furthers this legislative objective by extending the benefits of the Act to traveling salespersons and certain other independent contractors who do not qualify as employees under common law. The policy underlying the Act, and specifically section 621, would be ill served if salespersons dealing in resalable goods were denied the protection of the Act merely because most of their customers resell the goods incident to repair work only. For purposes of determining eligibility for unemployment benefits under section 621, there is no rational basis for distinguishing between such salespersons and those who sell predominantly to customers who put the goods on a shelf for resale. Accordingly, we hold that any automotive repair dealer that charges customers for parts in the regular course of business is a retailer within the meaning of section 621. In the instant case, Folk testified that about 20 percent of his customers were auto parts stores, about 5 percent were manufacturers that used the products he sold to them as component parts in some other product they were manufacturing, and the rest were automotive repair facilities. Thus, about 95 percent of Folk's customers were retailers under section 621. Folk's testimony sufficiently supports the court's finding that the orders he solicited were predominantly from "wholesalers or retailers" as required by section 621. III. 
Court's Exercise of Independent Judgment Metric Man contends there is "ample evidence in the record" to suggest the court failed to exercise its independent judgment over the administrative record. In support of this contention, Metric Man points to the absence of any reference to Mission Furniture in the judgment and argues the court *1052 failed to consider the effect of the testimony and exhibits presented at the hearing before the administrative law judge. Metric Man's contention is spurious. The fact the court did not find in Metric Man's favor does not suggest it failed to exercise its independent judgment over the evidence. The judgment expressly states: "The function of this Court is to exercise independent judgment on the evidence, and to determine whether the administrative agency's findings are supported by the evidence." There is no reason to conclude the court did otherwise. DISPOSITION The judgment is affirmed. Kremer, P.J., and Huffman, J., concurred. NOTES [*] Judge of the San Diego Superior Court, assigned by the Chief Justice pursuant to article VI, section 6 of the California Constitution. [1] All further statutory references are to the Unemployment Insurance Code unless otherwise specified. [2] In relevant part, section 621 provides: "`Employee' means all of the following: [¶] ... [¶] (c)(1) Any individual, other than an individual who is an employee under subdivision (a) or (b), who performs services for remuneration for any employing unit if the contract of service contemplates that substantially all of such services are to be performed personally by such individual either: [¶] ... 
[¶] (B) As a traveling or city salesperson, other than as an agent-driver or commission-driver, engaged upon a full-time basis in the solicitation on behalf of, and the transmission to, his or her principal (except for sideline sales activities on behalf of some other person) of orders from wholesalers, retailers, contractors, or operators of hotels, restaurants, or other similar establishments for merchandise for resale or supplies for use in their business operations. [¶] ... [¶] (2) An individual shall not be included in the term `employee' under the provisions of this subdivision if such individual has a substantial investment in facilities used in connection with the performance of such services, other than in facilities for transportation, or if the services are in the nature of a single transaction not part of a continuing relationship with the employing unit for whom the services are performed." [3] The EDD's representative testified the territory given Folk "was already an established territory which had been covered by a former salesman." Lolli testified that 10 separate distributors over a 20-year period occupied the territory assigned to Folk and that, "The distributor that Mr. Folk replaced[]" performed at the level Folk thought he could attain. [4] Noting Folk's limited income, the court stated: "[I]t does not appear that Mr. Folk could have hired an assistant, and, therefore, the language of the contract which expressly allowed Mr. Folk to utilize other personnel was illusory when compared to the reality of the situation." It is clear from the context of its ruling that the court was not using the term "illusory" as a legal term of art referring to a contract lacking mutuality of consideration, as Metric Man suggests, but rather in the more fundamental sense of "based on ... illusion." (Webster's New Collegiate Dict., supra, at p. 600.) 
[5] Folk noted in his testimony before the administrative law judge that the parts he sold to automotive repair facilities had to be itemized on repair orders and thus were effectively resold to the customers. He added: "Metric Man only sells to the wholesale trade. In other words, they have to have [a] resale number."
Q: Fetching a file path on a live website in .NET We are designing a website using .NET. The website has folders containing some files. The user enters the name of a file and we have to fetch data from that file. On our PC in Visual Studio, we were using StreamReader like this: StreamReader sr = new StreamReader("C:\\Users\\UserName\\Documents\\Visual Studio 2012\\Projects\\teach\\uploads\\Submission\\" + filename); But now we are going live with our website, and the problem is the file path. What should we give exactly? The files are in the /compiler/Submission folder. A: Use Server.MapPath. It could be something like this var yourPath = Server.MapPath("~/uploads") Where ~ is replaced by .NET with the root of your virtual directory.
#include "test.h" #ifdef HAVE_SYS_SOCKET_H #include <sys/socket.h> #endif #include <sys/time.h> #include <sys/types.h> /* * Source code in here hugely as reported in bug report 651464 by * Christopher R. Palmer. * * Use multi interface to get document over proxy with bad port number. * This caused the interface to "hang" in libcurl 7.10.2. */ CURLcode test(char *URL) { CURL *c; CURLcode ret=CURLE_OK; CURLM *m; fd_set rd, wr, exc; CURLMcode res; int running; int max_fd; curl_global_init(CURL_GLOBAL_ALL); c = curl_easy_init(); /* the point here being that there must not run anything on the given proxy port */ curl_easy_setopt(c, CURLOPT_PROXY, arg2); curl_easy_setopt(c, CURLOPT_URL, URL); curl_easy_setopt(c, CURLOPT_VERBOSE, 1); m = curl_multi_init(); res = curl_multi_add_handle(m, c); if(res && (res != CURLM_CALL_MULTI_PERFORM)) return 1; /* major failure */ do { do { res = curl_multi_perform(m, &running); } while (res == CURLM_CALL_MULTI_PERFORM); if(!running) { /* This is where this code is expected to reach */ int numleft; CURLMsg *msg = curl_multi_info_read(m, &numleft); fprintf(stderr, "Not running\n"); if(msg && !numleft) ret = 100; /* this is where we should be */ else ret = 99; /* not correct */ break; } fprintf(stderr, "running %d res %d\n", running, res); if (res != CURLM_OK) { fprintf(stderr, "not okay???\n"); ret = 2; break; } FD_ZERO(&rd); FD_ZERO(&wr); FD_ZERO(&exc); max_fd = 0; fprintf(stderr, "_fdset()\n"); if (curl_multi_fdset(m, &rd, &wr, &exc, &max_fd) != CURLM_OK) { fprintf(stderr, "unexpected failured of fdset.\n"); ret = 3; break; } fprintf(stderr, "select\n"); select(max_fd+1, &rd, &wr, &exc, NULL); fprintf(stderr, "loop!\n"); } while(1); curl_multi_remove_handle(m, c); curl_easy_cleanup(c); curl_multi_cleanup(m); return ret; }
Simply followed the yellow brick road. The final run to the summit is a weave of numerous use trails, ever wanting to avoid scree, but the top offers a number of viewpoints on solid rock of much of southern Oregon. Had a nice fall hike with Amy, Beth, and Andy. We awoke to rain but the weather soon cleared and the clouds made for some great scenery along the way. Various hiker use trails continue to be formed where the trees thin on the ridge, making the main path more difficult to follow than our first time ten years ago. The route is still straightforward, however. This was my 2nd time up the McLoughlin Trail and 3rd time on the summit. Great fall hike! I car-camped at the Mount McLoughlin TH before setting out at 5:20 AM for an early morning hike. 4h40m roundtrip (2h30m up, 10m summit, 2h00m down). I found the trail to be a little boring, but the final 1000' of scree-gain and the summit views made it worth the trip. No summit register, though. I had enough time to backtrack to Medford midday and see "The Dark Knight Rises" during its opening weekend. Then I had dinner with a peakbagging friend and his wife, before heading east for a few hikes in Lake County the following day. This was the first of seven Top 100 peaks summited in Oregon during a 3-1/2 day timeframe.
class Service::OnTime < Service string :ontime_url, :api_key white_list :ontime_url self.title = 'OnTime' def receive_push raise_config_error 'No OnTime URL to connect to.' if data['ontime_url'].to_s.empty? raise_config_error 'No API Key.' if data['api_key'].to_s.empty? http.url_prefix = data['ontime_url'] json = generate_json(payload) resp = http_get 'api/version' version = JSON.parse(resp.body)['data'] curvers = "#{version['major']}.#{version['minor']}" # Hash the data; it has to be a hex digest in order to have the same hash value as in .NET hash_data = Digest::SHA256.hexdigest(json + data['api_key']) if (version['major'] == 11 and version['minor'] >= 1) or (version['major'] == 12 and version['minor'] < 2) result = http_post 'api/github', :payload => json, :hash_data => hash_data, :source => :github, :service_version => 1.0, :ontime_version => curvers elsif (version['major'] == 12 and version['minor'] >= 2) or (version['major'] == 13 and version['minor'] < 3) result = http_post 'api/v1/github', :payload => json, :hash_data => hash_data, :source => :github, :service_version => 1.1, :ontime_version => curvers elsif (version['major'] == 13 and version['minor'] >= 3) or version['major'] > 13 http.headers['Content-Type'] = 'application/json' result = http_post("api/v2/github?hash_data=#{hash_data}&ontime_version=#{curvers}&service_version=2.0", json) else raise_config_error 'Unexpected API version. Please update to the latest version of OnTime to use this service.' end verify_response(result) end def verify_response(res) case res.status when 200..299 when 403, 401, 422 then raise_config_error("Invalid Credentials") when 404, 301, 302 then raise_config_error("Invalid URL") else raise_config_error("HTTP: #{res.status}") end end end
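The comment in the service above notes that the hash must be a hexadecimal digest so that the Ruby side and the .NET side of the integration compute the same value. A minimal sketch of that computation, with a hypothetical payload and API key standing in for the real GitHub payload and the user-configured key:

```ruby
require 'digest'
require 'json'

# Hypothetical stand-ins for the generated GitHub payload JSON and the
# OnTime API key from the service configuration.
json = JSON.generate({ 'ref' => 'refs/heads/main' })
api_key = 'secret-key'

# Digest::SHA256.hexdigest returns a lowercase hex string; formatting the
# SHA-256 bytes as lowercase hex on the .NET side yields the same value.
hash_data = Digest::SHA256.hexdigest(json + api_key)

puts hash_data # 64 lowercase hex characters
```

Hashing the concatenation of payload and key lets the receiving server verify both the payload integrity and knowledge of the shared key without transmitting the key itself.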
Loa loa and Onchocerca ochengi miRNAs detected in host circulation. A combination of deep sequencing and bioinformatics analysis enabled identification of twenty-two microRNA candidates of potential nematode origin in plasma from Loa loa-infected baboons and a further ten from the plasma of an Onchocerca ochengi-infected cow. The obtained data were compared to results from previous work on miRNA candidates from Dirofilaria immitis and O. volvulus found in host circulating blood, to examine the species specificity of the released miRNAs. None of the miRNA candidates was present in all four host-parasite combinations, and most were specific to only one of them. Eight candidate miRNAs were identical in full-length sequence in at least two different infections, while nine candidate miRNAs were similar but not identical across the four filarial species.
Biometric Solution With Multi-Spectral Sensor: An Innovation in Technology Companies have started adopting biometric systems for different functions: access control, attendance, time management and so on. The main purpose of biometric technology is to ensure security and safety for organizations, and to verify that employees arrive on time through attendance checks. Most companies adopt fingerprint scanners where modern biometric systems are concerned. Multi-Spectral Sensor Even with advanced technology, employees can face problems while scanning their fingerprints on a device. This has been overcome by the multi-spectral sensor: employees may have dirt, sweat or oil on their fingers, and with this technology scanning becomes easy regardless. The reasons to incorporate multi-spectral sensors are as follows. • Very fast and intuitive identification of the employee. • Reliable in almost any condition. • Provides strong security for the organization. Advantages of biometric authentication Basic access control is a matter of who, when, and where. A Biometric Attendance Machine helps in monitoring the exit and entry of employees, especially in organizations. The advantages are as follows: • Easy identification: With the help of biometric machines it is very easy to establish the identity of an individual, so an organization knows when its employees enter and exit. • Accountability: The organization gains trust and accountability of its employees; Biometric Accessories give clear information on questions of liability. • Very easy and completely safe: They are completely safe and, at the same time, very easy for employees to use.
• Saves time: Biometric identification is very quick and does not require much time. • User-friendly: Biometric systems are user-friendly and do their job quickly and reliably; a Biometric Machine in Delhi requires only a minimal amount of training, after which it works effectively. • Scalability: The systems are flexible and scalable, providing a wide range of security for large-scale access. • Security: One of the biggest advantages is the security they provide to companies; unauthorized access is blocked, with access allowed only to enrolled users. • Versatility: With the many different varieties of biometric systems available, they are suited to use anywhere that requires security.
Adesh Group started its journey in the field of professional education and health care with the establishment of its first venture, Adesh Hospital & Research Centre (P) Ltd., Muktsar, in 1991. With time, it has emerged as a 300-bed referral hospital with 30 specialist and super-specialist medical professionals and 65 qualified nursing and paramedical staff.
LA Pastor Held on $3M Bail for Alleged Relationship With 14-Year-Old Girl Gordon Solomon, pastor of Christ's Community Church in Inglewood, Calif., has been arrested by the Los Angeles Sheriff's Department and charged with seven felony counts of committing lewd acts upon a child for allegedly carrying on a two-year sexual relationship with a teen congregant. Solomon, the senior pastor of Christ's Community Church of Los Angeles, was arrested July 4 and charged last Friday. The minister, who is married, was being held on $3,000,000 bail. In addition to the felony lewd acts, Solomon was charged with one felony count of oral copulation of a person under the age of 14 and one felony count of continuous sexual abuse, according to the District Attorney's office. If convicted, the pastor could be sentenced to 26 years in prison. The authorities became aware of the accusations against Solomon, 50, when the young girl's mother discovered explicit text messages allegedly from the minister on the teen's cell phone. The mother immediately contacted the police, who later discovered a trail of emails and text messages allegedly between the 14-year-old and Solomon going back about two years. Officials suspect that other alleged victims may exist and have called on members of the public to contact the Los Angeles County Sheriff's Special Victims Bureau with any information by calling 1-877-710-LASD (5273) or sending an email to specialvictims@lasd.org. Members of Christ Community Church have expressed disappointment and shock at the news that their pastor, who reportedly asked for their prayers before the arrest, had been accused of such a crime. Church musician Walter Woodard, talking to the Los Angeles Times, said in reference to the prayer request, "I didn't think it was something like this. But I know I cannot put my faith in the man."
Woodard, 57, told the publication that Solomon worked with children during Bible classes, computer lessons and choir rehearsals but that he never witnessed any inappropriate behavior. Christ Community Church of Los Angeles was organized in 1994, is home to about 200 members and is described on its website as "a vibrant non-denominational congregation." The church is reportedly well known locally for its food pantry, which provides free food to the community twice a week, and for its clothing drive. According to its website, Christ Community Church distributed more than $1,000,000 worth of food and non-food items to the community last year.
Preoperative Analysis of Venous Anatomy Before Deep Inferior Epigastric Perforator Free-Flap Breast Reconstruction Using Ferumoxytol-enhanced Magnetic Resonance Angiography. Venous congestion after deep inferior epigastric artery perforator (DIEP) flap breast reconstruction is a complication that may be partially attributable to variations in venous abdominal wall anatomy. In previous work, we have shown that ferumoxytol may be used as a blood-pool contrast agent to perform high-resolution venous imaging. Our current aim was to use this technology to perform a detailed analysis of the venous anatomy among patients undergoing DIEP flap breast reconstruction. All patients undergoing DIEP flap reconstruction with preoperative ferumoxytol-enhanced magnetic resonance angiography (FE-MRA) were retrospectively reviewed. A detailed anatomic analysis of each abdominal wall on FE-MRA was performed before review of operative findings. Statistical analysis was used to determine venous characteristics associated with superficial inferior epigastric vein (SIEV) augmentation. From 2012 to 2016, 59 patients underwent preoperative FE-MRA. This resulted in imaging of 118 hemiabdomens and 99 flaps. Superficial-deep communication was identified in 117 of 118 hemiabdomens. Fifty (93%) of 59 patients had greater than 1 mm of venous communication of the superficial system across the midline. Reconstructed breasts were based on dominant medial row perforators in 82 (83%) of 99 flaps. The mean diameters of the SIEV and dominant venous perforator were 3.8 and 2.8 mm, respectively. Anatomic characteristics associated with SIEV augmentation included SIEV diameter (P = 0.01), dominant perforator diameter (P = 0.04), and the ratio between these 2 variables (P = 0.001). Ferumoxytol-enhanced magnetic resonance angiography provides excellent imaging of the venous system.
Anatomic characteristics such as the diameter of the SIEV and the diameter of the dominant perforator may be useful in determining which flaps require venous augmentation using the SIEV.
package network // Copyright (c) Microsoft and contributors. All rights reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // // See the License for the specific language governing permissions and // limitations under the License. // // Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 // Changes may cause incorrect behavior and will be lost if the code is // regenerated. import ( "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" "net/http" ) // VirtualNetworkGatewayConnectionsClient is the composite Swagger for Network // Client type VirtualNetworkGatewayConnectionsClient struct { ManagementClient } // NewVirtualNetworkGatewayConnectionsClient creates an instance of the // VirtualNetworkGatewayConnectionsClient client. func NewVirtualNetworkGatewayConnectionsClient(subscriptionID string) VirtualNetworkGatewayConnectionsClient { return NewVirtualNetworkGatewayConnectionsClientWithBaseURI(DefaultBaseURI, subscriptionID) } // NewVirtualNetworkGatewayConnectionsClientWithBaseURI creates an instance of // the VirtualNetworkGatewayConnectionsClient client. func NewVirtualNetworkGatewayConnectionsClientWithBaseURI(baseURI string, subscriptionID string) VirtualNetworkGatewayConnectionsClient { return VirtualNetworkGatewayConnectionsClient{NewWithBaseURI(baseURI, subscriptionID)} } // CreateOrUpdate creates or updates a virtual network gateway connection in // the specified resource group. This method may poll for completion. 
Polling // can be canceled by passing the cancel channel argument. The channel will be // used to cancel polling and any outstanding HTTP requests. // // resourceGroupName is the name of the resource group. // virtualNetworkGatewayConnectionName is the name of the virtual network // gateway connection. parameters is parameters supplied to the create or // update virtual network gateway connection operation. func (client VirtualNetworkGatewayConnectionsClient) CreateOrUpdate(resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters VirtualNetworkGatewayConnection, cancel <-chan struct{}) (<-chan VirtualNetworkGatewayConnection, <-chan error) { resultChan := make(chan VirtualNetworkGatewayConnection, 1) errChan := make(chan error, 1) if err := validation.Validate([]validation.Validation{ {TargetValue: parameters, Constraints: []validation.Constraint{{Target: "parameters.VirtualNetworkGatewayConnectionPropertiesFormat", Name: validation.Null, Rule: true, Chain: []validation.Constraint{{Target: "parameters.VirtualNetworkGatewayConnectionPropertiesFormat.VirtualNetworkGateway1", Name: validation.Null, Rule: true, Chain: []validation.Constraint{{Target: "parameters.VirtualNetworkGatewayConnectionPropertiesFormat.VirtualNetworkGateway1.VirtualNetworkGatewayPropertiesFormat", Name: validation.Null, Rule: true, Chain: nil}}}, {Target: "parameters.VirtualNetworkGatewayConnectionPropertiesFormat.VirtualNetworkGateway2", Name: validation.Null, Rule: false, Chain: []validation.Constraint{{Target: "parameters.VirtualNetworkGatewayConnectionPropertiesFormat.VirtualNetworkGateway2.VirtualNetworkGatewayPropertiesFormat", Name: validation.Null, Rule: true, Chain: nil}}}, {Target: "parameters.VirtualNetworkGatewayConnectionPropertiesFormat.LocalNetworkGateway2", Name: validation.Null, Rule: false, Chain: []validation.Constraint{{Target: "parameters.VirtualNetworkGatewayConnectionPropertiesFormat.LocalNetworkGateway2.LocalNetworkGatewayPropertiesFormat", Name: 
validation.Null, Rule: true, Chain: nil}}}, }}}}}); err != nil { errChan <- validation.NewErrorWithValidationError(err, "network.VirtualNetworkGatewayConnectionsClient", "CreateOrUpdate") close(errChan) close(resultChan) return resultChan, errChan } go func() { var err error var result VirtualNetworkGatewayConnection defer func() { resultChan <- result errChan <- err close(resultChan) close(errChan) }() req, err := client.CreateOrUpdatePreparer(resourceGroupName, virtualNetworkGatewayConnectionName, parameters, cancel) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "CreateOrUpdate", nil, "Failure preparing request") return } resp, err := client.CreateOrUpdateSender(req) if err != nil { result.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "CreateOrUpdate", resp, "Failure sending request") return } result, err = client.CreateOrUpdateResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "CreateOrUpdate", resp, "Failure responding to request") } }() return resultChan, errChan } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
func (client VirtualNetworkGatewayConnectionsClient) CreateOrUpdatePreparer(resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters VirtualNetworkGatewayConnection, cancel <-chan struct{}) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), "virtualNetworkGatewayConnectionName": autorest.Encode("path", virtualNetworkGatewayConnectionName), } const APIVersion = "2017-03-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( autorest.AsJSON(), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections/{virtualNetworkGatewayConnectionName}", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) return preparer.Prepare(&http.Request{Cancel: cancel}) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. func (client VirtualNetworkGatewayConnectionsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { return autorest.SendWithSender(client, req, azure.DoPollForAsynchronous(client.PollingDelay)) } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always // closes the http.Response Body. func (client VirtualNetworkGatewayConnectionsClient) CreateOrUpdateResponder(resp *http.Response) (result VirtualNetworkGatewayConnection, err error) { err = autorest.Respond( resp, client.ByInspecting(), azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), autorest.ByUnmarshallingJSON(&result), autorest.ByClosing()) result.Response = autorest.Response{Response: resp} return } // Delete deletes the specified virtual network Gateway connection. 
This method // may poll for completion. Polling can be canceled by passing the cancel // channel argument. The channel will be used to cancel polling and any // outstanding HTTP requests. // // resourceGroupName is the name of the resource group. // virtualNetworkGatewayConnectionName is the name of the virtual network // gateway connection. func (client VirtualNetworkGatewayConnectionsClient) Delete(resourceGroupName string, virtualNetworkGatewayConnectionName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { resultChan := make(chan autorest.Response, 1) errChan := make(chan error, 1) go func() { var err error var result autorest.Response defer func() { resultChan <- result errChan <- err close(resultChan) close(errChan) }() req, err := client.DeletePreparer(resourceGroupName, virtualNetworkGatewayConnectionName, cancel) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "Delete", nil, "Failure preparing request") return } resp, err := client.DeleteSender(req) if err != nil { result.Response = resp err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "Delete", resp, "Failure sending request") return } result, err = client.DeleteResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "Delete", resp, "Failure responding to request") } }() return resultChan, errChan } // DeletePreparer prepares the Delete request. 
func (client VirtualNetworkGatewayConnectionsClient) DeletePreparer(resourceGroupName string, virtualNetworkGatewayConnectionName string, cancel <-chan struct{}) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), "virtualNetworkGatewayConnectionName": autorest.Encode("path", virtualNetworkGatewayConnectionName), } const APIVersion = "2017-03-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( autorest.AsDelete(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections/{virtualNetworkGatewayConnectionName}", pathParameters), autorest.WithQueryParameters(queryParameters)) return preparer.Prepare(&http.Request{Cancel: cancel}) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. func (client VirtualNetworkGatewayConnectionsClient) DeleteSender(req *http.Request) (*http.Response, error) { return autorest.SendWithSender(client, req, azure.DoPollForAsynchronous(client.PollingDelay)) } // DeleteResponder handles the response to the Delete request. The method always // closes the http.Response Body. func (client VirtualNetworkGatewayConnectionsClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { err = autorest.Respond( resp, client.ByInspecting(), azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), autorest.ByClosing()) result.Response = resp return } // Get gets the specified virtual network gateway connection by resource group. // // resourceGroupName is the name of the resource group. // virtualNetworkGatewayConnectionName is the name of the virtual network // gateway connection. 
func (client VirtualNetworkGatewayConnectionsClient) Get(resourceGroupName string, virtualNetworkGatewayConnectionName string) (result VirtualNetworkGatewayConnection, err error) { req, err := client.GetPreparer(resourceGroupName, virtualNetworkGatewayConnectionName) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "Get", nil, "Failure preparing request") return } resp, err := client.GetSender(req) if err != nil { result.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "Get", resp, "Failure sending request") return } result, err = client.GetResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "Get", resp, "Failure responding to request") } return } // GetPreparer prepares the Get request. func (client VirtualNetworkGatewayConnectionsClient) GetPreparer(resourceGroupName string, virtualNetworkGatewayConnectionName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), "virtualNetworkGatewayConnectionName": autorest.Encode("path", virtualNetworkGatewayConnectionName), } const APIVersion = "2017-03-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( autorest.AsGet(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections/{virtualNetworkGatewayConnectionName}", pathParameters), autorest.WithQueryParameters(queryParameters)) return preparer.Prepare(&http.Request{}) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. 
func (client VirtualNetworkGatewayConnectionsClient) GetSender(req *http.Request) (*http.Response, error) { return autorest.SendWithSender(client, req) } // GetResponder handles the response to the Get request. The method always // closes the http.Response Body. func (client VirtualNetworkGatewayConnectionsClient) GetResponder(resp *http.Response) (result VirtualNetworkGatewayConnection, err error) { err = autorest.Respond( resp, client.ByInspecting(), azure.WithErrorUnlessStatusCode(http.StatusOK), autorest.ByUnmarshallingJSON(&result), autorest.ByClosing()) result.Response = autorest.Response{Response: resp} return } // GetSharedKey the Get VirtualNetworkGatewayConnectionSharedKey operation // retrieves information about the specified virtual network gateway connection // shared key through Network resource provider. // // resourceGroupName is the name of the resource group. // virtualNetworkGatewayConnectionName is the virtual network gateway // connection shared key name. func (client VirtualNetworkGatewayConnectionsClient) GetSharedKey(resourceGroupName string, virtualNetworkGatewayConnectionName string) (result ConnectionSharedKey, err error) { req, err := client.GetSharedKeyPreparer(resourceGroupName, virtualNetworkGatewayConnectionName) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "GetSharedKey", nil, "Failure preparing request") return } resp, err := client.GetSharedKeySender(req) if err != nil { result.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "GetSharedKey", resp, "Failure sending request") return } result, err = client.GetSharedKeyResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "GetSharedKey", resp, "Failure responding to request") } return } // GetSharedKeyPreparer prepares the GetSharedKey request. 
func (client VirtualNetworkGatewayConnectionsClient) GetSharedKeyPreparer(resourceGroupName string, virtualNetworkGatewayConnectionName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), "virtualNetworkGatewayConnectionName": autorest.Encode("path", virtualNetworkGatewayConnectionName), } const APIVersion = "2017-03-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( autorest.AsGet(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections/{virtualNetworkGatewayConnectionName}/sharedkey", pathParameters), autorest.WithQueryParameters(queryParameters)) return preparer.Prepare(&http.Request{}) } // GetSharedKeySender sends the GetSharedKey request. The method will close the // http.Response Body if it receives an error. func (client VirtualNetworkGatewayConnectionsClient) GetSharedKeySender(req *http.Request) (*http.Response, error) { return autorest.SendWithSender(client, req) } // GetSharedKeyResponder handles the response to the GetSharedKey request. The method always // closes the http.Response Body. func (client VirtualNetworkGatewayConnectionsClient) GetSharedKeyResponder(resp *http.Response) (result ConnectionSharedKey, err error) { err = autorest.Respond( resp, client.ByInspecting(), azure.WithErrorUnlessStatusCode(http.StatusOK), autorest.ByUnmarshallingJSON(&result), autorest.ByClosing()) result.Response = autorest.Response{Response: resp} return } // List the List VirtualNetworkGatewayConnections operation retrieves all the // virtual network gateways connections created. // // resourceGroupName is the name of the resource group. 
func (client VirtualNetworkGatewayConnectionsClient) List(resourceGroupName string) (result VirtualNetworkGatewayConnectionListResult, err error) { req, err := client.ListPreparer(resourceGroupName) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "List", nil, "Failure preparing request") return } resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "List", resp, "Failure sending request") return } result, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "List", resp, "Failure responding to request") } return } // ListPreparer prepares the List request. func (client VirtualNetworkGatewayConnectionsClient) ListPreparer(resourceGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } const APIVersion = "2017-03-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( autorest.AsGet(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections", pathParameters), autorest.WithQueryParameters(queryParameters)) return preparer.Prepare(&http.Request{}) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client VirtualNetworkGatewayConnectionsClient) ListSender(req *http.Request) (*http.Response, error) { return autorest.SendWithSender(client, req) } // ListResponder handles the response to the List request. The method always // closes the http.Response Body. 
func (client VirtualNetworkGatewayConnectionsClient) ListResponder(resp *http.Response) (result VirtualNetworkGatewayConnectionListResult, err error) { err = autorest.Respond( resp, client.ByInspecting(), azure.WithErrorUnlessStatusCode(http.StatusOK), autorest.ByUnmarshallingJSON(&result), autorest.ByClosing()) result.Response = autorest.Response{Response: resp} return } // ListNextResults retrieves the next set of results, if any. func (client VirtualNetworkGatewayConnectionsClient) ListNextResults(lastResults VirtualNetworkGatewayConnectionListResult) (result VirtualNetworkGatewayConnectionListResult, err error) { req, err := lastResults.VirtualNetworkGatewayConnectionListResultPreparer() if err != nil { return result, autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "List", nil, "Failure preparing next results request") } if req == nil { return } resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} return result, autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "List", resp, "Failure sending next results request") } result, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "List", resp, "Failure responding to next results request") } return } // ResetSharedKey the VirtualNetworkGatewayConnectionResetSharedKey operation // resets the virtual network gateway connection shared key for passed virtual // network gateway connection in the specified resource group through Network // resource provider. This method may poll for completion. Polling can be // canceled by passing the cancel channel argument. The channel will be used to // cancel polling and any outstanding HTTP requests. // // resourceGroupName is the name of the resource group. // virtualNetworkGatewayConnectionName is the virtual network gateway // connection reset shared key Name. 
// parameters is parameters supplied to the
// begin reset virtual network gateway connection shared key operation through
// network resource provider.
func (client VirtualNetworkGatewayConnectionsClient) ResetSharedKey(resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters ConnectionResetSharedKey, cancel <-chan struct{}) (<-chan ConnectionResetSharedKey, <-chan error) {
	resultChan := make(chan ConnectionResetSharedKey, 1)
	errChan := make(chan error, 1)
	if err := validation.Validate([]validation.Validation{
		{TargetValue: parameters,
			Constraints: []validation.Constraint{{Target: "parameters.KeyLength", Name: validation.Null, Rule: true,
				Chain: []validation.Constraint{{Target: "parameters.KeyLength", Name: validation.InclusiveMaximum, Rule: 128, Chain: nil},
					{Target: "parameters.KeyLength", Name: validation.InclusiveMinimum, Rule: 1, Chain: nil},
				}}}}}); err != nil {
		errChan <- validation.NewErrorWithValidationError(err, "network.VirtualNetworkGatewayConnectionsClient", "ResetSharedKey")
		close(errChan)
		close(resultChan)
		return resultChan, errChan
	}
	go func() {
		var err error
		var result ConnectionResetSharedKey
		defer func() {
			resultChan <- result
			errChan <- err
			close(resultChan)
			close(errChan)
		}()
		req, err := client.ResetSharedKeyPreparer(resourceGroupName, virtualNetworkGatewayConnectionName, parameters, cancel)
		if err != nil {
			err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "ResetSharedKey", nil, "Failure preparing request")
			return
		}

		resp, err := client.ResetSharedKeySender(req)
		if err != nil {
			result.Response = autorest.Response{Response: resp}
			err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "ResetSharedKey", resp, "Failure sending request")
			return
		}

		result, err = client.ResetSharedKeyResponder(resp)
		if err != nil {
			err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "ResetSharedKey", resp, "Failure responding to request")
		}
	}()
	return resultChan, errChan
}

// ResetSharedKeyPreparer prepares the ResetSharedKey request.
func (client VirtualNetworkGatewayConnectionsClient) ResetSharedKeyPreparer(resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters ConnectionResetSharedKey, cancel <-chan struct{}) (*http.Request, error) {
	pathParameters := map[string]interface{}{
		"resourceGroupName":                   autorest.Encode("path", resourceGroupName),
		"subscriptionId":                      autorest.Encode("path", client.SubscriptionID),
		"virtualNetworkGatewayConnectionName": autorest.Encode("path", virtualNetworkGatewayConnectionName),
	}

	const APIVersion = "2017-03-01"
	queryParameters := map[string]interface{}{
		"api-version": APIVersion,
	}

	preparer := autorest.CreatePreparer(
		autorest.AsJSON(),
		autorest.AsPost(),
		autorest.WithBaseURL(client.BaseURI),
		autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections/{virtualNetworkGatewayConnectionName}/sharedkey/reset", pathParameters),
		autorest.WithJSON(parameters),
		autorest.WithQueryParameters(queryParameters))
	return preparer.Prepare(&http.Request{Cancel: cancel})
}

// ResetSharedKeySender sends the ResetSharedKey request. The method will close the
// http.Response Body if it receives an error.
func (client VirtualNetworkGatewayConnectionsClient) ResetSharedKeySender(req *http.Request) (*http.Response, error) {
	return autorest.SendWithSender(client,
		req,
		azure.DoPollForAsynchronous(client.PollingDelay))
}

// ResetSharedKeyResponder handles the response to the ResetSharedKey request. The method always
// closes the http.Response Body.
func (client VirtualNetworkGatewayConnectionsClient) ResetSharedKeyResponder(resp *http.Response) (result ConnectionResetSharedKey, err error) {
	err = autorest.Respond(
		resp,
		client.ByInspecting(),
		azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted),
		autorest.ByUnmarshallingJSON(&result),
		autorest.ByClosing())
	result.Response = autorest.Response{Response: resp}
	return
}

// SetSharedKey the Put VirtualNetworkGatewayConnectionSharedKey operation sets
// the virtual network gateway connection shared key for passed virtual network
// gateway connection in the specified resource group through Network resource
// provider. This method may poll for completion. Polling can be canceled by
// passing the cancel channel argument. The channel will be used to cancel
// polling and any outstanding HTTP requests.
//
// resourceGroupName is the name of the resource group.
// virtualNetworkGatewayConnectionName is the virtual network gateway
// connection name. parameters is parameters supplied to the Begin Set Virtual
// Network Gateway connection Shared key operation through Network resource
// provider.
func (client VirtualNetworkGatewayConnectionsClient) SetSharedKey(resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters ConnectionSharedKey, cancel <-chan struct{}) (<-chan ConnectionSharedKey, <-chan error) {
	resultChan := make(chan ConnectionSharedKey, 1)
	errChan := make(chan error, 1)
	if err := validation.Validate([]validation.Validation{
		{TargetValue: parameters,
			Constraints: []validation.Constraint{{Target: "parameters.Value", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil {
		errChan <- validation.NewErrorWithValidationError(err, "network.VirtualNetworkGatewayConnectionsClient", "SetSharedKey")
		close(errChan)
		close(resultChan)
		return resultChan, errChan
	}
	go func() {
		var err error
		var result ConnectionSharedKey
		defer func() {
			resultChan <- result
			errChan <- err
			close(resultChan)
			close(errChan)
		}()
		req, err := client.SetSharedKeyPreparer(resourceGroupName, virtualNetworkGatewayConnectionName, parameters, cancel)
		if err != nil {
			err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "SetSharedKey", nil, "Failure preparing request")
			return
		}

		resp, err := client.SetSharedKeySender(req)
		if err != nil {
			result.Response = autorest.Response{Response: resp}
			err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "SetSharedKey", resp, "Failure sending request")
			return
		}

		result, err = client.SetSharedKeyResponder(resp)
		if err != nil {
			err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "SetSharedKey", resp, "Failure responding to request")
		}
	}()
	return resultChan, errChan
}

// SetSharedKeyPreparer prepares the SetSharedKey request.
func (client VirtualNetworkGatewayConnectionsClient) SetSharedKeyPreparer(resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters ConnectionSharedKey, cancel <-chan struct{}) (*http.Request, error) {
	pathParameters := map[string]interface{}{
		"resourceGroupName":                   autorest.Encode("path", resourceGroupName),
		"subscriptionId":                      autorest.Encode("path", client.SubscriptionID),
		"virtualNetworkGatewayConnectionName": autorest.Encode("path", virtualNetworkGatewayConnectionName),
	}

	const APIVersion = "2017-03-01"
	queryParameters := map[string]interface{}{
		"api-version": APIVersion,
	}

	preparer := autorest.CreatePreparer(
		autorest.AsJSON(),
		autorest.AsPut(),
		autorest.WithBaseURL(client.BaseURI),
		autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections/{virtualNetworkGatewayConnectionName}/sharedkey", pathParameters),
		autorest.WithJSON(parameters),
		autorest.WithQueryParameters(queryParameters))
	return preparer.Prepare(&http.Request{Cancel: cancel})
}

// SetSharedKeySender sends the SetSharedKey request. The method will close the
// http.Response Body if it receives an error.
func (client VirtualNetworkGatewayConnectionsClient) SetSharedKeySender(req *http.Request) (*http.Response, error) {
	return autorest.SendWithSender(client,
		req,
		azure.DoPollForAsynchronous(client.PollingDelay))
}

// SetSharedKeyResponder handles the response to the SetSharedKey request. The method always
// closes the http.Response Body.
func (client VirtualNetworkGatewayConnectionsClient) SetSharedKeyResponder(resp *http.Response) (result ConnectionSharedKey, err error) {
	err = autorest.Respond(
		resp,
		client.ByInspecting(),
		azure.WithErrorUnlessStatusCode(http.StatusCreated, http.StatusOK),
		autorest.ByUnmarshallingJSON(&result),
		autorest.ByClosing())
	result.Response = autorest.Response{Response: resp}
	return
}
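The long-running operations above (ResetSharedKey, SetSharedKey) all follow the same autorest channel pattern: buffered result and error channels of size 1, and a goroutine that always sends on both channels and closes them via defer. A minimal, self-contained sketch of that pattern and how a caller consumes it — the `doAsync` name and the string payload are illustrative, not part of the SDK:

```go
package main

import (
	"errors"
	"fmt"
)

// doAsync mirrors the generated client pattern: buffered channels of
// capacity 1, and a goroutine whose deferred function always delivers
// exactly one result and one (possibly nil) error, then closes both.
func doAsync(fail bool, cancel <-chan struct{}) (<-chan string, <-chan error) {
	resultChan := make(chan string, 1)
	errChan := make(chan error, 1)
	go func() {
		var err error
		var result string
		defer func() {
			resultChan <- result
			errChan <- err
			close(resultChan)
			close(errChan)
		}()
		// Honour cancellation, as the SDK does via the cancel channel.
		select {
		case <-cancel:
			err = errors.New("canceled")
			return
		default:
		}
		if fail {
			err = errors.New("request failed")
			return
		}
		result = "shared-key-set"
	}()
	return resultChan, errChan
}

func main() {
	// A caller reads once from each channel, in either order, without
	// blocking forever, because both channels are buffered.
	resCh, errCh := doAsync(false, nil)
	fmt.Println(<-resCh, <-errCh) // prints: shared-key-set <nil>
}
```

Because both channels are buffered with capacity 1, the goroutine never blocks on send even if the caller reads only one of the two channels.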
Essentially a decentralized database, blockchain technology has taken the world by storm and is often presented as a magical solution to a wide range of problems. Applied to the energy sector, this decentralized ledger technology enables peer-to-peer energy trading, and with it a fundamental shift in how energy is distributed. This disruptive technology is changing the way energy is produced, distributed and consumed. TerraGreen has designed a platform to trade energy using blockchain technology, paving the way to a greener and cleaner world by tackling the challenges faced by the renewable energy industry. The cryptographic platform connects users in one place to produce, supply, trade, and consume renewable energy. The key feature of the project is the TerraGreen Coin, which will aid in the micro-management of biomass waste and its conversion into renewable energy products. The coin is energy-efficient in the sense that its value is based on the amount of renewable energy produced and delivered to end users; the concept is built around an incentive mechanism for trading energy. Green currencies such as Carbon coin, Ergo, Solar coin and Genercoin address global environmental concerns, but they fail to cover the discrepancies in the distribution of energy from renewable energy technologies. The focus of the TerraGreen Coin is therefore mainly on the distribution of energy outputs to ensure decentralization. A monetary incentive system on the platform rewards users for participating in the production of high-value renewable energy products from biomass waste. Purchasing the currency will increase its market value over time, and it can be used to obtain discounts on goods and services from renewable energy industries. The currency is backed and secured by smart contracts built on the SHA-384 algorithm.
The founding members and advisors of the project have years of experience in the renewable energy industry and in blockchain technology, and are determined to design a sustainable economic system based on renewable energy resources. Given the staggering statistics and the dreadful state of waste management in most countries, TerraGreen strives to design an ideal system for a clean, green and sustainable community. This is only possible with the help of blockchain technology, whose decentralized nature enables a transparent, disintermediated trade of energy.
A pilot toxicology study of single-walled carbon nanotubes in a small sample of mice. Single-walled carbon nanotubes are currently under evaluation in biomedical applications, including in vivo delivery of drugs, proteins, peptides and nucleic acids (for gene transfer or gene silencing), in vivo tumour imaging and tumour targeting of single-walled carbon nanotubes as an anti-neoplastic treatment. However, concerns about the potential toxicity of single-walled carbon nanotubes have been raised. Here we examine the acute and chronic toxicity of functionalized single-walled carbon nanotubes when injected into the bloodstream of mice. Survival, clinical and laboratory parameters reveal no evidence of toxicity over 4 months. Upon killing, careful necropsy and tissue histology show age-related changes only. Histology and Raman microscopic mapping demonstrate that functionalized single-walled carbon nanotubes persisted within liver and spleen macrophages for 4 months without apparent toxicity. Although this is a preliminary study with a small group of animals, our results encourage further confirmation studies with larger groups of animals.
---
author:
- 'Thomas Siegert [^1]'
- Alain Coc
- Laura Delgado
- Roland Diehl
- Jochen Greiner
- Margarita Hernanz
- Pierre Jean
- Jordi Jose
- Paolo Molaro
- 'Moritz M. M. Pleintinger'
- Volodymyr Savchenko
- Sumner Starrfield
- Vincent Tatischeff
- Christoph Weinberger
date: 'Received 21 Dec 2017; accepted 15 Mar 2018'
title: 'Gamma-Ray Observations of Nova Sgr 2015 No. 2 with INTEGRAL'
---

[INTEGRAL observed Nova Sgr 2015 No. 2 (V5668 Sgr) around the time of its optical emission maximum on March 21, 2015. Studies at UV wavelengths showed spectral lines of freshly produced $^7\mathrm{Be}$. This could also be measurable in gamma-rays, at 478 keV, from the decay to $^7\mathrm{Li}$. Novae are also expected to synthesise $^{22}\mathrm{Na}$, which decays to $^{22}\mathrm{Ne}$, emitting a 1275 keV photon. About one week before the optical maximum, a strong gamma-ray flash on time-scales of hours is expected from short-lived radioactive nuclei, such as $^{13}{\mathrm{N}}$ and $^{18}{\mathrm{F}}$. These nuclei are $\beta^+$-unstable and should yield emission up to 511 keV, which, however, has never been observed from any nova.]{} [The spectrometer SPI aboard INTEGRAL pointed towards V5668 Sgr by chance. We use these observations to search for possible gamma-ray emission of decaying $^7\mathrm{Be}$, and to directly measure the synthesised mass during explosive burning. We also aim to constrain possible burst-like emission days to weeks before the optical maximum using the SPI anticoincidence shield (ACS), i.e. at times when SPI was not pointing at the source.]{} [We extract spectral and temporal information to determine the fluxes of gamma-ray lines at 478 keV, 511 keV, and 1275 keV. Using distance and radioactive decay, a measured flux converts into the amount of $^7\mathrm{Be}$ produced in the nova. The SPI-ACS rates are analysed for burst-like emission using a nova model light-curve.
For the obtained nova flash candidate events, we discuss possible origins using directional, spectral, and temporal information.]{} [No significant excess for the 478 keV, the 511 keV, or the 1275 keV lines is found. Our upper limits ($3\sigma$) on the synthesised $^7\mathrm{Be}$ and $^{22}\mathrm{Na}$ mass depend on the uncertainties of the distance to V5668 Sgr: the $^7\mathrm{Be}$ mass is constrained to less than $4.8 \times 10^{-9}\,(d/{\mathrm{kpc}})^2\,{\mathrm{M_{\odot}}}$, and the $^{22}\mathrm{Na}$ mass to less than $2.4 \times 10^{-8}\,(d/{\mathrm{kpc}})^2\,{\mathrm{M_{\odot}}}$. For the $^7\mathrm{Be}$ mass estimate from UV studies to hold, the distance to V5668 Sgr must be larger than 1.2 kpc ($3\sigma$). During the three weeks before the optical maximum, we find 23 burst-like events in the ACS rate, of which six could possibly be associated with V5668.]{}

Introduction
============

Nova Sagittarii 2015 No. 2 / V5668 Sgr
--------------------------------------

On 15 March 2015, Nova Sagittarii 2015 No. 2 (V5668 Sgr, short V5668) was detected by @Seach2015_V5668 [galactic coordinates $(l_0/b_0) = (5.38^{\circ}/-9.87^{\circ})$]. After a six-day rise in brightness, V5668 reached its optical maximum on March 21.67 UT, corresponding to $T_0 = \mathrm{MJD}\,57102.67$, with a V-band magnitude of 4.32 mag. Two independent studies [@Molaro2016_V5668; @Tajitsu2016_V5668] measured blue-shifted spectral lines of singly ionised Be II at wavelengths around 313 nm. The Doppler-velocities of these UV line profiles range between $-700$ and $-2200~{\mathrm{km~s^{-1}}}$. Based on a canonical ejected mass of $10^{-5}~{\mathrm{M_{\odot}}}$ for novae, the measured abundance ratios allowed the mass of synthesised and ejected $^7\mathrm{Be}$ in V5668 to be estimated from these UV measurements as $7\times10^{-9}~{\mathrm{M_{\odot}}}$ [@Molaro2016_V5668].
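For orientation, the conversion between a measured 478 keV line flux and a synthesised $^7\mathrm{Be}$ mass, which underlies estimates and limits of this kind, can be sketched as follows (our notation, not taken from the cited works: $d$ is the source distance, $F_{478}(t)$ the line flux measured at time $t$ after the explosion, $\tau$ the $^7\mathrm{Be}$ mean lifetime, $p$ the branching ratio into the 478 keV line, and $m_{^7\mathrm{Be}}$ the mass of one $^7\mathrm{Be}$ nucleus):

$$N_{^7\mathrm{Be}}(0) = \frac{4\pi d^2\,\tau}{p}\,F_{478}(t)\,e^{t/\tau}{\mathrm{,}}\qquad M_{^7\mathrm{Be}} = N_{^7\mathrm{Be}}(0)\,m_{^7\mathrm{Be}}{\mathrm{,}}$$

so that any mass estimate or upper limit derived from a line flux scales with the squared distance, $\propto (d/{\mathrm{kpc}})^2$, as quoted above.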
Novae have only recently been verified as significant sources of $^7\mathrm{Li}$ in the Galaxy by detections of $^7\mathrm{Li}$ I at $6708~{\mathrm{\AA{}}}$ in Nova Centauri 2013 [V1369 Cen, @Izzo2015_novaeLi], and of the $^7\mathrm{Be}$ II doublet at $313.0583~{\mathrm{nm}}$ and $313.1228~{\mathrm{nm}}$ in Nova Delphini 2013 [V339 Del, @Tajitsu2015_novaBe7]. The distance to V5668 is not precisely known. Based on expansion velocity measurements and geometrical considerations, @Banerjee2016_v5668 estimated a distance[^2] of $1.54~{\mathrm{kpc}}$. @Jack2017_V5668 derived a similar estimate of 1.6 kpc, using a distance-modulus approach, but without providing uncertainties. The “maximum magnitude vs. rate of decline” (MMRD) method to determine the distance to novae, as reviewed by @dellaValle1995_nova [see also @Schmidt1957_novae2], provides similar distance estimates, but with an intrinsically large uncertainty. Because the luminosity of a nova outburst depends on more physical parameters than the white dwarf mass alone [@dellaValle1995_nova], the distance to a single nova, as opposed to a population, might be uncertain by $\approx50\%$. V5668 Sgr has been observed at many wavelengths. In mid- [e.g. @Gehrz2015_v5668SOFIA with SOFIA/FORCAST] and near-infrared [e.g. @Banerjee2016_v5668 using NICS] observations, clear signatures of dust have been seen, as well as a strong detection of CO in emission [@Banerjee2015_v5668CO]. The total dust mass produced by V5668 is estimated to be about $2.7\times10^{-7}~{\mathrm{{M\ensuremath{_\odot}\xspace}}}$ [@Banerjee2016_v5668], so that the mass of the gaseous component of the ejecta is between $2.7$ and $5.4\times10^{-5}~{\mathrm{{M\ensuremath{_\odot}\xspace}}}$.
Note that the ejecta mass is difficult to estimate, because the absolute flux measurements have to be scaled by the distance to a nova, which is model-dependent and often uncertain by several tens of per cent, and also because there is no adequate observable to estimate the accretion rate. In the optical, V5668 was monitored for more than 200 days after the outburst [@Jack2017_V5668 using TIGRE], finding transient Balmer and Paschen lines of H, different Fe II lines, as well as N I and N II. In general, the spectral shapes change during the evolution of the optical light curve, whereas after the deep minimum at day 110 after the optical maximum, clear double-peak profiles are observed. The spread in Doppler-velocities (expanding shell velocity) is up to $2000~{\mathrm{km~s^{-1}}}$. About 95 days after the outburst, the nova was detected in soft X-rays [@Page2015_V5668Xrays_a with Swift/XRT], softening and brightening up to $(6.0\pm0.5)\times10^{-2}~{\mathrm{cts~s^{-1}}}$ until day 161. During this time, the apparent H-column density towards V5668 decreased from $N_H = (4.7^{+1.6}_{-1.2})\times10^{22}~{\mathrm{cm^{-2}}}$ to $(0.4\pm0.1)\times10^{22}~{\mathrm{cm^{-2}}}$, while the plasma temperature increased from $1.3^{+0.5}_{-0.3}~{\mathrm{keV}}$ to $3.4^{+1.4}_{-0.8}~{\mathrm{keV}}$. This spectral change may be connected to the destruction of dust by soft X-ray and UV emission [@Page2015_V5668Xrays_a Swift/UVOT]. High-energy gamma-rays have also been observed from V5668 [0.1-100 GeV @Cheung2016_V5668Fermi using Fermi/LAT], beginning about two days after the optical maximum, as also measured for similar nova outbursts [e.g. @Hays2013_novadelFermi; @Cheung2013_novacenFermi; @Ackermann2014_novaeFermi].
In the $>100$ MeV band, V5668 was visible for 55 days with an average flux of $\approx 10^{-7}~{\mathrm{ph~cm^{-2}~s^{-1}}}$, fainter than observed for this type of source [@Ackermann2014_novaeFermi note that fewer than ten gamma-ray novae had been detected up to the write-up of this paper]. Even though the gamma-ray emission appears sporadic and possibly delayed with respect to the optical emission, the hard gamma-ray flux seems to correlate with the optical light curve. In fact, for the brightest gamma-ray nova detected so far, ASASSN-16ma [@Luckas2016_ASASSN-16ma], the optical light-curve strongly correlates with the 0.1-300 GeV flux during the decline phase. The gamma-ray-to-optical-flux ratio remained constant at a value of $\approx 0.002$. This tight correlation led the authors to cast doubt on the standard model for optical nova emission. Nuclear burning on the surface of the white dwarf (see Sec. \[sec:explosivenova\]) results in freshly synthesised nuclei, which are ejected, decay and/or de-excite, and emit MeV gamma-rays. Some of these gamma-rays may undergo Compton scattering, which powers a continuum with a low-energy cut-off around 20-30 keV, due to photo-electric absorption [@Gomez-Gomar1998_novae]. The optical emission, on the other hand, is mainly thermal radiation from the heated gas, which is now cooling as it expands. Even though the nova explosion is triggered by nucleosynthesis reactions, the resulting $\sim\,{\mathrm{MeV}}$ gamma-rays provide only a small amount of energy during the envelope expansion. The visual maximum therefore corresponds to the maximum expansion of the photosphere. After the optical peak, the photosphere recedes, and higher temperatures become visible, so that the peak moves to UV wavelengths. But this scenario cannot explain the high-energy GeV gamma-ray emission, which is generally attributed to shock-accelerated particles, let alone the correlation between the optical and GeV emission.
Instead, the authors propose that the optical emission also originates predominantly in the shocks, rather than in the photosphere above. This automatically explains the simultaneous emission, and also the questionable super-Eddington luminosities observed for many novae, because shocks may not be treated as hydrostatic atmospheres [see also @Martin2017_novagammas]. In the case of V5668, the Fermi/LAT $>100$ MeV light-curve also seems to show multiple peaks, so that this nova might have had multiple mass ejections, and thus there might be multiple onsets of nucleosynthesis. If a local maximum in the optical or $>100$ MeV light-curve indeed corresponds to an additional mass ejection, the estimates of this and other works might be more uncertain, as each individual outburst might eject its own amount of mass. In this paper, we focus on a single time origin of explosive burning, and discuss only one major mass ejection. This shock scenario also predicts hard X-ray emission at later times, contemporaneous with the GeV emission, depending on the nova outflow properties, such as density, mass, velocity, and the resulting optical depth. The peak of such an additional X-ray component would be expected between 30 and 210 days after the optical maximum [@Metzger2014_novashocks]. These non-thermal X-rays would be produced by optical (eV) photons being Compton up-scattered by GeV particles. Following @Metzger2015_novashocks, an order-of-magnitude estimate of the expected X-ray flux can be derived from the measured GeV flux $F_{GeV}$. This assumes a fraction $f_X$ of the high-energy gamma-ray luminosity to be radiated away in X-rays of energy $E_X$, so that the resulting X-ray flux is $F_X \sim f_X F_{GeV} / E_X$. For example, an $f_X$-value of 0.01 would predict a hard X-ray flux of the order of $10^{-6}~{\mathrm{ph~cm^{-2}~s^{-1}~keV^{-1}}}$ for V5668 Sgr at about 50 keV.
The influence of such a plausible additional flux at hard X-rays and soft gamma-rays on the INTEGRAL measurements will be discussed in Sec. \[sec:nucsysejecta\].

Explosive nucleosynthesis in novae {#sec:explosivenova}
----------------------------------

The nova explosion is typically explained by a thermonuclear runaway on the surface of a white dwarf. At a mass accretion rate of $10^{-10}$-$10^{-9}~{\mathrm{{M\ensuremath{_\odot}\xspace}~yr^{-1}}}$, the accreted matter becomes degenerate due to the strong gravitational field of the white dwarf. Once the ignition conditions for hydrogen burning are met, nucleosynthesis starts. Even though the envelope is initially degenerate, once the temperature in the envelope exceeds $3\times10^{7}~{\mathrm{K}}$, degeneracy is lifted in the whole envelope [@Jose2016_stellarexplosions]. However, it is important to stress that a nova outburst likely occurs because of a hydrogen thin-shell instability [@Schwarzschild1965_thermonuclear_stability; @Yoon2004_thermonuclear_stability], for which degeneracy is not required at all. In general, nuclei up to $A \approx 40$ are produced and ejected in a classical nova explosion. In this scenario, hydrogen burning proceeds through the CNO-cycle, i.e. it is required that such seed nuclei are present. During the CNO burning, short-lived nuclei (e.g. $^{13}{\mathrm{N}}$, $^{14}{\mathrm{O}}$, $^{15}{\mathrm{O}}$, or $^{17}{\mathrm{F}}$, with half-lives of 597, 71, 122, and 65 s, respectively) are produced, which undergo $\beta^+$-decay and act as an energy source for the expansion of the outer, low-density shell [e.g. @Starrfield1972_novaCNO; @Jose2006_novae]. For gamma-ray observations, mainly two species are important as they emit the strongest mono-energetic gamma-ray lines, $^7\mathrm{Be}$ and $^{22}\mathrm{Na}$.
In addition, $^{13}{\mathrm{N}}$ and $^{18}{\mathrm{F}}$, from the family of short-lived $\beta^+$-unstable nuclei, may be observable due to positron annihilation as a 511 keV gamma-ray flash (see below). \ The isotope $^7\mathrm{Be}$ is thought to be produced in such a nova explosion via the reaction $^3\mathrm{He}(\alpha,\gamma)^7\mathrm{Be}$. In accelerator experiments, $^7\mathrm{Be}$ is produced predominantly in the ground state, ${\mathrm{1/2^-}}$. However, about 40% of the time, it is created in its first excited nuclear state, ${\mathrm{3/2^-}}$, at 429 keV [@Parker1963_7Be; @diLeva2009_429keV], so that a 429 keV line could also be expected if the nova envelope were not opaque at this time[^3]. $^7\mathrm{Be}$ decays with a half-life of $T_{1/2}^{7\mathrm{Be}} = 53.12~{\mathrm{d}}$ (characteristic time of $\tau^{7{\mathrm{Be}}} = T_{1/2}^{7\mathrm{Be}} / \ln(2) = 76.64~\mathrm{d}$) via electron capture to $^7\mathrm{Li}$. This daughter nucleus de-excites to its ground state after 73 fs via the emission of a gamma-ray at $E_0^{7{\mathrm{Be}}} = 477.60~{\mathrm{keV}}$ with a branching ratio of $p^{7{\mathrm{Be}}} = 10.52\%$ [@Firestone03]. Depending on the nova model assumptions, the yield of $^7{\mathrm{Be}}$ in CO novae ranges between $10^{-11}$ and several $10^{-9}~{\mathrm{M_{\odot}}}$ [e.g. @Hernanz1996_novae7Li]. In general, there are two main types of classical novae, CO- and ONe-types. On the one hand, this classification is based on the final composition of the white dwarf, reflecting its mass and hence the burning stages that the progenitor star underwent. Here, a CO white dwarf comes from a star that has only burnt H and He, whereas the progenitor of an ONe white dwarf also started C burning. Conversely, it is also a plausible assumption, for example in nova model simulations, that white dwarfs below $\sim 1.1~{\mathrm{{M\ensuremath{_\odot}\xspace}}}$ are CO-rich, while more massive ones are of ONe type.
On the other hand, it is based on observational properties of novae pointing to exactly this white dwarf composition, for example by measuring emission or absorption lines (see above). In a CO-type nova, the peak temperature allows nuclear burning up to oxygen, with only traces of heavier nuclei. On the other hand, because ONe white dwarfs have heavier seed nuclei, such as $^{20}{\mathrm{Ne}}$ or $^{24}{\mathrm{Mg}}$, in their chemical composition, ONe novae may reach temperatures high enough to also produce silicon or argon. In the latter case, large amounts of $^{22}{\mathrm{Na}}$ are expected to be produced via the reaction chains $^{20}\mathrm{Ne}(p,\gamma)^{21}\mathrm{Na}(p,\gamma)^{22}\mathrm{Mg}(\beta^+)^{22}\mathrm{Na}$ or $^{20}\mathrm{Ne}(p,\gamma)^{21}\mathrm{Na}(\beta^+)^{21}\mathrm{Ne}(p,\gamma)^{22}\mathrm{Na}$, followed by the subsequent decay ($T_{1/2}^{22\mathrm{Na}} = 2.6~\mathrm{yr}$) to an excited state of $^{22}{\mathrm{Ne}}$. $^{22}{\mathrm{Ne}}$ then de-excites by the emission of an $E_0^{22{\mathrm{Na}}} = 1274.53~{\mathrm{keV}}$ gamma-ray for $p^{22{\mathrm{Na}}} = 99.96\%$ of the time [@Firestone03]. Although this is not expected for CO novae [@Hernanz2014_nova], we perform a search for $^{22}{\mathrm{Na}}$ in V5668 (see Sec. \[sec:spianalysis\]). Theoretical studies [e.g. @Clayton1974_novae; @Leising1987_novae511; @Jose2001_novaegamma; @Jose2003_novae; @Jose2006_novae; @Hernanz2006_novae; @Hernanz2014_nova] predict a gamma-ray flash around one week before the optical maximum due to short-lived isotopes, such as $^{13}{\mathrm{N}}$ and $^{18}{\mathrm{F}}$, which decay via positron emission. The true time lag between the initial explosive burning, which results in $\lesssim{\mathrm{MeV}}$ gamma-ray emission, and the optical maximum is fundamentally unknown. Initially, when the nova envelope starts to expand, it is optically thick, so that low-energy photons cannot escape.
Depending on the nova model set-up, the lag is determined by the time of the maximum temperature (as provided by theory) and the largest extent of the photosphere (optical maximum), resulting in a temporal offset of between a few days and two weeks. Throughout this paper, we use a canonical value for the onset of explosive burning of $T_0 - 7~{\mathrm{d}} = \mathrm{MJD}\,57095.67$ when estimating line fluxes, and relax this constraint when searching for the gamma-ray flash (cf. Secs. \[sec:spianalysis\] and \[sec:acsanalysis\]). The produced positrons may annihilate quickly, producing a strong gamma-ray line at 511 keV and a hard X-ray / soft gamma-ray continuum up to 511 keV. For a nova distance of 1 kpc, the peak flux in the 75-511 keV band, including the annihilation line, may be as high as $0.2~{\mathrm{ph~cm^{-2}~s^{-1}}}$ [e.g. @Hernanz2014_nova]. Thus, independent of the direction of a nova with respect to INTEGRAL, this may be seen up to distances of several kpc [@Jean1999_novasearch]. After the peak, the flux declines sharply, and the flash may only be seen by chance.

INTEGRAL/SPI observations of V5668 Sgr
--------------------------------------

ESA’s gamma-ray observing satellite INTEGRAL [@Winkler2003_INTEGRAL] was pointed towards the galactic centre before and after the optical maximum of V5668. Due to the large field of view of the spectrometer SPI [@Vedrenne2003_SPI] aboard INTEGRAL (field of view: $16^{\circ} \times 16^{\circ}$; angular resolution: $2.7^{\circ}$), the nova was also observed as part of other regular observations. SPI measures X- and gamma-ray photons in the energy range between 20 and 8000 keV, using high-purity Ge detectors. The spectral resolution at 478 keV is 2.1 keV (FWHM). INTEGRAL revolutions 1514, 1517, and 1519 correspond to days -18 to -16, -11 to -9, and -5 to -3 with respect to the optical maximum of V5668.
After the optical maximum, data from INTEGRAL revolutions 1521 to 1534, with several observation gaps, allow for high-spectral-resolution gamma-ray measurements with SPI until day 37, see Fig. \[fig:novadirection\]. The intermediate gaps can be studied in terms of their global gamma-ray emission with the anticoincidence shield (ACS) of SPI, made of 91 scintillating BGO crystals, which is sensitive to photon (and particle) energies above $\approx 75~{\mathrm{keV}}$, but provides no spectral information. The ACS has an almost omni-directional field of view with a rather modest angular resolution ($\approx 60^{\circ}$) and high timing capabilities ($\Delta T = 50$ ms). This makes it possible to search for flash-like events during the expected periods around day 7 before the optical maximum. The use of the ACS to detect hard X-ray / soft gamma-ray emission from novae has been investigated by @Jean1999_novasearch. In this paper, we report a search for gamma-ray line emission from nucleosynthesis ejecta of V5668 using INTEGRAL/SPI, as well as a search for burst-like gamma-ray emission from short-lived nuclei during explosive burning weeks before the optical maximum. In Sec. \[sec:dataanalysis\], we describe the analyses of SPI and its ACS data, exploiting spectral, temporal, and directional information, and derive physical parameters. The implications of the analysis results are discussed in Sec. \[sec:conclusion\].

Data analysis {#sec:dataanalysis}
=============

SPI analysis {#sec:spianalysis}
------------

We analyse the spectral and the temporal domain for the expected gamma-ray lines at 429, 478, 511, and 1275 keV. Starting at $T_1 = T_0 - 7~{\mathrm{d}} = \mathrm{MJD}\,57095.67$, the total exposure in our SPI data set until day +37 ($T_2 = T_0 +37~{\mathrm{d}} = \mathrm{MJD}\,57139.67$) is $T_{exp} = 1.53~{\mathrm{Ms}} < \Delta T = T_2 - T_1 = 44~{\mathrm{d}} \approx 3.80~{\mathrm{Ms}}$.
This is reduced further by observational gaps in the regular INTEGRAL observation program, by data selection criteria, such as orbit phases ($0.1$-$0.9$, to avoid the Earth’s radiation belts) and onboard radiation monitor rate acceptance windows (to avoid charged particle showers and solar flares), and by detector dead time. The total observation time is $T_{obs} = 1.06~{\mathrm{Ms}}$. SPI data are dominated by background from cosmic-ray bombardment of satellite and instruments, leading to decay and de-excitation photons. The spectrometer data are analysed by a maximum likelihood method, comparing measured Ge detector data to models of celestial emission and background. We use a self-consistent, high spectral resolution background modelling procedure [e.g. @Diehl2014_SN2014J_Ni; @Siegert2016_511; @Siegert2017_PhD] to extract spectra in the energy ranges 70 to 530 keV, and 1240 to 1310 keV. In general, the modelled time-patterns (count sequences in different detectors per unit time) for each model component are fitted to the measured time-pattern of the data by minimising the Cash statistic [@Cash1979_cstat] $$C(D|\theta_i) = 2 \sum_k \left[ m_k - d_k \ln m_k \right]{\mathrm{,}} \label{eq:cstat}$$ which accounts for Poisson-distributed photon count statistics. In Eq. (\[eq:cstat\]), $d_k$ are the measured and $m_k$ the modelled data, which are matched (fitted) to $d_k$ by adjusting intensity scaling parameters $\theta_{(t)}$ of the model components, which are possibly time-dependent, $$m_k = \sum_{t_S} \sum_j R_{jk} \sum_{i=1}^{N_S} \theta_{i,t_S} M_{ij} + \sum_{t_B} \sum_{i = N_S+1}^{N_S + N_B} \theta_{i,t_B} B_{ik}{\mathrm{.}} \label{eq:modeldesc}$$ Here, we describe the model in each half-keV energy bin $k$ as a superposition of $N_S$ celestial models $M_{ij}$, to which the instrumental imaging response (coded-mask response, $R_{jk}$) is applied for each image element $j$, and $N_B$ background models $B_{ik}$.
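The maximum likelihood fit of Eqs. (\[eq:cstat\]) and (\[eq:modeldesc\]) can be illustrated with a deliberately simplified sketch; the detector patterns, the true amplitudes, and the use of `scipy.optimize.minimize` below are illustrative assumptions, not the actual SPI analysis pipeline.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Toy per-detector count patterns for the 19 Ge detectors: a flat
# background template and a varying source template (both hypothetical).
bg_pattern = np.full(19, 100.0)
src_pattern = rng.uniform(10.0, 50.0, 19)

# Simulate measured counts d_k for true amplitudes theta = (1.0, 3.0).
data = rng.poisson(1.0 * bg_pattern + 3.0 * src_pattern)

def cash(theta):
    """Cash statistic C = 2 * sum_k (m_k - d_k * ln m_k), cf. Eq. (1)."""
    m = theta[0] * bg_pattern + theta[1] * src_pattern
    return 2.0 * np.sum(m - data * np.log(m))

# Maximise the likelihood, i.e. minimise C, over the scaling parameters.
res = minimize(cash, x0=[0.5, 0.5], bounds=[(1e-6, None), (1e-6, None)])
theta_bg, theta_src = res.x
```

For Poisson-distributed counts, minimising the Cash statistic rather than a Gaussian chi-square avoids the bias that arises in low-count bins.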
For the background model, we use the information gathered over the INTEGRAL mission years, separating long-term stable or smoothly varying properties, such as detector degradation (linear within half a year) or solar activity (the solar cycle anti-correlates with the cosmic-ray intensity), from short-term variations, such as solar flares and general pointing-to-pointing variations. Instrumental gamma-ray continuum and line backgrounds are treated separately, according to their different physical origins inside the satellite. Each instrumental line imprints a certain pattern onto the gamma-ray detector array, depending on the distribution of the radiating material inside the satellite. These patterns are constant over time, as the material distribution does not change. Only detector failures lead to a change in those patterns, as double-scattering photons involving dead detectors are then seen as single events in working neighbouring detectors. Different activation rates (cosmic-ray bombardment) and isotope decay times then lead to different amplitudes in those patterns, which are determined using Eqs. (\[eq:cstat\]) and (\[eq:modeldesc\]). For any specific process, these patterns are constant in time; however, for single energy bins (typically 0.5 keV, cf. instrumental resolution of 2-3 keV), these patterns may change due to different degradation strengths in the 19 detectors. Hence, we determine the detector patterns by performing a spectral decomposition (statistical fit) in each of the detectors on a three-day (viz. one INTEGRAL orbit) time scale. This allows us to trace the degradation and the general response properties of all detectors with time, and at the same time it smears out celestial contributions, because the varying time-patterns of the coded-mask response, in combination with the INTEGRAL dithering strategy, average out.
The procedure and functions to determine the spectral response parameters have already been discussed in @Siegert2016_511 and @Siegert2017_PhD; see also @Diehl2017_SPI for an analysis of the 15-year SPI data base. By performing this maximum likelihood estimation for each of the spectral bins in the energy region of interest, we create spectra for each source or general emission morphology. In the case of V5668, the only ($N_S=1$) celestial model is a point source at the position of the nova, $M_{1j} = \theta_{1,t_S} \delta(l-l_0) \delta(b-b_0)$. The decay time of $^7{\mathrm{Be}}$ of 77 days, and the possible gamma-ray flash before the optical maximum, lead to two analysis cases: (1) integration over the entire exposure time to obtain a maximum of sensitivity for the longer-lived nucleosynthesis products at fine energy resolution (no time-dependence, $\theta_{1,t_S} \rightarrow \theta_{1}$), and (2) several time intervals of 2-3 hours to study the transient behaviour in broader energy binning, to enhance the sensitivity for a gamma-ray flash, and also to trace the radioactive decay of $^7{\mathrm{Be}}$ (light curve). In Figs. \[fig:spec\_478\] and \[fig:spec\_1275\], the average spectra for V5668 between days -7 and 37 relative to the optical maximum are shown. In both energy bands, no significant excess is seen, and the spectra are consistent with zero flux. We derive upper limits on the fluxes by assuming an average line shift of $-1000~{\mathrm{km~s^{-1}}}$ [@Molaro2016_V5668; @Tajitsu2016_V5668], which corresponds to shifted gamma-ray line centroids of 479.1 keV and 1278.8 keV, respectively. The line broadening is adopted as 8 keV for the 478 keV line, and 21 keV for the 1275 keV line [FWHM, e.g. @Hernanz2014_nova; @Siegert2017_PhD instrumental resolution 2.08 keV at 478 keV, and 2.69 keV at 1275 keV]. At days $T_0+(15\pm22)$, the $3\sigma$ upper limit on the 478 keV line flux is estimated to be $8.2 \times 10^{-5}~{\mathrm{ph~cm^{-2}~s^{-1}}}$.
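The blueshifted centroids follow from the first-order Doppler formula applied to the rest energies of the two lines (477.6 keV for the $^7{\mathrm{Li}}^*$ de-excitation, 1274.5 keV for $^{22}{\mathrm{Ne}}^*$); the sketch below is purely illustrative.

```python
C_KMS = 299792.458  # speed of light [km/s]

def shifted_centroid(e_rest_kev, v_kms):
    """First-order Doppler shift of a line centroid; a negative velocity
    (motion towards the observer) blueshifts the line."""
    return e_rest_kev * (1.0 - v_kms / C_KMS)

# Average ejecta velocity of -1000 km/s applied to both nova lines:
e_478 = shifted_centroid(477.6, -1000.0)    # ~479.2 keV
e_1275 = shifted_centroid(1274.5, -1000.0)  # ~1278.8 keV
```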
The $3\sigma$ upper limit on the 1275 keV line flux is $7.6 \times 10^{-5}~{\mathrm{ph~cm^{-2}~s^{-1}}}$. In order to derive an upper limit on the mass of $^7{\mathrm{Be}}$, the expected flux of the 478 keV line from the radioactive decay law, $$F^{7{\mathrm{Be}}}(t) = \frac{M^{7{\mathrm{Be}}} p^{7{\mathrm{Be}}}}{4\pi d^2 N^{7{\mathrm{Be}}} u \tau^{7{\mathrm{Be}}}} \exp\left(-\frac{t - \Delta t}{\tau^{7{\mathrm{Be}}}}\right){\mathrm{,}} \label{eq:radiodecaylaw}$$ is used. In Eq. (\[eq:radiodecaylaw\]), $M^{7{\mathrm{Be}}}$ is the synthesised mass of $^7{\mathrm{Be}}$ seen to decay, $d$ is the distance to V5668, $N^{7{\mathrm{Be}}} = 7$ is the number of nucleons in $^7{\mathrm{Be}}$, $u = 1.66 \times 10^{-27}~{\mathrm{kg}}$ is the atomic mass unit, $\Delta t$ is fixed to $7$ days before the optical maximum, and $p^{7{\mathrm{Be}}} = 10.52\%$ is the probability of emitting a 478 keV photon after the decay. The flux limits convert to a $^7{\mathrm{Be}}$ mass limit of $M^{7{\mathrm{Be}}}_{3\sigma} < 4.8 \times 10^{-9}\,(d/{\mathrm{kpc}})^2~{\mathrm{M_{\odot}}}$, and a $^{22}{\mathrm{Na}}$ mass limit of $M^{22{\mathrm{Na}}}_{3\sigma} < 2.4 \times 10^{-8}\,(d/{\mathrm{kpc}})^2~{\mathrm{M_{\odot}}}$. Using the distance values from @Banerjee2016_v5668 or @Jack2017_V5668 of $\approx 1.6$ kpc, the limits on the ejected masses yield $M^{7{\mathrm{Be}}}_{3\sigma} < 1.2 \times 10^{-8}~{\mathrm{M_{\odot}}}$, and $M^{22{\mathrm{Na}}}_{3\sigma} < 6.1 \times 10^{-8}~{\mathrm{M_{\odot}}}$, respectively. @Molaro2016_V5668 estimated a $^7{\mathrm{Be}}$ mass of $7\times10^{-9}~{\mathrm{M_{\odot}}}$ from their UV spectra, which is consistent with our limit. Assuming their amount of ejected mass, the non-detection by INTEGRAL/SPI requires the distance to the nova V5668 to be larger than 1.2 kpc ($3\sigma$ lower limit). In Figs. \[fig:lc\_429\], \[fig:lc\_478\] and \[fig:lc\_511\], the gamma-ray light curves of the 429, 478 and 511 keV lines are shown, respectively.
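The conversion from a flux limit to a mass limit can be sketched by inverting Eq. (\[eq:radiodecaylaw\]); the epoch of $\sim$22 days after the onset of explosive burning (the exposure midpoint of day 15 relative to $T_0$, with $\Delta t = -7$ d) is our reading of the quoted time interval, so it is an assumption of this sketch.

```python
import math

M_SUN = 1.989e30         # solar mass [kg]
KPC_CM = 3.086e21        # cm per kpc
U = 1.66e-27             # atomic mass unit [kg]
TAU_BE7 = 76.8 * 86400   # 7Be mean lifetime [s] (half-life ~53 d / ln 2)
P_478 = 0.1052           # probability of a 478 keV photon per decay
N_BE7 = 7                # nucleons in 7Be

def be7_mass_limit(flux_limit, d_kpc, t_days, dt_days=0.0):
    """Invert Eq. (3): 7Be mass [M_sun] implied by a 478 keV line flux
    [ph/cm^2/s] measured t_days after the onset of explosive burning."""
    d_cm = d_kpc * KPC_CM
    n_atoms = (flux_limit * 4.0 * math.pi * d_cm**2 * TAU_BE7 / P_478
               * math.exp((t_days - dt_days) * 86400.0 / TAU_BE7))
    return n_atoms * N_BE7 * U / M_SUN

# 3-sigma flux limit at a reference distance of 1 kpc:
m_limit = be7_mass_limit(8.2e-5, 1.0, 22.0)  # ~4.8e-9 M_sun
# Since the limit scales as d^2, a mass estimate of 7e-9 M_sun
# (Molaro et al.) implies a distance lower limit:
d_min = math.sqrt(7e-9 / m_limit)            # ~1.2 kpc
```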
There is no significant excess in the gamma-ray light curves from $^7{\mathrm{Be}}$ or positron annihilation at the position of V5668. Around day -10, the mean 429 keV line flux has a significance of 3.6 $\sigma$ above the background. However, in the continuum band between 70 and 520 keV, no signal is detected ($\stackrel{{\mathrm{3\sigma}}}{<}0.018~{\mathrm{ph~cm^{-2}~s^{-1}}}$), and we consider this to be a statistical fluctuation. During days -8 to -6, for example, INTEGRAL observed other parts of the sky, and V5668 was not in the field of view (see the angular position of V5668 with respect to the SPI on-axis frame in Fig. \[fig:novadirection\]). Either the source was in the field of view of SPI and IBIS, or the backside of the veto-shields was exposed to the source. Fitting the radioactive decay, Eq. (\[eq:radiodecaylaw\]), to the data in Fig. \[fig:lc\_478\] yields an upper limit on the synthesised $^7{\mathrm{Be}}$ mass of $M^{7{\mathrm{Be}}}_{3\sigma} < 6.4 \times 10^{-9}\,(d/{\mathrm{kpc}})^2~{\mathrm{M_{\odot}}}$. Using the 1.6 kpc distance estimate as before, the mass is constrained to $M^{7{\mathrm{Be}}}_{3\sigma} < 1.6 \times 10^{-8}~{\mathrm{M_{\odot}}}$. Assuming again the mass estimate by @Molaro2016_V5668, V5668 must be further away than $d^{{\mathrm{7Be}}}_{3\sigma} > 1.1~{\mathrm{kpc}}$. These limits become more constraining if the time-dependence of the expected signal (radioactive decay) is taken into account, i.e. $M^{7{\mathrm{Be}}}_{3\sigma} < 4.8 \times 10^{-9}\,(d/{\mathrm{kpc}})^2~{\mathrm{M_{\odot}}}$ and $d^{{\mathrm{7Be}}}_{3\sigma} > 1.2~{\mathrm{kpc}}$.

ACS analysis {#sec:acsanalysis}
------------

We use the SPI-ACS to search for burst-like emission during the three weeks before the optical maximum of the nova. SPI was not pointed at the direction of V5668 for most of this time, but the ACS has an omni-directional field of view and may thus detect such a feature.
The nova model light-curve by @Hernanz2014_nova provides a template for the temporal evolution of the expected gamma-ray flash. Between 75 and 511 keV, the light-curve rises to a maximum at hour 1, decreases sharply (exponentially) until hour 4, and then fades away. We interpolate this coarsely sampled model onto our one-minute time binning of the (total) ACS rate, $R_{ACS}^{tot}(t)$. In particular, we perform a search for excess signals in the ACS rate, i.e. nova flash candidate events, using a maximum likelihood method. This requires a background model for the ACS counts at any time, $B(t)$, an intensity scaling parameter, $a_0$, for the flash model, $F(t)$, and a temporal variable, $t_0$, determining the flash time. During the three weeks before the optical maximum, we test a grid of 200 amplitudes (source intensities), equally spaced between 0.0 and 0.2 of the peak amplitude of $0.21~{\mathrm{ph~cm^{-2}~s^{-1}}}$, and use 400 time bins, equally spaced inside each INTEGRAL revolution, i.e. approximately 7.5-minute steps. Due to irregular particle events when entering and exiting the radiation belts, we limit the search to times when the ACS rate shows a smooth, non-erratic behaviour. This typically cuts out a few hours after each belt exit. We analyse INTEGRAL revolutions 1513 to 1521, i.e. days -20 to +3, in this way. The total model, $M(t;a_0,t_0)$, that is tested against the ACS rate in each orbit on the specified grid is then $$M(t;a_0,t_0) = B(t) + a_0 \times F(t-t_0){\mathrm{.}} \label{eq:acsmodelfit}$$ As a background model for the ACS, we use a one-hour median filter applied to the ACS rate itself, $B(t) = {\mathrm{median}}(R_{ACS}^{tot}(t),{\mathrm{60~min}})$. This smears out features on this and shorter time scales, which may then be captured by our nova model light-curve. In each revolution, we find several candidates by iteratively accepting the (next-to-)maximum likelihood value in the marginalised probability density function of $t_0$.
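The search scheme of Eq. (\[eq:acsmodelfit\]) can be sketched on synthetic data; the background level, flash parameters, and the least-squares amplitude estimator below are simplifying assumptions (the actual analysis uses the Cash likelihood and the interpolated model light-curve).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-minute SPI-ACS rate: Poisson background plus one injected
# flash (sharp rise at t0 = 1200 min, exponential decay, tau = 30 min).
n = 2000
t = np.arange(n, dtype=float)
rate = rng.poisson(1.0e5, n).astype(float)
rate[t >= 1200] += 2000.0 * np.exp(-(t[t >= 1200] - 1200.0) / 30.0)

def running_median(x, window=60):
    """One-hour median filter as the background model B(t)."""
    half = window // 2
    pad = np.pad(x, half, mode="edge")
    return np.array([np.median(pad[i:i + window]) for i in range(len(x))])

def template(t0, tau=30.0):
    """Flash template F(t - t0): step rise, exponential decay."""
    f = np.zeros(n)
    m = t >= t0
    f[m] = np.exp(-(t[m] - t0) / tau)
    return f

# Grid search over t0: best-fit amplitude a0 by least squares on the
# background-subtracted rate (a simplified stand-in for Eq. 1).
resid = rate - running_median(rate)
candidates = []
for t0 in range(100, n - 100, 10):
    f = template(float(t0))
    candidates.append((t0, np.dot(resid, f) / np.dot(f, f)))
t0_best, a0_best = max(candidates, key=lambda c: c[1])
```

Note that the median background partially absorbs the flash itself, which biases the recovered amplitude low; this is a known trade-off of self-referential background models.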
Table \[tab:cands\] summarises the nova flash candidate events with a statistical significance of more than $3.9\sigma$ ($p<10^{-4}$). In the following, we perform additional analyses to distinguish, where possible, between solar (particle and photon) events, gamma-ray bursts (GRBs), the nova itself, and other X-ray transients.

Distinguishing between event candidates
---------------------------------------

### Temporal characteristics

The temporal characteristics of a candidate are a first indicator of its origin. While all candidates have been identified using a nova model light-curve, any emission above the background that is captured by this short-duration model will improve the fit, but is not necessarily attributable to a nova. In general, a sharp rise and fast decay might point to a nova origin. However, solar flare events also show this behaviour; in that case, it is followed by an irregular and strong particle flux, which is also measured by the ACS. Very strong gamma-ray bursts on time scales of seconds to minutes will also be captured by our method, but can easily be identified as such by inspecting the fit residuals, as their temporal profiles are more stochastic than our smooth nova model. We provide the duration of each candidate event, $\Delta T$, as well as the $\chi^2$ goodness-of-fit value of the nova model in Tab. \[tab:cands\], and mark possible GRBs and non-nova-like events in the comments column.

### Directional information {#sec:integralresponse}

The ACS consists of 91 individual BGO blocks, arranged in a hexagonal structure surrounding SPI up to its mask. If an event originates from a particular direction (point source), the facing side of the ACS will record more counts than the averted side.
This can be expressed by an anisotropy factor of the different ACS sub-units, which consist of rings at different positions with respect to the SPI camera, and with different BGO thicknesses, ranging from 16 mm at the top (upper collimator ring, UCR1) to 50 mm at the lower veto shield directly below SPI. Most of the sub-units alone are not suitable for such an anisotropy analysis because they are (partially) shadowed by the other main instrument on INTEGRAL, IBIS [@Ubertini2003_IBIS]. On the level of the Ge detector array, the path between two opposing BGO crystals is blocked by the camera itself, so that the information may be skewed. By performing a similar analysis for a major solar flare, @Gros2004_solarflare concluded that the UCR1 is the most sensitive sub-unit for inferring coarse directional information. @Gros2004_solarflare defined the anisotropy parameter as $A = (R - L)/(R + L)$, where $R$ and $L$ are the rates of opposing UCR1 hemispheres, i.e. the total count rates of three BGO detectors each. This makes it possible to identify azimuthal directions if the source direction (aspect) is perpendicular to the crystal.

![image](isgrivetoacs_lightcurve.pdf){width="1.25\columnwidth"}

For general incidence angles to the veto shields, this anisotropy smears out, and the omni-directional INTEGRAL response is required [see e.g. @Savchenko2017_LVT151012; @Savchenko2017_GW170817 for the search for electromagnetic counterparts of gravitational waves]. We perform an analysis of the INTEGRAL veto-shield and ISGRI count rates. Depending on the source position (Fig. \[fig:novadirection\]) and the expected spectrum, the count rates vary accordingly. Comparing the ACS-to-Veto and ACS-to-ISGRI count ratios with the prediction from the source spectrum, a localisation is possible for strong sources, such as gamma-ray bursts. For the weak candidate events, we use this response to distinguish between a possible nova and another origin. In Fig.
\[fig:INTEGRALresponse\], the top three panels show the IBIS/ISGRI, IBIS/Veto, and SPI-ACS count rates, respectively. The bottom panel illustrates the expected count ratios from the direction of V5668 with estimated uncertainties (orange and blue bands), together with the actual measured count ratios for candidate events for which ISGRI data are available (cf. Fig. \[fig:novadirection\], perigee passages). The spectral properties during the luminosity peak of the gamma-ray transient have been assumed to follow a power-law with a spectral index of $-1$. The angular dependence of the INTEGRAL all-sky response is mostly sensitive to spectral shapes between 50 and 300 keV, so that the model approximation remains valid even though the spectrum, as estimated by @Hernanz2014_nova, may be more complex[^4]. For the purpose of localisation, it is sufficient to adopt a simple power-law spectrum. If the measured points coincide with this expectation, the event likely originated from the direction of the nova, although it may not have been caused by the nova itself.

### Spectral hardness

The distinction between solar events and X-ray transients can be augmented further by investigating the count rate in SPI during the time of a candidate event. While transient X-ray sources predominantly emit at photon energies up to 500 keV, often with an exponential cut-off [e.g. @Done2007_xrb], solar flares can also show a strong increase in the high-energy continuum up to several MeV, and in addition de-excitation lines from ${\mathrm{^{16}O^*}}$ (6.129 MeV) and ${\mathrm{^{12}C^*}}$ (4.438 MeV), as well as the neutron-capture line forming ${\mathrm{^{2}H}}$ (2.223 MeV), for example [@Gros2004_solarflare; @Kiener2006_solarflare]. Note, however, that solar X-class flares which produce high-energy gamma-ray lines appear to be rare events [$<10\%$, @Vestrand1999_SMMsolarflares], but can be identified as such by SPI and its sub-systems.
The SPI-ACS transparency increases with increasing energy, so that we can utilise the information from the entire SPI instrument, i.e. the ACS and the Ge detectors. The measured residual Ge detector counts that pass the shield are thus a convolution of the ACS transparency with the source emissivity. We define a hardness ratio between the energy bands 500-8000 keV and 20-500 keV, $HR_{500} = F_{500-8000} / F_{20-500}$, in one-minute steps, as a further decision criterion for the nova flash candidates. In general, the hardness ratio $HR_{500}$ is a smoothly varying function of time and is typically around 0.4 in SPI raw data. This value can change over time, e.g. due to X-ray transients, which show a decreased $HR_{500}$ with respect to the average, whereas solar events increase the ratio. We analyse $HR_{500}$ during each candidate event, and compare the value to the average hardness ratio, integrated over one hour before and one hour after the event. We express the change in hardness in units of $\sigma$ in Tab. \[tab:cands\]. Positive deviations would indicate a solar origin, negative ones possibly X-ray transients.

### Event identification and discussion

Based on the above criteria, we exclude nine of the 23 candidates by their temporal fitting residuals (GRBs, not nova-like in general), and five events are presumably from the direction of the Sun, based on an increased hardness ratio. Based on theoretical predictions [@Gomez-Gomar1998_novae] that such a nova flash occurs 2 to 10 days before the optical maximum, we consider one event as too early ($t-T_0 = -429.8~{\mathrm{h}} \approx -18~{\mathrm{d}}$) to originate from V5668. Two other events may be considered too late, occurring close to or even after the V-band maximum. Several events coincide closely with GOES X-ray data, even though the hardness ratio is unchanged. Two additional events can be excluded in this way.
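The hardness-ratio criterion can be sketched as follows; the baseline and event values below are toy numbers chosen for illustration, not measured SPI rates.

```python
import numpy as np

def hr500(counts_hi, counts_lo):
    """HR_500 = counts(500-8000 keV) / counts(20-500 keV) per time bin."""
    return np.asarray(counts_hi, float) / np.asarray(counts_lo, float)

def hr_deviation_sigma(hr_event, hr_baseline):
    """Change of the event hardness relative to the +-1 h baseline,
    expressed in units of the baseline scatter (sigma)."""
    mu, sigma = np.mean(hr_baseline), np.std(hr_baseline)
    return (np.mean(hr_event) - mu) / sigma

# Toy one-minute hardness ratios: quiet baseline around 0.40, and a
# candidate event with an increased (solar-like) hardness.
baseline = np.array([0.40, 0.41, 0.39, 0.40, 0.42, 0.38])
event = np.array([0.48, 0.50])
dev = hr_deviation_sigma(event, baseline)  # positive => solar-like
```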
The remaining three ACS features can be assigned a direction off the nova location, and do not show a decreased $HR_{500}$, which would be expected if a transient X-ray source were near.

  $t-T_0$   $T_1$     $T_2$     $\Delta T$   $a_0$      $\chi^2/\nu$   $\Delta L$ $[\sigma]$   $HR_{500}$ $[\sigma]$   Comments
  --------- --------- --------- ------------ ---------- -------------- ----------------------- ----------------------- -----------------
  -455.2    83.713    83.715    181.3        >0.2       234.6          >1000                   -0.0                    GRB
  -450.2    83.914    83.922    690.8        0.092(7)   8.0            13.1                    +0.0                    not nova-like
  -429.8    84.762    84.770    690.8        0.065(7)   3.2            9.3                     -0.4                    too early
  -352.5    87.993    87.994    25.7         0.049(8)   22.3           6.1                     +2.2                    GRB
  -329.7    88.943    88.950    604.5        0.050(7)   3.3            7.1                     +2.9                    Sun / GOES
  -322.0    89.205    89.223    1555.0       0.030(8)   1.2            3.9                     +2.7                    Sun / GOES
  -321.7    89.278    89.285    605.1        0.065(7)   8.2            9.3                     -0.8                    not nova-like
  -281.2    90.959    90.960    69.2         0.183(8)   60.4           22.9                    +1.3                    GRB
  -277.6    91.105    91.138    2851.0       0.054(7)   85.9           7.7                     +3.0                    Sun / GOES
  -277.0    91.140    91.142    129.2        0.155(7)   120.0          22.1                    +1.5                    GRB
  -227.4    93.197    93.200    259.1        0.054(7)   4.7            7.7                     +0.7                    not nova-like
  -225.5    93.260    93.305    3887.8       0.031(8)   1.9            3.9                     +0.6                    GOES
  -176.7    95.300    95.340    3455.4       0.064(7)   3.2            9.1                     +0.5                    false response
  -150.8    96.395    96.400    432.4        <0.054     4.5            <7.7                    +0.3                    weak
  -150.8    96.402    96.403    95.6         <0.054     4.5            <7.7                    +0.7                    GRB
  -144.5    96.650    96.661    950.5        0.036(8)   1.9            4.5                     -0.0                    false response
  -103.3    98.362    98.405    3715.1       0.048(7)   1.8            6.9                     +0.7                    false response
  -95.6     98.687    98.714    2332.8       0.037(7)   1.8            5.3                     +3.6                    Sun
  -88.8     98.972    98.999    2332.8       0.034(7)   2.2            4.9                     -1.7                    GOES
  -69.1     99.781    99.782    86.4         >0.2       18.2           >1000                   -1.2                    GRB / GOES
  -69.1     99.793    99.799    518.8        <0.069     20.4           <8.0                    +3.7                    Sun / weak
  -20.3     101.825   101.866   3542.4       0.034(7)   1.5            4.9                     +1.0                    too late
  +24.3     103.686   103.699   1123.2       0.049(7)   2.6            7.0                     -0.0                    after V-maximum
  --------- --------- --------- ------------ ---------- -------------- ----------------------- ----------------------- -----------------

Even though our decision-making can argue against each individual candidate being associated with V5668, the cases for six of them remain intriguing.
This may be either because the response is exactly met but the light-curve does not appear nova-like, or vice versa, or because an event is only close in time to GOES and not strictly coincident (marked (c) in Tab. \[tab:cands\]). These candidates would either be classified as “fast” ($T_0-5~{\mathrm{d}}$ to $T_0-2~{\mathrm{d}}$) or “moderately fast” ($T_0-10~{\mathrm{d}}$ to $T_0-5~{\mathrm{d}}$) novae, based on the gamma-ray flash occurrence time [@Gomez-Gomar1998_novae]. The measured peak fluxes are similar for all candidates, ranging between $6.5$ and $13.4 \times 10^{-3}~{\mathrm{ph~cm^{-2}~s^{-1}}}$ in the energy regime of the SPI-ACS. These values are systematically uncertain by about 60% because the true effective area at each individual event is not known. The statistical uncertainties range between 10 and 25%. Taking all uncertainties into account, these events would correspond to luminosities $\approx 4$ to 50 times higher than the model assumption, i.e. distances a factor of 2-7 above the values inferred by @Banerjee2016_v5668 [1.54 kpc] or @Jack2017_V5668 [1.6 kpc]. This discrepancy might be due to our use of the nova model light-curve, whose shape and peak amplitude are uncertain by about an order of magnitude [e.g. @Hernanz1997_nova; @Hernanz2005_novae; @Hernanz2014_nova]. With improved knowledge of nuclear cross sections and more elaborate modelling approaches, the estimates become more realistic, though large uncertainties can still arise, e.g. when considering the initial conditions. The ACS feature with the best-fitting temporal behaviour ($\chi^2/\nu = 1.8$) occurred 4.3 days (103.3 hours) before the optical maximum, with a significance of $6.9\sigma$ above the median ACS count rate. Its duration is at least 3700 s, after which it drops below the background again. The hardness ratio is not significantly increased ($+0.7\sigma$), so that a solar or nova origin can neither be excluded nor suggested.
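The luminosity-to-distance conversion underlying the factor of 2-7 is the usual inverse-square scaling at fixed observed flux:

```python
import math

def distance_factor(luminosity_ratio):
    """With flux = L / (4 pi d^2), a source brighter than the model by
    some luminosity ratio, at fixed measured flux, sits at a distance
    larger by sqrt(luminosity_ratio)."""
    return math.sqrt(luminosity_ratio)

# Luminosities 4 to 50 times the model assumption map onto distances
# 2 to ~7 times larger than the model reference:
d_lo, d_hi = distance_factor(4.0), distance_factor(50.0)
```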
The response function excludes, at about the $3\sigma$ level, that this event came from the direction of V5668. Based only on the hardness ratio, the feature around day 3.7 (hour 88.8) before the optical maximum ($\Delta HR_{500} = -1.7\sigma$) would suggest an X-ray source, though with low significance. During these two events, INTEGRAL was pointed towards V5668, and the source was in the partially coded field of view of IBIS and SPI. No hard X-ray / soft gamma-ray emission (70-520 keV) was detected during this time by either instrument (SPI: $<0.017~{\mathrm{ph~cm^{-2}~s^{-1}}}$ at day -4.3, $<0.049~{\mathrm{ph~cm^{-2}~s^{-1}}}$ at day -3.7; IBIS: $<0.003~{\mathrm{ph~cm^{-2}~s^{-1}}}$; $3\sigma$ upper limits). The strongest signal in the ACS rate occurred 7.4 days (176.6 hours) before the visual maximum, with $9.1\sigma$ above the background level. This event was preceded by another, weaker ($<3.9\sigma$) flash-like signal. This is typical for solar particle events, in which the gamma-ray emission precedes the low-energy particles by a few minutes. Yet, the hardness ratio ($\Delta HR_{500} = +0.5\sigma$), which would be expected to be significantly increased in this case, is not constraining enough. The expected Veto-to-ACS ratio (response) is met, but the ISGRI-to-ACS ratio is off by one order of magnitude, so that the true origin is questionable. Three events might either be too short (950 s at day -6.0 (hour -144.5), 260 s at day -9.5 (hour -227.4)) or too weak ($(6.5\pm1.7) \times 10^{-3}~{\mathrm{ph~cm^{-2}~s^{-1}}}$ at day -9.4 (hour -225.5), viz. 5.7 kpc distance; SPI $3\sigma$ upper limit in the 70-520 keV band: $<0.017~{\mathrm{ph~cm^{-2}~s^{-1}}}$) if the nova model were correct within a factor of three. However, especially the candidate at hour -227.4 before the optical maximum attracts attention, because it is the only event for which the response is exactly met.
Although the gamma-ray light curve is not nova-like, this short peak may only be “the tip of an iceberg”, such that most of the photons are either drowned in the background, do not escape, or are not produced, and the nova flash leaks out for only several minutes. Including the systematic uncertainties as described above, the flux for this event would be between $(11\pm7)\times10^{-3}~{\mathrm{ph~cm^{-2}~s^{-1}}}$ (ACS, model-dependent) and $(200\pm150)\times10^{-3}~{\mathrm{ph~cm^{-2}~s^{-1}}}$ (model-independent, for 260 s). In general, none of the six signal excesses during the weeks before the optical outburst of V5668 can clearly be claimed to be due to the gamma-ray flash of explosive burning. On the other hand, the cases for origins other than the nova (or X-ray transients in general) are also only weak.

Summary, discussion, and conclusions {#sec:conclusion}
====================================

Nucleosynthesis ejecta {#sec:nucsysejecta}
----------------------

We report an analysis of INTEGRAL gamma-ray observations of Nova Sgr 2015 No. 2 (V5668 Sgr). Novae are expected to produce significant amounts of ${\mathrm{^7Be}}$. A correspondingly large mass appeared to be detected for the first time, in V5668 Sgr, through observations of Be II lines at UV wavelengths. Although the ${\mathrm{^7Be}}$ II doublet at $313.0583~{\mathrm{nm}}$ and $313.1228~{\mathrm{nm}}$, respectively, has only an isotopic shift of $\Delta \lambda = -0.161~{\mathrm{\AA{}}}$ with respect to the ${\mathrm{^9Be}}$ II doublet [@Yan2008_Beisotopeshift], the high resolution spectra from HDS [$R \approx 50000$, cf. @Tajitsu2016_V5668 Subaru Telescope] or UVES [$R \approx 100000$, cf. @Molaro2016_V5668 VLT] can easily distinguish between the two isotopes for narrow components. There are also other lines from iron-peak elements, such as Cr II or Fe II, which could contaminate the ${\mathrm{^7Be}}$ II measurements, but which can also clearly be identified as such.
@Molaro2016_V5668 estimate the possible contamination of the equivalent width of the ${\mathrm{^7Be}}$ II absorption at about 3.5%. However, the absolute ${\mathrm{^7Be}}$ mass estimates may be more uncertain, and given our limits, it is interesting to check how much larger they could be: @Tajitsu2016_V5668 and @Molaro2016_V5668 estimate the mass fraction of ${\mathrm{^7Be}}$, following @Tajitsu2015_novaBe7 and @Spitzer1998_ISM, by comparing the equivalent widths of a reference element to the ${\mathrm{^7Be}}$ II doublet. Here, the authors used Ca, which is not a nova product, in particular the Ca II K line at 393.3 nm. The conversion of the equivalent widths to the respective column densities only works if the lines are not saturated and fully resolved. In addition, the covering factor of the nova shell should not be a strong function of wavelength, as otherwise the Ca II K line could be intrinsically more strongly or weakly absorbed. The compared species must be in the same ionisation state to infer the column density ratios, which then also give the relative elemental abundances. This seems to be the case, since neither doubly ionised nor neutral Ca lines have been found [@Tajitsu2016_V5668; @Molaro2016_V5668]. Once the abundance ratio $X({\mathrm{^7Be}})/X({\mathrm{^{40}Ca}})$ has been determined, an assumption on the Ca abundance yields the ${\mathrm{^7Be}}$ abundance in the nova ejecta. Here, the authors assume a solar Ca abundance, which might underestimate the ${\mathrm{^7Be}}$ abundance by $\approx 30\%$, due to the abundance gradient in the Milky Way [@Cescutti2007_abundancegradient]. The ejected ${\mathrm{^7Be}}$ mass was then estimated by @Molaro2016_V5668 by assuming a canonical ejected mass of $\approx 10^{-5}~{\mathrm{{M\ensuremath{_\odot}\xspace}}}$. In general, the ejected mass may range between $10^{-7}$ and $10^{-3}~{\mathrm{{M\ensuremath{_\odot}\xspace}}}$ for CO novae [@Bode2008_novae].
@Banerjee2016_v5668 estimated a gas ejecta mass of $2.7$-$5.4\times 10^{-5}~{\mathrm{{M\ensuremath{_\odot}\xspace}}}$, based on a canonical gas-to-dust ratio between 100 and 200, and their measured dust mass of $2.7\times 10^{-7}~{\mathrm{{M\ensuremath{_\odot}\xspace}}}$. The authors assumed a distance to V5668 of 2 kpc in their calculations, so that the dust mass, and hence the gas mass, normalised to our 1.6 kpc assumption, may be about 40% smaller. Altogether, the total ejected mass may be a factor of a few (2-5) larger than canonically expected. This would then also lead to an increase in the ejected ${\mathrm{^7Be}}$ mass to a few $10^{-8}~{\mathrm{{M\ensuremath{_\odot}\xspace}}}$. If the ${\mathrm{^7Be}}$ mass were indeed $2 \times 10^{-8}~{\mathrm{{M\ensuremath{_\odot}\xspace}}}$, this would be in tension with our derived upper limits on the mass if the distance of 1.6 kpc were correct. With a half-life of $\approx 53~{\mathrm{d}}$, the radio-isotope ${\mathrm{^7Be}}$ decays via electron capture to an excited state of ${\mathrm{^7Li}}$, which de-excites by the emission of a gamma-ray photon at 478 keV. Using the spectrometer SPI aboard INTEGRAL, we searched for ${\mathrm{^7Be}}$ line emission during the observations of V5668, which covered several weeks around the nova’s optical maximum. From the high spectral resolution as well as the temporal (light-curve) analysis, we found no significant excess in the energy region of interest. We provide a $3\sigma$ upper limit on the 478 keV line flux of $8.2 \times 10^{-5}~{\mathrm{ph~cm^{-2}~s^{-1}}}$, which can be converted to an upper limit on the ejected ${\mathrm{^7Be}}$ mass of $M^{7{\mathrm{Be}}}_{3\sigma} < 1.6 \times 10^{-8}~{\mathrm{M_{\odot}}}$. This, however, is based on uncertain distance estimates of 1.6 kpc. Assuming an ejected mass as derived by @Molaro2016_V5668, we can constrain V5668 to be further away than $d^{{\mathrm{7Be}}}_{3\sigma} > 1.1~{\mathrm{kpc}}$.
Considering the detection of high-energy gamma-rays in the GeV range from V5668 Sgr, the fluxes around the 478 and 1275 keV lines may also have an underlying continuum from shock-accelerated particles. The estimated flux at both lines would be of the order of $10^{-8}$-$10^{-7}~{\mathrm{ph~cm^{-2}~s^{-1}}}$, following the description of @Metzger2015_novashocks. Even though this is below the sensitivity limit of SPI, and would contribute less than 1% to our upper limits, this effect is already accounted for in our derivation. Because approximated (Gaussian) line shapes, and not only the flux values themselves, are used, the line flux limits do not depend on the continuum below. Nova Sgr 2015 No. 2 was identified as a CO nova, and thus little to no $^{22}{\mathrm{Na}}$ is expected to be produced and ejected. A gamma-ray line at 1275 keV would reveal the presence of $^{22}{\mathrm{Na}}$, which is not seen in our analysis ($F^{22{\mathrm{Na}}}_{3\sigma}<7.6 \times 10^{-5}~{\mathrm{ph~cm^{-2}~s^{-1}}}$).

Burst-like emission {#sec:burst_like}
-------------------

Explosive nucleosynthesis in novae is accompanied by burst-like gamma-ray emission from short-lived isotopes. The $\beta^+$-decay of these isotopes is expected to be followed by positron annihilation in the nova cloud, leading to a strong annihilation line at 511 keV and a continuum down to $\approx 20$-$30$ keV, depending on the conditions in the nova. Although this signal would be expected to be of the order of $0.1~{\mathrm{ph~cm^{-2}~s^{-1}}}$ at 1 kpc in the 70-520 keV band, i.e. measurable in the SPI-ACS, it has never been observed, because it is expected to occur about one week before the optical maximum of the nova. Hence it may only be seen by chance or by a retrospective analysis of large data sets. The time of this gamma-ray flash is also uncertain, and may vary between 2 and 10 days before the optical maximum of a nova.
SPI was not pointed at V5668 during the interesting time of the gamma-ray flash. The INTEGRAL satellite with its main instruments and veto-shields, however, has an almost omnidirectional response. We therefore performed a search in the SPI-ACS data, using a nova light-curve model. Our search found 23 candidate events with significances above $3.9\sigma$, six of which we identify as possibly associated with V5668, based also on the directional INTEGRAL response. However, all six excess signals lack strong evidence of really originating from the nova, and all but one would suggest distances of more than 3 kpc (see the discussion about positrons below). Based on temporal, spectral, and directional information from multiple instruments aboard INTEGRAL, we illustrated a way of searching for X-ray transient features in archival data. Our search is similar to GRB analyses [e.g. @Rau2005_GRB], but is augmented further by the combined use of the energy- and angular-responses of the INTEGRAL veto-systems and main instruments. As of this work, the INTEGRAL archive comprises 15 years of data. The estimated Galaxy-wide nova rate is $50^{+31}_{-23}~{\mathrm{yr^{-1}}}$, and the local nova rate may range between $0.1$ and $0.5~{\mathrm{kpc^{-3}~yr^{-1}}}$ [@Shafter2017_novarate], so that tens of novae could be expected to be hidden in the current 15 years of INTEGRAL data. A thorough retrospective search for X-ray transient features in the INTEGRAL satellite’s veto-systems might reveal an entire family of unobserved or unrecognised sources. While model calculations provide estimates of how much material is produced and ejected, the true conditions shortly after explosive burning are uncertain.
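The kind of light-curve template search described above can be sketched as a linear least-squares fit of a burst template on top of a flat background. The following example is purely illustrative and runs on synthetic data: the background level, the rise and decay time scales, and the injected amplitude are invented numbers, and the real SPI-ACS analysis additionally involves the instrument response and background systematics.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic ACS-like rate time series: flat background plus Poisson noise,
# with an injected burst following a fast-rise / exponential-decay template.
t = np.arange(0.0, 2000.0, 1.0)   # time bins [s]
bkg = 100000.0                    # background rate [counts/s] (illustrative)

def template(t, t0=800.0, rise=20.0, decay=150.0):
    """Fast-rise, exponential-decay burst template (zero before onset t0)."""
    x = np.clip(t - t0, 0.0, None)
    return (1.0 - np.exp(-x / rise)) * np.exp(-x / decay)

truth = bkg + 800.0 * template(t)           # inject an 800 counts/s burst
rate = rng.poisson(truth).astype(float)

# Fit a constant plus the template amplitude by linear least squares; the
# amplitude significance is the fitted amplitude over its standard error.
A = np.column_stack([np.ones_like(t), template(t)])
coef, *_ = np.linalg.lstsq(A, rate, rcond=None)
resid = rate - A @ coef
sigma2 = resid.var(ddof=2)
cov = sigma2 * np.linalg.inv(A.T @ A)
significance = coef[1] / np.sqrt(cov[1, 1])
print(f"fitted amplitude = {coef[1]:.0f} counts/s, {significance:.1f} sigma")
```

Scanning the template’s onset time over the data and keeping the highest amplitude significances would then yield candidate events analogous to the ones discussed above.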
The short gamma-ray flash is believed to originate from the injection of positrons from the $\beta^+$-decay of, predominantly, $^{13}{\mathrm{N}}$ ($\tau = 14.4~{\mathrm{min}}$) and $^{18}{\mathrm{F}}$ ($\tau = 158.4~{\mathrm{min}}$) into the expanding envelope, where they annihilate with electrons and produce the 511 keV annihilation line and a continuum below it. At this time, the envelope must be transparent enough for the gamma-rays to escape, as otherwise no emission would be seen, and the expanding nova cloud would instead be heated by the absorption of these gamma-rays. Although $^{13}{\mathrm{N}}$ and $^{18}{\mathrm{F}}$ are short-lived, it might also be possible that their decay positrons escape from the nova in large numbers. The positron escape fraction, $f_{esc}$, could be added as a free parameter, similar to supernovae [e.g. @Milne1999_SNIa], which would allow diagnostics of the Galaxy-wide positron puzzle: the strongest persistent and diffuse soft gamma-ray signal is the 511 keV line from electron-positron annihilation, presumably in the interstellar medium of the Milky Way. The morphology of the emission and the origin of the positrons are probably decoupled, because 1) the candidate sources could supply more positrons than are actually seen, and 2) the positrons annihilate at rest, which involves a significant deceleration (i.e. kinetic-energy loss) from relativistic energies to less than 1 keV. This requires propagation distances of several 100 pc in typical interstellar-medium conditions [e.g. @Alexis2014_511ISM]. Which sources contribute to what extent is only known very roughly [see @Prantzos2011_511; @Siegert2017_PhD for a review and a global measurement-based discussion, respectively]. Based on the apparent non-detection of V5668, novae could add to the reservoir of “longer-lived”[^5] positrons in the Galaxy, if the escape is larger than predicted.
Depending on the nova type, of the order of $10^{-7}$-$10^{-3}~{\mathrm{M_\odot}}$ of material may be ejected [e.g. @Jose1998_novae; @Starrfield1998_V1974]. The mass fractions of the dominant positron producers[^6] $^{13}{\mathrm{N}}$ and $^{18}{\mathrm{F}}$ are of the order $10^{-3}$ and $10^{-4}$ [@Jose2001_novaegamma; @Jose2003_novae], respectively, so that $10^{-8}$-$10^{-7}~{\mathrm{M_\odot}}$ of $^{13}{\mathrm{N}}$ and $10^{-9}$-$10^{-8}~{\mathrm{M_\odot}}$ of $^{18}{\mathrm{F}}$ are created. The decay modes of both isotopes are nearly 100% positron emission, so that in total $10^{48}$-$10^{49}$ positrons are created per nova event. Considering the global nova rate, the average number of positrons created by the population of novae in the Milky Way is $(0.9$-$25.8)\times 10^{42} \times f_{esc}~{\mathrm{e^+~s^{-1}}}$, where $f_{esc}$ may range between 0 and 1. If all positrons escaped, for example, novae would be the dominant positron producers in the Galaxy. On the one hand, according to @Gomez-Gomar1998_novae, a 100% escape would be in strong tension with simulations. On the other hand, a 1-10% escape, as could be suggested for V5668, would contribute about 1% of the total positron production rate required to explain the 511 keV emission in the Milky Way [@Siegert2016_511].

In this work, we showed that INTEGRAL/SPI is capable of detecting a broad (8 keV FWHM) $^{7}{\mathrm{Be}}$ line at 478 keV from classical novae up to a distance of $\approx800$ pc with $5\sigma$ significance for an observation time of 1 Ms, starting at the visual maximum of the nova. This is derived from tight upper limits on the expected $^{7}{\mathrm{Be}}$ line flux at 478 keV from the nova V5668 Sgr. In addition, we showed that retrospective searches in archival INTEGRAL data can return valuable information for studies of X-ray transients.
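The per-nova positron budget quoted above follows from simple arithmetic on the ejected isotope masses. In the sketch below, the constants are standard; the ejected-mass ranges and the assumption of roughly one positron per decay are the ones stated in the text.

```python
N_A = 6.022e23        # Avogadro's number [1/mol]
M_SUN_G = 1.989e33    # solar mass [g]

def positrons(m_ejected_msun, mass_number):
    """Number of beta+ decays from m_ejected_msun of an isotope,
    assuming ~100% positron emission per decay (as stated in the text)."""
    return m_ejected_msun * M_SUN_G / mass_number * N_A

# Ejected-mass ranges quoted in the text:
n13_lo, n13_hi = positrons(1e-8, 13), positrons(1e-7, 13)
f18_lo, f18_hi = positrons(1e-9, 18), positrons(1e-8, 18)
total_lo = n13_lo + f18_lo
total_hi = n13_hi + f18_hi
print(f"e+ per nova: {total_lo:.1e} - {total_hi:.1e}")  # ~1e48 - ~1e49
```

Folding in the Galaxy-wide nova rate and its uncertainties then yields the quoted average positron production rate of a few $10^{42} \times f_{esc}~{\mathrm{e^+~s^{-1}}}$.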
During the ongoing INTEGRAL mission, at least one such nova event could be expected.

This research was supported by the German DFG cluster of excellence ’Origin and Structure of the Universe’. The INTEGRAL/SPI project has been completed under the responsibility and leadership of CNES; we are grateful to ASI, CEA, CNES, DLR, ESA, INTA, NASA and OSTC for support of this ESA space science mission. LD and MH acknowledge support from the Spanish MINECO grant and FEDER funds. JJ acknowledges support from the Spanish MINECO through grant , the E.U. FEDER funds, and the AGAUR/Generalitat de Catalunya grant . SS acknowledges partial support from NSF, NASA, and HST grants to ASU. TS thanks Francesco Berlato for Fermi/LAT analysis of the candidate events.

Ackermann, M., Ajello, M., Albert, A., et al. 2014, Science, 345, 554
Alexis, A., Jean, P., Martin, P., & Ferrière, K. 2014, , 564, A108
Banerjee, D. P. K., Ashok, N. M., Venkataraman, V., & Srivastava, M. 2015, The Astronomer’s Telegram, 7303
Banerjee, D. P. K., Srivastava, M. K., Ashok, N. M., & Venkataraman, V. 2016, , 455, L109
Bode, M. F. & Evans, A. 2008, Classical Novae
, W. 1979, , 228, 939
, G., Matteucci, F., François, P., & Chiappini, C. 2007, , 462, 943
Cheung, C. C., Jean, P., Shore, S. N., & Fermi Large Area Telescope Collaboration. 2013, The Astronomer’s Telegram, 5649
Cheung, C. C., Jean, P., Shore, S. N., et al. 2016, , 826, 142
Clayton, D. D. & Hoyle, F. 1974, , 187, L101
Della Valle, M. & Livio, M. 1995, , 452, 704
Di Leva, A., Gialanella, L., Kunz, R., et al. 2009, Physical Review Letters, 102, 232502
Diehl, R., Siegert, T., Greiner, J., et al. 2017, ArXiv e-prints
Diehl, R., Siegert, T., Hillebrandt, W., et al. 2014, Science, 345, 1162
Done, C., Gierliński, M., & Kubota, A. 2007, , 15, 1
, R. 2003, Overview of Nuclear Data, Website, available online at <http://www.escholarship.org/uc/item/7p80t5p0>; visited on August 14th 2013.
, R. D., Evans, A., Woodward, C. E., et al. 2015, The Astronomer’s Telegram, 7862
Gómez-Gomar, J., Hernanz, M., Jose, J., & Isern, J. 1998, , 296, 913
, M., Tatischeff, V., Kiener, J., et al. 2004, in ESA Special Publication, Vol. 552, 5th INTEGRAL Workshop on the INTEGRAL Universe, ed. V. Schoenfelder, G. Lichti, & C. Winkler, 669
, E., Cheung, T., & Ciprini, S. 2013, The Astronomer’s Telegram, 5302
Hernanz, M. 2005, in Astronomical Society of the Pacific Conference Series, Vol. 330, The Astrophysics of Cataclysmic Variables and Related Objects, ed. J.-M. Hameury & J.-P. Lasota, 265
Hernanz, M. 2014, in Astronomical Society of the Pacific Conference Series, Vol. 490, Stellar Novae: Past and Future Decades, ed. P. A. Woudt & V. A. R. M. Ribeiro, 319
Hernanz, M., Gómez-Gomar, J., José, J., & Isern, J. 1997, in ESA Special Publication, Vol. 382, The Transparent Universe, ed. C. Winkler, T. J.-L. Courvoisier, & P. Durouchoux, 47
Hernanz, M. & José, J. 2006, , 50, 504
Hernanz, M., Jose, J., Coc, A., & Isern, J. 1996, , 465, L27
Izzo, L., Della Valle, M., Mason, E., et al. 2015, , 808, L14
, D., Robles Pérez, J. d. J., De Gennaro Aquino, I., et al. 2017, Astronomische Nachrichten, 338, 91
Jean, P., Gómez-Gomar, J., Hernanz, M., et al. 1999, Astrophysical Letters and Communications, 38, 421
José, J. 2016, Stellar Explosions: Hydrodynamics and Nucleosynthesis
José, J. & Hernanz, M. 1998, , 494, 680
José, J., Hernanz, M., & Coc, A. 2001, Nuclear Physics A, 688, 118
José, J., Hernanz, M., García-Berro, E., & Gil-Pons, P. 2003, , 597, L41
José, J., Hernanz, M., & Iliadis, C. 2006, Nuclear Physics A, 777, 550
, J., Gros, M., Tatischeff, V., & Weidenspointner, G. 2006, , 445, 725
Leising, M. D. & Clayton, D. D. 1987, , 323, 159
, P. 2016, The Astronomer’s Telegram, 9678
Martin, P., Dubus, G., Jean, P., Tatischeff, V., & Dosne, C. 2017, ArXiv e-prints
Metzger, B. D., Finzell, T., Vurm, I., et al. 2015, , 450, 2739
Metzger, B. D., Hascoët, R., Vurm, I., et al. 2014, , 442, 713
Milne, P. A., The, L.-S., & Leising, M. D. 1999, , 124, 503
Molaro, P., Izzo, L., Mason, E., Bonifacio, P., & Della Valle, M. 2016, , 463, L117
Page, K. L., Kuin, N. P. M., Osborne, J. P., & Schwarz, G. J. 2015, The Astronomer’s Telegram, 7953
Parker, P. D. & Kavanagh, R. W. 1963, Physical Review, 131, 2578
Prantzos, N., Boehm, C., Bykov, A. M., et al. 2011, Reviews of Modern Physics, 83, 1001
Rau, A., Kienlin, A. V., Hurley, K., & Lichti, G. G. 2005, , 438, 1175
Savchenko, V., Bazzano, A., Bozzo, E., et al. 2017, , 603, A46
Savchenko, V., Ferrigno, C., Kuulkers, E., et al. 2017, , 848, L15
, T. 1957, , 41, 182
Schwarzschild, M. & Härm, R. 1965, , 142, 855
Seach, J. 2015, Central Bureau Electronic Telegrams, 4080
Shafter, A. W. 2017, , 834, 196
Siegert, T. 2017, Dissertation, Technische Universität München, published online at https://mediatum.ub.tum.de/node?id=1340342
Siegert, T., Diehl, R., Khachatryan, G., et al. 2016, , 586, A84
Spitzer, L. 1998, Physical Processes in the Interstellar Medium, 335
Starrfield, S., Truran, J. W., Sparks, W. M., & Kutter, G. S. 1972, , 176, 169
Starrfield, S., Truran, J. W., Wiescher, M. C., & Sparks, W. M. 1998, , 296, 502
Tajitsu, A., Sadakane, K., Naito, H., Arai, A., & Aoki, W. 2015, , 518, 381
Tajitsu, A., Sadakane, K., Naito, H., et al. 2016, , 818, 191
Ubertini, P., Lebrun, F., Di Cocco, G., et al. 2003, , 411, L131
Vedrenne, G., Roques, J.-P., Schönfelder, V., et al. 2003, , 411, L63
Vestrand, W. T., Share, G. H., Murphy, R. J., et al. 1999, , 120, 409
Winkler, C., Courvoisier, T. J.-L., Di Cocco, G., et al. 2003, , 411, L1
Yan, Z.-C., Nörtershäuser, W., & Drake, G. W. F. 2008, Physical Review Letters, 100, 243002
Yoon, S.-C., Langer, N., & van der Sluys, M. 2004, , 425, 207

[^1]: E-mail: tsiegert@mpe.mpg.de
[^2]: @Banerjee2016_v5668 provide three distance estimates: one considering the geometry and expansion velocity, but without uncertainties, and two others, based on two different MMRD method assumptions, leading to two different absolute magnitudes, $M_V=-6.91\pm0.40$ and $-6.65\pm1.82$, and thus to two distance ranges, 1.31-1.76 and 0.68-3.6 kpc, respectively.
[^3]: This is true for many other prompt gamma-ray lines which are produced during explosive burning. Here, the 429 keV line serves as a proxy for similar $(p,\gamma)$-reactions of the CNO cycle, operating in shells at temperatures of $\approx 10^8$ K, which are expected to be opaque at this time.
[^4]: Between 50 and 300 keV, the nova model by @Hernanz2014_nova indeed follows a power-law with index -1.
[^5]: Longer-lived here means no prompt annihilation in the nova itself, but annihilation 0.01-10 Myr later in the interstellar medium.
[^6]: This is the case for both CO and ONe novae. In ONe novae, however, of the order of $10^{-8}~{\mathrm{M_\odot}}$ of $^{22}{\mathrm{Na}}$ is produced additionally, which is also a $\beta^+$-decayer, and which in any case contributes to the galactic positron content, because $^{22}{\mathrm{Na}}$ has a half-life of 2.75 years, i.e. it decays at times when the nova is fully transparent and the ejecta are further away from the white dwarf.
Science, Technology & Society B.S. Course Description Intermediate Algebra w Trig Topics include: Exponents, roots, and radicals; Functions and their graphs; Solving and graphing quadratic equations and applications; Solving radical equations; Equations in quadratic form; General angle trigonometry; Solving systems of linear equations in two or three variables and applications. (TI-83 plus or TI-84 plus required, TI-Nspire or similar calculator is not allowed.) Prerequisite: MAGN 101 (C or better required) or equivalent 3 credits (3 lecture hours), fall or spring semester This course satisfies the Liberal Arts and Sciences requirement and the SUNY General Education Requirement for Mathematics
Q: How to save date into 24 hours format in oracle I am new to Oracle, and I need to save date and time in an Oracle database. I am using TIMESTAMP as the datatype for the column. But now my problem is that it displays the date and time in 12-hour format, like 17/11/2011 10:10:10 PM, whereas I need it in 24-hour format, like 17/11/2011 22:10:10. I didn't understand the results that my Google searches provided. Can anyone please help me by posting some code? A: Oracle always stores timestamps (and dates) in a packed binary format that is not human-readable. Formatting is done only when a timestamp (or a date) is converted to a string. You can control the formatting of your output by coding an explicit to_char. For example SELECT to_char( your_timestamp_column, 'DD/MM/YYYY HH24:MI:SS' ) FROM your_table A: Oracle stores timestamps in an internal format (with a default representation). You can customize this representation on output with, for example, the to_char() function. For input (into the database) you can use to_date().
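The same store-a-value, format-only-on-output principle is not Oracle-specific. For illustration, here is the analogous round trip in Python (this is not Oracle code, just the same idea in another environment):

```python
from datetime import datetime

# A timestamp is stored as a value, not as text; the 12- vs 24-hour clock
# convention only appears when you format it for output.
ts = datetime(2011, 11, 17, 22, 10, 10)

print(ts.strftime("%d/%m/%Y %I:%M:%S %p"))  # 12-hour form, e.g. 17/11/2011 10:10:10 PM
print(ts.strftime("%d/%m/%Y %H:%M:%S"))     # 24-hour form: 17/11/2011 22:10:10

# Parsing is the mirror image (Oracle's to_date plays the same role):
parsed = datetime.strptime("17/11/2011 22:10:10", "%d/%m/%Y %H:%M:%S")
assert parsed == ts
```

`%H` versus `%I`/`%p` in Python corresponds directly to `HH24` versus `HH`/`AM` in Oracle's format models: both select a presentation, not a storage format.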
I listened, tweeted and chaired. These three activities filled the five days in a way that I, in part, expected and in part did not. Roller Coaster When you arrive on Monday and leave on Friday, you are neither the first to arrive nor the last to go. But many are only staying for a couple of days, and it really is an emotional roller coaster to see the facilities fill up during the Tuesday, to a point that you simply cannot find the people you are looking for, and empty gradually from the Wednesday evening on. It is like being trapped in a strange time dimension. The main rhythm was imposed by the plenary sessions, during which the highest moments of the ascending curves were reached: one was the opening keynote on Tuesday, the other the closing keynote on Friday. But the poster session (Wednesday in the late afternoon), although not a plenary, also brought everyone together during the slam – the third highlight. When I organized the third DH Berlin Einstein-Workshop with Claudia Müller-Birn a year ago, the general feeling was also that the poster session was the real climax of the event. The remarkable thing about a poster session is that, as an organizer, you can handle it more or less well; plus, the people presenting their posters are unpredictable slammers, some boring, some really good and many in between, and there may not be enough room for people to navigate between the posters and talk as much as they’d wish (it is often loud). Still, there is so much more exchange and discussion room than in any other traditional humanities format that everyone comes out of it filled with an amazingly energetic momentum. Surrogates I had the pleasure of chairing sessions dealing in part with media I am not used to working with. Plus, I deliberately went to sessions dealing with still other media.
It was intellectually extremely stimulating to see how approaches from radically different angles (theater, architecture, music, graphic novel, video game) would (to my eyes at least) converge with text-based methodological issues, one of which is the impossibility of rendering everything. I have been thinking about surrogates in a way that was, so far, very abstract. Surrogates are a key to us dealing with the fact that totality is delusional. Resorting to them amounts to acknowledging that there are missing bits, which we can define solely as an instance, by their function or place for example. In Graz, I experienced presentations of approaches struggling with a much more material understanding of surrogates, and sometimes even of sources. I think that we text people also work with representations that necessarily flatten performativity, and that we have to deal with epistemological constraints very similar to those of other media, but the technical impedimenta still seemed to weigh too heavily for the similarities to emerge easily. I studied German Studies and Philosophy in Paris, where I got my PhD in 2002. I then moved to Berlin, where I have been living and doing research ever since. My areas of specialty include German literature, Digital Humanities, textual scholarship and intellectual history. I am currently working at the Centre Marc Bloch in Berlin as an expert in digital technologies for the humanities.
Stereogenic phosphorus-induced diastereoselective formation of chiral carbon during nucleophilic addition of chiral H-P species to aldehydes or ketones. P,C-stereogenic α-hydroxy phosphinates or phosphine oxides were prepared from the additions of (RP)-phosphinate to ketones or of (RP)-phosphine oxide to aldehydes, respectively, catalyzed by bases at room temperature, in up to >99:1 diastereomeric ratio (d.r.P/d.r.C) and up to 99% yield. The diastereoselectivity was induced by a reversible equilibrium and by the different stabilities of the two diastereomers of the adduct, which arise from the spatial interaction between the menthoxyl or menthyl group and the alkyl groups of the aldehydes or ketones.
The four tries scored in the final edged the Crusaders to 90 tries, one more than the Lions, with the Waratahs back on 81 and the Hurricanes fourth on 72. In clean breaks during the season the Waratahs were best with 276, followed by the Chiefs on 259 and the Crusaders on 254. The Lions were fifth on 208. The Lions had the most carries with 2267, followed by the Crusaders on 2196. But in metres carried the Crusaders were second on 8712 behind the Waratahs on 9218, while the Chiefs were third on 8619 and the Lions fourth on 8178. The Lions beat the most defenders with 519, while the Crusaders were second on 446. In tackles won, the Crusaders were second on 85 percent, just behind the Sharks on 85.2, while the Lions were 12th on 82.2 percent. The Lions had the highest winning lineout percentage of 90.8, just ahead of the Bulls on 89.9, while the Crusaders were 10th on 86.5 percent. The Crusaders were sixth-best in offloads with 157, while the Lions were 12th with 131. The Sharks were the best on 203, with the Chiefs second on 198 and the Blues third on 183. The scrummaging success statistic revealed an intriguing figure: the five New Zealand sides occupied the top five places, with the Lions, Stormers and Reds equal sixth. The Chiefs were best on 95 percent, with the Blues on 94, the Crusaders on 93 and the Highlanders and Hurricanes equal on 92 percent. The Lions, Reds and Stormers were each on 91 percent. In rucks won, the Crusaders and Jaguares shared first with 97 percent, while the Lions, Hurricanes, Reds, Chiefs, Rebels, Sharks, Stormers, Blues, Sunwolves and Brumbies shared third with 96 percent.
version https://git-lfs.github.com/spec/v1 oid sha256:407856398cc3ce1186f73afde69b6f6e054fe246f6f5ad7a728fdb26280add66 size 17161
Q: Solving for $3^x - 1 = 2^y$ Besides $x=2, y=3$, are there any other solutions? I know that if there is another solution: $y$ is odd since $2^y \equiv -1 \pmod 3$ $x$ is even since $3^x - 1 \equiv 0 \pmod 8$ $3 | y$ since $-1 \equiv 2^y \pmod 9$ Are there any other solutions? If not, what is the argument for showing that if $3^x > 9$, then $2^y \neq 3^x-1$? Thanks, -Larry A: You don't need the full strength of Catalan's conjecture here. Two solutions are found easily. $3^1-1=2^1$ gives $x=y=1$. $3^2-1=2^3$ gives $x=2$ and $y=3$. To prove that these are the only solutions, assume $x > 2$. As you have noted, $x$ must then be even: $x = 2z$ with $z > 1$. Now we have $$ 3^x-1 = 3^{2z}-1 = (3^z-1)(3^z+1) = 2^y. $$ It follows that both $3^z-1$ and $3^z+1$ are powers of $2$. But this is impossible, because both numbers are larger than $2$ (since $z > 1$), and they are only two units apart. QED. PS: I think I've seen this argument somewhere on math.SE.
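As a sanity check (not a replacement for the proof above), a short exhaustive search in exact integer arithmetic confirms that no further solutions hide in a modest range:

```python
# Check 3^x - 1 = 2^y for 1 <= x < 1000 using exact integers:
# 3^x - 1 solves the equation iff it is a power of two.
solutions = []
for x in range(1, 1000):
    n = 3**x - 1
    if n & (n - 1) == 0:              # true iff n > 0 is a power of two
        solutions.append((x, n.bit_length() - 1))
print(solutions)  # [(1, 1), (2, 3)]
```

The bit trick `n & (n - 1) == 0` holds exactly for positive powers of two, so no floating-point logarithms are involved.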
Boren, a 6-foot-3, 320-pound guard, was a Second Team All-American for Ohio State in 2010. He went undrafted and signed with the Baltimore Ravens practice squad for the 2011 season. He also worked as a center for the Ravens. During the 2012 preseason, Boren suffered a foot injury. The Ravens released him in early September after reaching an injury settlement. Boren is known locally for his contentious departure from the University of Michigan, where he played his first two college seasons. After the retirement of coach Lloyd Carr, Boren left the program, citing the erosion of family values. An Ohio native, Boren transferred to Ohio State to finish his collegiate career.
Online computer support from a technician is simple. You give us the go-ahead to connect to your computer in Essex remotely, and we will fix your problem while you sit back and relax. You can even watch what the technician is doing at all times, and if you prefer, we can have someone in Essex at your door to do the repair work in no time. Monthly unlimited PC and Laptop support for just £19.99 per month. Unlimited 24/7 access all year long. Secure your data and your online identity. Tools to repair and keep your PC running fast. Comprehensive IT support for Servers, PC Computers, Laptops and Software Applications, including technical consultation. To get help in Essex from a wEinstein Solutions technician right now, call 0345 388 1879
Q: Is it correct to use DIV inside FORM? I'm just wondering what you think about using a DIV tag inside a FORM tag. I need something like this: <form> <input type="text"/> <div> some </div> <div> another </div> <input type="text" /> </form> Is it general practice to use DIV inside FORM, or do I need something different? A: It is totally fine. The form will submit only its form controls (input, textarea, select, etc.). You have nothing to worry about with a div within a form. A: It is completely acceptable to use a DIV inside a <form> tag. If you look at the default CSS 2.1 stylesheet, div and p are both in the display: block category. Then looking at the HTML 4.01 specification for the form element, they include not only <p> tags, but <table> tags, so of course <div> would meet the same criteria. There is also a <legend> tag inside the form in the documentation. For instance, the following passes HTML4 validation in strict mode: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <META http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>Test</title> </head> <body> <form id="test" action="test.php"> <div> Test: <input name="blah" value="test" type="text"> </div> </form> </body> </html> A: You can use a <div> within a form - there is no problem there. But if you are going to use the <div> as the label for the input, don't: a label is a far better option: <label for="myInput">My Label</label> <input type="text" id="myInput" name="MyInput" value="" />
# This set of tests exercises the backward-compatibility class # in mailbox.py (the ones without write support). import mailbox import os import time import unittest from test import support # cleanup earlier tests try: os.unlink(support.TESTFN) except os.error: pass FROM_ = "From some.body@dummy.domain Sat Jul 24 13:43:35 2004\n" DUMMY_MESSAGE = """\ From: some.body@dummy.domain To: me@my.domain Subject: Simple Test This is a dummy message. """ class MaildirTestCase(unittest.TestCase): def setUp(self): # create a new maildir mailbox to work with: self._dir = support.TESTFN os.mkdir(self._dir) os.mkdir(os.path.join(self._dir, "cur")) os.mkdir(os.path.join(self._dir, "tmp")) os.mkdir(os.path.join(self._dir, "new")) self._counter = 1 self._msgfiles = [] def tearDown(self): list(map(os.unlink, self._msgfiles)) os.rmdir(os.path.join(self._dir, "cur")) os.rmdir(os.path.join(self._dir, "tmp")) os.rmdir(os.path.join(self._dir, "new")) os.rmdir(self._dir) def createMessage(self, dir, mbox=False): t = int(time.time() % 1000000) pid = self._counter self._counter += 1 filename = os.extsep.join((str(t), str(pid), "myhostname", "mydomain")) tmpname = os.path.join(self._dir, "tmp", filename) newname = os.path.join(self._dir, dir, filename) fp = open(tmpname, "w") self._msgfiles.append(tmpname) if mbox: fp.write(FROM_) fp.write(DUMMY_MESSAGE) fp.close() if hasattr(os, "link"): os.link(tmpname, newname) else: fp = open(newname, "w") fp.write(DUMMY_MESSAGE) fp.close() self._msgfiles.append(newname) return tmpname def assert_and_close(self, message): self.assertTrue(message is not None) message.fp.close() def test_empty_maildir(self): """Test an empty maildir mailbox""" # Test for regression on bug #117490: self.mbox = mailbox.Maildir(support.TESTFN) self.assertTrue(len(self.mbox) == 0) self.assertTrue(next(self.mbox) is None) self.assertTrue(next(self.mbox) is None) def test_nonempty_maildir_cur(self): self.createMessage("cur") self.mbox = mailbox.Maildir(support.TESTFN) 
self.assertTrue(len(self.mbox) == 1) self.assert_and_close(next(self.mbox)) self.assertTrue(next(self.mbox) is None) self.assertTrue(next(self.mbox) is None) def test_nonempty_maildir_new(self): self.createMessage("new") self.mbox = mailbox.Maildir(support.TESTFN) self.assertTrue(len(self.mbox) == 1) self.assert_and_close(next(self.mbox)) self.assertTrue(next(self.mbox) is None) self.assertTrue(next(self.mbox) is None) def test_nonempty_maildir_both(self): self.createMessage("cur") self.createMessage("new") self.mbox = mailbox.Maildir(support.TESTFN) self.assertTrue(len(self.mbox) == 2) self.assert_and_close(next(self.mbox)) self.assert_and_close(next(self.mbox)) self.assertTrue(next(self.mbox) is None) self.assertTrue(next(self.mbox) is None) def test_unix_mbox(self): ### should be better! import email.Parser fname = self.createMessage("cur", True) n = 0 with open(fname) as fp: for msg in mailbox.PortableUnixMailbox(fp, email.Parser.Parser().parse): n += 1 self.assertEqual(msg["subject"], "Simple Test") self.assertEqual(len(str(msg)), len(FROM_)+len(DUMMY_MESSAGE)) self.assertEqual(n, 1) class MboxTestCase(unittest.TestCase): def setUp(self): # create a new maildir mailbox to work with: self._path = support.TESTFN def tearDown(self): os.unlink(self._path) def test_from_regex (self): # Testing new regex from bug #1633678 f = open(self._path, 'w') f.write("""From fred@example.com Mon May 31 13:24:50 2004 +0200 Subject: message 1 body1 From fred@example.com Mon May 31 13:24:50 2004 -0200 Subject: message 2 body2 From fred@example.com Mon May 31 13:24:50 2004 Subject: message 3 body3 From fred@example.com Mon May 31 13:24:50 2004 Subject: message 4 body4 """) f.close() box = mailbox.UnixMailbox(open(self._path, 'rb')) messages = list(iter(box)) self.assertTrue(len(messages) == 4) for message in messages: message.fp.close() box.fp.close() # Jython addition: explicit close needed # XXX We still need more tests! 
def test_main(): support.run_unittest(MaildirTestCase, MboxTestCase) if __name__ == "__main__": test_main()
Canadian Forestry Corps Initial creation : 14 November 1916 Disbanded: 1920 Reraised : 1940 Disbanded: 3 December 1945 The Canadian Forestry Corps was an organizational corps of the Canadian Army during both World Wars. Lineage 14 Nov 1916: Canadian Forestry Corps created, formed from an existing forestry battalion (224th Battalion, CEF) and the conversion of other infantry battalions (including the 238th Battalion, CEF) for forestry duties. 1920(?): Disbanded. May 1940: Canadian Forestry Corps once again created. 3 Dec 1945: Disbanded. Functions The Canadian Forestry Corps provided lumber for the Allied war effort by cutting and preparing timber in the United Kingdom and on the continent of Europe in both the First World War and the Second World War. Forestry units also cleared terrain for the construction of installations such as airfields and runways, prepared railway ties, as well as lumber for the creation of barracks, road surfaces, ammunition crates, trench construction, etc. These units were sometimes called on in the First World War to perform as infantry. History First World War Above - Light railway in use by men of the Canadian Forestry Corps. LAC Photo. Right - Aboriginal member of the Canadian Forestry Corps in the UK. LAC Photo. The success of German U-Boats in the Atlantic in the First World War caused a restriction on the number of imports to Britain. Millions of tons of lumber had travelled across the ocean from Canada to the UK in 1915. In Feb 1916, the British government requested assistance from Canada with regards to the production of timber, hoping to utilize resources available in Britain. The 224th Canadian Forestry Battalion was raised and arrived in England in Apr 1916, less than three months after the initial request. The battalion moved to Virginia Water Camp in Surrey, to produce sawn lumber. Detachments were sent to other places in England and Scotland.
A second British request for additional forestry units resulted in the formation of the 238th Canadian Forestry Battalion, which arrived in England in Sep 1916. In Oct 1916, authority was granted to form the Canadian Forestry Corps. Both battalions joined the corps; by Nov 1916, six forestry battalions had arrived overseas, including the 242nd Battalion, CEF. In Dec 1916, the battalions were broken up to form independent forestry companies. Eventually 102 companies were formed in Europe. A small group was already operating in France at Bois Normand, with the first headquarters at Conches (Eure). This headquarters was expanded into a Canadian Forestry Group headquarters (eventually designated Centre Group) divided into two districts. By Jun 1918, three other groups were in operation; Jura Group, Bordeaux Group, and Marne Group, and each of these groups also had two district headquarters under command. Canadian Forestry Corps headquarters for France was established at Paris-Plage, near Boulogne, with an office in Paris linking the district and group headquarters with a corps supply depot where technical equipment was warehoused, at Le Havre. Arrangements had been made in Canada for the purchase and shipment of necessary machinery and equipment to operate saw mills and other facilities. The corps also ran three forestry hospitals. In Mar 1918, the corps was called on to train 800 men as reinforcements for the Canadian Corps, to be drawn from across all the districts. On 2 Feb 1917, independent forestry companies were formed in each Military District in Canada as well. On 17 Jul 1917, Forestry Depot Companies were formed in each Military District in Canada. At the end of the war, 56 companies were in operation on the Western Front, including 13 made up of German prisoners of war. In total, 19,162 men were on strength. 
Seven more companies were engaged exclusively in technical work for Allied air forces, including clearing, grading, levelling and draining land for the creation of airfields. A scarcity of rivers and waterways in France had necessitated the adoption (and creation) of a broad network of narrow-gauge railways. Six districts were in operation in the UK at war's end (at Carlisle, Egham, Southampton and East Sheen in England, and Stirling and Inverness in Scotland). Some 43 companies were in operation, with a strength of 12,533, including 3,046 attached labourers and prisoners of war. The corps' base depot had been established at Smith's Lawn, Windsor, shortly after the 224th Battalion arrived overseas, and all newly arriving soldiers for the corps passed through the depot before reinforcements for companies in France or the UK were selected. The average monthly turnover at the depot was 1,500 men. In total, the combined strength of the corps on 11 Nov 1918, including attached officers and foreign soldiers (British, Portuguese, Finns and prisoners of war), was 31,447.

Second World War

[Photo: Lumberman and teamster Royal Fournier of Maniwaki, Quebec, with the 26th Company of the Canadian Forestry Corps at a logging camp in Quebec in 1943. LAC Photo.]

The attempted blockade of the UK in the Second World War once again required the British to look to Canada for assistance in meeting the need for timber; the first request from England for forestry companies was actually made in Oct 1939.[1] Wood was needed for living quarters, messes and recreation facilities; for crates for vital supplies such as food, ammunition and even vehicles; and for the creation of explosives, stocks for weapons, and the construction of ships, aircraft and factory facilities. After the success of the original Canadian Forestry Corps, a new corps was created in May 1940 to perpetuate their work; twenty companies were initially raised, and ten more were formed as the war progressed.
Canada agreed to shoulder the expense of pay, allowances and pensions, all initial personal equipment, and transport of individual members of the corps to and from the United Kingdom. The British Government paid for "all other services connected with equipment, work or maintenance" and certain others such as medical services (though Canada covered the costs associated with Medical Officers, Britain paid for actual hospitalization). While the British designated the areas of work and the final disposal of the lumber produced, military operations were under the purview of Canadian Military Headquarters in London. Both anglophones and francophones were recruited from across Canada, including many veterans of the corps from the First World War, among them the corps' first commander, Brigadier J.B. White, who had commanded timber operations in France in 1918. Unlike in the First World War, when "Canadian Forestry Corps personnel did not receive military training other than basic drill, courtesies and protocols",[2] personnel of the CFC in the Second World War received five to seven months of training, mainly at Valcartier, before moving overseas. The decision to provide military training to these men was made in Jun 1940, given the impending danger of German invasion prevalent at that time. For the most part, the C.F.C. camps were constructed from scratch, the personnel building barracks, roads and bridges and setting up power plants. Each company's sawmill was usually located close to its camp and employed both "Canadian Mills" and the smaller "Scotch Mill", though the latter was not viewed with approval by the Canadians. The average time lag between arrival at the camps and the start of logging operations was 97 days. The companies worked in two sections, one cutting in the bush and bringing out the timber, the other sawing it into lumber at the company mill. The felling crew consisted of three men, two sawing and one trimming.
Hand saws and axes were the tools employed, and three-man "Cat" teams yarded the logs to the roadside landings, either by dragging them or by use of sulkies. Each C.F.C. unit was a self-contained community, including men capable of turning their hand to any task from blacksmithing and mechanical repair to snow clearance on the highland roads. A regular portion of each unit's time was devoted to military training, each company preparing defensive positions in its area in cooperation with the troops of Scottish Command in the event of German invasion.[3] By May 1941, Corps Headquarters was in operation in Scotland with 13 forestry companies (each about 200 men strong), organized into five Forestry Districts, each with its own headquarters (in the counties of Inverness, Ross, Aberdeen, Nairn and Perth). Seven more companies arrived in late Jul 1941. The corps cleared approximately 230,000 forest acres in Scotland during its stay. In 1942, ten additional companies were raised, the last arriving in Oct 1942. By the spring of 1943, however, manpower problems in the Canadian Army caused several hundred soldiers suitable for other employment to be remustered to other overseas units. In Oct 1943, ten companies (totalling close to 2,000 men) were repatriated to Canada for forestry duties there. After the landings in Normandy in Jun 1944, ten companies eventually moved to the Continent to continue operations there; 77 square-timber rafts and 54 round-timber rafts had been created in Southampton to move timber across the English Channel with them. By the end of Aug 1944, operations had commenced on the continent; six companies of the CFC were called out to hold the line during the German Ardennes Offensive in Dec 1944, when Allied reserves were stretched to the limit. On 1 Sep 1945 the CFC was officially disbanded (forestry operations had already ceased in Scotland in Jun) and all 20 companies returned to Canada.
In all, at its peak, the overseas strength of the corps had been 220 officers and 6,771 other ranks. A total of 442,100,100 feet board measure of timber had been cut in Scotland, England and France during the corps' time in Europe. Also of note, Newfoundland contributed foresters to the war effort as well; the Newfoundland Overseas Forestry Unit was created in Nov 1939 from civilians, and by Dec 1942 it numbered 1,497 men who had volunteered for the duration of the war. They conducted operations in Scotland similar to those of the CFC.

Notes

1. Stacey, C.P. Official History of the Canadian Army in the Second World War, Volume I: Six Years of War (Queen's Printer, 1955), p. 65
2. Love, David W. A Call to Arms: The Organization and Administration of Canada's Military in World War One (Bunker to Bunker Books, Calgary, AB, 1999), ISBN 1894255038, p. 249
3. MacPherson, John. Echo Two website, accessed 13 Jan 2006
<?xml version="1.0" encoding="utf-8"?>
<bitmap xmlns:android="http://schemas.android.com/apk/res/android"
    android:src="@drawable/tile"
    android:tileMode="repeat" />
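Assuming the drawable above is saved as res/drawable/tile_background.xml (a hypothetical file name) alongside a res/drawable/tile bitmap, it can be referenced from a layout like any other drawable; a minimal sketch:

```xml
<!-- Hypothetical layout: tiles the bitmap across the whole view
     by pointing android:background at the <bitmap> drawable above. -->
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@drawable/tile_background" />
```

Because android:tileMode="repeat" is set, the source bitmap repeats in both directions instead of being stretched to fill the view.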
Days Black People Not Re-Enslaved By Trump

Tuesday, December 31, 2013

Despite the headlines, that's not really the most important news. No, the important news is buried in the piece. Take a look at the picture. Look at all those black faces. Look at them. Illegal guns. Illegal guns.

"Illegal guns drive violence. And military-type weapons like the one we believe to have been used in this shooting belong on a battlefield — not on a street or in a corner or in a park," McCarthy says.

Let's look at that statement. If the problem were definitively "illegal guns," then we would expect murder victims (via guns) to be proportionate to each group's share of the population. Whites are 45% of the population, African-Americans make up 33%, and Latinos (who can be of any race) 29%. In a situation where all groups were equally disposed to committing gun crimes, we would see about the same proportions of murder victims (and shottas). But the reality is not so. In 2011, 75% of murder victims were African-American, 4.6% were white and 18% "Hispanic". In fact, since 1991 the percentage of Chicago murder victims who are African-American has been near 80%. This from a group that makes up 33% of the population. Question: Is it "illegal guns," or something totally fucked up in our communities? Are "illegal guns" up and jumping into the hands of African-American males and then, by some kind of mind control, making them point said "illegal guns" at other African-American males and pull the trigger? Are the "illegal guns" operating on some kind of remote control and taking out African-Americans all on their own? Do "illegal guns" somehow dislike white people and therefore go out of their way to avoid the hands of white males? See, let's keep it real. "Illegal guns" do not do anything on their own. Guns of any legal status do nothing on their own. People are the problem, and clearly in Chicago it is one set of people who have a problem: African-Americans. Let's keep it 100% real here.
If we keep blaming inanimate objects for our total failure to properly socialize our children, we will continue to see these population-control levels of murders.

In a city of neighborhoods, though, crime rates are not equal, and many of the shootings here are gang-related in the city's South and West sides.

I know NPR doesn't want to offend African-Americans, but let's keep it real. The above should have read:

In a city of neighborhoods, though, crime rates are not equal, and many of the shootings here are gang-related in the city's Black South and West sides.

That would be keeping it real. But I'm sure folks would say it is "racist" to say that even though it is factual (I suppose I'm a self-hating negro for pointing it out too).

Community activists and ministers recently attended a public hearing convened by the Rev. Al Sharpton. "They say that the shooting is down. Well, if one person is shot, it's one too many," he said.

Let's be clear here. Just like with any crime, there will always be some. So let's lay off the "one too many" talk. People lose control from time to time. It happens. It's a part of the human condition.

Natjuan Herrin lives on Chicago's West Side and is also skeptical. "Well, where I come from, they shoot every day, all day, but it's not safe nowhere in Chicago. Wherever you go, it's not safe," Herrin says.

No ma'am, it is not "not safe nowhere in Chicago." As pointed out, the unsafe places are where African-Americans live in large numbers. You know, the places where illegal guns magically appear and jump into people's hands and take over their minds to commit murders that are the fault of white people who don't live there, white people scheming in their white neighborhoods via their school systems where they control the minds of black men (who drop out and skip out at alarming rates) in order to remote-control them into committing murder. I suppose I'm an even more self-hating negro for pointing this out too.
Fardon says he doesn't believe the city can arrest its way out of its gang problem. "It is too big. It is too deep. It is too insidious. It starts at too young an age," Fardon says.

Fardon is absolutely correct. You cannot arrest your way out of the problem because the problem starts at home. The crime problem in Chicago among African-Americans is largely a problem of socialization. This means that it is generational. And no, it's not all African-Americans or even most. But it is enough. It will change when we decide we have had enough.

Others, like Matthew VanDyke, say the report highlights the nuance of the attack, which was lost in the wake as political and military pundits sought to score points. VanDyke — an American who fought with Libyan rebels to oust Gaddafi — says the report vindicates his initial assessment of Benghazi, and says people mentioning Al Qaeda have a fundamentally flawed view of Al Qaeda as a top-down organization with regimented ranks.

So why was there an American in a foreign country helping to overthrow the government of that country? How does that help other Americans in countries where they are suspected of agitating against the governments of those countries?

I read a lot of commentary regarding the report that 48% of Republicans do not believe in evolution. It was mainly of the "Republicans are stoopid, see?" type. When I read the report, I noticed that nearly 30% of Democrats and nearly 30% of Independents also think that Felis domesticus was walking around in the Garden of Eden with the dinosaurs (no doubt lying with lambs). Personally, I don't think it's much to crow about, because 30% is a rather large number even if it is less than the Republican number. However, there is another reason for the increase in the percentage of Republicans believing that dinosaurs and humans roamed the earth at the same time: fewer people identifying as Republicans.
I'm pretty certain that the trend noted in the linked piece has continued to some extent. I believe that those who were on the side of evolution are probably Independents or Democrats now (more likely Independents), so I'm almost certain that a significant portion of the change is due to the shrinking base of Republicans and the defection of the more scientifically minded ones to other parties.

Monday, December 30, 2013

Well, the problem is what Tony Herbert said. See, it's not the NYPD's fault that the kids are running around the mall fighting and carrying on. No. It is firstly the fault of the persons involved, and secondly the fault of the parents for fucking up raising their children so they think it's OK to act a total criminal fool in public. How about this: be in constant contact and constant talk with the parents of these folks to let 'em know WE will not tolerate their kids rampaging. How about, when and if one of these kids gets popped by the police after or during one of these events, we don't go talking about police brutality and save that for actual innocent people like Louima and Diallo? But it's always easy to put the onus on the police (the state) rather than on the people who are legally responsible for the folks acting a fool. It's not the job of the NYPD to properly socialize our children.

Thursday, December 26, 2013

One of the things that often bothers me about films about historical black events is the focus on the "good white folks". It's the white-savior syndrome that creeps into a film for no other reason than to get the white audience members to identify or relate to the film. As an aside, it is interesting, though quite explainable, how black people are willing and able to identify with white characters in movies that have ZERO black people in them (see the fascination with the Godfather movies), but somehow it is "difficult" to have white audiences "relate" to a black[-themed] movie.
Anyway, below you'll see the Italian posters for 12 Years a Slave. Clearly, someone thought (perhaps correctly) that Italians would be unable or unwilling to see the movie if the focus appeared to be on its actual central character. While Fassbender could possibly rate a large poster due to his lengthy performance, the fact that Brad Pitt appears for maybe 20 minutes yet gets to be the central point of the poster says a whole lot. And poor Lupita Nyong'o doesn't even rate a mention on the poster, and she had more screen time than Pitt. Patsey: whole life a slave. Perhaps the Mandela movie will feature a huge picture of De Klerk on its poster.

Monday, December 23, 2013

Read a piece in Counterpunch in which one Joy Freeman-Coulbary (yes, FREEMAN, *heavy sigh*) was quoted as follows:

As woman of African, Irish, Mexican and Native American descent, married to a husband of Senegalese and Portuguese descent, I believe that intermarriage contributes to inter-cultural understanding and diminishes racial prejudices and tensions. The less we adhere to “race,” the less racism persists. Romantically, it’s also spicy and thrilling to defeat our hardwired biases and find love in the less familiar…

Yes, and fuck you too. I meet a lot of moos, usually those who are bi- (or whatever) racial, who like to make such claims with blatant disregard for how racist and insulting it is to the rest of us so-called "pure breeds". Basically this chick (can you tell I have no respect for her?) is saying that if all of us were like her and her fuck buddy, we'd all be understanding, because you know folks can't be not racist unless they get all mixed up. Total bullshit. Oh, and not only that: the rest of us "pure breeds" are having less spicy and thrilling sex because, well, we're not fucking mixed. The fucking horror. All the spicy and thrilling sex I'm missing because I am not adequately mixed. Yes, and fuck you too, lady.
You know, this "less black is best" thinking is common among enslaved populations and among those who have been colonized by white people. I guess this chick never thought about that. Yes, I'm being THAT harsh, because comments like that are fucking out of order.

Instead, millions of people on MoveOn’s list are continually deluged with emails pretending that Republicans are the only major problem in Washington — while nearly always ignoring Obama administration policies that are antithetical to basic progressive values.

I guess it's taken Norman THIS long to figure out that MoveOn is a Democratic party organ rather than an independent body. Just like I figured out that not a few people who I saw on Twitter were Democratic party operatives/apologists/partisans rather than thinkers. It's tough living with principles, open eyes and stuff like actual data.

The National Union of Metalworkers of South Africa, which calls itself “the biggest union in the history of the African continent,” with 338,000 members, announced Friday after a special congress that it would seek to start a socialist party aimed at protecting the interests of the working class. It was a direct rebuke to the A.N.C., which since its days as an underground movement resisting apartheid rule has portrayed itself as the champion of South Africa’s downtrodden. “It is clear that the working class cannot any longer see the A.N.C. or the S.A.C.P. as its class allies in any meaningful sense,” Irvin Jim, the union’s secretary general, said at a news conference, referring to the governing party and its partner in government, the South African Communist Party.

When the recent miners' strike was ended with the murder of strikers, just like before the ANC took power, and with the OK of the so-called "black leadership", we all saw what a few had known for a long time: the ANC has lost its way. Anyway, this just reminds me that I need to write that piece on Mandela and the ANC.
Thursday, December 19, 2013

The new school superintendent in Camden, N.J., says it was a "kick-in-the-stomach moment" when he learned that only three district high school students who took the SAT in the 2011-12 school year scored as college-ready.

3? 3? I bet the rest of them know Drake and Jay-Z and Kanye by heart, though. I suppose the problem is racism. You know, the fault of the 5% white population. Or maybe the fact that these kids don't get to sit in classrooms with white students, cause you know, none other than the US Supreme Court has declared that black kids can't do shit unless they are sitting at a desk next to white folks. I don't doubt poverty played a huge role in the level of "preparedness", but when ALL your students are unprepared for college, something else entirely is afoot.

There is no dispute that Wafer shot McBride -- who was drunk and seeking help after a car crash -- through the screen of his front door in the early hours of Nov. 2. Wafer called 911 around 4:30 a.m. and said he had shot someone who was “banging on my door.” More than three hours earlier, McBride had crashed her car into a parked car in a residential neighborhood… But Wayne County assistant prosecutor Danielle Hagaman-Clark said it's “ridiculous” to believe that Wafer was deeply afraid but still decided to open the door and fire instead of first calling the police. “He shoved that shotgun in her face and pulled the trigger,” Hagaman-Clark said.

You can be scared for your life at the banging on your door. No problem. You can grab your gun too. But you can't open your door, put the gun in that person's face and pull the trigger. Opening the door means you ain't all that scared.

Three members of Japan's House of Representatives called on Glendale to remove an 1,100-pound statue honoring an estimated 80,000 to 200,000 "comfort women" from Korea, China and other countries who were forced into prostitution by the Japanese army during World War II.

You lost WWII. Have a seat.
It's a lot of gall to go to a third-party country and tell them what they can and cannot erect in their city. Have. A. Seat. For the Japanese living in LA: look, you live in a multicultural, multi-ethnic and multi-racial country. Folks are going to have things you don't necessarily approve of. Deal with it.

Under the Obama administration, there have thus far been 1,869,025 removals. If removals continue at this same pace, President Obama’s administration will reach two million deportations in 130 days – on April 26, 2014. It is thus accurate to say that President Obama will surpass President Bush’s record on deportations – unless he stops deporting people well before April 26. It is also accurate to say that President Obama has deported more people than any previous president except George W. Bush. It is also accurate to predict that President Obama is on track to surpass, in just over six years, the sum total of all deportations carried out under the most recent Bush administration.

And the problem is what? Every sovereign nation on the planet has the right to say who can come in and who can stay. Every sovereign nation on the planet has the right to deport individuals or groups who are not supposed to be in that country. The US has had an explosion of illegal immigration since the 1986 amnesty, so why does it shock anyone that there are more deportations than ever? Why do people in the country illegally think they have a right to stay? It's not even in the UN charter. Why the non-concern for low-wage, low-skilled US citizens who currently bear the brunt of increased competition in the job market? Oh, you think raising the minimum wage will fix that, eh? Oh, you think that by making those persons "legal" you can pretend there is no "citizen employment problem". I see you.

Wednesday, December 18, 2013

A few days ago I posted about Camden, that 90% black city which has a murder rate on par with some of the most dangerous countries in the world.
In it I commented as follows:

Oh, so the solution to the problem of black crime is white supervision. Let that sink in for a minute. How do a people who spend a whole lot of time talking about equality need white supervision in order to behave properly?

One would think that surely, after all the exploitation and racism shown by the colonial powers in Africa, in the absence of said exploiters and oppressors Africans would, you know, manage their own concerns. But:

The man, Abdon Seredangaru, 25, a primary-school teacher, was one of the many hundreds attacked in three days of mass killings this month here in Bangui, the capital of the Central African Republic. More than 450 people were massacred in the city, according to the United Nations, and 150 others nationwide… The arrival of French troops, and a contingent of African Union troops airlifted in by the American military, has brought some stability, and everyday life is slowly returning to the capital. Crowds jostled at the banks downtown as women in colorful dresses and high heels turned up for work.

So, half a globe away, in yet another black-populated area of the world, French (read: white) troops are needed to restore "stability". Am I the only African bothered by this? Why is it, time after time, white folks need to be flown in to one of the places that we supposedly "run" in order to restore "stability" (among other things)? Camden and the Central African Republic: more in common than you think.

Tuesday, December 17, 2013

But the Somali children were less likely than the whites to be “high-functioning” and more likely to have I.Q.s below 70. (The average I.Q. score is 100.) The study offered no explanation of the statistics.

No. The median IQ for those classified as Caucasian is 100. It is 110-115 for those classified as Asian and 85 for those classified as African. I get no joy from these statistics, but there they lie.
So when looking at a sample of white (European-descended) children versus a sample of African-descended children, it is expected that the European-descended children would still score higher and be "higher-functioning". So the explanation is already known.

The results echoed those of a Swedish study published last year finding that children from immigrant families in Stockholm — many of them Somali — were more likely to have autism with intellectual disabilities.

Again, not surprising. Northern European natives score a median of 100; Africans score a median of 85. So when there are other, non-biological factors added that would trigger intellectual disabilities, Africans will, unfortunately, show more of their population with such issues. Again, I don't enjoy writing this. It's not meant as a knock against the African. The same research shows that urbanized Africans have a significantly higher IQ than non-urban Africans, which suggests a strong influence of environment, education and, I think, methylation processes. But this is not the place for that discussion.

Even though the city has Asian and Native American communities, records for so few of those children were studied that they were not included in the analysis, she added, “but it’s reasonable to extrapolate that autism rates among them are lower.”

Well, research shows that Asians have a higher median IQ (115) than Caucasians and Africans. Therefore it would stand to reason that it would take a drop of two standard deviations for an Asian to reach the median IQ of the African, and even more to be registered as having an intellectual disability. Of course, the data would have to be collected and studied to confirm that. Anything else is just a guess.

At one time, 25 percent of the children in local special education classes were Somali, while Somalis represented only 6 percent of the student body.
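The standard-deviation arithmetic above can be sketched as follows, taking at face value the medians the post asserts (Asian 115, African 85) and the conventional IQ standard deviation of 15:

```python
# Sketch of the post's arithmetic only; the medians are the post's
# assertions, not established figures.
asian_median = 115
african_median = 85
sd = 15  # conventional IQ standard deviation

# How many standard deviations separate the two asserted medians?
gap_in_sds = (asian_median - african_median) / sd
print(gap_in_sds)  # 2.0
```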
While some children back home had the same problems children everywhere do, parents said, autism was so unfamiliar that there was no Somali word for it until “otismo” was coined in Minnesota.

Personally, I think it's the weather. I would act a fool if I had to deal with the winter they have to deal with. Kidding. The other possibility is a large number of misdiagnosed developmental issues that are NOT actually autism. It could be that the children are simply not that bright relative to the white population. It may also be clashes of cultures (language, habits, etc.) that are affecting them in ways that look like autism. Of course that's speculation, and I'm not even a little bit of an expert on autism. But the IQ statement? The NY Times should know better than to pass off 100 as THE average IQ when it is known that it is the median for a specific group.

So a judge in DC ruled that the NSA's grabbing of "metadata" was unconstitutional. I'm shocked. Not because I don't agree with the ruling, but because apparently there is an official in DC who actually understands the U.S. Constitution. It seems to me that a great many people do not understand what the term "probable cause" means and why it is in the Constitution. The U.S. government is prohibited from searching your private effects without a warrant. To qualify for a warrant, the government must provide reasonable suspicion that you have committed a crime. Similarly, in order to search you and your place, agents of the government must have probable cause to do so. These two things, reasonable suspicion and probable cause, are what keep the government in check. Those items are supposed to keep the government from going on "witch hunts" and invading the lives of private citizens. As the NSA said on 60 Minutes last Sunday, it is supposed to do foreign intelligence. Of course, under U.S. law the NSA may search whomever and whatever it pleases. No one who is not a U.S.
citizen is covered by constitutional guarantees of privacy. Nor are national governments or so-called "enemies of the state". So nothing written here is a commentary on any of that. However, U.S. citizens are supposedly covered under such guarantees, and the NSA has clearly and blatantly violated them. The NSA has made the ridiculous argument that, in terms of phone calls, it cannot get to the contents of a phone conversation. ANYONE with a Google Voice account knows full well that all the NSA has to do (and does; do not be fooled) is transcribe the conversation in real time to a small text file and store that. That whole "we don't have access to the contents" line is for the simple-minded among you. The next thing they claim is that it is OK for them to intercept so-called "metadata". Let us be clear: even the capturing of metadata of U.S. persons violates reasonable suspicion and probable cause. Why? Such metadata is not "public". It is not the same as you and I walking down the street in plain view of a camera (and I have issues with cameras, but that's another issue for another time). It can be argued that if you conduct your business in a place where any Jamaal can see you, then a government agent, such as the police, can also observe your clearly public activities and take action if said public activity is illegal. Your phone calls are not public. While someone may observe that you are making a phone call, no Jamaal on the street can know, simply by looking at you, who you are calling, whether you made or received the call, etc. Therefore your phone calls are not "public" data sitting in plain view, and therefore ANY collection of that information is covered (notice I did not say "should be") under the reasonable suspicion and probable cause requirements. It is abundantly clear that the vast majority of persons in the US are not terrorism suspects.
Therefore it is NOT reasonable suspicion to collect their private information, which includes so-called "metadata". The same thing applies to e-mail. I don't understand how any arm of the government thinks that the e-mails a person sends are OK to collect for any reason at all that is not covered under reasonable suspicion or probable cause. Exactly what probable cause does the NSA or any agency have for collecting and storing your e-mail (including the "metadata")? You have committed no crime. You have not been implicated in any crime. You have been implicated in no conspiracy to commit a crime. Why, then, are your personal effects being collected and stored by the government? The only argument is that you MIGHT be a terrorist. You MIGHT at some point in the future be implicated in a crime. You MAY in the future be implicated in a criminal conspiracy. And IF such "reasonable suspicion" were to arise, we ALREADY have means of access to your information. In other words, the entire idea of law enforcement, namely the investigation of crimes either already committed or in the process of being committed, is being twisted and reversed in the name of "security". Let me tell you what is "reasonable". Stop funding Al-Qaeda in Syria and elsewhere. Stop meddling in the internal affairs of other countries. Stop allowing free entry of persons who don't understand or value constitutional freedoms. I guarantee you that doing these things will do more to promote "safety and security" than the trampling of so-called "constitutionally guaranteed rights". So let's be clear: I don't care what the NSA looks at and stores in regards to people who are not "U.S. persons". Those persons have no coverage under constitutional guarantees. But U.S. persons have so-called "rights" that no government agency should be able to get around. Get the warrant. Show the reasonable suspicion and probable cause, then collect away. Short of that, delete.
Sunday, December 15, 2013

For the longest time I have been wondering why the African-American population has not grown much over the past couple of decades. As I've written the last two pieces on Camden and Detroit respectively, it hit me how devastating black crime is on the African-American population. As I've written before, the number one cause of death for African American males between the ages of, say, 12 and 40 is assault (usually a homicide). Going back to the Camden post we find:

The next year, 2012, little Camden set a record with 67 homicides

That was a record. Now for the longest time I could not understand the gravity of the murder rate in terms other than its impact on business, employment and community stress. But now, all late, it has dawned on me how bad this is. If you the reader don't understand, then let's look at it in terms of a school. The average urban school class has 30-35 students. I'm going to go with the smaller number. With 67 people killed, it would be like two classes of students simply disappearing every year. I looked at an article on a Boston high school in which the school in question had 1842 students. A murder rate of 67 per year, over ten years, would see 1/3 of the school population disappear. That doesn't include the shooters. If we add the shooters, then over ten years 1340 persons are no longer in the population due to being dead or being put in jail for the crime. That would be close to an entire school population gone every 10 years. (Of course many, if not most, murders go unsolved and so the number of shooters does not match the number shot. Also, as with other crimes, one person may have killed multiple people. Therefore the math would be off by quite a bit but is still alarming, considering that even unfound killers are a drain on the communities they live in.) Multiply that by all the mostly black communities spread out across the United States.
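For anyone who wants to check the school comparison above, here is a small sketch of the arithmetic. All figures come from the post itself, and the one-shooter-per-victim assumption is the same naive simplification already flagged in the text:

```python
# Back-of-the-envelope check of the "disappearing school" comparison.
homicides_per_year = 67    # Camden's 2012 record, quoted above
class_size = 30            # smaller end of a typical urban class
school_size = 1842         # the Boston high school cited

classes_per_year = homicides_per_year / class_size    # ~2.2 classes gone each year
dead_over_decade = homicides_per_year * 10            # 670 people over ten years
share_of_school = dead_over_decade / school_size      # ~0.36, roughly a third
gone_with_shooters = dead_over_decade * 2             # 1340 dead or jailed (naive)

print(round(classes_per_year, 1), dead_over_decade,
      round(share_of_school, 2), gone_with_shooters)
```

Run it and the numbers line up with the post: a bit over two classrooms a year, roughly a third of the school over a decade, and 1340 once the (naively counted) shooters are added.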
Every 10 years black folks in America eliminate one school's worth of people in each of their communities. How much loss is that? No one can say. Even considering that murder numbers vary from place to place and the "entire school" argument has its noted issues, it is still staggering to consider that, at the very least, a classroom's worth of people disappears each year. If you are a teacher in a college or university, when you next go to teach your class, imagine that all of the students in one of your classes will be gone next year. If you are a teacher in a high school, imagine that your classroom would be empty next year because all the children that would be there are dead. That's what this high murder rate is. Worse than anything that happened in Sandy Hook. And it happens every year.

Here's a challenge. Camden is 95% Black. Black folks say that the problems they have are caused exclusively by White people. Since there are essentially no White people in Camden, Black folks should be able to show the world what they can do in the absence of White people and their racism. What say you, NAACP? Urban League? National Action Network? What say you?

In Camden, chaos is already here. In September, its last supermarket closed, and the city has been declared a "food desert" by the USDA. The place is literally dying, its population having plummeted from above 120,000 in the Fifties to less than 80,000 today. Thirty percent of the remaining population is under 18, an astonishing number that's 10 to 15 percent higher than any other "very challenged" city, to use the police euphemism. Their home is a city with thousands of abandoned houses but no money to demolish them, leaving whole blocks full of Ninth Ward-style wreckage to gather waste and rats... Over three years, fires raged, violent crime spiked and the murder rate soared so high that on a per-capita basis, it "put us somewhere between Honduras and Somalia," says Police Chief J. Scott Thomson.
"They let us run amok," says a tat-covered ex-con and addict named Gigi. "It was like fires, and rain, and babies crying, and dogs barking. It was like Armageddon."

Again. Remember, no White folks to blame because White folks don't do Camden. What say you, NAACP? Urban League? National Action Network?

With legal business mostly gone, illegal business took hold. Those hundreds of industries have been replaced by about 175 open-air drug markets, through which some quarter of a billion dollars in dope moves every year. But the total municipal tax revenue for this city was about $24 million a year back in 2011 – an insanely low number. The police force alone in Camden costs more than $65 million a year. If you're keeping score at home, that's a little more than $450 a year in local taxes paid per person, if you only count people old enough to file tax returns. That's less than half of the $923 that the average New Jersey resident spends just in sales taxes every year.

Remember that post on Detroit I did recently? What did I say about tax revenue? What did I say about how revenue is generated? That's why the title of THAT post was "You Expected What In Detroit?".

But once Christie assumed office, he announced that "the taxpayers of New Jersey aren't going to pay any more for Camden's excesses." In a sweeping, statewide budget massacre, he cut municipal state aid by $445 million. The new line was, people who paid the taxes were cutting off the people who didn't. In other words: your crime, your problem.

It's pretty easy to blame Christie for deciding not to spend other people's money on Camden's problems. Let's even call him racist for doing so. But here's the thing: What exactly is wrong with "your crime, your problem"? No seriously, the people who left Camden left because they didn't want to deal with the rising crime rates. They didn't want to be threatened by it and they certainly didn't want to pay for it. Is that racist? Really?
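As an aside, the Rolling Stone per-capita figure is easy to sanity check from the numbers quoted above. This sketch uses the ~80,000 population and the 30 percent under-18 share as approximations; the exact result shifts with whichever population estimate you plug in:

```python
# Sanity check on the "$450 a year in local taxes per person" figure.
# Population and under-18 share are the approximations quoted above.
tax_revenue = 24_000_000    # total municipal tax revenue, 2011
population = 80_000         # "less than 80,000 today"
under_18 = 0.30             # share too young to file returns

filers = population * (1 - under_18)    # ~56,000 people old enough to file
per_filer = tax_revenue / filers        # ~$429 -- same ballpark as the $450 quoted

print(round(per_filer))
nj_sales_tax = 923
print(per_filer < nj_sales_tax / 2)     # less than half, as the article says
```

The ballpark holds either way: whatever population estimate you use, the per-filer figure comes in well under half of the average New Jersey resident's yearly sales taxes, and the $65 million police budget dwarfs the whole $24 million revenue line.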
And if such crime rates are limited to Camden and its residents, then isn't it the responsibility of the residents to fix? And what if outsiders do pay to "clean up" Camden? What if in return for cleaning it up, they, you know, gentrify it? I mean isn't that fair? If you ask me to pay to fix your problem, a problem that you created, I should be compensated for my money and effort, no?

After the 2011 layoffs, police went into almost total retreat. Drug dealers cheerfully gave interviews to local reporters while slinging in broad daylight. Some enterprising locals made up T-shirts celebrating the transfer of power from the cops back to the streets: JANUARY 18, 2011 – it's our time. A later design aped the logo of rap pioneers Run-DMC, and "Run-CMD" – "CMD" stands both for "Camden" and "Cash, Money, Drugs" – became the unofficial symbol of the unoccupied city, seen in town on everything from T-shirts to a lovingly rendered piece of wall graffiti on crime-ridden Mount Ephraim Avenue.

Drug dealers did what? So when the white folks left, the only thing that enterprising black folks could come up with was drug dealing? Hmmmmmmmm. And get on national TV to brag about it? And for all the talk about how racist police who love to kill black men (though police-involved shootings of black males account for maybe...maybe 1% of all black men shot) need to be restrained, we see in the absence of such "racist" police that crime goes through the roof and hits a plane flying overhead at cruising altitude. Clearly then, the issue of high crime has little to do with "racist" police.

At times in 2011 and 2012, the entire city was patrolled by as few as 12 officers.

That's like the number of police in an upscale neighborhood. In a rational world, 12 officers would be too many. But hey, since white folks are essentially absent from Camden, and white folks represent THE cause of everything wrong in black communities, why is there so much crime?
No matter what side of the argument you're on, the upshot of the dramatic change was that Camden would essentially no longer be policing itself, but instead be policed by a force run by its wealthier and whiter neighbors, i.e., the more affluent towns like Cherry Hill and Haddonfield that surround Camden in the county. The reconstituted force included a lot of rehires from the old city force (many of whom had to accept cuts and/or demotions in order to stay employed), but it also attracted a wave of new young hires from across the state, many of them white and from smaller, less adrenaline-filled suburban jurisdictions to the north and east.

Oh, so the solution to the problem of black crime is white supervision. Let that sink in for a minute. How do a people, who spend a whole lot of time talking about equality, need white supervision in order to behave properly? You don't understand the question? Let me change the scenario. When have you EVER heard of a poor white community with the crime rates of Camden that needed black intervention to get its crime under control? I'll wait.

See, this is why Garveyism never took hold. Garveyism required hard work. It was about building a community and nation. Not guilt tripping white people about the obvious stuff they had done and continued to do. It was about proving your equality by doing what every other people do: run their communities end to end. But that requires work. It requires dealing quite harshly with people who want to be fuck ups. That's hard. Now getting access to stuff that other people already built? That's easy. That's why the NAACP, Rainbow Coalition, Urban League and National Action Network can spend time in meetings with Macy's and Barney's rather than meeting with black millionaires on how to finance a massive change in Camden, where you can't blame white folks for shit.

Saturday, December 14, 2013

"I don't really think there should be a white Santa Claus," said one mother whose son shuns white mall Santas.
"You've got to do what your kids believe in because if you don't, it's like, you're lying to them."

Stupid. Stupid. Stupid. You know, Negroes annoy me a great deal sometimes. Really, they do. Negroes hate their own origins so much that they go and tell other people to place them in their own mythos so that they can feel better about themselves. Sad thing is that those people are too scared to tell these Negroes to fuck off. The latest nonsense is the apparent stunning fact that Santa is a white male.

Now personally I do not celebrate Christmas. There are two reasons for this: 1) I am not a Christian. Therefore I have no reason to celebrate the "birth of Christ". 2) I don't like the idea of telling children that some overweight white male somehow made an entrance to my home and dropped off gifts when the reality is mom and dad (in most cases mom) worked and saved their money (often imperiling their future finances) to plant a toy that will soon be broken or forgotten under a tree. Essentially lying to their children. Later they will punish those same children for telling lies themselves. Set the example.

But I have NEVER, EVER bought into the idea that Santa needs to be black for the supposed self-esteem of black children. Just as I objected to, and still object to, Idris Elba (and the Asian) playing in Thor. So let's get this straight for the folks upset about Santa not being black. Santa Claus is derived from a German myth. You do realize Black people do NOT originate in northern Europe, right? Santa Claus is another name for Saint Nick or Saint Nicholas, also known as Sinterklaas. You also have the figure of Kris Kringle. This long standing Germanic myth of a guy who handed out gifts to children became embedded into Christianized Europe along with the Tree of Thor and such things as Yule tidings. The English character of Father Christmas, a huge white man in red robes, was merged into this conglomerate of ideas to solidify the Santa Claus as we know him.
This character made its way to the US by obvious means. So let's put this dumb shit to rest: Santa as a figure has his origins in Europe and in European pre-Christian myths. He was appropriated into Christianized Europe, as were the Christmas tree, Easter eggs and many other things not relevant to this blog post. All the "black" Santas are essentially Santa in blackface. The worst thing about that is that millions of people will see black Santa and think that the history somehow includes them when it does not. These same people grow up to be the same people making silly complaints of racism when the truth is pointed out about him. As for the silly Negroes, how about you stop hating on your own origins and celebrate your OWN historic celebrations? If you do that you don't have to worry about making things black; they'll just be that way. But you know, it's easier to complain and edge your way into other people's culture than to do and maintain your own.

Thursday, December 12, 2013

WASHINGTON — Just a month before a peace conference that will seek an end to the grinding civil war in Syria, the Obama administration's decision to suspend the delivery of nonlethal aid to the moderate opposition demonstrated again the frustrations of trying to cultivate a viable alternative to President Bashar al-Assad.

That would be "official" nonlethal aid. And of course, no mention of "lethal aid".

The administration acted after warehouses of American-supplied equipment were seized Friday by the Islamic Front, a coalition of Islamist fighters who have broken with the moderate, American-backed opposition, but who also battle Al Qaeda.

Meaning: after people such as myself pointed out numerous times that Al-Qaeda was quite active in the civil war and stood to gain the most by the removal of Assad. Just as there was a need to justify an intervention, there is a need to justify a reversal. This event provides the necessary "we aren't aiding Al-Qaeda" cover needed by the administration.
Also, by knowing that Al-Qaeda was involved in the opposition, the US government was in violation of its own so called "anti-terrorism" laws, which state clearly that any aid to a terrorist organization is a violation of federal law. Yeah, so there's that.

Administration officials said that the suspension, confirmed on Wednesday, was temporary and that the nonlethal aid, which is supplied by the State Department, could flow again.

Translation: As soon as we figure out how to get around this Al-Qaeda problem.

American officials are still struggling to assess what the internecine battle means. "If we're able to understand that, we could revert to the provision of nonlethal assistance," a senior administration official said.

What is there to "figure out"? The more militant group is trying to take over from the less militant group. The goal: topple Assad. If the more militant group wins, Al-Qaeda gets a new state. What else needs to be explained? If US policy (which I'm not commenting directly on here) is to keep a lid on Al-Qaeda, making a proxy war with Assad was a huge, huge, huge mistake.

The official said that the United States would not rule out talks with the Islamic Front, but that it was too soon to determine whether the administration would abandon its insistence that all American and allied assistance be funneled through the Supreme Military Council.

Read that again. Now read my previous comment about the terrorism laws. So the administration is actively considering funding an Al-Qaeda organization? I wonder what all those soldiers who have had limbs blown off, and the families of the slain who were told they were fighting the enemy, think about that?
At the same time, the opposition groups that the Obama administration has designated as the legitimate representatives of the Syrian people appear to have grown weaker, in part because of their tenuous ties to many of the rebel fighters inside the country and because of the lukewarm support they have received from the West.

Obama gets to decide who is the legitimate representative of people outside the US? I had no idea. You know, that would be like, say, Putin deciding that the Tea Party was the legitimate representative of the United States. No really, THAT is what it's like.

A major aim of the meeting is to begin the process of identifying Syrians who might serve in a transitional governing body that would run the country if Mr. Assad yielded power.

Imagine if you would that during the US Revolutionary War, France decided that as a condition of their support, they would set up a meeting in, say, Madrid, where they would get to have a direct hand in who would be the "legitimate leadership" of the new republic. Yeah, it's THAT odd. How about this: to the victor go the spoils?

Khatab, the commander of a small Free Syrian Army battalion, interviewed by phone in Turkey, said that the suspension would hamper fighters like his. But he added that it would ultimately harm the Islamic Front as well, suggesting that whatever the official policies, the Islamic Front had cooperated with the Supreme Military Council and received supplies through it.

Wednesday, December 11, 2013

The same documents also appear to show the NSA as using data gathered from smartphone apps, under another section apparently named "Happyfoot." The location data stemming from a smartphone app is considerably more useful for the security agencies, due to the higher positional accuracy when compared to far broader cellular mast-based tracking, potentially allowing for agencies to track the movements of suspects without the need for any intrusive techniques.
So the official bankruptcy hammer has fallen on Detroit. No one who has open eyes is surprised by any of this. There are a lot of black folks on the left side of the spectrum who see racism at the heart of the matter. While there are certainly racial angles to the problem, it is not the full issue. As many people on either side of the argument will point out, Detroit is a mostly black city (~85%). It is also a heavily Democratic city. What I always fail to see in coverage of Detroit is that it once was a mostly White city. When Detroit was mostly white it was prosperous. No one can argue against that fact. Folks don't like hearing that, but hey, don't hate me, hate the history.

We all know that there was a riot (or uprising if you will) that started what we refer to as "white flight". It is at this juncture that we, meaning The Ghost, and regular black liberal folks, part ways in how we assign "blame" for the current mess. Generally speaking, whites essentially left Detroit and took their money with them. For many people this was a racist move. Sure. You can say that. Let's say it is. What are you really saying? Complaining about white flight is equivalent to saying that White people don't have the right to live where they choose, among whom they choose, and to take the money they earned with them. That is called freedom of association. Do you believe in freedom of association? No, seriously. Answer that question. Because either you believe in freedom of association or you do not. Because if white people aren't free to up and move wherever they please, then black people do not have the freedom to up and move and live where they please. Isn't one of the principles of Kwanzaa, which many "pro black" people support, Self-Determination? If you believe in self-determination then you must recognize that everyone else has that right as well. So again, do you believe in freedom of association?
What the white flight argument essentially is, is a claim on white people's money and property. Black people ought to have access to the money and wealth that white people have, based on the idea that white people's money was gotten in an unfair manner. White people didn't really earn it. OR, due to discrimination, white people are obligated to fund black development. Of course, being the Garveyite that I am, though I am all for reparations, I am not of the position that whites are obligated to do black development too. I believe that black people are responsible for black development. If whites must pay, blacks must work.

But let's focus on Detroit. When I was in Michigan in 1989-91 I knew that middle class black folks were trying to get the hell out of Detroit. So there goes the white flight argument. I read about the Halloween Devil's Night events in the Detroit Free Press. Never had I heard of any such thing. Folks committing arson for no other reason than they could and it was Halloween. I read about the staggering crime rates. Anyone who would say with a straight face that Detroit's problems were the sole result of white people moving out is smoking very potent crack. Anybody who was able to was leaving Detroit as far back as 1989, when I finally knew Detroit existed.

Detroit's bankruptcy was long in coming. In 1950 Detroit had a population of 1.8 million. In 2013 that number is around 700,000. Over half the population (assumed tax paying) up and gone. Where was the business development done by the black middle class? Why weren't black people who controlled the government able to get a grip on crime? Was that the fault of white people too? To restate Garvey: Black Detroit! Where are your factories? Where are your cars? Where are your men of big affairs? Where are the fathers of the babies?

Why did Detroit have this coming? Simple economics. Here's the deal: poor people do not generate [much] tax revenue.
They do not generate enough tax revenue to run a city the size of Detroit as it stood in the late 1970s. A city the size and complexity of Detroit depends on middle class (and higher) persons and businesses to generate the income necessary for stuff like bus service, street lights, street cleaning, police (and we know Detroit needs police), schools, etc. I mean really, do you THINK the fare you pay to get on a bus or subway reflects the ACTUAL COST of running that service? Really? Wake up, son, you've been dreamin'.

You think I'm making this up, right? OK, check the image from the Tax Foundation: See the light blue area? That's property taxes. Watch it fall. See the darker blue area? That's the income taxes. Watch it fall. See the even darker blue area? That's the utility taxes. Watch it fall. Watch it all fall. However, pay close attention to the area representing income tax. Note that it never kicks up again after 1973. So as the middle class, both white and black, left Detroit, its tax base shrunk...and...shrunk...and...shrunk. But those services still had to be paid for. The choices: raise taxes, borrow or gamble. Raising taxes on a population already unable to pay for services they want doesn't work, so Detroit borrowed and gambled (see that black area on the chart). And lost. Here's the obvious truth, one that hit Mandela while he was on Robben Island: you can't run out the folks who create the wealth and expect to keep the city afloat.

Now for the part that shouldn't have surprised anyone: that the bankruptcy judge allowed the city to put its pensions on the chopping block, in direct violation of the state constitution, is alarming. Well, it would be if so called "constitutional guarantees" were in fact guaranteed. As we have found out in the past couple of years, our so called "constitutional rights" have already been tossed aside by none other than the chief executive and his people.
Since we cannot expect the highest law of the land to be honored by those who swore to uphold it, how can we expect any lesser constitution, such as a state constitution, to be worth much more than the paper it is printed on? It should shock nobody that a judge saw no problem with the pensions being put on the chopping block rather than paid in full as provided for and required by the state constitution. I'll call this trickle down disenfranchisement.

It's not surprising that these things are happening. We are living in a country in which not only does the federal government not give a damn for the rights of citizens, but it is actively trying to accommodate those non-citizens who have entered the country illegally and who use fraud as a means of remaining. It is insulting, as an African-American, to see the government at the state and federal level devote money and other resources, that would be better spent on infrastructure, education of its citizens and job opportunities for its citizens, on folks who have not even a legal right to be in the country. And I'm supposed to feel guilty about holding the position that the government ought to be looking out for its citizens first and above all else.

If the government on the state and federal level actually gave a rat's ass about the rights of the citizenry, Detroit's pensions would not even be on the chopping block in violation of the state constitution. It would be on the state of Michigan to make good on its obligations to its pensioners even if they wished to change the plans for future benefits. Barclays Bank and all other financial institutions that took the risk of lending to Detroit would have been told that they line up AFTER the constitutionally protected pensioners. But no. In a country where citizens' rights and privileges are only as valid as the paper they're written on, no one should have expected any different. But back to Black Detroit.
What is glaringly obvious from the course of events in Detroit is that all the talk about "black power" means diddly squat without a sound economic plan and a social movement that addresses young black men. It serves to underscore that for all that "we're a black city" talk, Detroit essentially was dependent upon white money to stay afloat, and when that money left, we saw what happened. If black folks want to talk about how equal they are to everybody else, I suggest they take up the words of Garvey and build and maintain their own stuff. All those black ballers making millions? Step up to the plate and finance these black communities rather than complain about what Bank of America is and isn't doing. These rich black folks? Why don't YOU put your money where your mouths are and finance home mortgages, since it is the consensus that B of A is so racist and corrupt? Let's put our money where our mouths are and build up Detroit without the money of those "racist" white folks who "selfishly" took their money ball and left the playground. I'm with Garvey and Delany before him: build your own and show and prove. Name calling doesn't lower crime. It doesn't open small businesses. And it certainly does not provide employment.

Sunday, December 08, 2013

Continuing on with my discussion of the impending employment crisis that will soon hit the U.S., Salon has an informative article on some of the changes that are already happening:

I eventually realized that I could sit down and order breakfast-via-iPad from any seat in the concourse. Before starting, I was required to input my flight details (presumably so I could be warned when my flight was boarding). Then I ordered coffee and breakfast — two eggs sunny-side up, home fries, bacon and orange juice — through a clunky menu interface. A card-reader to my right enabled payment. A few minutes later, a waitress appeared with a cup of coffee. Ten minutes after that, she returned with the rest of the food.
We exchanged hardly a word. And I wondered: Why was the airport bothering with any human touch at all? Why wasn't a drone bringing me my bacon? I mean, isn't that the obvious next step?

And the next step is already with us in Japan. It is only a matter of time before what we saw in I, Robot comes to life. This is why I commented on the whole "livable wage" protest going on. Not that I don't agree on livable wages, but as those wages increase, the cost/profitability ratio tips more and more in favor of automated processes. These machines will cost less and less as market penetration happens. Soon, possibly in my lifetime, those workers will simply be unemployed. I don't think that the leadership in government really understands what is coming down the pipe. Job training will not help. Automation will be so prevalent that humans will simply be unnecessary in many of the places where we currently expect to see them. Those people, who will still presumably require money to live, will do what to earn? Will the entire concept of "earn" be done away with? At this point I only see the countries that undergo these transformations living like citizens of certain oil states. They get a stipend from the government based on national productivity. Going to have to consider that or something.

Finally, the "R" – redistribution – benefited corporations most because a succession of finance ministers lowered primary company taxes dramatically, from 48 percent in 1994 to 30 percent in 1999, and maintained the deficit below 3 percent of GDP by restricting social spending, notwithstanding the avalanche of unemployment. As a result, according to even the government's own statistics, average black African household income fell 19 percent from 1995–2000 (to $3,714 per year), while white household income rose 15 percent (to $22,600 per year).
Not just relative but absolute poverty intensified, as the portion of households earning less than $90 of real income increased from 20 percent of the population in 1995 to 28 percent in 2000. Across the racial divide, the poorest half of all South Africans earned just 9.7 percent of national income in 2000, down from 11.4 percent in 1995. The richest 20 percent earned 65 percent of all income. The income of the top 1 percent went from under 10 percent of the total in 1990 to 15 percent in 2002. (That figure peaked at 18 percent in 2007, the same level as in 1949.) The most common measure, the Gini coefficient, soared from below 0.6 in 1994 to 0.72 by 2006 (0.8 if welfare income is excluded).

Wednesday, December 04, 2013

Oh yeah. OHHHH yeah. I remember a few years back when certain mainstream news organizations were talking about how Al-Qaeda was defeated and its goals and aims were not bearing fruit. I laughed because I knew that wasn't true in the least bit. In my opinion such commentary was no better than GW Bush's "Mission Accomplished" grandstanding. I said then that the Arab Spring was actually an opening for Al-Qaeda because one of their stated goals was to topple regimes that they saw as insufficiently Islamic as well as too closely aligned (or aligned at all) with The West. So far from a rejection, many of the events were in fact in line with the desired outcomes of Al-Qaeda.

Intensifying sectarian and clan violence has presented new opportunities for jihadist groups across the Middle East and raised concerns among American intelligence and counterterrorism officials that militants aligned with Al Qaeda could establish a base in Syria capable of threatening Israel and Europe.

So NOW these folks are concerned about that? I would have thought these geniuses would have figured that out long ago. And what's this?

"We need to start talking to the Assad regime again" about counterterrorism and other issues of shared concern, said Ryan C. Crocker, a veteran diplomat who has served in Syria, Iraq and Afghanistan. "It will have to be done very, very quietly. But bad as Assad is, he is not as bad as the jihadis who would take over in his absence."

Which we here at The Ghost knew all along. You know there is a reason why Mubarak was called "Our boy in Egypt", right? Look. It was STUPID to have gotten involved with the Syrian civil war. It is equally bad for the US to be willy nilly turning on former allies. That anybody trusts the words of the US after the recent backstabbing of former allies is beyond me. One minute they're visiting the White House or a diplomat is taking a photo for a photo-op, and the next we're talking about who has to go and where we're going to bomb and what red line exists. That's really bad policy, kinda like the new dumb shit going on in the South China Sea. I mean really? The US is where? The South China Sea is where? Why does the US think it has the right to fly anywhere and declare who it will and will not notify, like the whole world is the US's property? But rest assured that if the Chinese up and decided to fly a warplane anywhere near the US west or east coast for a couple of hours without notification, there would be all kinds of complaining going on.

Whether white men can be discriminated against is one question. Whether THESE white men were discriminated against is a second question. Whether men should generally be bothered by a female referring to them by their male anatomy is yet a third question. I will let you answer these questions for yourself. But talk about a toxic workplace.

The only valid question in this set is question number two: whether those [white] men were discriminated against. The other two "questions" are not questions at all. They are a matter of fact. White men, like all other men, can be discriminated against, and when it comes to the workplace a man can and should be bothered by a female referring to him by his "special" anatomy.
There is no "make up your own mind" when it comes to this. There is no fuzzy grey area. This is what I was referring to last night. First and foremost, regardless of what position one holds on the incident, if one is going to present a video report on the subject one should at least ask one of the mentioned parties for comment. If anything the absence of commentary from the students involved only serves to underline their contention that they are being singled out for discrimination. Let's start with the allegation: A black female professor at Minneapolis Community and Technical College was formally reprimanded by school officials after three of her white male students were upset by a lesson she taught on structural racism. Shannon Gibney says that the students reacted in a hostile manner to the lesson in her Introduction to Mass Communication class, with one of them asking her, “Why do we have to talk about this in every class? Why do we have to talk about this?” Gibney says that, after this initial comment, another white male student said, “Yeah, I don’t get this either. It’s like people are trying to say that white men are always the villains, the bad guys. Why do we have to say this?” These students continued to argue and disrupt the lesson until Gibney told them that if they were troubled by her handling of the subject, they could file an official complaint with the school’s legal affairs department. I was not a communications major when I was attending university, so I do not know what subject matter is covered in an Intro to Mass Communication class, but I would seriously ask, on academic relevancy grounds, what a lesson on "structural racism" has to do with Intro to Mass Communication. Not that there are no academic grounds to cover such a subject, but is an Intro to Mass Communication class the place for such a topic? 
The other question in regard to that subject would be whether the professor has or had covered other "discriminatory structures" in Mass Media (misandry comes immediately to mind). Secondly, by the statements made by the students and the words of the professor herself, it would appear that the entire subject matter was the negative representation of white heterosexual males. One could ask "what is wrong with that?" and the answer would be that if one is talking about racism and racist attitudes and actions in "Mass Media" in a way that only makes the "bad guys" out to be white males, then one is covering up other groups' acts of racism (remember that racism is actually a set of beliefs and attitudes in regard to race, any race, that are not necessarily negative or positive). Therefore the subject would have to cover similar attitudes, if any, present in other groups. For example, the lack of black people in much "Hispanic" mass media even though there are a great many black Hispanics. Or the general absence of dark-skinned Indians in their media outlets, advertising, etc. Particularly in regard to women. Are these not "structural racisms" that do not involve white heterosexual males? Moving on. When you watch the video and see Gibney's commentary you find some very typical, shall we say, entitlement and hysteria common among female liberal faculty members: 1) She said she is scared. No seriously. What is with liberal "feminist" women and their apparent lack of emotional control? Just about every time I see a liberal woman get challenged on something she says, there are comments about not feeling safe and the so called intimidation they are feeling. It is as if everyone is obliged to walk on eggshells lest these delicate "damseling" women catch fright. Gibney should be put under a psychiatrist's care if she is scared to be in a class of students willing to ask questions and not simply take what is said as canon. 
Isn't that what these institutions of higher learning are supposed to be about anyway? 2) She's complaining about her authority being taken away (or challenged). Well yeah. Students can actually do that, provided they are respectful. I'll say this as someone who regularly challenged my teachers in college, including reporting one to a dean: if I feel that I am being picked on based on my race or gender, I will do something quite similar to these 3 fellows. It is a GREAT thing that I did not have to sit in a gender class for graduation. Based on what I have been seeing, I would have certainly gotten into it with not a few faculty members over the blatant bullshit passed off as sound academics. Here's the deal: If you don't want to be challenged in class, make sure your shit is tight. That means checking your material against that which opposes it. I know this is hard for some people to understand, but as an academic you have an obligation to look into opposing viewpoints and data AND to present that to the students so long as that data is credible. If you don't, expect to be challenged. 3) She handled the situation all wrong: In my view this is partly why she is being reprimanded. As a teacher and someone who is supposed to be facilitating the development of critical thinking skills she was very dismissive of the students' concerns (most likely because being white, male and presumably heterosexual, they have no say. No really, there are so called feminists who believe that men have absolutely no right to opine on various subjects). Instead of inviting the students to lodge a complaint she should have taken the opportunity to provide them with an assignment to air their view on the subject of "Structural Racism in Mass Media". This was a perfect learning opportunity for the entire class. 
Since the original complaint stemmed from a student presentation(s) (as claimed by Gibney) let these students present their case in an equal setting and be subject to the same challenges they wished to impose on others. I will lay down $1000 that had Gibney done this rather than take offense at "challenges to her authority" she would not have been reprimanded AND she would have gained the grudging respect of the students who disagreed with her, because at least she was open to opposing views. She also would not have opened the school to a lawsuit. 4) If Gibney is so concerned about intimidating white males why is she teaching there? If Gibney is so concerned about the performance of "students of color" and has issues being confronted by white male heterosexual students, why doesn't she teach at an HBCU? No really? I have been asked to "teach" (that is really funny) at a HBWC and I have flat out refused because I don't do challenges like that. As a Garveyite I concern myself with the education of the African as my primary mission. I'm not here to educate other folks. So I sensibly stay away from the profession. I WOULD be so inclined at an HBCU. Perhaps Gibney would be better off teaching at, say, Spellman. There she could teach without being bothered with intimidating males at all. 5) Why do black (alleged feminists) keep letting white women off the hook? Really? Why? Gibney blatantly states that the "structured racism" is the sole creation and operation of white [heterosexual] males, as if white women were not and are not supportive of and benefiting from the same system. Like they didn't raise the kids, make the false accusations, and request the handmaidens (among other things). No seriously, why do black folks (particularly of feminist stripes) continue to give white women the oppressor pass? It's like they forget those women are white as well until they do something that upsets black feminists, who then throw a fit about racism in white feminism. 
Well duh…welcome to the "structural racism" ladies. 6) Lastly there is the issue of folks thinking they can get away with commentary that is discriminatory in nature when it is directed at White [heterosexual] males. This particular poison is the result of the government's creation of "protected classes" of people. Of course it was started with the noble intention of protecting black people from discrimination, but it has since been expanded and diluted to the point that if one is not a white [heterosexual] male, then one is a protected class. This is silly and it has dangerous consequences. It may seem odd for a Garveyite to be seen as "protecting" white males. However, upon further examination you will see that it is not so strange. One thing I have noticed from the flip dismissal of commentary or concerns of white males, most recently by feminists, is that it has crept into flip dismissals of black males (see so called Black Male Privilege) by so called black feminists. Often in many so called progressive conversations, "white" is stripped from "male" and conversations devolve into the supposed evils of heterosexual males as a group. At that point, as a heterosexual male, it becomes my business and my concern. This brings us to article number two: Nancy Silberkleit is accused by her male employees of gender discrimination such as referring to them as 'penis' instead of by name Yeah you read right. But that was not the worst of what she did. 
Not satisfied with engaging in behavior that would have gotten a male worker fired and made headlines across Twitter, Nancy's legal team whips out the big gun: In papers filed in Westchester Supreme Court, Nancy Silberkleit's lawyer says a gender discrimination lawsuit filed against her earlier this year by a group of Archie Comics employees should be tossed in part because white guys aren’t members of “a protected class.” So in essence, Nancy is free to be sexist towards men, heterosexual men, white heterosexual men, because the government doesn't protect their rights. Not because they have no rights. Does this sound familiar? No? Sounds a lot like the Dred Scott decision doesn't it? For those unfamiliar with that case, the judge in that case declared that black people in general had no rights that any white male (or female for that matter) was bound to respect. How is that any different from this lawyer's contention that the white heterosexual male cannot have his day in court because basically he has no rights that the government (or anybody in a so called protected class) is bound to respect? I believe it was Dr. Martin Luther King Jr. who pointed out that when the rights of one group are trampled on, we are all damaged. See how low feminism has stooped? It begins with an idea and grows into a gross legal framework that threatens everybody. Mind you, I understand that lawyers need to do whatever they legally can to defend their client, but I would hope that the judge in this case tosses that entire argument out as it clearly fails on 14th Amendment grounds. Protected classes are protected from discrimination and not from being discriminatory. How that doesn't fall afoul of the 14th Amendment is beyond me. But I assure you that this policy, the entire concept of "protected classes", is going to get blown up due to this kind of behavior. 
And in complete disregard for what gender discrimination is: And Silberkleit's lawyer, Thomas Brown, said the employees' allegations don't even rise to the level of gender discrimination. "It's absurd," he said. Essentially they are claiming that even if the behavior in question happened, it isn't discriminatory. Imagine that! Imagine a male supervisor referring to his female subordinates as "vagina", like "go see the vagina at the front desk." Yeah, such a claim of "not discriminatory" would generate many, many laughs and memes across the internet. This takes us back to Gibney's class. She's fortunate that the students in question did not have the information I have at the tip of my tongue, because a reprimand would have been the least of her worries. See, the problem Gibney has is that she thinks she's entitled to say whatever it is she says and that somehow her degree places her beyond critique. Matter of fact a lot of black people walk around with a sense of entitlement. Gibney didn't even acknowledge the concerns of the students. They were simply labelled "angry white heterosexual males" and told to be quiet. Yeah, I'm familiar with that kind of attitude as a BLACK male so I can sympathize. Black folks think that they alone can discuss, and discuss correctly, matters of racism. That simply is not the case. While we may have the best experiences on the receiving end, we also make mistakes in discussing it (since we are fallible humans) and have our own blind spots on the matter. Furthermore, as is evidenced from this blog, not all of us subscribe to the same overall themes of racism. For example, I freely point out the White Supremacy System and Culture as described by Garvey(s) and Welsing, but many of my compatriots do not subscribe. I personally think I have a far better documented case for my position than they do, but I leave that to the readers. 
What is worse though are academics who think somehow a PhD means they cannot be wrong or questioned by anyone, much less by someone far less degreed (or with qualifications in other fields of study). What Gibney and others of her ilk will soon learn is that they no longer have a monopoly on information in regards to issues of race and gender. The world wide web has thrown open the doors to information that previously was locked away in libraries and journals. Stuff like the very, very bad violent crime rates among African-Americans is no longer hidden, as it is all over YouTube and reported on by individuals with blogs. Blatantly racist (some would say "compensatory justice") knockout games and polar bear hunting deal deadly blows to the common concept of racist violence as the sole domain of whites as perpetrators. The anti-heterosexual-male attitudes that permeate the feminist sphere are being well documented for those who look for them. And while the mass media of Gibney's class may still hold sway over the public, these things are reaching out. If black academics wish to keep their heads above water, whether it be in matters of race or gender, they had better get their acts together and get more, shall we say, mathematical with their analyses. A lot of what is out there is heavy on academic jargon (watch the video) and low on verifiable data. Current verifiable data.
I. We can be “confident” in God’s good work. A. This could apply to the “church” as established by God. 3. Jesus will receive glory from the church through all ages (Ephesians 3:21). 4. Jesus will ultimately assemble “The Assembly of the First Born” in heaven (Hebrews 12:23). 5. Jesus does not mention the church at Philippi in Revelation 2 and 3. B. This probably applies to individual church members (and all the Twice-Born). 1. Paul is “persuaded” of God’s “love” a. Romans 8:38-39 -- For I am persuaded, that neither death, nor life, nor angels, nor principalities, nor powers, nor things present, nor things to come, Nor height, nor depth, nor any other creature, shall be able to separate us from the love of God, which is in Christ Jesus our Lord. 2. Paul “knows” that God can “keep” us forever a. 2 Timothy 1:12 -- Nevertheless I am not ashamed: for I know whom I have believed, and am persuaded that he is able to keep that which I have committed unto him against that day. 3. Old Testament saints were “persuaded” and “embraced” the promises a. Hebrews 11:13 -- These all died in faith, not having received the promises, but having seen them afar off, and were persuaded of them, and embraced them, and confessed that they were strangers and pilgrims on the earth. 4. John teaches that we can “know” and be “assured” a. 1 John 3:19 -- We know that we are of the truth, and shall assure our hearts before him. a. Ephesians 4:24 -- Put on the new man, which after God is created in righteousness and true holiness. a. Called according to His purpose b. Predestined to be conformed to Christ’s image 3. Created with full glorification promised a. Romans 8:30 -- Moreover whom he did predestinate, them he also called: and whom he called, them he also justified: and whom he justified, them he also glorified. 4. Created to receive a guaranteed inheritance () a. 
1 Peter 1:3-5 -- Blessed be the God and Father of our Lord Jesus Christ, which according to his abundant mercy hath begotten us again unto a lively hope by the resurrection of Jesus Christ from the dead, to an inheritance incorruptible, and undefiled, and that fadeth not away, reserved in heaven for you, who are kept by the power of God through faith unto salvation ready to be revealed in the last time. 13. John 17:24 – Jesus’ request that we would be with him and see His glory C. Passages that appear to teach otherwise are easily dealt with. 1. Hebrews 6:4-6 describes one who has “tasted” but “withdrawn.” The classic example of this is Judas Iscariot. 2. Acts 8:13-24 relates the story of Simon who “believed” but then demonstrated his lack of salvation by wanting to buy the power of the Holy Spirit from Peter. Peter said: “Thou hast neither part nor lot in this matter: for thy heart is not right in the sight of God.” 3. Matthew 7:21-23 records one of the more sober condemnations given by the Lord Jesus. Not every one that saith unto me, Lord, Lord, shall enter into the kingdom of heaven; but he that doeth the will of my Father which is in heaven. Many will say to me in that day, Lord, Lord, have we not prophesied in thy name? and in thy name have cast out devils? and in thy name done many wonderful works? And then will I profess unto them, I never knew you: depart from me, ye that work iniquity. 4. Those who are genuinely “twice-born” both know and show their salvation. Those who are not “work iniquity.” 5. 2 Peter 1:4-9 insists that, although God has provided everything we need to grow in righteousness, if we do not grow, we can arrive at a condition where we will have “forgotten that he was purged from his old sins.” It is possible to lose the assurance of our salvation – but not the salvation itself. III. The purpose for our Salvation being based on God’s creative power is revealed in the Scripture. A. 
Creating expresses the direct will of God – no human agency possible. 1. Colossians 1:16-20 -- By him were all things created -- By him all things consist (or are saved) By him all things are reconciled 2. Hebrews 1:2 -- He made the worlds -- He is upholding all things -- He becomes heir of all things 3. Romans 11:36 -- For of him, and through him, and to him, are all things B. Creating eliminated excuses for all humanity in all circumstances. 1. Romans 1:20 -- For the invisible things of him from the creation of the world are clearly seen, being understood by the things that are made, even his eternal power and Godhead: so that they are without excuse. 2. Acts 17:29 -- Forasmuch then as we are the offspring of God, we ought not to think that the Godhead is like unto gold, or silver, or stone, graven by art and man’s device. C. Creating gave foundation to the everlasting gospel. 1. Revelation 14:6-7 -- And I saw another angel fly in the midst of heaven, having the everlasting gospel to preach unto them that dwell on the earth, and to every nation, and kindred, and tongue, and people. Saying with a loud voice, Fear God, and give glory to him; for the hour of his judgment is come: and worship him that made heaven and earth, and the sea, and the fountains of waters. 2. Colossians 1:23 -- If ye continue in the faith grounded and settled, and be not moved away from the hope of the gospel, which ye have heard, and which was preached to every creature which is under heaven; whereof I Paul am made a minister D. Creating displayed the power of the Lord Jesus Christ. 1. Colossians 1:16-18 -- For by him were all things created, that are in heaven and that are in earth, visible and invisible, whether they be thrones, or dominions, or principalities, or powers: all things were created by him, and for him. And he is before all things, and by him all things consist. 
And he is the head of the body, the church: who is the beginning, the firstborn from the dead: that in all things he might have the preeminence. 2. John 10:37-38 -- If I do not the works of my Father, believe me not. But if I do, though ye believe not me, believe the works: that ye may know, and believe, that the Father is in me, and I in him. E. Creating gave authority to the message of Jesus Christ. 1. John 1:1-4 -- In the beginning was the Word, and the Word was with God, and the Word was God. The same was in the beginning with God. All things were made by him; and without him was not anything made that was made. In him was life, and the life was the light of men. 2. John 1:11-14 -- He came unto his own, and his own received him not. But as many as received him, to them gave he power to become the sons of God, even to them that believe on his name: Which were born, not of blood, nor of the will of the flesh, nor of the will of man, but of God. And the Word was made flesh, and dwelt among us, (and we beheld his glory, the glory as of the only begotten of the Father,) full of grace and truth. F. Creating is what God does when He gives new life. 1. Ephesians 2:8-10 -- For by grace are ye saved through faith; and that not of yourselves: it is the gift of God: Not of works, lest any man should boast. For we are his workmanship, created in Christ Jesus unto good works, which God hath before ordained that we should walk in them. 2. 2 Corinthians 5:17 -- Therefore if any man be in Christ, he is a new creature: old things are passed away; behold, all things are become new. 1 Corinthians 15:55-58 -- O death, where is thy sting? O grave, where is thy victory? The sting of death is sin; and the strength of sin is the law. But thanks be to God, which giveth us the victory through our Lord Jesus Christ. Therefore, my beloved brethren, be ye stedfast, unmoveable, always abounding in the work of the Lord, forasmuch as ye know that your labour is not in vain in the Lord.
Strabismus in developmental cataract. To evaluate the presence of strabismus in patients with developmental cataract rendered pseudophakic and how this influences their visual acuity. A retrospective study was carried out on 113 patients with developmental cataract who came under the authors' observation at the outpatient department of the Pediatric Ophthalmology Unit of the University of Federico II of Naples from 1990 to 2005. All patients were followed up for a long period (mean 62 months, range 36-144 months). Age at diagnosis, sex, laterality, age at cataract extraction, morphology, and cataract density were all considered as possible factors associated with strabismus. Visual acuity and ocular motility before and after cataract extraction surgery were especially noted. Statistical evaluation was performed using t-test, Chi-square test, and Fisher exact test. Out of the 113 patients a total of 181 eyes were affected: 68 patients (60%) presented bilateral cataract, 45 patients (40%) monolateral cataract. Strabismus was present in 39 patients (34%) before cataract surgery. Age at cataract diagnosis, age at surgery, sex, and cataract morphology were not found to be statistically associated with strabismus. However, laterality was found to be statistically associated with the onset of strabismus. Cataract density was found to be statistically associated with poor vision. Patients with strabismus presented a lower visual acuity, though the difference was not statistically significant. Strabismus has a greater incidence in developmental cataract compared to the general population, and can influence visual acuity, especially in monolateral and total cataracts. Intraocular lens implants produced satisfactory visual rehabilitation.
package com.jshop.dao;

import java.util.List;

import com.jshop.entity.UserRoleM;

public interface UserRoleMDao extends BaseTDao<UserRoleM> {

	/**
	 * Delete a user's roles by user id.
	 *
	 * @param userid the id of the user whose roles are removed
	 */
	public int delUserRoleM(String userid);

	/**
	 * Get a user's roles by user id.
	 *
	 * @param userid the id of the user
	 * @return the UserRoleM records for that user
	 */
	public List<UserRoleM> findUserRoleMByuserid(String userid);
}
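The interface above only declares the contract. As a rough sketch of the behavior an implementation is presumably expected to provide, here is a minimal in-memory version; the `UserRoleM` stub, its `userid`/`roleid` fields, and the `save` helper are assumptions for illustration (the real project would back this DAO with its ORM and the actual entity class):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the project's UserRoleM entity (assumed fields).
class UserRoleM {
    private final String userid;
    private final String roleid;

    UserRoleM(String userid, String roleid) {
        this.userid = userid;
        this.roleid = roleid;
    }

    String getUserid() { return userid; }
    String getRoleid() { return roleid; }
}

// In-memory sketch of the DAO contract; a real DAO would delegate to an ORM.
class InMemoryUserRoleMDao {
    private final List<UserRoleM> rows = new ArrayList<>();

    // Assumed helper so the sketch can be populated.
    void save(UserRoleM row) { rows.add(row); }

    // Delete all role rows for a user; returns the number of rows removed
    // (one plausible reading of the interface's int return value).
    public int delUserRoleM(String userid) {
        int before = rows.size();
        rows.removeIf(r -> r.getUserid().equals(userid));
        return before - rows.size();
    }

    // Fetch all role rows for a user.
    public List<UserRoleM> findUserRoleMByuserid(String userid) {
        List<UserRoleM> result = new ArrayList<>();
        for (UserRoleM r : rows) {
            if (r.getUserid().equals(userid)) {
                result.add(r);
            }
        }
        return result;
    }
}

public class UserRoleMDaoDemo {
    public static void main(String[] args) {
        InMemoryUserRoleMDao dao = new InMemoryUserRoleMDao();
        dao.save(new UserRoleM("u1", "admin"));
        dao.save(new UserRoleM("u1", "editor"));
        dao.save(new UserRoleM("u2", "viewer"));

        System.out.println(dao.findUserRoleMByuserid("u1").size()); // 2
        System.out.println(dao.delUserRoleM("u1"));                 // 2
        System.out.println(dao.findUserRoleMByuserid("u1").size()); // 0
    }
}
```

Keeping the deletion keyed on `userid` (rather than a role id) matches the corrected `@param` in the interface's javadoc; a lookup followed by a bulk `removeIf` is the simplest way to honor that contract in memory.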
The Teen Titans uncover the mystery and motivation behind Bombshell's origin, and no one is more surprised at the answers than Bombshell herself! Meanwhile, Wonder Girl is confronted with the figure behind her recent misfortunes and discovers why family reunions can be absolute hell.
Unraveling the Complex Web of Associations Between Easy Access to Firearms and Premature Mortalities. We investigated whether high school students reporting easy access to guns were more likely to die prematurely from suicide, homicide, or an accidental death. Based upon the National Longitudinal Study of Adolescent to Adult Health, we contrasted those reporting easy access to guns (n = 5,185; 25%) with the remaining 75% (n = 15,589) on various sociodemographic characteristics, behaviors, and premature mortalities. We found higher rates of suicides, homicides, and accidental deaths among those reporting easy access to guns at Wave 1 or Wave 2. This was only true for males. Those with easy access to guns were more likely to share common sociodemographic characteristics and to come from two-parent homes where children had strong and close relationships with parents, but where children were also more likely to get into fights, commit delinquent misdeeds, and engage in other risk-taking behaviors such as increased drinking, drug use, and riding motorcycles. Logistic regression analysis showed easy access to guns remained a significant predictor of premature mortalities when sex, family income differences, risk-taking, and delinquency were used as covariates. This study supports previous research and carves out new ground showing easy access to guns acts synergistically with other lifestyle differences to diminish youth life chances.