text
stringlengths
8
5.77M
Q: Find X position of where user is dragging their finger in ScrollView I have a scrollview and I'm watching user input. I would like to know where their finger currently is on the X plane. Is this possible through the ScrollView and/or its delegate or do I have to override touchesBegan, etc? A: I'm assuming you set up your scroll view delegate. If you did, then you only need to implement the scrollViewDidScroll: method from the UIScrollViewDelegate protocol... - (void)scrollViewDidScroll:(UIScrollView *)scrollView { CGPoint touchPoint = [scrollView.panGestureRecognizer locationInView:scrollView]; NSLog(@"Touch point: (%f, %f)", touchPoint.x, touchPoint.y); } This will update while the user is scrolling. Additional info: Note that if you have something like a UIPageControl, that your scroll view is navigating between, you may have to calculate the x position based on the number of pages (ie if each page is 100 pixels wide, page 0 will start at point 0, page one at point 100, etc..).
Diplomats from European countries on Tuesday blasted a recent Iranian missile test as “inconsistent” with a key U.N. Security Council resolution, as they struggle to keep the Iran deal intact amid U.S. pressure to get tough on the Islamic regime. Iran test-fired a medium-range ballistic missile on Saturday, which the U.S. said had the capability to strike parts of Europe and the Middle East. Secretary of State Mike Pompeo said the missile was capable of carrying multiple warheads and was in violation of Security Council Resolution 2231 -- which calls on Iran to refrain from “any activity related to ballistic missiles designed to be capable of delivering nuclear weapons, including launches using such ballistic missile technology.” POMPEO SAYS IRAN TESTED BALLISTIC MISSILE IN VIOLATION OF UN RESOLUTION Resolution 2231 was the Security Council’s enshrinement of the 2015 Iran nuclear deal, known as the Joint Comprehensive Plan of Action (JCPoA) -- which the Trump administration withdrew the U.S. from in May. The other signatories were Germany, U.K., France, China and Russia. But the missile test has posed challenges to those countries trying to uphold their end of the deal despite the U.S. withdrawal -- drawing condemnation from European countries that otherwise have been supportive of the Iran pact. Consequently, the U.K. and France called a closed-door meeting of the Security Council on Tuesday to discuss the issue, though diplomats declared the test “inconsistent” with rather than “in violation” of 2231. U.K. Ambassador Karen Pierce called the actions "part and parcel of Iran's destabilizing activities in the region." Her comments echo U.K. Foreign Secretary Jeremy Hunt, who said Saturday that he was “deeply concerned by Iran’s actions," even as he reiterated support for the nuclear deal. “Provocative, threatening and inconsistent with UNSCR 2231. 
Our support for JCPoA in no way lessens our concern at Iran’s destabilising missile programme and determination that it should cease,” he tweeted. The claim that the move was “inconsistent” with 2231 was echoed by other diplomats at Turtle Bay. “This kind of ballistic missile activity is inconsistent with the JCPoA , especially Annex B which calls on Iran not to engage in these kinds of activities,” Dutch Ambassador Karel van Oosterom said. French Ambassador François Delattre also said Iran's actions were "inconsistent" with the resolution and called on Iran to "immediately cease any activity related to ballistic missiles designed to be able to carry nuclear weapons, including launches using ballistic missile technology." The resolution's text only “calls upon” Iran to refrain from ballistic activity, rather than demanding it. It was that weaker language that kept diplomats from outright declaring Iran in violation of the resolution. Israeli Ambassador Danny Danon told reporters, separately, that Israel, which does not sit on the Council, believes the test to be a violation of the resolution and called on the Security Council to condemn Iran for its actions. The test marks the latest blow in Europe’s efforts to keep the 2015 accord alive, particularly after the U.S. withdrawal from the pact in May. The U.S. has since re-imposed multiple rounds of sanctions on the regime, including on crude oil exports last month, and has urged European allies to join them. U.S. Ambassador Nikki Haley said in a statement Tuesday that the Iranian test was "dangerous and concerning, but not surprising" and called on the Council to act. “The United States has repeatedly warned the world about Iran’s deliberate efforts to destabilize the Middle East and defy international norms. The international community cannot keep turning a blind eye every time Iran blatantly ignores Security Council resolutions," she said. 
"If the Security Council is serious about holding Iran accountable and enforcing our resolutions, then at a minimum we should be able to deliver a unanimous condemnation of this provocative missile test." However, diplomats emerging from the closed-door meeting said while there were expressions of concern about Iran's activity, there were no immediate plans for any action against Iran in response. The Iranians, meanwhile, argued they were in line with 2231: "Portraying Iran’s ballistic missile program as inconsistent with resolution 2231 or as a regional threat is a deceptive and hostile policy of the U.S." Even as it re-imposed sanctions, the U.S. has warned that it will continue to act unilaterally if necessary. President Trump, at a U.N. Security Council meeting in September, warned that the U.S. "will pursue additional sanctions, tougher than ever before, to counter the entire range of Iran's malign conduct." On Tuesday, Sen. Ted Cruz, R-Texas, called for more U.S. action to combat Iranian aggression. “The United States has only begun to reverse the damage done by Obama's Iran nuclear deal, which gave the Ayatollahs the resources and diplomatic breathing room to build more and better ballistic missiles,” Cruz said in a statement to Fox News. “The last round of sanctions, while important, clearly failed to deter Iran from advancing their missile program. It's time to totally cut off Iran from the global financial system and deny them the resources they're using to threaten us and our allies.” Some European leaders have recently opened the door to sanctions on Iran after the emergence of terror plots on European soil, which leaders say originate from Tehran. The Wall Street Journal reported last month that a call for sanctions by Danish diplomats won broad support at a meeting of E.U. ambassadors, after Denmark’s intelligence agency foiled an Iranian plot to kill an opposition activist and arrested a Norwegian of Iranian descent. 
EUROPE OPENS DOOR TO SANCTIONS ON IRAN AFTER TERROR PLOTS IN DENMARK, PARIS That alleged plot came after an Iranian diplomat based in Vienna was arrested in July for a plot to bomb an annual gathering of Iranian dissident groups in Paris, which Trump lawyer Rudy Giuliani attended. Fox News' Ben Evansky contributed to this report.
Wanda - FYI, Joe Hirl, the trader in charge of setting up the Tokyo trading office, will be in Houston next week, and it would probably be a good idea if he could interview Mark Frank, or whatever candidate we recommend for the Controller position. Our Controller for the Sydney office will be up the following week and he should meet with her as well, as that will enable him to get first hand knowledge of the remote trading office establishment issues we've addressed in Australia. Sally - did you decide on the one candidate you thought of when we met last? We might want to arrange a meeting with him and Joe also. Thanks, Cassandra.
syntax = "proto3"; package Billboard; import "google/protobuf/Empty.proto"; service Board { rpc ShowMessage (google.protobuf.Empty) returns (stream MessageReply) {} } message ImageMetaData { string mime_type = 1; string file_name = 2; } message ImageChunk { bytes data = 1; int32 length = 2; } message MessageReply { oneof image { ImageMetaData meta_data = 1; ImageChunk chunk = 2; } }
GCU #16: The Birds & The Deadly Bees The Birds and The Deadly Bees! Yes, it’s time to have THE talk. We discuss the Alfred Hitchcock classic where birds of a feather KILL together as well as a less than buzzworthy bee themed murder mystery that wishes it was The Birds. Plus, how might these films be combined into one shared universe? The Birds vs The Deadly Bees: Winged Justice! That’s how.
{ "images" : [ { "idiom" : "universal", "scale" : "1x" }, { "idiom" : "universal", "filename" : "snippets_add_snippet@2x.png", "scale" : "2x" }, { "idiom" : "universal", "scale" : "3x" } ], "info" : { "version" : 1, "author" : "xcode" } }
Caterpillar, based in Peoria, Ill., disclosed on Jan. 18 that it had uncovered “deliberate accounting misconduct” at Zhengzhou Siwei Mechanical & Electrical Manufacturing Co., a maker of roof-support equipment for underground coal mines that it had acquired last June. Siwei is a subsidiary of ERA Mining Machinery, a Hong Kong-listed firm controlled by a shell company whose principals are two American entrepreneurs in China. Caterpillar paid about $700 million for ERA but said earlier this month it was writing down the value of that company by $580 million. “It’s disappointing,” Oberhelman said. “But how we respond defines us.” Of course this stuff doesn’t only happen in China. Fraud occurs everywhere. But you would think that a company like Caterpillar would be a little more careful here.
Page:Barlaam and Josaphat. English lives of Buddha.djvu/25 But while Catholic Christendom had no doubt as to the reality of these Saints, Catholic scholarship was by no means positive as to the authorship of the Legend of the Saints. The Greek MSS. attributed it to "John, Monk of the Convent of St. Saba," or St. Sinai. It is only in the latest MSS. that this Monk John is directly identified with John of Damascus, a somewhat distinguished theologian of the eighth century. He was the only ecclesiastical writer of the name of John to whom the book could be attributed, and scholarship, like Nature herself, abhors a vacuum. And so the book of Barlaam and Joasaph has been included among the works of John of Damascus ever since his editors have collected them together. Yet they have not been without their doubts, and they always felt themselves obliged to defend the inclusion of the book. One of his editors indeed, Lequien, went so far as to exclude it altogether from the authentic works. The whole question has been carefully threshed out by M. Zotenberg in his Notice sur le Livre de Barlaam
Zane's parents, Kye Gbangbola and Nicole Lawler, were also taken ill at the family home just before 3.30am yesterday. They remain in a serious but not life-threatening condition at St Peter's Hospital. Two police officers and 13 nearby residents were released after being treated in hospital as a precaution. Neighbour Anoop Hothi, 31, taught Zane martial arts at the Sport Martial Arts Academy in Egham, which Zane joined when he was five. After just a year, the youngster became a member of the leadership team and last month he was promoted to green stripe belt. Mr Hothi said: "Little Zane was an absolute joy to teach, and it's children like him who make teaching so much more rewarding. "I'm sure his school teachers would say the same thing. He was a lovely boy, and he came from good, caring parents. He criticised various agencies for their response to the floods in the area as he praised Zane's parents as the nicest neighbours he had ever had. "It's not the parents who are to blame for this. They were loving and caring people - it's the overall system that's to blame for his death. "Zane had his whole life ahead of him. It's heart-breaking for his parents. I found out yesterday morning and I didn't want to believe it." "It was only a few months ago that my little brother was playing with Zane, and now he is in tears." PA Kye Gbangbola pictured clearing flood waters from outside his home near Chertsey, Surrey Mr Hothi said he saw some pipes coming out from the front of Zane's parents' property throwing out water, but he did not know whether a generator was being used. Last night it was revealed Zane's mother is a member of the All Party Parliamentary Climate Change Group and had recently warned government cuts had “affected planning, maintenance and the capacity to respond to incidents” when it came to flooding. 
His father is the founder of a sustainability consultancy and the acting chairman of the Sickle Cell Society, set up in 1979 to highlight the plight of those with the genetic blood disorder. A message on the Facebook page of Sport Martial Arts Academy announced Zane's death to members. It said: "Many of you will know him and his parents and many of the children would have trained with him over the last two years that he has been with us. "Starting off in Little Samurais aged five, he was always enthusiastic and energetic about his training, showing the same passion for martial arts that his father has. "After a year he was a member of our leadership team helping and teaching others new to the club. Just last month he was promoted to green stripe belt." Police and fire officers wearing white face masks entered the property today as it remained surrounded by floodwater from the River Thames. Meanwhile, further tributes continued to pour in for Zane online. One post on Facebook said: "Zane always had a smile on his face, he was so talented. RIP Zane, our thoughts are with his family." Another said: "Our words are not adequate to express the sorrow we feel for Zane's family and the loss of such a wonderful boy. "Zane was an inspiration to everyone around him. I imagine there is no pain more far-reaching and deeper than losing a child. "Our hearts and prayers go out to his family at this most difficult time." Officers have refused to be drawn on whether carbon monoxide poisoning from a generator pumping out flood water from his home may have been to blame. Nicole Lawler mother of Zane Gbangbola Chief Superintendent Dave Miller, of Surrey Police , said that the cause of Zane's death was still unknown, adding that it could be days before the exact cause was established and it would be "inappropriate" to speculate on what it might be. He said: "The investigation into yesterday's tragic death of the seven-year-old boy is ongoing and the cause of death is still unknown. 
"We are continuing to work with partner agencies and officers are following various lines of enquiry." Mr Miller said that there did not appear to be a wider risk to the public. "There have been no further casualties reported. This, coupled with expert advice, leads us to believe at this stage that this is an isolated incident localised to one family," he said. "Surrey Police will release an update when a cause of death has been determined, however this may take several days. "Our thoughts continue to be very much with the boy's parents, who still remain in hospital, and we are continuing to support them during what is understandably a very difficult time." Public Health England also said it did not believe there was a wider health risk to the public after nearby residents were evacuated and advised to go to hospital as a precaution. A spokeswoman for the Department of Health agency, which is tasked with improving the nation's health, said it was helping police with their investigation. "It is too early to speculate on the cause of death and Public Health England is working with the other agencies to assist this investigation and ensure appropriate action is taken to protect public health. "Currently, there are no indications that there is a wider public health risk." A South East Coast Ambulance spokeswoman said Zane was found to be in a "very serious condition" when crews arrived. She said she could not be drawn on the suggestions that carbon monoxide poisoning was to blame.
Sex offenders released in Fond du Lac 11:08 AM, Apr. 10, 2014 Allen Sabel Written by Action Reporter Media Staff Fond du Lac police announced that Ge Vue, 31, and Allen L. Sabel, 26, are relocating to the city. Vue has moved to 39 Sixth St. Although Vue is listed as a lifetime sex offender, he is not on active supervision with the Wisconsin Department of Corrections. The 31-year-old was convicted in 1999 of two counts of second degree sexual assault of a child. According to court records, he forced 12-year-old and 14-year-old female acquaintances to submit to sexual intercourse. ...
In situ measurements of shear stresses of a flushing wave in a circular sewer using ultrasound. Deposits build up in sewer networks during both spells of dry weather and in connection with storm water events. In order to reduce the negative effects of deposit on the environment, different cleaning technologies and strategies are applied to remove the deposits. Jet cleaning represents the most widely used method to clean sewers. Another alternative cleaning procedure is flushing. On account of new developments in measurement and control panels, the flushing method is becoming more important. Therefore, in the last few years a number of new flushing devices have been constructed for application in basins, main sewers and initial reaches. Today, automatic flushing gates are able to accomplish cleaning procedures under economical and ecological conditions. The properties of flushing waves for cleaning sewers have been determined by several mathematical-numerical studies. These various investigations use altering numerical schemes, are based on different sets of physical equations and take one- or more dimensional aspects into account. Considering that bottom shear stress is the key value to evaluate the beginning of motion of any deposit, one may use this value that has to be determined by measurements. This paper deals with shear stresses caused by flushing waves which have been measured by an ultrasonic device that can determine the velocity in different depths of flow. Thus, it is possible, within certain limits, to calculate bottom shear stresses based on the log-wall law. Further discussion will deal with the requirements of measurements, its uncertainty and aspects in respect to the application of simulation of flushing waves.
Group O RBCs: where is universal donor blood being used. There have been recurrent shortages of group O blood due to insufficient inventory and use of group O blood in ABO non-identical recipients. We performed a 12-year retrospective study to determine utilization of group O Rh-positive and Rh-negative red blood cells (RBCs) by recipient ABO group. Reasons for transfusing group O blood to ABO non-identical recipients were also assessed. Utilization data from all group O Rh-positive and Rh-negative RBCs transfused at three academic hospitals between April 2002 and March 2014 were included. Data were extracted from Transfusion Registry for Utilization Surveillance and Tracking, a comprehensive database with inventory information on all blood products received at the hospitals. Extracted data included product type, ABO and Rh, final disposition (transfused, wasted, outdated), and demographic and clinical data on all patients admitted to hospital. Descriptive statistics were performed using sas 9.3. There were 314 968 RBC transfusions: 151 645 (48·1%) were group O, of which 138 136 (91·1%) RBC units were transfused to group O individuals. ABO non-identical recipients received 13 509 group O RBCs (8·9%). The percentage of group O RBCs transfused to ABO non-identical recipients by fiscal year varied from 7·8% to 11·1% with a steady increase from 2011 to 2013. Reasons for this included: trauma, outdating, outpatient usage and shortages. The practice of transfusing O RBCs to non-O individuals has been increasing. Specific hospital and blood supplier policies could be targeted to change practice, leading to a more sustainable group O red blood cell supply.
Pickett Team Muscles Way To Victory In Mosport The Muscle Milk HPD car led Sunday's ALMS race at the start and the finish at Mosport. (Photo courtesy of the American Le Mans Series) RacinToday.com With Klaus Graf and Lucas Luhr driving, Muscle Milk Pickett Racing won for the third straight year at Canadian Tire Motorsport Park with a victory Sunday in the fifth round of the American Le Mans Series presented by Tequila Patrón. The team won the Mobil 1 presents the Grand Prix of Mosport in the No. 6 Honda Performance Development ARX-03a by more than 10 seconds over Dyson Racing’s Chris Dyson and Guy Smith in their No. 16 Lola-Mazda coupe. Racing in the P2 and GT classes was closer as Conquest Endurance’s Martin Plowman and David Heinemeier-Hansson won their first race in the No. 37 Morgan-Nissan with Plowman’s late-race drive to hold off Level 5 Motorsports’ Christophe Bouchut. In GT, Extreme Speed Motorsports took its first ALMS victory when the No. 45 Flying Lizard Motorsports Porsche 911 GT3 RSR was excluded in post-race technical inspections. It moved Scott Sharp, Johannes van Overbeek and the No. 01 Ferrari F458 Italia into first in class. “Although I feel for Pat and Jörg, I couldn’t be happier for Scott, (team owner) Ed (Brown), ESM and Patrón,” said van Overbeek, hero of the race. “After the podium the race just didn’t feel over; and as it turned out it wasn’t. Winning at Mosport caps a great weekend and it was a fine way to celebrate my 100th start in this dynamic and competitive series. I hope it’s a sign of things to come.” It was a twist to what was already a nail-biting finish. A late caution with 20 minutes remaining bunched up the top three GT cars… which didn’t really need much bunching up anyway. It put Bergmeister, Corvette Racing’s Jan Magnussen and van Overbeek nose-to-tail with 11 minutes remaining. Magnussen, who teamed with Antonio Garcia in the No. 3 Corvette C6 ZR1, hounded Bergmeister for 10 of those minutes. 
While he attempted to take the lead on the final lap, Magnussen opened the door for van Overbeek to slide the Ferrari inside the Corvette at the exit of Turn 5 and take second place. The move, of course, ended up meaning much more following post-race inspections. “Jan is a tough customer and I figured that was the only shot I had at him,” van Overbeek said. “I didn’t want to telegraph it earlier, so I saved it for last. There were three of us running in a train. Getting to them and getting around them was a whole different story. I had a bad aero push and I knew that was the only shot – in 5a and 5b – to get around the Corvette. I made a Hail Mary on the last lap and it worked out.” The P2 battle was shaping up to be just as dramatic. Bouchut in the No. 055 Honda Performance Development ARX-03b chased down Plowman at the final restart, and each of the cars used traffic to their advantage. Bouchut was too aggressive with five minutes left; he drifted into the pit exit lane and drew a stop-and-go penalty. Plowman eventually won by 19.713 seconds in a race that saw the Conquest car lead overall at one point – just as it did at Lime Rock Park two weeks ago. “Even though he was catching me, I was driving within myself,” Plowman said. “I knew they were trying to push, and if the gap got below 10 seconds I would just push even more. It felt like I just kept catching GTC cars in the slower corners. There were some really scary moments when I noticed (Bouchut) was getting closer and closer and I had to push. I would do all that work saving my tires and then use them all up to get gap back. After it was apparent he crossed the pit blend line to gain an advantage and he had to serve the penalty, I could breathe.” Heinemeier-Hansson and Level 5’s Scott Tucker swapped the lead in the opening portion of the race. The Morgan-Nissan and HPD prototypes had more than one side-by-side run around the track. 
The two drivers swapped the lead three times before handing off to their teammates, one of those changes coming as the result of a pitlane speeding penalty for Heinemeier-Hansson. “We battled well with Scott Tucker. It was nice to see just how different the cars were,” Heinemeier-Hansson said. “We had a car better on the back straight, but (Tucker) could zoom through Turn 5. Tucker drove really, really well. Multiple times I was passing cars around the outside of Turn 2 and I thought there was no way he could pass me there, but he would be right behind me after I checked my mirrors. It was great to see a clean race. It was great to win, we’ve had the car and the pace to win before. Finally everything clicked.” Back at the front of the grid, Luhr and Graf were firmly in control for most of the day en route to their fourth consecutive ALMS victory. Luhr celebrated his 33rd birthday by leading the majority of his opening stint a day after Graf won the pole position on his 43rd birthday. Luhr drove the opening 75 minutes despite nursing an injured ankle. “I felt (the pain) a bit, but it didn’t affect my race much,” Luhr said. “Other than that I think we were fully in control of the race. We made a good strategy call in the beginning to save some fuel. That allowed us to do the race on just two stops. The rest had to stop three times, so it was a big advantage for us.” That’s not to say there was no potential for drama. The same transmission issue that setback the Muscle Milk car four laps in the last race at Lime Rock Park two weeks ago appeared to creep up again late Sunday. In addition, Graf was called in for a late-race pitlane penalty that cost him 20 seconds. “We don’t know exactly what happened (with the late electronics issue),” said Graf, who now three straight CTMP victories in three different cars. “It was similar to Lime Rock, and we just reset some switches. We need to look into it. It didn’t stop us from losing the lead, and we kept it. 
Certainly it created a bit of excitement for people watching, but it didn’t affect anything.” Eric Lux and Tony Burgess finished third in P1 and fifth overall in a second Dyson Racing Lola-Mazda. RSR Racing’s Bruno Junqueira and Tomy Drissi won in Prototype Challenge for the first time as a pairing. Junqueira in the No. 9 ORECA FLM09 finished a lap ahead of CORE autosport’s Colin Braun and Jon Bennett in the No. 05 entry. The victory was the first in ALMS for Junqueira since his jump from open-wheel racing. It also was the first for RSR, owned by Paul Gentilozzi, which fielded a Jaguar GT effort from 2009-2011. “I am very happy,” Junqueira said. “The only time we showed some speed was here last year. We were really hankering for our first win. Finally we got it. After a year-and-a-half for the team, it is even more special. We have been in contention in every race for a win. Today we finally did it.” The victory looked up in the air during a couple of points. Drissi ran second early but was called in to serve a penalty for speeding on pitlane. Junqueira took the lead with 30 minutes left when class pole-winner Kyle Marcelli pitted late. The class pole-sitter and Canadian driver was on his way back through the field but crashed with 26 minutes remaining. “We got some breaks, we trimmed up our strategy and it started to fall in to place,” said Drissi, who won in class for the third time. “Eventually we got that win. I think the best way to get out of a hole is to stop digging, so when we got here we decided to just let go of the past. So now we’re on top, and this is a good sign of things to come. I think now, we’ll have the speed, both of us, together. I think the rest of the year, we have to be smart and now we have the confidence we can do it.” TRG won the GT Challenge class for the second year in a row at CTMP. Spencer Pumpelly took advantage of late engine troubles for JDX Racing and drove the No. 
66 Porsche 911 GT3 Cup to a 1.440-second victory over Alex Job Racing’s Leh Keen. JDX’s Michael Valiante took the class lead just past the two-hour mark until he brought the No. 11 Porsche into pitlane. Pumpelly had been closing the gap slightly before he inherited the lead with eight minutes left. “I think the JDX didn’t take enough fuel, and we heard they were talking about pitting,” Pumpelly said. “We knew it would be anyone’s race. It was nice to finally capitalize. We took advantage of it and drove a really nice race.” Pumpelly drove with Emilio Di Guida, who took his first ALMS victory in May at Mazda Raceway Laguna Seca. He drove the opening stint and focused on keeping the car on the track before handing off to Pumpelly. “The strategy we had was really good,” he said. “I tried to fight the other cars for position. At the start, I thought I felt something in the car. The strategy was to take it easy when we start, and then to gain more and more speed.” The next round of the American Le Mans Series presented by Tequila Patrón is the Mid-Ohio Sports Car Challenge. The sixth round of the 2012 championship is set for 1 p.m. ET on Saturday, Aug. 4 from Mid-Ohio Sports Car Course. ABC’s broadcast featuring live coverage begins at 2 p.m. ET.
Constrictive pericarditis in B cell chronic lymphatic leukaemia. We report a case of B-cell chronic lymphatic leukaemia (B-CLL) complicated by constrictive pericarditis. The pericardial involvement was confirmed histologically to be leukaemic in nature. We draw attention to this complication which is amenable to surgical correction. To our knowledge this has been described only once before as an autopsy finding and has not been encountered ante-mortem.
933 So.2d 814 (2006) Robert DUBOSE v. The PLANT DEPOT. No. 2005-CA-1149. Court of Appeal of Louisiana, Fourth Circuit. May 17, 2006. Triscelyn Landor-McDonald, Trinity Law Center, L.L.C., New Orleans, LA, for Plaintiff/Appellee. Val P. Exnicios, Liska, Exnicios & Nungesser, New Orleans, LA, for Defendant/Appellant. (Court composed of Judge PATRICIA RIVET MURRAY, Judge TERRI F. LOVE, Judge MAX N. TOBIAS, JR.). PATRICIA RIVET MURRAY, Judge. This is a workers' compensation case. On February 1, 2001, the Office of Workers' Compensation (OWC) rendered a default judgment awarding workers' compensation benefits to Robert Dubose. However, the default judgment was not rendered against his employer, the Plant and Palm Depot, Inc. Rather, the judgment was rendered against the Plant Depot, the entity Mr. Dubose incorrectly *815 named as his employer in his 1008 claim. To correct the error, Mr. Dubose filed a motion to amend the default judgment. The Plant and Palm Depot not only opposed the motion, but also filed an exception of prescription. The OWC overruled the exception of prescription and granted the motion to amend. From that judgment, the Plant and Palm Depot appeals. For the reasons that follow, we affirm. FACTUAL AND PROCEDURAL BACKGROUND On August 4, 2000, Mr. Dubose allegedly was injured in the course and scope of his employment with the Plant and Palm Depot. On August 15, 2000, Mr. Dubose timely filed a 1008 claim with the OWC seeking workers' compensation benefits. As noted, Mr. Dubose erroneously listed his employer as the Plant Depot.[1] A mandatory mediation conference was scheduled for November 3, 2000, and Mr. Dubose's employer was represented at the conference by Jimmy Costello.[2] At the mediation conference, Jimmy Costello signed as agent for the Plant Depot and waived service and citation. On December 28, 2000, the matter proceeded to trial, but no one appeared on behalf of the employer. 
As noted, on February 1, 2001, the OWC rendered a default judgment against the Plant Depot awarding workers' compensation benefits to Mr. Dubose.[3] On August 17, 2004, Mr. Dubose filed a Motion to Amend 1008 and/or Motion to Enforce Judgment with the OWC. In response, the Plant and Palm Depot filed an exception of prescription and an opposition to Mr. Dubose's motion to amend. On December 1, 2004, a hearing was held on this matter. On December 14, 2004, the OWC rendered judgment denying the exception of prescription and granting the motion to amend the default judgment to correctly name the employer as the Plant and Palm Depot, Inc., d/b/a Plant Depot.[4] In its reasons for judgment, the OWC recited the following findings of fact: Claimant filed a 1008 against "Plant Depot" ATTN: Jimmy Costello." At the mediation conference, Jimmy Costello appeared and signed as agent for "Plant Depot" and waived Service and Citation. The "Plant Depot" had no workers' compensation insurance in effect at the time of the accident. Jimmy Costello allowed claimant to confirm a default judgment against the "Plant Depot." After Notice of Judgment, defendant did not file for a New Trial nor an Appeal. Only when claimant sought to enforce the judgment did an attorney for the "Plant & Palm Depot, Inc." appear challenging the judgment and any attempts to correct the defendant's true name to *816 "Plant & Palm Depot, Inc. d/b/a the Plant Depot." The "identity" of the defendant was certain. The identity was "Plant Depot" which was a d/b/a of "Plant & Palm Depot, Inc." All actions by and pertaining to the "Plant Depot" was by Jimmy Costello, who was one of the directors of the "Plant & Palm Depot, Inc." Jimmy Costello was the same one who did not purchase workers' compensation insurance and who failed to keep the "Plant & Palm Depot, Inc." in good standing with the Louisiana Secretary of State. Hence, correcting the defendant's name from the "Plant Depot" to the "Plant & Palm Depot, Inc. 
d/b/a the Plant Depot," is of little moment." Wagenvoord [Broadcasting Co. Inc. v. Rurton Blanchard, 261 So.2d 257 (La.App. 4 Cir.1972)].

This timely appeal followed. On appeal, the Plant and Palm Depot argues the OWC erred in (i) allowing Mr. Dubose to alter the default judgment, (ii) failing to impose the burden of proof on Mr. Dubose to show that he worked for the Plant and Palm Depot, and (iii) denying its exception of prescription.

DISCUSSION

In workers' compensation cases, the appropriate standard of review to be applied by the appellate court to the OWC's findings of fact is the "manifest error-clearly wrong" standard. Dean v. Southmark Const., XXXX-XXXX, p. 7 (La.7/6/04), 879 So.2d 112, 117. In applying the manifest error or clearly wrong standard, the appellate court must determine not whether the trier of fact was right or wrong, but whether the fact finder's conclusion was a reasonable one. Hudson v. Housing Authority of New Orleans, XXXX-XXXX, p. 7 (La.App. 4 Cir. 10/27/04), 909 So.2d 607, 611, citing Seal v. Gaylord Container Corp., 97-0688, p. 4 (La.12/2/97), 704 So.2d 1161, 1164. Where two permissible views of the evidence exist, a fact finder's choice between them can never be manifestly erroneous or clearly wrong. Id. If the fact finder's findings are reasonable in light of the record reviewed in its entirety, the court of appeal may not reverse, even if convinced that had it been sitting as the trier of fact, it would have weighed the evidence differently. Banks v. Industrial Roofing & Sheet Metal Works, Inc., 96-2840, p. 7 (La.7/1/97), 696 So.2d 551, 556.

Assignment of Error No. 1: Error in Amending Judgment

The Plant and Palm Depot submits that the amendment of the default judgment by the OWC to correct the name of the employer is a substantive change prohibited by La. C.C.P. art. 1951.[5] Based on our review of the record, we find the OWC correctly applied the procedural law regarding the amendment of judgments.
In this regard, the OWC stated in its reasons for judgment:

[T]he analysis (and which is consistent with all case law on this point of changing the defendant's name) revolves around a fact determination. Specifically, was the party who was served, who was aware of the lawsuit, who chose to defend or ignore the demands and/or lawsuit, who was involved in the actions that were the basis of the cause of action of the lawsuit — was that party the one and the same in reality as the defendant whose name was misspelled or incorrect on the judgment?

*817 In cases where the answers to these questions was "no," then the change of the name of the defendant was a "substantive change" and therefore could not be changed in a final judgment. In cases where the answer to those questions were "yes," then the change of the name was not substantive. See Wagenvoord, supra; and Sherman[Shearman] v. Simpson, (3rd C.A.La.1972) 264 So.2d 713; and Thompson v. Matthews, (4th C.A.La. 1979)[,] 372[374] So.2d 192. Succinctly stated, "Was the identity of the defendant fixed with certainty" so that the error in spelling his name (or incorrect name) is of little moment?

The jurisprudence the OWC cited accurately reflects that when the identity of the defendant is fixed with certainty, the amendment of the judgment to correctly reflect the name of the defendant is not a substantive change. Thompson, 374 So.2d at 193. In the present case, the record supports the OWC's factual finding that the identity of the employer was certain. Therefore, we find no error in the OWC's decision to allow the default judgment to be amended.

Assignment of Error No. 2: Failure to Impose Burden of Proof on Mr. Dubose

The Plant and Palm Depot argues that at no time during the trial or pretrial proceedings did Mr. Dubose allege or establish that he worked for it. This appeal, however, relates solely to the issue of Mr. Dubose's error in naming his employer in the petition.
The merits of the underlying workers' compensation claim are not at issue. Regardless, the Plant and Palm Depot has never disputed that Mr. Dubose worked for the business entity operated by Mr. Costello, which was the Plant and Palm Depot.

Assignment of Error No. 3: Failure to Grant the Exception of Prescription

The Plant and Palm Depot argues that Mr. Dubose's claim has prescribed pursuant to La. R.S. 23:1209, which requires an injured employee to file a formal complaint with the OWC within one year from the date of the accident. Mr. Dubose filed his initial workers' compensation complaint within days of the accident. We do not find, nor does the Plant and Palm Depot cite any authority to support the argument, that Mr. Dubose's subsequent request to amend the judgment falls within the parameters of the prescriptive period found in La. R.S. 23:1209.

DECREE

For the foregoing reasons, the judgment of the OWC denying the Plant and Palm Depot's exception of prescription and granting Mr. Dubose's motion to amend is affirmed.

AFFIRMED.

NOTES

[1] At the hearing in this matter, Mr. Dubose introduced a business card for the Plant Depot, which listed Jimmy Costello as the owner of the business.

[2] Jimmy Costello is listed as one of two directors for the Plant and Palm Depot, Inc. in the corporation's "Domestic Business Corporation Initial Report." Mr. Costello was also Mr. Dubose's immediate supervisor on the job.

[3] Thereafter, Mr. Dubose filed a Petition to Recognize Judgment in the 24th Judicial District Court for the Parish of Jefferson. The petition was dismissed, however, because the name of the defendant was incorrectly listed in the default judgment.

[4] The judgment further ordered the claimant's attorney to prepare the Amended Judgment. That judgment, dated February 2, 2005, reiterates the specifics of the Default Judgment regarding the award of compensation benefits.

[5] La. C.C.P. art.
1951 provides: "A final judgment may be amended by the trial court at anytime, with or without notice, on its own motion or on motion of any party: (1) To alter the phraseology of the judgment, but not the substance; or (2) To correct errors of calculation."
Publicist Max Clifford says he has been contacted by stars from the 1960s and 70s who say they are frightened of being implicated in the Jimmy Savile scandal.

Dozens of celebrities from the 1960s and 70s are "frightened to death" they will be implicated in the Jimmy Savile child abuse scandal, according to public relations guru Max Clifford.

He said the stars, some of whom were still big names, had approached him to handle any fallout from inquiries. He said they were worried because at their peak they had lived a hedonistic lifestyle where young girls threw themselves at them but they "never asked for anybody's birth certificate".

Clifford's comments came as it emerged that the Catholic Archbishop of Westminster has written to the Pope to ask him to consider removing Savile's papal knighthood in recognition of the distress caused to his victims.

Scotland Yard is leading the current investigation into accusations of abuse by former BBC DJ and presenter Savile, which now involves around 300 possible victims. Officers have searched a cottage belonging to Savile at Allt na Reigh in Glencoe, Scotland, to look for "any evidence of any others being involved in any offending with him".

In Leeds, members of Savile's family issued a statement expressing their bewilderment at his crimes and their sympathy for his victims. In the statement, the family said their "thoughts" and "prayers" were with those who had suffered abuse.

On Friday, Clifford said young pop stars at the time had gone from working in a factory one week to performing in front of thousands of people "and girls are screaming and throwing themselves at them then".

"All kinds of things went on and I do mean young girls throwing themselves at them in their dressing rooms at concert halls, at gigs, whatever," he said. "They never asked for anybody's birth certificate and they were young lads … suddenly everyone's dream was a reality.
"We are talking about a lot of people that were huge names in the 60s and 70s and a lot of them barely remember what they did last week, genuinely. For them to try and recount what happened in a dressing room in 1965 or 1968 or 1972, genuinely they are frightened to death."

He said the investigation needed to focus on the "facilitators" who lurked on the periphery and had had years to cover their backs.

"I am hoping that the real predators are the ones we are going to find out about: the Glitters of this world, the Saviles of this world, not people that were randy young pop stars in the 1960s, 70s and 80s even, that had women throwing themselves at them everywhere they went, because that is a whole different area and a whole different situation. No one had heard the word paedophile in those days, the 60s and 70s," he said.

Seven alleged victims of Savile made complaints to four separate police forces, Surrey, London, Sussex and Jersey, while the television presenter was alive but it was decided that no further action should be taken.

Scotland Yard said on Friday that a retired officer had told them he had investigated Savile in the 1980s while based in west London but did not have the evidence to proceed. Metropolitan police commander Peter Spindler said he believed the allegation was of an indecent assault, possibly in a caravan on BBC premises in west London, but officers have still not found the original file.

Another allegation, of inappropriate touching dating back to the 1970s, was made by a woman in 2003, but this was treated as "intelligence" by police because the victim did not want to take action.

Surrey police submitted a file to the Crown Prosecution Service containing references to four potential offences, including an allegation of indecent assault on a young girl at a children's home.
The allegations related to three potential victims in Surrey and another in Sussex, and Savile was interviewed under caution in 2009, but prosecutors decided there was insufficient evidence to bring charges. The seventh allegation emerged in 2008 when Jersey police received a claim that an indecent assault occurred at the children's home Haut de la Garenne in the 1970s. Again it was decided that there was insufficient evidence to proceed.

Spindler said Savile was "undoubtedly" one of the most prolific sex offenders he had encountered and that Operation Yewtree, looking into Savile's crimes, would be a "watershed moment" for child abuse investigations.

The Catholic Church in England and Wales says it has contacted the Holy See to ask if the papal knighthood awarded to late television star Jimmy Savile could be removed following sexual abuse allegations. Police say 300 potential victims have come forward with abuse allegations against Savile, a well-known BBC children's television host who died last year. Most of them say they were abused by Savile, but some say they were abused by other people, police said Friday.

The church said on Saturday that Archbishop of Westminster Vincent Nichols wrote to Vatican officials last week, asking the Holy See to investigate the possibility of posthumously removing Savile's honour in recognition of the "deep distress" of the alleged victims.
Effects of electrokinetic treatment of contaminated sludge on migration and transformation of Cd, Ni and Zn in various bonding states.

This study assesses the effect of electrokinetic processes on the migration and bonding states of various heavy metals in municipal sludge. The transformation and migration are discussed through the examination of sludge characteristics and the distribution of Cd, Zn and Ni after electrokinetic treatments. The migration and distribution of the contaminants after the electrokinetic treatments were determined for each sludge sample by sequential extraction. Noticeable changes in the average speciation fractions of Cd, Zn and Ni were observed: the oxidizable fraction of the heavy metals increased and the reducible fraction decreased due to the application of voltage. Bivariate correlation analysis indicated that the amounts of the different bonding states of Zn and Ni were significantly correlated (P<0.05) with treatment duration and resistance. The oxidizable Zn was negatively correlated with exchangeable and reducible Zn. Moreover, reducible Zn had a close negative relationship with residual Zn. The bonding state of Ni was significantly related to the duration of the electrokinetic processes, indicating the existence of mutual transformation between the different speciation fractions over time. The analysis also indicated that the exchangeable Cd showed a close negative relationship with reducible Cd (P<0.01), whereas the reducible Cd was negatively related to residual Cd (P<0.05).
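The bivariate correlations reported above are Pearson product-moment correlations between pairs of speciation fractions. A minimal sketch of the computation, using hypothetical Zn fractions rather than the study's data, is:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical Zn speciation fractions (% of total Zn) across five treatment
# durations -- illustrative values only, not the study's measurements:
oxidizable_zn = [12.0, 15.5, 19.0, 24.0, 28.5]
reducible_zn = [40.0, 37.0, 33.5, 29.0, 25.0]
r = pearson_r(oxidizable_zn, reducible_zn)  # strongly negative, as reported for Zn
```

With SciPy available, `scipy.stats.pearsonr` additionally returns the significance level used for the P<0.05 and P<0.01 thresholds above.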
Formula 1, which was taken over by US media giant Liberty Media in a £6.4bn deal earlier this year, is regulated and governed by the FIA.

The Serious Fraud Office is "reviewing material" relating to a payment made by Formula 1's commercial rights holder to the sport's governing body, the FIA. MP Damian Collins has asked the body to investigate whether the payment breached bribery laws.

The £3.9m payment was made to the FIA for entering into an agreement with the teams and the sport's commercial arm. The FIA says the payment was remuneration "for its regulatory role" and denies wrongdoing.

Collins, chairman of the Culture, Media and Sport Parliamentary select committee, said he was "very concerned" about why the sport would need to make a payment to its governing body and regulator as part of the so-called Concorde Agreement, which was signed in 2013.

"That's why I've written to the Serious Fraud Office (SFO) asking them 'do they feel there was a breach of the Bribery Act and does it warrant investigation'?" he told ITV.

An SFO spokesperson told BBC Sport: "The Serious Fraud Office is reviewing material in its possession in relation to these allegations. All matters referred to the SFO are assessed against criteria to establish whether they may fall within its remit to investigate."

The FIA confirmed in a statement that it had received the payment and explained: "The Concorde Implementation Agreement entered into by the commercial rights holder of Formula 1 and the FIA in 2013 introduced a new governance structure for Formula 1 and redefined certain conditions applicable to their relationship, in particular to ensure that the FIA be properly remunerated for its regulatory role.

"Within this agreement, a lump sum payment of $5m (£3.9m) was made to the FIA as part of the global consideration received in connection with the renegotiation of the terms of the agreements between the commercial rights holder and the FIA, and of the Concorde Agreement, at that time.
"Following its approval, the Concorde Implementation Agreement came into force and this sum was paid to the FIA and properly accounted for. No individual received any payment out of this sum. Any allegation to the contrary would be defamatory."
The Supreme Court of Ohio today imposed an indefinite license suspension against [a] Cuyahoga Falls attorney...for engaging in illegal voyeuristic conduct that resulted in his conviction on multiple criminal counts, including felony charges of intercepting electronic or oral communications and pandering sexually oriented matter involving a minor. [His] law license was suspended on an interim basis in February 2008, after the Court received notice of his felony convictions.

The Court adopted findings by the Board of Commissioners on Grievances and Discipline that, although [his] criminal acts were not committed in the performance of his duties as a lawyer, they violated the state attorney discipline rules that prohibit an attorney from engaging in criminal conduct involving moral turpitude and from engaging in conduct that reflects adversely on the attorney's fitness to practice law.

In imposing an indefinite license suspension, with credit for the months [he] has been under interim suspension, the Court noted that this sanction requires a disciplined attorney seeking reinstatement to go through an extensive application process in which he must demonstrate that he has recovered the capacity to engage in the competent and ethical practice of law. The Court also imposed special conditions for reinstatement based on the nature of [his] offenses, including no additional misconduct and proof of continuing successful psychiatric treatment and compliance with a recovery contract with the Ohio Lawyers Assistance Program.

The misconduct had its genesis in the attorney's discovery that he could sometimes hear people in his apartment complex having sexual relations. He "started placing a recording device inconspicuously outside apartment windows so he could record residents' sexual activity and later listen to the recording for sexual gratification." A resident saw him and reported him to the police.
A search of his apartment revealed a substantial amount of child pornography and a "peep hole" that allowed him to view the female resident of the apartment next door. The attorney presented the testimony of his psychiatrist (an expert in clinical sexuality) that he is being treated for paraphilia, "a condition generated by 'the clash between individual sexual interest and the social rules governing sexual behavior.' " The court expresses concern about whether the attorney can afford recovery treatment, but leaves the issue to a reinstatement hearing. One justice would permanently disbar.
/* Copyright (C) 1995-1998 Eric Young (eay@cryptsoft.com)
 * All rights reserved.
 *
 * This package is an SSL implementation written
 * by Eric Young (eay@cryptsoft.com).
 * The implementation was written so as to conform with Netscapes SSL.
 *
 * This library is free for commercial and non-commercial use as long as
 * the following conditions are aheared to.  The following conditions
 * apply to all code found in this distribution, be it the RC4, RSA,
 * lhash, DES, etc., code; not just the SSL code.  The SSL documentation
 * included with this distribution is covered by the same copyright terms
 * except that the holder is Tim Hudson (tjh@cryptsoft.com).
 *
 * Copyright remains Eric Young's, and as such any Copyright notices in
 * the code are not to be removed.
 * If this package is used in a product, Eric Young should be given attribution
 * as the author of the parts of the library used.
 * This can be in the form of a textual message at program startup or
 * in documentation (online or textual) provided with the package.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *    "This product includes cryptographic software written by
 *     Eric Young (eay@cryptsoft.com)"
 *    The word 'cryptographic' can be left out if the rouines from the library
 *    being used are not cryptographic related :-).
 * 4. If you include any Windows specific code (or a derivative thereof) from
 *    the apps directory (application code) you must include an acknowledgement:
 *    "This product includes software written by Tim Hudson (tjh@cryptsoft.com)"
 *
 * THIS SOFTWARE IS PROVIDED BY ERIC YOUNG ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * The licence and distribution terms for any publically available version or
 * derivative of this code cannot be changed.  i.e. this code cannot simply be
 * copied and put under another distribution licence
 * [including the GNU Public Licence.] */

#include <CCryptoBoringSSL_asn1.h>
#include <CCryptoBoringSSL_bio.h>

// Writes the hex encoding of |a| to |bp|, prefixed with '-' for negative
// values, wrapping the output every 35 bytes. Returns the number of bytes
// written, or -1 on error.
int i2a_ASN1_INTEGER(BIO *bp, const ASN1_INTEGER *a) {
  int i, n = 0;
  static const char *h = "0123456789ABCDEF";
  char buf[2];

  if (a == NULL) {
    return (0);
  }

  if (a->type & V_ASN1_NEG) {
    if (BIO_write(bp, "-", 1) != 1) {
      goto err;
    }
    n = 1;
  }

  if (a->length == 0) {
    if (BIO_write(bp, "00", 2) != 2) {
      goto err;
    }
    n += 2;
  } else {
    for (i = 0; i < a->length; i++) {
      if ((i != 0) && (i % 35 == 0)) {
        if (BIO_write(bp, "\\\n", 2) != 2) {
          goto err;
        }
        n += 2;
      }
      buf[0] = h[((unsigned char)a->data[i] >> 4) & 0x0f];
      buf[1] = h[((unsigned char)a->data[i]) & 0x0f];
      if (BIO_write(bp, buf, 2) != 2) {
        goto err;
      }
      n += 2;
    }
  }
  return (n);

err:
  return (-1);
}
var peliasQuery = require('pelias-query');
var _ = require('lodash');

module.exports = _.merge({}, peliasQuery.defaults, {

  'size': 1,
  'track_scores': true,

  'layers': ['venue', 'address', 'street'],

  'centroid:field': 'center_point',

  'sort:distance:order': 'asc',
  'sort:distance:distance_type': 'plane',

  'boundary:circle:radius': '1km',
  'boundary:circle:distance_type': 'plane',

  'boundary:rect:type': 'indexed',

  'ngram:analyzer': 'peliasQuery',
  'ngram:field': 'name.default',
  'ngram:boost': 1,

  'phrase:analyzer': 'peliasPhrase',
  'phrase:field': 'phrase.default',
  'phrase:boost': 1,
  'phrase:slop': 2,

  'focus:function': 'linear',
  'focus:offset': '0km',
  'focus:scale': '50km',
  'focus:decay': 0.5,
  'focus:weight': 2,

  'function_score:score_mode': 'avg',
  'function_score:boost_mode': 'replace',

  'address:housenumber:analyzer': 'peliasHousenumber',
  'address:housenumber:field': 'address_parts.number',
  'address:housenumber:boost': 2,

  'address:street:analyzer': 'peliasStreet',
  'address:street:field': 'address_parts.street',
  'address:street:boost': 5,

  'address:postcode:analyzer': 'peliasZip',
  'address:postcode:field': 'address_parts.zip',
  'address:postcode:boost': 3,

  'admin:country_a:analyzer': 'standard',
  'admin:country_a:field': 'parent.country_a',
  'admin:country_a:boost': 5,

  'admin:country:analyzer': 'peliasAdmin',
  'admin:country:field': 'parent.country',
  'admin:country:boost': 4,

  'admin:region:analyzer': 'peliasAdmin',
  'admin:region:field': 'parent.region',
  'admin:region:boost': 3,

  'admin:region_a:analyzer': 'peliasAdmin',
  'admin:region_a:field': 'parent.region_a',
  'admin:region_a:boost': 3,

  'admin:county:analyzer': 'peliasAdmin',
  'admin:county:field': 'parent.county',
  'admin:county:boost': 2,

  'admin:localadmin:analyzer': 'peliasAdmin',
  'admin:localadmin:field': 'parent.localadmin',
  'admin:localadmin:boost': 1,

  'admin:locality:analyzer': 'peliasAdmin',
  'admin:locality:field': 'parent.locality',
  'admin:locality:boost': 1,

  'admin:neighbourhood:analyzer': 'peliasAdmin',
  'admin:neighbourhood:field': 'parent.neighbourhood',
  'admin:neighbourhood:boost': 1,

  'popularity:field': 'popularity',
  'popularity:modifier': 'log1p',
  'popularity:max_boost': 20,
  'popularity:weight': 1,

  'population:field': 'population',
  'population:modifier': 'log1p',
  'population:max_boost': 20,
  'population:weight': 2

});
San Fernando railway station

San Fernando railway station can be one of two railway stations previously served by Philippine National Railways:

San Fernando railway station (La Union), serving San Fernando City in La Union
San Fernando railway station (Pampanga), serving the City of San Fernando in Pampanga
This application outlines a career development plan for the applicant, who is a practicing neonatologist with an interest in developmental immunology and has the goal of becoming an independent investigator. Under the mentorship of established basic science researchers and a multidisciplinary Advisory Committee, the Principal Investigator will pursue a program of education (coursework, conferences, seminars) and a research project addressing the causation of bronchopulmonary dysplasia (BPD), a critical issue in neonatal medicine. BPD is a sequence of chronic lung injuries in ventilated premature infants that can be quite severe, resulting in significant morbidity and mortality. Current treatments, including bronchodilator and diuretic therapies, are only palliative. Prevention of BPD using corticosteroids has some benefit, but this has recently been associated with significant and long-term adverse effects. Therefore, it is important to establish methods to identify infants who are at particular risk of developing BPD and to improve preventive therapies. Inflammation and the accumulation of activated neutrophils in the lung play a large part in the pathogenesis of BPD. We hypothesize that reduced clearance of neutrophils from the neonatal lung by apoptosis accounts, in large part, for the severity of inflammatory injuries in BPD, and that this is due to specific alterations in signaling pathways mediating neutrophil activation and apoptosis. To test this hypothesis, blood and lung neutrophil apoptosis in normal and premature infants will be compared with that in adults. Mechanisms underlying reduced apoptosis in neonatal neutrophils will also be analyzed. Quantifying markers for reduced lung neutrophil apoptosis that correlate with the development of BPD may allow us to identify patients at high risk who are good candidates for preventive treatment.
The identification of specific pathways mediating apoptosis that are altered in neonatal neutrophils may also suggest preventive therapeutic strategies.
Background
==========

It is estimated that 16 million American adults have coronary heart disease (CHD). CHD remains the leading cause of death in the United States, with 652,091 registered deaths in 2005 \[[@B1]\]. To date, multiple longitudinal and cross-sectional studies have examined the association of CHD with psychological functioning, particularly depression \[[@B2],[@B3]\]. Over 100 studies have investigated this relationship, providing evidence that depression is prevalent (18% to 60%) in patients with CHD. This comorbidity has significant adverse effects on the course and outcome of CHD \[[@B4]-[@B7]\]. Depressed patients are twice as likely as nondepressed patients to have a major cardiac event within 12 months of the diagnosis of coronary artery disease \[[@B8]\]. In addition, the risk of mortality is greater in depressed patients compared to nondepressed patients after the following events: CHD \[[@B4]\], acute myocardial infarction \[[@B9]\], an episode of unstable angina \[[@B10]\], or CABG \[[@B4],[@B5]\].

Although the relationship between depression and cardiac events is well established, the mechanism underlying this relationship remains unclear \[[@B11]\]. However, three lines of evidence suggest that altered autonomic nervous system (ANS) activity in depressed patients might be responsible for the increased risk of mortality and medical morbidities in patients with CHD.

The first line of evidence originates from early reports of ANS dysregulation in depression, found in studies of medically ill patients with major depressive disorder (MDD). These studies found elevated levels of plasma and urinary catecholamines, primarily norepinephrine (NE), in depressed patients compared with controls \[[@B12]-[@B14]\]. These findings are significant because the concentrations of plasma NE generally parallel the level of activity of the sympathetic nervous system (SNS) and are highly correlated with sympathetic neural activity \[[@B14]\].
A second line of evidence is based on the consistent findings that resting heart rate (HR) is higher in depressed than nondepressed patients \[[@B14]-[@B16]\]. Depression is also associated with an exaggerated HR response to physical and psychological stressors in medically well individuals \[[@B17]\] as well as in patients with CHD \[[@B18]\]. As regulation of HR occurs primarily through a reciprocal interaction of the sympathetic and parasympathetic nervous systems, elevated HR suggests dysregulation of cardiac ANS function.

A third line of evidence is based on studies reporting decreased heart rate variability (HRV) among depressed patients compared to nondepressed controls \[[@B8],[@B19],[@B20]\]. Over the last two decades, HRV has emerged as an important marker for examining the continuous interplay between the parasympathetic and sympathetic influences on HR that yields information about autonomic flexibility \[[@B21]\]. Increased HRV has been used as a marker of increased vagal activity and has been consistently associated with greater capacities to regulate stress, emotional arousal, and attention \[[@B22]\], while low HRV has been associated with excessive cardiac sympathetic modulation, inadequate parasympathetic modulation, or both \[[@B23]\]. A number of studies have found HRV to be lower in depressed psychiatric patients compared to controls \[[@B20],[@B21]\]. There is even greater evidence that HRV is lower in depressed than nondepressed patients with CHD \[[@B24],[@B25]\].

In summary, there is considerable evidence of autonomic cardiovascular dysregulation in depressed patients as well as in patients with CHD. However, it is unknown whether patients with CHD and depression have greater ANS dysregulation relative to patients with either depression or CHD alone (i.e., comorbidity versus single morbidity).
It is also unknown whether ANS dysregulation explains the increased morbidity and mortality in patients with both disorders. Thus, the purpose of this study was twofold. First, we compared three markers of ANS function in four groups of patients: 1) Patients with coronary heart disease and depression (CHD/Dep), 2) Patients without CHD but with depression (NonCHD/Dep), 3) Patients with CHD but without depression (CHD/NonDep), and 4) Patients without CHD and depression (NonCHD/NonDep). Second, we investigated the association between markers of ANS activity (HR, HRV, and plasma NE levels) and group classification in cardiac patients (i.e., CHD/Dep vs. CHD/NonDep) and CABG outcomes (i.e., in-hospital length of stay and patient\'s type of discharge (i.e., routine or nonroutine)), while holding constant potential differences in medical (e.g., diabetes, history of myocardial infarction, etc.) and sociodemographic (e.g., age, gender, etc.) variables.

We hypothesized that patients in the CHD/Dep group will have the greatest dysregulation in autonomic function while patients in the NonCHD/NonDep group will have the least amount of autonomic dysregulation compared to the other two groups. We also hypothesized that ANS markers and group classification in cardiac patients will significantly predict in-hospital length of stay and patient\'s type of discharge. Specifically, there will be a significant positive association between HR and plasma NE levels and in-hospital length of stay, and a significant negative association between HRV and in-hospital length of stay. In addition, patients in the CHD/Dep group will be more likely to be discharged non-routinely following a CABG operation than those with CHD only. Both of these hypotheses reflect a possible additive effect of depression and heart disease on ANS dysregulation.
Methods
=======

Participants
------------

A sample of patients was recruited from private sector hospitals in the Northeast to form four groups of patients: 1) Patients with CHD and depression (CHD/Dep), 2) Patients without CHD but with depression (NonCHD/Dep), 3) Patients with CHD but without depression (CHD/NonDep), and 4) Patients without CHD and depression (NonCHD/NonDep). It should be noted that patients without depression had no current major depressive episode; patients with a history of depression or minor forms of depression may be included in the nondepressed group.

Procedure
---------

Patients in the CHD/Dep and CHD/NonDep groups were recruited from patients who had a CHD diagnosis and were scheduled to undergo a first-time CABG with or without a concomitant valve procedure. Patients in the NonCHD/NonDep group were recruited from a primary care clinic within the hospital, while patients in the NonCHD/Dep group were recruited from the hospital\'s outpatient mental health clinics. Those who consented to participate in the study were assessed to determine if they met the study eligibility criteria.

The inclusion criteria for the CHD/Dep group consisted of being enrolled to undergo a CABG operation and having a diagnosis of MDD. The exclusion criteria for the CHD/Dep group were significant cognitive deficits or other psychiatric diagnoses. The inclusion criterion for the CHD/NonDep group consisted of being enrolled to undergo a CABG operation. The exclusion criteria for the CHD/NonDep group consisted of significant cognitive deficits, a diagnosis of MDD, or any other psychiatric diagnosis. The inclusion criterion for the NonCHD/Dep group consisted of a diagnosis of MDD. The exclusion criteria for this group consisted of a diagnosis of CHD, significant cognitive deficits, or any other psychiatric diagnosis. Patients in the NonCHD/NonDep group were excluded if they had a diagnosis of MDD, CHD, significant cognitive deficits, or any other psychiatric diagnosis.
Screening
---------

Patients were initially screened to determine whether they met the inclusion or exclusion criteria. A psychiatric interview and a psychophysiological assessment were conducted with all subjects who consented to participate in the study.

Psychiatric Interview
---------------------

The MINI International Neuropsychiatric Interview (MINI) \[[@B26]\] is a standardized diagnostic instrument for the diagnosis of psychiatric disorders using the Diagnostic and Statistical Manual, 4^th^ edition (DSM-IV-TR) \[[@B27]\] and the International Classification of Diseases (ICD)-10 \[[@B28]\]. It consists of standardized, structured, closed-ended questions throughout its diagnostic procedure. The MINI has demonstrated adequate reliability and validity: inter-rater and test-retest reliabilities were high for the majority of disorders, and validity against lengthier structured diagnostic interviews such as the Structured Clinical Interview for DSM-III-R (SCID) has been reported \[[@B26]\]. Research has shown that the MINI can be used successfully as a gold standard of psychiatric diagnosis in multi-center clinical trials and epidemiological studies \[[@B29]\]. The MINI was used to make the diagnosis of MDD.

Heart Rate and Heart Rate Variability Measurement
-------------------------------------------------

After a 12-hour fast, which included abstinence from smoking, and a seated rest of 30 minutes, HR, HRV, and plasma NE levels were measured for each subject. HR and HRV were assessed via recordings of EKG and respiration using the Nexus 10 BioTrace equipment and associated software, version 1.16. The Nexus 10 is a 10-channel physiological monitoring and feedback platform that offers data acquisition at up to 2048 samples per second. It is a certified class 2-1 (EU) medical device.
Following previous conventions \[[@B30]\], patients were excluded from further analysis if they were not in predominantly regular sinus rhythm or if they had sustained atrial arrhythmias such as atrial fibrillation or greater than 10% ectopic complexes. During EKG measurement, participants were instructed to keep their eyes open and avoid moving their wrists while the experimenter read excerpts from a collection of pleasant travel stories. This is a common HRV experimental paradigm designed to mimic normal waking-state levels of arousal \[[@B31]\]. HRV was recorded for 15 minutes for each participant. At the end of the session the recordings were coded and saved for subsequent analysis. Movement artifacts above a certain threshold were automatically removed from the session overview, which provides a display of the total session of respiration and heart rate data. Following previous convention \[[@B32]\], heart rate data were averaged across 60-second intervals at a sampling rate of 512 Hz and edited by averaging premature ectopic beats that exceeded a 25% difference between two consecutive data points. HRV was calculated as the standard deviation of all normal-to-normal RR intervals (SDNN; intervals between adjacent QRS complexes).

Plasma Norepinephrine Assessment
--------------------------------

Blood samples (1.2 mL) were drawn from the antecubital vein by acute venipuncture and were collected in chilled, heparinized tubes containing ethylene glycol tetraacetic acid and 200 mmol/L reduced glutathione. The plasma was then stored in polystyrene tubes at −70°C until assayed. The assay and laboratory procedures for measuring NE have been described in detail elsewhere \[[@B33]\] and have been used by other investigators in similar studies \[[@B34]\].

### Medical Covariates

A number of plausible variables have been identified that could influence ANS regulation, particularly HR, HRV, and plasma NE levels.
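As an aside, the beat-editing and SDNN computation described in the measurement section above can be sketched in a few lines. This is a minimal illustration, not the BioTrace vendor pipeline; in particular, replacing an ectopic interval by the mean of its neighbors is an assumed implementation of the 25% averaging rule.

```python
import statistics

def edit_rr(rr_ms):
    """Edit premature ectopic beats: any RR interval differing from its
    predecessor by more than 25% is replaced by the mean of its neighbors
    (an assumed reading of the 25% rule described in the text)."""
    edited = list(rr_ms)
    for i in range(1, len(edited) - 1):
        if abs(edited[i] - edited[i - 1]) / edited[i - 1] > 0.25:
            edited[i] = (edited[i - 1] + edited[i + 1]) / 2
    return edited

def sdnn(rr_ms):
    """SDNN: standard deviation of normal-to-normal RR intervals (ms)."""
    return statistics.stdev(edit_rr(rr_ms))
```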
To help partition out the effects of these variables, we included the following covariates: age, education, race, diabetes mellitus, hypertension, history of asthma, history of myocardial infarction, cigarette smoking, alcohol consumption, level of physical activity, body mass index (BMI), and the Deyo score \[[@B35]\]. The Deyo score is a comorbidity index adapted from the Charlson Comorbidity Index \[[@B36]\]. It is designed to capture comorbid conditions recorded in the inpatient setting using ICD-9-CM diagnosis and procedure codes, and it has been widely used in outcomes studies with administrative datasets as the principal data source \[[@B37]\]. The Deyo score assesses comorbid medical conditions such as myocardial infarction, congestive heart failure, peripheral vascular disease, cerebrovascular disease, dementia, chronic obstructive pulmonary disease, rheumatologic disease, mild liver disease, diabetes mellitus, diabetic complications, hemiplegia or paraplegia, renal disease, malignancy, moderate to severe liver disease, metastatic solid tumors, and acquired immune deficiency syndrome/human immunodeficiency virus infection. The Deyo score was determined by weighted scoring of comorbidities; the weights for individual comorbidities were then summed to form the total score \[[@B35]\]. Patients who smoked one pack or more of cigarettes per week for at least 5 years were considered smokers. BMI was calculated from each patient\'s weight and height. Physical activity was assessed using the International Physical Activity Questionnaire (IPAQ) \[[@B38]\]. The IPAQ consists of eight items that estimate the time spent performing physical activities (low to high). A number of studies have been conducted on the IPAQ, showing that it produces reliable data as well as acceptable concurrent, criterion, and construct validity \[[@B38],[@B39]\].
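To make the weighted-scoring step concrete, here is a sketch of how a Deyo-style score is assembled. The weights shown follow the original Charlson scheme from which the index was adapted; they are illustrative, and only a subset of the 17 conditions is listed.

```python
# Illustrative Charlson-style weights for a subset of Deyo comorbidities.
# These are not necessarily the exact weights used in the study.
COMORBIDITY_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "peripheral_vascular_disease": 1,
    "cerebrovascular_disease": 1,
    "dementia": 1,
    "copd": 1,
    "diabetes_mellitus": 1,
    "hemiplegia_or_paraplegia": 2,
    "renal_disease": 2,
    "malignancy": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6,
    "aids": 6,
}

def deyo_score(conditions):
    """Sum the weights of a patient's recorded comorbid conditions;
    unrecognized conditions contribute 0."""
    return sum(COMORBIDITY_WEIGHTS.get(c, 0) for c in conditions)
```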
Alcohol consumption was measured using the Alcohol Use Disorders Identification Test (AUDIT) \[[@B40]\]. The AUDIT is a 10-item self-report questionnaire. Each question has a set of responses to choose from, and each response is scored from 0 to 4; the responses are summed to form a total score. A recent systematic review of the literature concluded that the AUDIT is the best screening instrument for the range of alcohol problems in primary care \[[@B41]\].

Outcome Variables
-----------------

Outcome variables in this study were length of inpatient hospital stay and patient disposition. Length of inpatient hospital stay (measured in days) is defined as the difference between the hospital admission date and the patient\'s discharge date. Patient disposition was coded as routine or non-routine. Patients were coded as having a non-routine disposition if they were discharged to a short-term hospital, skilled nursing facility, intermediate care facility, or another type of facility, discharged to home health care, or discharged against medical advice.

Statistical Analysis
--------------------

We hypothesized that patients in the CHD/Dep group would have the greatest dysregulation in autonomic function, while patients in the NonCHD/NonDep group would have the least autonomic dysregulation relative to the other groups. To examine our first hypothesis, chi-square tests and one-way analysis of variance (ANOVA) were performed to evaluate group differences across demographic and medical variables, as well as markers of ANS dysregulation. Any variables that differed significantly between the four groups were used in subsequent regression models as covariates to assess the independent impact of ANS indicators on medical outcomes following CABG. Our second hypothesis was that ANS markers and group classification of cardiac patients (see above) would significantly predict in-hospital length of stay and patient discharge disposition.
Specifically, we expected a significant positive association between HR and plasma NE levels and in-hospital length of stay, and a significant negative association between HRV and in-hospital length of stay. In addition, we expected patients in the CHD/Dep group to be more likely to be discharged non-routinely following a CABG operation than those in the CHD/NonDep group. To address these hypotheses, logistic regression and multiple regression analyses were used. Logistic regression analysis was conducted with patient discharge disposition as the outcome variable after controlling for the effects of age, Deyo score, physical activity, and BMI (i.e., the variables that were significant in the preceding chi-square and ANOVA analyses). Independent variables in this analysis were group membership, HR, and HRV. Multivariable regression analysis was also performed to assess the impact of group membership, HR, and HRV on in-hospital length of stay after controlling for the effects of age, Deyo score, physical activity, and BMI.

Results
=======

Chi-square tests and separate one-way ANOVAs were conducted to evaluate the relationship between groups of patients and demographic and medical characteristics (see Table [1](#T1){ref-type="table"}). The independent variable had four levels: CHD/Dep, CHD/NonDep, NonCHD/Dep, and NonCHD/NonDep. The dependent variables were demographic, medical, and ANS dysregulation variables. For age, the ANOVA was significant, *F*(3, 358) = 3.75, *p* = .011. The strength of the relationship between groups of patients and age, as assessed by η^2^, was weak, with the groups-of-patients factor accounting for 1% of the variance of the dependent variable. For the Deyo score, the ANOVA was significant, *F*(3, 358) = 5.59, *p* = .001. The strength of the relationship between groups of patients and the Deyo score was weak, with the groups-of-patients factor accounting for 4.5% of the variance of the dependent variable.
For BMI, the ANOVA was significant, *F*(3, 358) = 7.46, *p* \< .001. The strength of the relationship between groups of patients and BMI was weak, with the groups-of-patients factor accounting for 5.9% of the variance of the dependent variable. The four groups also differed on physical activity, *χ^2^*(3, *n* = 362) = 45.6, *p* \< .05 (two-tailed), with *ϕ* = .067. For heart rate, the ANOVA was significant, *F*(3, 358) = 13.3, *p* \< .001. The strength of the relationship between groups of patients and HR was weak, with the groups-of-patients factor accounting for 16% of the variance of the dependent variable. For HRV, the ANOVA was significant, *F*(3, 358) = 205.1, *p* \< .001. The strength of the relationship between groups of patients and HRV was strong, with the groups-of-patients factor accounting for 46% of the variance of the dependent variable.

###### Demographic and Medical Characteristics

| Characteristics | CHD/Dep (1) | NonCHD/Dep (2) | CHD/NonDep (3) | NonCHD/NonDep (4) | *p* | Post Hoc |
|---|---|---|---|---|---|---|
| Age | 61.3 (8.3) | 59.2 (9.1) | 62.0 (9.3) | 58.3 (7.5) | .011 | 3 \> 4\* |
| Education | 10.3 (4.9) | 12.1 (6.3) | 12.2 (7.1) | 11.9 (6.6) | .169 | |
| Race | | | | | .326 | |
|  Caucasian | 81.9% (68) | 79.3% (73) | 81.4% (79) | 91.2% (82) | | |
|  African American | 9.6% (8) | 14.2% (13) | 10.3% (10) | 6.6% (6) | | |
|  Hispanic | 8.5% (7) | 6.5% (6) | 8.3% (8) | 2.2% (2) | | |
| Deyo score | 1.39 (1.04) | .961 (1.35) | 1.30 (.933) | .776 (1.23) | .001 | 1 \> 4\*; 3 \> 4\* |
| History of MI | 34.9% (29) | 35.9% (33) | 35.1% (34) | 20% (18) | .062 | |
| History of asthma | 3.6% (3) | 5.4% (5) | 7.2% (7) | 4.4% (4) | .724 | |
| Cigarette smoker | 55.4% (46) | 59.7% (55) | 57.3% (56) | 42.2% (38) | .054 | |
| AUDIT | 25.8 (6.2) | 27.4 (7.2) | 26.4 (5.8) | 19.6 (4.3) | | |
| Physical activity | | | | | .003 | |
|  Low | 63.9% (53) | 62.0% (57) | 51.5% (50) | 35.6% (32) | | |
|  Moderate | 26.5% (22) | 33.7% (31) | 37.1% (36) | 50.0% (45) | | |
|  High | 9.6% (8) | 4.3% (4) | 11.3% (11) | 14.4% (13) | | |
| Diabetes | 26.5% (22) | 25% (23) | 28.9% (28) | 13.3% (12) | .064 | |
| Hypertension | 32.5% (27) | 28.2% (26) | 30.9% (30) | 16.7% (15) | .073 | |
| Body mass index (BMI) | 29.7 (7.2) | 29.3 (6.4) | 27.8 (8.8) | 24.9 (8.2) | \< .001 | 1 \> 4\*; 2 \> 4\* |
| Heart rate | 76.3 (11.4) | 74.4 (12.2) | 71.4 (10.9) | 66.9 (10.3) | \< .001 | 1 \> 3 \> 4\*; 2 \> 4\* |
| Heart rate variability^a^ | 19.79 (7.9) | 24.53 (7.6) | 24.89 (7.88) | 50.51 (12.5) | \< .001 | 1 \< 2 \< 4\*; 1 \< 3 \< 4\* |
| Plasma NE^b^ | 293 (99) | 343 (175) | 308 (211) | 341 (160) | .120 | |

Note. Values are mean (SD) or % (*n*). AUDIT = Alcohol Use Disorders Identification Test. CHD/Dep = patients with CHD and depression. NonCHD/Dep = patients without CHD but with depression. CHD/NonDep = patients with CHD but without depression. NonCHD/NonDep = patients without CHD or depression. ^a^Standard deviation of RR intervals (msec). ^b^Log-transformed pg/ml. \* *p* \< .05.

Follow-up tests were conducted to evaluate the pairwise differences among the means. Because the variances among the four groups ranged from 55.5 to 85.7, we chose not to assume homogeneous variances and conducted post hoc comparisons with Dunnett\'s *C* test, which does not assume equal variances among the four groups. For age, there was a significant difference between the CHD/NonDep and NonCHD/NonDep groups, with the CHD/NonDep group being older. For the Deyo score, there were significant differences between the CHD/Dep and NonCHD/NonDep groups and between the CHD/NonDep and NonCHD/NonDep groups, with the CHD/Dep and CHD/NonDep groups having higher mean Deyo scores than the NonCHD/NonDep group.
For BMI, there were significant differences in the means between the CHD/Dep and NonCHD/NonDep groups and between the NonCHD/Dep and NonCHD/NonDep groups, with the CHD/Dep and NonCHD/Dep groups having higher mean BMI than the NonCHD/NonDep group. For heart rate, the CHD/Dep group had the highest HR, followed by the CHD/NonDep and NonCHD/NonDep groups. For HRV, the CHD/Dep group had the lowest HRV while the NonCHD/NonDep group had the highest HRV. Table [2](#T2){ref-type="table"} contains the results of the logistic regression analysis with patient discharge disposition (non-routine = 1 and routine = 0) as the outcome variable after controlling for the effects of age, Deyo score, physical activity, and BMI. Significant independent predictors of patient discharge were membership in the CHD/Dep group (OR: 1.43), HR (OR: 1.39), and HRV (OR: .597). Table [3](#T3){ref-type="table"} contains the results of the multivariable regression analysis with length of in-hospital stay as the dependent variable. The adjusted *R*^2^ of .26 indicates that about a fourth of the variability in length of stay is predicted by group, HR, and HRV. Significant independent predictors were group classification (*B* = 1.56), HR (*B* = .058), and HRV (*B* = -.963).

###### Logistic Regression Analysis Predicting Routine Discharge after Controlling for the Effects of Age, Deyo Score, Physical Activity, and BMI (n = 180)

| Variable | *B* | SE *B* | Wald\'s Statistic | Odds Ratio (95% CI) |
|---|---|---|---|---|
| Group (1 = CHD/Dep, 0 = CHD/NonDep) | .516\* | .023 | 14.3 | 1.43 (1.33-2.63) |
| Heart rate | .343\* | .121 | 15.5 | 1.12 (1.02-1.04) |
| Heart rate variability | -.513\*\* | .094 | 19.9 | .597 (.497-.718) |

Note. CHD/Dep = coronary artery disease (CHD) and depression. CHD/NonDep = CHD without depression. \* *p* \< .05. \*\* *p* \< .01.
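As a hedged sketch of the modeling step behind Table 2, the code below fits a logistic model of discharge disposition on group, HR, and HRV using plain NumPy gradient ascent on synthetic data. All values are made up; the fitted coefficients will not reproduce the table, and a real analysis would use a statistics package and also enter age, Deyo score, physical activity, and BMI as covariate columns.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n).astype(float)   # 1 = CHD/Dep, 0 = CHD/NonDep
hr = rng.normal(74, 11, n)                    # heart rate (bpm), synthetic
hrv = rng.normal(25, 8, n)                    # SDNN (ms), synthetic

# Synthetic outcome: non-routine discharge (1) made more likely by depression,
# higher HR, and lower HRV -- the direction of effect reported in Table 2.
true_logit = -0.5 + 0.8 * group + 0.05 * (hr - 74) - 0.08 * (hrv - 25)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Design matrix with centered continuous predictors; fit by gradient ascent
# on the log-likelihood (a minimal stand-in for a stats package's logistic fit).
X = np.column_stack([np.ones(n), group, hr - hr.mean(), hrv - hrv.mean()])
beta = np.zeros(4)
for _ in range(20000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (y - p) / n

odds_ratios = np.exp(beta[1:])  # ORs for group, HR, HRV
```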
###### Multiple Regression Analysis Predicting In-Hospital Length of Stay after Controlling for the Effects of Age, Deyo Score, Physical Activity, and BMI (n = 180)

| Variable | *B* | SE *B* | β | 95% CI |
|---|---|---|---|---|
| Group (1 = CHD/Dep, 0 = CHD/NonDep) | 1.56\*\* | .276 | .786 | .986-2.76 |
| Heart rate | .058\* | .096 | .265 | .456-.956 |
| Heart rate variability | -.963\* | .123 | -.564 | -.126-.021 |

Note. CHD/Dep = coronary artery disease (CHD) and depression. CHD/NonDep = CHD without depression. \* *p* \< .05. \*\* *p* \< .01.

Discussion
==========

Despite the significant contributions in the literature on mental health and cardiovascular disease, we simply do not know at this time which mechanisms account for the relationship between depression and outcomes following CABG surgery \[[@B11]\]. Also, to the best of our knowledge, no published studies have compared the incidence of ANS dysregulation in patients with both CHD and depression to that in patients with either depression or CHD alone. It is also unknown whether ANS dysregulation could explain CABG outcomes. These two questions are important to address because if ANS dysregulation is what links depression to CABG outcomes, then recognition and treatment of ANS dysregulation may lead to improved patient outcomes. It was in this framework that we sought to address ANS dysregulation and outcomes following a CABG operation. Our initial analyses revealed that age, Deyo score, physical activity, BMI, HR, and HRV differed significantly across the four groups. Specifically, patients who had CHD only were significantly older than patients who had neither CHD nor depression. Also, those who had CHD with or without depression had higher Deyo scores than patients who had neither CHD nor depression. This is expected given that the Deyo score reflects 17 comorbid medical conditions.
The measurement of ANS regulation/dysregulation has long been debated in the medical community. In our study, we defined ANS dysregulation as having a high basal HR, low HRV, and high plasma NE levels. Based on this definition, we found that patients with both depression and heart disease had the greatest autonomic dysregulation compared to the other three groups. The results supported our first hypothesis, showing that patients diagnosed with both CHD and depression had higher HR and lower HRV than patients in the other three groups. However, the findings were not consistent for plasma NE levels; this unexpected finding might be due to the following reasons. First, it is well documented that many factors can influence plasma levels of catecholamines, such as psychological stress, temperature, posture, exercise, medications, and food intake \[[@B42]\]. Furthermore, the sympathetic nervous system consists of many different nerves distributed throughout the body; consequently, the measurement of sympathetic nerve activity in one area of the body may not truly reflect sympathetic nerve activity throughout the body. Moreover, we collected blood samples from the antecubital vein. Given that sympathetic nerve activity in the arm may influence antecubital plasma NE levels, this measurement may not accurately reflect plasma NE levels throughout the body. As suggested by others \[[@B43],[@B44]\], total-body sympathetic nerve activity might be better assessed using arterialized venous sampling and plasma NE kinetic techniques that rely on dilution of radiolabeled NE and mathematical modeling to provide estimates of postganglionic norepinephrine release and clearance. The results supported our initial hypothesis that there are group differences across indicators of ANS activity. However, group differences alone do not provide much support that ANS dysregulation predicts outcome following a CABG operation.
Thus, in our subsequent research questions we examined whether markers of ANS activity predicted in-hospital length of stay and patient discharge disposition. We found increased length of stay and a greater likelihood of non-routine discharge following CABG in patients with both depression and CHD compared to those with CHD only. This finding is interesting for the following reasons. First, it suggests that HR and HRV may have an additive effect on CABG outcomes. Second, by including group classification as an independent variable in our analyses (and after controlling for potential confounding variables), we were able to assess whether there was an association between group classification and outcomes following a CABG operation. Despite the positive findings, there are several limitations to consider. The first involves extraneous variables that might inflate the systematic error in the study. Although we statistically controlled for a number of extraneous variables to remove some of the variability in the dependent variables attributable to them, other variables were not controlled for during the study. Because the nature of the study limited our ability to randomly assign patients to different conditions, variables such as the number of grafts patients received and the medications they were currently taking might have a systematic effect on (correlate with) length of stay and patient discharge disposition. Second, although this study used the MINI in diagnosing patients, it did not assess the reliability of diagnoses using multiple raters. Third, the generalizability of the results to the general population is limited because the study included only male patients, and it included patients with depression but without comorbid psychiatric disorders. Finally, the HRV index used in this study was SDNN, which is an acceptable measure of HRV for short-term measurements.
There have been studies of short-term HRV as a predictor of cardiac mortality and morbidity. However, most studies of HRV and depression in CHD have calculated HRV from 24-hour ambulatory monitoring and used frequency-domain indices of HRV.

Conclusions
===========

In summary, the current study presents evidence to support the hypothesis that ANS dysregulation might be one of the underlying mechanisms that links depression to CABG outcomes. However, further research controlling for other potential covariates, such as diet and testing conditions, is needed to confirm that ANS dysregulation is the mechanism linking these two conditions. These preliminary results also suggest that we begin to focus on treatment-related questions. For instance, future studies should focus on developing and testing interventions that target ANS dysregulation. Furthermore, it would be beneficial to know whether improved ANS regulation can decrease morbidity and mortality in depressed CHD patients following CABG. This line of research may guide therapeutics, especially since HRV can be modified through pharmacologic and biobehavioral therapies as well as exercise \[[@B45]\].

Competing interests
===================

The authors declare that they have no competing interests.

Authors\' contributions
=======================

TD was involved in developing the intellectual content of the manuscript and participated in the collection of the data, the analysis of the data, and the drafting of the manuscript. JS was involved in the design of the study and participated in the data analysis. EW was involved in revising the important intellectual content of the manuscript. DM participated in the design of the study and drafting the manuscript. EW participated in collecting the data and scoring the instruments. All authors read and approved the final manuscript.
Q: Free software to create a map server that can be installed directly in a subdomain without a server install? I want to create a web map page using my own shapefiles, but all the map server software I’ve seen needs to be installed directly on the server. Is there any software that allows me to create a map server just by putting the files in some folder and "executing" them (like PHP or JS, for example)? A: You could try OpenLayers after converting your shapefiles to GML, GeoJSON, or KML. Ogr2ogr is useful for the conversion step; see OGR vector formats for format details.
If I'm having extremely horrible lower back pain that hurts even worse every time I move, should I go to the hospital? In brief: No. The ER is for life-threatening emergencies. If you experience loss of bowel or bladder control or develop weakness of the legs, then a trip to the ER is warranted. You should see your regular doctor about your pain.
DESCRIPTION (provided by applicant): Speech sounds are the most important sounds that humans hear, yet little is known about the functional properties of the interconnected auditory and auditory-related brain regions that are essential to normal speech perception. Our research goal is to understand where and how speech information is processed within this network. We use novel combinations of complementary invasive and non-invasive experimental methods to study these brain regions in neurosurgery patients who require placement of chronic intracranial electrodes. These experiments involve combining direct cortical electrophysiological recordings with electrical stimulation techniques and anatomical and functional MRI methods. Our investigative strategy makes use of these unique experimental opportunities to overcome long-standing barriers to progress in this research field. Recent methodological advances now enable us to simultaneously study neural processing and connectivity at all levels of this network and directly test neural models of speech perception in human subjects. We will pursue our goals by testing hypotheses regarding: (1) the locations and functional properties of auditory cortical fields and auditory-related cortices of the temporal and frontal lobes, (2) the functional connections between these areas and other regions of the human brain, and (3) the directional flow of speech information within this network. These objectives are pursued by an experienced multidisciplinary group of investigators with expertise encompassing all required clinical and research topic areas. To our knowledge, the resulting data will be the first of their kind to directly demonstrate how speech information is processed at all levels of the temporal-frontal lobe auditory cortical system, and to directly demonstrate point-to-point functional connections between these cortical network regions and sites elsewhere in the human brain.
Knowledge of the normal network will improve our understanding of the pathophysiology of disease states affecting this system, and will provide mechanistic insights that are required to inform the design of new treatment strategies.
Q: Zooming in so a smaller portion of the game scene fills the entire screen So I am creating a game similar to Super Smash Bros., in which the camera should zoom in on players when they are close together, and scale back when they are apart. I create a window for my game via m_pWindow = SDL_CreateWindow("First SDL game attempt", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 1000, 1000, SDL_WINDOW_SHOWN); and then initialise my renderer via m_pRenderer = SDL_CreateRenderer(m_pWindow, -1, SDL_RENDERER_ACCELERATED); and when I call render for my buffer, it simply calls SDL_SetRenderDrawColor(m_pRenderer, m_clearRed, m_clearGreen, m_clearBlue, 0xff); SDL_RenderClear(m_pRenderer); SDL_RenderPresent(m_pRenderer); Now I am wondering where in this solution I should implement the camera; if you could point me in the right direction that would be great. For example, I have a player positioned at {500, 800}, and another at {300, 800}. I want to zoom in on this area of the screen. The total window is 1000x1000, but rendering {x = 200->600, y = 600->1000} is what I want to achieve. I have currently tried using SDL_Rect rect; rect.x = 0; rect.y = 0; rect.w = 900; //TEST VALUES rect.h = 900; //TEST VALUES SDL_RenderSetViewport(m_pRenderer, &rect); But this simply doesn't render part of the screen, as opposed to scaling the entire view. Any other help would be greatly appreciated. A: As it seems there is no such thing as a camera in SDL, you have to implement it yourself. The call you found, SDL_RenderSetViewport(...), is indeed for a very different purpose: it adjusts the size (and offset) of the area within your window to draw to. That is useful, e.g., if the window is resized. Where to implement the camera is rather simple: you just need to calculate the bounding box of the locations of your players and take its center and size for your rendering. This center (translation) and size (scale) are what defines a camera in 2D (actually there could also be rotation, but that's it).
How to apply it is rather more difficult, since (AFAIK) there is no built-in counterpart in SDL; you have to do it explicitly on your own. You should never change the actual positions of your players; instead, you have to calculate an on-screen position for every render cycle (frame). One example of this can be found here. If you want to do this more efficiently on the GPU, you could use OpenGL; see here (notice this is about legacy OpenGL). You could also use modern (core profile) OpenGL, but I think that is rather too hard for a simple 2D game.
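In code, the bounding-box camera described in the answer boils down to a few lines of math. Here is a minimal sketch (Python for brevity; the arithmetic translates one-to-one to C++, and the function name and margin parameter are my own, not SDL API):

```python
def camera_rect(points, margin, aspect):
    """Return the world-space view rectangle (x, y, w, h) that encloses all
    points, padded by `margin` on each side and grown to match the window's
    aspect ratio (width / height). Assumes margin > 0 so w and h are nonzero."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, max_x = min(xs) - margin, max(xs) + margin
    min_y, max_y = min(ys) - margin, max(ys) + margin
    w, h = max_x - min_x, max_y - min_y
    # Grow the smaller dimension so the rectangle matches the window shape.
    if w / h > aspect:
        h = w / aspect
    else:
        w = h * aspect
    cx, cy = (min_x + max_x) / 2, (min_y + max_y) / 2  # bounding-box center
    return (cx - w / 2, cy - h / 2, w, h)
```

For the example in the question (players at {500, 800} and {300, 800} in a 1000x1000 window), a margin of 100 yields exactly the desired view {x = 200->600, y = 600->1000}. To draw, map each world coordinate into the window when rendering: screen_x = (world_x - rect_x) * window_w / rect_w, and likewise for y.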
AT&T says it is "in advanced discussions" with power companies to start trials of a new broadband technology in at least two locations by this fall. This is an update on the Project AirGig that AT&T announced in September 2016. AirGig is a wireless technology even though it depends on the presence of power lines. Antennas that are placed on utility poles send wireless signals to each other; AT&T says the power lines "serve as a guide for the signals," ensuring they reach their destination. AT&T says the wireless signals could be used to deliver multi-gigabit Internet speeds for either smartphone data or home Internet service. Trial locations have not yet been announced, but today's announcement says, "One location will be in the United States with others to be determined in the coming months." There's also no word on when commercial deployment might begin, but AT&T seems to be excited about the project. "Future field trials will demonstrate how Project AirGig works to support power companies’ smart grid technologies, such as meter, appliance, and usage control systems and early detection of powerline integrity issues," AT&T said. "The trials will also evaluate the technology during inclement weather, such as rain, snow, and high winds. Importantly, we can more precisely determine the cost of deployment while maintaining the highest signal quality for a customer." AirGig devices use inductive power and don't require a direct electrical connection It's not clear whether any individual customers will get AirGig service in the trials this year. We asked AT&T, and the company said, "that's among the details we’re working out for the first trials." Though AT&T has made fiber-to-the-premises available to nearly 4 million customer locations nationwide, the company's old copper networks haven't been upgraded in a lot of areas, leaving many customers with painfully slow speeds or no wired broadband at all. 
AT&T doesn't seem to be in any rush to help all of these customers access modern Internet speeds, but it's also testing a couple of technologies in addition to AirGig that might help rural areas. AT&T provided very short updates on those projects today. One such technology is G.fast, a new version of DSL that greatly increases speeds over copper lines. "Based on the learnings of a G.fast trial at a multifamily property in Minneapolis, we plan to make the technology available at additional locations beginning mid-2017," AT&T said. G.fast can offer fiber-like speeds but those speeds degrade over distance, just like traditional DSL, so in many areas, AT&T would need to bring fiber closer to homes to deploy G.fast. AT&T is also testing a home wireless Internet service for rural areas. "In 2016, we began trialling a Fixed Wireless Internet (FWI) service in several states on our path to expand access to locations with slow or no Internet connectivity—primarily in rural areas—as part of our participation in the FCC Connect America Fund (CAF)," AT&T said today. "We plan to begin offering FWI in areas where we accepted CAF support in mid-2017, reaching over 400,000 locations by the end of this year. Ultimately we plan to expand internet access to more than 1.1 million locations across 18 states by the end of 2020."
Q: I want to build a custom ListView in Android (screen attached). How can I achieve this? What should be used for it: a Dialog, Snackbar, or popup window with animation? I have to show the list items as in this image. A: You can use either a DialogFragment or a bottom sheet with a RecyclerView. In WhatsApp, I think a bottom sheet is used, with so many constraints. For a bottom sheet with a RecyclerView you can check this or this, and for DialogFragment you can check this for how to display a dialog at the bottom. Hope this helps you.
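Since the answer recommends a bottom sheet holding a RecyclerView, here is a minimal sketch of that approach. It assumes the Material Components library is on the classpath; the layout file `R.layout.bottom_sheet_list`, the view id `R.id.list`, and the `ItemAdapter` class are illustrative names, not from the question.

```kotlin
import android.os.Bundle
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import androidx.recyclerview.widget.LinearLayoutManager
import androidx.recyclerview.widget.RecyclerView
import com.google.android.material.bottomsheet.BottomSheetDialogFragment

// A bottom sheet that shows a vertical list, similar to the WhatsApp-style
// sheet described in the answer.
class ListBottomSheet : BottomSheetDialogFragment() {

    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View = inflater.inflate(R.layout.bottom_sheet_list, container, false)

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        val list = view.findViewById<RecyclerView>(R.id.list)
        list.layoutManager = LinearLayoutManager(requireContext())
        list.adapter = ItemAdapter() // your own RecyclerView.Adapter
    }
}

// Shown from an Activity or Fragment:
// ListBottomSheet().show(supportFragmentManager, "list_sheet")
```

The sheet slides up from the bottom with the standard animation, so no custom popup-window animation code is needed.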
Related Stories Provincial police have found the body of a fisherman who drowned on Northern Light Lake this weekend. Dennis Todd, 59, of Jefferson City, Mo., was found in the Trafalgar Bay area Sunday afternoon. He was reported missing Thursday night, after another fisherman found a woman alone on an island. The woman said she had been fishing with Todd. OPP said the woman was wearing a life jacket but Todd was not when the two were thrown from the boat. Both were employees of Gunflint Lodge in Grand Marais, Minn. The lodge's owner, Bruce Kerfoot, described Todd as an "extremely skilled fisherman" who had worked for him for 27 years. He said he wasn't surprised to learn Todd had not been wearing a life jacket. 'Total, freak accident' "The guides usually strap them to their seat, because they want to be more nimble," said Kerfoot. "Unfortunately, it's kind of the habit of guides not to wear vests unless they're in rough water conditions." Original reports about the incident indicated that Todd and his passenger were thrown from the boat when it hit something underwater, but Kerfoot said that doesn't appear to be what happened. OPP officers were assisted in the search by guides from Gunflint Lodge. (Supplied by OPP) "The re-enactment looks like he lost the grip on his motor," said Kerfoot, whose employees assisted OPP with the initial search for Todd. "And that meant, at full throttle, it did a 90-degree turn, quickly, and the centrifugal force ejected both people from the boat. "That was just one of those total, freak accidents." Kerfoot said he plans to buy lighter, more comfortable life jackets to encourage guides to wear them whenever they're on the water.
1. Field of the Invention

The present invention relates to a vehicular electronic control apparatus with at least a portion of its control data remaining unspecified when the process for manufacturing it is finished, permitting control data corresponding to a specific specification to be specified at a later stage. Further, the invention relates to a method for setting a control specification for the vehicular electronic control apparatus.

2. Description of the Related Art

Japanese Laid-open (Kokai) Patent Application Publication No. H04 (1992)-092734 discloses a vehicular electronic control apparatus in which a plurality of control specifications are previously stored, and one of the control specifications is selected to execute the control. Japanese Laid-open (Kokai) Patent Application Publication No. H08 (1996)-237772 discloses a technique in which a plurality of electronic control devices mounted on a vehicle mutually diagnose failures, and if a first electronic control device which sends data has failed, a second electronic control device which receives the data carries out control by using a default value.

Consider a vehicular electronic control apparatus whose manufacture is finished with at least a portion of the control data left unspecified, so that specification information is taken in afterwards from the outside to determine the control specification. If the power is turned ON to carry out the taking-in of the specification information, a normal control operation is not yet carried out, because the control specification has not been determined.
Hence, when the electronic control apparatus includes a self-diagnosing means and an associated means for transmitting the result of the self-diagnosis to the outside, then, since the normal control operation does not take place, the self-diagnosing means will determine that a condition is abnormal, and such an abnormality determination result will necessarily be outputted by the associated transmitting means to the outside. Nevertheless, under a condition in which the control specification has not yet been established, a determination that an abnormal state has occurred should not be made by the self-diagnosis operation; therefore, if any abnormality determination result is outputted to the outside, an unfavorable problem may occur in that an erroneous result of the self-diagnosis is stored and an unnecessary control operation may start based on the erroneous result of the diagnosis.
Peripheral neuropathy after concomitant dimethyl sulfoxide use and sulindac therapy. The case is presented of a 63-year-old man with a long history of degenerative arthritis who took sulindac (Clinoril) 200 mg BID for 6 months with no untoward effects. Then, without physician knowledge, he began applying 90% dimethyl sulfoxide (DMSO) topically to his upper and lower extremities. Shortly thereafter, he developed a profound mixed sensorimotor peripheral neuropathy. Serial electromyographic and nerve conduction studies performed at intervals of several months for 1 year suggested both segmental demyelination and axonal neuropathy. The patient experienced initial deterioration followed by gradual but incomplete recovery.
Wikileaks founder Julian Assange flatly rejected U.S. intelligence claims that his organization received leaked Clinton emails from the Russian government, saying the allegations are part of a 'foolish' and 'dangerous' effort by Democrats to overturn Donald Trump’s election victory. 'Our source is not the Russian government,' Assange told Sean Hannity on his radio show on Thursday, in his first U.S. interview since the election. 'We have U.S. intelligence saying that they know how we got our stuff and when we got it, and us saying we didn’t get it from a state.' Assange said his group has a strict policy against commenting on its sources, but he wanted to dispute allegations that Wikileaks was involved in a Russian-orchestrated campaign to swing the election for Donald Trump. The CIA believes the Russian government gave Wikileaks hacked emails from the Democratic National Committee and Clinton's campaign Chief John Podesta to intentionally damage Hillary Clinton. But that view has been questioned by the FBI and the Director of National Intelligence, who say there is not enough evidence to determine Russia’s motivation and whether it gave the documents to Wikileaks. Assange declined to confirm or deny comments from former UK Ambassador Craig Murray – a close Wikileaks associate – who told Dailymail.com this week that the group's email sources were American and that he met with one of them in Washington, D.C. 'We don’t comment on sourcing,' said Assange. 'Craig Murray is a former UK ambassador. He is a friend of mine. He is not authorized to speak on behalf of Wikileaks.' Murray told Dailymail.com that he traveled to Washington, D.C. 
in September and met with a Wikileaks source in a wooded area near American University. 'Neither of [the leaks] came from the Russians,' said Murray. 'The source had legal access to the information. The documents came from inside leaks, not hacks.' Murray is a controversial figure. He was removed from his posting amid allegations of misconduct. He was cleared but quit the U.K. diplomatic service and is now a critic of successive British governments. Although Assange said Murray does not speak for Wikileaks, the ex-diplomat's links to the organization are well known. Assange speculated that Clinton supporters were promoting the Russia allegations to raise doubts about the election's legitimacy, in a last-ditch effort to block Trump from getting instated by the Electoral College. Democrats have been urging electors in states that voted for Trump to flip their support when the Electoral College meets on Monday. If enough electors were to switch their votes, it could block Trump from taking office – although experts say the strategy is a long-shot and would almost certainly be overruled in Congress. 'It's foolish because it won't happen,' said Assange. 'It's dangerous because the argument that it should happen can be used in four years' time, or eight years' time, for a sitting government that doesn't want to hand over power. That's a very dangerous thing.' 
The Wikileaks founder said he was surprised by Trump's victory, and that the inaccurate polling – which predicted a comfortable Clinton victory – might have actually helped him win by giving Clinton a false sense of security. The Clinton campaign 'got fooled by the polling and therefore didn't spend the amount of money that they needed to on the campaign, and didn't recruit even more mainstream media sources to beat up Trump and defend Clinton.' He also called the mainstream media 'a paper tiger in this election' and 'increasingly not very important.' 'There was intense pressure in the United States from the mainstream media to make people feel ashamed for wanting to vote for Donald Trump, and to make them feel that they had to vote for Hillary Clinton, even though they didn't want to,' said Assange. 'The degree of bias they've been showing during this election…this is the other reason why Trump won,' he added. 'That kind of hectoring from the liberal media in the United States, and the tide of advertising that Hillary Clinton was putting out, really put a lot of people off.'
---
title: Ansible Operator Watches
linkTitle: Watches
weight: 20
---

The Watches file contains a list of mappings from custom resources, identified by their Group, Version, and Kind, to an Ansible Role or Playbook. The Operator expects this mapping file in a predefined location: `/opt/ansible/watches.yaml`. These resources, as well as child resources (determined by owner references), will be monitored for updates and cached.

* **group**: The group of the Custom Resource that you will be watching.
* **version**: The version of the Custom Resource that you will be watching.
* **kind**: The kind of the Custom Resource that you will be watching.
* **role** (default): Specifies a role to be executed. This field is mutually exclusive with the "playbook" field. This field can be:
  * an absolute path to a role directory.
  * a relative path within one of the directories specified by the `ANSIBLE_ROLES_PATH` environment variable or the `ansible-roles-path` flag.
  * a relative path within the current working directory, which defaults to `/opt/ansible/roles`.
  * a fully qualified collection name of an installed Ansible collection. Ansible collections are installed to `~/.ansible/collections` or `/usr/share/ansible/collections` by default. If they are installed elsewhere, use the `ANSIBLE_COLLECTIONS_PATH` environment variable or the `ansible-collections-path` flag.
* **playbook**: This is the playbook name that you have added to the container. This playbook is expected to be simply a way to call roles. This field is mutually exclusive with the "role" field. When running locally, the playbook is expected to be in the current project directory.
* **vars**: This is an arbitrary map of key-value pairs. The contents will be passed as `extra_vars` to the playbook or role specified for this watch.
* **reconcilePeriod** (optional): The maximum interval in seconds that the operator will wait before beginning another reconcile, even if no watched events are received. When an operator watches many resources, each reconcile can become expensive, and a low value here can actually reduce performance. Typically, this option should only be used in advanced use cases where `watchDependentResources` is set to `False` and it is not possible to use the watch feature, e.g. to manage external resources that don’t raise Kubernetes events.
* **manageStatus** (optional): When true (default), the operator will manage the status of the CR generically. When set to false, the status of the CR is managed elsewhere, by the specified role/playbook or in a separate controller.
* **blacklist**: A list of child resources (by GVK) that will not be watched or cached.

An example Watches file:

```yaml
---
# Simple example mapping Foo to the Foo role
- version: v1alpha1
  group: foo.example.com
  kind: Foo
  role: Foo

# Simple example mapping Bar to a playbook
- version: v1alpha1
  group: bar.example.com
  kind: Bar
  playbook: playbook.yml

# More complex example for our Baz kind
# Here we will disable requeuing and be managing the CR status in the playbook,
# and specify additional variables.
- version: v1alpha1
  group: baz.example.com
  kind: Baz
  playbook: baz.yml
  reconcilePeriod: 0
  manageStatus: False
  vars:
    foo: bar

# ConfigMaps owned by a Memcached CR will not be watched or cached.
- version: v1alpha1
  group: cache.example.com
  kind: Memcached
  role: /opt/ansible/roles/memcached
  blacklist:
    - group: ""
      version: v1
      kind: ConfigMap

# Example usage with a role from an installed Ansible collection
- version: v1alpha1
  group: bar.example.com
  kind: Bar
  role: myNamespace.myCollection.myRole

# Example filtering of resources with specific labels
- version: v1alpha1
  group: bar.example.com
  kind: Bar
  playbook: playbook.yml
  selector:
    matchLabels:
      foo: bar
    matchExpressions:
      - {key: foo, operator: In, values: [bar]}
      - {key: baz, operator: Exists, values: []}
```

The advanced features can be enabled by adding them to your watches file per GVK. 
They can go below the `group`, `version`, `kind` and `playbook` or `role`. Some features can be overridden per resource via an annotation on that CR. The options that are overridable will have the annotation specified below.

| Feature | Yaml Key | Description | Annotation for override | default | Documentation |
|---------|----------|-------------|-------------------------|---------|---------------|
| Reconcile Period | `reconcilePeriod` | time between reconcile runs for a particular CR | ansible.sdk.operatorframework.io/reconcile-period | 1m | |
| Manage Status | `manageStatus` | Allows the ansible operator to manage the conditions section of each resource's status section. | | true | |
| Watching Dependent Resources | `watchDependentResources` | Allows the ansible operator to dynamically watch resources that are created by ansible | | true | [dependent watches](../dependent-watches) |
| Watching Cluster-Scoped Resources | `watchClusterScopedResources` | Allows the ansible operator to watch cluster-scoped resources that are created by ansible | | false | |
| Max Runner Artifacts | `maxRunnerArtifacts` | Manages the number of [artifact directories](https://ansible-runner.readthedocs.io/en/latest/intro.html#runner-artifacts-directory-hierarchy) that ansible runner will keep in the operator container for each individual resource. | ansible.sdk.operatorframework.io/max-runner-artifacts | 20 | |
| Finalizer | `finalizer` | Sets a finalizer on the CR and maps a deletion event to a playbook or role | | | [finalizers](../finalizers) |
| Selector | `selector` | Identifies a set of objects based on their labels | | None Applied | [Labels and Selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) |
| Automatic Case Conversion | `snakeCaseParameters` | Determines whether to convert the CR spec from camelCase to snake_case before passing the contents to Ansible as extra_vars | | true | |

#### Example

```yaml
---
- version: v1alpha1
  group: app.example.com
  kind: AppService
  playbook: playbook.yml
  maxRunnerArtifacts: 30
  reconcilePeriod: 5s
  manageStatus: False
  watchDependentResources: False
  snakeCaseParameters: False
  finalizer:
    name: finalizer.app.example.com
    vars:
      state: absent
```

**Note:** By using the command `operator-sdk add api` you are able to add additional CRDs to the project API, which can aid in designing your solution using concepts such as encapsulation, the single responsibility principle, and cohesion, which could make the project easier to read, debug, and maintain. With this approach, you are able to customize and optimize the configurations more specifically per GVK via the `watches.yaml` file.

**Example:**

```yaml
---
- version: v1alpha1
  group: app.example.com
  kind: AppService
  playbook: playbook.yml
  maxRunnerArtifacts: 30
  reconcilePeriod: 5s
  manageStatus: False
  watchDependentResources: False
  finalizer:
    name: finalizer.app.example.com
    vars:
      state: absent

- version: v1alpha1
  group: app.example.com
  kind: Database
  playbook: playbook.yml
  watchDependentResources: True
  manageStatus: True
```
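As a small illustration of the per-CR annotation override described above, a custom resource could set the reconcile-period annotation like this. This is a sketch reusing the `AppService` GVK from the examples; the resource name and the `30s` value are made up for illustration:

```yaml
apiVersion: app.example.com/v1alpha1
kind: AppService
metadata:
  name: example-appservice
  annotations:
    # Overrides the watch-level reconcilePeriod for this CR only
    ansible.sdk.operatorframework.io/reconcile-period: "30s"
spec: {}
```

Only the features listed with an override annotation in the table can be changed this way; the rest apply to every CR of the watched GVK.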
Devices that deliver drugs through the skin for absorption into the body have been known for some time. For example, U.S. Pat. No. 3,249,109 describes a two-layer topical dressing that consists of an adhesive base made of drug-containing hydrated gelatin with a fabric backing layer. This type of device could be considered a "skin-controlled" device because the system delivers an excess of drug to the skin and the rate of absorption is controlled by the permeability of the skin at the application site which can vary over relatively wide ranges from site-to-site and individual-to-individual. In order to deliver transdermal drugs having a relatively narrow therapeutic range, and for which such wide variations could not be tolerated, "system-controlled" delivery devices were developed which deliver drugs transdermally at rates which are controlled primarily by the delivery device to reduce or eliminate the variations in drug input rate associated with variations in skin permeability. For example, U.S. Pat. No. 3,598,122 describes a multilayer adhesive bandage formed of a backing layer, a drug reservoir layer and a contact adhesive layer, and includes means for metering the rate at which the drug is released to the skin. Other representative system controlled transdermal drug delivery devices are described in U.S. Pat. Nos. 3,797,494 and 4,379,454, the latter of which teaches controlling the rate at which a drug is absorbed through the skin by controlling the rate at which a permeation enhancer for the drug is delivered to the skin. (All of the aforementioned U.S. patents are incorporated herein by reference.) In addition, Black, "Transdermal Drug Delivery Systems", U.S. Pharmacist, November 1982, pp. 
49-78, provides additional background information regarding commercially available transdermal drug delivery systems, and a reasonably complete summary of the factors involved in percutaneous absorption of drugs may be found in Arita, et al, "Studies on Percutaneous Absorption of Drugs", Chem. Phar. Bull., Vol. 18, 1970, pp. 1045-1049; Idson, "Percutaneous Absorption", J. Phar. Sci., Vol. 64, No. 6, pp. 910-922; and Cooney, Advances in Biomedical Engineering, Part 1, Chapter 6, "Drug Permeation Through Skin: Controlled Delivery for Topical or Systemic Therapy", Marcel Dekker, Inc., New York and Basel, 1980, pp. 305-318. Although the transdermal drug delivery route is rapidly becoming a preferred delivery route for a wide variety of drugs, transdermal delivery is not without its problems. A large number of drugs are oil-insoluble and in aqueous solutions exist, depending on pH, either as the unionized acid or base or in the ionized salt form. The unionized forms of most drugs are generally more permeable through the skin than the ionized drug, making it easier to achieve, either with or without permeation enhancers, blood levels which are capable of producing the desired therapeutic effects. (See R. J. Scheuplein, et al., "Permeability of the Skin", Physiological Reviews, Vol. 51, No. 4, October 1972, pp. 702-747, particularly 729-735). Unfortunately, the pH of aqueous solutions of a free base or acid is usually below 3 for the acid or above 10 for the base, and transdermal delivery at these pH's may cause discomfort and/or irritation to the skin of the recipients. Adjusting the pH of solutions of these drugs to a more physiologically acceptable level (e.g., 5-8) results in a substantial proportion of the drug being converted to the nonpermeable, ionized form. 
As a result, prior to our invention we are unaware of any transdermal drug delivery system which is capable of delivering the ionized form of highly ionized, fat insoluble drugs at rates adequate to produce desired therapeutic effects. It is accordingly an object of this invention to provide a medical device for transdermal drug delivery adapted to deliver the ionized form of a highly ionized, fat insoluble drug. It is another object of this invention to provide a transdermal drug delivery device capable of delivering a highly ionized, fat insoluble drug from an aqueous reservoir. It is another object of this invention to provide a transdermal drug delivery device in which a highly ionized, fat insoluble drug is delivered at a substantially physiological pH. It is another object of this invention to provide a transdermal drug delivery device capable of delivering the ionized form of a fat insoluble drug at a substantially constant rate. It is another object of this invention to provide reservoir compositions useful in the aforementioned drug delivery devices.
891 So.2d 596 (2004) William E. MATTHEWS, Petitioner, v. The STATE of Florida, Respondent. No. 3D04-2909. District Court of Appeal of Florida, Third District. December 29, 2004. Rehearing Denied January 21, 2005. William E. Matthews, in proper person. Charles J. Crist, Jr., Attorney General, for respondent. Before GERSTEN, FLETCHER, and WELLS, JJ. PER CURIAM. We deny William E. Matthews' petition for writ of mandamus, through which he seeks to compel the trial court to correct his habitual offender sentence, pursuant to Blakely v. Washington, ___ U.S. ___, 124 S.Ct. 2531, 159 L.Ed.2d 403 (2004). We note first that Blakely does not apply retroactively to cases on collateral appeal. In re Dean, 375 F.3d 1287, 1290 (11th Cir.2004)("Regardless whether Blakely established a `new rule of constitutional law' . . . the Supreme Court has not expressly declared Blakely to be retroactive to cases on collateral appeal."). See also McBride v. State, 884 So.2d 476 (Fla. 4th DCA 2004). Further, Matthews' general assertion that the habitual offender statute is illegal under Blakely and that he should have been given a guidelines sentence is incorrect. Blakely does not declare habitual offender sentencing illegal, and because Matthews was legally sentenced as an habitual offender the sentencing guidelines are inapplicable. Petition denied.
Tuesday, November 11, 2014 NEW AND IMPROVED ROMNEY DID WELL IN 2014... AND HE GETS IT. WILL HE RUN??? Romney, who was in Washington on Friday to speak at the Israeli American Council’s national conference, has experienced a resurgence of sorts in recent months, as dozens of campaigns asked for his assistance. He spent the final days before Tuesday’s elections in Alaska, where he stumped for Republican Senate candidate Dan Sullivan. At Friday’s event, Romney sharply criticized President Obama’s foreign policy. “It’s tempting to think he’s just inept,” Romney said, “but the reality is, he does have a foreign policy” — one that is “weakening our military and distancing us from our allies.” I THINK THE ABOVE PROVES ONCE AGAIN THAT ROMNEY REALLY GETS IT AND THAT HE WOULD MAKE A GREAT POTUS. WE MIGHT FIND A BETTER CANDIDATE FOR 2016. WE CERTAINLY COULD DO WORSE. IT TOOK REAGAN A FEW DEFEATS TO GET THE HANG OF IT. EVEN LINCOLN LOST TO STEPHEN DOUGLAS A FEW TIMES. I THINK HE'D BE BETTER THAN JEB AND HUNTSMAN AND SANTORUM AND THAT HE'S GOT BETTER EXECUTIVE EXPERIENCE THAN RUBIO OR CRUZ. PERRY WAS A DUD LAST TIME; MAYBE HE'LL IMPROVE THIS TIME - BUT HE'S STILL TEXAN AND A LOT OF THE SWING VOTERS STILL HAVE TEXAS-FATIGUE. WALKER IS GREAT, BUT NO COLLEGE AND NO MILITARY EXPERIENCE AND HE LOOKS LIKE A KID. KASICH IS A GOOD ONE: HE'S GOT GREAT EXPERIENCE IN CONGRESS AND AS A GOVERNOR.
Russia's Improbable Futures and the Lure of the Past, by Mark Harrison (Mark Harrison's blog, Warwick Blogs, University of Warwick: http://blogs.warwick.ac.uk/markharrison/entry/russias_futures/) <p class="answer">Writing about web page <a href="http://rbctv.rbc.ru/polls/list" title="Related external link: http://rbctv.rbc.ru/polls/list">http://rbctv.rbc.ru/polls/list</a></p> <p style="padding-left: 30px;">On 27 January I was asked to join a panel on Russia's Future within the University of Warwick One World Week. (The other panel members were Richard Connolly, co-director of the University of Birmingham Centre for Russian, European, and Eurasian Studies, and the journalist Oliver Bullough.) I decided to talk about how Russians are looking to the past in order to understand their uncertain future. Here, roughly, is what I said:</p> <p>Russia has many possible futures; all of them are improbable. The economy must do better, stay the same, or do worse. Relations with the West must improve, remain as they are, or deteriorate further. Adding them up, there are nine possible combinations. The probability of any particular combination is small, so each is improbable. But one of them must happen because, taken together, the sum of the probabilities is one. One of them must happen, but we have no idea which one.</p> <p>Faced with an uncertain future, we often look to the past for guidance and reassurance. What was the outcome when we were previously in a situation that felt the same? At New Year, many Russians were looking to the past. I found this out when I stumbled on the website of RBC-TV, a Russian business television channel. Every day <a href="http://rbctv.rbc.ru/polls/list">the RBC website polls its fans</a> on a different multiple-choice question. 
On 30 December, the question of the day, with answers (and votes in parentheses), was:</p> <blockquote class="quotes"> <p>What should Father Frost bring for Russia?</p> <ul> <li>End of sanctions (6%)</li> <li>End of the war in Ukraine (27%)</li> <li>A stable ruble (7%)</li> <li>Return of the Soviet Union (59%)</li> </ul> </blockquote> <p>It's disconcerting to be reminded of the strength of nostalgia among Russians for the time when their country was a global superpower. The Soviet Union united all the Russias -- if anyone's not sure what that means, that's Great Russia, Little Russia and New Russia (Ukraine), and White Russia (Belarus) -- with the countries of the Baltic, Transcaucasia, and Central Asia. The Soviet Union stood for strong centralized rule, with a powerful secret police and thermonuclear weapons. The nostalgia is shared by President Putin, who said (on 25 April 2005): &ldquo;The collapse of the USSR was the greatest geopolitical disaster of the [twentieth] century.&rdquo; </p> <p>Here's a question that RBC asked its supporters on 25 December:</p> <blockquote class="quotes"> <p>Can direct controls and a price freeze save Russia&rsquo;s economy? </p> <ul> <li>Yes, the free market is not up to the job (55%)</li> <li>No, that would cause insecurity and panic (40%)</li> <li>No need &ndash; no crisis (5%)</li> </ul> </blockquote> <p>Again, the strength of support for the backward-looking answer is disconcerting. I tried to think of the last time the Russian economy was in a squeeze like today's. The last time the oil price came down like this was the mid-1980s when North Sea and Alaskan oil broke the power of the OPEC cartel for a few years (that's <a href="http://www.project-syndicate.org/commentary/oil-prices-ceiling-and-floor-by-anatole-kaletsky-2015-01">the analysis of Anatole Kaletsky</a>). 
The disappearance of oil rents probably contributed to the collapse of the Soviet economy.</p> <p>But a closer parallel to today is 1930, when two things happened at once. The global market for Soviet exports shrank in the Great Depression. And international lending dried up, meaning that the Soviet economy could not roll over its debts. The Soviet import capacity collapsed almost overnight. Stalin responded by forcing the pace of import substitution through rapid industrialization. He demanded &quot;The five-year plan in four years!&quot; The result was a crisis of excessive mobilization that claimed millions of lives in the famine of 1932 and 1933.</p> <p>Prominent in calling for an economic breakthrough today is President Putin, who responded to Western sanctions on 18 September 2014: &ldquo;In the next 18 to 24 months we need to make a real breakthrough in making the Russian real sector more competitive, something that in the past would have taken us years.&rdquo; Government-friendly Russian economists are talking about <a href="http://svpressa.ru/economy/article/102320/">the need to go from a market economy back to a mobilization economy</a>. In case the foreigners aren't getting the message, first deputy prime minister Shuvalov told those assembled in Davos on 23 January: &ldquo;We will survive any hardship in the country &ndash; eat less food, use less electricity.&rdquo;</p> <p>A third question that RBC asked its viewers was on 19 December:</p> <blockquote class="quotes"> <p>What matters most for the country right now? </p> <ul> <li>The foreign exchange rate (33%)</li> <li>Who is a true patriot and who is fifth column (56%)</li> <li>&ldquo;Vyatskii kvas&rdquo; (11%)</li> </ul> </blockquote> <p>(The English equivalent of &quot;Vyatskii kvas&quot; would probably be Devon cider. 
For the reasons why it was being talked up as a solution to Russia's problems last December, click <a href="http://www.rferl.org/content/putin-press-conference-drunk-journalist-stroke/26751313.html">here</a>.)</p> <p>Here the strength of support for the backward looking answer is shocking. What is the &quot;fifth column&quot; and how does it resonate in Russian history? In 1937, Stalin saw Moscow surrounded and penetrated by enemies. This coincided with the siege of Madrid in Spain&rsquo;s Civil War. In 1936 the nationalist General Mola was asked which of his four columns would take Madrid. He replied, famously: &ldquo;My fifth column&rdquo; (of undercover nationalist agents already in the city). In Madrid the Republicans responded by executing 4,000 nationalist sympathisers. In the Soviet Union Stalin, who was also watching, ordered the execution of 700,000 &ldquo;enemies of the people.&rdquo; </p> <p>In recent times, the spectre of a &quot;fifth column&quot; was first reawakened by President Putin on 18 March 2014, when he remarked: &quot;Western politicians are already threatening us with not just sanctions but also the prospect of increasingly serious problems on the domestic front. I would like to know what it is they have in mind exactly: action by a fifth column, this disparate bunch of &lsquo;national traitors&rsquo;, or are they hoping to put us in a worsening social and economic situation so as to provoke public discontent?&quot;</p> <p>Putin took up this theme again on 18 December 2014: &quot;The line that separates opposition activists from the fifth column is hard to see from the outside. What&rsquo;s the difference? Opposition activists may be very harsh in their criticism, but at the end of the day they are defending the interests of the motherland. 
And the fifth column is those who serve the interests of other countries, and who are only tools for others&rsquo; political goals.&quot;</p> <p>Here you can see that Putin did affirm the possibility that opposition can be loyal. But is it possible for Russia to have a loyal opposition today? The only example of loyal opposition that Putin could bring himself to mention was the poet Lermontov -- who died in 1841.</p> <p>These echoes of the Soviet past in Russian opinion today are disconcerting and even frightening. At the same time it is important to remember that, even while Russians look to the past, Russia today is absolutely not the Soviet Union. From today's vantage point it is nearly impossible to imagine how closed, stifling, claustrophobic, and isolated was everyday life even in late Soviet times. Russians in 2015 lead very different lives from Soviet citizens in 1985. They are richer, live longer, are able to visit, study, phone, and write abroad. Even today they are relatively free to search for and find information and discuss it among themselves. In all these ways, the transition from communism has not been a failure. </p> <p>As Andrei Shleifer and Daniel Treisman (2014) wrote recently: &quot;Putin&rsquo;s authoritarian turn clearly makes Russia more dangerous. But it does not, thus far, make the country politically abnormal. In fact, on a plot of different states&rsquo; Polity [i.e. democracy] scores against their incomes, Russia still deviates only slightly from the overall pattern. For a country with Russia&rsquo;s national income, the predicted Polity score [a measure of democracy] in 2013 was 76 on the 100-point scale. Russia&rsquo;s actual score was 70, on a par with Sri Lanka and Venezuela.&quot;</p> <p>To see Russia as just another middle income country helps us to identify Russia's underlying problem. 
In Russia, just like in Sri Lanka, Venezuela, and most countries outside &ldquo;the West,&rdquo; wealth and power are fused in a small, closed elite, and that is how it has always been. The fusion of wealth and power was and remains normal. Before the revolution Russia was governed by a landowning Tsar, aristocracy, and church. After the revolution Russia was governed by a communist elite that monopolized all productive property plus media, science, and education. Today Russia is governed by an ex-communist, ex-KGB elite that has once again gathered control of energy resources and the media. This fusion of wealth and power is neither new nor unusual among middle and low income countries.</p> <p>In societies where wealth and power are fused, particular people are powerful because they control wealth and the same people are wealthy if and only if they are powerful. This is what gives politics in such societies its life-and-death immediacy. To lose power means to lose everything; when power changes hands there is often violence. &ldquo;All politics is real politics,&rdquo; write Douglass North, John Wallis, and Barry Weingast (2009); &ldquo;people risk death when they make political mistakes.&rdquo;</p> <p>Several times in history, liberal reformers have tried to separate wealth and power in Russia and make space for public opinion. 
Here are some examples from the last 150 years:</p> <ul> <li>In 1864 a reform brought elected local governments &ndash; but within an absolute monarchy.</li> <li>Shaken by military defeats and popular insurrections, in 1906 the Russian monarchy introduced an elected parliament, although with few powers, and individual peasant landownership, although (as it turned out) with little time for implementation.</li> <li>In 1992 and 1995 Russia saw voucher privatization and &quot;loans-for-shares,&quot; creating a class of corporate shareholders &ndash; but the outcome was crony capitalism, not free enterprise.</li> <li>In 2003, Mikhail Khodorkovskii tried to separate the governance of Yukos from the &quot;power vertical,&quot; but he went to prison for it.</li> </ul> <p>All these efforts have so far achieved only partial or temporary success. Russia has not yet found a solution to the problem of the fusion of wealth and power. Here, at last, is an aspect of Russia's future that is certain: If Russia is ever to find a solution to this problem, it will be there.</p> <p style="padding-left: 30px;">Note: I updated this column after publication to correct a date -- 2014, which appeared as 1914.</p> <h2>References</h2> <ul> <li>North, Douglass, John Wallis, and Barry Weingast. 2009. Violence and Social Orders: A Conceptual Framework for Interpreting Recorded Human History. Cambridge: Cambridge University Press.</li> <li>Shleifer, Andrei, and Daniel Treisman. 2014. &quot;Normal Countries: The East 25 Years after Communism.&quot; Foreign Affairs, November-December. 
</li> </ul>EconomicsHistoryPutinRussiaStalinWed, 18 Feb 2015 07:00:35 GMTMark Harrisonhttp://blogs.warwick.ac.uk/markharrison/entry/russias_futures/#comments094d73cd4b7e772a014b89d6d77701531The military power, economics and strategy that led to D-Day by Mark Harrisonhttp://blogs.warwick.ac.uk/markharrison/entry/the_military_power/ <p class="answer">Writing about web page <a href="http://theconversation.com/the-military-power-economics-and-strategy-that-led-to-d-day-27663" title="Related external link: http://theconversation.com/the-military-power-economics-and-strategy-that-led-to-d-day-27663">http://theconversation.com/the-military-power-economics-and-strategy-that-led-to-d-day-27663</a></p> <p style="padding-left: 30px;"><em>The Conversation published this column on the seventieth anniversary of D-Day, June 6 2014. I thought I'd include it here.</em></p> <p>On June 6, 1944, more than 150,000 Allied troops landed in Normandy. Their number rose to 1.5m over the next six weeks. With them came millions of tons of equipment, ranging from munitions, vehicles, food, and fuel to prefabricated floating harbours.</p> <p>The achievement of the Normandy landings was, first of all, military. The military conditions included co-operation (between the British, Americans, and Free French), deception and surprise (the Germans knew an invasion was coming but were led to expect it elsewhere), and the initiative and bravery of officers and men landing on the beaches, sometimes under heavy fire. <a href="http://en.wikipedia.org/wiki/Normandy_landings">More than 4,000 men died on the first day</a>.</p> <p>D-Day was made possible by its global context. Germany was already being defeated by the Soviet Army on the eastern front. There, 90% of German ground forces were tied down in a protracted losing struggle (after D-Day this figure fell to two-thirds). The scale of fighting, killing, and dying on the eastern front was a multiple of that in the West. 
For the Red Army in World War II, <a href="http://www2.warwick.ac.uk/fac/soc/economics/staff/academic/harrison/public/patrioticwar2006.pdf">4,000 dead was a quieter-than-average day</a>.</p> <p>Economic factors were also involved. In 1944 the main fighting still lay in the east, but <a href="http://www2.warwick.ac.uk/fac/soc/economics/staff/academic/harrison/public/ww2overview1998.pdf">the Allied economic advantage lay in the west</a>. Before the war the future Allies had twice the population and more than twice the real GDP of the Axis powers. During the war the Allies pooled their resources so as to maximise the production of fighting power in a way that the Axis powers did not attempt to match. America made the biggest single contribution, shared with the Allies through Lend-Lease.</p> <p>Between 1942 and 1944 <a href="http://www2.warwick.ac.uk/fac/soc/economics/staff/academic/harrison/public/ww2overview1998.pdf">Allied war production exceeded that of the Axis</a> in every category and on all fronts. This advantage was especially great in the West. In the chart below, a value of one on the horizontal plane would mean equality between the two sides. Values above one measure the Allied dominance:</p> <p><br /> </p> <figure class="align-centre"><img alt="" src="https://62e528761d0685343e1c-f3d1b99a743ffa4142d9d7f1978d9686.ssl.cf2.rackcdn.com/files/50415/width668/6qcqdv6c-1401984377.jpg" border="0" /> <figcaption><span class="caption"><em>The Allies made more planes, guns, tanks and bombs on every front.</em></span> <span class="attribution"><em><a class="source" href="http://www2.warwick.ac.uk/fac/soc/economics/staff/academic/harrison/public/ww2overview1998.pdf" rel="nofollow">Mark Harrison</a></em></span> </figcaption> </figure> <p><br /> </p> <p>Eventually the accumulation of firepower helped turn the tide. A German soldier in Normandy <a href="http://www.ibiblio.org/hyperwar/USA/BigL/BigL-7.html">told his American captors</a>, &ldquo;I know how you defeated us. 
You piled up the supplies and then let them fall on us.&rdquo;</p> <p>D-Day was made possible by economics, but it was made inevitable by other calculations. When the outcome of the war was in doubt, Stalin demanded the Western Allies open a &ldquo;second front&rdquo; in Western Europe to take pressure off the Red Army. At this time, working towards D-Day was a price that the Allies paid for Stalin&rsquo;s cooperation in the war. By 1944 German defeat was assured; now D-Day became a price the Western Allies paid in order to help decide the post-war settlement of Europe.</p> <p>While D-Day was inevitable, its success was not predetermined by economics or anything else. The landings were preceded by years of building up men and combat stocks in the south of England, and by months of detailed logistical planning. But <a href="http://books.google.co.uk/books/about/Supplying_War.html?id=Tu3XZTx_s84C">most of the plans were thrown to the wind</a> on the first day as the chaos of seasick men struggling through the surf and enemy fire onto the Normandy sands unfolded. This greatest amphibious assault in history was a huge gamble that could easily have ended in disaster.</p> <p>Had the D-Day landings failed, our history would have been very different. The war would have dragged on beyond 1945 in both Europe and the Pacific. Germany would still have been undefeated when the first atomic bombs were produced; their first victims would have been German, not Japanese. Germany and Berlin would never have been divided, because the Red Army would have occupied the whole country. The Cold War would have begun with the Western democracies greatly disadvantaged. 
We have good reason to be grateful to those who averted this alternative history.</p> <p style="padding-left: 30px;"><em>Mark Harrison does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.</em></p> <p style="padding-left: 30px;">This article was originally published on <a href="http://theconversation.com">The Conversation</a>. Read the <a href="http://theconversation.com/the-military-power-economics-and-strategy-that-led-to-d-day-27663">original article</a>. </p>EconomicsHistoryPoliticsStalinWarFri, 13 Jun 2014 15:22:49 GMTMark Harrisonhttp://blogs.warwick.ac.uk/markharrison/entry/the_military_power/#comments094d73cd465c0c3e014695d2f79010c50Stay Where You Are: Russia Will Come to You by Mark Harrisonhttp://blogs.warwick.ac.uk/markharrison/entry/stay_where_you/ <p class="answer">Writing about web page <a href="http://www.forbes.com/sites/paulroderickgregory/2014/03/10/putins-big-lie-on-ukraine-if-it-werent-so-serious-it-would-be-funny/" title="Related external link: http://www.forbes.com/sites/paulroderickgregory/2014/03/10/putins-big-lie-on-ukraine-if-it-werent-so-serious-it-would-be-funny/">http://www.forbes.com/sites/paulroderickgregory/2014/03/10/putins-big-lie-on-ukraine-if-it-werent-so-serious-it-would-be-funny/</a></p> <p>An old joke has resurfaced in connection with Ukraine's Crimean crisis. I saw it first in <a href="http://www.forbes.com/sites/paulroderickgregory/2014/03/10/putins-big-lie-on-ukraine-if-it-werent-so-serious-it-would-be-funny/">a column by my co-author Paul Gregory</a>:</p> <blockquote class="quotes"> <p>You want to live in France? Go to France. You want to live in Britain? Go to Britain. You want to live in Russia? 
Stay where you are: Russia will come to you.</p> </blockquote> <p>It's generally hard to work out when and where such jokes originated, but this one has a real-life foundation. </p> <p>Before the war Menachem Begin, who was later Israel's prime minister, was a Jewish activist in Poland. When Germany and the Soviet Union divided Poland in 1939 he fled to Lithuania, where Soviet troops arrived in 1940. With thousands of others, Begin was arrested. He was accused of being a British agent under Article 58 of the RSFSR (Russian republic) criminal code, which dealt with counter-revolutionary crimes. In a later memoir Begin recalled a prison conversation (Weiner and Rahi-Tamm 2012, p. 14): </p> <blockquote class="quotes"> <p>When Begin inquired how article 58 of the Soviet Criminal Code (counter revolutionary activity, treason, and diversion) could be applied to activities that were considered legal in then sovereign Poland, his interrogator did not hesitate: &ldquo;Ah, you are a strange fellow [chudak], Menachem Wolfovich. Article 58 applies to everyone in the world. Do you hear? In the whole world. The only question is when he will get to us or we to him.&rdquo; </p> </blockquote> <p>This raises an interesting question: If the jokes are the same, is the system the same? In other words, is Putin's Russia the same as Stalin's Soviet Union? In most aspects of everyday life the answer is: Clearly not. In Russia today there is far more freedom of speech, association, and enterprise than there ever was in the Soviet Union. But there is also much less of these things than there should be. And there are disturbing continuities with the Soviet past in Putin's KGB background and loyalty, his nostalgia for the Soviet empire, and the identification of national power with his personal regime. </p> <p>Directly linked to these things is continuity in Russia's menacing approach to its neighbours. 
The people of what was once eastern Poland (now western Ukraine and western Belarus), and Lithuania, Latvia, and Estonia, are being reminded today that they live in territories to which &quot;Russia came&quot; in 1939 and 1940. These occupations were followed by unanimous parliamentary votes and rigged referenda, the registration of the population and issuing of &quot;passports&quot; (ID cards), and mass arrests and deportations.</p> <p>If we are returning to the past, one may hope for a new era of Russian jokes. Unfortunately, it may turn out that the best jokes have already been told.</p> <h2>Reference</h2> <p>Weiner, Amir, and Aigi Rahi-Tamm. 2012. &quot;Getting to Know You: The Soviet Surveillance System, 1939-1957.&quot; Kritika 13:1, pp. 5-45.</p>HistoryPutinStalinThu, 13 Mar 2014 10:28:25 GMTMark Harrisonhttp://blogs.warwick.ac.uk/markharrison/entry/stay_where_you/#comments094d73cd446a02f90144bafc60bc199a0Stalin Equals Cromwell: How Putin Sees Russia's Past by Mark Harrisonhttp://blogs.warwick.ac.uk/markharrison/entry/stalin_versus_cromwell/ <p class="answer">Writing about web page <a href="http://www.kremlin.ru/news/19859" title="Related external link: http://www.kremlin.ru/news/19859">http://www.kremlin.ru/news/19859</a></p> <blockquote class="quotes"> <p>How is Cromwell so different from Stalin? Can you tell me? There is no difference. From the standpoint of our liberal representatives, from the liberal spectrum of our political establishment, he is a similarly bloody dictator. He was a treacherous guy, and he played an ambivalent role in the history of Great Britain. His memorial stands, and no one is tearing it down.</p> </blockquote> <p>Russia's President Vladimir Putin does not know the difference between Joseph Stalin and Oliver Cromwell. It is true, as Putin declared (at a four-hour press conference held at the end of last year, on 19 December 2013), that Cromwell was a dictator. 
It is true, also, that Cromwell's historic achievements were stained with the blood of others. Yet his statue stands in Westminster outside the British Parliament. Putin's implication is clear: Like Cromwell, Stalin is just another national leader from times past, and any nation would be willing to remember him for his place in national history.</p> <p>What should we take from this? There is a characteristic skew to Putin's view of Russia's past. But this is hardly new. In 2007 <a href="http://archive.kremlin.ru/text/appears/2007/06/135323.shtml">Putin had this to say</a>:</p> <blockquote class="quotes"> <p>As for the problematic pages in our history -- yes, they existed. The same as in the history of any state! Indeed, we have had fewer than some others. And not as terrible for us as in some others. Yes, we had some dreadful pages: let's remember the events that began in 1937, let's not forget them. But there were no less in other states, they've had worse. At least we haven't used atomic weapons on civilians. We haven't flooded thousands of kilometres with chemicals and we haven't dropped seven times more bombs on a small country than were used in the whole Great Patriotic [War, i.e. World War II], as happened in Vietnam, let's say. We've had no other black pages such as Nazism, for example.</p> <p>You never know what might have happened in the history of other states and peoples! We can't afford to let them make us feel guilty about it -- they should worry about themselves.</p> </blockquote> <p>In short, Putin does not see much to feel bad about in Soviet public life <strong>before </strong>1937. He feels bad about &quot;the events that began in 1937&quot; (when Stalin ordered the execution of 700,000 and the imprisonment of 1.5 million more), but these were no more than would fall into the normal range of bad stuff that might have happened anywhere. I'm not going to go into more detail here on this. 
Interested readers can go back to <a href="http://www.newrepublic.com/article/books/the-problematic-pages">the blistering response of Leon Aron</a>, who said it at the time much better than I can.</p> <p>If &quot;Stalin = Cromwell,&quot; what does it matter? One implication might be for Russia's public life, given that Stalin is still politically relevant to Russia in a way that Cromwell is not to the UK. It is three and a half centuries since England's Civil War was concluded and there is no significant Cromwellian party in British public life (other than perhaps in Northern Ireland). Russia today, in contrast, has many active claimants to Stalin's mantle, including a communist party whose leader Gennadii Zyuganov, <a href="http://www.kremlin.ru/news/19859">according to Putin</a>, could be considered the second figure in Russia's public life. Still, Putin is not calling on Russians to rally under Stalin's banner and return to the peasant-slayer's precepts; far from it.</p> <p>An alternative implication is the one that matters: Putin wishes Russia's past to be seen as normal. Specifically, a believer in the Russian state and national power, he wishes the history of Russia's state to be seen as continuous and normal. All countries have had their builders of the nation state and its capacity: Cromwell, Napoleon, Bismarck, Ataturk, ... and Stalin. All were forceful modernizers, Putin seems to say, who got their way by imposing sacrifices and crossing the margins of conventional morality. But all deserve their laurels and should have their statues. As for their transgressions, we will not forget to mention &quot;the events that began in 1937,&quot; but there's no need to enumerate the mass graves in the birch woods or to detail who killed whom on whose orders.</p> <p>My guess would be that this view resonates strongly with many Russians today. 
It's something you can easily lose sight of in Moscow, where most streets and squares lost their Soviet-era appellations and decorations in the early 1990s, and went back to the pre-revolutionary style. But Moscow is not Russia. In many provincial Russian towns the statues of Lenin and other Bolshevik revolutionaries still stand.</p> <p>A minor detail caught my eye in the reporting of the recent tragic events in Volgograd (formerly Stalingrad): the second (trolleybus) bombing of 30 December <a href="http://www.bbc.co.uk/news/world-europe-25546477">took place in the city's Dzerzhinskii district</a>, that is, a part of the city named after Feliks Dzerzhinskii, founder of the Soviet secret police and architect of Red Terror in Russia's civil war. <a href="http://ru.wikipedia.org/wiki/%D0%94%D0%B7%D0%B5%D1%80%D0%B6%D0%B8%D0%BD%D1%81%D0%BA%D0%B8%D0%B9_%D1%80%D0%B0%D0%B9%D0%BE%D0%BD">According to Wikipedia</a>, there remain no fewer than ten Dzerzhinskii districts in Russia's cities and provinces (as well as one in Eastern Ukraine), not to mention the town of Dzerzhinsk, not far from Nizhnii Novgorod. In provincial Russia you can't yet have Stalingrad, despite a campaign to restore Stalin's name to the city, but it's quite normal to have Dzerzhinskii. In Moscow the destruction of Dzerzhinskii's statue was one of the symbolic acts of 1991; <a href="http://postcommunistmonuments.ca/wp/?p=207">recent calls to restore it</a> have evoked polarized opinions.</p> <p>I thought about this a few months ago when I visited Ekaterinburg. Standing on the edge of Asia, Ekaterinburg is the capital of a province the size of England and Scotland combined, but with less than a tenth of the population. First named after the Empress Catherine I, the city was renamed Sverdlovsk in 1924 after the early death of Soviet Russia's first head of state: Yakov Sverdlov. 
In 1991 the city's pre-revolutionary name was restored, but its hinterland is still called Sverdlov province, and Sverdlov's statue still stands on the main street. </p> <p><img src="http://blogs.warwick.ac.uk/images/markharrison/2014/01/05/dsc04055.jpg?maxWidth=500" alt="Sverdlov" border="0" /><br /> </p> <p>Photo: Mark Harrison.</p> <p>Ekaterinburg's streets and squares commemorate many figures from the Bolshevik past from Kuibyshev (architect of the first five year plan) and Malyshev (Stalin's minister of the atomic industry) to Michurin (Stalin's pet anti-Darwinian pseudo-scientist) and Serov (first head of the post-Stalin KGB). Oh, and here's the &quot;Iset&quot; hotel, built in the shape of a hammer and sickle in the 1930s as an apartment block for security officials and their families. </p> <p><img src="http://blogs.warwick.ac.uk/images/markharrison/2014/01/05/dsc04085.jpg?maxWidth=500" alt="Gorodok chekistov" border="0" /><br /> </p> <p>Photo: Mark Harrison.</p> <p>People still call it <em>Gorodok chekistov</em>, the little town of the secret policemen. Elsewhere in the town is <em>Ulitsa chekistov</em>, the street of the secret policemen. </p> <p><img src="http://blogs.warwick.ac.uk/images/markharrison/2014/01/05/dsc04043.jpg?maxWidth=500" alt="Lenin" border="0" /><br /> </p> <p>Photo: Mark Harrison</p> <p>In Ekaterinburg Lenin's statue stands opposite the town hall, just as Sergo Ordzhonikidze's statue stands in the suburbs outside the head office of Uralmash, the giant Soviet-era engineering factory. Ordzhonikidze was Stalin's minister for heavy industry. (He shot himself in 1937 as a protest when Stalin eliminated his subordinates one by one). </p> <p><img src="http://blogs.warwick.ac.uk/images/markharrison/2014/01/05/dsc04096.jpg?maxWidth=500" alt="Ordzhonikidze" border="0" /></p> <p>Photo: Mark Harrison.</p> <p>In Ekaterinburg some things have changed since Soviet times, not just the city's name. 
A mile from Sverdlov's statue stands a new shrine to Sverdlov's most famous victims, Tsar Nicholas II and his family, murdered on the spot in July 1918.</p> <p><img src="http://blogs.warwick.ac.uk/images/markharrison/2014/01/05/dsc04062.jpg?maxWidth=500" alt="Romanovs" border="0" /><br /> </p> <p>Photo: Mark Harrison</p> <p>In Ekaterinburg, it seems, perpetrators and victims are commemorated with complete impartiality. The martyr Nicholas gets a new statue, while the <a href="http://books.google.co.uk/books?id=9QNuiuhPzzIC&amp;pg=PA244&amp;dq=yuri+slezkine+jewish+century&amp;hl=en&amp;sa=X&amp;ei=pyLNUtOZBOzA7Aa9n4GYAw&amp;ved=0CDIQ6AEwAA#v=snippet&amp;q=entrusted%20with%20carrying%20out%20the%20order&amp;f=false">likely murderer Sverdlov</a> keeps his old one. It's just like London, where Cromwell's statue stands in Westminster, a short walk from that of Charles I, the King whom Cromwell executed, at Charing Cross. </p> <p>Not quite like London, though. In Ekaterinburg, something is missing. On a highway a few kilometres out of town, a handpainted sign labelled &quot;Memorial&quot; points off the road. (I didn't get a chance to take a picture.) Memorial to whom? The path leads into the birch forests where the Chekists took tens of thousands for night time execution and burial in the years of Stalin's terror. Mass graves have no importance in Putin's nation-building narrative. They can be forgotten, or filed away under the heading of necessary sacrifices and inevitable mistakes.</p> <p>This is Putin's view of Russia's past. Sverdlov and Tsar Nicholas; Lenin, Stalin; the Chekists; Kuibyshev, Malyshev, Ordzhonikidze. All are figures from history, state leaders in whom Russians should feel equal national pride. Who can tell the difference? No one. As for the ordinary victims, forget them. Anyway, who cares? Only those that wish to dig for dirt among their bones. 
</p>HistoryRussiaStalinWed, 08 Jan 2014 10:10:15 GMTMark Harrisonhttp://blogs.warwick.ac.uk/markharrison/entry/stalin_versus_cromwell/#comments094d73cd41daa19501431b34762d7b650Unlearning the History of Communism by Mark Harrisonhttp://blogs.warwick.ac.uk/markharrison/entry/unlearning_history/ <p class="answer">Writing about web page <a href="http://www.pieria.co.uk/articles/men_make_their_own_history_but_they_do_not_make_it_as_they_please" title="Related external link: http://www.pieria.co.uk/articles/men_make_their_own_history_but_they_do_not_make_it_as_they_please">http://www.pieria.co.uk/articles/men_make_their_own_history_but_they_do_not_make_it_as_they_please</a></p> <p>On the Pieria magazine website there has been an exchange of views on capitalism and socialism. I guess it is my fault; on 28 June I contributed <a href="http://www.pieria.co.uk/articles/alternatives_to_capitalism_when_the_dream_turned_to_nightmare">a summary of some remarks on the subject</a>. I concluded:</p> <blockquote class="quotes"> <p>Liberal capitalism isn&rsquo;t perfect, but it has done far more for human welfare than communism. It has been the solution more often than the problem. Last time capitalism experienced some difficulties, many countries went off on a search for alternatives. That search for alternatives led nowhere. It wasn&rsquo;t just unproductive. It was a terrible mistake that cost many tens of millions of lives. Lots of people have forgotten this history. Now is a good time to remember it.</p> </blockquote> <p>On 31 July, <a href="http://www.pieria.co.uk/articles/men_make_their_own_history_but_they_do_not_make_it_as_they_please">the blogger UnlearningEconomics responded</a>:</p> <blockquote class="quotes"> <p>In my opinion, this view rests on a highly selective interpretation of events. It requires that we gloss over two major historical points: first, the historical circumstances of existing communism; second, the history of capitalist countries. 
It fails to acknowledge the fact that existing socialism occurred primarily in undeveloped countries, which we would naturally expect to exhibit lower standards of living than developed ones. It ignores the deliberate campaign of destruction and sabotage toward the socialist states by the capitalist states, a process comprehensively documented by US foreign policy critic William Blum (Blum, 2003). It also requires that we define past and present abuses of capitalist states as somehow 'outside' capitalism, in order to place ourselves above the (real or imagined) abuses of the communists.</p> <p>I do not hope to defend anyone's atrocities, though I am happy to refute some of the absurd exaggerations that sometimes pervade these debates. In any case, my main aim is to show two things: first, the abuses of existing socialist states are better explained by their political circumstances than their innate evils of the ideology; second, capitalist countries have a similarly abhorrent record, one which is not so easily explained by political necessities. My rendition will definitely annoy capitalists and anti-communists by being too sympathetic toward communism, which is a dirty word for many. It will also potentially annoy communists and socialists by not being sympathetic enough and repeating some of the more simplistic mainstream narratives. However, the important thing is that we examine the history of both systems in context, rather than lazily parading the kill count of the other side to try and shut down debate.</p> </blockquote> <p>UnlearningEconomics (below I'll call him or her &quot;UE&quot;) goes on to present &quot;brief&quot; (but, for a blog, quite lengthy) histories of both communism and capitalism. 
The general story is that if communism has had a bloody history it is mainly because communist revolutions occurred under unfavourable circumstances and had to struggle against the encirclement and aggression of the surrounding capitalist states; as for capitalism, it has its own bloody history, which is too often ignored.</p> <p>What is there here that we can agree on? Perhaps we might agree that twentieth century warfare was terrible enough that it could damage social norms and other institutions of a relatively poor country like Russia or China; in such conditions organized minorities with unscrupulous leaders could seize power and use it to do terrible things. The efforts of other countries to intervene and prevent this, then as now, were largely fruitless or even counterproductive; perhaps they should not have tried, although politicians are not generally selected for lack of ambition and public opinion too often demands that something must be done.</p> <p>UE goes beyond this to suggest that somehow history has been unfair to those same minorities and psychopathic leaders by allowing them to seize power only under terribly adverse circumstances. We owe it to them (the argument seems to go) to compensate them for their disadvantage; we should allow them at least a few decades of unchallenged power, so that they have a fair chance to show what they can achieve. But this seems completely unhinged.</p> <p>In bringing up my children, I tried to teach them that people show their inner qualities when things go badly. It is easy to look good when things go well. Only good people will still be good when things go badly; adversity reveals character. I believe this rule can also be applied to politics. It is when things go badly that we see political leaders and their programmes and ideals put to the test. </p> <p>Can systems be blamed for atrocities of whatever kind? 
It is not systems that take food from the mouths of the hungry or put bullets into the back of anyone&rsquo;s head. People do this. But the system matters, nonetheless. What the system does is to leave more or less scope for the concentration of power in the hands of people who are inclined to exploit it without restraint. Liberal capitalism at least allows the separation of economic power from politics and decentralizes decisions to firms and households in markets. This is because, in the words of North, Wallis, and Weingast (2011), it is an &ldquo;open-access order.&rdquo; Communism is a &ldquo;closed-access&rdquo; order that restricts who may exercise political power and concentrates control of the economy in the hands of that privileged elite. Given that, ask which of these systems is more likely to permit the abuse of power and allow abuses to be hidden from the public gaze?</p> <p>When general outlooks clash, it is not always enough to stay with generalities. Sometimes we have to get down with the particular facts. History is full of good stories, and UE tells some of them well. The problem is that not all good stories are true, but this becomes evident only when they are confronted with the detail. So, I will confront some of UE's history with the detail. I will not cover everything; I will focus for the most part on the &quot;brief history&quot; of communism, where I think I have more to offer.</p> <ul> <li>UE says: Unfavourable views of communism ignore &ldquo;the fact that existing socialism occurred primarily in undeveloped countries, which we would naturally expect to exhibit lower standards of living than developed ones.&rdquo;</li> </ul> <p>This is seriously incomplete. Existing socialism occurred in relatively few undeveloped countries, and generally only in those weakened by war (Russia, China, Korea, and Indochina). Central Europe would scarcely have counted as undeveloped; there the precondition was war followed by military occupation. 
Cuba may be the only example of a country that had a communist revolution without a foreign war. In 1945, in several places, the boundary of &ldquo;existing socialism&rdquo; was laid down in the middle of a region that was previously economically and ethnolinguistically integrated. As well as showing that warfare counted for more than lack of development, these examples also provide natural experiments for the long-run consequences of system change. Think of Estonia versus Finland, East versus West Germany, and North versus South Korea. For discussion, see Harrison (2013).</p> <ul> <li>UE says: Unfavourable views of communism also ignore &ldquo;the deliberate campaign of destruction and sabotage toward the socialist states by the capitalist states&rdquo; (citing William Blum).</li> </ul> <p>Again, seriously incomplete. The UE view of postwar history rests on selection, overstatement of the capacity of outsiders to intervene in Russia and Eastern Europe, exaggeration of popular support for communism (the most popular communist party in Europe at the end of the war was probably the French party, with no more than a quarter of the popular vote), and ignorance of the documented process whereby Stalin&rsquo;s secret police entered Eastern Europe in 1944 and 1945, &ldquo;embedded&rdquo; with the Red Army and armed with a template for dictatorship that they began to apply immediately, regardless of whether or not communists were in the government (Applebaum 2012). Far from resenting western &ldquo;sabotage,&rdquo; millions of Central and East Europeans felt abandoned by the West as Stalin crushed their hopes for national self-determination.
Finally, it forgets that the one American initiative that could have decisively altered the trajectory of Eastern Europe was not &ldquo;destruction and sabotage&rdquo; but Marshall Aid, which Stalin instructed his allies to reject.</p> <ul> <li>UE says: The unfavourable conditions of the Russian Revolution are shown by the fact that &ldquo;Russia had suffered the worst losses out of any country during the war.&rdquo;</li> </ul> <p>No. It is hard to imagine that Russia would have suffered the Revolution without three years of world war, and it is true that battle and non-battle deaths of Russian soldiers up to 1917 were heavy (1.8 million). At the same time, Russia&rsquo;s losses were fewer than Germany&rsquo;s absolutely, and (given Russia&rsquo;s large population) were proportionately fewer than those of Britain, France, Italy, Serbia, Rumania, Austria-Hungary, Bulgaria, and Turkey (Broadberry and Harrison 2005). Russia&rsquo;s economic loss of GDP per head up to 1917 was less than that of Austria, Finland, France, Germany, Greece, Hungary, and Turkey (Markevich and Harrison 2011). The latter conclude: &ldquo;We have seen that the economic decline up to 1917 was not more severe in Russia than elsewhere. In short, we will probably not be able to explain why Russia was the first to descend into revolution and civil war without reference to historical factors that were unique to that country and period.&rdquo;</p> <ul> <li>UE says: &ldquo;By the time Joseph Stalin took (absolute) power in 1929, many &ndash; including, perhaps, himself &ndash; believed the threats the USSR faced were justifications for his purges and the Gulags.&rdquo;</li> </ul> <p>Seriously incomplete. There is no &ldquo;perhaps&rdquo; here: Stalin had a precise understanding that is now well documented (e.g. Khlevniuk 1995; Simonov 1996; Davies et al. 2003; Harrison 2008; Velikanova 2013). In 1921, 1924, 1927, and 1929 there was no foreign threat.
But rumours of war were frequent, because the Soviet Union&rsquo;s strategy of inciting revolution and mutiny abroad kept Soviet foreign relations in a state of continual tension. In domestic society, Stalin's secret police told him, every rumour was destabilizing; peasants and workers started to wonder when the chance would come to get rid of the Bolsheviks. Stalin was aware that above all he had to secure the regime internally and externally and that drift could only weaken him. This is why he launched Soviet society simultaneously on the courses of forced industrialization, mass collectivization of the peasantry, and political violence. Justification? Yes, of course, if taking power and holding it are sufficient motivations. Not otherwise. Khrushchev was personally responsible for tens of thousands of killings under Stalin, and this left him with a bad conscience. In trying to come to terms with it he blamed Stalin many times but not Hitler, the CIA, or anyone else outside the country.</p> <ul> <li>UE says: &ldquo;The country did face a very real Nazi threat that, failing industrialisation, it would not have been able to overcome.&rdquo;</li> </ul> <p>No. Stalin changed course towards industrialization, collectivization, and mass violence in 1929, when there was no significant external threat. The Nazis came to power in 1933, and no European leader (including Stalin) recognized the threat from Hitler before 1935. Before Hitler, a threat to Siberia appeared from the East in 1931 with the Japanese annexation of Manchuria. 
These threats came after, not before, Stalin&rsquo;s &ldquo;revolution from above.&rdquo; As for whether the Nazi threat justified Stalin&rsquo;s policies after the event, I have written about this in many places (most recently Harrison 2010).</p> <ul> <li>UE says: &ldquo;This reasoning is consistent with the fact that once Stalin died and the more immediate western threats disappeared, &lsquo;de-Stalinisation&rsquo; took place: the Gulags were softened and reduced in size; the cult of personality was dismantled &hellip; things certainly improved once the Nazi threat had been eliminated.&rdquo;</li> </ul> <p>No. The Nazi threat was eliminated in 1945. The softening of the Soviet regime after 1953 had everything to do with Stalin&rsquo;s death and nothing to do with the disappearance of &ldquo;immediate western threats.&rdquo; De-Stalinization took place not because of the disappearance of western threats but because the entire Soviet leadership was tired of living in fear for their own lives, and then went further because Khrushchev and Mikoyan had bad consciences about their own responsibility for past mass killings. The Gulag was dismantled immediately, not because of the disappearance of western threats but because Lavrentii Beriia had long before determined that it was an economic drain and a source of social contagion; Stalin had prevented him from acting on his findings. There was bitter resistance to dismantling the cult of Stalin from other communist leaders (especially Mao), not because of western threats but because it threatened their own legitimacy (and their own cults). The cult of Stalin was dismantled but was soon replaced by the cult of Khrushchev.</p> <ul> <li>UE says: &ldquo;The Great Leap Forward (GLF) &hellip; undoubtedly caused a large degree of famine, surely because of the over-centralised and inflexible nature of the policy.&rdquo;</li> </ul> <p>Seriously incomplete.
A centralized, inflexible policy was enough to start a famine, but it does not begin to explain how the famine proceeded, nor does it explain the secrecy that then shrouded it for decades.</p> <p>Think about what is required for an act of policy to cause millions of famine deaths. Here is the problem: When people starve to death, they do not die suddenly and unexpectedly. It takes them months, even many months, to weaken, become sick, and die. Some die before others. Some die of hunger; some are carried off by diseases to which they lose immunity. Some die at home; some drop dead in the street. Some die passively; some steal or even kill for food; a few turn to cannibalism. In other words, a policy that causes millions of famine deaths (such as in the USSR in 1932 to 1934) or tens of millions (in China in 1958 to 1960) cannot go unnoticed by those carrying out the policy.</p> <p>In fact, in both the USSR and China, the famine process worked like this (Davies and Wheatcroft 2004; Chen and Kung 2011). First, the leaders issued quotas for the collection of food, province by province. They also gave the provincial leaders to understand that their future depended on meeting the quota. The provincial leaders competed to raise more grain than their neighbours in order to show loyalty and to save their own lives and the lives of their families. And they passed these incentives down the line to their subordinates charged with doing the actual work. When some people reported that the quotas were too heavy, or resisted or dragged their feet, they were arrested and others took their place. Food collections began, and the first people started to die. When some people reported that other people were dying, they were told that this was just &ldquo;simulation or provocation&rdquo;: enemies were maliciously withholding food and starving their own children to cause trouble (Davies and Wheatcroft 2004, p.
206).</p> <p>While the first ones were dying, the people responsible for extracting grain from the villages had to go deeper and deeper into the countryside to find food and take it by force. On every journey along all the different routes they took, they had to go past the people from whom they had already taken food, who were now dead or dying, to find more food that they could take. In China, provincial leaders of lower rank had more to prove, and Chen and Kung (2011) show that these leaders tried harder, so that more grain was collected and more people died in their provinces. Returning from every journey past the already dying and dead people, they sometimes reported what they had seen (although it was sometimes &ldquo;forbidden to keep an official record&rdquo;), but in public they had to remain absolutely silent about it, not just at the time but for the rest of their lives. The same applied to everyone with business that required them to move around the countryside. While they were doing this, others had to be ordered to stop those who were dying but not yet dead from moving out in search of food elsewhere. They had to be ordered to stop them because the food that had been collected and stored elsewhere was destined for others; if the dying people were allowed to eat it, it would not be available to feed Stalin&rsquo;s Great Breakthrough or Mao&rsquo;s Great Leap Forward. A particular reason for these orders is that when hungry people are allowed to mix with people who have enough to eat, it is extraordinarily difficult to stop kind people from giving some of their food to starving families; the Germans found this in occupied Europe when they tried to cut Jewish communities off from food, and this is one reason why they first herded Jews into ghettoes and later decided to accelerate the Holocaust (Collingham 2010, pp. 205 ff).
Finally, both at the time and later, the surviving victims and perpetrators alike learned never to talk about it, perhaps not even to their children. As a result, witnesses of terrible things (such as Yang 2012) often concluded that the events they had seen were isolated and exceptional.</p> <p>In other words, the &ldquo;over-centralised and inflexible nature of the policy&rdquo; was enough to start a famine, but further deliberate actions were required to enforce the government&rsquo;s priorities for food supplies when millions of people were dying of hunger. All this must be read into the &ldquo;over-centralised and inflexible nature of the policy,&rdquo; and it suggests why those words do not begin to provide a full explanation.</p> <ul> <li>UE says: &ldquo;It is also worth noting that the remaining Cold War paranoia was certainly not a USSR-only phenomenon, with McCarthyism and the red scare in the US reaching levels which now seem ridiculous to most.&rdquo;</li> </ul> <p>No. McCarthyism was ridiculous and, partly as a result of it, the FBI missed many Soviet agents who were actually at work in American government and society after the war (Moynihan Commission 1997).</p> <ul> <li>UE says: &ldquo;In Poland, the popular party Solidarity wanted some form of worker ownership &ndash; in other words, socialism &ndash; until, in desperation, they had to turn to the IMF, who made capitalist policies a condition for any aid. In Russia, Boris Yeltin&rsquo;s &lsquo;free market&rsquo; reforms were resisted, which was met with force; similarly, in China, the Tienanmen Square massacres were not made in favour of capitalism but in favour of democracy and worker control&rdquo; (citing Naomi Klein).</li> </ul> <p>No. None of us can possibly know what demonstrators in China or elsewhere &ldquo;really&rdquo; wanted.
Politics is the art of the possible, and for this reason people tend to express their choices strategically, in the light of the constraints they perceive and the choices they expect others to make. I saw this myself in Russia: As long as the communist party was in full control, many dissenters preferred to limit their demands by appealing to rights guaranteed by the Soviet constitution, asking for a return to &ldquo;true&rdquo; Leninism, calling to rehabilitate Old Bolsheviks like Trotsky and Bukharin, and so forth. Only when the communist monopoly gave way did it become politically and psychologically possible for free thinkers to go further; some didn't, but many did. UE refers to IMF conditionality in a disparaging way, but why would a responsible aid donor give aid without wishing to rule out uses of its resources that would be damaging or counterproductive? UE relies on Klein&rsquo;s Shock Doctrine as a source; on its use of evidence, see Harrison (2009).</p> <ul> <li>UE says: &ldquo;While estimates of deaths from Mao&rsquo;s GLF are exaggerated using dubious estimation techniques (which effectively allow the demographers to pick the number arbitrarily), little to no cover has been given to the increase in Russian deaths during the &lsquo;transition&rsquo; to capitalism, which, by a reasonable estimation method of simply counting the increase in death rates, claimed 4 million lives between 1990 and 1996&rdquo; (citing Utsa Patnaik).</li> </ul> <p>No. UE (or perhaps Utsa Patnaik) seems to confuse demographic studies with the literary and journalistic accounts written by people who do not have a good understanding of error margins. Demographers know that when people die in numbers so large that they are not recorded individually, there is always an error margin.
The error margin has several sources: mismeasurement of the population before and after the shock, imputation of normal mortality during the shock (required to infer excess mortality), and the apportioning of the birth deficit between babies not born (or miscarried) and babies who were born and died within the famine period. In other words, the best available estimation techniques give rise to ranges rather than point estimates, and it is from these ranges that nonspecialists feel entitled to pick and choose.</p> <p>As for the cause of Russia&rsquo;s mortality spike in the transition years, the research attributing it to mass privatization (Stuckler and McKee 2009) has been widely disseminated; less well known is that it has also been thoroughly criticized (Earle 2009; Earle and Gehlbach 2010; Brown, Earle, and Telegdy 2010; Bhattacharya, Gathmann, and Miller 2013; see also the reply by Stuckler and McKee 2010). In the last years of the Soviet Union, Gorbachev&rsquo;s anti-alcohol campaign temporarily prevented millions of Russians from drinking themselves to death. However, it did not alter their desire to drink. Their deaths were postponed and so stored up and waiting to happen when alcohol became cheaper again and more easily available. Thus, the increase in Russian deaths during transition is more plausibly attributed to an increase in the availability, and a collapse in the price, of alcohol.</p> <p>I&rsquo;ll conclude on the subject of atrocity.
UE writes: &ldquo;I do not hope to defend anyone's atrocities, though I am happy to refute some of the absurd exaggerations that sometimes pervade these debates &hellip; the important thing is that we examine the history of both systems in context, rather than lazily parading the kill count of the other side to try and shut down debate.&rdquo; I noticed that <a href="http://unlearningeconomics.wordpress.com/2013/08/01/pieria-article-on-capitalism-versus-socialism/">the UE blog goes further</a>, wishing to move debate on from &ldquo;disingenuous &lsquo;Black Book of Communism&rsquo;-style kill count porn&rdquo; (the &quot;Black Book&quot; reference is to Courtois et al. 1999).</p> <p>This shocked me. Is there room for debate over the scale, causes, and significance of the excess deaths that arose around the world from communist policies? Absolutely. Should any figure in the Black Book of Communism be above discussion? Of course not. But kill count <em>porn</em>? The demand for these people to be remembered and their suffering acknowledged comes from the victims themselves. &ldquo;We were forgotten. For our broken lives. For our executed fathers. No one apologized. If we don&rsquo;t preserve the historical memory, we shall continue to make the same mistakes&rdquo; (Fekla Andreeva, resettled as a child with her &ldquo;kulak&rdquo; family, whose father was executed in the Great Terror, cited by Reshetova 2013; see also Gregory 2013).</p> <h2>References</h2> <ul> <li>Applebaum, Anne. 2012. Iron Curtain: The Crushing of Eastern Europe 1944-56. London: Allen Lane.</li> <li>Bhattacharya, Jay, Christina Gathmann, and Grant Miller. 2013. Gorbachev&rsquo;s Anti-Alcohol Campaign and Russia's Mortality Crisis. American Economic Journal: Applied Economics 5(2): 232-60.</li> <li>Broadberry, Stephen, and Mark Harrison. 2005. The Economics of World War I: an Overview. In The Economics of World War I: 3-40. Edited by Stephen Broadberry and Mark Harrison. 
Cambridge: Cambridge University Press.</li> <li>Brown, J. David, John S. Earle, and &Aacute;lmos Telegdy. 2010. Employment and Wage Effects of Privatisation: Evidence from Hungary, Romania, Russia, and Ukraine. Economic Journal 120(545): 683-708.</li> <li>Chen, S., and J. Kung. 2011. The Tragedy of the Nomenklatura: Career Incentives and Political Radicalism during China&rsquo;s Great Leap Famine. American Political Science Review 105(1): 27-45.</li> <li>Collingham, Lizzie. 2010. The Taste of War: World War Two and the Battle for Food. London: Allen Lane.</li> <li>Courtois, Stephane, Mark Kramer, Jonathan Murphy, Jean-Louis Panne, Andrzej Paczkowski, Karel Bartosek, and Jean-Louis Margolin. 1999. The Black Book of Communism. Cambridge, MA: Harvard University Press.</li> <li>Davies, R. W., and Stephen Wheatcroft. 2004. The Industrialisation of Soviet Russia, vol. 5: The Years of Hunger: Soviet Agriculture, 1931-1933. Basingstoke: Macmillan.</li> <li>Davies, R. W., Oleg Khlevniuk, E. A. Rees, Liudmila P. Kosheleva, and Larisa A. Rogovaia, eds. 2003. The Stalin-Kaganovich Correspondence, 1931-36. New Haven, CT: Yale University Press.</li> <li>Earle, John S. 2009. Mass Privatisation and Mortality. The Lancet 373 (April 11), p. 1247.</li> <li>Earle, John S., and Scott Gehlbach. 2010. Did Mass Privatisation Really Increase Post-Communist Mortality? The Lancet 375 (January 30), p. 372.</li> <li>Gregory, Paul R. 2013. Women of the Gulag. Stanford: Hoover Institution Press.</li> <li>Harrison, Mark. 2008. The Dictator and Defense. In Guns and Rubles: the Defense Industry in the Stalinist State, pp. 1-30. Edited by Mark Harrison. New Haven: Yale University Press.</li> <li>Harrison, Mark. 2009. <a href="http://warwick.ac.uk/markharrison/comment/shockdoctrine.pdf">Credibility Crunch: A Comment on The Shock Doctrine</a>. University of Warwick, Department of Economics.</li> <li>Harrison, Mark. 2010. Industry and the Economy.
In The Soviet Union at War, 1941-1945, pp. 15-44. Edited by David R. Stone. Barnsley: Pen &amp; Sword.</li> <li>Harrison, Mark. 2013. Communism and Economic Modernization. In The Oxford Handbook of the History of Communism. Edited by Stephen A. Smith. Oxford: Oxford University Press.</li> <li>Khlevniuk, Oleg. 1995. The Objectives of the Great Terror, 1937-38. In Soviet History, 1917-1953: Essays in Honour of R. W. Davies: 158-76. Edited by J. M. Cooper, Maureen Perrie, and E. A. Rees. New York, NY: St Martin's.</li> <li>Markevich, Andrei, and Mark Harrison. 2011. Great War, Civil War, and Recovery: Russia&rsquo;s National Income, 1913 to 1928. Journal of Economic History 71(3): 672-703.</li> <li>Moynihan Commission. 1997. Report of the Commission on Protecting and Reducing Government Secrecy. Senate Document 105-2 Pursuant to Public Law 236, 103rd Congress. Washington, DC: United States Government Printing Office.</li> <li>North, Douglass C., John Joseph Wallis, and Barry R. Weingast. 2011. Violence and Social Orders: A Conceptual Framework for Interpreting Recorded Human History. Cambridge: Cambridge University Press.</li> <li>Reshetova, Natalia. 2013. Women of the Gulag. Hoover Digest no. 3, 108-115.</li> <li>Simonov, Nikolai S. 1996. Strengthen the Defence of the Land of the Soviets: the 1927 War Alarm and its Consequences. Europe-Asia Studies 48(8): 1355-64.</li> <li>Stuckler, David, Lawrence King, and Martin McKee. 2009. Mass Privatisation and the Post-Communist Mortality Crisis: a Cross-National Analysis. The Lancet 373 (January 31, 2009): 399-407.</li> <li>Stuckler, David, Lawrence King, and Martin McKee. 2010. Did Mass Privatisation Really Increase Post-Communist Mortality? &ndash; Authors&rsquo; Reply. The Lancet 375 (January 30, 2010), pp. 372-74.</li> <li>Velikanova, Olga. 2013. Popular Perceptions of Soviet Politics in the 1920s: Disenchantment of the Dreamers. Basingstoke: Palgrave.</li> <li>Yang Jisheng. 2012.
Tombstone: The Untold Story of Mao&rsquo;s Great Famine. London: Allen Lane.</li> </ul> <p>Tags: China, History, Politics, Russia, Stalin. Posted by Mark Harrison, Thu, 08 Aug 2013 14:02:35 GMT. Permalink: <a href="http://blogs.warwick.ac.uk/markharrison/entry/unlearning_history/">http://blogs.warwick.ac.uk/markharrison/entry/unlearning_history/</a></p> <h2>Alternatives to Capitalism: When Dream Turned to Nightmare</h2> <p>By Mark Harrison. <a href="http://blogs.warwick.ac.uk/markharrison/entry/alternatives_to_capitalism/">http://blogs.warwick.ac.uk/markharrison/entry/alternatives_to_capitalism/</a></p> <p class="answer">Writing about web page <a href="http://cpasswarwick.wordpress.com/overview-2/peking-conference/proposed-topics/" title="Related external link: http://cpasswarwick.wordpress.com/overview-2/peking-conference/proposed-topics/">http://cpasswarwick.wordpress.com/overview-2/peking-conference/proposed-topics/</a></p> <p>On Friday evening I found myself debating &quot;Socialism vs Capitalism: The future of economic systems&quot; at the Peking Conference of the Warwick China Public Affairs and Social Service Society. The organizers also invited my colleagues Sayantan Ghosal, Omer Moav, and Michael McMahon, who spoke eloquently. The element of debate was not too prominent, because we all said similar things in different ways. I'm an economic historian, and the great advantage of history is that it gives you hindsight. Anyway, here is what I said:</p> <p>Let&rsquo;s start from some history. There was a time between the two world wars when the capitalist democracies, like America, Britain, France, and Germany, were in a lot of trouble. In 1929, a huge financial crisis began in the United States and went global. There was a Great Depression. Around the world, many tens of millions of farmers were ruined. Tens of millions of workers lost their jobs.</p> <p>As today, people asked: What was the cause of the problem? One answer they came up with was: Capitalism is the problem. Lots of people decided: the problem is the free market economy! The government should step in to take over resources and direct them! The government should get us all back to work!
The government should get us building new cities, power stations, and motorways! </p> <p>Another answer many of the same people came up with was: Democracy is the problem. Lots of people decided: the problem is too much politics! We need a strong ruler to stop the squabbling! Someone who can make decisions for the nation! Someone who can organize us to build a common future together! </p> <p>So there was a search for alternatives to capitalism. Different countries tried different alternatives. The alternatives they tried included national socialism (or fascism) and communism under various dictators, like Hitler and Stalin.</p> <p>What happened next? On average the dictators&rsquo; economies did recover from the Depression faster than the capitalist democracies. </p> <p>(Here's a chart I made earlier to illustrate the point, but I did not have the opportunity to use it in my talk. Reading from the bottom, the democracies are the USA, France, and the UK; the dictatorships are Italy, Germany, Japan, and the USSR. You can see that Italy does not conform to the rule that the dictators' economies recovered faster. Without Italy, the average economic performance of the dictatorships would have looked even better.)</p> <p><img src="http://blogs.warwick.ac.uk/images/markharrison/2013/01/31/great_depression_ver_2.jpg?maxWidth=500" alt="Seven major economies in the Great Depression" border="0" /></p> <p>But solving one problem led to another. Before the 1930s were over the dictators&rsquo; policies had already caused millions of deaths. A Japanese invasion killed millions in China (I'm not sure how many). An Italian invasion killed 300,000 in North Africa. Soviet economic policies caused 5 to 6 million hunger deaths in their own country and Stalin had a million more executed.</p> <p>And another problem: As political scientists have shown, democracies don&rsquo;t go to war (with each other). Dictators go to war with democracies (and the other way round). 
And dictators go to war with each other. The result of this was that in the 1940s there was World War II. Hitler, Mussolini, Tojo, and Stalin went to war -- with the democracies and with each other. Sixty million more people died. </p> <p>After the war, capitalism recovered. In fact, far from being a problem, it became the solution. By the 1960s all the lost growth had been made up. Think of the economic losses from two World Wars and the Great Depression. If all you knew about capitalist growth was 1870 to 1914 and 1960 onwards, you&rsquo;d never know two World Wars and the Great Depression happened in between. </p> <p>(To illustrate that point, here's another chart I made earlier, but did not use. It averages the economic performance of Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Italy, Netherlands, New Zealand, Norway, Sweden, Switzerland, the UK, and the USA.)</p> <p><img src="/images/markharrison/2013/01/31/great_depression_ver_3.jpg?maxWidth=500" alt="great_depression_ver_3.jpg" border="0" /><br /> </p> <p>After World War II fascism and national socialism fell into disrepute, but communism carried on. In China, Mao Zedong&rsquo;s economic policies caused more deaths. In 1958 to 1962, 15 to 40 million people starved. Communist rule led China into thirty years of stagnation and turmoil. After that Deng Xiaoping made the communist party get its act together. And the communists forgave themselves for their past and agreed to forget about it. </p> <p>Here's the takeaway. </p> <p>Liberal capitalism isn&rsquo;t perfect, but it has done far more for human welfare than communism. It has been the solution more often than the problem. Last time capitalism experienced some difficulties, many countries went off on a search for alternatives. That search for alternatives led nowhere. It wasn&rsquo;t just unproductive. It was a terrible mistake that cost many tens of millions of lives. Lots of people have forgotten this history. 
Now is a good time to remember it.</p> <p>Postscript. At one point I thought of calling this blog &quot;Alternatives to capitalism: the search for a red herring&quot; (a &quot;red herring&quot; is something that doesn't exist but people look for it anyway). But I realized that would have been wrong, because alternatives to capitalism have actually existed. The problem with the alternatives is not that we cannot find them. It is that the people who went searching for them fell into a dream and woke up to a nightmare.</p> <p>Tags: China, Economics, History, Mao, Politics, Russia, Stalin. Posted by Mark Harrison, Mon, 04 Feb 2013 08:34:58 GMT. Permalink: <a href="http://blogs.warwick.ac.uk/markharrison/entry/alternatives_to_capitalism/">http://blogs.warwick.ac.uk/markharrison/entry/alternatives_to_capitalism/</a></p> <h2>Markets versus Government Regulation: What are the Tail Risks?</h2> <p>By Mark Harrison. <a href="http://blogs.warwick.ac.uk/markharrison/entry/markets_versus_government/">http://blogs.warwick.ac.uk/markharrison/entry/markets_versus_government/</a></p> <p class="answer">Writing about web page <a href="http://ideas.repec.org/a/aea/jeclit/v45y2007i1p5-38.html" title="Related external link: http://ideas.repec.org/a/aea/jeclit/v45y2007i1p5-38.html">http://ideas.repec.org/a/aea/jeclit/v45y2007i1p5-38.html</a></p> <p>Tail risks are the risks of worst-case scenarios. The probabilities at the far left tail of the distribution are typically small: such events are very unlikely, but not impossible, and once or twice a century they will come about. When they do happen, they are disastrous. They are risks we would very much like to avoid.</p> <p>How can we compare the tail risks of government intervention with the tail risks of leaving things to the market? Put differently, what is the very worst that can happen in either case? Precisely because these worst cases are very infrequent, you have to look to history to find the evidence that answers the question.</p> <p>To make the case for government intervention as strong as possible, I will focus on markets for long-term assets. Why? Because these are the markets that are most likely to fail disastrously.
In 2005, house prices began to collapse across North America and Western Europe, followed in 2007 by a collapse in equity markets. By implication, these markets had got prices wrong; they had become far too high. The correction of this failure, involving large write-downs of important long-term assets, led us into the credit crunch and the global recession.</p> <p>Because financial markets are most likely to fail disastrously, they are also the markets where many people now think someone else could do a better job.</p> <p>What's special about finance? Finance looks into the future, and the future is unexplored territory. Only when that future comes about will we know the true value of the long-term investments we are making today in housing, infrastructure, education, and human and social capital. But we actually have no knowledge of what the world will be like in forty or even twenty years' time. Instead, we guess. What happens in financial markets is that everyone makes their guess, and the market equilibrium comes out of these guesses. But these guesses have the potential to be wildly wrong. So, it is long-term assets that markets are most likely to misprice: houses and equities. When houses and equities are priced very wrongly, chaos results. (And in the chaos, there is much scope for legal and illegal wrongdoing.)</p> <p>When housing is overvalued, too many houses are built and bought at the high price, and households assume too much mortgage debt. When equities are overvalued, companies build too much capacity and borrow too much from lenders. To make things worse, when the correction comes, it comes suddenly; markets in long-term assets don't do gradual adjustment but go to extremes. In the correction, nearly everyone suffers; the only ones that benefit are the smart lenders that pull out their own money in time and the dishonest borrowers that pull out with other people&rsquo;s money.
It's hard to tell which we resent more.</p> <p>If markets find it hard to price long-term assets correctly, and tend to flip from one extreme to another, a most important question then arises: Who is there that will do a better job?</p> <p>It's implicit in current criticisms of free-market economics that many people think like this. Financial markets did not do a very good job. It follows, they believe, that someone else could have done better. That being the case, some tend to favour more government regulation to steer investment into favoured sectors. Others prefer more bank regulation to prick asset price bubbles in a boom and underpin prices in a slump. The latter is exactly what the Fed and the Bank of England are doing currently through quantitative easing.</p> <p>Does this evaluation stand up to an historical perspective?</p> <p>We&rsquo;re coming through the worst global financial crisis since 1929. Twice in a century we've seen the worst mess that long-term asset markets can make -- and it's pretty bad. <a href="http://www.bostonfed.org/economic/conf/LTE2011/papers/Papell_Prodan.pdf">A recent estimate of the cumulative past and future output lost to the U.S. economy from the current recession</a>, by David H. Papell and Ruxandra Prodan in a paper for the Boston Fed, is nearly $6 trillion, or two-fifths of U.S. output for a year. A global total in dollars would be greater by an order of magnitude. What could be worse?</p> <p>For the answer, we should ask a parallel question about governments: What is the worst that government regulation of long-term investment can do? We'll start with the second-worst case in history, which coincided with the last Great Depression.</p> <p>Beginning in the late 1920s, the Soviet dictator Stalin increasingly overdid long-term investment in the industrialization and rearmament of the Soviet Union.
Things got so far out of hand that, in Russia, Ukraine, and Kazakhstan in 1932/33, as a direct consequence, 5 to 6 million people lost their lives.</p> <p>How did Stalin's miscalculation kill people? Stalin began with a model that placed a high value (or &ldquo;priority&rdquo;) on building new industrial capacity. Prices are relative, so this implied a low valuation of consumer goods. The market told him he was wrong, but he knew better. He substituted one person&rsquo;s judgement (his own) for the judgement of the market, where millions of judgements interact. He based his policies on that judgement. </p> <p>Stalin&rsquo;s policies poured resources into industrial investment and infrastructure. Stalin intended those resources to come from consumption, which he did not value highly. His agents stripped the countryside of food to feed the growing towns and the new workforce in industry and construction. When the farmers told him they did not have enough to eat, he ridiculed this as disloyal complaining. By the time he understood they were telling the truth, it was too late to prevent millions of people from starving to death.</p> <p>This case was only the second worst in the last century. The worst episode came about in China in 1958, when Mao Zedong launched the Great Leap Forward. A famine resulted. The causal chain was pretty much the same as in the Soviet Union a quarter century before. Between 1958 and 1962, at least 15 and up to 40 million Chinese people lost their lives. (We don&rsquo;t know exactly because the underlying data are not that good, and scholars have made varying assumptions about underlying trends; the most difficult thing is always to work out the balance between babies not born and babies that were born and starved.)</p> <p>This was the worst communist famine but it was not the last. In Ethiopia, a much smaller country, up to a million people died for similar reasons between 1982 and 1985. 
If you want to read more, the place to start is &ldquo;Making Famine History&rdquo; by Cormac &Oacute; Gr&aacute;da in the Journal of Economic Literature 45/1 (2007), pp. 5-38. The RePEc handle of this paper is <a href="http://ideas.repec.org/a/aea/jeclit/v45y2007i1p5-38.html">http://ideas.repec.org/a/aea/jeclit/v45y2007i1p5-38.html</a>.</p> <p>Note that I do not claim these deaths were intentional. They were a by-product of government regulation; no one planned them (although some people do argue this). At best, however, those in charge at the time were guilty of manslaughter on a vast scale. In fact, I sometimes wonder why Chinese people still get so mad at Japan. Japanese policies in China between 1931 and 1945 were certainly atrocious and many of the deaths that resulted were intended. Still, if you were minded to ask who killed more Chinese people in the twentieth century, the Japanese imperialists might well have to cede first place to China's communists. However, I guess there is less national humiliation in it when the killers are your fellow countrymen than when they are foreigners.</p> <p>To conclude, no one has the secret of correctly valuing long term assets like housing and equities. Markets are not very good at it. Governments are not very good at it either. </p> <p>But <strong>the tail risks of government miscalculation are far worse</strong> than those of market errors. In historical worst-case scenarios, market errors have lost us trillions of dollars. Government errors have cost us tens of millions of lives. </p> <p>The reason for this disparity is very simple. Markets are eventually self-correcting. &quot;Eventually&quot; is a slippery word here. Nonetheless, five years after the credit crunch, worldwide stock prices have fallen, house prices have fallen, hundreds of thousands of bankers have lost their jobs, and democratic governments have changed hands. 
That's correction.</p> <p>Governments, in contrast, hate to admit mistakes and will do all in their power to persist in them and then cover up the consequences. The truth about the Soviet and Chinese famines was suppressed for decades. The party responsible for the Soviet famine remained in power for 60 more years. In China the party responsible for the worst famine in history is still in charge. School textbooks are silent about the facts, which live on only in the memories of old people and the libraries of scholars.</p> <p><em>Posted Mon, 15 Oct 2012 by Mark Harrison at <a href="http://blogs.warwick.ac.uk/markharrison/entry/markets_versus_government/">http://blogs.warwick.ac.uk/markharrison/entry/markets_versus_government/</a>. Tags: China, Economics, History, Politics, Recession, Russia, Stalin.</em></p> <h2>Political Costs of the Great Recession, by Mark Harrison</h2> <p class="answer">Writing about web page <a href="http://www.ft.com/cms/s/0/5b1b5556-8d1d-11e1-9798-00144feab49a.html#axzz1styV0LMT" title="Related external link: http://www.ft.com/cms/s/0/5b1b5556-8d1d-11e1-9798-00144feab49a.html#axzz1styV0LMT">http://www.ft.com/cms/s/0/5b1b5556-8d1d-11e1-9798-00144feab49a.html#axzz1styV0LMT</a></p> <p><a href="http://www.ft.com/cms/s/0/5b1b5556-8d1d-11e1-9798-00144feab49a.html#axzz1styV0LMT">Monday's Financial Times</a> recorded the dismal showing of Nicolas Sarkozy in the French Presidential first-round election, the record vote for France's far-right National Front, and the openings to the right of Sarkozy and Fran&ccedil;ois Hollande, who remain in the contest, as they compete to sweep up the votes of the eliminated candidates.</p> <p>It reminded me of a recent NBER working paper by Alan de Bromhead, Barry Eichengreen, and Kevin O'Rourke on <a href="http://ideas.repec.org/p/nbr/nberwo/17871.html">Right-wing Political Extremism in the Great Depression</a>. (There's a <a href="http://www.voxeu.org/index.php?q=node/7660">non-technical summary on VOXeu</a>.) 
What these authors show is that the rise of right-wing extremism in the Great Depression was not just a German phenomenon. They define extremist parties as those that campaigned to change not just policy but the system of government. They look at 171 elections in 28 countries spread across Europe, the Americas, and Australasia between 1919 and 1939. They find that a swing to right-wing &quot;anti-system&quot; parties was more likely where the depression was more prolonged, where there was a shorter history of democracy, and where fascist parties were already represented in the national parliament. In short, de Bromhead and co-authors conclude, the Depression was &quot;good for fascists.&quot; </p> <p>I don't mean to imply that either Sarkozy or Hollande is a fascist. They aren't. Neither of them wants to replace electoral democracy by authoritarian rule. But they are responding to the protest vote in their own country by proposing &quot;solutions&quot; to the problems of the already weakened French market economy that will weaken it further by increasing government entitlement spending, government regulation, and tax rates. </p> <p>Where does the protest vote come from? There is anger and pessimism. There is a search for alternatives to free-market capitalism and representative democracy. The problem is that all the alternatives are worse. But none of the candidates (perhaps with the exception of Fran&ccedil;ois Bayrou, who did badly) has been willing to say this.</p> <p> How do we know that all the alternatives are worse? We know it from history. </p> <p>The chart below shows the total real GDPs of sixteen major market economies from 1870 to 2008 (the countries are Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Italy, Netherlands, New Zealand, Norway, Sweden, Switzerland, United Kingdom, and United States; data are by the late Angus Maddison at <a href="http://www.ggdc.net/maddison/">http://www.ggdc.net/maddison/</a>). 
The vertical scale is logarithmic, so the slope of the line measures its rate of growth.</p> <p><br /> </p> <p><img src="http://blogs.warwick.ac.uk/images/markharrison/2012/04/24/great_depression.png?maxWidth=500" alt="140 years of economic growth" border="0" /></p> <p><br /> </p> <p>You can see two things. One is the steadiness of economic growth in the West over 140 years up to the recent financial crisis. The other is that two World Wars and the Great Depression were no more than temporary deviations. They are just blips in the data. For many people they were hell to live through (and sometimes these were the lucky ones), but in the long run the economic consequences went away. In fact, <a href="http://yalepress.yale.edu/book.asp?isbn=9780300151091">recent work by the economic historian Alexander J. Field</a> has shown that the depressed 1930s were technologically the most dynamic period of American history.</p> <p>One conclusion might be that the economic consequences of the current recession are not the ones that we should fear most. I don't mean that the economic losses arising from reduced incomes and unemployment are trivial; life today is unexpectedly hard for millions of people, young and old. Young people, even if they will not be a &quot;lost&quot; generation, will suffer and be scarred by the experience. If you're old enough, you could be dead before better times come round again. At the same time, the kind of pessimism that says that our children will never be as well off as we were is groundless. The economic losses associated with the recession will eventually evaporate, just as the economic losses of the Great Depression went away in the long run.</p> <p>We should be more afraid of the lasting political consequences. The effects of the Great Depression on politics were very deep and very persistent. World War I ended with the breakup of the German, Austro-Hungarian, Romanov, and Ottoman Empires. 
In the 1920s, most of the new countries that were formed became democracies. Then, we had the Great Depression. Across Europe there was anger, pessimism, and a search for alternatives to free-market capitalism and representative democracy. By the end of the 1930s Europe had recovered economically from the depression but most of the new democracies had fallen under dictators. That led to World War II, in which as many as 60 million people were killed. Fascism was defeated, but then Europe was divided by communism and that led to the Cold War. </p> <p>It took until 1989 for the average of democracy scores of European countries (measured from the Polity IV database) to return to the previous high point, which was in 1919.</p> <p>In short, the Great Depression stimulated a search for alternatives to liberal capitalism. This search was extremely costly and completely pointless. For a while in various quarters there was admiration for Hitler, Mussolini, or Stalin, their great public works, their capacity to inspire and to mobilize, and their rebuilding of the nation. But both fascism and communism turned out to be terrible mistakes. </p> <p>Memories are short. Today's politicians want your vote. And many voters want to hear that some radical politician or authority figure has a quick fix for capitalism. It seems like we may have to learn from our mistakes all over again. 
Let's hope that the lesson is less costly this time round.</p> <p><em>Posted Tue, 24 Apr 2012 by Mark Harrison at <a href="http://blogs.warwick.ac.uk/markharrison/entry/political_costs_of/">http://blogs.warwick.ac.uk/markharrison/entry/political_costs_of/</a>. Tags: Economics, Hitler, Politics, Recession, Stalin.</em></p> <h2>Russia's Great War, Civil War, and Recovery, by Mark Harrison</h2> <p class="answer">Writing about web page <a href="http://www2.warwick.ac.uk/fac/soc/economics/news/?newsItem=094d43a2365e99f001366436ff461cde" title="Related external link: http://www2.warwick.ac.uk/fac/soc/economics/news/?newsItem=094d43a2365e99f001366436ff461cde">http://www2.warwick.ac.uk/fac/soc/economics/news/?newsItem=094d43a2365e99f001366436ff461cde</a></p> <p>Tomorrow I'm flying to Moscow to collect a prize, which I will share with <a href="http://www.nes.ru/en/people/catalog/m/amarkevich">my coauthor Andrei Markevich</a>. This is the Russian national prize for applied economics, <a href="http://econprize.ru/announcements/50327714.html">which was announced last week</a>. The prize, sponsored by a consortium of Russian universities, research institutes, and business media, is awarded every second year. The award is for our paper &quot;Great War, Civil War, and Recovery: Russia&rsquo;s National Income, 1913 to 1928,&quot; published in the Journal of Economic History 71:3 (2011), pp. 672-703. <a href="http://www2.warwick.ac.uk/fac/soc/economics/staff/academic/harrison/public/jeh2011_postprint.pdf">A postprint is available here</a>. </p> <p>The spirit of the paper is as follows. In 1914 Russia joined in World War I. In 1917 there was a revolution, and Russia&rsquo;s part in that war came to an end. A civil war began that petered out in 1920. It was followed immediately by a famine in 1921. We calculate that by the end of all this Russia had suffered 13 million premature deaths, nearly one in ten of the population living within future Soviet borders in 1913. 
After that, the Russian economy recovered, but was soon swept up in Stalin's five-year plans to &quot;catch up and overtake&quot; the West.</p> <p>We calculate Russia&rsquo;s real national income year by year from 1913 to 1928; this has never been done before on a consistent GDP basis. National income can be measured three ways, which ought to give the same answer (but rarely do): income (wages, profits, ...), expenditure (consumption, investment, ...), and output (of industry, agriculture, ...). We measure output. Data are plentiful, but of uneven quality and coverage. The whole thing is complicated by boundary changes. Between 1913 and 1922 Russia gave up three per cent of its territory, mainly in the densely settled western borderlands; this meant the departure of one fifth of its prewar population. The demographic accounting is complicated not only by border changes but also by prewar and wartime migrations, war deaths, and statistical double counting.</p> <p>Our paper looks first at the impact of World War I, in which Russia went to war with Germany and Austria-Hungary. Initially the war went well for Russia, because Germany found itself unexpectedly tied down on the western front. Even so, Germany quickly turned back the Russian offensive and would have defeated Russia altogether but for its inability to concentrate forces there. </p> <p>During the war nearly all the major European economies declined (Britain was an exception). The main reason was that the strains of mobilization began to pull them apart, with the industrialized cities going in one direction and the countryside going in another. In that context, we find that Russia&rsquo;s economic performance up to 1917 was better than has been thought. Our study shows that until the year of the 1917 revolution Russia&rsquo;s economy was declining, but by no more than any other continental power. 
While wartime economic trends shed some light on the causes of the Russian revolution, they certainly do not support an economically deterministic story; if anything, our account leaves more room for political agency than previous studies.</p> <p>In the two years following the Russian revolution, there was an economic catastrophe. By 1919 average incomes in Soviet Russia had fallen to less than half the level of 1913. This level is seen today only in the very poorest countries of the world, and had not been seen in eastern Europe since the seventeenth century. Worse was to come. After a run of disastrous harvests, famine conditions began to appear in the summer of 1920 (in some regions perhaps as early as 1919). In Petrograd in the spring of 1919 an average worker&rsquo;s daily intake was below 1,600 calories, about half the level before the war. Spreading hunger coincided with a wave of deaths from typhus, typhoid, dysentery and cholera. In 1921 the grain harvest collapsed further, particularly in the southern and eastern grain-farming regions. More than five million people may have died in Russia at this time from the combination of hunger and disease.</p> <p>Because we have shown that the level of the Russian economy in 1917 was higher than previously thought, we find that the subsequent collapse was correspondingly deeper. What explains this collapse? The obvious cause was the Russian civil war, which is conventionally dated from 1918 to 1920. However, we doubt that this is a sufficient explanation. First, the timing is awkward, because the economic decline was most rapid in 1918 and this was before the most widespread fighting. Second, there are signs that Bolshevik policies of economic mobilization and class warfare were an independent factor spreading chaos and decline. 
These policies were continued and even intensified for a year after the civil war ended and clearly contributed to the disastrous famine of 1921.</p> <p>Because of the famine, economic recovery did not begin until 1922. At first recovery was very rapid, promoted by pro-market reforms, but it slowed markedly as the Soviet government began to revert to mobilization policies of the civil-war type. We show that as of 1928 the Russian recovery was delayed by international standards. The result was that, when Stalin launched the first five-year plan for rapid forced industrialization, the Soviet economy's recovery from the Civil War was not complete. By implication, some of the economic growth achieved under the five-year plans should be attributed to delayed restoration of pre-revolutionary economic capacity.</p> <p>In concluding the paper, we reflect on the state in the history of modern Russia. It seems important for economic development that the state has the right amount of &quot;capacity,&quot; not too little and not too much. When the state has the right amount of capacity there is honest administration within the law; the state regulates and also protects private property and the freedom of contract. When the state has too little capacity it cannot prevent outbreaks of deadly violence, and security ends up being privatized by gangs and warlords. When the state has too much capacity it can starve and kill without restraint. In Russian history the state has usually had too little capacity or too much. In World War I the state had too little capacity to regulate the war economy and it was eventually pulled apart by competing factions. Millions died. In the Civil War, the state acquired too much capacity; more millions died.</p> <p>Andrei Markevich and I have many debts. Our first thanks go, of course, to <a href="http://econprize.ru/founder">the sponsors of the prize</a>. 
After that, we are conscious of owing a huge amount to our predecessors, many of whom should be better known than they are, but I'm going to leave the history of the subject to those interested enough to consult the paper. A number of people helped us generously, especially Paul Gregory, Andrei Poletaev, Stephen Wheatcroft, and the journal editors and referees. Of course, I'm personally grateful to Andrei. It&rsquo;s hard to say which of us did what (between May 2009 and January 2011 our paper went through exactly 50 revisions), but you&rsquo;ll see that Andrei is named as first author.</p> <p>Beyond any personal feelings, I'm thrilled by the recognition of economic history. <a href="http://www.hse.ru/news/recent/50105660.html">When he announced the award</a>, the jury chairman Professor Andrei Yakovlev was asked if this wasn't an &quot;unexpected&quot; outcome for an award in applied economics. Yakovlev described it as an &quot;important precedent,&quot; recognizing that &quot;explanations of many of the processes that we have seen in Russia in the last twenty years lie in history.&quot; He pointed out that most western countries have historical national accounts going back through the nineteenth century (and England's now go back through the thirteenth). Such data help us to understand the here and now, by showing how we got here. 
</p> <p><em>Posted Mon, 02 Apr 2012 by Mark Harrison at <a href="http://blogs.warwick.ac.uk/markharrison/entry/russias_great_war/">http://blogs.warwick.ac.uk/markharrison/entry/russias_great_war/</a>. Tags: Economics, History, Russia, Stalin, War.</em></p> <h2>Russians, Be Careful What You Wish For, by Mark Harrison</h2> <p class="answer">Writing about web page <a href="http://www.themoscowtimes.com/news/article/5000-protest-duma-election-results/449327.html" title="Related external link: http://www.themoscowtimes.com/news/article/5000-protest-duma-election-results/449327.html">http://www.themoscowtimes.com/news/article/5000-protest-duma-election-results/449327.html</a></p> <p>The Russian parliamentary elections show that, whichever party Russians voted for, whether they voted under free and fair conditions or not, they voted overwhelmingly for a strongman. United Russia (one half of the vote) is for Putin. The Communist Party (one fifth) is for Ziuganov. The Liberal Democrats (one tenth) are for Zhirinovskii. </p> <p>Neither liberal nor democratic, the Liberal Democrats' favourite term of abuse for advocates of a free and competitive political system is <em>der'mokraty</em>, &quot;shittocrats.&quot; The Communists have called for Russia to undergo &quot;re-Stalinization.&quot; United Russia follows the hazy notion of &quot;sovereign democracy,&quot; implying a non-competitive dialogue between rulers and ruled.</p> <p>On the face of it, the outlook for democracy in Russia is hopeless. Apparently, nearly all Russians espouse one or another form of authoritarianism.</p> <p>All the more surprising and encouraging that 5,000 Muscovites have taken the risky course of public demonstration against vote rigging and electoral fraud. But what do 5,000 demonstrators count for, out of 65 million voters?</p> <p>More than would appear at first sight, perhaps. 
A new article by <a href="http://elliott.gwu.edu/faculty/hale.cfm">Henry Hale</a> (2011) of George Washington University suggests how much may be going on below the surface. Hale argues that we often misinterpret Russian opinion polls and election outcomes. When we find that many Russians take a dim view of &quot;democracy,&quot; we fail to check that we and they understand democracy the same way; it turns out we don't. When we find that Russians frequently favour a strong leader, we assume that this is in conflict with the idea of competitive elections and we fail to check whether Russians see the same conflict. This too turns out not to be true. </p> <p>On the evidence, Hale argues, most Russians do favour a strong leader, but the same Russians, even those who rail against <em>der'mokratiia</em>, also favour competitive elections. They want a strong leader that they have chosen, a strong leader who will govern according to the law, treat the people fairly, and then submit himself to competitive re-election as the constitution requires. </p> <p>Such attitudes set up an obvious paradox, Hale observes. Russians know what they want, but they cannot have it for long. Any leader strong enough to rule as Russians want to be ruled is also strong enough to bend the law, pressure the courts, and stuff the ballot boxes. This seems like an electoral equivalent to the Weingast (1995) paradox: &quot;A government strong enough to protect property rights and enforce contracts is also strong enough to confiscate the wealth of its citizens.&quot;</p> <p>Hale has two conclusions. First, &quot;Russia&rsquo;s leaders, including even the highly popular Putin, are desired not as dictators but as powerful delegates with an expansive&mdash;but still limited&mdash;mandate to &lsquo;get things done&rsquo;. 
Limits include: that the basic rights of the opposition not be violated; that the leader not have a right to remain in complete power for life; and that the people retain the right to select a successor in a free, fair and competitive process when that leader&rsquo;s constitutional term limits are up.&quot; It is logical therefore that, as Putin has increasingly overstepped these limits, he should gradually be losing his earlier support and legitimacy. </p> <p>Second, Hale confirms that Russians are &quot;the enablers of their own autocracy&mdash;but for reasons different from those usually given.&quot; The underlying problem is &quot;not any kind of culturally embedded or historically developed support for autocracy, but the preference for a kind of democracy that nevertheless relies on electing a strong leader as a way of concentrating national efforts on the resolution of major national challenges.&quot;</p> <p>Or, in the words of <a href="http://www.americanliterature.com/Jacobs/SS/TheMonkeysPaw.html">W. W. Jacobs</a>: &quot;Be careful what you wish for.&quot; <br /> </p> <h2>References</h2> <ul> <li>Hale, Henry E. 2011. The Myth of Mass Russian Support for Autocracy: The Public Opinion Foundations of a Hybrid Regime. Europe-Asia Studies 63:8, pp. 1357-1375.</li> <li>Weingast, Barry R. 1995. The Economic Role of Political Institutions: Market-Preserving Federalism and Economic Development. Journal of Law, Economics, and Organization 11:1, pp. 1-31.</li> </ul> <p><em>Posted Wed, 07 Dec 2011 by Mark Harrison at <a href="http://blogs.warwick.ac.uk/markharrison/entry/russians_be_careful/">http://blogs.warwick.ac.uk/markharrison/entry/russians_be_careful/</a>. Tags: Politics, Russia, Stalin.</em></p>
Patricia Wilson Cooper recently retired as a Nursing Assistant after a 34-year career at the Canandaigua VA, working in many patient care treatment areas. All week, nurses at the Canandaigua VA and Rochester VA Outpatient Clinic are being honored to mark their special achievements. VA, the nation’s largest single employer of nurses, joined the American Nurses Association (ANA) in honoring America’s nurses “dedicated to saving lives and maintaining the health of millions of Veterans and their families,” an announcement stated. ANA designated 2017 as the “Year of the Healthy Nurse.” This year’s nurses recognition theme is “Nursing: the Balance of Mind, Body, and Spirit.” VA health care facilities throughout the country are paying tribute to the VA’s 90,000 nurses. “On the battlefield, the military pledges to leave no one behind,” stated Canandaigua VA Medical Center’s Nurse Executive Lisa Lehning. “As a nation, we pledge that when they return home, we leave no Veteran behind. A VA nurse lends fidelity to that pledge.” Recognizing nurses this month honors Florence Nightingale, the founder of nursing as a modern profession.
Glenn Reynolds

Glenn Harlan Reynolds (born August 27, 1960) is Beauchamp Brogan Distinguished Professor of Law at the University of Tennessee College of Law, and is known for his American politics weblog, Instapundit.

Authorship

Instapundit blog

Reynolds' blog started as a class project in August 2001, when he was teaching a class on Internet law. Much of Instapundit's content consists of links to other sites, often with brief comments. Between early 2006 and early 2010, Reynolds hosted podcasts of "The Glenn & Helen Show" along with his wife, Dr. Helen Smith. In 2007, network theory researchers who studied blogs as a test case found that Instapundit was the #1 blog for "quickly know[ing] about important stories that propagate over the blogosphere". In the past, Reynolds has called for the assassination of Iranian scientists and clerics, and advocated the use of nuclear weapons against North Korea "if they start anything." On September 21, 2016, on his Twitter account, Reynolds suggested that any drivers feeling threatened by protesters objecting to the fatal shooting of Keith Lamont Scott in Charlotte, North Carolina, should "run them down." The tweet consisted only of the words "Run them down" and a link to a news story about the protesters. On September 22, 2016, Erik Wemple of the Washington Post published an article titled "'Instapundit' Glenn Reynolds defends 'Run them down' tweet during Charlotte unrest." The article contained the original tweet and an interview in which Reynolds said: But riots aren't peaceful protest. And blocking interstates and trapping people in their cars is not peaceful protest — it's threatening and dangerous, especially against the background of people rioting, cops being injured, civilian-on-civilian shootings, and so on. I wouldn't actually aim for people blocking the road, but I wouldn't stop because I'd fear for my safety, as I think any reasonable person would. 
Twitter suspended Reynolds' account, but restored it shortly after and told him to delete the tweet in order to be allowed to use Twitter again. The University of Tennessee released a statement that it was investigating Reynolds as it does not condone language that encourages violence. On September 27, 2016, the law school's Dean Melanie Wilson issued a statement to announce that the University had ended its short-lived investigation with a recommendation that no disciplinary action be taken. Dean Wilson wrote that Reynolds' tweet "... was an exercise of his First Amendment rights. Nevertheless, the tweet offended many members of our community and beyond, and I understand the hurt and frustration they feel." USA Today said that Reynolds had violated the newspaper's standards and suspended his column for one month. Reynolds issued an apology at the request of USA Today saying: Wednesday night one of my 580,000 tweets blew up. I didn't live up to my own standards, and I didn't meet USA TODAY's standards. For that I apologize, to USA TODAY readers and to my followers on social media. ... Those words can easily be taken to advocate drivers going out of their way to run down protesters. I meant no such thing, and I'm sorry it seemed I did. What I meant is that drivers who feel their lives are in danger from a violent mob should not stop their vehicles. I remember Reginald Denny, a truck driver who was beaten nearly to death by a mob during the 1992 Los Angeles riots. ... I have always supported peaceful protests, speaking out against police militarization and excessive police violence in my USA TODAY columns, on my website and on Twitter itself. I understand why people misunderstood my tweet and regret that I was not clearer. 
Academic publications

As a law professor, Reynolds has written for the Columbia Law Review, the Virginia Law Review, the University of Pennsylvania Law Review, the Wisconsin Law Review, the Northwestern University Law Review, the Harvard Journal of Law and Technology, Law and Policy in International Business, Jurimetrics, and the High Technology Law Journal, among others.

Other writing

Reynolds also writes articles for various publications (generally under his full name, Glenn Harlan Reynolds): Popular Mechanics, Forbes, The New York Post, The New York Times, The Atlantic Monthly, The Washington Post, The Washington Times, The Los Angeles Times, USA Today, and The Wall Street Journal. He has written for the TCS Daily, Fox News, and MSNBC websites as well.

Political views

Reynolds is often described as conservative, but holds liberal views on some social issues such as abortion, the War on Drugs, and gay marriage. He describes himself as a libertarian and more specifically a libertarian transhumanist. He customarily illustrates his combination of views by stating: "I'd like to live in a world in which happily married gay people have closets full of assault weapons to protect their pot." Reynolds is a former member of the Libertarian Party and the Democratic Party. He delivered the keynote speech at a September 2011 conference at the Harvard Law School to discuss a possible Second Constitution of the United States and concluded that the movement for a constitutional convention was a result of having "the worst political class in our country's history".

Personal life

Reynolds grew up a Methodist but is now a Presbyterian. He is married to Dr. Helen Smith, a forensic psychologist. Reynolds also once ran his own music label, WonderDog Records, for which he also served as a record producer. Reynolds has also worked as an indie music artist. One of his albums reached the number one album chart spot on the website service MP3.com for several weeks. 
Reynolds is of Scots-Irish ancestry.

Books authored
Outer Space: Problems of Law and Policy (1989), with Robert P. Merges; 2nd ed. (1997)
The Appearance of Impropriety: How the Ethics Wars Have Undermined American Government, Business, and Society (1997), with Peter W. Morgan
An Army of Davids: How Markets and Technology Empower Ordinary People to Beat Big Media, Big Government, and Other Goliaths (2006). Looks at modern American society through the lens of individuals versus social institutions, and Reynolds concludes that technological change has allowed more freedom of action for people, in contrast to the "big" establishment organizations that used to function as gatekeepers. Thus, he argues that the balance of power between individuals and institutions is "flatting out", which involves numerous decentralized networks rising up.
The Higher Education Bubble (2012). About the rising price of higher education, causing students to take on excessive debt, even as they face an uncertain job market. Higher education spending fueled by cheap credit resembles an economic bubble, and higher education bubble has become a common term to describe this phenomenon.
The K-12 Implosion, Encounter Broadsides No. 31 (2013). Provides a description of what's wrong with America's K-12 education system, and where the solutions are likely to come from, along with advice for parents, educators, and taxpayers. He argues that America has been putting ever-growing amounts of money into the existing public education system, while getting increasingly worse results. He suggests that while parents are losing hope in public schools, new alternatives are appearing, and change is inevitable.
References

External links
Official home page and bio at UTK Law School site
Law Review articles by Reynolds via SSRN
Q: When working with ASP.NET do we have access to all classes in the .NET Framework?

Is this a subset of .NET or is it the complete framework? If it's a subset, what classes (namespaces) is ASP.NET composed of, and more importantly which ones is it not composed of? I noticed in the documentation that there are obviously namespaces that only apply to web applications and vice versa with windows applications, but are these still accessible in ASP.NET? Is the limit of what ASP.NET is made of based on the restrictions we set or is it, like I mentioned earlier, only a piece of the .NET Framework? From developing with WP7 there obviously were restrictions, making it only a subset of the .NET Framework (Compact). From what I can tell there is no reason why ASP.NET should have any limitations. Is this correct?

A: ASP.NET development requires the full .NET library (not the Client Profile), so you have access to virtually all namespaces in the framework; there are no restrictions. On the other hand, even though you can access objects like System.Windows.Forms.Form, they won't be fully functional, as there is no desktop application and no desktop to show the form on (though you can still use them for purposes other than displaying a form on the desktop). As a side note, your hosting provider may actually restrict some features of the framework (an example here: http://faq.1and1.co.uk/scripting/asp_net/2.html), so you'll have to check with them.
Q: Displaying number of alphabetical letters and the number of decimal digits in C

I have been working on this simple code for hours now, and I have no idea what is wrong! I need to display the number of alphabetical letters and the number of decimal digits in standard input. So far I have this:

#include <stdio.h>
#include <ctype.h>

int isalpha(int);
int isdigit(int);

int main()
{
    int c;
    while ((c = getchar()) != EOF)
        printf("The number of letters is %d and the number of digits is %d.\n",
               isalpha(c), isdigit(c));
    return 0;
}

int isalpha(int one)
{
    int ch;
    int i;
    i = 0;
    scanf("%d", &ch);
    if (isalpha(ch))
        i++;
    return i;
}

int isdigit(int two)
{
    int a;
    int k;
    k = 0;
    scanf("%d", &a);
    if (isdigit(a))
        k++;
    return k;
}

The program crashes whenever I try to run it and I have no clue what part of the code is wrong. Although I don't have much experience in this field yet, so any help is highly appreciated! Thank you in advance.

A: The crash comes from your own definitions of isalpha and isdigit: the call to isalpha inside your isalpha (and likewise for isdigit) resolves to your function, not the library's, so it recurses forever and overflows the stack. Redefining standard library functions is undefined behavior in any case. Drop the custom definitions and just use the existing ctype.h APIs to keep running counts:

int alp = 0;
int dig = 0;
while ((c = getchar()) != EOF) {
    if (isalpha(c))
        alp++;
    else if (isdigit(c))
        dig++;
}
printf("The number of letters is %d and the number of digits is %d.\n", alp, dig);

PS: Take care to flush the newline char if you have \n in the input.
/*
 * youtube
 * https://github.com/savetheinternet/Tinyboard/blob/master/js/youtube.js
 *
 * Don't load the YouTube player unless the video image is clicked.
 * This avoids performance problems when many videos are embedded on the same page.
 * Currently only compatible with YouTube.
 *
 * Proof of concept.
 *
 * Released under the MIT license
 * Copyright (c) 2013 Michael Save <savetheinternet@tinyboard.org>
 * Copyright (c) 2013-2014 Marcin Łabanowski <marcin@6irc.net>
 *
 * Usage:
 *   $config['embedding'] = array();
 *   $config['embedding'][0] = array(
 *     '/^https?:\/\/(\w+\.)?(?:youtube\.com\/watch\?v=|youtu\.be\/)([a-zA-Z0-9\-_]{10,11})(&.+)?$/i',
 *     $config['youtube_js_html']);
 *   $config['additional_javascript'][] = 'js/jquery.min.js';
 *   $config['additional_javascript'][] = 'js/youtube.js';
 */

onready(function(){
	var do_embed_yt = function(tag) {
		$('div.video-container a', tag).click(function() {
			var videoID = $(this.parentNode).data('video');
			$(this.parentNode).html('<iframe style="float:left;margin: 10px 20px" type="text/html" '+
				'width="360" height="270" src="//www.youtube.com/embed/' + videoID +
				'?autoplay=1&html5=1" allowfullscreen frameborder="0"/>');
			return false;
		});
	};

	do_embed_yt(document);

	// allow to work with auto-reload.js, etc.
	$(document).on('new_post', function(e, post) {
		do_embed_yt(post);
	});
});
Q: React hooks value is not accessible in event listener function

I decided to use React hooks for my component using window width and a resize event listener. The problem is that I can't access the current value that I need. I get the stale value that was set when the event listener was added. Passing a function to the value setter is not a solution for me because it forces a render and breaks my other functionality. I am attaching a minimal example to present the core problem:

import React, { Component, useState, useEffect } from 'react';
import { render } from 'react-dom';

const App = () => {
  const [width, setWidth] = useState(0);

  const resize = () => {
    console.log("width:", width); // it's always 0 even after many updates
    setWidth(window.innerWidth);
  };

  useEffect(() => {
    resize();
    window.addEventListener("resize", resize);
    return () => window.removeEventListener("resize", resize);
  }, []);

  return <div>{width}</div>;
}

render(<App />, document.getElementById('root'));

LIVE DEMO IS HERE

Please help.

A: On every render you get a new copy of the resize function. Each copy captures the current value of width. Your listener has a copy which was created on the first render with width = 0. To fix this issue you have several options:

Update the listener when width changes:

useEffect(() => {
  resize();
  window.addEventListener("resize", resize);
  return () => window.removeEventListener("resize", resize);
}, [width]);

Use a functional update to get the current width inside the listener:

const resize = () => {
  setWidth(oldWidth => {
    console.log("width:", oldWidth);
    return window.innerWidth;
  });
};

Store the width in a mutable reference:

const widthRef = useRef(width);

const resize = () => {
  console.log("width:", widthRef.current);
  widthRef.current = window.innerWidth;
  setWidth(window.innerWidth);
};
Pyrene-labeled deoxyuridine and deoxyadenosine: fluorescent discriminating phenomena in their duplex and hairpin oligonucleotides. Pyrene-labeled deoxyuracil and deoxyadenine units are useful unnatural nucleobases. These fluorescent nucleobase analogues allow strong interstrand stacking interactions to compensate for a loss of hydrogen bonding and exhibit a range of different emission intensities when they form duplexes with one another. These findings may provide new insights into the design of new probes and nucleobase analogues for applications in molecular biology. For this purpose, we have prepared an hairpin molecular beacon (MB) that incorporates an excimer unit in its closed state, and have utilized lambda(max) changing to discriminate between match and mismatch. This hairpin configuration is attractive because the synthesis of such an MB is relatively simple and inexpensive because it does not require two distinct processes to prepare the fluorophore and quencher.
Traffic Tunnel Administration Building

The Traffic Tunnel Administration Building, also known as Boston Police Station Number One, is a historic government building in the North End of Boston, Massachusetts. The building occupies a prominent position facing North End Park off the Rose Kennedy Greenway, and is bounded by the park, North Street, and the trench carrying the exit point of the Sumner Tunnel. The Georgian Revival building was designed by Salem architect John M. Gray and built in 1931. The southern facade, facing the park, was originally used as the administrative facilities for Boston's tunnels, and the eastern facade provided access to the police station. The administration facilities are now used by the local police union, and the police station now houses the North Bennet Street School. The building was listed on the National Register of Historic Places in 2015.

See also
National Register of Historic Places listings in northern Boston, Massachusetts
Q: Flex 3: synchronously loading an xml file

My question is very simple: In Flex 3, is there a way to load an XML file synchronously? I know how to load asynchronously, using a load event. This may be useful, or may not. I just want to read the file, parse it, do what I have to do with it, and continue executing code. I have a component that uses an XML file to store some configuration parameters. I need to read the file when the object is initialized. However, with the event model, I can't control when the file is loaded, so I must write code to "wait" for the file to load. This is just ridiculous, or is it me? I want code like this:

var foo:Foo = new Foo(); //This constructor should read the xml and initialize the object.
foo.doSomething(); //When I call this method the xml must be already handled.

I can handle the xml file on the event, and it works fine, but the event fires after the doSomething method. I hope I have explained myself. I think this should be really easy, but it's driving me crazy. I don't want to write code to wait for the event unless it's really necessary. I feel all this should be just one line of code!

A: It's not possible to load synchronously; Flash is built for the web, and you can never be sure how long a call takes. AIR is different because it loads from the filesystem, and there are nowhere near the same amounts of delay there. The cleanest solution would be to listen for the load to complete inside Foo and call doSomething() from there; that way your "outer" class won't need to bother at all. If you do absolutely need to call foo.doSomething() from the outside, you can use the event system.
Let your Foo class dispatch an event when it is done loading:

dispatchEvent(new Event(Event.COMPLETE, true));

To catch that you will need to listen for it like so:

foo.addEventListener(Event.COMPLETE, handleFooComplete);

And your event handler function should look like this:

private function handleFooComplete(e:Event):void{
    foo.doSomething();
}

However you choose to do it, you will need to listen for Event.COMPLETE on the loader. There's no getting around that.
Imatinib (Gleevec)-induced hepatotoxicity. Imatinib (Gleevec, Novartis Pharmaceuticals Corp, East Hanover, NJ) is widely used in the treatment of chronic myelogenous leukemia and gastrointestinal stromal tumors. To our knowledge, only one case report of histologically proven Imatinib-induced hepatotoxicity has been reported. We describe another case of hepatotoxicity in a 22-year-old woman including the histopathologic changes and the clinical course after the discontinuation of Imatinib.
An ebola epidemic that has infected more than 1,000 people in central Africa is spreading at a record rate weeks after health officials reported the outbreak largely contained. Infections in the east of the Democratic Republic of Congo reached 57 two weeks ago and 72 last week, contradicting the World Health Organisation, which said last month that the number of cases was falling. Militia violence and suspicion about responders wearing unfamiliar protection suits are posing the most serious challenges to the response, emergency teams have said. Five ebola treatment centres have been attacked in two months, scattering infected patients back into their communities.
Praying for Coffee Cake An overweight business associate of mine decided it was time to shed some excess pounds. He took his new diet seriously, even changing his driving route to avoid his favorite bakery. One morning, however, he arrived at work carrying a gigantic coffee cake. We all scolded him, but his smile remained cherubic. "This is a very special coffee cake,” he explained. “I accidentally drove by the bakery this morning, and there in the window was a host of goodies. I felt this was no accident, so I prayed, ‘Lord, if you want me to have one of those delicious coffee cakes, let me have a parking place directly in front of the bakery. And sure enough,” he continued, “the eighth time around the block, there it was!”
Q: Why doesn't Eclipse show leak warning for streams?

In Eclipse Neon, if I write this Java code:

Stream<Object> stream = Stream.builder().build();

I get no leak warnings, but if I implement Stream, such as

public class MyStream<T> implements Stream<T> {
    // implementation
}

and I write similar code

Stream<Object> stream = new MyStream<>();

I get a Resource leak: 'stream' is never closed warning. This happens only in Eclipse, while compiling with javac does not issue any warning. Note I'm not looking for an answer on how to close the stream and such, but for an answer that explains the reason for this different behavior for the same interface.

A: Eclipse has a whitelist of types that do not require cleanup, because they don't actually refer to system resources. Core Java types are listed here, but your custom types are not. See the help for more information.

A: In the first case you are not creating the instance of the resource. In the second case, you are. The Eclipse documentation states the following:

Ownership / responsibility
The above diagnostics basically assume that a method that creates an instance of a resource type is also responsible for closing this resource. [...] - If a method obtains a resource via a method call rather than by a new expression, it may or may not be responsible; any problems are only flagged as potential resource leaks.
---
fr:
  event_types:
    account_created: Compte créé
    account_verified: Compte vérifié
    authenticated_at: Connecté à %{service_provider}
    authenticated_at_html: Connecté à %{service_provider_link}
    authenticator_disabled: Application d'authentification supprimée
    authenticator_enabled: Application d'authentification ajoutée
    backup_codes_added: Codes de sauvegarde ajoutés
    eastern_timestamp: "%{timestamp} (Eastern)"
    email_changed: Adresse courriel modifiée
    email_deleted: Adresse e-mail supprimée
    new_personal_key: Clé personnelle modifiée
    password_changed: Mot de passe modifié
    password_invalidated: Réinitialisation du mot de passe par %{app_name}
    personal_key_used: Clé personnelle utilisée pour la connexion
    phone_added: Numéro de téléphone ajouté
    phone_changed: Numéro de téléphone modifié
    phone_confirmed: Numéro de téléphone confirmé
    phone_removed: Numéro de téléphone supprimé
    piv_cac_disabled: Carte PIV/CAC non associée
    piv_cac_enabled: Carte PIV/CAC associée
    sign_in_after_2fa: Connecté avec un deuxième facteur
    sign_in_before_2fa: Connecté avec mot de passe
    usps_mail_sent: Lettre envoyée
    webauthn_key_added: Clé de sécurité ajoutée
    webauthn_key_removed: Clé de sécurité retirée
The Intuitive Basis for Contextualism
(for The Routledge Handbook of Epistemic Contextualism, edited by Jonathan Jenkins Ichikawa)
Geoff Pynn
FINAL VERSION: September 12, 2016

Francois follows climate science closely, and on this basis she believes, correctly, that the earth's mean temperature will continue to rise over the next century. Does she know this? Many would say that she does. Her belief is based on her accurate understanding of the scientific consensus, and we typically treat scientific expertise as a source of knowledge. On the other hand, you might deny that she knows that the temperature will continue to rise, even if you agree that this is likely. After all, climate scientists themselves readily acknowledge that their predictions are not entirely certain. So while Francois may be justified in her belief, she doesn't really know. Which answer is right, then? Does Francois know? Or not? Contextualists think that which answer is correct depends, in part, on the context in which the question is asked. Whether Francois can truly claim to know depends on her context's "epistemic standard," which determines how strong her epistemic position must be in order for her to count as knowing in that context --how much evidence she needs, which alternatives she needs to be able to rule out, how reliable her belief-forming mechanisms need to be, and so on. So according to the contextualist, Francois can truly claim to know that the temperature will continue to rise in a context where the epistemic standard is relatively relaxed, but not in a context where the epistemic standard is particularly demanding. This chapter outlines the intuitive argument for contextualism. To a substantial degree, my presentation follows that of Keith DeRose, who has done more than any other contextualist to develop the argument (see especially DeRose 1992, 2005, 2009 (ch.2)).
The overall shape of the argument is an inference to the best explanation: contextualism, it is claimed, is part of the best explanation for the variability in epistemic standards exhibited by our ordinary knowledge talk. The argument is "intuitive" in that it relies upon intuitive judgments about ordinary knowledge claims. The contents of these judgments furnish the data that contextualism explains. By calling the judgments "intuitive," I mean two things: first, they are more-or-less non-inferential and cognitively effortless; second, they are generated by intellectual reflection or imagination, rather than perception (Nagel 2007, Nado and Johnson 2014). When I say that something "intuitively seems" to be the case, I mean that we (I and, hopefully, the reader) are inclined to make an intuitive judgment that it is the case. When I say that an intuitive judgment is "accepted," or that we "defer to" an intuition or intuitive judgment, I mean that we accept that the judgment's content is true.

Low-High Pairs and The Intuitions They Elicit

The case for contextualism starts with the observation that we apply different epistemic standards in different contexts when making and evaluating knowledge claims. This observation will not be news to anyone who has been exposed to radical skeptical arguments. When well-constructed and successfully deployed, such arguments lead us temporarily to apply much higher epistemic standards than we ordinarily do, and hence to conclude that we don't know much. Still, as Hume pointed out, even skeptics regard themselves as knowers once the skeptical spell has been lifted: "[T]he first and most trivial event in life will put to flight all their doubts and scruples, and leave them the same, in every point of action and speculation, with the philosophers of every other sect, or with those who never concerned themselves in any philosophical researches" (Hume 1999, 207).
One can see the debate over radical skepticism as a debate about which epistemic standard is correct: the very high standard introduced by the skeptic, or the more manageable one in place once our doubts and scruples have been put to flight. Contextualists claim to be able to resolve, or dissolve, this debate: different standards apply in different contexts, so neither one is "correct" tout court (see Chapter 10). But the contextualist resolution might appear ad hoc. Couldn't nearly any philosophical dispute be "resolved" by stipulating that some term at the heart of the dispute has a meaning that varies with context? To avoid this charge, we need independent reason to accept that contextualism is true. Contextualists argue that shifts in epistemic standards like those triggered by a skeptic's intervention are ubiquitous in ordinary conversations. They present us with pairs of imaginary vignettes to illustrate this variability. In the "Low" vignette, a speaker in some mundane situation claims that a subject knows some proposition. In the "High" vignette, a speaker in a different situation claims that the same subject doesn't know that same proposition. When the vignettes are well constructed, both the positive knowledge claim in the Low case and the negative knowledge claim in the High case seem true. I'll call such cases Low-High pairs. Here is a well-known Low-High pair from Keith DeRose:

Low Bank Case. My wife and I are driving home on a Friday afternoon. We plan to stop at the bank on the way home to deposit our paychecks. But as we drive past the bank, we notice that the lines inside are very long, as they often are on Friday afternoons. Although we generally like to deposit our paychecks as soon as possible, it is not especially important in this case that they be deposited right away, so I suggest that we drive straight home and deposit our paychecks on Saturday morning. My wife says, "Maybe the bank won't be open tomorrow.
Lots of banks are closed on Saturdays." I reply, "No, I know it'll be open. I was just there two weeks ago on Saturday. It's open until noon."

High Bank Case. My wife and I drive past the bank on a Friday afternoon, as in [Low Bank Case], and notice the long lines. I again suggest that we deposit our paychecks on Saturday morning, explaining that I was at the bank on Saturday morning only two weeks ago and discovered that it was open until noon. But in this case, we have just written a very large and very important check. If our paychecks are not deposited into our checking account before Monday morning, the important check we wrote will bounce, leaving us in a very bad situation. And, of course, the bank is not open on Sunday. My wife reminds me of these facts. She then says, "Banks do change their hours. Do you know the bank will be open tomorrow?" Remaining as confident as I was before that the bank will be open then, still, I reply, "Well, no, I don't know. I'd better go in and check" (DeRose 1992, 913; DeRose 2009, 1-2).

In Low, it doesn't matter very much whether Keith is right about the bank's hours, and no hypothesis about how he could be wrong has been raised. In High, it matters a lot whether he is right, and a particular hypothesis about how he might be wrong ("Banks do change their hours") has been raised. What leads Keith to deny that he knows in High is not an argument for philosophical skepticism, but his awareness of the ordinary ways he could go wrong, and the exigencies of everyday life. The intuitive argument for contextualism doesn't rest upon any particular Low-High pair such as the bank cases or Stewart Cohen's equally well-known airport cases (Cohen 1999). Such cases are rather meant to illustrate a pervasive variability in our ordinary knowledge talk, which contextualism (it is argued) best explains. Nonetheless, it simplifies matters to present the argument as if a particular pair of cases were essential to it.
There should be no danger in this, provided we bear in mind that the contrast between the bank cases is meant to be representative of a ubiquitous phenomenon. So construed, the key claim in the case for contextualism is:

Truth. Keith's claim to know in Low and his claim not to know in High are both true.

Truth, in turn, is underwritten by two intuitive judgments. First, considered from the perspective of the context in which it was made, each claim seems true. As DeRose puts it, contextualists "appeal to how we, competent speakers, intuitively evaluate the truth-values of particular claims that are made (or are imagined to have been made) in particular situations" (DeRose 2009, 49). Imagine yourself in each conversation, and ask whether the claim Keith makes in that conversation is true (assuming, of course, that the bank will in fact be open); the contextualist thinks that you'll find yourself answering, "Yes." Second, each claim is intuitively appropriate. A claim can be (and seem) true without being (or seeming) appropriate. Asked by a friend who's run out of gas if there is a filling station nearby, I claim that there is one around the corner, without revealing that I know that it has been closed for months. My claim is true but misleading, and hence improper. However, the propriety of a claim is evidence for its truth, since it is generally improper to make a false claim. Not always: hyperbolic and other figurative claims can be proper though false ("It took me a million years to get through Husserl's Logical Investigations!"). Nonetheless, such cases are exceptional, and neither of Keith's claims seems at all figurative (pace Hazlett 2007 and Schaffer 2004). It's important to see that neither of these intuitions constitutes a judgment about what Keith knows or doesn't know. For a contextualist, the question of what a subject knows is different from the question of what knowledge claims are true of her.
To say that Keith knows would be to claim, in effect, that he meets the epistemic standards in place in our present context. The case for contextualism does not rest on an intuitive judgment that Keith meets or doesn't meet the epistemic standards in place in the context of a philosophical discussion about knowledge or knowledge claims. Rather, it rests on the judgment that Keith's knowledge claims, as made in their imagined contexts, are true. Contextualists typically refrain from issuing or endorsing any first-order judgment about whether the characters in their vignettes know or don't know. DeRose, for example, says that his intuitions about the "object-level question" of whether the characters in his story know "would be far more wavering and uncertain than are my intuitions that the claims made in the cases are true" (DeRose 2009, 49). Similarly, when arguing for contextualism using his airport cases, Stewart Cohen is concerned with whether the speakers use the word 'know' correctly, and whether they speak truly, and not with whether the subject of their knowledge attributions knows (Cohen 1999, 58ff.). The intuitions are also not judgments about the sentences that Keith has uttered. Standard contextualism does treat both sentences as true with respect to their context of utterance. And contextualists are not always careful about distinguishing the truth of a sentence from the truth of a claim made by uttering the sentence (though see Stainton 2010 and Pynn 2015). This is partly because contextualists typically presuppose that what Keith claims in each case just is the content encoded by the sentence he utters with respect to its context. If a claim's content and truth-value are identified with the content and truth-value of the sentence uttered in making the claim, then Truth implies that the sentences Keith utters in both cases are true.
But the intuitive judgments at play in the argument concern the truth and propriety of Keith's claims, and not the sentences he utters in making them.1 By and large, contextualists and their opponents have agreed that Keith's claims are intuitively proper and true; controversy has concerned how to accommodate these intuitions, not what they are. Recently, however, work in "experimental philosophy" has been used to raise doubts about the intuitions themselves. Citing surveys designed to elicit judgments about Low-High pairs, Jonathan Schaffer and Joshua Knobe assert that "people simply do not have the intuitions they were purported to have," suggesting that "the whole contextualism debate was founded on a myth" (Schaffer and Knobe 2012, 675). Chapter 3 discusses this issue in more detail. Two brief responses are worth making. First, some of the data cited by Schaffer and Knobe is neutral with respect to the intuitive judgments just canvassed (see DeRose 2011 for discussion). Two of the surveys ask subjects about whether various characters in Low-High pairs know, rather than whether speakers who claim to know speak truly. A third study (Buckwalter 2010) asked subjects whether the speaker in a Low case who claims to know speaks truly, but then, in a departure from contextualist Low-High pairs, asked whether a speaker in a High-like case who also claims to know speaks truly. Only the fourth study (Feltz and Zarpentine 2010) involved a survey in which the bank cases were presented more-or-less as originally constructed. In Feltz and Zarpentine's study, the average level of agreement that the claims made in High and Low were true was around 4 on a 7-point Likert scale. While this result does not confirm the contextualist's claims about the intuitions, neither does it disconfirm them; it is neutral.

[Footnote 1: To see the difference, it may be helpful to focus on the actual sentence that features in the High bank case: "Well, no, I don't know." Speaking for myself, I have no intuition whatsoever about whether that sentence is true. I am inclined to say that it is neither true nor false, because it is semantically incomplete, since it has no element corresponding to what Keith is claiming not to know.]

Second, more recent work than that cited by Schaffer and Knobe suggests that the intuition of truth in Low and High is, in fact, widely shared. Hansen and Chemla 2013 "confirmed DeRose's prediction that speakers would find both 'I know that p' in the Low context and 'I don't know that P' in the High context true" (Hansen and Chemla 2013, 203). And Buckwalter 2014 designed a new survey where speakers were asked about the truth of knowledge attributions and denials made in various Low and High cases, and found that subjects "generally judged everything true across the board" (Buckwalter 2014, 156).2 In light of this subsequent work, we have reason to doubt Schaffer and Knobe's assertion that contextualism is founded on an intuitive myth. Nonetheless, the empirical adequacy of the standard contextualist claim about our intuitive judgments is a subject of lively and ongoing debate; see Chapter 3 for a more detailed and sympathetic discussion of this line of criticism.

Why The Intuitions Should Be Trusted

Deferring to the intuitive judgments gives us strong reason to accept Truth. But why defer to the intuitions to begin with? Why think that the seeming truth and propriety of his claims indicates that they are proper and true? This question points towards the vast controversy over the role of intuitions in philosophy; see Pust 2016 for an introduction to this literature. It is a widely accepted philosophical practice to afford the contents of our intuitive judgments a default level of evidential significance.
The practice is not to treat intuitive judgments as infallible or issuing from some faculty of rational intuition, but simply to treat acceptance of their contents as a 2 Consistent with his earlier study, Buckwalter's respondents also judged that speakers who claim not to know in Low cases and speakers who claim to know in High cases were speaking truly. This wrinkle leads Buckwalter to suggest that all of the responses were "largely driven by accommodation;" i.e., the conversational rule -schema David Lewis posited to the effect that speakers ought, so far as possible, to assign semantic values to utterances that permit them to be interpreted as true (Lewis 1979). Contextualist should welcome Buckwalter's suggestion. If the reason that subjects so readily interpret "know"-involving utterances as true is that they are tacitly adhering to a rule of accommodation for such utterances, then we have a further piece of "intuitive" evidence for contextualism: the more semantically invariant a term, the more resistant we should be to accommodating a variety of "surfacecontradictory" utterances involving the term. 8 desideratum when tallying the pros and cons of a philosophical view. When the balance of reasons tips in favor of a view, despite its conflict with some intuitive judgments, standard practice tells us to "bite the bullet" and dismiss the problematic intuitions. Yet even when biting the bullet, we are encouraged to provide an explanation for the wayward intuitions. Fairly powerful reasons are required to conclude that things are not how they intuitively seem, and we may remain dissatisfied with a bullet-biting view until we have been told why things intuitively but wrongly seemed as they did. Employing this standard practice in the present context, the intuitions that Keith's utterances are both proper and true ought to be taken at face value. And to take them at face value is to endorse Truth. 
If we are to reject them, we are owed an explanation as to why we had them to begin with. Of course, to describe this practice is not to justify it. Controversy surrounds all general defenses of reliance on intuition in philosophy. A more manageable strategy here may be to pursue a narrower defense. Jennifer Nagel argues that "epistemic evaluations of particular cases" of the sort frequently discussed by epistemologists (e.g., intuitive judgments about Gettier's cases, Carl Ginet's fake barn country, Lawrence Bonjour's Truetemp case) are exercises of our capacity to attribute mental states to other people (Nagel 2007, 2012). Though our "mindreading" ability is susceptible to error, it is nonetheless generally accurate. If Nagel is right about the source of our intuitive epistemic evaluations, then we can be confident in treating them as evidentially significant (if defeasible). But Nagel's defense of epistemic intuitions, even if successful, may not establish the significance of the contextualist intuitions about Low-High pairs, because the latter may not count as epistemic intuitions; they concern the truth and propriety of knowledge claims, and not whether the subjects of those claims know. Since the intuitions concern claims made by uttering sentences, we may wish to treat them as linguistic intuitions. Linguists treat the intuitive judgments of competent speakers about certain features of their language as an important source of evidence about those features of the language. The standard rationale for treating such intuitions as evidence is that linguistic competence relies on tacit knowledge of the rules governing the language (Chomsky 1986). On the assumption that a linguistic intuition is the product of a speaker's tacit knowledge of the rules governing their language, we have good reason to 9 accept it. 
While there is substantial controversy over the adequacy of this traditional rationale (see, e.g., Devitt 2006), it seems undeniable that competent speakers of a language possess at least some degree of epistemic authority concerning many features of their language. If the intuitive judgments of truth and propriety in Low-High pairs are linguistic intuitions, then they have a prima facie claim to deference, on pain of undermining a principal source of evidence in linguistics. However, just as it is not clear that the intuitions are epistemic, it's also not clear that they are best characterized as properly linguistic, either. They don't concern the properties of words or sentences, but the claims made by uttering sentences in particular contexts. The competence required to determine what claim is being made by a speaker who utters a particular sentence involves a substantial degree of extralinguistic knowledge, as does that required to form an accurate judgment about whether a claim is true or conversationally proper. Suppose that Mary utters, "Sharon is by the bank." Linguistic competence alone won't enable you to know whether she is claiming that Sharon is waiting by a financial institution, or that Sharon is down by the riverbank, much less whether Mary's claim is proper or true. Similarly, tacit knowledge of the syntactic and semantic features of the linguistic expressions he uses doesn't suffice for us to know what Keith claims by uttering, "Well, no, I don't know," much less to form a judgment as to whether his claim is true or proper. Intuitions of truth and propriety rest in part upon empirical knowledge of how speakers in various circumstances use particular English sentences, together with our capacity to imaginatively occupy the circumstances described in the case. But even if the intuitions do not rest entirely upon tacit linguistic knowledge, their being the product of our competence as users of English gives us good reason to treat them with respect.
Fluent speakers possess practical expertise concerning how to use their language. They are in a position to know what sentences speakers tend to utter in various situations, what speakers typically mean to claim by uttering what they do, and which claims are appropriate to make under which circumstances. The intuitive judgments of competent speakers about the truth and propriety of claims made using their language thus deserve deference for the same reason that the judgments of anyone with practical expertise in any particular area do: expertise in a practice gives you reliable (though not infallible) intuitions about how the practice works. When a chef who has been making mayonnaise for years tells you that you're adding the oil to your emulsion too quickly, you ought to listen. A seasoned jazz musician can tell you, without appeal to theory, whether a given note will sound awkward at a particular moment in an improvised sequence. Similarly, given sufficient background information, a fluent speaker of English can tell you whether a claim made using English in a particular circumstance would be proper, and whether it would be true.3

3 DeRose suggests that the correct semantic theory of a term is correct partly in virtue of the fact that we have the semantic intuitions about the term that we do, together with other facts about our usage of the term (DeRose 2009, 66-67). He concludes that ordinary usage facts indicating that a term is context-sensitive are thus "some of the best possible type of evidence you could ask for" to conclude that the term is, in fact, context-sensitive. Against this, Cappelen and Lepore argue that intuitions of the sort we have been discussing -- intuitions about truth and propriety generated by what they call "minimal pairs" -- provide no evidence that a term is semantically context-sensitive (Cappelen and Lepore 2005, 17). Their target is specifically the view that the word "know" should be categorized as an indexical term. (Indeed, as speech-act pluralists, they agree with contextualists that the same sentence can be used to make claims with different truth-conditions in different contexts.) Though the claim that "know" is an indexical term has sometimes been thought to be constitutive of contextualism, contextualists are free to reject indexicalism about "know". See Stainton 2010, Pynn 2016, and chapter 37 for further discussion of these issues.

Contextualism and Its Invariantist Rivals

The rest of the intuitive argument for contextualism is devoted to showing that contextualism is better able than its rivals to accommodate and explain Truth. Keith's claims are "surface-contradictory". Making what is implicit in the uttered sentences explicit, the two claims are:

(L) I know [that the bank will] be open.

(H) I don't know [that the bank will be open].

Going by their surface grammar, (L) and (H) are contradictories. So how could both claims be true? We assume that the bank will be open, and that Keith believes this in both cases. Keith has no evidence against the bank's being open in High that he lacks in Low. His epistemic position with respect to the proposition that the bank will be open is the same in both cases. How, then, can he truly claim (L) in Low, but truly claim (H) in High? Contextualism provides a simple answer: whether Keith can truly claim to know something varies with the epistemic standard in the context in which he makes the claim. Since the standard in Low is relatively low, while the standard in High is relatively high, his epistemic position is strong enough for him to count as knowing in Low, but not for him to count as knowing in High.
Invariantists hold that the epistemic standards governing the truth of a knowledge claim are fixed across contexts, and so cannot agree that (L) and (H) are both true owing to a variation in the epistemic standards across the contexts of utterance. Traditionally, invariantists have attempted to block the intuitive inference to Truth by providing alternative explanations for the intuitions that support it. In more recent years, clever versions of invariantism have been developed that accept Truth, and propose ways to explain it, rather than biting the bullet. Traditional invariantists hold that one of Keith's claims is false. Skeptical invariantists hold that the standard is very demanding, and hence that Keith's claim to know in Low is false. Moderate invariantists hold that the standard is more relaxed, and hence that Keith's claim not to know in High is false. In either case, one intuition of truth must go. Invariantists have nonetheless been keen to accommodate both intuitions of propriety. This leads to two challenges. The first is to explain how the false claim is nonetheless proper. Let's call this the propriety challenge. The second is to explain the wayward intuition of truth. Let's call this the truth challenge. A common strategy for meeting the propriety challenge focuses on the pragmatic effects of knowledge claims (see Chapter 19). False claims can pragmatically convey truths, and in virtue of this may be conversationally proper, despite being false. Jessica Brown 2006 offers a pragmatic answer to the propriety challenge on behalf of moderate invariantism. If Keith were to claim that he knows in the High case, his assertion would, though true, be irrelevant, because the conversationally relevant issue is not whether he knows, but whether he is in an especially strong epistemic position. 
So he falsely claims that he doesn't know, conveying the conversationally relevant truth that he is not in an especially strong epistemic position (for other proposals in this vein see Rysiew 2001 and 2007, Black 2005, Hazlett 2007, and Pritchard 2010). Skeptical invariantists have made parallel proposals. Jonathan Schaffer 2004 treats ordinary knowledge claims as hyperbole, arguing that such hyperbolic falsehoods convey that the speaker can eliminate the possibilities of error relevant in the context of utterance. Wayne Davis 2007 offers a different kind of pragmatic skeptical account of the propriety of Low knowledge claims, arguing that they are examples of "loose use," proper for the same reason that it can be proper to claim, falsely, that a jar with only a few coffee grounds left is empty (see chapter 17). Though promising as an answer to the propriety challenge, the pragmatic approach faces a significant hurdle in meeting the truth challenge. There are no uncontroversial examples of false claims that seem true in virtue of being proper. The central cases of false-but-proper claims -- examples involving figurative speech -- do not produce an intuition of truth. And though Davis is surely right that it is often proper to call a coffee jar with a couple of beans in it empty, some (the present author included) hold that this is in part because such claims are often true: the standards for emptiness fluctuate with context. If this is correct, Davis's proposal treating ordinary knowledge claims as instances of loose use may amount to a version of contextualism, rather than a competitor. It is common for invariantists who recognize the limitations of the pragmatic approach to the truth challenge to attempt to meet it with an error theory of some kind. Timothy Williamson, for example, suggests that repeated exposure to unusual skeptical possibilities can produce an "illusion of epistemic danger" (Williamson 2005; see also Vogel 1990).
High-context speakers, under the sway of such an illusion, may be led to underestimate the strength of their epistemic positions. More recently, Mikkel Gerken 2013 has developed a theory of what he calls "epistemic focal bias," which may produce inaccurate impressions of knowledge and non-knowledge. There is some tension between the error-theoretic approach to the truth challenge and pragmatic resolutions of the propriety challenge. A claim that results from an error may seem proper, but once the error is uncovered, we generally change our minds about its propriety. It is not clear that invariantists can endorse the intuition of propriety while rejecting the intuition of truth (though see Pynn 2014 for an attempt to do both).

Other invariantists accept Truth. One prominent strategy is to offer a psychological explanation for the falsehood of Keith's claim in High. Kent Bach argues that in a High context a speaker's "threshold for (confidently) believing" goes up, so that she "demands more evidence than knowledge requires" before she is willing to form a confident belief (Bach 2005, 77). Jennifer Nagel (2008, 2010a, 2010b) relies on an array of psychological studies to argue that subjects in high stakes situations require more information before forming settled beliefs, and so tend to refrain from forming settled beliefs on the basis of information that low stakes subjects treat as sufficient for settled belief. Since high stakes decrease a subject's "need for closure," Keith will be less inclined to form a settled belief about the bank's hours in High than he was in Low. The Bach-Nagel strategy is then to say that Keith doesn't have a settled belief that the bank will be open in High. Since knowledge requires belief, Keith doesn't know in High, and his claim in High is true (see chapter 7). Another invariantist strategy for accommodating Truth is to argue that the epistemic position required for knowing varies with the subject's practical situation.
Proponents of interest-relative or sensitive invariantism say that whether a subject's epistemic position is strong enough to know that P depends upon the practical significance for her of the question of whether P is true (see chapter 20). On this picture, when the costs of being wrong about P are high, you need to be in a stronger epistemic position to know that P than you do when the costs of being wrong are low. Sensitive invariantism predicts that (L) and (H) are both true: since the practical stakes are higher for Keith in High than they are in Low, a stronger epistemic position is required for him to know in High than in Low. This approach rests on a claim known as anti-intellectualism or impurism; namely, that the epistemic requirements for knowing vary with the subject's practical situation. Anti-intellectualism is controversial, though it has able defenders, and its capacity to enable invariantists to accommodate our intuitions about Low-High pairs is a significant consideration in its favor (see Stanley 2005 and Fantl and McGrath 2009 for major defenses of interest-relative invariantism and impurism, respectively; see also Hawthorne 2004). Taking one of these approaches enables invariantists to avoid the propriety and truth challenges. But the challenges re-emerge when we make a slight alteration to the structure of a Low-High pair. The bank cases involve first-person knowledge ascriptions made in different scenarios. This makes room for positing some variation in Keith's psychological state or practical situation between the Low and High scenarios, which explains how (L) and (H) can both be true, even though contextualism is false. But we can also construct Low-High pairs where the surface-contradictory claims concern a third party.
Such "third-person" cases elicit the same intuitive judgments as the original Bank Cases, but there is no room to posit a difference in the third-party subject's mental states or situation to account for the truth of two surface-contradictory claims. DeRose's Thelma and Louise Cases are designed for just this purpose (DeRose 2009, 4-5; Cohen 1999's airport cases also have this structure). Thelma, Louise, and Lena are co-workers. All three saw their colleague John's hat in the hallway and overheard a conversation whose participants presupposed that he was in his office. All three believe that John was in, though they did not actually see him:

Low Thelma. On her way home, Thelma stops at the local tavern to collect on a small bet concerning whether John would be in that day. After her tavern-mates pay up, they ask her whether Lena knows that John was in, since she also had a small bet going on the question. "Yes," Thelma answers, "Lena knows that John was in."

High Louise. Louise is stopped by the police on her way home. They are investigating a serious crime, and need to verify whether John was at work today. They have no reason to doubt that he was, but need Louise's testimony to be sure. She demurs, pointing out that he may have left his hat on the hook the previous day, and that her co-workers who thought he was in may have been mistaken. After all, she points out, she didn't actually see him. So while she believes he was in, she says, she doesn't know. They follow up by asking whether Lena could testify to John's whereabouts. No, Louise answers, she didn't see him either: "Lena doesn't know that John was in."

Thelma and Louise's claims about Lena both seem proper and true when considered against the backdrop of their contexts of utterance. These intuitions, in turn, underwrite:

Truth*. Thelma's claim that Lena knows in Low and Louise's claim that Lena doesn't know in High are both true.
Contextualism accommodates and explains Truth* in precisely the same way it did Truth. But assuming that Thelma and Louise are speaking simultaneously, Lena's confidence level and practical circumstances must be the same in each case. So we cannot posit a psychological or practical difference to accommodate and explain Truth*. Bach and Nagel both appeal to error theories to handle such third-person cases, chalking the intuitive truth of claims like Louise's up to a kind of error (Bach 2005, 76-77; Nagel 2010b). Stanley takes a somewhat different tack, suggesting that in considering whether Lena knows, Louise is actually concerned with whether Lena would know if she were in Louise's situation. Since she wouldn't, Louise claims that Lena doesn't know; according to Stanley this is "a perfectly intuitive explanation of the intuitions" (Stanley 2005, 102). That may be, though to the extent that an explanation's simplicity and unity counts in its favor, contextualism is preferable to either of these approaches.

Cross-Contextual Intuitions: Trouble for Contextualism?

Insofar as a view's capacity to explain how the contents of our intuitive judgments are true contributes to its superiority over rivals, contextualism so far appears superior to invariantism. However, opponents of contextualism have argued that some of our intuitions are at odds with contextualism. These problematic intuitions primarily concern various forms of disagreement, and cross-contextual assessments (see chapter 20). For example, imagine the conversation in High Louise continuing:

High Louise, Cont'd. The police point out that Thelma was overheard in the tavern claiming that Lena knows that John was in, and ask her what she thinks of that. "No, Thelma's claim was false," Louise replies. "Lena doesn't know" (cf. McKenna 2014, 726).

According to contextualism, Thelma's claim was true.
Assuming that Louise's assessment of Thelma's claim is intuitively correct, we appear to have an intuition whose content contextualists must reject. A number of theorists have argued that such assessments furnish intuitive evidence against contextualism (e.g. Williamson 2005, 220, Stanley 2005, 52, MacFarlane 2005, 202-203, Brogaard 2008, 411). Note that if this case provides evidence against contextualism, it also provides evidence against moderate and sensitive invariantism, at least on the assumption that those views treat Thelma's claim in Low as true. Skeptical invariantists may regard the datum as a point in favor of their own view. However, skeptical invariantists already reject many ordinary intuitions of truth; on their view, ordinary positive knowledge claims are almost always false, despite our persistent everyday intuitions to the contrary. So even granting that the case is an intuitive cost for contextualism as compared to skeptical invariantism, it hardly tips the intuitive balance in skeptical invariantism's favor. We may question the degree to which such cases are intuitively problematic for contextualism. Prominent Low-High pairs are designed to capture what ordinary speakers would say in relevantly similar circumstances. By contrast, it is not clear that an ordinary speaker in circumstances like Louise's would say, "Thelma's claim was false." Provided she were aware of the casual nature of Thelma's tavern conversation, it would be at least as natural for her to say, "Thelma was only speaking loosely," or even, "She didn't really mean that Lena knows for sure." Neither of these assessments would conflict with contextualism; indeed, either of them would provide some indirect confirmation that different standards are operative in each context. Of course, Louise could say that Thelma's claim was false, and to the extent that such an assessment would be intuitively correct, this is a fair point against contextualism. 
But if it is not what Louise most naturally would say, the point is not especially threatening, especially given the intuitive costs already borne by invariantism.4

Such cross-contextual assessments play an important role in motivating a newer competitor to contextualism, known as relativism about knowledge attributions (see chapters 25 and 26). According to the relativist, the truth-conditions of a knowledge claim vary not with the context of utterance, but with the context of assessment (see MacFarlane 2005, MacFarlane 2014 (ch. 8), and Rysiew 2011). Relativists can treat Thelma's claim as true relative to her own context of assessment, but false relative to Louise's. So relativism can accommodate both the intuitive truth of Thelma's knowledge attribution, and the intuitive truth of Louise's assessment of Thelma's knowledge attribution as false. There may be intuitive costs associated with relativism as well, however. According to the relativist, Thelma's claim was true as assessed in Thelma's context of utterance, but it seems doubtful that Louise would be prepared to grant this. Montminy 2009 argues that the relativist must impute to ordinary speakers a kind of semantic error in their cross-contextual judgments. But relativism is an important emerging paradigm in philosophical semantics, and the question of whether contextualism or relativism better accommodates and explains our intuitions about knowledge claims remains open.

4 See also DeRose's considerations in favor of the "methodology of the straightforward," on which the "simple positive and negative claims speakers make utilizing the piece of language being studied" receive greater weight than "more complex matters, like what metalinguistic claims speakers will make and how they tend to judge how the content of one claim compares with another" (DeRose 2009, 153).

Bibliography

Bach, K. (2005). "The Emperor's New 'Knows'." In Preyer, G. and Peter, G., editors, Contextualism in Philosophy: Knowledge, Meaning, and Truth, pages 51–90. Oxford University Press.

Black, T. (2005). "Classical Invariantism, Relevance and Warranted Assertibility Manoeuvres." The Philosophical Quarterly, 55(219):328–336.

Brogaard, B. (2008). "In Defense of a Perspectival Semantics for 'Know'." Australasian Journal of Philosophy, 86(3):439–459.

Brown, J. (2006). "Contextualism and Warranted Assertibility Manoeuvres." Philosophical Studies, 130:407–435.

Buckwalter, W. (2010). "Knowledge isn't Closed on Saturday: A Study in Ordinary Language." Review of Philosophy and Psychology, 1(3):395–406.

Buckwalter, W. (2014). "The Mystery of Stakes and Error in Ascriber Intuitions." In Beebe, J., editor, Advances in Experimental Epistemology. Continuum.

Cappelen, H. and Lepore, E. (2005). Insensitive Semantics: A Defense of Semantic Minimalism and Speech Act Pluralism. Blackwell Publishing.

Chomsky, N. (1986). Knowledge of Language: Its Nature, Origin, and Use. New York: Praeger Publishers.

Cohen, S. (1999). "Contextualism, Skepticism, and the Structure of Reasons." Philosophical Perspectives, 13:57–89.

Davis, W. (2007). "Knowledge Claims and Context: Loose Use." Philosophical Studies, 132:395–438.

DeRose, K. (1992). "Contextualism and Knowledge Attributions." Philosophy and Phenomenological Research, 52:913–929.

DeRose, K. (2005). "The Ordinary Language Basis for Contextualism, and the New Invariantism." The Philosophical Quarterly, 55:172–198.

DeRose, K. (2009). The Case for Contextualism. Oxford: Oxford University Press.

DeRose, K. (2011). "Contextualism, Contrastivism, and X-Phi Surveys." Philosophical Studies, 156:81–110.

Devitt, M. (2006). "Intuitions in Linguistics." British Journal for the Philosophy of Science, 57(3):481–513.

Fantl, J. and McGrath, M. (2009). Knowledge in an Uncertain World. Oxford University Press.

Feltz, A. and Zarpentine, C. (2010). "Do You Know More When It Matters Less?" Philosophical Psychology, 23:683–706.

Gerken, M. (2013). "Epistemic Focal Bias." Australasian Journal of Philosophy, 91(1):41–61.

Hansen, N. and Chemla, E. (2013). "Experimenting on Contextualism." Mind and Language, 28(3):286–321.

Hawthorne, J. (2004). Knowledge and Lotteries. Oxford University Press.

Hazlett, A. (2007). "Grice's Razor." Metaphilosophy, 38(5):669–690.

Hume, D. (1999). An Enquiry Concerning Human Understanding. T. L. Beauchamp, ed. Oxford: Oxford University Press.

Lewis, D. (1979). "Scorekeeping in a Language Game." Journal of Philosophical Logic, 8:339–359.

MacFarlane, J. (2005). "The Assessment Sensitivity of Knowledge Attributions." Oxford Studies in Epistemology, 1:197–234.

MacFarlane, J. (2014). Assessment Sensitivity: Relative Truth and Its Applications. Oxford: Oxford University Press.

McKenna, R. (2014). "Shifting Targets and Disagreements." Australasian Journal of Philosophy, 92(4):725–742.

Montminy, M. (2009). "Contextualism, Relativism and Ordinary Speakers' Judgments." Philosophical Studies, 143(3):341–356.

Nado, J. and Johnson, M. (2014). "Moderate Intuitionism: A Metasemantic Account." In A. R. Booth and D. Rowbottom, eds., Intuitions. Oxford: Oxford University Press.

Nagel, J. (2007). "Epistemic Intuitions." Philosophy Compass, 2:792–819.

Nagel, J. (2008). "Knowledge Ascriptions and the Psychological Consequences of Changing Stakes." Australasian Journal of Philosophy, 86(2):279–294.

Nagel, J. (2010a). "Knowledge Ascriptions and the Psychological Consequences of Thinking About Error." The Philosophical Quarterly, 60(239):286–306.

Nagel, J. (2010b). "Epistemic Anxiety and Adaptive Invariantism." Philosophical Perspectives, 24:407–435.

Pritchard, D. (2010). "Contextualism, Skepticism, and Warranted Assertability Manoeuvres." In Campbell, J. C., O'Rourke, M., and Silverstein, H., editors, Knowledge and Skepticism, pages 85–103. MIT Press.

Pust, J. (2016). "Intuition." The Stanford Encyclopedia of Philosophy (Spring 2016 Edition). E. N. Zalta, ed. URL = <http://plato.stanford.edu/archives/spr2016/entries/intuition/>.

Pynn, G. (2014). "Unassertability and the Illusion of Ignorance." Episteme, 11(2):125–143.

Pynn, G. (2015). "Pragmatic Contextualism." Metaphilosophy, 46(1):26–51.

Rysiew, P. (2001). "The Context-Sensitivity of Knowledge Attributions." Noûs, 35:477–514.

Rysiew, P. (2007). "Speaking of Knowing." Noûs, 41(4):627–662.

Rysiew, P. (2011). "Relativism and Contextualism." In Hales, S. D., editor, A Companion to Relativism, pages 286–305. Blackwell.

Schaffer, J. (2004). "Skepticism, Contextualism, and Discrimination." Philosophy and Phenomenological Research, 69(1):138–155.

Schaffer, J. and Knobe, J. (2012). "Contrastive Knowledge Surveyed." Noûs, 46(4):675–708.

Stainton, R. J. (2010). "Contextualism in Epistemology and the Context Sensitivity of 'Knows'." In O'Rourke, M. and Silverstein, H., editors, Knowledge and Skepticism, pages 113–139. Cambridge, MA: MIT Press.

Stanley, J. (2005). Knowledge and Practical Interests. Oxford University Press.

Vogel, J. (1990). "Are There Counterexamples to the Closure Principle?" In Roth, M. and Ross, G., editors, Doubting: Contemporary Perspectives on Skepticism, pages 13–27. Kluwer, Dordrecht.

Williamson, T. (2005). "Contextualism, Subject-Sensitive Invariantism and Knowledge of Knowledge." The Philosophical Quarterly, 55(219):213–
Spanish Judge Orders Bags Of Blood Destroyed In Doping Case

Doctor Eufemiano Fuentes, left, arrives at a courthouse in Madrid on January 28, 2013. Dani Pozo AFP/Getty Images

By all accounts, it was a less-than-spectacular end to one of Spain's biggest doping cases. El País, the country's biggest newspaper, summed up the trial of Dr. Eufemiano Fuentes saying it ended without blood and without a sentence. Fuentes was convicted of endangering public health and was given a one-year suspended sentence, a $6,000 fine and a four-year ban from practicing medicine. Most people sentenced to under two years in Spain skip prison. But that's not the big news. The news that is causing waves across the sports world is that Judge Julia Patricia Santamaria also ordered that more than 200 bags of blood and the documentation regarding them, which was seized during Operation Puerto, should be destroyed. As The New York Times reports, that could mean "an effort to uncover possibly one of the biggest doping scandals in history" might be thwarted. "Antidoping agencies and sports federations, including the World Anti-Doping Agency and Spain's new antidoping agency, had requested the blood bags so they could try to identify which athletes had been Fuentes's clients and pursue doping cases involving them," the Times reports. "Though only cyclists have been identified as working with Fuentes, he testified at his two-month trial that his patients also included athletes in tennis, soccer, boxing and track and field." "The alleged doping network was uncovered on May 23, 2006, when Spanish police raided several apartments and a laboratory in Madrid and seized about 200 bags of blood. Police also arrested doctors, sporting directors and trainers suspected of taking part in the scheme. "Several top cyclists, including the Spaniards Alejandro Valverde and Alberto Contador, Italian Ivan Basso and Germany's Jan Ullrich were implicated.
"During the trial, Fuentes said he also had clients in other sports including soccer, tennis and boxing."
Latest Guide: 2017 Motul Petit Le Mans at Road Atlanta

Last Updated: v2 (update to #20, #26, #73, #75 liveries and #26 and #57 driver roster)

It's back for an 11th year, the 10th as an official resource: the 2017 IMSA Official Spotter Guide for the final round of the championship at Road Atlanta is now available to download. There will be an update late Wednesday with a couple of embargoed liveries. If you are at the event, Andy Blackmore Design and IMSA co-produce a Viewing Guide which includes a printed Spotter Guide, normally available in the IMSA FanZone area. Note: this guide went to press prior to the previous rounds, so the printed guide does not include the #6 Penske in WT or the #39 Goldcrest Porsche or #60 KoHR Mustang in Continental. Most of the third drivers hadn't been confirmed as the printed guide went to press. Thanks to IMSA for partnering with the guide. If you get a few seconds, please thank IMSA on Twitter. Without their support, there would be no guide. Hopefully, the guide will be back in 2018. Massive thanks to the tens of thousands who have downloaded the guide or picked up one of the printed guides at the circuit. Thank you to all the teams who have helped with creating the guide. Also thanks to the media outlets which promote the guides, including RadioLeMans, IMSA Radio, DailySportscar and Sportscar365. NOTE: Please feel free to share the guide on social media, but PLEASE LINK TO THIS PAGE and not the guide (as file names change with updates, particularly with the month change this week) so your friends and fans can download the latest version of the guide! I can also track data more easily, which in turn helps my quest for sponsorship. Previous Events: Daytona, Sebring, Long Beach, CoTA, Detroit, Watkins Glen, Canadian Tire Motorsport Park, Lime Rock Park, Road America, VIR, Laguna Seca. The online Spotter Guide is produced by Andy Blackmore Design. The IMSA Race Day Viewing Guide is produced, race by race, by IMSA and Andy Blackmore Design.
Gotlands Tidningar Gotlands Tidningar (meaning Gotland’s Newspapers in English) is a Swedish local newspaper based in Visby, Sweden. Profile Gotlands Tidningar was established in 1966 when two papers, Gotlänningen and Gotlands Folkblad, formed a joint operating company to publish them as two editions under the same name. The paper has its headquarters in Visby and is published six days per week. Since 1999 the paper has been owned by Norrköpings Tidningar Media AB. The publisher is Gotlands Förenade Tidningstryckerier. The paper is published in tabloid format. In 2002 Gotlands Tidningar sold 12,800 copies. References External links Official website Category:1966 establishments in Sweden Category:Media in Visby Category:Daily newspapers published in Sweden Category:Publications established in 1966 Category:Swedish-language newspapers
I'm not sure if this is the right place to post this thread, but I would appreciate your help. I know there is a way of viewing an ASP script because someone once showed me, and I just forgot. Help me out: how can I view the PHP source script?
Thoughts on the election results I eagerly watched coverage of the election last night, awaiting the outcome of what was predicted to be a very close race. From a health promotion perspective, the results are positive for Ontario. With an unexpected majority, Premier Wynne can move forward on her platform that was based largely on the May provincial budget, a budget which many considered to be one of the most progressive in many years. The proposed measures attempt to balance austerity with investments in key areas including early child development, community health, poverty reduction and the widening income inequality in Ontario. Despite the new government’s declared commitments many challenges remain and there are numerous competing government priorities and demands. At Health Nexus, we believe that, to have the most positive impact on our society, the newly elected government must address the underlying issues that influence our health and wellbeing, and build communities where people feel safe, connected, and valued. We are ready to work with the provincial government and our broader partners on these fundamental issues that impact us all.
Q: How to set up different SmtpClient instances in web.config? I don't believe this is specifically an MvcMailer question (this is the mailer I am using), but I am struggling with framing a Googleplex search to figure out how to send e-mails from different accounts based on my context. I have a need to send two e-mails from two different e-mail accounts. I have tried using mailMessage.From = new MailAddress("some-other-email@gmail.com"); in MvcMailer, but that doesn't even show up in the e-mail I dump to the temp directory. It shows up as what is in the web.config: "some-email@gmail.com". This is my web.config for MvcMailer: <mailSettings> <!-- Method#1: Configure smtp server credentials --> <!--<smtp from="some-email@gmail.com"> <network enableSsl="true" host="smtp.gmail.com" port="587" userName="some-email@gmail.com" password="valid-password" /> </smtp>--> <!-- Method#2: Dump emails to a local directory --> <smtp from="some-email@gmail.com" deliveryMethod="SpecifiedPickupDirectory"> <network host="localhost" /> <specifiedPickupDirectory pickupDirectoryLocation="c:\temp\" /> </smtp> </mailSettings> This is the mailer code: public virtual MailMessage EMailConsultation(EMailConsultationData model) { var mailMessage = new MailMessage { Subject = "INQUIRY: E-Mail Consultation" }; mailMessage.From = new MailAddress("some-other-email@gmail.com");//I tested this to see if at the very least it would show up in the e-mail, but it didn't. mailMessage.To.Add(model.EMail); ViewData = new ViewDataDictionary(model); PopulateBody(mailMessage, viewName: "InquiryEMailConsultation"); return mailMessage; } Again, the above code works to send e-mail. I just do not know how I can set up the mailer to send from a specified e-mail address, rather than just from "some-email@gmail.com" as in the web.config. I have multiple MailMessages, and need to send certain ones from a different e-mail account. I would greatly appreciate any help/code examples. 
A: You could create your own SmtpClient object in code and send your generated email with that, keeping only one smtp setting (the default one) in the web.config. In your web.config for MvcMailer: <mailSettings> <smtp from="some-email@gmail.com" deliveryMethod="SpecifiedPickupDirectory"> <network host="localhost" /> <specifiedPickupDirectory pickupDirectoryLocation="c:\temp\" /> </smtp> </mailSettings> and use MyMailer.EMailConsultation().Send(); If you need to send an email via Gmail, for example, use this: using (var googleSmtp = new SmtpClient("smtp.gmail.com", 587)) { googleSmtp.EnableSsl = true; googleSmtp.Credentials = new NetworkCredential("some-email@gmail.com", "valid-password"); googleSmtp.Send(MyMailer.EMailConsultation()); }
Q: How to store each line of a file in a matrix? I have a text file with this structure: regular: 12/09/2010, 15/09/2012, 16/09/2012 rewards: 17/09/2010, 18/09/2012, 19/09/2012 rewards: 20/09/2010, 21/09/2012, 22/09/2012 And I want to store it in a matrix like this: my_matrix = [['regular', '12/09/2010', '15/09/2012', '16/09/2012'], ['rewards', '17/09/2010', '18/09/2012', '19/09/2012'], ['rewards', '20/09/2010', '21/09/2012', '22/09/2012'] ] I've tried this: File.open(text_file).each_line do |line| @costumer_request = line.delete!(':').split @costumer_request = line.delete!(',').split But it returns this: @costumer_request ['rewards', '20/09/2010', '21/09/2012', '22/09/2012'] How can I solve this? A: File.open(text_file){|io| io.each_line.map{|line| line.split(/[:,\s]+/)}}
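The accepted one-liner can be wrapped in a small helper to make the splitting step explicit. A minimal sketch (the method name and sample input are illustrative, not from the question):

```ruby
# Parse lines of the form "label: d1, d2, d3" into an array of arrays,
# splitting on any run of colons, commas, or whitespace.
def parse_matrix(text)
  text.each_line.map { |line| line.strip.split(/[:,\s]+/) }
end

input = "regular: 12/09/2010, 15/09/2012, 16/09/2012\n" \
        "rewards: 17/09/2010, 18/09/2012, 19/09/2012\n"

matrix = parse_matrix(input)
# matrix[0] => ["regular", "12/09/2010", "15/09/2012", "16/09/2012"]
```

Note that the dates themselves survive intact because `/` is not in the character class being split on.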
2006 was the driest year on record in many parts of Australia. Even though temperatures did not reach the history-making heights of 2005, new research published in November by the University of Melbourne and Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) forecast temperature increases of 0.3 to 1.7°C in most Australian wine regions by 2030. Aldi's fine wine selection just got bigger with the addition of the 2003 Premier Cru Chablis Grande Réserve from Bovier & Fils. Atypical, having spent nine months in French oak, it retails for £7.99 and is available in Aldi's 300 stores nationwide. Arthur O'Connor has moved from head winemaker at Seppelt in Victoria to the newly created position of head of winemaking at Codorníu, while Jordi Ratera has been appointed the new technical director of Codorníu, succeeding Juan José de Castro, who held the position for 37 years. Ratera has been with Codorníu for 20 years, and most recently set up the new Codorníu winery in the Napa Valley. Hine is releasing its 1957 vintage - a Jarnac-matured Grande Champagne Cognac. Only five casks were laid down to mature, and director and cellarmaster Bernard Hine describes the 1957 vintage as 'an exceptional year, with a perfect balance between floral and fruit hints'. Research commissioned by Wine+, the new on-trade wine show, and undertaken by Wine Intelligence in more than 20 white table-cloth restaurants in London has revealed gross variations in wine prices in restaurants. In response to Jackson Wine Estates International's recent appointment of J E Fells & Sons as its UK agent, James Tookey, formerly in charge of European sales, has been appointed UK Sales Manager, and will manage the relationship with Fells. 
Readers of these pages will recall how a few months ago we reported that the EU Commission had ruled that no member nation could use the name 'Tokay', or soundalikes, for wine except the Hungarians, giving several years' notice before putting the ruling into effect. Laurent-Perrier has appointed Steve Brandwood to the newly created position of UK sales director, responsible for sales of Laurent-Perrier, Marqués de Riscal, Trinity Hill and Thorn-Clarke to all UK trade sectors.
Poison From Above: It's time to clean up our largest source of lead emissions — small planes

Small, prop-driven planes are spreading lead around the country.

Reprint of op-ed by Nathan Donley in The Hill

As residents of Flint, Mich., and other cities grapple with high levels of lead in their drinking water, another source of lead exposure is, quite literally, flying under the radar: the ongoing use of leaded gasoline in small airplanes that are spewing dangerous levels of lead all across the country. Widely believed to be banned already, leaded gasoline is still used in the majority of small, propeller-driven airplanes, which is no small thing. It accounts for more than half of the nearly 1,000 tons of lead emissions in the United States each year. Now, all the EPA-bashing members of Congress claiming to be horrified by the government's complicity in Flint can significantly reduce our nation's lead pollution by supporting a bill recently introduced into the U.S. House of Representatives that will close the loophole allowing small planes to use leaded gas. The "No Lead in the Air Act of 2016," introduced by U.S. Rep. Eleanor Holmes Norton (D-D.C.), takes aim at a very real, very solvable health problem — the nation's largest remaining source of lead emissions. With the EPA and the Centers for Disease Control making clear there is no safe level of lead in young children, this issue deserves serious consideration, both by Congress and the EPA. The stakes are extremely high, especially if you live, work or go to school near one of the nation's airports. The EPA estimates that 16 million people live and 3 million children go to school within one kilometer — about two-thirds of a mile — of airports where the greatest amounts of lead are released. 
As far back as 2010, the EPA acknowledged that children can be exposed to lead emitted into the air either “directly by inhalation, or indirectly by ingestion of lead-contaminated food, water or other materials including dust and soil.” These risks were documented in a Duke University study that detected higher levels of lead in North Carolina children living within half a mile of airports where planes use leaded gas. The researchers concluded there was a significant association between leaded aviation fuel exposure and higher blood lead levels in children. Like many of the families exposed to lead-tainted tap water in Flint, the families across the country subjected to these lead emissions from airplane fuel are more likely to be low-income and minority — just the latest example of vulnerable and minority populations being disproportionately exposed to harmful pollutants. While nearly half of lead emissions from planes remain near airports, the rest is dispersed throughout the environment during flight. This is significant because lead doesn’t break down. Once it is taken out of the ground, it simply exists in the environment until it can be buried again. Released in the environment, lead is an extremely toxic heavy metal that can cause severe nervous system damage, reduced intelligence, behavioral changes and developmental defects that are often irreversible. Not surprisingly, like the difficulties that were faced removing lead from pipes, paints and auto fuel, the beneficial properties of lead make it difficult to incentivize replacement. And it’s true, just like the lead that’s been banned from other products, the lead in airplane fuel has benefits — it boosts octane and prevents “knocking” which could cause the engine to fail mid flight. But just as with other products where lead was banned, the risks associated with its ongoing use are far too high. And, just as with the other products, there are alternatives to using lead in aviation fuels. 
It’s estimated that about 80 percent of the small plane fleet could safely switch to unleaded gasoline immediately, with no retrofitting needed, as long as it does not contain ethanol. But there is currently no economic incentive for airports to carry multiple fuels. And without regulatory pressure, that won’t change. The “No Lead in the Air Act of 2016” can provide the push needed to trigger the long-overdue elimination of these dangerous fuels. The bill wouldn’t ban the toxic fuel until 2021, allowing time for an alternative to be tested and put in place for the remaining 20 percent of planes. Whatever the cost, politically and financially, we cannot simply look away and continue to allow lead to literally be dumped on millions of children who have no choice other than to breathe toxic air while playing outside or learning their ABCs. Donley is a developmental biologist who works in the Center for Biological Diversity’s Environmental Health Program
Regioselective nucleophilic additions to cross-conjugated dienone system bearing beta-fluorine: a versatile approach to highly substituted 2-cyclopentenones. [reaction: see text] 3-Fluoro-5-methylene-2-cyclopentenone is treated with appropriate nucleophiles and Lewis acids to undergo regioselective 1,2-addition, exocyclic 1,4-addition, and endocyclic 1,4-addition, leading to 3-substituted 4-methylene-2-cyclopentenones, 5-substituted 3-fluoro-2-cyclopentenones, and 3-substituted 5-methylene-2-cyclopentenones in good yields, respectively.
Locality of quark-hadron duality and deviations from quark counting rules above the resonance region. We show how deviations from the dimensional scaling laws for exclusive processes may be related to a breakdown in the locality of quark-hadron duality. The essential principles are illustrated in a pedagogic model of a composite system with two spinless charged constituents, for which a dual picture for the low-energy resonance phenomena and high-energy scaling behavior can be established. We introduce the concept of "restricted locality" of quark-hadron duality and show how this results in deviations from the perturbative quantum chromodynamics quark counting rules above the resonance region. In particular, it can be a possible source for oscillations about the smooth quark counting rule, as seen, e.g., in the 90-degree differential cross sections for gammap-->pi(+)n.
1. Field of the Invention The present invention relates to an image recording apparatus based on the exposure of a photosensitive recording medium to light, and particularly to the adjustment of the level of exposure. 2. Description of the Related Art An image recording apparatus of this type incorporates a light exposing device which irradiates light from a light source onto the original text and exposes the photosensitive medium to the light which has been subjected to the influence of the original text. Conventionally, an incandescent lamp, such as a halogen lamp as disclosed in U.S. Pat. No. 4,806,984, or a fluorescent lamp is used for the light source, and the light output of the light source is varied for the adjustment of the level of exposure, thereby adjusting the contrast of the recorded image or the like. However, in case an incandescent lamp is used for the light source, a change in the light output creates a variation in the hue of the light, resulting in deviation from the range of wavelength sensitivity of the photosensitive medium. The photosensitive medium has its sensitive wavelength range determined by the composition of the medium, and deviation from that wavelength range can cause such improprieties as the collapse of color balance. Although a fluorescent light source retains its wavelength even if the light output is changed, the adjustment of its light output necessitates complex circuitry, resulting in an expensive apparatus.
Q: Gradle build prints extraneous output I have the following basic build.gradle script: task count << { 4.times { print "$it-" } } When I run it in quiet mode, it intermittently prints extraneous text, like the phrase 0% CONFIGURING or 0% EXECUTING: C:\gradle-test>gradle -q count 0-1-2-3-------> 0% CONFIGURING [0s] C:\gradle-test>gradle -q count 0-1-2-3- C:\gradle-test>gradle -q count 0-1-2-3-------> 0% EXECUTING [0s] Why does this extra text print arbitrarily and what does it mean? A: Text like 0% EXECUTING is the status bar, which is displayed when Gradle runs in rich console mode (the default mode when the Gradle build process is attached to a console); see the documentation here: https://docs.gradle.org/current/userguide/command_line_interface.html#rich_console Why does this extra text print arbitrarily? Because your build script is so simple that it executes very fast, and "sometimes" the build finishes before the status bar has had time to be displayed (this is my interpretation; I reproduced it, for example, when calling task clean on very simple projects). EDIT: the status bar is displayed even in "quiet" mode. If you want to disable it, you can configure the "plain text" console mode with the Gradle command-line option --console=plain
'Relief and elation' for Rhinos Leeds can party like it is 1999 after lifting a 15-year weight from their shoulders by winning the Challenge Cup at Wembley. Since the Rhinos beat London to win rugby league's most famous competition in the last final at the old national stadium, they have lost six successive showpieces - a statistic made all the more startling when it is put alongside the six Grand Finals they have won in that time. With each defeat the desperation to win the cup has increased and all that pressure was released as Castleford were beaten 23-10 to give Brian McDermott's men the silverware they craved the most. "It will take a while to come up with the right phrase to sum up the feeling," the coach said. "We have been striving for this for so long, the club has, and to eventually get it is a big feeling. "For the players it's a sense of relief, elation. They're an emotional bunch right now but we feel like we have delivered something. I feel like I have delivered something. We have picked up six silver medals but now we have a gold, and that's a big feeling." Leeds have been the kings of the league in the last decade, with Old Trafford a second home to them. Their cup struggles, though, have bewildered them and amused rival supporters. "A lot of this (feeling) is down to the history," McDermott added. "It's the journey there; I haven't coached every one of those six losses but I feel it, you feel that mounting pressure. "This is a different feeling entirely." Leeds never looked like losing a game they controlled from the time Tom Briscoe opened the scoring five minutes in. Two outstanding finishes from Ryan Hall supplemented some incessant work from their forward pack and, when half-back Danny McGuire - who saw the game out with broken ribs - slotted a drop-goal late on, they knew they were home. 
"This is right up there, as to lose the other finals has been horrible," said captain Kevin Sinfield, who has become so accustomed to hoisting the Super League trophy above his head. "To finally get our hands on this is really special; not just for me and the guys who have lost here before, but the players, our family, friends, the coaches, the club and the fans. They kept coming down here, spending their hard-earned money and then driving back up the M1 disappointed. "The word for me is perseverance. It has been 15 hard years to get this back - it's too long. We haven't deserved it (in those 15 years) but this is special. "We stuck at it, continued what we were doing. For the last 15 years the club and the fans have persevered through quite a bit of adversity and we stand here smiling, which is great." Daryl Powell was a member of that 1999 Leeds side but was on the other side of the fence on Saturday, plotting their downfall as Castleford coach. The boyhood Cas fan was unable to stop his old employers earning the win, though, admitting that his men were just not good enough on the day. Much was made of the lack of big-game experience in the Tigers side ahead of kick-off, but Powell thought Leeds were simply too quick and too cohesive. "We recognised there was a challenge laid down in the first half by Leeds that we didn't go with particularly well," he said. "We did show courage, determination, but we were never quite good enough to score enough points to put Leeds under the intense pressure we needed to. "The first half was disappointing. Leeds were very good, their kicking game excellent, they challenged us in certain areas of the field and our response was never really good enough." Castleford have been one of the stand-out teams of the season and remain firmly in the thick of a battle to win the Super League title. 
It is not inconceivable that they could meet Leeds again at Old Trafford in October, and Powell is keen to make sure that his men learn the lessons of this big-stage loss. "The feedback from them is that the preparation was great, so we need to see what the reasons were that we didn't deliver," he added. "We needed to see why, as a group, did we not deliver our best performance. Our best performance would have made it a tight game. We have to learn."
1. Technical Field The present invention relates to connectors, in particular, to a connector for connecting to a connecting portion arranged at a distal end of a flexible printed circuit board. 2. Related Art Conventionally, for a connector, for example, as shown in Japanese Patent Application Laid-Open No. 2002-190360, there is a printed wiring board connector, including a housing with a substrate insertion groove to be inserted with a printed wiring substrate having a terminal portion arranged with numerous printed wiring terminals on the front and the back, in which numerous contact pieces that oppose in a front and back direction of the printed wiring terminal in the substrate insertion groove of the housing and sandwich the terminal portions of the printed wiring substrate are lined, where one of the opposing contact pieces is formed as a sandwich operation contact piece and the other contact piece is separately formed at a position facing the sandwich operation contact piece as an opposing side contact piece, a sandwich operation that operates the sandwich operation contact piece to a side of sandwiching the terminal portion of the printed wiring substrate inserted in the substrate insertion groove is arranged, a terminal piece is integrally formed at each contact piece, and each terminal piece is projected to outside the housing, (refer to, for example, Japanese Patent Application Laid-Open No. 2002-190360).
All vehicle trajectory data files are available from the NGSIM Web site at: <https://www.its-rde.net/index.php/data/searchdata?tagid=163>. Introduction {#sec001}
============
The continuous growth of motor vehicles has made urban congestion a serious national problem, one which has been receiving considerable attention from engineers, planners, researchers, and policymakers. The congestion issue in urban areas has always been intertwined with other concerns which significantly affect our quality of life, such as air quality, urban noise, energy use, road safety and economic growth \[[@pone.0190616.ref001]\]. Traditionally, congestion can be categorized as recurrent and non-recurrent (incident-based). Of the two, recurrent congestion influences road operation in a significant way and contributes to a large portion of urban traffic delay \[[@pone.0190616.ref002], [@pone.0190616.ref003]\], and its quantitative characterization has always been important for managing traffic in the urban context. Studies on quantifying congestion abound in the literature, as people have attempted different approaches to address it. For instance, measures with statistical perspectives were investigated by the Federal Highway Administration. Lindley \[[@pone.0190616.ref004]\] promoted and studied the effectiveness of potential solutions to congestion. The statistical analysis indicates that demand reduction strategies should be effective when looking for potential solutions. The study provides a first cut at estimating cost and congestion reduction potential given the available options. Moreover, the Highway Performance Monitoring System (HPMS) \[[@pone.0190616.ref005]\] provided a solid database for statistical analysis of congestion. The work also estimates the aggregated impact of several techniques for reducing freeway congestion. D'abadie and Ehrlich \[[@pone.0190616.ref006]\] discussed various approaches for quantifying congestion and their effectiveness. 
They also compared two measures of congestion (distance-based and time-based) to describe the magnitude of congestion in a case study of New Jersey counties. The results showed that the time-based approach is more likely to have a high impact, as it effectively provides a different perception of congestion and also stronger guidance on major issue identification. Also, Milojevic and Rakocevic \[[@pone.0190616.ref007]\] proposed an algorithm for vehicular ad hoc networks (VANETs) to enable vehicles in the network to be aware of the level of traffic congestion in a distributed way. The work tackles the congestion issue by enhancing inter-vehicle communication to prevent congestion in its early form and to provide drivers with overall knowledge about congestion. On the other hand, Armah et al \[[@pone.0190616.ref008]\] attempted to study congestion and one of its side effects, air pollution, with a systemic approach. They provided overall systemic-thinking flowcharts on the urban congestion issue, but the assessment was largely qualitative. Kerner et al \[[@pone.0190616.ref009]--[@pone.0190616.ref011]\] conducted a series of deep investigations into bottleneck congestion and proposed a three-phase traffic theory for controlling and tracking spatial-temporal congestion in highway traffic patterns. There are many others; an incomplete list includes: \[[@pone.0190616.ref012]--[@pone.0190616.ref015]\]. Despite such an ample range of methods and approaches, an investigation emphasizing a comprehensive systemic perspective is still missing for quantitative congestion assessment. 
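The distance-based versus time-based distinction discussed above can be made concrete with a toy calculation; the function names and numbers below are illustrative assumptions, not values from the cited studies:

```python
def travel_time_index(actual_time_min: float, free_flow_time_min: float) -> float:
    """Time-based measure: ratio of actual to free-flow travel time."""
    return actual_time_min / free_flow_time_min

def congested_share(congested_miles: float, total_miles: float) -> float:
    """Distance-based measure: fraction of road length operating in congestion."""
    return congested_miles / total_miles

# A hypothetical 10-mile corridor: 4 miles congested; a trip takes
# 18 minutes versus 12 minutes under free-flow conditions.
tti = travel_time_index(18, 12)   # 1.5 -> trips take 50% longer than free flow
share = congested_share(4, 10)    # 0.4 -> 40% of the corridor is congested
```

The same corridor can thus look mildly congested by one measure and severely congested by another, which is the "different perception" the comparison highlights.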
The concept of resilience originated in engineering mechanics, where it can be traced back to the early 19th Century \[[@pone.0190616.ref016]\], and is currently found in a wide range of areas \[[@pone.0190616.ref017]\] including engineering systems \[[@pone.0190616.ref018]\], ecology \[[@pone.0190616.ref019]\], psychology \[[@pone.0190616.ref020], [@pone.0190616.ref021]\], social science \[[@pone.0190616.ref022]\] and so forth. Even though the concept is still struggling to reach an agreed definition \[[@pone.0190616.ref023]\], it is most commonly described as the ability of a system to cope with disturbance and recover its functionality afterwards \[[@pone.0190616.ref024]\]. In this vein, Bruneau et al \[[@pone.0190616.ref025]--[@pone.0190616.ref027]\] proposed a quantitative framework to assess system resilience with the "Resilience-Triangle" based on the level of functionality performance, the so-called "R4" framework (Robustness, Redundancy, Resourcefulness, and Rapidity). They argued that the resilience loss of system functionality can be assessed by calculating the area of the triangle on the time-series performance curve: a larger triangle area denotes a less resilient system. Congestion in traffic flow lends itself to a similar treatment, although some of the framework's fundamental dimensions should be adjusted for traffic congestion studies. A key difference from previous proposals is that, in this paper, the quantification of congestion is addressed using a newly built resilience-based metric that consists of multiple dimensions, combining an emerging concept with a conventional issue to provide a novel solution. Our criterion is based on rethinking an urban highway as an integrated system, with traffic quantities as indicators of its functionality. Hence, we examined and improved the "R4" framework, and adopted the "triangle" idea to quantify congestion with a resilience-oriented approach to spatial-temporal performance. 
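The "Resilience-Triangle" area metric can be sketched numerically: integrate the gap between nominal and observed performance over time. This is a generic illustration of Bruneau et al's idea under a simple trapezoidal-rule assumption, not the paper's exact formulation; the sample performance curve is invented:

```python
def resilience_loss(times, performance, nominal=1.0):
    """Area between the nominal performance level and the observed performance
    curve, computed with the trapezoidal rule. A larger area indicates a less
    resilient system."""
    loss = 0.0
    for (t0, p0), (t1, p1) in zip(zip(times, performance),
                                  zip(times[1:], performance[1:])):
        gap0, gap1 = nominal - p0, nominal - p1
        loss += 0.5 * (gap0 + gap1) * (t1 - t0)
    return loss

# Performance drops to 0.5 at t=1 and recovers linearly by t=3,
# tracing out a triangle-like dip below the nominal level of 1.0.
times = [0, 1, 2, 3, 4]
perf = [1.0, 0.5, 0.75, 1.0, 1.0]
print(resilience_loss(times, perf))  # 0.75
```

For traffic congestion, "performance" could for instance be a normalized speed or flow quantity, so the same area calculation carries over to spatial-temporal cells.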
Materials and methods {#sec002}
=====================
Data descriptions and conceptual discrete platform {#sec003}
--------------------------------------------------
All three datasets used for the numerical studies were collected by the Next Generation Simulation Programme of the United States Federal Highway Administration \[[@pone.0190616.ref028]\]. The datasets contain detailed time-resolution vehicle trajectory information, including trajectory location, time, speed, acceleration, etc. Traffic in the first dataset was monitored on eastbound Interstate 80 (I-80) in the San Francisco Bay area near Emeryville, CA, on 13 April 2005. The study area is 1650 feet (approx. 503m) long and comprises six freeway lanes that include one heavy-goods vehicle (HGV) lane and one on-ramp ([Fig 1A](#pone.0190616.g001){ref-type="fig"}). The full dataset covers a span of 45 minutes in total and is segmented into three 15-minute subsets, i.e., 4:00 p.m. to 4:15 p.m., 5:00 p.m. to 5:15 p.m., and 5:15 p.m. to 5:30 p.m. ![The study areas (not to scale) and discrete platform.\ Aerial view of lane configuration of (A) I-80; (B) US-101; and (C) Lankershim Boulevard, LB. (D) Conceptual cells constructed for spatial-temporal analysis.](pone.0190616.g001){#pone.0190616.g001} The vehicle trajectory data in the second dataset was collected on southbound freeway US-101, also known as the Hollywood Freeway in Los Angeles, on 15 June 2005. The study area is approximately 2100 ft (approx. 640m) in length and consists of five mainline lanes throughout the section and one auxiliary lane as lane 6 ([Fig 1B](#pone.0190616.g001){ref-type="fig"}). A total of 45 minutes of data from the morning peak is likewise segmented into three 15-minute periods: 7:50 a.m. to 8:05 a.m.; 8:05 a.m. to 8:20 a.m.; and 8:20 a.m. to 8:35 a.m. 
The first two freeway cases both contain various vehicle types; because normal traffic in the HGV lane and on the ramps differs from that in the other lanes, those lanes were excluded from our consideration. More details on these two study areas can be found in \[[@pone.0190616.ref029], [@pone.0190616.ref030]\]. While the first two cases were selected from freeway vehicle data, the third dataset comes from a section of an urban arterial. The data were collected on Lankershim Boulevard (LB) in the Universal City neighborhood of Los Angeles, CA, on 16 June 2005. This arterial area covers three signalized junctions, is about 1600 ft (approx. 500 m) long, and contains three to four lanes in each direction ([Fig 1C](#pone.0190616.g001){ref-type="fig"}). The observation period was 30 minutes in total during the morning peak: 8:30 a.m. to 8:45 a.m. and 8:45 a.m. to 9:00 a.m. The data contain various vehicle types and lane layouts; no vehicle or lane type was excluded from the analysis of this case, although the main portion of traffic was still passenger vehicles \[[@pone.0190616.ref031]\]. [Table 1](#pone.0190616.t001){ref-type="table"} summarizes the basic information about all datasets. Because we would like to capture steady and comprehensive patterns and to avoid inactive cells in the spatial-temporal profiles, the first and last 150 seconds in the temporal dimension and the first and last 100 feet (approx. 30.5 m) in the spatial dimension were removed. 10.1371/journal.pone.0190616.t001 ###### Brief summary of datasets \[[@pone.0190616.ref029]--[@pone.0190616.ref031]\].
![](pone.0190616.t001){#pone.0190616.t001g}

| ID & type | Time span | Direction | Length, vehicle, and lane types |
|-----------|-----------|-----------|---------------------------------|
| I-80 (freeway) | 45 mins: 16:00 to 16:15 (working hours) and 17:00 to 17:30 (evening peak hours) | Eastbound | 1650 ft; four passenger-vehicle lanes plus one HGV lane and one ramp lane; freeway lanes WITHOUT signal control |
| US-101 (freeway) | 45 mins: 7:50 a.m. to 8:35 a.m. (morning peak hours) | Southbound | 2100 ft; five passenger-vehicle lanes and one ramp lane; freeway lanes WITHOUT signal control |
| Lankershim Boulevard, LB (urban street) | 30 mins: 8:30 a.m. to 9:00 a.m. (morning peak hours) | Dual way | 1600 ft; three to four main lanes for mixed passenger vehicles, trucks, and motorcycles; three to six lanes at junctions; WITH signal-controlled junctions |

As shown in [Fig 1D](#pone.0190616.g001){ref-type="fig"}, the development of the conceptual platform begins with establishing the discrete cells. The study areas were discretized into cells of 4 seconds × 70 feet (approx. 21.34 m). These dimensions were calibrated to ensure an efficient discretization: if the cells were too small, the number of vehicles in each cell would not be representative; if they were too large, the propagation pattern of congestion would become ambiguous (see details in the sensitivity test section). Because the spatial dimensions of the raw data are expressed in feet (ft), our results use the same unit for consistency; where possible, values have also been converted into the International System of Units.
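As a concrete illustration of this discretization, the following sketch (a hypothetical helper, not part of the original study's code) bins trajectory samples into 4 s × 70 ft cells and counts the distinct vehicles per cell:

```python
import numpy as np

# Hypothetical sketch: assign trajectory samples to 4 s x 70 ft
# spatial-temporal cells, then count distinct vehicles per cell.
DT, DX = 4.0, 70.0  # cell dimensions used in the paper (seconds, feet)

def cell_counts(times, positions, vehicle_ids, t0, x0, n_t, n_x):
    """Count distinct vehicles in each (time, space) cell of an n_t x n_x grid."""
    ti = ((np.asarray(times) - t0) // DT).astype(int)
    xi = ((np.asarray(positions) - x0) // DX).astype(int)
    counts = np.zeros((n_t, n_x), dtype=int)
    seen = set()
    for t, x, vid in zip(ti, xi, vehicle_ids):
        if 0 <= t < n_t and 0 <= x < n_x and (t, x, vid) not in seen:
            seen.add((t, x, vid))  # count each vehicle once per cell
            counts[t, x] += 1
    return counts
```

The per-cell counts `n` are what the density computation `k = n / l` in the following sections operates on.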
Resilience-oriented approach {#sec004} ---------------------------- The performance of a system decreases after a shock and, if possible, recovers within a certain time. This can be observed in many cases, such as the formation and dissolution of congestion in traffic. Accordingly, the "R4" resilience-triangle metric \[[@pone.0190616.ref025]\] was proposed upon a very straightforward proxy: a system's resilience loss is its loss of performance. It is defined by (a) the draw-down line (the downturn section, from the performance level prior to the shock to the lowest level after it), (b) the draw-up line (the recovery section), and (c) the time required for the whole process (from the head of the draw-down line to the end of the draw-up line). A pair of down-and-up lines forms a draw-down and draw-up cycle. It is therefore sensible to use the triangle's area to represent the resilience loss (area of triangle Δ*ABD* in [Fig 2](#pone.0190616.g002){ref-type="fig"}). Nevertheless, we split this triangle into two segments, Resilience Loss (RL) and Resilience Gain (RG), since it is more informative to consider the downturn and the upturn separately. In this way, congestion can be effectively represented by the RL in time-series traffic performance. ![Typical draw-down and draw-up cycle.\ In this case, an external shock occurs at time *t*~*pre*~ and the performance recovers at *t*~*post*~. A time-series performance can have several cycles, as the process is dynamic. The grey band is the Robustness Range, which can be dynamic and adaptive in each cycle, as in $P_{(t)}^{\prime}$. Δ*ABD* represents the "Resilience-triangle".
Colour-pattern shades denote the areas considered in our quantification metric, and the fundamental dimensions are defined accordingly.](pone.0190616.g002){#pone.0190616.g002} Although the "R4" framework set a good paradigm for characterizing system resilience, it overlooks the effects of different recovery paths and other essential fundamental dimensions. It is commonly held that a system's performance can follow four possible recovery paths: adaptive recovery, just recovery, insufficient recovery, and collapse. Hence, we improve the framework and establish novel dimensions for congestion assessment as follows. Given that: - Function *P*~(*t*)~ represents the performance behavior of just recovery, the normal case for a system's performance, and $P_{(t)}^{\prime}$ is another possible recovery path with adaptive recovery. - The head and tail of the draw-down section are denoted *t*~*pre*~ (pre-event) and *t*~*event*~ respectively, and the successive draw-up terminates at time *t*~*post*~ (post-event). We define the following fundamental dimensions in terms of congestion: **Elasticity Threshold (ET)**: Similar to the concept of elasticity in material mechanics, traffic performance should have a threshold at which the self-organizing ability and the free-flow state start to deteriorate. A variety of studies suggest the existence of phase transitions in traffic \[[@pone.0190616.ref032]--[@pone.0190616.ref034]\]. We assume that a mild loss of elasticity leaves the performance above the ET, whereas with excessive elasticity loss the performance falls below the ET, and extra effort is then needed to push it back into the elastic region. In this study, the values of ET were determined from the critical density of the traffic data (details on the determination of ET can be found in the following sections).
**Robustness Range (RR)**: Of particular note is the fact that a certain range of robustness ubiquitously exists (e.g., blood pressure is considered acceptable within a certain range). A system's performance naturally varies in time with tolerable fluctuations. Because our target is recurrent congestion, we need to identify the extent to which a decrement in performance can be considered congestion rather than a random oscillation of the traffic. In principle, we assume that a drop or rise is not effective for our quantification as long as it stays within the RR. The width of the range is defined as 1/10 of the ET in this analysis. Note that, unlike the ET, which is fixed for the entire time series, the RR can be dynamically updated in each cycle. **Congestion Magnitude (*C*~*m*~)**: This is a straightforward dimension that indicates the extent to which recurrent congestion occurs. Note that half of the RR is excluded from the calculation of *C*~*m*~, since only the amount of drop outside the RR is effective for quantification purposes. Thus the effective draw-down starts at $t_{pre}^{\prime}$. $$\begin{array}{r} {C_{m} = P_{(t_{pre}^{\prime})} - P_{(t_{event})}} \\ \end{array}$$ **Congestion Time (*C*~*t*~)**: Defined as the ratio of the congestion formation time to the total cycle time. Similarly, because of the effect of the RR, *C*~*t*~ is adjusted to run from $t_{pre}^{\prime}$ to *t*~*event*~. $$\begin{array}{r} {C_{t} = \frac{t_{event} - t_{pre}^{\prime}}{t_{post}^{\prime} - t_{pre}^{\prime}}} \\ \end{array}$$ **Recovery Scenario (*R*~*s*~)**: The recovery ability, i.e., the dimension that characterizes the recovery path in each draw-down and draw-up cycle.
In order to differentiate major congestion (insufficient recovery or collapse, i.e., congestion that is discharged only partially or never discharged) from other congestion (just and adaptive recovery, i.e., congestion that is mitigated and discharged completely), we define the sign of *R*~*s*~: negative (-) for insufficient recovery or collapse, and positive (+) for just and adaptive recovery. A positive *R*~*s*~, i.e., *P*~(*t*~*post*~)~ \> *P*~(*t*~*pre*~)~, denotes that even a severe congestion was followed by a sufficient discharging process after its formation. $$\begin{array}{r} {R_{s} = \left\{ \begin{array}{ll} {+ 1} & {\text{if}\ P_{(t_{post})} \geqslant P_{(t_{pre})}} \\ {- 1} & {\text{if}\ P_{(t_{post})} < P_{(t_{pre})}} \\ \end{array} \right.} \\ \end{array}$$ **Resistance coefficient (*R*~*e*~)**: A quantity that characterizes the effort input to resist the downturn tendency; it is strongly associated with the ET. If the minimum performance level drops below the ET, *R*~*e*~ takes a value greater than zero, because a large amount of effort must be input to resist the drop and more effort is needed to restore performance. *R*~*e*~ equals zero when the minimum level stays at or above the ET, i.e., when no phase transition occurs. The value of *R*~*e*~ is thus determined by the minimum performance level *P*~(*t*~*event*~)~. Note that *R*~*e*~ has no interaction with the draw-up section, as it is a dimension of the draw-down section: the effective resistance naturally occurs during the downturn, lasting until the performance reaches its minimum level, after which recovery can begin. $$\begin{array}{r} {R_{e} = \left\{ \begin{array}{ll} {ET - P_{(t_{event})}} & {\text{if}\ P_{(t_{event})} < ET} \\ 0 & {\text{if}\ P_{(t_{event})} \geqslant ET} \\ \end{array} \right.} \\ \end{array}$$ As mentioned, Δ*ABD* is split into RL and RG ([Fig 2](#pone.0190616.g002){ref-type="fig"}).
Generally speaking, RL represents the cumulative effect of resilience loss in the draw-down process (in our case, the draw-down denotes the formation process of congestion, because congestion is a type of performance loss in terms of traffic condition). Thus, by approximating the shaded areas as triangles, the Congestion Index (CI) of a time-dependent observation can be expressed as: $$\begin{array}{r} {\text{Congestion Index} = \left( \frac{C_{m} \times C_{t}}{2} + R_{e} \right) \times R_{s}} \\ \end{array}$$ The rationale of [Eq 5](#pone.0190616.e009){ref-type="disp-formula"} is as follows: a recurrent congestion pattern is depicted with two portions. One is the cumulative loss of its formation process, denoted (*C*~*m*~ × *C*~*t*~)/2; the other is the jamming severity contributed by the phase transition, *R*~*e*~. Both portions are associated with the dynamic, repeating form-and-resolve process (draw-down and draw-up cycles), so the term *R*~*s*~ is brought into play to depict the various recovery behaviors of the discharging process. Approaches based on travel time and volume-to-capacity ratio {#sec005} ------------------------------------------------------------ Although there is no common definition of traffic congestion \[[@pone.0190616.ref035]\], many approaches and measures have been developed to scale its magnitude and intensity. Traditionally, two approaches are particularly popular and well applied: travel-time based and volume-to-capacity (V/C ratio) based. Two measures are selected for metric comparison: the Relative Congestion Index (RCI) and the Level of Service (LoS). The RCI is conventionally defined as the ratio of the delay time (DT) to the free-flow travel time (*T*~*ff*~) \[[@pone.0190616.ref036]\]: $$\begin{array}{r} {RCI = \frac{DT}{T_{ff}} = \frac{T_{ac} - T_{ff}}{T_{ff}}} \\ \end{array}$$ where *T*~*ac*~ is the actual travel time needed.
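To make Eq 5 concrete, a minimal sketch (ours, not the authors' code) that evaluates Eqs 1-5 for a single draw-down and draw-up cycle might look as follows; the input levels and times are assumed to already be the effective (RR-adjusted) values:

```python
def congestion_index(p_pre, p_event, p_post, t_pre, t_event, t_post, et):
    """Hypothetical sketch of Eq 5: CI = (C_m * C_t / 2 + R_e) * R_s.

    p_* are (normalized) KPI levels at the effective cycle boundaries,
    t_* the corresponding times, and et the Elasticity Threshold.
    """
    c_m = p_pre - p_event                       # congestion magnitude (Eq 1)
    c_t = (t_event - t_pre) / (t_post - t_pre)  # congestion time ratio (Eq 2)
    r_s = 1 if p_post >= p_pre else -1          # recovery scenario (Eq 3)
    r_e = max(et - p_event, 0.0)                # resistance coefficient (Eq 4)
    return (c_m * c_t / 2 + r_e) * r_s
```

Feeding in the worked example of Fig 5 below (performance 0.39 at the effective cycle start, 0.1 at the trough, ET = 0.2, and an assumed recovered level above 0.39) reproduces CI ≈ +0.148.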
An RCI of zero denotes a very low level of congestion, while values greater than two indicate significantly congested states. Because our analysis is based on the spatial-mean performance of the traffic, *T*~*ac*~ and *T*~*ff*~ can also be obtained from spatial-mean quantities as: $$\begin{array}{r} {T_{ac} = \frac{\text{Spatial length}}{\text{Spatial-mean speed}}} \\ \end{array}$$ and $$\begin{array}{r} {T_{ff} = \frac{\text{Spatial length}}{\text{Free-flow speed}\ \left( v_{ff} \right)}} \\ \end{array}$$ The LoS approach is a more interpretable and straightforward measure for representing static traffic states. As adopted in the Highway Capacity Manual (HCM) \[[@pone.0190616.ref037]\], this method has become extremely popular in practice, especially among non-technical users \[[@pone.0190616.ref038]\]. The LoS can be determined from various traffic quantities, such as density, speed, V/C, and maximum service flow rate. Rather than assigning quantitative values, the LoS assesses traffic conditions using scale intervals ([Table 2](#pone.0190616.t002){ref-type="table"}). The V/C ratio can be calculated as: $$\begin{array}{r} {V/C = \frac{\text{Spatial-mean volume}}{N_{max}}} \\ \end{array}$$ where *N*~*max*~ is the maximum number of vehicles that one cell is able to contain, i.e., its capacity. This term can be approximated by assuming an average vehicle length occupancy. We write: $$\begin{array}{r} {N_{max} = \frac{L_{cell}}{L_{occupancy}} \times N_{lanes}} \\ \end{array}$$ 10.1371/journal.pone.0190616.t002 ###### Level of Service (LoS) and its corresponding V/C ratio and traffic states \[[@pone.0190616.ref037]\].
![](pone.0190616.t002){#pone.0190616.t002g}

| LoS class | Traffic state and condition | V/C ratio |
|-----------|---------------------------------------------------------|-----------|
| A | Free flow | 0∼0.60 |
| B | Stable flow with unaffected speed | 0.61∼0.70 |
| C | Stable flow but speed is affected | 0.71∼0.80 |
| D | High-density but stable flow | 0.81∼0.90 |
| E | Traffic volume near or at capacity level with low speed | 0.91∼1.00 |
| F | Breakdown flow | \>1.00 |

*L*~*cell*~ is the spatial length of the cells, *N*~*lanes*~ is the number of lanes, and *L*~*occupancy*~ is the average vehicle length occupancy, which comprises two parts: the vehicle length *L*~*v*~ and the safety distance *L*~*s*~. Because *L*~*v*~ is normally assumed to be about 14 ft (approx. 4.27 m) \[[@pone.0190616.ref039]\], we assume *L*~*occupancy*~ is about 15 ft (approx. 4.57 m). *N*~*lanes*~ is four for I-80 and five for US-101 (recall that the HGV and ramp lanes are not considered), and we take 4.5 lanes for both the northbound and southbound directions on LB to average over its varying lane layout along sections and at junctions. Once the V/C ratio is obtained, the LoS can be determined according to [Table 2](#pone.0190616.t002){ref-type="table"}. Although both measures are widely adopted in various studies, they unavoidably possess some weaknesses and disadvantages \[[@pone.0190616.ref038]\]: first, the RCI ratio is limited and relies heavily on the particular road type and facility; second, the LoS cannot provide a continuous range of values to represent the intensity of congestion. Results {#sec006} ======= In this section, the proposed metric is implemented and tested in empirical studies. Comparisons of measuring strength and metric sensitivity are investigated as well.
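The V/C-to-LoS mapping of Table 2 and the capacity approximation of Eq 10 can be sketched as follows (a hypothetical helper using the thresholds and parameter values above, not code from the paper):

```python
# Hypothetical sketch: map a V/C ratio to an HCM-style LoS class using
# the thresholds of Table 2, and approximate cell capacity via Eq 10.
LOS_BOUNDS = [(0.60, "A"), (0.70, "B"), (0.80, "C"), (0.90, "D"), (1.00, "E")]

def level_of_service(vc_ratio):
    """Return the LoS class for a given V/C ratio."""
    for upper, cls in LOS_BOUNDS:
        if vc_ratio <= upper:
            return cls
    return "F"  # breakdown flow

def cell_capacity(l_cell_ft=70.0, l_occupancy_ft=15.0, n_lanes=4):
    """N_max = (L_cell / L_occupancy) * N_lanes (Eq 10), I-80 defaults."""
    return l_cell_ft / l_occupancy_ft * n_lanes
```

With the I-80 defaults (70 ft cells, 15 ft occupancy, four lanes), a cell holds at most about 18.7 vehicles, which is the denominator used when converting cell counts to V/C ratios.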
Having constructed and outlined all data descriptions, the testbed setup, and the methodological framework, the following steps act as a guideline through the entire experimental procedure and facilitate further analysis. **Step 1.** **Understanding the traffic data:** Unambiguous, fundamental properties of the data must be obtained, such as the critical density, jam density, and free-flow speed. **Step 2.** **Selecting an appropriate Key Performance Indicator (KPI) and preparing the spatial-temporal profiles:** The resilience-oriented approach is performance-based, and an appropriate KPI is needed to indicate the various performance levels. The spatial-temporal traffic patterns are also obtained for exploratory analysis. **Step 3.** **De-noising, normalizing, and identifying filtered draw-down and draw-up cycles:** In this step, the selected KPI is first de-noised and normalized, and reasonable forming-and-discharging congestion cycles are then identified. **Step 4.** **Estimating values for the Elasticity Threshold (ET) and Robustness Range (RR):** These parameters are set next, since many elemental functions of the proposed metric rely on them. **Step 5.** **Implementing the metrics and further analysis:** The calculations and measurements are conducted, and further sensitivity tests are presented. Jam density, critical density and free-flow speed {#sec007} ------------------------------------------------- With discrete cells conceptualized over the study area, the first-order traffic quantities (density, speed, and flow) can be determined. The density *k*~(*i*,*j*)~ within each cell *C*~(*i*,*j*)~ was computed as *k*~(*i*,*j*)~ = *n*~(*i*,*j*)~/*l*~(*i*,*j*)~, where *n*~(*i*,*j*)~ denotes the number of vehicles in cell *C*~(*i*,*j*)~ at time *i* and location *j*, and *l*~(*i*,*j*)~ is the spatial length of the cell, in this case a fixed 70 ft (approx. 21.34 m).
The dataset also contains speed information at each trajectory point, so *v*~(*i*,*j*)~ was estimated by averaging the speeds of all trajectory points in *C*~(*i*,*j*)~. The flow in each cell was calculated as the product of speed and density, *q*~(*i*,*j*)~ = *k*~(*i*,*j*)~ × *v*~(*i*,*j*)~. In [Fig 3A1](#pone.0190616.g003){ref-type="fig"}, *k*~*jam*~ is roughly estimated as 0.30 veh/ft for I-80. To verify this, we applied a linear regression model to the density-speed plot ([Fig 3B1](#pone.0190616.g003){ref-type="fig"}) and found that its intersection with the x-axis corroborates the estimated jam density. In the same way, the jam density for the US-101 case can be approximated as 0.33 veh/ft, and as 0.30 veh/ft for both the northbound and southbound directions in the LB case. ![Density-flow and density-speed relationships of the fundamental diagrams.\ (A) Density-flow relationship. (B) Density-speed relationship. (1) Case I-80. (2) Case US-101. (3) Northbound of LB, and (4) southbound of LB. Red line: linear regression model; red dotted line: linear approximation of the free-flow phase, whose slope is the free-flow speed, which can also be determined from the maximum speed in the density-speed plots. Black dotted lines: envelopes constructed to contain more than 95% of the data; the vertical black dotted line marks the location of the estimated critical density.](pone.0190616.g003){#pone.0190616.g003} The critical density, *k*~*critical*~, can likewise be determined from each density-flow plot. It lies at the point where the traffic state transitions from the free-flow phase to the congested phase (for simplicity, we consider only the traditional two-phase traffic theory; the three-phase traffic theory \[[@pone.0190616.ref040]\] is not discussed). It was therefore estimated by finding the crossing point of the linear approximation of the free-flow phase and the upper envelope of the congested phase while keeping a high data containment.
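The x-intercept idea behind the jam-density check can be sketched with a simple linear fit; the data below are synthetic (a Greenshields-like line with a known answer), while the actual estimates above come from the NGSIM measurements:

```python
import numpy as np

# Hypothetical sketch: estimate jam density as the x-axis intercept of a
# linear fit to density-speed data (speed reaches zero at k_jam).
def estimate_jam_density(density, speed):
    slope, intercept = np.polyfit(density, speed, 1)
    return -intercept / slope  # solve slope * k + intercept = 0

# Synthetic data with a known jam density of 0.30 veh/ft
k = np.linspace(0.02, 0.28, 50)
v = 60.0 * (1.0 - k / 0.30)   # illustrative free-flow speed of 60 ft/s
k_jam = estimate_jam_density(k, v)  # close to 0.30 veh/ft
```

On real, noisy density-speed scatter the fit would of course only approximate the intercept, which is why the paper cross-checks it against the visual estimate from the fundamental diagram.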
In addition, the slope of this linear approximation in the free-flow phase is the free-flow speed *v*~*ff*~, or forward wave speed. This quantity can be verified against the maximum speed fitted by the linear regression of the density-speed relationship. Thus, *k*~*critical*~ and *v*~*ff*~ were estimated as 0.15 veh/ft and 40 ft/s for I-80, 0.20 veh/ft and 65 ft/s for US-101, and 0.18 veh/ft and 52 ft/s for both the northbound and southbound directions of LB, respectively. Key Performance Indicator (KPI) and spatial-temporal density performance {#sec008} ------------------------------------------------------------------------ Next, we need to illustrate the overall performance with an appropriate measurement. Such measurements identify the current performance state of the system and indicate how and where gaps exist between the current and the desired Level of Performance (LoP) \[[@pone.0190616.ref041]\]. A Key Performance Indicator (KPI) is a single performance measurement, or a set of measurements, deliberately selected to represent the LoP \[[@pone.0190616.ref042]\]. The selection criteria should ensure that (1) the selected KPI ties into the overall study purpose and goals; (2) the KPI directly reflects changes in the LoP over time; and (3) the KPI allows measurable tracks to be established for management. In our cases, we used the aggregated spatial-mean density capacity as the KPI. It denotes the spatial-mean capacity of a road section to accommodate traffic and can be defined as $k_{(i)}^{\prime} = k_{jam} - {\overline{k}}_{(i)}$, where ${\overline{k}}_{(i)}$ is the spatial-mean density of the study area at time *i*. We selected density capacity as the KPI for recurrent congestion because it is a direct, measurable, and representative indicator of traffic: drops in this KPI indicate system performance loss, as decreasing density capacity represents the formation of congestion, which is consistent with the logic of the proposed metric.
Once the KPI is determined, the analysis of spatial-temporal patterns can be conducted accordingly. This analysis technique is common in congestion studies, as it is useful for identifying congestion and offers a direct visualization of the traffic conditions within the study area. [Fig 4](#pone.0190616.g004){ref-type="fig"} illustrates the process from the construction of the spatial-temporal profile to the KPI conversion in the I-80 case. With its clear visual indications, the reconstructed spatial-temporal map enables one to identify jamming patterns quickly and facilitates further analysis. ![I-80 spatial-temporal pattern and KPI.\ (A) Full range of the spatial-temporal density profile, which contains a 45-minute time gap in the middle, with I-80 16:00-16:15 before the gap and I-80 17:00-17:30 after it. (B) Aggregated spatial-mean density plot. (C) The KPI density capacity; note that it forms a mirror image of the density performance.](pone.0190616.g004){#pone.0190616.g004} Congestion Index (CI) {#sec009} --------------------- In this section, we implement and test all metrics with respect to their measuring strength and analyze their comparative performance. The "sgolay" (Savitzky-Golay) smoothing algorithm in MATLAB \[[@pone.0190616.ref043]\] was applied to smooth and de-noise the KPI, since vehicle trajectory data are usually collected with unavoidable background noise. Prior to identifying the draw-down and draw-up cycles, it is preferable to normalize the KPI to a uniform scale so that it falls in the range \[0, 1\]. Here, the normalized KPI was obtained by simple statistical normalization of the spatial-mean density capacity at each time step, $k_{(i)}^{\prime}$, against the maximum density capacity, $k_{max}^{\prime}$. Thus, the normalized KPI of the study area at time *i* is realized as $k_{(i)}^{\prime}/k_{max}^{\prime}$.
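The KPI preparation just described can be sketched as follows. Note that the paper uses MATLAB's Savitzky-Golay smoother; a plain moving average stands in for it here, so this is an illustrative analogue rather than the authors' exact pipeline:

```python
import numpy as np

# Hypothetical sketch: compute the KPI (density capacity), de-noise it,
# and normalize to [0, 1]. A moving average stands in for the paper's
# Savitzky-Golay ('sgolay') smoothing step.
def normalized_kpi(mean_density, k_jam, window=5):
    kpi = k_jam - np.asarray(mean_density, dtype=float)  # k'_(i) = k_jam - mean k_(i)
    kernel = np.ones(window) / window
    kpi = np.convolve(kpi, kernel, mode="same")          # de-noising step
    return kpi / kpi.max()                               # k'_(i) / k'_max
```

The output series is what the draw-down/draw-up identification in the next step operates on; its maximum is 1 by construction.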
The identification of the draw-down and draw-up cycles was then conducted following studies \[[@pone.0190616.ref044], [@pone.0190616.ref045]\] of the *ϵ*-filtering algorithm, which detects significant upturns and downturns using a constant threshold of *α*% on their magnitude. The reasons for performing this filtering before implementing the metric are as follows: (1) the proposed metric is constructed on draw-down and draw-up cycles, so one should ensure that all identified cycles are representative of recurrent congestion patterns in the spatial-temporal profile; (2) without the *ϵ*-filtering process, every single fluctuation would yield a pure draw-down or draw-up, and it is clearly unnecessary for such insignificant oscillations to participate in the congestion measurement. The detection of pure downs and ups is, however, still required before the *ϵ*-filter is applied. The *α* in the *ϵ*-filtering algorithm was set to 50%, since we were only interested in significant congestion (i.e., a draw-down/draw-up is recognized only if its magnitude is more than half of that of its preceding draw-up/draw-down; a simplified pseudocode is given in [S1 Code](#pone.0190616.s001){ref-type="supplementary-material"}). Taking I-80 as an illustrative example, the identification process returned 18 recognizable draw-down and draw-up cycles, indicating that 18 congestion patterns were detected. The initial values of the Elasticity Threshold and the Robustness Range were determined from the critical density *k*~*critical*~, because it is the threshold at which the phase transition occurs. For the normalized KPI, these two parameters were also normalized to keep the scale consistent.
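The significance test at the heart of the filtering step can be illustrated with a small sketch; this is our loose reading of the cited *ϵ*-filter idea, not the S1 Code pseudocode itself:

```python
# Hypothetical sketch: given alternating local extrema of the KPI, keep a
# draw-down/draw-up only if its magnitude is at least alpha times that of
# the move immediately preceding it (alpha = 0.5 in the paper).
def significant_moves(extrema, alpha=0.5):
    """Return indices (into the move list) of significant moves."""
    moves = [abs(b - a) for a, b in zip(extrema, extrema[1:])]
    kept = []
    for i in range(1, len(moves)):
        if moves[i] >= alpha * moves[i - 1]:
            kept.append(i)
    return kept
```

For the extrema sequence `[1.0, 0.4, 0.9, 0.85, 0.2]`, the tiny 0.9 → 0.85 dip fails the 50% test and is filtered out, while the two large reversals are kept, which is the behavior the paper relies on to suppress insignificant oscillations.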
Recall that *k*~*critical*~ for the I-80 case was determined as 0.15 veh/ft; the density capacity at this threshold is *k*′ = *k*~*jam*~ − *k*~*critical*~ = 0.27 − 0.15 = 0.12 veh/ft, and this critical capacity value is then normalized as $\text{ET} = k^{\prime}/k_{max}^{\prime} = 0.12/0.18 = 0.67$ ($k_{max}^{\prime}$ in I-80 is 0.18 veh/ft). The RR was assumed to be 10% of the ET. The ET and RR for US-101 and LB can be determined accordingly, and all metrics can then be implemented. A numerical example of how to calculate the CI with the proposed metric is given as a step-by-step demonstration in [Fig 5](#pone.0190616.g005){ref-type="fig"}. In this illustration, the ET is given as 0.2, and all key points for computing the elemental functions of the proposed metric are presented numerically. Therefore, we have: ![Illustrative example.\ Demonstration of an adaptive-recovery case for the Congestion Index (CI) calculation.](pone.0190616.g005){#pone.0190616.g005} 1. **ET** is given as 0.2. Therefore, **RR** = 1/10 × 0.2 = 0.02. 2. **Congestion Magnitude** $C_{m} = P_{(t_{pre}^{\prime})} - P_{(t_{event})} = 0.39 - 0.1 = 0.29$ 3. **Congestion Time** $C_{t} = \left( {t_{event} - t_{pre}^{\prime}} \right)/\left( {t_{post}^{\prime} - t_{pre}^{\prime}} \right) = \left( {2 - 1.5} \right)/\left( {3 - 1.5} \right) = 0.33$ 4. **Recovery Scenario** Because *P*~(*t*~*post*~)~ \> *P*~(*t*~*pre*~)~, the **Recovery Scenario** is *R*~*s*~ = 1, with a positive sign "+". 5. **Resistance Coefficient** Because *ET* \> *P*~(*t*~*event*~)~, *R*~*e*~ = 0.2 − 0.1 = 0.1. 6.
The overall congestion index for this example cycle is calculated as $\mathbf{\text{CI}} = \left( {\frac{0.29 \times 0.33}{2} + 0.1} \right) \times \left( {+ 1} \right) = + 0.148$ Following the normalization illustrated in [Fig 6A](#pone.0190616.g006){ref-type="fig"} and recalling Eqs [6](#pone.0190616.e010){ref-type="disp-formula"} and [9](#pone.0190616.e013){ref-type="disp-formula"}, the resulting CI, RCI, and LoS for I-80 are shown in [Fig 6B--6D](#pone.0190616.g006){ref-type="fig"}, with their statistics in [Table 3](#pone.0190616.t003){ref-type="table"}. Comparison with the ground-truth spatial-temporal patterns ([Fig 6E](#pone.0190616.g006){ref-type="fig"}) shows that all significant congestion patterns were captured by the CI metric. In order to represent a complete down-and-up cycle and to capture the local maxima in the RCI and LoS results, all congestion indexes were plotted at *t*~*event*~ of each cycle. In the first 200 time steps, no severe congestion occurred: the three notable patterns all have a CI below 0.2, an RCI below 2, and an LoS at level A. At around the 800th time step, however, several significant congestion events occurred, as the indexes quickly turn to negative readings with intensities above 0.2. This indicates that the traffic condition in the later I-80 observation (17:00 to 17:30) was far more congested than in the first 15 minutes (16:00 to 16:15). Moreover, the successive, large negative CI values denote insufficient discharging processes in these congestion cycles, which further supports the interpretation of their relative severity. 10.1371/journal.pone.0190616.t003 ###### The quantification results for the 18 draw-down and draw-up cycles and the congestion evaluation of all three metrics. The values for RCI and LoS are obtained by finding the local maximum at *t*~*event*~. ![](pone.0190616.t003){#pone.0190616.t003g}
| No. | *R*~*s*~ | Time slot at *t*~*event*~ | CI | RCI | V/C (LoS) |
|-----|----------|---------------------------|--------|-------|-----------|
| 1 | - | 49 | -0.011 | 0.321 | 0.391 |
| 2 | + | 62 | 0.003 | 0.618 | 0.463 |
| 3 | + | 93 | 0.032 | 1.248 | 0.565 |
| 4 | + | 126 | 0.063 | 1.534 | 0.573 |
| 5 | + | 165 | 0.076 | 1.600 | 0.613 |
| 6 | - | 657 | -0.001 | 0.580 | 0.477 |
| 7 | + | 692 | 0.025 | 1.007 | 0.512 |
| 8 | + | 718 | 0.005 | 1.747 | 0.589 |
| 9 | + | 748 | 0.027 | 1.660 | 0.587 |
| 10 | + | 827 | 0.398 | 6.569 | 0.809 |
| 11 | - | 907 | -0.007 | 0.758 | 0.477 |
| 12 | + | 929 | 0.153 | 2.431 | 0.656 |
| 13 | - | 955 | -0.013 | 0.629 | 0.466 |
| 14 | - | 961 | -0.009 | 0.744 | 0.477 |
| 15 | - | 970 | -0.008 | 0.745 | 0.485 |
| 16 | - | 1012 | -0.204 | 6.364 | 0.836 |
| 17 | - | 1051 | -0.202 | 4.501 | 0.761 |
| 18 | + | 1066 | 0.199 | 3.207 | 0.715 |

![Congestion indexes of I-80.\ Congestion quantification results of all three metrics compared with the ground-truth pattern. (A) Normalized KPI with *ET* = 0.67. (B) CI. (C) RCI. (D) LoS. (E) Ground-truth pattern. The positive and negative signs denote the different *R*~*s*~ of the down-and-up cycles, which provide dynamic information about congestion recovery.](pone.0190616.g006){#pone.0190616.g006} Comparing CI with RCI and LoS at the local maximum at *t*~*event*~ of each cycle, we found that the intensities indicated by CI, RCI, and LoS are similar. Unlike the latter two metrics, however, CI not only provides relative intensity differences among congestion patterns, but also reasonably amplifies the scale to differentiate major from minor congestion. For instance, three successive jam patterns occur around the 700th time slot; CI detects them as minor patterns with small values, while RCI and LoS assign them relatively high values (yet still quantify them as uncongested flow). A rule-of-thumb criterion for CI can thus be made: patterns are considered major congestion when the absolute value of their CI is greater than 0.2. Most importantly, unlike traditional congestion measures, the CI metric can also indicate the situation of the post-event recovery.
For instance, the short but negative indications are of particular interest. They indicate small-scale congestion with an insufficient discharging outcome; in other words, the I-80 freeway did not fully recover or completely dissolve the previous congestion queue before the next one occurred at that point. Such an implication can hardly be identified from spatial-temporal patterns by visual judgment or by conventional metrics such as RCI and LoS. Likewise, the small but positive indications illustrate immediate congestion formation with quick discharge. Together, they might serve as precursors of coming massive jams. [Fig 7](#pone.0190616.g007){ref-type="fig"} demonstrates the CI results for the US-101 case and for the northbound and southbound of LB. Tables [4](#pone.0190616.t004){ref-type="table"} and [5](#pone.0190616.t005){ref-type="table"} contain the numerical measuring results of all metrics. As illustrated in [Fig 7A and 7B](#pone.0190616.g007){ref-type="fig"}, the morning peak-hour traffic was somewhat less congested than expected, possibly because the southbound direction of US-101 is not in high demand in the morning (it leads away from attractors such as the city center). Even so, the CI metric still performs well in this case. In [Table 4](#pone.0190616.t004){ref-type="table"}, the absolute intensities of the quantified congestion again vary in a similar way to the outcomes obtained by the other two metrics. The only difference is that several congestion patterns are less significant under CI than under RCI and LoS, which could result from US-101 being less saturated, with rather quick discharging of its congestion patterns. 10.1371/journal.pone.0190616.t004 ###### The quantification results for US-101 using all three metrics. The values for RCI and LoS are obtained by finding the local maximum at *t*~*event*~. ![](pone.0190616.t004){#pone.0190616.t004g}
Time slot at *t*~*event*~ CI RCI LoS ----- --------------------------- -------- ------- ----------- 1 34 0.008 0.572 0.575 (A) 2 50 -0.007 0.505 0.512 (A) 3 60 0.001 0.594 0.538 (A) 4 87 -0.004 0.420 0.508 (A) 5 115 0.010 0.729 0.544 (A) 6 171 0.129 2.199 0.742 (B) 7 215 -0.008 0.420 0.494 (A) 8 257 0.110 2.078 0.736 (B) 9 317 -0.002 0.606 0.542 (A) 10 360 0.229 3.185 0.793 (B) 11 413 0.010 1.319 0.621 (B) 12 451 0.149 2.586 0.758 (B) 13 501 0.019 1.386 0.647 (B) 14 547 -0.065 2.986 0.789 (B) 15 585 0.029 1.757 0.714 (B) 16 608 -0.012 0.961 0.585 (A) 10.1371/journal.pone.0190616.t005 ###### The quantification results for Lankershim Boulevard (LB). The values for RCI and LoS are obtained by finding local maximum at *t*~*event*~. 18 recurrent and regularized patterns can be observed in both directions. ![](pone.0190616.t005){#pone.0190616.t005g} Northbound Southbound ------------ ------------ ------- ---------- ----------- ----- ------- ---------- ----------- 1 44 0.061 26.373 0.533 (A) 44 0.056 22.713 0.595 (A) 2 66 0.071 80.114 0.526 (A) 66 0.062 41.509 0.624 (B) 3 96 0.017 11.861 0.531 (A) 96 0.040 3356.005 0.552 (A) 4 121 0.041 25.501 0.524 (A) 121 0.071 1990.326 0.579 (A) 5 142 0.040 29.713 0.543 (A) 142 0.052 82.635 0.571 (A) 6 167 0.032 39.027 0.548 (A) 167 0.053 199.322 0.574 (A) 7 191 0.035 14.980 0.533 (A) 191 0.085 20.544 0.581 (A) 8 202 0.070 24.187 0.521 (A) 202 0.008 1.146 0.536 (A) 9 217 0.060 31.177 0.552 (A) 217 0.031 8.181 0.567 (A) 10 242 0.078 194.181 0.548 (A) 242 0.066 20.568 0.598 (A) 11 266 0.095 1618.633 0.567 (A) 266 0.054 13.711 0.583 (A) 12 293 0.068 61.873 0.598 (A) 293 0.066 25.947 0.598 (A) 13 318 0.053 74.996 0.610 (B) 318 0.147 1467.761 0.643 (B) 14 341 0.082 23.220 0.576 (A) 341 0.108 36.551 0.629 (B) 15 366 0.086 1226.453 0.552 (A) 366 0.083 29.909 0.614 (B) 16 392 0.052 66.122 0.569 (A) 392 0.027 905.763 0.631 (B) 17 416 0.067 15.108 0.574 (A) 416 0.075 72.630 0.617 (B) 18 445 0.047 26.754 0.586 (A) 445 0.066 998.680 0.574 (A) 
![Congestion indexes for US-101 and LB.\
Results of CI compared with the ground-truth pattern. (A), (C), and (E) are the ground-truth spatial-temporal profiles for US-101 and the northbound and southbound of LB. (B), (D), and (F) are the congestion indexes calculated by the proposed metric, correspondingly.](pone.0190616.g007){#pone.0190616.g007}

The results from the LB cases show interesting features ([Fig 7C--7F](#pone.0190616.g007){ref-type="fig"}). Because it is a section of an urban arterial with signal-controlled junctions and mixed groups of road users, regularized jam patterns can be clearly spotted. One may also notice that the directions of the propagation waves on the two bounds are distinct. Even so, the CI metric showed adequate measuring strength to characterize recurrent and controlled congestion patterns. Overall, the spatial-mean traffic condition on LB was unsaturated, without residual queues. In contrast, RCI performs badly in this case, as the values obtained at local maxima are dramatically high, as shown in [Table 5](#pone.0190616.t005){ref-type="table"}. This could be a result of the regularized traffic on this type of road. Signal-controlled junctions mean that the spatial-mean speed along the study area can be extremely small at some time steps if most of the vehicles are stopped by junction signals, and this leads to very high values of *T*~*ac*~ in [Eq 7](#pone.0190616.e011){ref-type="disp-formula"}. Since *T*~*ff*~ is constant, RCI can take a very large value when *T*~*ac*~ is large, which makes the indexes unrepresentative of the actual overall traffic condition in the study area. This confirms the shortcoming mentioned in the previous section.

Sensitivity analysis {#sec010}
--------------------

The sensitivity of the metric to its three critical parameters and to the cell size is evaluated in this subsection. Since the metric relies heavily on the determination of its parameters, the sensitivity of *ϵ*, RR, ET and cell size needs to be studied.
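The RCI failure mode described here is easy to reproduce numerically. A sketch, assuming the usual relative-congestion-index form RCI = (*T*~*ac*~ − *T*~*ff*~)/*T*~*ff*~ with travel times derived from a segment length and the spatial-mean speed (this is our reading of the surrounding text, not a quotation of Eq 7):

```python
def rci(v_mean, v_free, length=1.0):
    """Relative congestion index: (T_actual - T_freeflow) / T_freeflow."""
    t_ff = length / v_free   # free-flow travel time
    t_ac = length / v_mean   # actual travel time from the spatial-mean speed
    return (t_ac - t_ff) / t_ff

# As the spatial-mean speed collapses at a red signal, RCI explodes:
for v in (50.0, 10.0, 1.0, 0.1):
    print(v, round(rci(v, v_free=60.0), 1))   # 0.2, 5.0, 59.0, 599.0
```

Since RCI reduces to *v*~*free*~/*v̄* − 1 under this assumption, a near-zero spatial-mean speed at a single time step is enough to produce the three- and four-digit values seen in Table 5.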
There are two facets to this analysis, as we want to know how variation of these parameters affects (1) the number of congestion cycles detected, and (2) the measured absolute intensity of congestion. We tested *ϵ* from 0 to 0.6, i.e., from pure draw-down and draw-up to 60% of the filtering threshold. By sorting the absolute values of the congestion indexes in ascending order, [Fig 8A1--8A4](#pone.0190616.g008){ref-type="fig"} illustrates that the number of congestion indexes is significantly affected by *ϵ* (as its value increases, the number of identified cycles decreases). However, the scales of the indexes show low sensitivity once *ϵ* is established; in particular, for major congestion, changes in *ϵ* do not significantly alter its detection, and its scales remain roughly stable.

![Sensitivity analysis on *ϵ*, ET and RR values.\
(A) Sensitivity test on *ϵ*. (B) Test on ET, and (C) Test on RR. (1) Case I-80. (2) Case US-101. (3) Northbound of LB, and (4) Southbound of LB.](pone.0190616.g008){#pone.0190616.g008}

Interestingly, the test on the Elasticity Threshold (ET) shows high sensitivity to small variations of ET (from 0.2 to 0.7). In [Fig 8B1--8B4](#pone.0190616.g008){ref-type="fig"} the difference between major and minor congestion indexes is at first hardly detectable, which makes sense since a small ET indicates that no phase transition occurred; with increasing ET, the difference starts to be revealed. This verifies that the existence of a phase transition is vital in the quantification process, especially for identifying and differentiating major patterns. On the other hand, ET has no effect on the number of cycles detected. The overall scale of the indexes shows low sensitivity to the variation of the Robustness Range (RR) from 0 to 0.09 (1/10 of ET) in [Fig 8C1--8C4](#pone.0190616.g008){ref-type="fig"}.
As can be seen, the measuring strength of our proposed metric is not dramatically sensitive to RR. We can, however, observe a different feature in [Fig 8C3 and 8C4](#pone.0190616.g008){ref-type="fig"}: the intensity of the indexes gradually decreases as RR increases. This is due to the regularized nature of controlled traffic: all congestion cycles have similar depth and shape, so an increasing RR causes a similar amount of deduction in *C*~*m*~. Meanwhile, RR does not influence the detected number either. [Fig 9](#pone.0190616.g009){ref-type="fig"} demonstrates the sensitivity results for cell size. In [Fig 9A1--9A4](#pone.0190616.g009){ref-type="fig"} the analyses were performed by changing the spatial length while keeping a constant temporal length, and [Fig 9B1--9B4](#pone.0190616.g009){ref-type="fig"} were, in contrast, obtained by changing the temporal length with a constant spatial length. There is a common pattern throughout all four cases: a small dimensional change in cell size can drastically affect the measuring outcomes on both facets. One can see that with changes in spatial length from 4 seconds × 10 feet (approx. 3.05 m) to 4 seconds × 150 feet (approx. 45.72 m), the absolute intensity of the detected congestion was constantly shifting. From both the freeway and arterial cases, we can see that the spatial length of the cell tends to produce relatively more sensitive leaps in the intensity of CI rather than in the number. However, in [Fig 9B1--9B4](#pone.0190616.g009){ref-type="fig"} all cases show relatively high sensitivity of both the number and the intensity of the indexes to variation in temporal length. For instance, only a few cycles can be identified when the temporal length is 24 seconds in the LB cases. This could imply that, with too small a cell size, too many frivolous fluctuation details are captured, and they influence the overall measuring outcomes through an unrepresentative number of vehicles in each cell.
On the other hand, some congestion waves would be missed if the cell size is too large, reducing the number of identified congestion cycles. Also, the outcome of the metric appears more sensitive to the temporal length of the cells, since the traffic patterns are studied as spatial means along the temporal dimension.

![Sensitivity analysis on cell size.\
(A) Sensitivity test on spatial dimension with constant temporal length. (B) Sensitivity test on temporal dimension with constant spatial length. (1) I-80. (2) US-101. (3) Northbound of LB, and (4) Southbound of LB.](pone.0190616.g009){#pone.0190616.g009}

Discussion and conclusion {#sec011}
=========================

There are some potential limitations of this metric and of the vehicle trajectory data used \[[@pone.0190616.ref046], [@pone.0190616.ref047]\]. Traffic operators should be particularly aware of these limitations in practice.

- Data availability and type, such as dirty or mutilated data, would significantly influence implementation of the metric. As we found during the tests, inactive cells in the spatial-temporal profile can alter the outline of the spatial-mean density capacity. This attribute requires careful data treatment, which could limit potential applications of the metric. For example, if the trajectory data are collected from GPS or other types of on-board mobile sensors, a poor penetration or sampling rate could destabilize the metric's performance.

- The initial implementation of the CI metric involves multiple steps and can be potentially complex for non-technical users. However, similar limitations are often solved by a proper built-in function in tools. We found that the processing time of the whole experiment depends heavily on the input data, but the processing time of the metric per se in a total run is rather quick.
- Both the temporal and spatial coverage of the study areas are insufficient, especially considering that a peak hour usually lasts for a longer period of time, and congestion propagation could also extend over a longer distance.

- The types of roads and traffic conditions are limited. The datasets merely cover US freeways and urban arterials, making it impossible to investigate other road types or other countries. Furthermore, the traffic on US-101 and LB lacks saturated conditions, which leaves considerable uncertainty about the metric's compatibility with extreme traffic.

In conclusion, our study addressed the issue of quantifying recurrent congestion based on spatial-temporal patterns on both urban freeways and streets. We constructed a metric inspired by the principle of the well-applied "R4 resilience-triangle" approach, with the goal of quantitatively assessing and comparing congestion occurring repeatedly at various temporal steps. The representativeness of the metric and its associated generic dimensions presented a strong capability for quantification and assessment. Our main conclusions are summarized below. The resilience-based approach provides a unique and different angle for tackling the congestion quantification issue, and its newly built characteristic dimensions are effective for capturing and differentiating major congestion. The signs of the congestion indexes (positive or negative, based on recovery performance) illustrate not only the overall congestion intensity but also indicate the discharging process after its formation. Our study expands the congestion quantification toolbox and establishes a link with system resilience analysis. The proposed metric shows relative merits in measuring and characterizing strength compared with the two other traditional metrics, RCI and LoS. Because the construction of the metric is based on generic traffic dimensions, it has been found to be applicable to both freeway and arterial cases.
The metric performs adequately in signal-controlled traffic and outperforms RCI, as shown in the Lankershim Boulevard case. Sensitivity tests verify that the phase-transition mechanism plays an indispensable role in congestion analysis, as the metric showed sensitive behavior to the Elasticity Threshold (ET). The testing results on *ϵ* and RR show relatively low sensitivity in detecting major congestion, although the number of identified congestion patterns can be influenced by the existence of *ϵ*. The tests on various cell sizes demonstrate the sensitivity of the metric to its discrete platform construction. In particular, we found that both the number and the intensity of detected congestion patterns are highly sensitive to the temporal dimension. This study provides insights into the quantification of recurrent traffic congestion inspired by the emerging resilience concept. The metric we constructed showed strength in the quantitative analysis of congestion from a systemic perspective and potentially offers an alternative for congestion studies across different scenarios. Future research will further investigate the application of the metric to various traffic conditions in other countries and expand understanding of its application in road networks.

Supporting information {#sec012}
======================

###### Pseudocode for *ϵ*−filtering algorithm.

(PDF)

###### Click here for additional data file.

The research was conducted at the Future Resilient Systems at the Singapore-ETH Centre, which was established collaboratively between ETH Zurich and Singapore's National Research Foundation (FI 370074011) under its Campus for Research Excellence and Technological Enterprise program. The authors would like to thank the anonymous reviewers for their constructive comments.

[^1]: **Competing Interests:**The authors have declared that no competing interests exist.
#!/bin/sh
# Linux Deploy Component
# (c) Anton Skshidlevsky <meefik@gmail.com>, GPLv3

do_configure()
{
    msg ":: Configuring ${COMPONENT} ... "
    local timezone
    # Prefer the Android system property; fall back to the host's /etc/timezone
    if [ -n "$(which getprop)" ]; then
        timezone=$(getprop persist.sys.timezone)
    elif [ -e "/etc/timezone" ]; then
        timezone=$(cat /etc/timezone)
    fi
    # Mirror the detected timezone into the chroot
    if [ -n "${timezone}" ]; then
        rm -f "${CHROOT_DIR}/etc/localtime"
        cp "${CHROOT_DIR}/usr/share/zoneinfo/${timezone}" "${CHROOT_DIR}/etc/localtime"
        echo "${timezone}" > "${CHROOT_DIR}/etc/timezone"
    fi
    return 0
}
Q: There exists $\theta$ between $\pi/4$ and $\pi/2$ such that $\cos\theta=\theta$ I would like to show that there exists $\theta$ between $\pi/4$ and $\pi/2$ such that $\cos\theta=\theta$. I tried to use the intermediate value theorem on the interval $[\pi/4,\pi/2]$ without success. Any suggestion how to proceed? Maybe change the interval? Thanks
A: Let $f(\theta)=\cos\theta-\theta$. Then on our interval $f'(\theta)=-\sin\theta-1$ is negative, so $f$ is decreasing. We don't even need the derivative: $\cos\theta$ is decreasing and $\theta$ is increasing, so the difference is decreasing. Calculation shows that $f(\pi/4)=\frac{\sqrt{2}}{2}-\frac{\pi}{4}\approx 0.707-0.785<0$, so $f$ is already negative at the left endpoint and, being decreasing, stays negative on the whole interval. Hence there is no root of $f(\theta)$ in the interval $(\pi/4,\pi/2)$: the unique solution of $\cos\theta=\theta$ is $\theta\approx 0.739$, which lies below $\pi/4\approx 0.785$. You would indeed need to change the interval, e.g. to $[0,\pi/4]$, where the intermediate value theorem applies since $f(0)=1>0$ and $f(\pi/4)<0$.
Q: DbContext -> DbSet -> Where clause is missing (Entity Framework 6) I've read some tutorials with Entity Framework 6... The basics are easy. using (var context = new MyContext()) { User u = context.Users.Find(1); } But how to use "Where" or something else on the "DbSet" with the users? public class MyContext : DbContext { public MyContext() : base("name=MyContext") { //this.Database.Log = Console.Write; } public virtual DbSet<User> Users { get; set; } } Users [Table("User")] public class User : Base { public Guid Id { get; set; } [StringLength(100)] public string Username { get; set; } } And that's the problem which doesn't work. string username = "Test"; using (var context = new MyContext()) { User u = from user in context.Users where user.Username == username select user; } Error: There was no implementation of the query pattern for source type 'DbSet'. 'Where' is not found. Maybe a reference or a using directive for 'System.Linq' is missing. If I try to autocomplete the methods there are none. Why doesn't it work? :( // Edit: Adding System.Linq to the top of the file resolves the problem above. But why is the where wrong now? The type "System.Linq.IQueryable<User>" cannot be implicitly converted into "User". An explicit conversion exists. (Are you missing a cast?)
A: Thanks to @Grant Winney and @Joe. Adding using System.Linq; to the top of the file where I'm trying the code above fixed the first problem. For the second error, the query returns an IQueryable<User>, not a single User; taking the first item of the result works. User user = (from user in context.Users where user.Username == username select user).First(); (.FirstOrDefault() returns null instead of throwing when there is no match.)
Indian Himalayas, India — Torn between a widespread tradition and an internationally imposed prohibition, thousands of villages scattered on the Indian Himalayas survive on the production of charas, hashish produced in India. In India, the use of cannabis dates back to the sacred Vedas texts and has been a part of religious rituals and festivities for millennia. Cannabis indica, a native strain from which charas is produced, grows wild in many parts of the Himalayas, making it almost impossible for authorities to stem production and track it back to the farmers, who have started to grow their fields ever higher to escape controls. Although widespread, there are no official figures for India’s charas cannabis cultivation as no survey has ever been conducted. Until the late 1980s, cannabis and opium were legal in India, sold in government-run shops and traded by the British East India Company. To comply with the global War on Drugs, in 1985, India passed the controversial NDPS — narcotic drugs and psychotropic substances — Act, which criminalised cannabis but failed to curb production and trafficking, which has boomed, reflecting increased prices on the international market. Charas is considered among the best hashish in the world: a gram of resin can cost $20 in the West, although charas producers win only tiny margins. They live a humble life, far away from modernity, in extreme conditions and with no alternative livelihoods. They consider cannabis a gift from God. Despite a change of course internationally, the debate on legalisation in India is still at an embryonic stage.
Case: 12-10022 Document: 00511887118 Page: 1 Date Filed: 06/14/2012 IN THE UNITED STATES COURT OF APPEALS FOR THE FIFTH CIRCUIT United States Court of Appeals Fifth Circuit FILED June 14, 2012 No. 12-10022 Summary Calendar Lyle W. Cayce Clerk UNITED STATES OF AMERICA, Plaintiff-Appellee v. GEORGE WHITEHEAD, JR., Defendant-Appellant Appeal from the United States District Court for the Northern District of Texas USDC No. 4:07-CR-11-1 Before JONES, Chief Judge, and PRADO and ELROD, Circuit Judges. PER CURIAM:* George Whitehead, Jr., federal prisoner # 35653-177, is serving a term of life imprisonment for his conviction of possession of more than 50 grams of a mixture and substance containing a detectable amount of cocaine base with intent to distribute. Concurrently, he is serving a 120-month sentence for his conviction of being a felon in possession of a firearm. Whitehead appeals the district court’s denial of his 18 U.S.C. § 3582(c)(2) motion for a reduction of his life sentence based on the retroactive amendments to U.S.S.G. § 2D1.1, the guideline for crack cocaine offenses. Section 3582(c)(2) permits the discretionary modification of a defendant’s sentence “in the case of a defendant who has been sentenced to a term of imprisonment based on a sentencing range that has subsequently been lowered by the Sentencing Commission pursuant to 28 U.S.C. 994(o).” § 3582(c)(2); see United States v. Doublin, 572 F.3d 235, 237 (5th Cir. 2009). The district court’s decision whether to reduce a sentence under § 3582(c)(2) is reviewed for an abuse of discretion, while the court’s interpretation of the Guidelines is reviewed de novo. United States v. Evans, 587 F.3d 667, 672 (5th Cir. 2009).

* Pursuant to 5TH CIR. R. 47.5, the court has determined that this opinion should not be published and is not precedent except under the limited circumstances set forth in 5TH CIR. R. 47.5.4.
Much of Whitehead’s brief amounts to an attack on his original sentence. He asserts that he was entrapped into a higher sentence, and he contends that the procedures required under 21 U.S.C. § 851 to increase his punishment by reason of his prior convictions were not followed. “A modification proceeding is not the forum for a collateral attack on a sentence long since imposed and affirmed on direct appeal.” United States v. Hernandez, 645 F.3d 709, 712 (5th Cir. 2011). A § 3582(c)(2) motion “is not a second opportunity to present mitigating factors to the sentencing judge, nor is it a challenge to the appropriateness of the original sentence.” United States v. Whitebird, 55 F.3d 1007, 1011 (5th Cir. 1995). Accordingly, to the extent that Whitehead challenges his original sentence, he cannot obtain relief under § 3582(c)(2). As the district court determined, on account of his prior felony drug convictions, Whitehead was subject to a mandatory sentence of life imprisonment under 21 U.S.C. § 841(b)(1)(A). A mandatory minimum statutory penalty overrides the retroactive application of a new guideline. See United States v. Pardue, 36 F.3d 429, 431 (5th Cir. 1994). Because Whitehead’s sentence of life imprisonment was statutorily mandated, he was not “sentenced to a term of imprisonment based on a sentencing range that has subsequently been lowered by the Sentencing Commission.” § 3582(c)(2); see Pardue, 36 F.3d at 431. AFFIRMED.
Lenin Square, Donetsk Lenin Square (Russian:Площадь Ленина) is the main square in Donetsk, the capital of the proto-state breakaway republic of the Donetsk People's Republic. It is located between the streets of Artem, Postyshev, Gurov, and Komsomolskiy Avenue. It was formed between 1927 and 1967. In 1967, in honor of the 50th anniversary of the Great October Socialist Revolution, a monument to Lenin was erected on Lenin's Square. Many notable events have occurred on the square recently, including the following: The protest by pro-Russian separatists against the Ukrainian Government took place on the square. Parades of the separatist government in honor of Victory Day, May 1, and the founding of the DPR take place on the square. Landmarks Donbass Palace Donetsk Symphony Orchestra Donetsk National Academic Ukrainian Musical and Drama Theatre Ministry of Coal Industry of Ukraine Executive Committee of the Voroshilovsky district Gallery See also Donetsk List of places named after Vladimir Lenin References Category:Squares in Donetsk
Q: popoverController does not show after upgrading Xcode from 7 to 10 I have this app that was previously developed and maintained with Xcode 7. But recently we had to upgrade Xcode to 10 to be able to post the app to the App Store. Many layouts were broken by the update, and this seems to be a known issue ref. I believe it is the AutoLayout problem. I fixed them by going to the storyboard to add the required constraints. However, there is a problem with one of the popover controllers that I do not know how to fix. Here is how it should look: Here is how it looks after the upgrade: The popover is not showing. The code has not changed. It is a custom table view controller inherited from UITableViewController. I tried updating the frame but it did not work. The code that should pop up the view: UIStoryboard *storyBoard = [UIStoryboard storyboardWithName:@"MainStoryboard_iPhone" bundle:nil]; UNISortTableViewController *contentViewController = [storyBoard instantiateViewControllerWithIdentifier:@"UNISortTableViewController"]; ((UNISortTableViewController *)contentViewController).sortKeyArray = [NSArray arrayWithObjects:@"1", @"2", @"3", @"4", @"5", nil]; [(UNISortTableViewController *)contentViewController setPreviousSortKeyIndex:sortKeyIndex]; [(UNISortTableViewController *)contentViewController setPreviousSortOrder:ascIssues]; self.popoverController = [[popoverClass alloc] initWithContentViewController:contentViewController]; if ([self.popoverController respondsToSelector:@selector(setContainerViewProperties:)]) { [self.popoverController setContainerViewProperties:[self improvedContainerViewProperties]]; } self.popoverController.delegate = self; contentViewController.delegate = self; [self.popoverController presentPopoverFromBarButtonItem:sender permittedArrowDirections:(UIPopoverArrowDirectionUp|UIPopoverArrowDirectionDown| UIPopoverArrowDirectionLeft|UIPopoverArrowDirectionRight) animated:YES]; When debugging I can see that the cellForRowAtIndexPath event is not called
A: Your antiquated code is calling initWithContentViewController. This suggests that you are using UIPopoverController, which was deprecated after iOS 9. https://developer.apple.com/documentation/uikit/uipopovercontroller/1624669-initwithcontentviewcontroller?language=objc You need to modernize your approach to popovers. Nowadays, popovers are simply a variety of presented view controller. There is no such thing as a UIPopoverController any more. You just call presentViewController on a normal UIViewController with a modalPresentationStyle of UIModalPresentationPopover. The entire way you designate where the arrow goes has changed too, but I won't go into detail, as the full information is available in the docs and elsewhere.
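For reference, a minimal sketch of the modern equivalent of the code in the question (UNISortTableViewController, sender, and self are the questioner's own objects; the UIKit calls are the standard post-iOS-8 API):

```objc
UIStoryboard *storyboard = [UIStoryboard storyboardWithName:@"MainStoryboard_iPhone" bundle:nil];
UNISortTableViewController *contentVC =
    [storyboard instantiateViewControllerWithIdentifier:@"UNISortTableViewController"];

// Ask for a popover instead of a full-screen modal.
contentVC.modalPresentationStyle = UIModalPresentationPopover;
contentVC.preferredContentSize = CGSizeMake(320.0, 300.0);

// The popover presentation controller is created lazily once the style is set;
// configure it *before* presenting.
UIPopoverPresentationController *popover = contentVC.popoverPresentationController;
popover.barButtonItem = sender;   // replaces presentPopoverFromBarButtonItem:
popover.permittedArrowDirections = UIPopoverArrowDirectionAny;
popover.delegate = self;          // adopt UIPopoverPresentationControllerDelegate

[self presentViewController:contentVC animated:YES completion:nil];
```

Note that on iPhone this style is adapted to a full-screen presentation by default; returning UIModalPresentationNone from the delegate's adaptivePresentationStyleForPresentationController: keeps the popover appearance.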
Q: On Refresh, newly added element in array is removed from the array list itself Whenever I add a new element to an array, it gets added successfully, but when I refresh the browser, the added element gets removed from the list itself. Here is my code snippet. <html> <label>Enter an New item to add in Stock</label> <br> </br> <input type="text" name="itemName" id="addItemInStock"><br></br> <p id="errorMsg"></p> <button onclick="addToStock()" return="false">Add</button> <p id="showList"></p> <select id="showInDropDown"> <option value="" disabled selected style="display: block;">Stock Items</option> </select> <script> var fruits = ["Banana", "Orange", "Apple", "Mango"]; document.getElementById("showList").innerHTML = fruits; var newItem = document.getElementById("addItemInStock"); function addToStock(){ if ((newItem.value) === ""){ document.getElementById("errorMsg").innerHTML = "Blank item cannot be added!!"; document.getElementById("errorMsg").style.display = "block"; } else{ document.getElementById("errorMsg").style.display = "none"; fruits.push(newItem.value); document.getElementById("showList").innerHTML = fruits; clearAndShow(); } var sel = document.getElementById("showInDropDown"); document.getElementById("showInDropDown").innerHTML = ""; for (var i = 0; i < fruits.length; i++) { var opt = document.createElement('option'); opt.text = fruits[i]; sel.appendChild(opt); } } function clearAndShow(){ newItem.value = ""; } </script> </html> A: Because on refresh, the page is reloaded and all in-memory JavaScript state is lost. If you need the list to survive a refresh then you need to use some form of web storage, e.g. localStorage, sessionStorage, etc. Following is a simple use case of localStorage Update from var fruits = ["Banana", "Orange", "Apple", "Mango"]; to var fruitsfromLS = localStorage.getItem("fruits"); var fruits = fruitsfromLS ? JSON.parse(fruitsfromLS) : ["Banana", "Orange", "Apple", "Mango"]; and update fruits.push(newItem.value); to fruits.push(newItem.value); localStorage.setItem("fruits", JSON.stringify(fruits)); For reference, localStorage
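The pattern in the answer — read once on startup, fall back to defaults, write back on every change — end to end. (localStorage is the browser API; the tiny in-memory stand-in below exists only so the sketch runs outside a browser.)

```javascript
// In-memory stand-in for the browser's localStorage (the browser provides the real one).
const localStorage = {
  _data: {},
  getItem(k) { return Object.prototype.hasOwnProperty.call(this._data, k) ? this._data[k] : null; },
  setItem(k, v) { this._data[k] = String(v); }
};

function loadFruits() {
  // Read persisted state on startup; fall back to the defaults on first visit.
  const saved = localStorage.getItem("fruits");
  return saved ? JSON.parse(saved) : ["Banana", "Orange", "Apple", "Mango"];
}

function addFruit(fruits, item) {
  fruits.push(item);
  localStorage.setItem("fruits", JSON.stringify(fruits)); // persist on every change
  return fruits;
}

let fruits = loadFruits();     // first load: defaults
addFruit(fruits, "Papaya");
fruits = loadFruits();         // simulated "refresh": the addition survives
console.log(fruits);           // [ 'Banana', 'Orange', 'Apple', 'Mango', 'Papaya' ]
```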
It’s been a busy year for everyone involved with Blockchain and with Cryptocurrency, and we’ve been working with a fat man named Santa on a project for next year. With the ever-expanding population, the increasing cost of ink and the number of naughty and nice transactions children are completing, a new solution is required if he’s to ensure every child gets a present on Christmas. Santa is keen to improve transparency after a number of successful appeals by kids who accidentally received coal. The old paper-based system was run by an Elf who went rogue, editing the naughty or nice list without the approval of St. Nick. He was caught deleting the good deeds of kids he didn’t like, and with no backup ledgers, it was a mistake that couldn’t be fixed. High-profile cases like this have been tried through the Fairy Tale Courts this season, with the North Pole unable to win a single case. The Fairy Godmother sympathised with St. Nick but rebuked him for allowing such trust to be placed in the hands of one individual. It’s time for the Naughty or Nice list to modernise. Step forward, Santa Koins.
Whether you want a silver pashmina as a wedding shawl, a woman's silver silk scarf, a silver cotton ladies' cat scarf, a classical fair trade silver sparkly scarf, a cotton music note scarf, a silver handmade net scarf or a large plain coloured shawl wrap, you will find them all here at York Scarves. We specialise in women's neck scarves of all sorts, from lightweight scarves through to large heavy long pashmina shawls; particularly popular as wedding accessories are our plain pashminas. We also have our very own range of handmade scarves for men. And if you like the classical look, check out our range of tartan scarves and tartan shawls. Although our silver Grey Watch Tartan scarf is really popular, it is the red tartan scarves that everybody knows and loves, which are the prime choice for men and women as a winter scarf. We also have our recently introduced range of pure wool pashminas, so if you want a long wool winter scarf this could be for you. Many of our scarves can also be worn as a hijab. Of course, the name silver covers many tones, so you might also like to search for pewter scarves and charcoal scarves. York Scarves is a fully registered member of the British Association of Fair Trade Shops and Suppliers, giving you peace of mind that if you want a fair trade gift you are buying a product that has been responsibly sourced. York Scarves is also a long-established scarf wholesaler in the UK, so if you are interested in stocking our products, please fill in the wholesale enquiry form and we will get back to you promptly. Remember, if you want fair trade shawls and wraps in any size or colour, think of York Scarves first.
Q: What does *:before and *:after do in css I researched on SO about *(Asterisk) and I have found that it selects all elements and applies styles to them. I followed this link, Use of Asterisk, and I noticed that this code will apply a border to all of the elements. * { border: 1px solid red; } Now, my question is: what do *:before and *:after do in CSS? *:before, *:after { box-sizing: border-box; } A: As their names suggest, :before & :after are used to apply CSS properties JUST before/after the content WITHIN the matching element. One day, a wise man said 'One fiddle is worth a thousand words', so: div { border: solid 1px black; padding: 5px; } div:before { content: "Added BEFORE anything within the div!"; color:red; } div:after { content: "Added AFTER anything within the div!"; color:green; } <div>Div 1</div> <div>Div 2</div> <div>Div 3</div> <div>Div 4</div> A: The :before selector inserts something before the content of each selected element, and the :after selector inserts something after it. So, as Dan White said, *:before applies before the content of all elements and *:after applies after the content of all elements.
Q: How to determine a child view's parent when click event fires? I am using the template layout which I import into different LinearLayouts. The template has a button in it. When a user presses the button, I need to know in which LinearLayout the click event has occurred. Is this possible? I am having a problem firing the parent's event: when you press the child element, the parent's event will not fire at all. A: you can get the parent of any view using the getParent() method: http://developer.android.com/reference/android/view/View.html#getParent%28%29
Abid Mutlak al-Jubouri Abid Mutlak al-Jubouri (also spelled Abid Mutlag) is an Iraqi politician and was a Deputy Prime Minister in the Iraqi Transitional Government. A Sunni Arab and former major general in Saddam Hussein's army, he rose to prominence during the 1980–1988 Iran–Iraq War. References Category:Living people Category:Government ministers of Iraq Category:Year of birth missing (living people)
In an authentication system for a computer system that uses an IC (integrated circuit) card, it is necessary to perform identity authentication to confirm that the user is the authorized holder of the IC card, so as to prevent unauthorized use of the IC card through theft or the like, as disclosed in Patent Document 1. Normally, a password called a PIN (personal identification number) is used in such identity authentication. In this authentication system, mutual authentication is performed between the IC card and the authentication system terminal, so as to prove that the IC card is not an unauthorized card issued by counterfeiting or alteration, and that the authentication system terminal is not an unauthorized terminal. Authenticating the validity of the IC card as seen from the side of the authentication system terminal is known as internal authentication, while authenticating the validity of the authentication system terminal as seen from the side of the IC card is known as external authentication. The authentication system terminal is then put into a password input waiting state. The user inputs the password to the authentication system terminal, and the input password is compared with a password stored beforehand in the IC card, so as to perform the identity authentication. In the above procedure, however, the authentication system terminal that requires the password has not been proved valid to the user. More specifically, in a case where an unauthorized terminal is modified so as to look as if it had been rightfully authenticated, the user cannot determine that the terminal is an unauthorized terminal. Therefore, the user is always exposed to the danger of wrongful use or theft of the password through an impersonating authentication system terminal. To solve the above problem, Patent Document 1 discloses a device that prevents password leakage.
The device has a means that, after authenticating the validity of an authentication system terminal, reads secret information available only to the subject user from an IC card. The device then presents the secret information to the user and requests the user to input the password. Patent Document 1: Japanese Patent Application Laid-Open No. 7-141480
So, there's this idea, which you already know: Define the layout of your UI by creating a tree of panels. The leaf nodes on the tree are what we used to call 'controls' way back in the day-- the things that the user interacts with, radio buttons and listboxes and such. The internal nodes are mostly concerned with layout; this kind of panel stacks its child panels vertically, that kind puts its children into a grid, etc. It's COMMON. Most of the UI-generating systems I've seen in the past twenty years are implementations of this, and the ones that aren't borrow from it. What's the word for this idea? EDIT: I'm looking for a word, or a phrase, for the pattern I'm describing. It's a big, high-level pattern, and it's become nearly universal. AWT, HTML forms with the controls in table cells, Swing, XAML, Android, and ASP.NET all use it or borrow from it. There's an idea here, on the same level as concepts like "windowing system" or "mesh network." What do we call it? I suspect that the real answer is, "there's no consensus on a name for it yet." Which would, itself, be really interesting. Perhaps you're thinking of inheritance, a property of many object-oriented systems. In theory, inheritance can describe all sorts of things; a car, a biological system, a taxonomy chart. In practice, it is used extensively in graphical systems, and only occasionally elsewhere (where composition is preferred). – Robert Harvey♦Apr 6 '12 at 22:36 1 In HTML/CSS you're referring to a relational display model. Where elements are positioned based on their relationship to the elements that come before/after them in the tree. The alternative is an absolute positioning model where the structure doesn't matter and every element's position is explicitly placed in relation to the top left corner. Relational is generally preferred because it allows you to make changes to parts of the whole without requiring a full re-calculation of all the offsets within the page. 
– Evan PlaiceApr 7 '12 at 1:01 @Evan: I think that should be an answer because it's the correct one. Relative or relational layout systems have become prevalent because fixed or absolute layouts have proven to be too inflexible, and a tree of containers and components plus a rendering/layout engine seems to be the natural way to implement relative layouts. – Michael BorgwardtApr 7 '12 at 9:11 6 Answers Where elements are positioned based on their relationship to the elements that come before/after them in the tree. The alternative is an absolute positioning model, where the structure doesn't matter and every element's position is explicitly placed in relation to the top-left corner. Relational is generally preferred because it allows you to make changes to parts of the whole without requiring a full re-calculation of all the offsets within the page. For example, if you change the offset for a panel that is 5 generations/nodes removed from the main panel, in a relational model everything re-flows to compensate automatically; in an absolute model, the offsets for all 5 generations/nodes of parent panels need to be re-calculated in cascading order, because no child can calculate its offset in relation to its parent until the parent is updated first. The tricky part about HTML/CSS layout models is that you can mix them. For instance, a 5th child panel can be placed using relative positioning (position:relative) but the child elements of that node can be placed using absolute positioning (position:absolute) within that element. I.e. instead of calculating the top-left offset from the window corner, it's taken in relation to the top-left of the box it resides in (5 generations deep). The power of the relative positioning model is that no child has to be aware of its parent's positioning. Child elements can be blissfully unaware. The difficulty comes in when there are issues/inconsistencies (e.g. as has happened a lot with browser inconsistencies in the past).
Then you'll have to crawl back up the tree one element at a time until you can determine the one causing the trouble. Fortunately, with tools like Firebug and the Google Chrome Developer Tools, it's really easy to crawl through the tree and visualize how the layout boxes, padding, margins, etc. interact with each other. Unfortunately, once you adopt a relational model it becomes really difficult to design a typical drag-drop GUI builder tool, so most relational GUI development happens in code only. Maybe that will change in the near/distant future. Note: For completeness' sake, I feel like I should also mention the fixed positioning model. It's basically the same as absolute positioning, but it is not affected by scrolling. That's how those annoying toolbars that stick to the bottom/side of the page are created on most websites. "Relational display model" is a tolerably good general term. I've always called the things "panel-based layout systems," probably because I've done form layouts in half a dozen different systems but never in HTML :) – mjfgatesApr 8 '12 at 9:46 @mjfgates IMHO, you should definitely try HTML/CSS then, even if only for learning/experimentation. Desktop platforms are starting to pick up the HTML/CSS style of layout more and more because it works well. For instance, QML and WPF. The combo of a declarative syntax and a powerful/flexible layout model is something desktop development has been lacking all along. Plus, with a simple declarative layout language, you can hire designers that specialize in the look-and-feel while developers are allowed to focus exclusively on the plumbing. – Evan PlaiceApr 8 '12 at 19:01 "Container hierarchy" is a good description of the thing that this pattern *produces*... an instance of the result of using an implementation of the pattern, if you get what I mean.
– mjfgatesApr 6 '12 at 19:51 All of the examples you've given have a view hierarchy where you have a simple understanding of what a view component is, and then you have additional generic or ad-hoc components built on top of it. The fact that you have a view hierarchy would suggest that that is the pattern itself and not merely some product of its use. – Filip DupanovićApr 7 '12 at 13:57 The generic term for such a tool is "GUI Builder". As you've clearly noted, these tools can range from being completely drag-n-drop WYSIWYG to something more expert-programmer oriented, where the design approach isn't exactly WYSIWYG but is represented using a hierarchical relationship of a bunch of widgets (example: XUL) in a more textual/programmatic form. Define the layout of your UI by creating a tree of panels. The leaf nodes on the tree are what we used to call 'controls' way back in the day I've always called it variations of: "tree layout", "tree hierarchy [of UI controls/widgets]", and so on. Sometimes I have to specify "variable-width" since binary trees tend to be what pops into mind, but "tree" applies well in general. A website wireframe is a visual guide that represents the skeletal framework of a website. The wireframe depicts the page layout or arrangement of the website's content, including interface elements and navigational systems, and how they work together. http://en.wikipedia.org/wiki/Website_wireframe I would call it declarative definition (i.e. you define 'what' the UI looks like, not the steps of 'how' to draw it). The UI may be defined as a tree, but that is because the UI is a tree by its own nature. The UI would still be a tree if defined in an imperative way with pointers to parent/child objects. I think declarative versus imperative is what matters.
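For what it's worth, the tree-of-panels idea the question describes maps closely onto the Composite design pattern. A minimal plain-Java sketch (class names here are invented for illustration), with internal nodes doing layout and leaves acting as the 'controls':

```java
import java.util.ArrayList;
import java.util.List;

// Internal nodes handle layout; leaves are the interactive 'controls'.
abstract class Panel {
    abstract String render();
}

class Control extends Panel {            // leaf: a button, listbox, etc.
    private final String name;
    Control(String name) { this.name = name; }
    String render() { return name; }
}

class VerticalStack extends Panel {      // internal node: stacks its children
    private final List<Panel> children = new ArrayList<>();
    VerticalStack add(Panel child) { children.add(child); return this; }
    String render() {
        StringBuilder sb = new StringBuilder("[");
        for (Panel c : children) sb.append(c.render()).append(";");
        return sb.append("]").toString();
    }
}

class PanelTreeDemo {
    public static void main(String[] args) {
        Panel ui = new VerticalStack()
                .add(new Control("radio"))
                .add(new VerticalStack().add(new Control("listbox")));
        System.out.println(ui.render()); // [radio;[listbox;];]
    }
}
```

The key property of the pattern is visible here: internal nodes only know how to arrange children, leaves only know how to draw themselves, and the whole UI is just recursive composition of the two.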
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ namespace Apache.Ignite.Core.Common { using System; using System.Runtime.Serialization; /// <summary> /// Indicates an error on Java side and contains full Java stack trace. /// </summary> [Serializable] public class JavaException : IgniteException { /** JavaClassName field. */ private const string JavaClassNameField = "JavaClassName"; /** JavaMessage field. */ private const string JavaMessageField = "JavaMessage"; /** Java exception class name. */ private readonly string _javaClassName; /** Java exception message. */ private readonly string _javaMessage; /// <summary> /// Initializes a new instance of the <see cref="JavaException"/> class. /// </summary> public JavaException() { // No-op. } /// <summary> /// Initializes a new instance of the <see cref="JavaException"/> class. /// </summary> /// <param name="message">The message that describes the error.</param> public JavaException(string message) : base(message) { // No-op. } /// <summary> /// Initializes a new instance of the <see cref="JavaException" /> class. 
/// </summary> /// <param name="javaClassName">Java exception class name.</param> /// <param name="javaMessage">Java exception message.</param> /// <param name="stackTrace">Java stack trace.</param> public JavaException(string javaClassName, string javaMessage, string stackTrace) : this(javaClassName, javaMessage, stackTrace, null) { // No-op. } /// <summary> /// Initializes a new instance of the <see cref="JavaException" /> class. /// </summary> /// <param name="javaClassName">Java exception class name.</param> /// <param name="javaMessage">Java exception message.</param> /// <param name="stackTrace">Java stack trace.</param> /// <param name="cause">The cause.</param> public JavaException(string javaClassName, string javaMessage, string stackTrace, Exception cause) : base(stackTrace ?? javaMessage, cause) { // Send stackTrace to base ctor because it has all information, including class names and messages. // Store ClassName and Message separately for mapping purposes. _javaClassName = javaClassName; _javaMessage = javaMessage; } /// <summary> /// Initializes a new instance of the <see cref="JavaException"/> class. /// </summary> /// <param name="message">The message.</param> /// <param name="cause">The cause.</param> public JavaException(string message, Exception cause) : base(message, cause) { // No-op. } /// <summary> /// Initializes a new instance of the <see cref="JavaException"/> class. /// </summary> /// <param name="info">Serialization information.</param> /// <param name="ctx">Streaming context.</param> protected JavaException(SerializationInfo info, StreamingContext ctx) : base(info, ctx) { _javaClassName = info.GetString(JavaClassNameField); _javaMessage = info.GetString(JavaMessageField); } /// <summary> /// When overridden in a derived class, sets the <see cref="SerializationInfo" /> /// with information about the exception. 
/// </summary> /// <param name="info">The <see cref="SerializationInfo" /> that holds the serialized object data /// about the exception being thrown.</param> /// <param name="context">The <see cref="StreamingContext" /> that contains contextual information /// about the source or destination.</param> public override void GetObjectData(SerializationInfo info, StreamingContext context) { base.GetObjectData(info, context); info.AddValue(JavaClassNameField, _javaClassName); info.AddValue(JavaMessageField, _javaMessage); } /// <summary> /// Gets the Java exception class name. /// </summary> public string JavaClassName { get { return _javaClassName; } } /// <summary> /// Gets the Java exception message. /// </summary> public string JavaMessage { get { return _javaMessage; } } } }
Aqueous vascular endothelial growth factor and aflibercept concentrations after bimonthly intravitreal injections of aflibercept for age-related macular degeneration. Clinical evidence supports the efficacy of bimonthly aflibercept injection for age-related macular degeneration. The study aimed to evaluate aqueous vascular endothelial growth factor and aflibercept concentrations and the efficacy of bimonthly aflibercept in patients with age-related macular degeneration. This study is a prospective, interventional case series. Enrolled were 35 eyes with exudative age-related macular degeneration from 35 patients. Patients received three bimonthly intravitreal aflibercept without loading doses. We collected the aqueous humor just before each injection, measured vascular endothelial growth factor and aflibercept concentrations by enzyme-linked immunosorbent assay and measured best-corrected visual acuity and central retinal subfield thickness before and after the injections. Aqueous vascular endothelial growth factor and aflibercept concentrations were measured. The vascular endothelial growth factor concentration was 135.4 ± 60.5 pg/mL (mean ± standard deviation, range 60.6-323.4) at baseline and below the lowest detectable limit in all eyes at month 2 and in 32 eyes at month 4 (P < 0.001 [month 2] and P < 0.001 [month 4]). The mean aflibercept concentration was 20.3 ng/mL at month 2 and 28.0 ng/mL at month 4. The mean logarithm of the minimum angle of resolution visual acuity improved from 0.50 ± 0.36 at baseline to 0.36 ± 0.40 at month 6 (P < 0.001). The mean central retinal subfield thickness decreased from 353 ± 100 μm at baseline to 236 ± 45 μm at month 6 (P < 0.001). Bimonthly aflibercept injections without loading doses may be considered a treatment option for age-related macular degeneration.
Q: Pandoc: automatically convert URLs into hyperlinks Is there an option which automatically converts URLs into hyperlinks in Pandoc? E.g. http://www.test.com should become [http://www.test.com](http://www.test.com) Or even cooler would be without the protocol: [www.test.com](http://www.test.com) A: Just surround them in <> : <http://www.test.com> echo "<http://example.com>" | pandoc <p><a href="http://example.com" class="uri">http://example.com</a></p> That will not work without the http:// though. See the documentation.
{ "": [ "--------------------------------------------------------------------------------------------", "Copyright (c) Microsoft Corporation. All rights reserved.", "Licensed under the Source EULA. See License.txt in the project root for license information.", "--------------------------------------------------------------------------------------------", "Do not edit this file. It is machine generated." ], "adminService.providerIdNotValidError": "Se necesita la conexión para interactuar con el servicio de administración", "noHandlerRegistered": "Ningún controlador registrado" }
869 F.2d 1500 U.S. v. Wood NO. 87-3811 United States Court of Appeals, Eleventh Circuit. FEB 23, 1989 1 Appeal From: M.D.Fla. 2 AFFIRMED.
David Villa to have guest stint at Melbourne City before A-League debut Villa left Atletico Madrid and will play in New York City's inaugural MLS season in March 2015. David Villa will play a 10-match guest stint in the A-League with the re-branded Melbourne City. (Source: AP) Spain's all-time leading goal scorer David Villa will play a 10-match guest stint in the A-League with the re-branded Melbourne City. Villa, a member of Spain's World Cup team in Brazil, left Spanish league champions Atletico Madrid and will play in New York City's inaugural MLS campaign beginning in March 2015. To build match fitness, Villa will be loaned to sister club Melbourne City, which changed its name from Melbourne Heart after being bought by English Premier League champion Manchester City this year. The A-League season begins in October. "It's very good for me in every sense," Villa said in a statement released by Melbourne City on Thursday. "From a football point of view it's the opportunity to play in a new league in a different country, and of course it will be ideal for me to get some competitive football in the period before the MLS season gets under way. "What I've always done throughout my career is do the best I possibly can. I'll be giving everything for the team, just like I have with every club I've played for." Villa has scored 56 goals in 94 matches for Spain since making his debut in 2005. Melbourne's head coach John van't Schip said Villa was still "at the peak of his game." "He is playing in his third World Cup in Brazil, has just won the Spanish league and is an instinctive and gifted striker who will contribute significantly to Melbourne City FC," van't Schip said. "We are confident David will be an incredible asset to the playing group and his experiences in Europe and for his country will help our players not only on the pitch, but in training as well," he added.
The 32-year-old Villa scored 15 goals in 47 games for Atletico Madrid this season, with his last match for the club being the Champions League final loss to cross-town rival Real Madrid. Australian media described the 10-match stint as the biggest coup for the A-League since Juventus striker Alessandro Del Piero signed with Sydney FC on a two-year contract. Del Piero scored 24 goals for Sydney and was the club's leading scorer in both seasons he played. He left the team as a player at the end of this season, but says he might remain with the club in an off-field capacity.
Retention of Moisture-tolerant and Conventional Resin-based Sealant in Six- to Nine-year-old Children. The purpose of this study was to evaluate and compare the retention rates and development of caries in permanent molars in children sealed with moisture-tolerant, resin-based (Embrace WetBond), and conventional resin-based (Helioseal) sealant over a period of one year. This was a double blind, split-mouth, randomized controlled trial among six- to nine-year-olds. Sixty-eight permanent mandibular first molars in 34 children were randomly assigned to be sealed with Embrace WetBond or Helioseal sealant. The final sample was 32 children with 64 teeth. At 12 months, 23 of 32 (72 percent) sealants were completely retained in Embrace WetBond, whereas only 16 of 32 (50 percent) were retained in the Helioseal group. There was a statistically significant difference in retention rates of Embrace WetBond and Helioseal sealants at 12 months (P<.05). At 12 months follow-up, only two teeth developed caries in Embrace WetBond; in the Helioseal group, five teeth developed caries (two initial and three enamel caries). Embrace WetBond was superior to Helioseal sealant, as Embrace exhibited higher retention and lower caries scores. Embrace WetBond can be preferred over conventional resin-based sealants for community and outreach sealant programs where use of rubber dam for moisture control is difficult to practice.
But the slump in crude oil prices to a near seven-year low following Friday's inconclusive Opec meeting pushed shares off their best levels, with Wall Street opening sharply lower as energy companies dropped back. In the UK Royal Dutch Shell and BG fell 4% while BP was down 3%. In the US Chevron and Exxon are currently down around 4%, while France's Total is nearly 1.5% lower. The closing scores showed:

- The FTSE 100 fell 14.77 points or 0.24% to 6223.52, after earlier rising to 6287
- Germany's Dax added 1.25% to 10,886.09, down from its peak of 10,992
- France's Cac closed up 0.88% at 4756.41
- Italy's FTSE MIB edged up 0.07% to 22,037.17
- Spain's Ibex ended down 0.36% at 10,042.4
- In Greece, the Athens market rose 0.21% to 608.85

On Wall Street the Dow Jones Industrial Average is currently down 129 points or 0.73%. As for oil, Brent crude is 4.2% or $1.87 lower at $41.13, its worst level since February 2009. On that note, it's time to close up for the evening. Thanks for all your comments, and we'll be back tomorrow. The ability of the FTSE 100 to perform is clearly being hindered by tumbling oil prices, and with Brent hitting a new six-year low, the likelihood is that this will hold back this market for some time yet. As Shell, BG and BP lead the FTSE losers, the fate of the FTSE is in the hands of the dollar, as another Fed-fuelled dollar rally could send crude tumbling once more. With OPEC seeming less and less like a cartel and more like an audience with the Saudis, it is likely crude prices could fall further yet. It is hard to tell what is having a bigger impact on the Dow Jones: the chunky losses for Chevron and ExxonMobil, or lingering resentment towards the now almost certain December rate hike set to appear next week. Given the sharp rise that greeted the (ostensibly) lift-off-securing non-farm jobs report last Friday, it is likely that the plunging oil price has created the bigger pressure, even if a bit of dovish drag can't be completely discounted.
Either way the Dow started the day down by around 100 points, in the process lopping some of the more extravagant highs off the European indices. The price fall, if sustained, will lead to lower inflation in oil-consuming nations through the knock-on effects on petrol, diesel, domestic energy prices and the cost of running businesses. Lower crude prices may also delay or limit increases in interest rates. The Bank of England has already accepted that inflation – which currently stands at -0.1% – has stayed lower for longer this year than it anticipated. Analysts believe the current slide in oil prices has come too late to persuade the US Federal Reserve, America's central bank, to delay an increase in the cost of borrowing later this month, adding that the prospect of the first tightening of policy from the Fed since 2006 was an added factor in crude's decline. Larry's full report is here: Opec bid to kill off US shale sends oil price down to near seven-year low Oil prices fell to their lowest in nearly seven years on Monday after OPEC's meeting ended in disagreement over production cuts and without a reference to its output ceiling, while a stronger dollar made it more expensive to hold crude positions. The Organization of the Petroleum Exporting Countries (OPEC) ended its policy meeting on Friday without agreeing to lower production. For the first time in decades, oil ministers dropped any reference to the group's output ceiling, highlighting disagreement among members about how to accommodate Iranian barrels once Western sanctions are lifted... Today's trading is in sharp contrast to the post-Draghi plunge of last Thursday, and only so much can be attributed to an improved, but still lower than expected, region-wide Sentix investor confidence figure.
No, instead it appears that the ECB president managed to reassure investors at the weekend when he trotted out a fresh riff on his usual ‘whatever it takes’ spiel, stating that ‘there cannot be any limit to how far we are willing to deploy our instruments…to achieve [the central bank’s] mandate’.
Q: Unable to debug using Vagrant with custom PHP 7.2 installed and VSCode using Firefox; VSCode fails to break on a breakpoint In my project I have the following Xdebug settings on a running Vagrant VM: zend_extension=xdebug.so xdebug.remote_host=10.0.2.2 debug.repomote_port=9000 xdebug.remote_enable=1 xdebug.max_nesting_level = 1000 xdebug.remote_log=/tmp/xdebug.log Whilst in VSCode I have set it up like this: { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "name": "Listen for XDebug", "type": "php", "request": "launch", "port": 9000, "pathMappings": { "/home/vagrant/code": "${workspaceRoot}", } } ] } The Xdebug settings live inside the Vagrant VM, whilst the IDE is on the host. The host IP (10.0.2.2) is provided via the command: netstat -rn | grep "^0.0.0.0 " | cut -d " " -f10 Then I enable debugging in Firefox using the xdebug-helper extension. But my IDE fails to stop execution at a breakpoint. Whilst debugging this, I opened a shell session into the running Vagrant VM: vagrant up && vagrant ssh And then I tested the reverse connection to port 9000 over TCP (after setting VSCode to listen for Xdebug): nc -z -v 10.0.2.2 9000 The command itself shows the message: Connection to 10.0.2.2 9000 port [tcp/*] succeeded!
Also my nginx.conf says: server { listen 80; server_name example.com; root /home/vagrant/code; index index.php index.html; charset utf-8; keepalive_timeout 65; server_tokens off; sendfile off; access_log off; error_log /var/log/nginx/error.log; proxy_buffer_size 128k; proxy_buffers 4 256k; proxy_busy_buffers_size 256k; location / { try_files $uri $uri/ /index.php?$query_string; } location ~ \.php$ { include snippets/fastcgi-php.conf; fastcgi_pass unix:/run/php/php7.1-fpm.sock; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; } location ~* ^.+\.(?:css|cur|js|jpe?g|gif|htc|ico|png|html|xml|otf|ttf|eot|woff|svg)$ { access_log off; expires 30d; tcp_nodelay off; ## Set the OS file cache. open_file_cache max=3000 inactive=120s; open_file_cache_valid 45s; open_file_cache_min_uses 2; open_file_cache_errors off; } location ~ /\.ht { deny all; } } And the Vagrantfile is the following: Vagrant.configure("2") do |config| config.vm.box = "ubuntu/xenial64" config.vm.box_version = "20180917.0.0" config.vm.box_download_insecure = true config.vm.provider "virtualbox" do |vb| vb.name = "example-website" vb.memory = 3072 vb.cpus = 2 vb.customize [ "modifyvm", :id, "--uartmode1", "disconnected" ] end config.vm.network "private_network", ip: "192.168.10.80" config.vm.network "forwarded_port", guest: 80, host: 8090 config.vm.network "forwarded_port", guest: 22, host: 2922 config.vm.synced_folder "./.", "/home/vagrant/code" config.vm.provision :shell, :path => "./machine/provision/provision-xenial64.sh" config.vm.provision :shell, :path => "./machine/provision/provision-hosts.sh" config.vm.provision :shell, :path => "./machine/provision/provision-docker.sh" config.vm.provision :shell, :path => "./machine/provision/provision-nginx.sh" config.vm.provision :shell, :path => "./machine/provision/provision-php.sh" config.vm.provision :docker_compose, yml: "/home/vagrant/code/machine/docker_compose/cue.yml", run: "always" end Also the VSCode instance is a 
vscodium build as well, with the felixfbecker.php-debug plugin installed. Do you know why VSCodium fails to break on a breakpoint? A: Is the code actually being called? Sometimes, because of a frontend bug (especially with Ajax-triggered events), your code may not be called at all. So first ensure that your code is actually executed, and then try to figure out whether it is an Xdebug issue. As seen above, the Xdebug connection from guest to host succeeds and the IP is set correctly. So it is quite plausible that the piece of code containing the breakpoint is never called, which is why the IDE does not break at the expected breakpoint.
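One more observation on the question's php.ini snippet: the line `debug.repomote_port=9000` looks like a typo for `xdebug.remote_port=9000`. It is probably harmless here only because 9000 happens to be the default remote port in Xdebug 2, but a cleaned-up version of the snippet would read:

```ini
zend_extension=xdebug.so
xdebug.remote_enable=1
xdebug.remote_host=10.0.2.2
xdebug.remote_port=9000
xdebug.max_nesting_level=1000
xdebug.remote_log=/tmp/xdebug.log
```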
Q: Restrict Author to pick from media library, but not upload media I have a multisite network where the super admins will be creating the individual sites. An important role we need to enable, for compliance reasons, is to restrict "Authors" so that they cannot upload media, but can still access the media library to choose from media uploaded by the super admin. I have downloaded the User Role Editor plugin, which is a great plugin btw, but the only option it gives is to turn off the "upload_files" capability. That takes away all ability to access the media library. No bueno. Anyone wanna take a stab at this? A: Just taking a rough stab at this... add_filter('media_upload_tabs', 'modify_media_tabs'); function modify_media_tabs($tabs) { if (is_super_admin()) return $tabs; return array( 'type_url' => __('From URL'), 'gallery' => __('Gallery'), 'library' => __('Media Library') ); } add_filter('_upload_iframe_src', 'change_default_media_tab'); function change_default_media_tab($uri) { if (is_super_admin()) return $uri; return $uri.'&amp;tab=library'; } add_action('current_screen', 'check_uploading_permissions'); function check_uploading_permissions() { if (is_super_admin()) return; if (get_current_screen()->id == 'media-upload' || (get_current_screen()->action == 'add' && get_current_screen()->id == 'media')) { $post_id = (int) $_GET['post_id']; if (!$post_id || isset($_GET['inline'])) wp_die(__('You do not have permission to upload files.')); if ( !isset($_GET['tab']) || !($_GET['tab'] == 'library' || $_GET['tab'] == 'type_url' || $_GET['tab'] == 'gallery')) { wp_redirect( admin_url('media-upload.php?tab=library&post_id='.$post_id) ); exit; } } } We're doing 3 things:

1. Removing the "My computer" tab from the editor's popup uploader
2. Making "Library" the default tab in the editor's popup uploader
3. Denying direct access to the media upload pages

I did this for anyone who isn't a super admin, but of course, you can add in a clause for other
capabilities instead. I didn't try POSTing an image to see if it's bypass-able, so if compliance is your game, you'd want to do that. In fact, while it may go without saying, I'll say it anyway: This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. Hope this helps! Cheers~
Q: Maven can't read my Cucumber Test

I'm currently receiving this log when running 'mvn test'

[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Envirosite-Regression 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ Envirosite-Regression ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory C:\Users\christian.nuval\Envirosite-Regression\src\main\resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ Envirosite-Regression ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ Envirosite-Regression ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ Envirosite-Regression ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ Envirosite-Regression ---
[INFO] Surefire report directory: C:\Users\christian.nuval\Envirosite-Regression\target\surefire-reports

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.EnvirositeRegression.bdd.IssueTest
Configuring TestNG with: org.apache.maven.surefire.testng.conf.TestNG652Configurator@7a46a697
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.385 sec

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3.374 s
[INFO] Finished at: 2016-07-21T16:32:18+08:00
[INFO] Final Memory: 12M/225M
[INFO] ------------------------------------------------------------------------

My pom.xml looks like this :

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.EnvirositeRegression.bdd</groupId>
  <artifactId>Envirosite-Regression</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>Envirosite-Regression</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <dependencies>
    <dependency>
      <groupId>info.cukes</groupId>
      <artifactId>cucumber-java</artifactId>
      <version>1.1.8</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>info.cukes</groupId>
      <artifactId>cucumber-junit</artifactId>
      <version>1.1.8</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>info.cukes</groupId>
      <artifactId>cucumber-core</artifactId>
      <version>1.2.4</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.openqa.selenium.core</groupId>
      <artifactId>selenium-core</artifactId>
      <version>1.0-20080914.225453</version>
    </dependency>
    <dependency>
      <groupId>org.seleniumhq.selenium</groupId>
      <artifactId>selenium-java</artifactId>
      <version>2.53.0</version>
    </dependency>
    <dependency>
      <groupId>org.testng</groupId>
      <artifactId>testng</artifactId>
      <version>6.9.10</version>
    </dependency>
    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-lang3</artifactId>
      <version>3.4</version>
    </dependency>
    <dependency>
      <groupId>org.apache.poi</groupId>
      <artifactId>poi</artifactId>
      <version>3.14</version>
    </dependency>
    <dependency>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.19.1</version>
    </dependency>
  </dependencies>
  <profiles>
    <profile>
      <id>cucumber-tests</id>
      <build>
        <plugins>
          <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.1</version>
            <configuration>
              <source>1.7</source>
              <target>1.7</target>
              <encoding>UTF-8</encoding>
            </configuration>
          </plugin>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.19.1</version>
            <configuration>
              <configuration>
                <includes>
                  <include>**/*Test.java</include>
                </includes>
              </configuration>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>
  </profiles>
</project>

My IssueTest.java looks like this :

package com.EnvirositeRegression.bdd;

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
    format = { "pretty", "html:target/cucumber" },
    glue = "com.EnvirositeRegression.bdd",
    features = "SiteSearchTool.feature"
)
public class IssueTest {
}

I don't know why the IssueTest.java is not properly read even though I added it as an inclusion in my build configuration of maven-surefire-plugin. Please advise.

A: Your logs are showing SUCCESS with no executed tests because there are no tests to run, i.e. no tests are written in your IssueTest. Try an example test case, then run mvn test.

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class IssueTest {
    @Test
    public void exampleReturnsTrue() {
        assertTrue("This will succeed.", true);
    }
}

A side note: if your tests are located where they're supposed to be (./src/test/java/), I suppose your pom.xml could do without the <configuration> [...] </configuration> clause, so:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.19.1</version>
</plugin>
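One more thing worth checking. Your log contains the line "Configuring TestNG with: org.apache.maven.surefire.testng.conf.TestNG652Configurator", which suggests Surefire picked its TestNG provider (both testng and junit are on your classpath), so the JUnit-based @RunWith(Cucumber.class) runner is never handed to JUnit at all. A sketch of pinning Surefire to its JUnit 4 provider, assuming you want JUnit to drive the Cucumber runner; keep the provider version in sync with the plugin version:

```xml
<!-- Sketch: force Surefire's JUnit 4 provider so the Cucumber runner
     (a JUnit @RunWith class) is executed instead of TestNG.
     surefire-junit47 is Surefire's own provider artifact; its version
     should match the maven-surefire-plugin version. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.19.1</version>
  <dependencies>
    <dependency>
      <groupId>org.apache.maven.surefire</groupId>
      <artifactId>surefire-junit47</artifactId>
      <version>2.19.1</version>
    </dependency>
  </dependencies>
</plugin>
```

Alternatively, if you are not actually writing TestNG tests, removing the testng dependency from the pom should let Surefire fall back to the JUnit provider on its own.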
Neurobehavioral adaptations to methylphenidate: the issue of early adolescent exposure. Exposure to psychostimulants, including both abused and therapeutic drugs, can first occur during human adolescence. Animal modeling is useful not only to reproduce adolescent peculiarities but also to study neurobehavioral adaptations to psychostimulant consumption. Human adolescence (generally considered the period between 9/12 and 18 years of age) has been compared with the age window between postnatal days (pnd) 28/35 and 50 in rats and mice. These adolescent rodents display basal hyperlocomotion and higher rates of exploration, together with a marked propensity for sensation-seeking and risk-taking behaviors. Moreover, peculiar responses to psychostimulants, including enhanced locomotor sensitization, an absence of drug-induced stereotypy and reduced place conditioning, have been described in adolescent rodents. During this age window, forebrain dopamine systems undergo profuse remodeling, thus providing a neurobiological substrate for the behavioral peculiarities observed during adolescence, as well as for the reported vulnerabilities to several drugs. Further, methylphenidate (MPH, better known as Ritalin®), a psychostimulant extensively prescribed to children and adolescents diagnosed with attention-deficit/hyperactivity disorder (ADHD), raises concerns about its long-term safety. In magnetic resonance studies, the acute effects of MPH appear to differ in adolescent rats compared with adult animals. Moreover, adolescent exposure to MPH seems to provoke persistent neurobehavioral consequences: long-term modulation of self-control abilities, decreased sensitivity to natural and drug reward, and enhanced stress-induced emotionality, together with an enhanced cortical control over sub-cortical dopamine systems and an enduring up-regulation of Htr7 gene expression within the nucleus accumbens (NAcc).
In summary, additional studies in animal models are necessary to better understand the long-term consequences of adolescent MPH exposure, and to further investigate the safety of the prescription and administration of such pharmacological treatment at early life stages.
<Type Name="DragDataGetArgs" FullName="Gtk.DragDataGetArgs"> <TypeSignature Language="C#" Value="public class DragDataGetArgs : GLib.SignalArgs" Maintainer="auto" /> <TypeSignature Language="ILAsm" Value=".class public auto ansi beforefieldinit DragDataGetArgs extends GLib.SignalArgs" /> <AssemblyInfo> <AssemblyName>gtk-sharp</AssemblyName> <AssemblyPublicKey> </AssemblyPublicKey> </AssemblyInfo> <ThreadSafetyStatement>Gtk# is thread aware, but not thread safe; See the <link location="node:gtk-sharp/programming/threads">Gtk# Thread Programming</link> for details.</ThreadSafetyStatement> <Base> <BaseTypeName>GLib.SignalArgs</BaseTypeName> </Base> <Interfaces /> <Docs> <summary>Event data.</summary> <remarks> <para>The <see cref="M:Gtk.Widget.DragDataGet" /> event invokes <see cref="T:Gtk.DragDataGetHandler" /> delegates which pass event data via this class.</para> </remarks> </Docs> <Members> <Member MemberName=".ctor"> <MemberSignature Language="C#" Value="public DragDataGetArgs ();" /> <MemberSignature Language="ILAsm" Value=".method public hidebysig specialname rtspecialname instance void .ctor() cil managed" /> <MemberType>Constructor</MemberType> <ReturnValue /> <Parameters /> <Docs> <summary>Public Constructor.</summary> <remarks>Create a new <see cref="T:Gtk.DragDataGetArgs" /> instance with this constructor if you need to invoke a <see cref="T:Gtk.DragDataGetHandler" /> delegate.</remarks> </Docs> </Member> <Member MemberName="Context"> <MemberSignature Language="C#" Value="public Gdk.DragContext Context { get; }" /> <MemberSignature Language="ILAsm" Value=".property instance class Gdk.DragContext Context" /> <MemberType>Property</MemberType> <ReturnValue> <ReturnType>Gdk.DragContext</ReturnType> </ReturnValue> <Docs> <summary>The context of this drag.</summary> <value>a <see cref="T:Gdk.DragContext" /></value> <remarks /> </Docs> </Member> <Member MemberName="Info"> <MemberSignature Language="C#" Value="public uint Info { get; }" /> <MemberSignature 
Language="ILAsm" Value=".property instance unsigned int32 Info" /> <MemberType>Property</MemberType> <ReturnValue> <ReturnType>System.UInt32</ReturnType> </ReturnValue> <Docs> <summary>For internal use.</summary> <value>A <see cref="T:System.UInt32" /></value> <remarks /> </Docs> </Member> <Member MemberName="SelectionData"> <MemberSignature Language="C#" Value="public Gtk.SelectionData SelectionData { get; }" /> <MemberSignature Language="ILAsm" Value=".property instance class Gtk.SelectionData SelectionData" /> <MemberType>Property</MemberType> <ReturnValue> <ReturnType>Gtk.SelectionData</ReturnType> </ReturnValue> <Docs> <summary>The data that is selected and dragged.</summary> <value>a <see cref="T:Gtk.SelectionData" /></value> <remarks /> </Docs> </Member> <Member MemberName="Time"> <MemberSignature Language="C#" Value="public uint Time { get; }" /> <MemberSignature Language="ILAsm" Value=".property instance unsigned int32 Time" /> <MemberType>Property</MemberType> <ReturnValue> <ReturnType>System.UInt32</ReturnType> </ReturnValue> <Docs> <summary>The time at which this data was retrieved from the source widget.</summary> <value>A <see cref="T:System.UInt32" /></value> <remarks /> </Docs> </Member> </Members> </Type>
An augmented space approach to the study of random ternary alloys: I. Electronic structure with uncorrelated disorder and short ranged order. We present a generalized augmented space recursive technique which explicitly includes the effects of diagonal and environmental disorder: an analytic, lattice-translation-invariant multiple-scattering theory for the study of short-range ordering in random ternary alloys. Our generalized augmented space formalism includes atomic correlations over a finite cluster, including short-range order (SRO). We propose the augmented space recursion (ASR), a computationally fast and accurate technique which incorporates configuration fluctuations over a large local environment. We apply the formalism to a tight-binding linear muffin-tin orbital (LMTO) study of stainless steel Fe(80-x)Ni(x)Cr(20) (x = 14 and 17), and demonstrate the effects of short-range ordering by calculating the configuration-averaged density of states with and without SRO, and with different kinds of cluster environment embedded in an averaged medium.