// ReSharper disable All
using System.Collections.Generic;
using System.Dynamic;
using PetaPoco;

namespace MixERP.Net.Core.Modules.HRM.Data
{
    public interface IExitScrudViewRepository
    {
        /// <summary>
        /// Performs count on IExitScrudViewRepository.
        /// </summary>
        /// <returns>Returns the number of IExitScrudViewRepository.</returns>
        long Count();

        /// <summary>
        /// Returns all instances of the "ExitScrudView" class from IExitScrudViewRepository.
        /// </summary>
        /// <returns>Returns non-live, non-mapped instances of the "ExitScrudView" class.</returns>
        IEnumerable<MixERP.Net.Entities.HRM.ExitScrudView> Get();

        /// <summary>
        /// Display fields provide a minimal name/value context for data binding IExitScrudViewRepository.
        /// </summary>
        /// <returns>Returns an enumerable name and value collection for IExitScrudViewRepository.</returns>
        IEnumerable<DisplayField> GetDisplayFields();

        /// <summary>
        /// Produces a paginated result of 10 items from IExitScrudViewRepository.
        /// </summary>
        /// <returns>Returns the first page of the collection of the "ExitScrudView" class.</returns>
        IEnumerable<MixERP.Net.Entities.HRM.ExitScrudView> GetPaginatedResult();

        /// <summary>
        /// Produces a paginated result of 10 items from IExitScrudViewRepository.
        /// </summary>
        /// <param name="pageNumber">Enter the page number to produce the paginated result.</param>
        /// <returns>Returns a collection of the "ExitScrudView" class.</returns>
        IEnumerable<MixERP.Net.Entities.HRM.ExitScrudView> GetPaginatedResult(long pageNumber);

        List<EntityParser.Filter> GetFilters(string catalog, string filterName);

        /// <summary>
        /// Performs a filtered count on IExitScrudViewRepository.
        /// </summary>
        /// <param name="filters">The list of filter conditions.</param>
        /// <returns>Returns the number of rows of the "ExitScrudView" class using the filter.</returns>
        long CountWhere(List<EntityParser.Filter> filters);

        /// <summary>
        /// Produces a paginated result of 10 items using the supplied filters from IExitScrudViewRepository.
        /// </summary>
        /// <param name="pageNumber">Enter the page number to produce the paginated result. If you provide a negative number, the result will not be paginated.</param>
        /// <param name="filters">The list of filter conditions.</param>
        /// <returns>Returns a collection of the "ExitScrudView" class.</returns>
        IEnumerable<MixERP.Net.Entities.HRM.ExitScrudView> GetWhere(long pageNumber, List<EntityParser.Filter> filters);

        /// <summary>
        /// Performs a filtered count on IExitScrudViewRepository.
        /// </summary>
        /// <param name="filterName">The named filter.</param>
        /// <returns>Returns the number of rows of the "ExitScrudView" class using the filter.</returns>
        long CountFiltered(string filterName);

        /// <summary>
        /// Produces a paginated result of 10 items using the supplied filter name from IExitScrudViewRepository.
        /// </summary>
        /// <param name="pageNumber">Enter the page number to produce the paginated result. If you provide a negative number, the result will not be paginated.</param>
        /// <param name="filterName">The named filter.</param>
        /// <returns>Returns a collection of the "ExitScrudView" class.</returns>
        IEnumerable<MixERP.Net.Entities.HRM.ExitScrudView> GetFiltered(long pageNumber, string filterName);
    }
}
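The paging contract these comments describe (pages of 10 items; a negative page number disables pagination) can be sketched in a few lines. This is an illustrative Python sketch, not part of the MixERP codebase, and it assumes 1-based page numbers:

```python
PAGE_SIZE = 10  # the interface documents pages of 10 items

def paginate(items, page_number):
    """Return one 10-item page; a negative page number disables paging."""
    items = list(items)
    if page_number < 0:
        return items
    offset = (page_number - 1) * PAGE_SIZE  # assuming 1-based page numbers
    return items[offset:offset + PAGE_SIZE]
```

For example, with 25 items, page 3 contains the last five items, and a negative page number returns all 25.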
Democratic leaders offered a full-throated defense of their growing impeachment inquiry into President Donald Trump, suggesting that attempts to stonewall it will be considered obstruction of justice. House Speaker Nancy Pelosi of California opened the press conference seeking to demonstrate that Democrats are simultaneously investigating the president while also pursuing bipartisan legislation. She also defended Democrats' decision to move forward with an inquiry even though she sees impeachment as divisive. Democrats are investigating claims from an intelligence community whistleblower that Trump sought to leverage the power of his office to pressure Ukraine's president to look into former Vice President Joe Biden and his son, Hunter Biden. "We are legislating...investigating and litigating. We take this to be a very sad time," Pelosi said. "I don't see impeachment as a unifying thing for this country." House Intelligence Committee Chairman Adam Schiff of California joined Pelosi and discussed the timeline of recent subpoenas to Trump officials and allies related to Ukraine. He also expressed deep concerns about Secretary of State Mike Pompeo's admission that he was listening in on Trump's July 25 call with Ukrainian President Volodymyr Zelenskiy. Schiff argued that continued efforts by the White House to stonewall Congress will help build their case that the administration is obstructing their legislative duties. "We will also draw the inference as appropriate that they're trying to conceal facts that would corroborate the whistleblower complaint so we'll have to decide whether to litigate or how to litigate. We're not fooling around here though. We don't want this to drag on for months and months and months," Schiff told reporters on Wednesday.
"If they try to undermine our ability to find the facts around the president's effort to coerce a foreign leader to create dirt against a political opponent, then they will be strengthening the case on obstruction if they behave that way." Schiff noted that Kurt Volker, the U.S. ambassador to Ukraine who abruptly resigned last week, will testify behind closed doors before the House Intelligence Committee on Thursday. Trump appeared to be watching the press conference and tweeted sharp criticisms of both Pelosi and Schiff. He plans to hold a news conference later Wednesday. "Nancy Pelosi just said that she is interested in lowering prescription drug prices & working on the desperately needed USMCA," Trump said, referring to legislation to lower drug prices and the ongoing negotiations of a new trade deal.
<template>
  <transition name="tr-expand">
    <tr
      v-if="active"
      class="tr-expand"
    >
      <td :colspan="colspan">
        <div class="content-tr-expand">
          <slot></slot>
          <button
            v-if="close"
            class="tr-expand--close"
            @click="$emit('click', $event)"
          >
            <i class="material-icons">
              clear
            </i>
          </button>
        </div>
      </td>
    </tr>
  </transition>
</template>

<script>
export default {
  props: {
    close: {
      type: Boolean,
      default: false
    },
    colspan: {
      default: 1,
      type: Number
    }
  },
  data: () => ({
    active: false
  }),
  mounted() {
    this.active = true
  }
}
</script>
Q: Gradient computation

I am a beginner in data science. I am trying to understand this PyTorch code for gradient computation using a custom autograd function (note that both forward and backward must be static methods):

class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        grad_x = grad_output.clone()
        grad_x[x < 0] = 0
        return grad_x

However, I don't understand this line: grad_x[x < 0] = 0. Can anyone explain this part?

A: The example you found is calculating the gradient for the ReLU function, whose gradient is
$$\text{ReLU}'(x)=\left\{ \begin{array}{c l} 1 & \text{if } x>0\\ 0 & \text{if } x<0 \end{array}\right.$$
Therefore, when x < 0, you make the gradient 0 by grad_x[x < 0] = 0.
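For readers without PyTorch at hand, the same masking step can be reproduced with plain NumPy. This is only an illustrative sketch of the indexing logic, not of the autograd machinery:

```python
import numpy as np

x = np.array([-2.0, -0.5, 1.0, 3.0])              # inputs saved in forward()
grad_output = np.array([10.0, 10.0, 10.0, 10.0])  # upstream gradient

grad_x = grad_output.copy()
grad_x[x < 0] = 0  # zero the gradient wherever the input was negative
# grad_x is now [0., 0., 10., 10.]: the gradient flows only where x > 0
```

Boolean-mask assignment (`grad_x[x < 0] = 0`) works the same way on PyTorch tensors as it does on NumPy arrays.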
# Falkland Islands

---
description: Hotel in Stanley, -51.69328,-57.86325
components:
  building: Rose Hotel
  country: Falkland Islands
  country_code: fk
  house_number: 1
  residential: Port Stanley
  road: Brisbane Road
  town: Stanley
expected: |
  Rose Hotel
  1 Brisbane Road
  Stanley
  Falkland Islands
  United Kingdom

---
description: middle of nowhere, -51.70,-58.62
components:
  continent: South America
  country: Falkland Islands
  country_code: fk
expected: |
  Falkland Islands
  United Kingdom
Q: Div content slider with arrow image

I have a working script that slides content inside a div up and down with an arrow image. For some reason I can't get the fiddle to work, but it works on my site just fine. http://jsfiddle.net/w5j2s/

function Scroll(id, ud, spd) {
    var obj = document.getElementById(id);
    clearTimeout(obj.to);
    if (ud) {
        obj.scrollTop = obj.scrollTop + ud;
        obj.to = setTimeout(function () {
            Scroll(id, ud, spd);
        }, spd || 50);
    }
}

I thought I had only one question, but now... First, why doesn't the fiddle work? Same script, divs, and style, and it works on my site... it is not complicated, but for some reason it doesn't work on the fiddle! Second, I need a div with an arrow image, but this one needs to slide left to right, not up and down. How do I modify the script so I can use the same one for two different divs on the same page, one that scrolls up and down and another that slides left to right?

A: You need to specify that the script is placed in the body.

For the second part of your question, you should write another function similar to Scroll that uses scrollLeft instead of scrollTop.
UPDATE: October 3, 2014 Robert Talbot, Jr.—who was arrested in March after the FBI had closely tracked his alleged scheme to launch a violent “American Insurgent Movement,” including alleged plans to kill police officers, rob armored cars, and blow up mosques—will appear in federal court Friday. In light of his re-arraignment hearing, we are re-posting an April story detailing Talbot’s largely online efforts to corral a right-wing rebellion, and the FBI’s undercover plot to catch him. Robert James Talbot, Jr. woke up on the morning of March 27 ready to carry out the plan he’d allegedly been concocting for months. He drove to a storage facility in Houston, Texas, where he met the three other members of “Operation Liberty.” According to a criminal complaint filed against him, Talbot had recruited his teammates via a Facebook group called the American Insurgent Movement to help him rob armored cars, the first phase of his larger scheme to kill police officers and blow up mosques and government buildings. Talbot allegedly provided his team with detailed maps of the financial institution he wanted to target, and escape routes for best avoiding law enforcement. He put two Composition 4, or C4, plastic explosive devices in his black backpack and asked one of his team members to read a “manifesto” he’d brought with him. “We must rebel. There is no other option,” it read, according to the complaint. “Blood and bullets are the only two things that will change this world, short of divine action.” On his way to carry out the robbery, Talbot was arrested by an FBI SWAT team. Unbeknownst to the 38-year-old Talbot, he’d been the subject of an FBI Joint Terrorism Task Force investigation since August 2013. The people he’d recruited to be a part of his team were a pair of undercover FBI agents and a civilian informant.
While the words “terrorism” and “insurgent” often conjure thoughts of Islamic extremism, the majority of domestic terrorist attacks in the United States over the past two decades have been carried out by right-wing radicals. According to data compiled last year by the liberal website ThinkProgress from the National Counterterrorism Center, the National Consortium for the Study of Terrorism and Responses to Terrorism, and the Southern Poverty Law Center, right-wing extremists have been responsible for 56 percent of domestic terrorist attacks and plots in the U.S. since the 1995 Oklahoma City bombing; 12 percent have been perpetrated by Islamic extremists. Before a federal judge, Talbot—who reportedly wore the green fatigues and brown “American Insurgent Movement” t-shirt he was arrested in—was charged Friday with attempting to interfere with commerce via robbery, solicitation to commit a violent crime, and possession of an explosive material. Talbot faces up to 20 years in prison and a fine of $250,000 if convicted of the attempted robbery charge alone. Each of the remaining charges carries an additional 10 years and $100,000. According to the complaint against him, the FBI began investigating Talbot in August 2013 after he unwittingly met with an unnamed informant and allegedly expressed his desire to rob banks and use that money to equip his “resistance” group with weapons with which they would kill law enforcement. Over the next eight months, the FBI complaint alleges, Talbot set his plan into motion: He created a Facebook group titled “American Insurgent Movement” or “AIM.” He described it as “a Pre-Constitutionalist Community that offers those who seek True patriotism and are looking for absolute Freedom by doing the Will of God. Who want to restore America Pre-Constitutionally and look forward to stopping the Regime with action by bloodshed.” According to the FBI, Talbot continued to meet and chat online with the informant.
Talbot asked whether the informant would be ready to quit his job to start robbing banks; ordered him to start staking out Bank of America and Chase bank locations that they might be able to hit; and suggested he prepare mentally for killing people by watching violent war movies. Talbot also started engaging with undercover FBI agents, both online and in person. He was eager to start robbing banks, he allegedly told two undercover agents at a restaurant in Katy, Texas on January 30. That way, he could get the money he needed for better weapons and equipment to kill law enforcement agents and government officials in Washington, D.C. Talbot’s Facebook posts started to take on a sense of urgency as well. On January 30, according to the complaint, Talbot posted on the AIM page that “Liberty movement starts this summer for those who are up for anything. Email the admin if your [sic] interested in walking away from your life (we have weapons if you need a weapon) to stop the Regime. We always will be recruiting…” But by February 9, Talbot had narrowed his search to “ONLY ex-military or self-trained men who trained in guerrilla warfare and understand war/battle to the fullest. I cannot take someone whom [sic] doesn’t understand what war/battle is or like,” he posted on Facebook, according to the complaint. “I don’t need someone freezing up when bullets are whizzing past there [sic] head and jeopardize the rest of the team. I can train you, but I have no time to put that much effort into someone mentally to handle blood and killing.” At 1:30 a.m. on January 12, Talbot was arrested for driving while intoxicated. According to court documents, he was sentenced to one year of probation. More than a month later, the FBI complaint alleges, Talbot met with the two undercover agents in person and informed them that he wanted to kill the state trooper who arrested him.
He had a plan to ambush the trooper at night, the complaint states, and then wait for more police officers to arrive as backup and kill them as well. He also allegedly said he wanted to kill his probation officer. In mid-March, the FBI alleges, Talbot told the undercover agents that he had been researching how to create shaped charges, explosives shaped to penetrate armored steel. Talbot allegedly asked the agents to get Composition 4 (C4) plastic explosives for him and to hide them in a storage facility rented under a fake name. He planned to use the C4 explosives to make shaped charges with which he could penetrate armored car doors. He also requested from the agents at least six hand grenades, one of which he said he would tape to the armored car driver’s door in order to kill the driver and keep the vehicle from driving off during their heist. In the week leading up to Talbot’s arrest, the complaint claims, he was observed staking out multiple banks and financial institutions around Houston, surveying with binoculars from his car from various vantage points and following an armored car to learn its driver’s routine. On March 22, he allegedly sent the undercover agents $500 via Money Gram—money that the FBI says was placed in evidence—in order to purchase the illegal explosives he asked for. On March 24, Talbot, who has been identified in local news reports as “a laborer,” told the informant via text message that he quit his job. The complaint states that when asked by one of the undercover officers whether he was serious about going through with the armed robbery planned for the 27th, Talbot said, “I didn’t quit my job for shi** and giggles.” *** For someone who allegedly advertised his own terrorism plot online, Talbot has virtually no personal Internet footprint. According to one local Houston news report, the Batavia, New York native had been living in some sort of boarding house.
Terry Denny, a man who lived in the same boarding house, told KPRC-TV in Houston that Talbot spent a lot of his time watching anti-government videos and claiming that he kept a stash of weapons in New York. “I believe this kid was as savvy or maybe more savvy than Timothy McVeigh, honestly I do. If he had a Terry Nichols with him, who knows what he would do?” Denny was quoted saying. Whether Talbot had a Terry Nichols—or any accomplice not secretly working against him—also remains unknown. Philip Gallagher, the federal public defender representing Talbot, declined to comment for this story as he is “still investigating this matter.” What is clear from the American Insurgent Movement Facebook page is that whoever created it identifies with a wide range of radical, right-wing, anti-government beliefs often ascribed to “Patriotism.” In 2011, the Southern Poverty Law Center identified 1,274 active “Patriot” groups in the United States. While they vary slightly across the country, they are almost all united in their disdain for (and fear of) federal government control over everything from their money to their health care to their guns. Posts on the AIM page proclaim the gamut of fears, from “police departments oath keepers being laid off and replaced with bilingual foreign soldiers” to President Obama giving himself the authority “to seize all of your assets.” “Do you have what it takes to become an insurgent? Are you sick of tyranny and your line has been over crossed?” a post from March 25 on the AIM Facebook page reads. “Do you feel like your being watched or already in chains? Do you have nothing to lose? Then AIM is for you join Operation Liberty. Regardless of your history we have supplies and weapons to provide after you join.” The SPLC notes that Talbot’s alleged plans resembled the 1984 armored car burglary and assassination of a Jewish radio host in Denver by a white nationalist group called The Order.
The biggest difference between the two, the SPLC writes, is that “Talbot talked about some of his planned crimes on Facebook, the complaint says, while The Order committed murders, robbed armored cars, and carried out a number of other attacks.” In court Friday, Assistant U.S. Attorney Carolyn Ferko asserted that she has no doubt Talbot would have carried out his plans if given the chance. “I would say he had the will. He was absolutely determined,” she said, comparing him to a tortoise: “slow and steady.”
Rogue waves in a multistable system. Clear evidence of rogue waves in a multistable system is revealed by experiments with an erbium-doped fiber laser driven by harmonic pump modulation. The mechanism for the rogue wave formation lies in the interplay of stochastic processes with multistable deterministic dynamics. Low-frequency noise applied to a diode pump current induces rare jumps to coexisting subharmonic states with high-amplitude pulses perceived as rogue waves. The probability of these events depends on the filtered noise frequency and grows as the noise amplitude increases. The probability distribution of spike amplitudes confirms the rogue wave character of the observed phenomenon. The results of numerical simulations are in good agreement with experiments.
Melatonin and melatonin agonists for delirium in elderly patients. The objective of this review is to summarize the available data on the use of melatonin and melatonin agonists for the prevention and management of delirium in elderly patients from randomized controlled trials (RCTs). A systematic search of 5 major databases, PubMed, MEDLINE, PsychINFO, Embase, and Cochrane Library, was conducted. This search yielded a total of 2 RCTs for melatonin. One study compared melatonin to midazolam, clonidine, and control groups for the prevention and management of delirium in individuals pre- and post-hip arthroplasty. The other study compared melatonin to placebo for the prevention of delirium in older adults admitted to an inpatient internal medicine service. Data from these 2 studies indicate that melatonin may have some benefit in the prevention and management of delirium in older adults. However, there is no evidence that melatonin reduces the severity of delirium or has any effect on behaviors or functions in these individuals. Melatonin was well tolerated in these 2 studies. The search for a melatonin agonist for delirium in elderly patients yielded 1 study of ramelteon. In this study, ramelteon was found to be beneficial in preventing delirium in medically ill individuals when compared to placebo. Ramelteon was well tolerated in this study.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/> <meta http-equiv="X-UA-Compatible" content="IE=9"/> <meta name="generator" content="Doxygen 1.8.6"/> <title>CZPlayer: DS_RefreshCacheStruct Struct Reference</title> <link href="tabs.css" rel="stylesheet" type="text/css"/> <script type="text/javascript" src="jquery.js"></script> <script type="text/javascript" src="dynsections.js"></script> <link href="search/search.css" rel="stylesheet" type="text/css"/> <script type="text/javascript" src="search/search.js"></script> <script type="text/javascript"> $(document).ready(function() { searchBox.OnSelectItem(0); }); </script> <link href="doxygen.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="top"><!-- do not remove this div, it is closed by doxygen! --> <div id="titlearea"> <table cellspacing="0" cellpadding="0"> <tbody> <tr style="height: 56px;"> <td style="padding-left: 0.5em;"> <div id="projectname">CZPlayer &#160;<span id="projectnumber">3.0.0</span> </div> <div id="projectbrief">CZPlayer application</div> </td> </tr> </tbody> </table> </div> <!-- end header part --> <!-- Generated by Doxygen 1.8.6 --> <script type="text/javascript"> var searchBox = new SearchBox("searchBox", "search",false,'Search'); </script> <div id="navrow1" class="tabs"> <ul class="tablist"> <li><a href="index.html"><span>Main Page</span></a></li> <li class="current"><a href="annotated.html"><span>Classes</span></a></li> <li><a href="files.html"><span>Files</span></a></li> <li> <div id="MSearchBox" class="MSearchBoxInactive"> <span class="left"> <img id="MSearchSelect" src="search/mag_sel.png" onmouseover="return searchBox.OnSearchSelectShow()" onmouseout="return searchBox.OnSearchSelectHide()" alt=""/> <input type="text" id="MSearchField" value="Search" accesskey="S" onfocus="searchBox.OnSearchFieldFocus(true)"
onblur="searchBox.OnSearchFieldFocus(false)" onkeyup="searchBox.OnSearchFieldChange(event)"/> </span><span class="right"> <a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a> </span> </div> </li> </ul> </div> <div id="navrow2" class="tabs2"> <ul class="tablist"> <li><a href="annotated.html"><span>Class List</span></a></li> <li><a href="classes.html"><span>Class Index</span></a></li> <li><a href="hierarchy.html"><span>Class Hierarchy</span></a></li> <li><a href="functions.html"><span>Class Members</span></a></li> </ul> </div> <!-- window showing the filter options --> <div id="MSearchSelectWindow" onmouseover="return searchBox.OnSearchSelectShow()" onmouseout="return searchBox.OnSearchSelectHide()" onkeydown="return searchBox.OnSearchSelectKey(event)"> <a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(0)"><span class="SelectionMark">&#160;</span>All</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(1)"><span class="SelectionMark">&#160;</span>Classes</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(2)"><span class="SelectionMark">&#160;</span>Files</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(3)"><span class="SelectionMark">&#160;</span>Functions</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(4)"><span class="SelectionMark">&#160;</span>Variables</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(5)"><span class="SelectionMark">&#160;</span>Enumerations</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(6)"><span class="SelectionMark">&#160;</span>Enumerator</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(7)"><span class="SelectionMark">&#160;</span>Macros</a></div> <!-- iframe showing the search results (closed by default) --> <div id="MSearchResultsWindow"> <iframe
src="javascript:void(0)" frameborder="0" name="MSearchResults" id="MSearchResults"> </iframe> </div> </div><!-- top --> <div class="header"> <div class="summary"> <a href="#pub-attribs">Public Attributes</a> &#124; <a href="struct_d_s___refresh_cache_struct-members.html">List of all members</a> </div> <div class="headertitle"> <div class="title">DS_RefreshCacheStruct Struct Reference</div> </div> </div><!--header--> <div class="contents"> <table class="memberdecls"> <tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="pub-attribs"></a> Public Attributes</h2></td></tr> <tr class="memitem:a718a838fbad4a6ecab0ce6b55d61381c"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="a718a838fbad4a6ecab0ce6b55d61381c"></a> signed int&#160;</td><td class="memItemRight" valign="bottom"><b>currMixerIndex</b></td></tr> <tr class="separator:a718a838fbad4a6ecab0ce6b55d61381c"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:aba4ddce851e1647242a92618a6d6536f"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="aba4ddce851e1647242a92618a6d6536f"></a> bool&#160;</td><td class="memItemRight" valign="bottom"><b>isSource</b></td></tr> <tr class="separator:aba4ddce851e1647242a92618a6d6536f"><td class="memSeparator" colspan="2">&#160;</td></tr> </table> </div><!-- contents --> <HR style="FILTER: alpha(opacity=100,finishopacity=0,style=3)" width="100%" color=#000000 SIZE=3> <table width="100%"> <tr> <td align="center"> <a href="http://www.qtcn.org/bbs/read-htm-tid-55824.html"> <img style="height:40px;" src="CZPlayer.png"> </img> </a> </td> </tr> <tr> <td align="center"> Copyright (C) 2012-2015 Highway-9 Studio. </td> </tr>
Irrigation Project in Ulongwe Area

Project information

Location

The site is located just outside Liwonde National Park in Bimbi Village of Bimbi Group Village Head in TA Kalembo. It is in Tambala Section in Ulongwe EPA. The Dambo stretches from Katapasya Stream down to the Shire River in Liwonde National Park. Its elevation is 482 m. The Dambo is surrounded by Bimbi, Gopole, and Chisawa Villages and Liwonde National Park.

Population

The population is very poor and lives from agriculture. Without irrigation, only one harvest per year is possible. That harvest, usually in March, is not enough for the whole year. As the year goes on, the people therefore suffer hunger and become dependent on food aid from abroad.

What is the problem?

The area is very hot and also lies in the rain shadow of the mountains, so there is too little precipitation for farming throughout the whole year. In the villages, the water table is about 4.60 m deep in the dry season, too deep for plant roots to reach the water.

Our help

Together with an interdisciplinary team, we plan and give technical advice to the people for an irrigation scheme of 9 hectares. In three further phases, more land will be irrigated. Since the irrigation area is not far from the Shire River, water from the river infiltrates into the irrigation area. With irrigation, three harvests a year can be achieved instead of one. This covers the families' own requirements and leaves enough to sell for income generation. The families will then have enough money to pay school fees, medical expenses, etc., and be independent of foreign aid. One hectare is sufficient for 10 families, each with about 6 family members.

Our co-operation partner

Our co-operation partner in Malawi is EI Malawi. On site we are working together with EI and the Church of Grace in Ulongwe.

How much does the project cost?

The construction costs for the 9-hectare project are about 44,500 US dollars.
Our part: about 4,000 US dollars for technical advice and planning on the ground.

About us

Institute Water for Africa e.V. works in development aid. Through the dissemination of simple methods and the use of appropriate technologies in water supply, water treatment, and sanitation, the population should be enabled to provide themselves with enough clean water and good hygiene.
There are many important decisions a homebrewer must make when crafting beer at home: whether to use dry or liquid yeast, brew with extract or go all-grain, use pellet or whole leaf hops. Each decision is what makes each brewer's creation unique. Another big decision, and one of the most overlooked aspects of home brewing, is how to serve one's home brewed beer. To Keg or To Bottle... A homebrew keg fitted with a CO2 charger and portable faucet assembly allows you to dispense your brew without bottling. Bottles are portable, store well, and are great for certain situations like competitions, bottle swapping among clubs, and bottle aging/conditioning, but bottling can become a very time-consuming and messy affair just for home enjoyment. Like many brewers, I quickly became interested in kegging my homebrew and dispensing it from a draft beer dispenser. Kegging seemed to be a much faster and cleaner process. I was also able to fill growlers when I needed to take beer with me for sharing. Having tested basically all of the mainstream portable and non-portable home brew dispensing set-ups (keg with portable CO2 charger, jockey box, carbonator cap, Mr. Beer, Beer Machine, Tap-A-Draft, Party Pig, etc.), I found that for home use, modifying a kegerator gives you the most bang for your buck. Many home brewers I have the pleasure of knowing have also found that stepping up to commercial-grade kegerators to serve their home brew is a far more expedient approach with the least amount of maintenance. Plastic serving vessels can be an inexpensive alternative to a steel-keg kegerator for serving and dispensing home brew at home or on the go. There are a few ways to go about converting a kegerator into a homebrew dispenser: you can change out the tap fittings to accommodate soda kegs, or you can install quick-release fittings so that you can easily switch between commercial Sanke kegs and soda kegs.
I prefer the latter, as it leaves open the possibility of serving both commercial beer and home brew. If you have two taps, you can also serve from both types of keg at once. I have also seen home brewers keep the kegerator's fittings the same and keg their homebrew into the bigger 15-gallon regulation kegs. This requires some special equipment, so it may not be the most economical approach. Changing out Sanke Tap Fittings for Cornelius (Soda) Keg Fittings Homebrew keg fittings with quick disconnects allow you to switch between homebrew and commercial-style kegs. Whether you are changing out the taps on a commercially produced kegerator or a kit-built conversion-style kegerator, it will be pretty easy to change out your tap setup with just a screwdriver, pliers, and a cutter or knife of some sort. If your beer line already has some fittings on the end, you will want to cut those off. Try to cut off a minimal length of beer line when you do this, because the length of the beer line is important to maintaining a foam-free flow of beer. After you have removed the regulation Sanke keg fittings, if any, you can attach a soda keg barb fitting to the end of the beer line with a hose clamp. This completes the beer line end of the conversion, but you also need to address the CO2 end. CO2 lines are usually of a larger diameter than beer lines (1/4" as opposed to 3/16"), but the process is basically the same as changing out the fittings for the beer lines. It does not matter as much if you cut the line shorter on the CO2 side, but it is a good idea to leave as much line as you can, just for ease of changing out kegs. One problem that arises with ball-lock-style soda kegs is that it is easy to confuse the gas fitting with the liquid fitting when attaching the fittings to ball lock kegs. The two fittings are very similar, so it is best to identify them by the "gas-in" groove or by putting them on the keg.
If you have to force the fitting on, you have switched the gas and liquid fittings. I have ruined some of my earlier home brew batches by accidentally switching these two fittings, which can cause beer leakage, keg contamination, CO2 loss, pressure problems and over-foaming of the beer, and can also ruin plastic fittings and rubber O-rings. You can order quick-change fitting kits from homebrew suppliers that are officially "food grade," which is preferred. Also, if purchasing from a local homebrew shop, it is a good idea to check that the barb fittings will screw into the quick-change coupler before you leave the store. Thread tape will ensure that you have a good connection between your couplers and hose barb fittings. Always wind your thread tape clockwise on male ends. You might want to use yellow thread tape on the beer line joint because of the beer's alcohol content, but white thread tape will work, for a while at least.

Another option, if you have a freezer-style conversion kegerator (keezer) or a large refrigerator-style conversion kegerator, is to install a splitter or manifold that lets you dispense a commercial Sanke keg and a soda keg of home brew at the same time. This modification is not very different from the process described above, except that you will need to install a splitter coming off of your CO2 setup. If you are planning to serve beers that require different levels of CO2 push, you will need either a manifold with separate regulators or two CO2 canisters and manifolds in the kegerator's cooling chamber.

Christian Lavender is a homebrewer in Austin, TX and founder of Kegerators.com and HomeBrewing.com

Related Homebrew Tips:
DIY Homebrew Dispensing Table -- Dispense homebrew directly from a table tower.
Build a Homebrew Cooler Dispenser -- Learn how to convert a five-gallon cooler into a beer dispensing cooler.
Homebrewer's Bar Plans -- Step-by-step homebrew bar plans designed for homebrewers by homebrewers.
We thank Lytras et al. \[[@r1]\] for their comments on our recent article assessing risk factors for mortality in inpatients with influenza and the effect of oseltamivir \[[@r2]\]. For this study, the authors are confident that the identification of, and adjustment for, confounding variables was done in a systematic and objective manner using stepwise logistic regression. We note the concerns about the methods we have used, but we do not see how these concerns are specific to our work; rather, they relate in general to the application of these methods. We observed evidence of a protective association of a full 5-day course of oseltamivir treatment against inpatient mortality, which emerged following adjustment for the confounding variables we identified. We believe this is plausible, given that risk of death is widely recognised to be associated with comorbidities and age, and that oseltamivir is licensed as an effective medication for seasonal influenza with a rational drug design against a viral target. It is the case that multiple testing using regression analysis has occurred, and the authors agree that accounting for this could result in an upper confidence limit exceeding 1. However, it would still be the case that evidence of an association between a standard course of oseltamivir and protection against inpatient mortality was demonstrated in a relatively large historic cohort of hospitalised patients receiving routine clinical care. Lytras et al. \[[@r1]\] write that the final multivariable model contained nine predictors in a dataset with just 32 outcome events (deaths), that our model was therefore severely overfitted, that the low prevalence of several risk factors in the data hints at potential multicollinearity problems, and that, both during the stepwise procedure and in the final model, variance inflation factors for the covariates should have been included (i.e. in Table 2 of the article \[[@r2]\]).
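For readers unfamiliar with the diagnostic Lytras et al. mention, a variance inflation factor is defined as VIF_j = 1/(1 - R_j^2), where R_j^2 is obtained by regressing predictor j on the remaining predictors. The sketch below illustrates the calculation in pure Python; the data are invented for illustration and have no connection to either study dataset.

```python
# Illustrative sketch of a variance inflation factor (VIF) check.
# VIF_j = 1 / (1 - R_j^2), where R_j^2 is the coefficient of
# determination from regressing predictor j on the other predictors.
# A VIF near 1 indicates little collinearity; large values flag trouble.

def solve(a, b):
    """Solve the linear system a x = b by Gaussian elimination with pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def r_squared(y, X):
    """R^2 of an ordinary least squares fit of y on X (intercept included)."""
    Xi = [[1.0] + row for row in X]
    k, n = len(Xi[0]), len(Xi)
    XtX = [[sum(Xi[r][i] * Xi[r][j] for r in range(n)) for j in range(k)]
           for i in range(k)]
    Xty = [sum(Xi[r][i] * y[r] for r in range(n)) for i in range(k)]
    beta = solve(XtX, Xty)
    y_hat = [sum(b * v for b, v in zip(beta, row)) for row in Xi]
    y_bar = sum(y) / n
    ss_res = sum((obs - fit) ** 2 for obs, fit in zip(y, y_hat))
    ss_tot = sum((obs - y_bar) ** 2 for obs in y)
    return 1.0 - ss_res / ss_tot

def vifs(X):
    """VIF for each column of the design matrix X (rows = observations)."""
    result = []
    for j in range(len(X[0])):
        y = [row[j] for row in X]
        others = [[v for i, v in enumerate(row) if i != j] for row in X]
        # division fails if predictors are perfectly collinear (R^2 = 1)
        result.append(1.0 / (1.0 - r_squared(y, others)))
    return result
```

In practice one would use a statistics package rather than hand-rolled least squares; the point is only to make the quantity under discussion concrete.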
In response, we note there were only seven variables in the final model of Table 2, with a total of nine parameters, and we do not agree that there is universal acceptance of the 10-events-per-variable recommendation \[[@r3]\]. In any situation, the required number of observations could be more or less than this number. An overfitted model would manifest itself in extreme estimates and confidence intervals; all the estimates and confidence intervals presented are proportionate, with the wide confidence interval for 'excessive alcohol use' reflecting the few individuals with that factor. Given the nonlinear link function used in logistic regression, collinearity would have needed to be severe to cause difficulties, and this too would have been manifested in the estimates and confidence intervals. With regard to delay in Table 3 of our study \[[@r2]\], for those not receiving oseltamivir the delay was coded as zero. A binary variable, defined to be one if an antiviral was given and zero otherwise, was created in order to assess the impact of delay on mortality by interacting it with delay whenever delay was to be analysed.

The following authors of the original article are acknowledged: Ben Warne, Lucy Reeve, Nicholas K. Jones, Kyriaki Ranellou, Silvana Christou, Callum Wright, Saher Choudhry, Clare Sander, Hongyi Zhang and Hamid Jalal. All authors of the original article approved the response.

**Conflict of interest:** None declared.

**Authors' contributions:** Mark Reacher, Neville Verlander and Maria Zambon wrote the response letter.

[^1]: Correspondence: Mark Reacher (<Mark.Reacher@phe.gov.uk>)
Q: Jenkins Workflow Parallel Step and Joins

I am currently using the parallel build step in a Jenkins Workflow script, where each branch may take a different amount of time.

parallel(fastBranch: {
    // Do something fast
}, slowBranch: {
    // Do something slow
})

I've got a problem where I think maybe the slowBranch isn't completing because the fastBranch is quicker; is this possible? Is there any kind of join mechanism in the parallel step to ensure the next line isn't executed until all branches are complete?

A: Yes, there is a join. For this job:

parallel(fastBranch: {
    build("Test_fast")
}, slowBranch: {
    build("Test_slow")
})
build("Test_join")

the log is:

parallel {
    Schedule job Test_fast
    Schedule job Test_slow
    Build Test_fast #1 started
    Build Test_slow #1 started
    Test_fast #1 completed
    Test_slow #1 completed
}
Schedule job Test_join
Build Test_join #1 started
Test_join #1 completed

The times are:

Fast Start: 17:06:00
Fast Finish: 17:06:01
Slow Start: 17:06:00
Slow Finish: 17:06:20
Join Start: 17:06:30
Join Finish: 17:06:30
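A related sketch, for completeness: by default the parallel step blocks until every branch has returned, and scripted pipeline also accepts a failFast flag that aborts the remaining branches as soon as one of them fails. Job names below are placeholders, not a definitive Jenkinsfile.

```groovy
// Sketch only: the parallel step itself is the join.
parallel(
    fastBranch: { build("Test_fast") },
    slowBranch: { build("Test_slow") },
    failFast: true   // abort surviving branches if any branch fails
)
// Reached only after both branches above have completed (or failFast fired).
build("Test_join")
```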
Epigenetic immunomodulation of hematopoietic malignancies.

Significant progress has been made in the clinical management of hematologic malignancies; nevertheless, a proportion of patients still remains unresponsive to available therapeutic options. Furthermore, patients who respond to specific therapeutic regimens may still require additional treatment to eradicate minimal residual disease. In this scenario, novel immunotherapeutic strategies may significantly impact the clinical course of hematopoietic tumors in different clinical stages of disease. Among immunotherapeutic approaches under development, promising clinical results are being obtained with vaccination of patients with solid malignancies against cancer testis antigens (CTA), which belong to a growing family of methylation-regulated tumor-associated antigens (TAA) shared among human malignancies of different histologies. Based on these notions, the emerging preclinical and clinical evidence suggests that an immunomodulatory role for epigenetic drugs is highly relevant; in fact, by interfering with DNA methylation, these compounds induce or upregulate the constitutive expression of CTA on actively proliferating neoplastic cells. This novel activity of epigenetic drugs combines with their well-known cytotoxic, pro-apoptotic and differentiating activities in hematopoietic tumors that are extensively described in other chapters of this issue. This review will focus on the expression of CTA in hematopoietic malignancies, on their epigenetic regulation, and on the foreseeable immunotherapeutic implications of DNA hypomethylating drugs to design new CTA-based chemo-immunotherapeutic approaches in patients with hematopoietic tumors.
Defending Against China’s Influence Operations Abroad: New MLI Report

OTTAWA, ON (October 25, 2018): In recent years, China has invested billions of dollars in an effort to boost its visibility and improve its image abroad. However, unbeknownst to many Canadians, the Chinese Communist Party has expanded its efforts and is now increasingly relying on unsavoury influence operations that use co-optation, bribery, incentivization, disinformation, censorship, and other methods. Defined as “sharp power,” these sorts of activities are part of a strategy employed by authoritarian regimes to penetrate the political, social, and economic systems of target countries in order to align them with authoritarian interests. Authored by J. Michael Cole, a Taipei-based security analyst and editor-in-chief of the Taiwan Sentinel, this paper examines the methods used by the Chinese Communist Party to influence countries like Canada and its allies, and what we should be doing to start defending ourselves. “We are only in the beginning phase of understanding the nature and scope of China’s sharp power challenge,” writes Cole. “Simply put, we have failed to pay enough attention to China over the years, or believed, as many did, that engagement would eventually turn the regime into a more liberal, if not democratic, partner in global affairs.” Cole’s paper is a wake-up call to political leaders in all parties who have been asleep at the wheel on China.
Taking policy examples from other countries that are more aware of the threat posed by Beijing, Cole outlines a number of recommendations, including:

- Update the legal system to target political warfare agents and activities;
- Strengthen foreign-investment screening mechanisms;
- Support measures to identify, track, and protect society against disinformation and computational propaganda;
- Bolster conflict of interest laws for government officials;
- Increase cooperation among law-enforcement and intelligence agencies;
- Expand government communication programs to help educate the public on political warfare.

Why is China targeting countries like Canada? Cole argues that some of the primary motivations of these influence operations are to promote Beijing’s interests, export the “China Model” of government, increase the legitimacy of the Chinese government abroad, and in some cases, even support the Chinese military. Rather than leaning on legitimate diplomatic tools such as “charm offensives,” Cole highlights how the Communist Party’s actions are not just designed to improve our perceptions of the People’s Republic, but rather to erode the very nature of our democratic institutions. “China has every right to use culture and a global media presence to increase its appeal and visibility worldwide,” writes Cole, though he argues “it would be a mistake to confuse these soft power efforts with the political warfare operations of a regime that is revisionist, anti-democratic and, as some would argue, increasingly Orwellian.” “Therefore, while China’s soft power is perfectly legitimate, its sharp power involves activities… [that] raise questions of ethics and often are incompatible with the values espoused by democratic societies.” Only by better understanding the ideology that lies at the heart of Beijing’s influence operations can we address the challenges democratic societies face as they seek to respond to them.
To learn more about Chinese influence operations and the threat that they pose, read the full report here. *** J. Michael Cole is a Taipei-based senior fellow with the China Policy Institute, University of Nottingham, associate researcher with the French Centre for Research on Contemporary China, chief editor of Taiwan Sentinel, and assistant coordinator of the Forum 2000’s China working group.
/*
Copyright (c) 2003-2015, CKSource - Frederico Knabben. All rights reserved.
For licensing, see LICENSE.md or http://ckeditor.com/license
*/
CKEDITOR.plugins.setLang( 'image', 'af', {
	alt: 'Alternatiewe teks',
	border: 'Rand',
	btnUpload: 'Stuur na bediener',
	button2Img: 'Wil u die geselekteerde afbeeldingsknop vervang met \'n eenvoudige afbeelding?',
	hSpace: 'HSpasie',
	img2Button: 'Wil u die geselekteerde afbeelding vervang met \'n afbeeldingsknop?',
	infoTab: 'Afbeelding informasie',
	linkTab: 'Skakel',
	lockRatio: 'Vaste proporsie',
	menu: 'Afbeelding eienskappe',
	resetSize: 'Herstel grootte',
	title: 'Afbeelding eienskappe',
	titleButton: 'Afbeeldingsknop eienskappe',
	upload: 'Oplaai',
	urlMissing: 'Die URL na die afbeelding ontbreek.',
	vSpace: 'VSpasie',
	validateBorder: 'Rand moet \'n heelgetal wees.',
	validateHSpace: 'HSpasie moet \'n heelgetal wees.',
	validateVSpace: 'VSpasie moet \'n heelgetal wees.'
} );
1. Field of the Invention

The present invention relates to a putting-practicing apparatus provided with a mirror.

2. Description of the Related Art

In addressing a golf ball with a putter, it is necessary to make the stroke line parallel with the straight line connecting the player's eyes with each other, and to place the golf ball on the ground (floor) at a position approximately right below the eye, or at a position a little outer (more distant) from the position right below the eye in the direction orthogonal to the stroke line. The conventional method of practicing addressing the golf ball is carried out by method (1) or method (2) below.

(1) Picking up the golf ball with the fingers, the player brings the golf ball into contact with one eye and looks downward. Then, the player releases the golf ball from the hand. The position of the ground (floor) at which the golf ball has dropped is the position right below that eye.

(2) The player sets the golf ball on the ground (floor) and addresses the golf ball. Then, the player holds a string-connected punched coin at a level a little below one eye and drops it vertically to check whether the set golf ball is located right below the coin.

The above practicing methods are unscientific. Either of the conventional practicing methods allows the player to check whether the positional relationship between the player's eyes and the set golf ball is correct, but it is difficult to check whether the stroke line is parallel with the straight line connecting the player's eyes with each other. Thus, it is difficult to practice putting the golf ball.

The present invention has been made in view of the above-described problem. Thus, it is an object of the present invention to provide a putting-practicing apparatus having a simple construction and allowing a player to practice addressing and putting golf balls easily on both an inclined surface and a horizontal surface.
To achieve the object, in a putting-practicing apparatus of the present invention, it is preferable that a base plate is installed on a holding tool through a supporting tool; a mirror is installed on the base plate at a required position thereof; and a space allowing a putter to be stroked is provided below the mirror. In a putting-practicing apparatus of the present invention, it is preferable that a base plate is installed on a holding tool through a supporting tool and an angle-adjusting device; a mirror is installed on the base plate at a required position thereof; and a level is installed on the base plate at a required position thereof that is interlocked with the mirror. It is most advantageous to use a spherical joint capable of adjusting the angle of the base plate omnidirectionally as the angle-adjusting device. However, as the angle-adjusting device, it is possible to use a device movable on a fulcrum, a flexible tube, and the like as effective as the spherical joint. It is most effective to install the angle-adjusting device on the base plate such that the angle-adjusting device is disposed at a side of the base plate opposite to a side thereof at which the mirror is installed. The holding tool may be trapezoidal, leg-shaped, pin-shaped, and in other shapes. But it is most favorable that the holding tool is rectangular plate-shaped. According to the putting-practicing apparatus of the present invention, while a player is watching his/her eyes reflected in the mirror, he/she can easily check whether the straight line connecting his/her eyes with each other and a stroke line are parallel with each other, whether the positional relationship between his/her eyes and a golf ball set on the ground (floor) is correct, the backstroke of a putter, putting, and follow-through. 
Because the angle-adjusting device adjusts the angle of the base plate, he/she can practice putting the golf ball on an inclined surface, a horizontal surface, indoors, on putting green, and the like.
Q: How to count rows from MySQL with PHP

I am trying to get the total count of rows using a variable in the SQL query, but I am not able to get it. Can anyone here help me with this?

<?php
$business = $value['business_name'];
//echo $business;
$sql = "SELECT COUNT(*) FROM listings_reviews WHERE business_name = '$business'";
$result = $conn->query($sql);
$row = $result->fetch_assoc();
//print_r($row);
$rating = implode($row);
echo $rating;
?>

A: Give an alias to your COUNT(*) in the SQL; for example, here I have aliased it to cnt. Then in PHP you can clearly identify the element of the $row array:

$business = $value['business_name'];
//echo $business;
$sql = "SELECT COUNT(*) AS cnt FROM listings_reviews WHERE business_name = '$business'";
$result = $conn->query($sql);
while ($row = $result->fetch_assoc()) {
    $rating = $row["cnt"];
}
echo $rating;
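One caution worth adding: interpolating $business directly into the SQL string is open to SQL injection. A hedged sketch of the same count with a mysqli prepared statement (assuming $conn is an open mysqli connection and the mysqlnd driver is available for get_result()):

```php
<?php
// Sketch: the business name is passed as a bound parameter
// instead of being spliced into the SQL string.
$sql = "SELECT COUNT(*) AS cnt FROM listings_reviews WHERE business_name = ?";
$stmt = $conn->prepare($sql);
$stmt->bind_param("s", $business);   // "s" = one string parameter
$stmt->execute();
$rating = $stmt->get_result()->fetch_assoc()["cnt"];
$stmt->close();
echo $rating;
```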
About Feed Us The piranha's lake has been taken over by tourists. With the people coming in to fish and throw parties in the lake, the piranha have to change their diet to humans. They don't mind so much though because that fatty human meat is making them grow and adapt quicker than usual! Upgrade a vicious school of fish with the iron-rich human blood!
HOLLOW WAVEGUIDES For use with gas spectrometers, IR temperature measurement instruments, lasers, and oxygen and carbon monoxide/carbon dioxide gas detectors, Epner Technology Inc. has released its LaserGold-coated hollow waveguides. The coating is more than 98% reflective in the near-, mid- and far-IR. The company can apply it to the inside of a variety of shapes and cross sections. The bore of the waveguide is axially polished to create a smooth surface with RMS values of close to 10 nm. The waveguides are custom-fabricated in lengths of up to 1 m and diameters of 0.25 to >50 mm. Wall thickness variations can be from 25 μm to >6 mm.
Newsletter

Posted October 11, 2009 06:00 am

Duo's perfect partnership carries squad

Associated Press

SAN FRANCISCO --- Tiger Woods and Steve Stricker were perfect as Presidents Cup partners, and they got enough help from everyone else Saturday to put the Americans in position to stay perfect on home soil. With an improbable rally by Woods in the morning and pure putting by Stricker in the afternoon, they became the first partnership in the Presidents Cup -- and the first in 30 years of any team competition -- to go 4-0. Larry Nelson and Lanny Wadkins won all their matches in the 1979 Ryder Cup at The Greenbrier. Phil Mickelson had a chance to join them with an undefeated record using different partners. Mickelson and Sean O'Hair, who won their match handily in the morning, each had a birdie putt inside 15 feet on the final hole for the win, but had to settle for a halve. The Americans had a 12 1/2-9 1/2 lead. The International team walked away from Harding Park the past two days with momentum from keeping close. But as darkness fell across from Lake Merced, the deficit looked daunting with 12 singles matches remaining today. No team has rallied from three points behind on the final day to win the cup outright, and the Americans have lost only one singles session in the seven previous Presidents Cup matches. "Last time we had a five-point mountain to climb in Montreal, and it looks like we will have something to climb," Geoff Ogilvy said after collecting his first point of the week in teaming with fellow Australian Robert Allenby. Woods and Stricker's perfect mark looked unlikely in the morning foursomes, when the International team was poised to catch the Americans by leading the final three matches on the course. Woods and Stricker, who had missed four putts inside 8 feet during a six-hole stretch in the middle of the match, were on the verge of being closed out on the 17th hole. The International team was 1 up, with Mike Weir facing 5 feet for birdie.
Woods tried to drive the green and found the bunker, and Stricker hit a poor shot to 25 feet. Miss it, and Weir could win the match by making his short birdie putt. Woods watched his birdie putt tumble into the cup on the final roll, then showed more emotion than he had all week. He repeatedly pumped his fist as Stricker broke into a wide grin. "The stage is set, and he comes through again today," Stricker said. "It's pretty impressive." Weir pushed his birdie putt, and the match was all square heading to the 18th. From the fairway, Woods drilled a 3-iron and twirled the club in his hand, the sign of a good shot, and this one was even better. It landed softly onto the green and stopped 8 feet away. Tim Clark blasted his bunker shot long, and the International team conceded the birdie and an unlikely 1-up victory. "It was fun to watch," Stricker said. "I had a front-row seat for that. That was pretty cool."

Photo caption (Marcio Jose Sanchez/Associated Press): Steve Stricker (left) and Tiger Woods, of the U.S., became the first partners to go 4-0 in any team competition in 30 years.
KUWAIT CITY, July 10: Citizens, expatriates and officials in showrooms for electrical equipment, electronics and textiles have expressed disappointment over the spread of fake equipment and textiles that carry international brands but do not come close to the quality of the original products. They said consumers can no longer differentiate whether the items they are buying are fake or original, as everything has been mixed up in a market where the traders of fake electronics are only concerned with making quick profits by cheating or deception. They stressed the need to hold accountable the traders who cheat their customers by selling fake products. The concerned monitoring authorities, which seem to be absent from the markets, must stop the import of these fake products, they added.

A citizen, Abu Ibrahim, said: “Fake products have spread in different local markets. Even big commercial centers cheat their customers as they sell fake products such as computers and television sets. Fake products have spread like wildfire and have ‘conquered’ the original products for years, as some Chinese companies managed to imitate all genuine or original products. It is possible to buy a fake product, but it should be by my intention and own will, not by being cheated by the seller.”

Said Jamal disclosed that he bought an electronic device from a showroom last year for a relative in his country but was surprised to find out later it was fake, although the showroom staff had affirmed the product was genuine. Safwat Ezz disclosed that such commercial cheating has reached the textile market. She purchased a piece of cloth for a relative and was told it was original wool, but she was surprised to discover during the washing and sewing process that it was not wool. Another citizen, Abbas Mahmoud, asserted it has become necessary for the concerned authorities to ban the import of fake products.
“All these fake products are sold openly; even the CDs are sold in open spaces, on the pavements, yet no monitoring authority says anything,” he complained. Umm Ahmad admitted she has resorted to buying fake make-up because the original products are expensive, despite knowing that fake beauty products may damage the skin or the eyes and cause cancer. She pointed out she has no choice, considering that salaries are low whereas the prices of various products continue to soar. Attorney Mubarak Al-Mutawa confirmed that Kuwaiti law criminalizes the act of cheating in commercial markets, especially since Kuwait is a signatory to several agreements on protecting copyrights.

One comment

I’d like to thank Mr. Najeh Bilal Al-Seyassah. He pointed out a very important and current situation in the Kuwait market. I really appreciate him from the bottom of my heart. I’d like to see a quick reaction from the authorities on the recommendations. I think it’s time to rethink importing some of the Chinese products, as we all know the quality of those products. If the authorities ban those imports, then they might not be able to sell those products even if they want to. We seek help from the authorities to prevent us from buying those fake products.
Q: What exactly is a cross face position in BJJ?

I'm training BJJ, but my native language is not English. I tried a translation, but it didn't work well. So I'm wondering: what is a cross face? I see people mentioning this cross face position when passing the guard. I appreciate it!

A: A crossface is a way to gain positional control of an opponent while working from side control (side control is also called cross side or side mount). The gist of it is that you are driving shoulder pressure into the chin of your opponent to limit his/her mobility. More specifically, using a crossface helps prevent your opponent from turning toward you so that they can either hip away from you or roll into you to initiate some sort of escape. Here is a good visual reference: https://www.youtube.com/watch?v=Y9DHk7DgL1w
Portland police rescue family penned into bedroom by 22-pound cat

Portland police rescued a family penned into a bedroom by a 22-pound house cat after the cat had attacked a baby Sunday night. The baby was OK, but the family – dog included – was forced to take refuge in a bedroom as the Himalayan cat continued to rage outside. "It's only funny when it's not happening to you," said Teresa Barker. "When this happens to you, I assure you, you will do the same things." The feline free-for-all started when Teresa's baby boy Jessie pulled the cat's tail. The cat whacked Jessie in the forehead, drawing blood. Teresa's boyfriend kicked the cat away from the baby, and that's when the cat went wild. "He was on top of the fridge, and then when something like that turns around and follows you, you're kinda getting backed up," Barker said. "As you can see, it's kind of a small space in here, so yeah, it was very frightening." The family locked themselves in a bedroom, and when Animal Control didn't answer, Teresa's boyfriend called 911. "Yeah, hi, I have a kind of a particular emergency here," the boyfriend told the dispatcher. "Um, my cat attacked my 7-month-old child... and we're trapped in our bedroom. He won't let us out of our door." He continued later: "He's at our door, bedroom door... do you hear him screaming?" Operator: "Yeah, yeah, I hear him." Officers Timothy Bocciolatt and Craig Lehman were called to the scene. "When we first got the call, we were thinking 'it's a cat call, is this really what's coming out?'" Bocciolatt said. Bocciolatt grabbed central precinct's only animal control snare and they managed to corral the cat. "The cat did not want to get back in the cage, that was for sure," Bocciolatt said. "He wanted to be free at that point." Barker said she's not sure what she's going to do with the cat. "Being trapped in an apartment or a house with anything that is going to be violent like that is very scary," she said.
package org.jetbrains.plugins.scala.autoImport.quickFix

import com.intellij.codeInsight.JavaProjectCodeInsightSettings
import com.intellij.codeInsight.completion.JavaCompletionUtil.isInExcludedPackage
import com.intellij.openapi.editor.Editor
import com.intellij.openapi.project.Project
import com.intellij.psi._
import org.jetbrains.plugins.scala.ScalaBundle
import org.jetbrains.plugins.scala.autoImport.{GlobalMember, GlobalTypeAlias}
import org.jetbrains.plugins.scala.extensions._
import org.jetbrains.plugins.scala.lang.formatting.settings.ScalaCodeStyleSettings
import org.jetbrains.plugins.scala.lang.psi.ScalaPsiUtil.{getCompanionModule, hasStablePath}
import org.jetbrains.plugins.scala.lang.psi.api.ScalaFile
import org.jetbrains.plugins.scala.lang.psi.api.base.ScReference
import org.jetbrains.plugins.scala.lang.psi.api.base.types.ScTypeProjection
import org.jetbrains.plugins.scala.lang.psi.api.expr.{ScMethodCall, ScSugarCallExpr}
import org.jetbrains.plugins.scala.lang.psi.api.statements.{ScFunction, ScTypeAlias}
import org.jetbrains.plugins.scala.lang.psi.api.toplevel.templates.ScTemplateBody
import org.jetbrains.plugins.scala.lang.psi.api.toplevel.typedef._
import org.jetbrains.plugins.scala.lang.psi.api.toplevel.{ScPackaging, ScTypedDefinition}
import org.jetbrains.plugins.scala.lang.psi.impl.{ScPackageImpl, ScalaPsiManager}
import org.jetbrains.plugins.scala.lang.resolve.ResolveUtils.{isAccessible, kindMatches}
import org.jetbrains.plugins.scala.settings._
import org.jetbrains.plugins.scala.util.OrderingUtil.orderingByRelevantImports

/**
 * User: Alexander Podkhalyuzin
 * Date: 15.07.2009
 */
final class ScalaImportTypeFix private (override val elements: Seq[ElementToImport],
                                        ref: ScReference)
  extends ScalaImportElementFix(ref) {

  override def getText: String = elements match {
    case Seq(head) => ScalaBundle.message("import.with", head.qualifiedName)
    case _ => ElementToImport.messageByType(elements)(
      ScalaBundle.message("import.class"),
      ScalaBundle.message("import.package"),
      ScalaBundle.message("import.something")
    )
  }

  override def shouldShowHint(): Boolean = {
    val settings = ScalaApplicationSettings.getInstance()
    val psiElements = elements.view.map(_.element: PsiElement)

    val showForClasses = psiElements.exists(_.is[PsiClass, ScTypeAlias]) && settings.SHOW_IMPORT_POPUP_CLASSES
    val showForMethods = psiElements.exists(_.is[PsiMethod, ScTypedDefinition]) && settings.SHOW_IMPORT_POPUP_STATIC_METHODS

    super.shouldShowHint() && (showForClasses || showForMethods)
  }

  override def getFamilyName: String = ScalaBundle.message("import.class")

  override def isAvailable: Boolean =
    super.isAvailable && ref.qualifier.isEmpty && !isSugarCallReference

  private def isSugarCallReference: Boolean = ref.getContext match {
    case ScSugarCallExpr(_, `ref`, _) => true
    case _ => false
  }

  override def createAddImportAction(editor: Editor): ScalaAddImportAction[_, _] =
    ScalaAddImportAction(editor, ref, elements)

  override def isAddUnambiguous: Boolean =
    ScalaApplicationSettings.getInstance().ADD_UNAMBIGUOUS_IMPORTS_ON_THE_FLY
}

object ScalaImportTypeFix {

  def apply(reference: ScReference) = new ScalaImportTypeFix(
    getTypesToImport(reference),
    reference
  )

  @annotation.tailrec
  private[this] def notInner(clazz: PsiClass, ref: PsiElement): Boolean = clazz match {
    case o: ScObject if o.isSyntheticObject =>
      getCompanionModule(o) match {
        case Some(cl) => notInner(cl, ref)
        case _ => true
      }
    case t: ScTypeDefinition =>
      t.getParent match {
        case _: ScalaFile |
             _: ScPackaging => true
        case _: ScTemplateBody =>
          t.containingClass match {
            case obj: ScObject if isAccessible(obj, ref) => notInner(obj, ref)
            case _ => false
          }
        case _ => false
      }
    case _ => true
  }

  def getTypesToImport(ref: ScReference): Array[ElementToImport] = {
    if (!ref.isValid || ref.isInstanceOf[ScTypeProjection])
      return Array.empty

    implicit val project: Project = ref.getProject

    val kinds = ref.getKinds(incomplete = false)
    val manager = ScalaPsiManager.instance(project)

    def kindMatchesAndIsAccessible(named: PsiNamedElement) = named match {
      case member: PsiMember => kindMatches(member, kinds) && isAccessible(member, ref)
      case _ => false
    }

    val predicate: PsiClass => Boolean = ref.getParent match {
      case _: ScMethodCall => hasApplyMethod
      case _ => Function.const(true)
    }

    val referenceName = ref.refName

    val classes = for {
      clazz <- manager.getClassesByName(referenceName, ref.resolveScope)

      classOrCompanion <- clazz match {
        case clazz: ScTypeDefinition =>
          clazz.fakeCompanionModule match {
            case Some(companion) => companion :: clazz :: Nil
            case _ => clazz :: Nil
          }
        case _ => clazz :: Nil
      }

      if classOrCompanion != null &&
        classOrCompanion.qualifiedName != null &&
        isQualified(classOrCompanion.qualifiedName) &&
        kindMatchesAndIsAccessible(classOrCompanion) &&
        notInner(classOrCompanion, ref) &&
        !isInExcludedPackage(classOrCompanion, false) &&
        predicate(classOrCompanion)
    } yield ClassToImport(classOrCompanion)

    val aliases = for {
      alias <- manager.getTypeAliasesByName(referenceName, ref.resolveScope)
      global <- GlobalMember.findGlobalMembers(alias, ref.resolveScope)(GlobalTypeAlias)
      if kindMatchesAndIsAccessible(alias)
    } yield MemberToImport(alias, global.owner, global.pathToOwner)

    //it's possible to have same qualified name with different owners in case of val overriding
    val distinctAliases = aliases.distinctBy(_.qualifiedName)

    val packagesList = importsWithPrefix(referenceName).map { s =>
      s.reverse.dropWhile(_ != '.').tail.reverse
    }

    val packages = for {
      packageQualifier <- packagesList
      pack = ScPackageImpl.findPackage(packageQualifier)(manager)
      if pack != null && kindMatches(pack, kinds) && !isExcluded(pack.getQualifiedName)
    } yield PrefixPackageToImport(pack)

    (classes ++ distinctAliases ++ packages)
      .sortBy(_.qualifiedName)(orderingByRelevantImports(ref))
      .toArray
  }

  private def hasApplyMethod(`class`: PsiClass) = `class` match {
    case `object`: ScObject => `object`.allFunctionsByName(ScFunction.CommonNames.Apply).nonEmpty
    case _ => false
  }

  private def importsWithPrefix(prefix: String)
                               (implicit project: Project) =
    ScalaCodeStyleSettings.getInstance(project)
      .getImportsWithPrefix
      .filter {
        case exclude if exclude.startsWith(ScalaCodeStyleSettings.EXCLUDE_PREFIX) => false
        case include => include.split('.') match {
          case parts if parts.length < 2 => false
          case parts => parts(parts.length - 2) == prefix
        }
      }

  private def isExcluded(qualifiedName: String)
                        (implicit project: Project) =
    !isQualified(qualifiedName) ||
      JavaProjectCodeInsightSettings.getSettings(project).isExcluded(qualifiedName)

  private def isQualified(name: String) =
    name.indexOf('.') != -1
}
Dementia is defined by the World Health Organization as a syndrome, usually chronic and progressive, with different causes.[@B1] It is a complex condition that affects cognition, behavior, and the autonomy to perform activities of daily living.[@B2] Currently, 50 million people are living with dementia, and projections suggest that this number will triple by 2050, affecting 152 million people.[@B3] Alzheimer's disease (AD) is one of the most common causes of this syndrome.[@B2] Cognitive dysfunction, such as mild cognitive impairment (MCI), can be considered a prodromal manifestation of dementia and can be identified years before dementia onset.[@B4] The prevalence of MCI in older adults ranges from 15 to 20%, and this condition may be related to high levels of amyloid protein, a biomarker for neurodegeneration and increased risk for dementia.[@B5] ^,^ [@B6] Little is known about the actual prevalence of dementia.[@B2] However, it is known to be more common in women, with a prevalence of 5% in people aged over 65 and up to 32% in those aged 85 or older.[@B1] In addition, a relationship has been observed between dementia and increased risk for cardiovascular diseases, metabolic syndrome, and neuropsychiatric disorders.[@B7] ^,^ [@B8] Another intriguing fact about dementia syndromes is their underreporting rates, which are higher in low- and middle-income countries (93.2% in Asia, 62.9% in North America, 53.7% in Europe).[@B9] The delay in establishing a dementia diagnosis is usually about 29-37 weeks between symptom onset and definitive clinical diagnosis.[@B10] In this context, primary health care represents the first and closest contact between the elderly and the health system, and is fundamental for the development of strategies for early identification of diseases.[@B8] On the other hand, numerous factors have been suggested as causes for late diagnosis of dementia: normal cognitive changes expected in the aging process, patients' low educational
level, and lack of professional training for correct interpretation of neuropsychiatric symptoms.[@B8] ^,^ [@B10] ^,^ [@B11] Given the importance of early diagnosis of dementia and cognitive dysfunction (i.e. MCI), as well as the fact that primary health care settings are the entry point to the health system, the aim of this systematic review was to identify how low-, middle-, and high-income countries establish this diagnosis in primary health care.

METHODS
=======

This systematic review was conducted to determine the diagnostic strategies used in primary health care to diagnose dementia and cognitive dysfunction in low-, middle-, and high-income countries. Based on this research question, studies from the past five years were searched on SCOPUS, PubMed, EMBASE, LILACS, SciELO, and Web of Science. The search was conducted in October 2018, and the keywords used in this study were obtained from both DeCS (Descritores em Ciências da Saúde) and MeSH (Medical Subject Headings). Country-income classification was based on data from the World Bank website (<http://www.worldbank.org/>) and adapted to comprise three categories as proposed by the International Association for Media and Communication Research (<https://iamcr.org/income>). The descriptors were: "dementia", "cognitive dysfunction", "diagnosis", "primary health care", and "mass screening", and their correlates in Portuguese and Spanish. The Boolean operator "AND" was used to combine the descriptors, considering all the possibilities. The combinations, in English, were: "Diagnosis AND Dementia AND Primary Health Care"; "Diagnosis AND Cognitive Dysfunction AND Primary Health Care"; "Dementia AND Primary Health Care AND Mass Screening"; "Cognitive Dysfunction AND Primary Health Care AND Mass Screening". The same combinations were employed in both Portuguese and Spanish.
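The four English combinations above follow a simple pattern: each condition descriptor is paired either with the diagnosis context or with the screening context. A minimal sketch (Python used purely for illustration; the variable names are this sketch's own) that regenerates them:

```python
# Regenerate the four English search strings described above by combining
# the review's condition descriptors with the two fixed descriptor contexts
# using the Boolean operator "AND".
conditions = ["Dementia", "Cognitive Dysfunction"]

queries = (
    # "Diagnosis AND <condition> AND Primary Health Care"
    [f"Diagnosis AND {c} AND Primary Health Care" for c in conditions]
    # "<condition> AND Primary Health Care AND Mass Screening"
    + [f"{c} AND Primary Health Care AND Mass Screening" for c in conditions]
)

for q in queries:
    print(q)
```

The same pattern, with translated descriptors substituted for the English ones, yields the Portuguese and Spanish combinations.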
To make the search more precise, the following filters were applied: papers written in English, Spanish, or Portuguese; publication date from 2014 up to the time of the search (October 2018). The five-year limit was established due to recent advances and discoveries in the field of dementia screening and diagnosis. On SCOPUS and EMBASE, the required document type was article, and the search was conducted by article, title, and keyword. On PubMed and SciELO, the search was conducted in all fields. On LILACS, the search was by words. Finally, on Web of Science, articles were searched by topic. After the search, a database was created by each of two researchers, in order to minimize errors and bias. Once both databases were complete, another researcher compared them to ensure they were identical. The selection process was based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. PRISMA was chosen to allow careful planning and organization of the data, ensuring a review with rigor and quality.[@B12] In addition, an adapted version of an instrument proposed by URSI (2005) was used for data extraction and analysis. From the findings obtained with this instrument, results were organized in a table to facilitate descriptive data synthesis. For this review, the inclusion criteria were: studies from the previous five years; published in English, Portuguese, or Spanish; conducted in primary health care services; whose participants were aged 60 or older; available (possible to access); and whose topic addressed the diagnosis/screening of either dementia or cognitive dysfunction. The exclusion criteria were: duplicated articles; drug trials, literature reviews, letters to the editor, editorials, recommendations, monographs, dissertations, and theses; as well as articles whose topic did not involve the diagnosis of dementia or cognitive dysfunction.
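The selection flow that results from these criteria reduces to simple subtraction. As a minimal sanity check of the PRISMA counts reported in this review (a Python sketch, used purely for illustration):

```python
# PRISMA selection flow, using the counts reported in this review.
retrieved = 1987      # records retrieved across the six databases
duplicates = 707      # inter- and intra-database duplicates removed

title_abstract = retrieved - duplicates   # remained for title/abstract reading
full_text = title_abstract - 1123         # selected for full-text reading
included = full_text - 124                # met all inclusion criteria

print(title_abstract, full_text, included)  # 1280 157 33
```

The three derived counts match the 1280 screened, 157 fully read, and 33 included studies reported in the Results.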
Because this study was based on published articles, submission to the Research Ethics Committee was not required, according to the Brazilian National Health Council's resolution (nº 510/2016).[@B13]

RESULTS
=======

The database search retrieved a total of 1987 articles. As mentioned above, PRISMA was the tool used for the selection process. Of the initial total, 707 papers were excluded because they were duplicated (inter- or intra-database). After this exclusion, 1280 remained for title and abstract reading. In this phase, a further 1123 papers were excluded, and 157 articles were selected for full reading. Of this total, 124 documents did not meet the inclusion criteria, and therefore 33 studies were included in this systematic review. Results from PRISMA can be seen in [Figure 1](#f1){ref-type="fig"}.

Figure 1. Summary of the paper selection process, PRISMA, São Carlos, São Paulo, Brazil, 2019.

This study's initial question was "what are the diagnostic strategies used to diagnose dementia and cognitive dysfunction in primary health care in low-, middle-, and high-income countries?". Results showed that more than 90% (n = 30) of the articles were from high-income countries, while 3 papers were from middle-income countries. Unfortunately, no articles from low-income countries were found. Regarding participants' demographic characteristics, most of the articles (n = 21) had a predominance of female participants. Age was also analyzed. In general, participant age ranged from 70 to 80 years. Studies conducted in middle-income countries considered older adults to be participants aged 60 or older. From the pool of selected studies, 75.8% had between 101 and 1,000 participants; 18.2% had between 1,001 and 10,000; and 6.0% had more than 10,000 participants. It was noted that studies often failed to describe participants' ethnicity.
Of the studies that provided this information, Hispanic, African American, Chinese, and White ethnicities were reported. Because the topic of interest in this study was dementia/cognitive dysfunction diagnosis in primary health care, the type of diagnosis was a variable of interest. After the analysis, three diagnosis categories were established: dementia only (n = 10), MCI only (n = 8), and dementia and MCI (n = 15). Regarding the diagnostic criteria, all of the papers (n = 33) reported clinical diagnosis, conducted either by a general practitioner or a multidisciplinary group, and 13 articles used the DSM-IV as the reference criteria. Three studies had different criteria sources for dementia and MCI: in these studies, dementia diagnosis was based on the DSM-IV, whereas MCI was based on expert recommendations (e.g. Petersen et al. and Winblad et al.). As for biomarkers, three studies used blood measurements and one study used neuroimaging. Of the total, 9 articles mentioned only neuropsychological testing as a criterion for screening or diagnosing dementia and MCI; interestingly, all the studies conducted in middle-income countries had this characteristic. This review also investigated the instruments used for assessing patients' neuropsychological status and other aspects (e.g. functioning, quality of life, and comorbidities). [Graph 1](#f3){ref-type="fig"} shows a schematic representation of the instruments most used in the studies. Cognitive instruments were cited in 31 of the 33 articles; however, only 14 papers mentioned other (non-cognitive) types of evaluation. Most of these evaluations reported measurements of quality of life, activities of daily living, and health status. Regarding cognitive assessment, 25 studies used the MMSE as one of the instruments for measuring cognition, and 23 used the MMSE together with another type of cognitive measure. The MMSE was the most used instrument.
In addition, 5 papers used the MoCA and NPI; 4 papers used the AD8; 3 papers used verbal fluency, digit span, CERAD, digit symbol, Test Your Memory, and CAMCOG tests; and 2 papers used the CDR, DemTect, Stroop color-word test, Mini-Cog, and the Clock Drawing Test. Quality of life was assessed by the EuroQol in 3 studies and by the QoL-AD in one study. Depressive symptoms were evaluated by the GDS in 7 studies.

Graph 1. Measurement instruments used in the studies, São Carlos, São Paulo, Brazil, 2019.

The number of diagnosed older adults was also an outcome of interest. Only one study did not provide this information. In total, ten studies investigated the diagnosis of dementia. One did not provide information about the number of diagnosed participants. In three studies, all participants were diagnosed as having dementia. In the other six articles, the dementia diagnosis rate ranged from 3.2% to 55%. Furthermore, MCI diagnosis ranged from 15.2% to 55.8% among the studies that investigated this condition only (n = 8). In studies that investigated both dementia and MCI, the number diagnosed with MCI was higher than the number diagnosed with dementia. [Appendix 1](#app01){ref-type="app"} shows the information obtained from the analysis of the articles selected for this systematic review. Some articles also evaluated the number of patients who were not screened positive for or diagnosed with dementia/MCI in primary health care. One study suggested that the elderly were considerably underdiagnosed in primary health care. Similarly, another article stated that the rate of underdiagnosed older adults was around 60%. The qualitative analysis revealed that high-income countries usually use a manual (e.g. DSM), in addition to cognitive and functional instruments, as well as general practitioners' evaluation, to establish a diagnosis of dementia in primary health care, with further referral to specialized care.
On the other hand, middle-income countries seemed to use only neuropsychological instruments (e.g. MMSE). [Figure 2](#f2){ref-type="fig"} shows a scheme of the diagnostic criteria used in high-income countries, which should be helpful for general practitioners when evaluating or screening older adults for MCI or dementia in primary health care.

Figure 2. Practice for the diagnosis of dementia and cognitive impairment in high-income countries' primary health care.

DISCUSSION
==========

In this systematic review, studies about the diagnosis of dementia and MCI in primary health care were mostly from high-income countries. In addition, no studies from low-income countries were found. Although dementia is recognized as a global public health issue, poor countries face more difficulties diagnosing and treating this syndrome.[@B14] This could be explained by the fact that in low-income countries health facilities are more often located in big cities, whereas few professionals practice in the countryside and rural areas.[@B15] Also, lack of economic and medical resources, poor training, and lack of expertise in mental health are the main factors contributing to poor care for the elderly, especially those with dementia.[@B14] ^,^ [@B16] Another possible explanation for the absence of studies in low-income countries may be related to limited access to health services, as well as the limited creation and implementation of public health policies that contribute toward both patient diagnosis and treatment.[@B14] ^,^ [@B15] ^,^ [@B17] Regarding demographic information, the mean age observed in this review (70-80 years) follows the pattern in the literature, which shows that the prevalence of dementia is higher among the oldest old.[@B18] Research has suggested age as an important risk factor for the development of dementia because, in most cases, it affects individuals aged 65 or older.[@B19] ^,^ [@B20] It was also observed that high-income countries define older
adults as those who are 65 years old or over. This is mainly defined by the increase in life expectancy, as well as the elderly's better socioeconomic and health conditions.[@B18] Because biological age is not always enough to define old age, the World Health Organization has established the age of 60 years old or over for low- and middle-income countries and 65 or over for high-income countries.[@B15] ^,^ [@B21] In this review, studies reported greater MCI than dementia diagnosis. Although much progress needs to be made in order to solve underdiagnosis problems, research has suggested that MCI is indeed more prevalent than dementia in older adults.[@B18] ^,^ [@B22] ^,^ [@B23] Regarding diagnostic criteria, most of the studies used DSM-IV as a guideline. It is important to mention that there is a new edition, DSM-V, but the studies reviewed probably used the previous version because the fourth edition was the only version available at the time the studies were conducted. Also, this manual was shown to be used in high-income countries. Middle-income countries used cognitive evaluation instruments. According to Parra et al.,[@B15] middle- and low-income countries have shown a tendency to accept international recommendations for dementia; however, the authors suggested that lack of financial support, resources, trained professionals, and the inexistence of primary health care programs make it difficult to follow these standards. As the strategy for screening older adults for cognitive decline, most of the articles in this review cited GP evaluation. Only a few studies mentioned a multi-professional group. 
However, different professionals can contribute toward identification of possible cases of MCI and dementia.[@B24] ^,^ [@B25] Middle-income countries, such as China, have been investing in the use of screening instruments for trained nurses, who are intended to be part of a multi-professional dementia identification network.[@B14] ^,^ [@B25] It is also noteworthy that a multi-professional approach with the elderly is recommended because this is desirable to achieve effective and comprehensive health care.[@B26] In this context, professionals such as gerontologists, nurses, physical therapists, geriatricians, neurologists, occupational therapists, and psychologists are key elements for dementia screening, diagnosis, and management. Another interesting aspect observed in this study was the different methods for dementia and MCI identification and confirmation. High-income countries had a uniform standard for diagnosis in primary health care. Our results suggest that these countries, in addition to a manual recommendation (e.g. DSM), also employ complementary tests, such as neuroimaging and blood tests. Research has shown that blood tests, neuropsychological evaluation, and patient health history,[@B27] as well as neuroimaging,[@B28] ^,^ [@B29] are relevant for early identification and differential diagnosis. On the other hand, in this review, studies from middle-income countries only cited the use of neuropsychological evaluation. According to Ferri et al.,[@B14] this might be explained by the lack of structure and financial resources for primary health care settings in low- and middle-income countries. Of the neuropsychological tests mentioned in the articles analyzed, MMSE was the most used. 
It is also the most commonly used test in screening strategies around the world due to its wide acceptance by the scientific and clinical community, as well as its practicality and breadth of evaluation.[@B30] In addition, the MMSE's advantages include fast administration and availability in various languages.[@B31] As mentioned previously, MCI diagnosis was more common than dementia diagnosis. Although the number of diagnosed patients is substantially larger than the prevalence suggested in the literature, it is relevant to observe that some of the studies suggested the existence of undiagnosed older adults in primary health care. For instance, Zaganas et al.[@B32] stated in their study that 60% of the older adults remained without a dementia/MCI diagnosis in primary health care until further in-depth neuropsychiatric evaluation. Similarly, Parmar et al.[@B33] evaluated medical records from the Canadian primary health care system and found no cases of MCI diagnosis. The authors also mentioned that 41% of dementia cases were not identified in primary health care.[@B33] Finally, Thyrian et al. concluded in their study that the elderly in primary health care are frequently underdiagnosed for dementia and MCI. Thus, there is still much to be done in order to minimize the number of undiagnosed people in primary health care. One limitation of this study was the fact that the study design did not include the number of diagnoses missed in primary health care, in other words, the number of underdiagnosed patients. In conclusion, this systematic review aimed to describe how low-, middle-, and high-income countries establish diagnoses for dementia and cognitive dysfunction in primary health care. Most of the articles included in this study were from high-income countries, and no articles were published in low-income countries.
In high-income countries, diagnosis or screening for dementia and cognitive dysfunction is usually conducted by general practitioners, who use well-established diagnostic criteria and instruments for assessment (cognitive and functional). In addition, some GPs used complementary evaluations, such as blood tests and neuroimaging. On the other hand, studies published in middle-income countries described only the cognitive assessment process. The diagnosis rate was 3.2-55% for dementia and 15.2-55.8% for MCI. Studies focusing on low- and middle-income countries should be conducted. It is important to mention that, considering the demographic profile of these countries, the population is aging and dementia cases may increase considerably. Public policies and investment should be made to prepare primary health care professionals for screening and diagnosing dementia. This would improve both the health system and the flow of patients between the different levels of health care.

###### Main characteristics of the studies selected for analysis, São Carlos, São Paulo, Brazil, 2019.
| First author, year, place | Participants (n; mean age; gender) | Diagnosis type | Diagnostic criteria | Positive screened/diagnosed | Main findings |
| --- | --- | --- | --- | --- | --- |
| Garcia-Ptacek[@B29], 2017, Sweden | 3,891; 81.1 (±6.6); 63.9% Female | Dementia | GP's evaluation; ICD-10; neuroimaging; blood testing | 100% | CDT and neuroimaging are used in most GP dementia diagnoses in primary health care |
| Grober[@B34], 2016, USA | 257; 75.8; 69.7% Female | Dementia | DSM-IV; interview with family members or friends | 25.7% | Screening based on informants to reduce false-positive rates |
| Noda[@B28], 2018, Japan | 623; 86.9; 54.2% Female | Dementia | GP's evaluation; DSM-IV | 27.4% | DSM score \> I or \> II reduces errors in dementia identification in primary health care |
| Tierney[@B35], 2014, Canada | 263; 77.6 (±6.9); 58.55% Male | MCI | GP's evaluation; MMSE \< 26 | 28.5% | MMSE would improve GPs' capacity to detect MCI in primary health care |
| Wilcock[@B27], 2016, England | 136; 79.5; 64% Female | Dementia; MCI | Blood testing, cognitive evaluation | 100% | An update of diagnosis records for comprehensive care is needed |
| Chan[@B36], 2016, Singapore | 309; 71.7 (±8.2); 50.2% Female | Dementia; MCI | DSM-IV | 21.3% | Combinations of AD8 and NINDS provided a sensitivity of 73.3% and specificity of 96.9% for dementia and MCI diagnosis, respectively |
| Eichler[@B37], 2014, Germany | 243; 79.61 (±5.44); 61% Female | Dementia; MCI | DemTect \< 9; medical records | Dementia: 40%; MCI: 58% | Diagnosis rates for dementia in Germany are consistent with international literature |
| Eichler[@B38], 2015, Germany | 243; \>70; 60.9% Female | Dementia; MCI | MMSE \< 23; DemTect \< 9 | Dementia: 49% | The diagnosis rate of dementia increased 40% |
| Arabi[@B39], 2016, Malaysia | 200; 68.5 (±6.28); 52% Female | Dementia; MCI | EDQ \< 5; MMSE \< 21 | EDQ: 40%; MMSE: 20% | Validated questionnaire |
| Shaik[@B40], 2015, Singapore | 309; 71.8 (±8.2); 54.8% Female | MCI | At least one impaired cognitive domain on objective cognitive evaluation | 54.8% | Risk factors identified were: age, female gender, hypertension, diabetes, hyperlipidemia, and smoking |
| Booker[@B41], 2016, Germany | 11,956; 80.4; 61% Female | Dementia | Medical database analysis | 100% | The risk factors identified were: diabetes, hypertension, obesity, hyperlipidemia, vascular diseases |
| Rosenbloom[@B42], 2018, USA | 87; 77.2 (±6.2); 59.8% Female | Dementia; MCI | Mini-Cog \< 4/5 | 27.3% among screened positive on Mini-Cog | Twice the percentage previously identified with cognitive impairment |
| Lee[@B43], 2017, Singapore | 140; 72.15 (±8.42); 68% Male | MCI | MMSE; MoCA | 23.5% | Just a small fraction of those considered at high risk of developing dementia made use of health services |
| Corcoles[@B10], 2017, Spain | 104; 77.8 (±6.74); 68.3% Female | MCI | MMSE | 55.8% | 91.4% of cases with alteration on the MMSE had no history of cognitive impairment |
| Holsinger[@B44], 2015, USA | 186; 74.5 (±6.5); 96.2% Male | Dementia; CIND | Medical evaluation | Dementia: 12%; CIND: 31% | 20% returned to normal cognition, 67% remained impaired, and 12% developed dementia |
| de Oliveira[@B45], 2016, Brazil | 102; 76.81 (±7.03); 83% Female | Dementia | DSM-IV; medical records; MMSE; CASI-S | 46% | Validation of CASI-S with 93% sensitivity and 81% specificity |
| Zaganas[@B32], 2019, Greece | 3,140; 73.7 (±7.8); 56.8% Female | Dementia; MCI | DSM-IV | Dementia: 10.8%; MCI: 32.4% | Dementia prevalence was 4%; in primary care, 60% remain undiagnosed until detailed neuropsychiatric evaluation |
| Pujades-Rodrigues[@B46], 2018, UK | 47,386 | Dementia | Medical records | 55% | 47,386 with dementia, 12,633 Alzheimer's disease, 9,540 vascular disease, and 1,539 with other less common causes |
| Malmstron[@B47], 2015, USA | 533; 65-92; 100% Male | Dementia; MCI | DSM-IV | Dementia: 12%; MCI: 26% | RCS sensitivity 89% and specificity 87% for detecting dementia, compared to 94% and 70% for MCI |
| Stein[@B48], 2015, Germany | 3,327; 81.14; 65.3% Female | Dementia | GP's and multidisciplinary group's evaluation; DSM-IV; SIDAM | Follow-up I: 3.2%; Follow-up II: 4.62% | MMSE was more accurate than MMSE for diagnosis |
| Yang[@B25], 2015, China | 249; 67.6; 61.8% Female | MCI | MMSE | Impaired cognition: 12.9%; MCI: 41% | Simple instruments, such as MMSE and MoCA, used for screening the elderly in primary health care |
| Shaik[@B49], 2016, Singapore | 168; 80.7; 56% Female | Dementia; MCI | Nurses' screening; AD8; specialist's evaluation | Screened positive: 13.7% | 98.8% of nurses considered AD8 easy to use; 78.3% of GPs considered AD8 useful |
| Thyrian[@B50], 2016, Germany | 516; 80; 59.5% Female | Dementia; MCI | GP's evaluation; ICD-10 | MCI: 90.8%; Dementia: 99.8% | Older adults from primary health care are considerably underdiagnosed |
| Koekkoek[@B51], 2015, Netherlands | 513; \>70 | MCI | GP's evaluation; DSM-IV (dementia); Winblad et al. (MCI) | 15.2% | This study protocol describes all the procedures for the Cog-ID study |
| Chan[@B52], 2016, Singapore | 309; 71.7 (±8.2); 60.5% Female | Dementia | DSM-IV; CDR | 36.5% | For participant age, AD8 was better than MMSE and as good as MoCA |
| Koekkoek[@B53], 2016, Netherlands | 228; 76.8; 60% Male | MCI | DSM-IV (dementia); Winblad et al. (MCI) | 19.3% | TYM's negative predictive value (NPV) was 81% and SAGE's was 85%. GP's evaluations had a similar NPV; however, the positive predictive value was higher |
| Dungen[@B54], 2015, Netherlands | 647; 79.8 (±7.1); 39.6% Male | Dementia; MCI | DSM-IV (dementia); Petersen et al. (MCI) | Dementia: 14%; MCI: 31.5% | The authors did not find statistical relevance in the number of diagnoses between the groups before or after intervention |
| Groeneveld[@B55], 2018, Netherlands | 120; 77.0 (±4.5); 60% Male | Dementia; MCI | DSM-IV (dementia); MCI: not dementia but not normal cognition; cognitive complaints; objective impairment in one or more cognitive domains; no functional impairment | Dementia: 2.5%; MCI: 30% | The authors suggested that patients with type 2 diabetes should be screened for MCI and dementia |
| Campbell[@B56], 2018, USA | 350; 71.2 (±5.1); 79.1% Female | Dementia; MCI | Multidisciplinary group evaluation | Dementia: 2%; MCI: 94.8% | The use of anticholinergic drugs increased the likelihood of conversion from normal to MCI; reversion from MCI to normal cognition was not observed |
| Jessen[@B57], 2014, Germany | 2,892; 79.7 (±3.58); 64.8% Female | Dementia; MCI; SMI | CERAD's verbal memory task (SMI, eMCI, and lMCI); DSM-IV, SIDAM (dementia) | SMI: 36.6%; eMCI: 8.6%; lMCI: 12.3%; DA: 7.4% | The highest risk of developing dementia was in the late MCI group. In the SMI and early MCI groups, those who had concerns about their memory impairment had a similar risk of developing dementia |
| Wray[@B58], 2014, USA | 5,333; 80.7; 97% Male | Dementia | Medical records | Not mentioned | BOMC+ patients were 5.12 times more likely to receive a dementia diagnosis, compared to the BOMC- group |
| Alonso[@B59], 2016, Spain | 4,360; \>65 | MCI | Mini-Cog screening test, MMSE, and Alzheimer's Questionnaire | 18.5% | Cognitive impairment is a common reason for appointments in primary health care |
| Brodaty[@B60], 2016, Australia | 1,717; 81.05 (±4.12) | Dementia | Medical records, MMSE | 7.3% | GPCOG's sensitivity was 79% and specificity 92% |

MCI: Mild Cognitive Impairment; CIND: Cognitive Impairment, Not Dementia; SMI: Subjective Memory Impairment; eMCI: Early Mild Cognitive Impairment; lMCI: Late Mild Cognitive Impairment; GP: General Practitioner; ICD: International Statistical Classification of Diseases and Related Health Problems; DSM: Diagnostic and Statistical Manual of Mental Disorders; MMSE: Mini-Mental State Examination; MoCA: Montreal Cognitive Assessment; CASI-S: Cognitive Abilities Screening Instrument-Short Form; EDQ: Early Dementia Questionnaire; SIDAM: Structured Interview for the Diagnosis of Dementia of the Alzheimer Type; CDR: Clinical Dementia Rating; GPCOG: General Practitioner Assessment of Cognition.

[^1]: **Disclosure:** The authors report no conflicts of interest.

[^2]: **Authors' contributions.** Lucas N.C. Pelegrini: design, selection of studies, analysis of data, intellectual contribution to the writing of the manuscript. Gabriela M.P. Mota: design, selection of studies, intellectual contribution to the writing of the manuscript. Caio F. Ramos: design, selection of studies, intellectual contribution to the writing of the manuscript. Edson Jesus: design, selection of studies, intellectual contribution to the writing of the manuscript. Francisco A.C. Vale: design, selection of studies, analysis of data, intellectual contribution to the writing of the manuscript.
Andronis Luxury Suites is a 5-star hotel offering snorkeling, diving, and horse riding for active rest, and a spa lounge, a treatment room, and a Jacuzzi for relaxation. Featuring luxurious architecture, the hotel has been welcoming guests since 2007. The property is a 10-minute walk from the center of Oia. Oia sights such as the Santorini Volcano are all within reach. Nearby points of interest include a gallery, a castle, and a museum. Andronis Luxury Suites offers non-smoking rooms with a wonderful view of the mountain, featuring free Wi-Fi, a minibar, climate control, an individual safe, and a trouser press. Guests will enjoy views of the Aegean Sea from their rooms. Bathrooms feature a shower, a spa bathtub, and terry bathrobes. Local meals are served at the restaurant, and guests can relax in the poolside bar serving refreshing drinks. Serving both food and drinks, Thalami and Lotza are situated around 50 meters from the property.
A cellular transtelephonic defibrillator for management of cardiac arrest outside the hospital. A cellular transtelephonic defibrillator facilitates early defibrillation in remote areas and involves electrocardiographic diagnosis and defibrillation control by a physician remote from but in voice contact with the patient-unit operator. The patient unit contains a microprocessor, microphone, defibrillator, electrocardiogram/defibrillator electrode pads and cellular telephone. Activation of the patient-unit initiates automatic dialing and contact with the remotely sited base station within 35 to 50 seconds. The physician at the base station identifies the rhythm and controls defibrillator charging and discharge. The minimal interaction required between the system and the local operator makes it suitable for use by minimally trained first responders. The cellular transtelephonic defibrillator has been tested in 211 calls responded to by a physician-manned mobile coronary care unit over distances up to 15 miles in an urban area. Satisfactory electrocardiographic transmission and voice communication were established in 172 of 211 calls (81.5%). In 39 (18.5%), connection with the base station either could not be established or maintained mainly because of geographic location or battery failure. One hundred direct current shocks of 50 to 360 J were effectively administered to 22 patients with 48 episodes of ventricular fibrillation or ventricular tachycardia with successful correction of 46 of 48 episodes using 1 to 4 shocks per episode. Widespread distribution of such devices could improve survival in patients with cardiac arrest outside the hospital.
Ascaris suum: development of intestinal immunity to infective second-stage larvae in swine. The development of protective immunity to Ascaris suum was examined in pigs naturally exposed to eggs on a contaminated dirt lot. Pigs became almost totally immune to second-stage larvae migrating from the intestines because few larvae from a challenge inoculum could be found in the lungs, and liver white-spot lesions (an immunopathologic response to migrating larvae) were absent. Blood from these pigs contained lymphocytes that responded blastogenically to larval antigens in vitro, while the serum contained antibody to larval antigens. Immunity was related to parasite exposure and not to the age of the host, and was not affected by the removal of adult A. suum from the intestines. Naturally exposed pigs responded to a variety of A. suum antigens with an immediate-type skin reactivity, and their intestinal mucosa contained relatively large numbers of mast cells and eosinophils. Other pigs were maintained on a dirt lot not contaminated with A. suum eggs and the effects of common environmental conditions on development of resistance to A. suum were studied. Resistance also developed in these pigs because 72% fewer larvae were detected in their lungs following a challenge exposure than in control pigs confined indoors on concrete floors and challenged similarly. This response was not expressed at the intestinal level, however, because their livers had numerous, intense white-spot lesions. To verify that the intestinal immunity that developed in pigs after natural exposure to A. suum was a direct result of homologous infection and not related to other stimuli encountered on a dirt lot, pigs maintained indoors on concrete floors, free from inadvertent helminthic infection, were inoculated orally with A. suum eggs daily for 16 weeks. 
Intestinal immunity was induced because larvae from a challenge inoculum were not detected in the lungs, and few white-spot lesions appeared on the livers of these pigs. Apparently, continual exposure of the intestinal mucosa to larvae eventually elicits the appropriate effector components necessary to prevent larval migration from the intestines.
Should the state help in providing childcare? "If you can't pay for childcare yourself, then wait until you can. Why should the country pay? No one paid for me." This is the view of many Australians, yet we never hear similar dismissals of state-funded schooling. This attitude to childcare is driven by a false distinction between childcare and education. It is true that childcare policies are in part about looking after children to enable parents who want to work to do so. But should this be the pre-eminent goal of policy, as the recent Productivity Commission report implies? In the UK, perspectives have changed radically since 2004, driven by evidence on the value of early education. A Labour government introduced 15 hours of free nursery provision for all children from their third birthday. Fifteen hours was found to be most effective for improving child development for the general population, although children from disadvantaged families can benefit from more hours. This was extended to two-year-olds from the 40% most deprived families by a Conservative-led government, because politicians of all hues have been convinced by the growing evidence of the impact of nursery care on a child's life chances. Primary school teachers have long been aware of this; as one teacher I spoke to put it: "Some children arrive at reception class having spent far too much time confined to home, glued to the television, and nurseries help prepare children for school." The effects are present for all children, according to longitudinal research on 4,000 children in the UK. Access to quality pre-school from the age of two can boost educational and social development in ways that are apparent from the start right through to the end of schooling. The boost provided is greatest for the disadvantaged, but significant for middle-class children as well.
Similar evidence comes from Denmark, France, Germany, Norway and Switzerland. The painful reality is that, by the start of school, it might already be too late. As the Nobel prize-winning economist James Heckman says: "Like it or not, the most important mental and behavioral patterns, once established, are difficult to change once children enter school." The evidence is clear. This does not mean "formal" education should begin earlier; rather, there should be state funding for play-based nursery settings from the age of two, followed by a gentle transition into school. In the UK and across northern Europe, this is an issue on which politicians of left, centre and right agree. In the UK, the Conservative government has announced that the current 15 hours of free nursery provision is to be increased to 30 hours, largely to further boost women's employment, and the policy is supported by Labour. The Liberal Democrats want to go further and provide universal care for two-year-olds and, for the first time, to cover toddlers – from nine months to two years – if their parents are working. This last part has a different motivation. It is not driven by early years education, though it might have some benefit, particularly for children from disadvantaged families. Instead, it is about helping parents to work, because the costs of childcare can be too much to bear for many families. There are good reasons that parents go back to work when they have children, even if it isn't economically sensible for them to do so at first. Some think of the longer term, and are actually losing money by working, so that options do not shut down in the future. Others think of their working lives, and believe it will be best for them and their children. This, of course, does not automatically justify state funding, but a powerful economic case can be made.
Using taxpayer money to help parents into work is not necessarily a drain on the treasury; rather it can lead to more revenue. A UK think tank has published a report suggesting that a 5% increase in maternal employment in the UK could be worth £750m (AU$1.6bn) annually in increased tax revenue and reduced benefit spending. A stable high-quality early childhood education and care system should be regarded as part of the infrastructure for a country's long-term economic and social development, to be developed in much the same way as the education system or roads. This view is increasingly holding sway among OECD countries. The OECD report Starting Strong III begins: A growing body of research recognises that early childhood education and care (ECEC) brings a wide range of benefits, for example, better child wellbeing and learning outcomes as a foundation for lifelong learning; more equitable child outcomes and reduction of poverty; increased intergenerational social mobility; more female labour market participation; increased fertility rates; and better social and economic development for the society at large. Those countries that are planning for long-term economic growth are investing in early childhood education and care, because the jobs of the future will be for those with the most skills, and the foundations are laid early in life. China is an example of a country that has greatly increased its investment in this area. An additional government incentive may be to increase female participation in the workforce to raise the productivity of the nation. For example, an EU report advises member states to remove disincentives to female labour force participation. It recommends the provision of childcare to at least 90% of children between three years old and mandatory school age and at least 33% of children under three years of age.
Australia has choices, and there are plenty of options. Professor Edward Melhuish is presenting his research at the Early Start Conference, University of Wollongong, 28-30 September 2015.
Detection of Bordetella pertussis using a PCR test in infants younger than one year old hospitalized with whooping cough in five Peruvian hospitals. To report the incidence, epidemiology, and clinical features of Bordetella pertussis in Peruvian infants under 1 year old. A prospective cross-sectional study was conducted in five hospitals in Peru from January 2010 to July 2012. A total of 392 infants under 1 year old were admitted with a clinical diagnosis of whooping cough and tested for B. pertussis by PCR. The pertussis toxin and IS481 genes were detected in 39.54% (155/392) of the cases. Infants aged less than 3 months were the most affected, with a prevalence of 73.55% (114/155). The most common household contact was the mother, identified in 20% (31/155) of cases. Paroxysm of coughing (89.03%, 138/155), cyanosis (68.39%, 106/155), respiratory distress (67.09%, 104/155), and breastfeeding difficulties (39.35%, 61/155) were the most frequent symptoms reported. An increase in pertussis cases has been reported in recent years in Peru, despite national immunization efforts. Surveillance with PCR for B. pertussis is essential, especially in infants less than 1 year old, in whom a higher rate of disease-related complications and higher mortality have been reported.
George Monbiot had an article in the Guardian on Monday about bastardised libertarianism and its inability to understand the real freedoms being fought for by environmentalists and social justice advocates. However, Monbiot's treatment of environmentalism's threat to libertarianism was a bit sloppy. He got sucked into the negative freedom and positive freedom debate, and although he ultimately worked his way to the correct conclusion, I felt the clarity was lacking. So I want to explain more clearly just how sharp a thorn environmentalists are in the side of libertarian ideology. First, consider what libertarians of the sort Monbiot criticizes are really about philosophically: they favor a procedural justice account of the world based heavily on property rights. This is the newest face of libertarianism. Gone is the appeal to utility and desert. The modern libertarians try to prop up their political ideas almost solely through a rigid formalism of property rights. I have written before about the problem with procedural accounts of property rights, but here I want to just accept the libertarian property rights premise. Somehow individuals can grab up pieces of the world and exclude those pieces from everyone else forever. Once those individuals become owners of their respective property, nobody else can touch that property or do anything whatsoever to that property without their consent. Coming onto my property without my consent is a form of trespass under this picture. Doing anything to my property — whether it be painting it, dumping stuff on it, or causing some other harm to it — is totally off limits. So environmentalists point out that carbon emissions are warming the planet, one consequence of which is that harm will be done to the property of others. Most environmentalists — being the leftists that they generally are — do not make too much of the property rights issues, but one certainly could.
Coal plants release particulates into the air which land on other people's property. But no permission is ever granted for that. Coal plants do not contract with every nearby property owner to allow them to deposit small amounts of particulate matter on their neighbors' land. They are guilty of a form of property trespass. Beyond that, all sorts of industrial processes have environmental externalities that put things into the air or the water that ultimately make their way into the bodies of others. This is a rights-infringing activity under the procedure-focused libertarian account. The act of some industry is causing pieces of matter to land on me and enter into my body. But I never contracted with them to allow them to do so. The air and the atmosphere are an especially problematic issue for libertarians. Who owns those things? Libertarians might try to argue that you own the air above your land, but air — or the matter that it is made up of — does not stay above your land; it moves around the world. Any matter released into the air is sure to find its way to someone else's property, causing a violation. The atmosphere might seem like something nobody owns and therefore something anybody can dump things into. But with climate change, we know that greenhouse gas emissions are causing the world to warm, the consequences of which will include damage to the property of others all over the world. Yet again though, greenhouse gas emitters have not contracted with every single property owner in the world, making their emissions a violation of a very strict libertarian property rights ideology. The short of it is that environmentalists totally smash open the idea that property rights theories can really account for who is permitted to do what with the land that they own. Almost all uses of land will entail some infringement on some other piece of land that is owned by someone else. So how can that ever be permitted?
No story about freedom and property rights can ever justify the pollution of the air or the burning of fuels because those things affect the freedom and property rights of others. Those actions ultimately cause damage to surrounding property and people without getting any consent from those affected. They are the ethical equivalent — for honest libertarians — of punching someone in the face or breaking someone else’s window. That is why environmentalism is such a huge problem for libertarians, and it is no doubt why so many of them are skeptical of the effects of climate change or other environmental issues. Admitting that someone’s use of their own property almost certainly entails an infringement on someone else’s property makes the whole libertarian position basically impossible to act out in the real world. A landowner could never get individual contracts with literally every single person that might ever be affected by the owner’s land-use (e.g. operating a coal-burning power plant). But a libertarian that was honest about environmental externalities would require such a landowner to undertake precisely that impossible task.
Requests for health records are completed under our Release of Health Records process. This process is typically used for processing requests for patient medical records, either from the patient themselves or from a third party such as hospitals, ICBC, and law firms. If you are a third party requesting a health record on behalf of a patient, we require the patient’s written consent prior to release. DATA REQUEST PROCESS 1. Download the Release of Health Records form here or request a form in person from one of our clinics.
Sunday, June 28, 2009 Blogger's Note: Isn't this exactly what Martha Stewart went to prison for? Thursday 25 June 2009, by Stephen Koff and Sabrina Eaton @ The Cleveland Plain Dealer The day before the House passed the financial rescue package, Rep. Ginny Brown-Waite of Florida grabbed up Citigroup stock. Washington - As financial markets tumbled and the government worked to stave off panic by pumping billions of dollars into banks last fall, several members of Congress who oversee the banking industry were grabbing up or dumping bank stocks. Anticipating bargains or profits, or just trying to unload before the bottom fell out, these members of the House Financial Services Committee, or brokers on their behalf, were buying and selling stocks including Bank of America and Citigroup, some of the very corporations their committee would later rap for greed, a Plain Dealer examination of congressional stock market transactions shows. Financial disclosure records show that some of these Financial Services Committee members, including Ohio Rep. Charlie Wilson, made bank stock trades on the same day the banks were getting a government bailout from a program Congress approved. The transactions may not have been illegal or against congressional rules, but securities attorneys and congressional watchdog groups say they raise flags about the appearance of conflicts of interest. "I don't think that any of these people should be owning these types of financial instruments," said Brian Biggins, a Cleveland securities lawyer and former stock brokerage manager. "I'm not saying they shouldn't be in the stock market. But if they're on the banking committee and trading in these kinds of stocks, I don't think that's right." For example, Rep. Ginny Brown-Waite, a Florida Republican, bought Citigroup stock valued between $1,001 and $15,000 on Oct. 2, the day before the House passed the financial rescue bill and President George W. Bush signed it into law, records show. She opposed the bill.
Eleven days later, she bought $1,001 to $15,000 worth of Bank of America stock. It was on the same day that then-Treasury Secretary Henry Paulson told leading banks that he expected them to accept billions in bailout money to prevent a financial meltdown. Brown-Waite, who has since left the committee to join the tax-writing Ways and Means Committee, and her spokeswoman would not comment for this article. The precise value of her investments is not publicly known because financial disclosure reports provide only broad ranges, although some members include detailed brokerage reports. Wilson, a Democrat from the eastern Ohio town of Bridgeport, sold between $15,001 and $50,000 worth of Huntington Bancshares stock on Nov. 14, the same day Huntington got $1.4 billion in bailout money from the federal Troubled Asset Relief Program, or TARP, records show. Wilson's transactions over the course of last autumn also included Bank of America and BB&T, both beneficiaries of the bank rescue program that Treasury implemented after congressional passage. Wilson's spokeswoman said the congressman did not personally pick these trades because he leaves day-to-day investment decisions to a money manager who uses a proprietary model in selecting securities to buy or sell. "To be clear, Mr. Wilson doesn't know about the trades ahead of time or even as they're being made," said spokeswoman Hillary Wicai Viers. A spokesman for Rep. Carolyn McCarthy, a New York Democrat also on the Financial Services Committee, said she similarly leaves transactions solely to the discretion of account managers. McCarthy's trades included a $2,275 purchase of bailout recipient J.P. Morgan Chase while Congress was still hammering out its rescue bill. Another member of the Financial Services Committee, Democratic Rep. Jackie Speier of California, said on a recent financial disclosure report that she bought up to $15,000 in Citigroup stock on Nov. 7. That was 10 days after the bank got a $25 billion bailout. 
Her office now says the report was filed in error, the transaction should have been listed as her husband's - and she wishes he had not made it. "When I brought it up with her, she said it was Barry's purchase and she didn't know about it but she would have disagreed with it at the time had she known about it," Speier spokesman Mike Larsen said. Her husband wasn't the only committee spouse trading on bank stocks. The stockbroker husband of West Virginia's Shelley Moore Capito, a Republican, sold more than $100,000 in Citigroup stock in several transactions late last year. His brokerage firm was owned by Citigroup and his compensation included Citigroup stock. A Capito spokesman said the House Ethics Committee gave her verbal approval to join the committee despite her husband's job. Another committee member, Illinois Republican Judith Biggert, whose husband sold Wells Fargo stock while Congress was helping to shape the rescue bill, said she does not discuss stock transactions with her spouse. "I wouldn't have the vaguest idea" why he sold at that time "because we don't discuss our stocks," said Biggert. "We have a financial group in Chicago, and they take care of all of that." Some of these stock sales enabled committee members or their families to cut losses before the market continued its slide. Other trades proved to be particularly ill-timed. Citigroup stock, for example, closed at $22.50 per share the day Brown-Waite bought it. Now it's hovering around $3. Many details about the massive financial bailout last fall were widely known outside Capitol Hill. Yet members of the Financial Services Committee were privy to closed-door discussions, staff briefings and political horse-trading decisions between political parties, Congress and the White House. Banks lobbied Congress and the administration heavily. 
Banks that received bailout money spent $77 million on lobbying and $37 million on federal campaign contributions last year, according to the Center for Responsive Politics. The center found that the banks spending the heaviest got the biggest rescue packages. There has been no direct evidence that this allowed members to engage in insider trading. But when lawmakers overseeing banks also buy and sell bank stocks, it can create "the appearance of a problem," said Anthony J. Hartman, a Cleveland securities attorney. "I do a lot of different types of litigation, and I just don't think anybody ought to be putting themselves in a situation where as an elected official, I can be suspect of what they are doing," Hartman said. The issue of appearances is complicated, said Melanie Sloan, executive director of Citizens for Responsibility and Ethics in Washington, because "we can't say that because you're a member of Congress you can't buy or sell any stocks at all." But she added, "I do think it's more troubling on an oversight committee, particularly Financial Services."
[A collage of the six Moroccan activists charged with belonging to the February 20th Movement. Image from Mamfakinch.] [The following report was originally published in French on Mamfakinch on 2 September 2012. It was subsequently translated into English. Both the English translation and original French version appear below.] The prisoners are now "officially" accused of unauthorized assembly, insulting a public officer, and assault and battery: accusations without evidence, intended to conceal a political trial. During their hearing before the judge on 31 August, they confirmed that they were subjected to very serious physical and psychological abuse. Through their eyewitness accounts, the prisoners took us back to a black era in Morocco's history: a time of political trials, admissions of guilt signed under torture, rape with objects inserted into the anus, insults, humiliations, fingernails and eyelashes pulled out…a history that would cause all proud Moroccans to blush with infinite shame. Present at this trial was the blogger Larbi, who reports to us: In front of their families in tears and in the presence of all those flabbergasted by the testimonies, they gave a detailed account of their inhuman treatment from the moment of their arrests. Below is their story in an open letter addressed to the public at large, attesting to the details of what they experienced: "After the protest that was repressed on 22 July 2012, which was intended to object to the high cost of living, increased prices, and political sentencing, we were kidnapped individually by plainclothes police. They hauled us away in a paddy wagon, blindfolded us, and began beating us with their fists, their feet, and their truncheons. Insults and humiliations were added on top of all of this. Once we got to the police station, they stripped us of all of our clothing and stuck hard objects into our anuses.
They also ripped out our eyelashes, reports Nour Essalam Kartachi, in order to force us to cry, "long live the king." Samir Bradelly also reported this to the judge, reminding him of the videos taken in Syria, where the people held in prisons were forced to say, "long live Bashar." Then during the interrogations, and to intimidate us, the police told us all the details of our lives before this point. "After our refusal to sign the accusations, without having read them, they tried to tear off our fingernails with pliers," describes Tarek Rouchdi. The police said to him "Mal Rabbek Kats7ab Rassek F l'Espagne? Hmazal Ma Wsalna Lih" [a series of swear words and "Do you think you're in Spain? We're not even at that point yet."] They refused to have our wounds cared for, in particular those of Samir Bradelly, who had a deep wound on his head that required several stitches. He requested medical help several times, but in vain. Realizing that he would have to spend the night in this condition, he had to stay awake without resting his head on the concrete floor in order to avoid an infection. After a hunger strike, the police finally agreed to take us to a small local hospital. The most seriously wounded, Samir, was treated only with an antiseptic (Betadine). As for the others, the doctor did nothing more than ask our names without caring for our injuries. But our saga wasn't over yet. Once we arrived at the Oukacha prison, several prisoners were enlisted to provoke, assault, and harass us. As for Laila, she was released temporarily. She has constant pain in her back because of how she was beaten: she reported having been violently struck on her chest with a truncheon and brutally hit. In conclusion, we: Reaffirm our commitment to all the claims that led us to join the February 20th Movement in the first place. Demand our immediate and unconditional freedom. Affirm our unconditional solidarity with all prisoners of conscience.
Acknowledge with gratitude those who supported us or expressed solidarity with us. Invite all free activists to remain faithful to the cause, to continue protesting on the streets, and to challenge this tightening noose of repression that is suffocating the masses in this country.
Norwegian midwives' perceptions of empowerment. Midwives are educated to care for women during pregnancy, birth and the postnatal period. For midwives to be able to fulfill their professional role they need to be empowered to do so. To investigate Norwegian midwives' perception of empowerment in practice. A cross-sectional study. In September 2014, a random sample of 1500 midwives was sent a questionnaire, which included the Perception of Empowerment in Midwifery Practice Scale (PEMS). Of 1458 eligible midwives, 595 (41%) completed the PEMS. Exploratory factor analyses and comparative analyses were done. Exploratory factor analyses identified three factors (subscales): Supportive management, Autonomous professional role, and Equipped for practice. Midwives working in a hospital setting scored significantly lower on the factors Supportive management and Autonomous professional role compared to midwives not working in a hospital setting (p < 0.001). Midwives with extra/special responsibilities scored higher than those without (p < 0.001) on the same two factors. Midwives working at units with <2500 births scored significantly higher on all three factors compared to midwives working at units with ≥2500 births (p < 0.001). The PEMS showed that Norwegian midwives' perception of empowerment at work differed according to midwives' education, role at work, duration of work experience, working situation and environment. This study supports the psychometric qualities of the PEMS.
Q: MySQL Multiple Table Query

The question with criteria: Any help with creating the SELECT statement that satisfies these criteria would be appreciated. The tables I believe are being used are as follows.

CREATE TABLE `client` (
  `ClientID` varchar(7) NOT NULL,
  `ClientName` varchar(45) DEFAULT NULL,
  `Street` varchar(100) DEFAULT NULL,
  `City` varchar(45) DEFAULT NULL,
  `State` char(2) DEFAULT NULL,
  `Zip` char(5) DEFAULT NULL,
  PRIMARY KEY (`ClientID`)
)

CREATE TABLE `contact` (
  `ClientID` varchar(7) NOT NULL,
  `ContactName` varchar(45) NOT NULL,
  `ContactPhone` char(17) DEFAULT NULL,
  `ContactEmail` varchar(45) DEFAULT NULL,
  PRIMARY KEY (`ClientID`,`ContactName`),
  CONSTRAINT `FK_ClientContact` FOREIGN KEY (`ClientID`) REFERENCES `client` (`ClientID`)
)

Here is what I've got so far.

SELECT ClientName, contact.ContactName, ContactPhone, ContactEmail, count()
FROM client, contact
GROUP BY ClientName, ContactName, ContactPhone, ContactEmail
ORDER BY ClientName;

Additional Information:

CREATE TABLE `event_contact` (
  `ClientID` varchar(7) NOT NULL,
  `ContactName` varchar(45) NOT NULL,
  `EventCode` varchar(12) NOT NULL,
  PRIMARY KEY (`ClientID`,`ContactName`,`EventCode`),
  KEY `FK_EC_Event_idx` (`EventCode`),
  CONSTRAINT `FK_EC_Contact` FOREIGN KEY (`ClientID`, `ContactName`) REFERENCES `contact` (`ClientID`, `ContactName`),
  CONSTRAINT `FK_EC_Event` FOREIGN KEY (`EventCode`) REFERENCES `events` (`EventCode`)
)

CREATE TABLE `events` (
  `EventCode` varchar(12) NOT NULL,
  `EventName` varchar(45) NOT NULL,
  `Description` varchar(150) DEFAULT NULL,
  `EventDate` date DEFAULT NULL,
  `StartTime` time DEFAULT NULL,
  `EndTime` time DEFAULT NULL,
  `Ticket` tinyint(4) DEFAULT NULL,
  `VenueID` char(7) NOT NULL,
  `ClientID` varchar(7) NOT NULL,
  PRIMARY KEY (`EventCode`),
  KEY `FK_Events_Venue_idx` (`ClientID`),
  KEY `FK_Events_Venue` (`VenueID`),
  CONSTRAINT `FK_Events_Client` FOREIGN KEY (`ClientID`) REFERENCES `client` (`ClientID`),
  CONSTRAINT `FK_Events_Venue` FOREIGN KEY (`VenueID`) REFERENCES `venue` (`VenueID`)
)

A: This query would list the details for each client's contacts:

SELECT client.ClientName, contact.ContactName, contact.ContactPhone, contact.ContactEmail
FROM client, contact
WHERE client.ClientID = contact.ClientID
ORDER BY client.ClientName;

A join is needed to connect the two tables to get every contact for each client. GROUP BY would not be used for this, because it would return only one contact for each client. The question states: "List the details for each client's contacts." Also, you are missing information: the question also asks for the number of events the contact has organized. I am guessing that should be another column in the contact table, or possibly information in another table. Below is the query using an explicit join.

SELECT client.ClientName, contact.ContactName, contact.ContactPhone, contact.ContactEmail
FROM client
INNER JOIN contact ON client.ClientID = contact.ClientID
ORDER BY client.ClientName;
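If the "number of events the contact has organized" is what the event_contact table in the question records, a LEFT JOIN plus COUNT covers it while keeping contacts with zero events. A runnable sketch using SQLite (chosen here only so the query can be executed inline; the SQL itself is standard and the sample data is made up for illustration):

```python
# Sketch: count each contact's events via LEFT JOIN + COUNT + GROUP BY.
# SQLite stands in for MySQL; table shapes are simplified from the question,
# and the sample rows ('Acme', 'Ann', ...) are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE client  (ClientID TEXT PRIMARY KEY, ClientName TEXT);
CREATE TABLE contact (ClientID TEXT, ContactName TEXT,
                      ContactPhone TEXT, ContactEmail TEXT,
                      PRIMARY KEY (ClientID, ContactName));
CREATE TABLE event_contact (ClientID TEXT, ContactName TEXT, EventCode TEXT,
                            PRIMARY KEY (ClientID, ContactName, EventCode));
INSERT INTO client VALUES ('C1', 'Acme');
INSERT INTO contact VALUES ('C1', 'Ann', '555-0100', 'ann@example.com'),
                           ('C1', 'Bob', '555-0101', 'bob@example.com');
INSERT INTO event_contact VALUES ('C1', 'Ann', 'EV1'), ('C1', 'Ann', 'EV2');
""")

rows = conn.execute("""
    SELECT client.ClientName, contact.ContactName,
           COUNT(event_contact.EventCode) AS EventCount
    FROM client
    INNER JOIN contact ON client.ClientID = contact.ClientID
    LEFT JOIN event_contact
           ON contact.ClientID = event_contact.ClientID
          AND contact.ContactName = event_contact.ContactName
    GROUP BY client.ClientName, contact.ContactName
    ORDER BY contact.ContactName
""").fetchall()
print(rows)  # [('Acme', 'Ann', 2), ('Acme', 'Bob', 0)]
```

Note that COUNT(event_contact.EventCode) counts only matched rows, so Bob (no events) correctly shows 0 instead of being dropped, which an INNER JOIN to event_contact would do.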
Doctor insights on: How Long Should It Take Prednisone To Start Taking Effect

1 Steroid taper: If you have tapered very slowly after being on steroids for years, it will take 6 to 12 months for the adrenal glands to recover normal cortisone production. The pituitary will take longer, and it is very difficult to assess this recovery of gland function. Most doctors will add extra cortisone to your regimen during times of stress, i.e., surgery or infection, or when you are very sick. ...Read more

Prednisone is a synthetic cortisone. The body makes cortisone, a natural hormone made in the adrenal glands. The body converts it to hydrocortisone to become active. 25 mg of cortisone has about the same effect as 5 mg of prednisone. The average person would produce the equivalent of 3-6 mg of prednisone daily. So why use a substitute? The synthetic has more anti-inflammatory effect, but has less effect on minerals like potassium. ...Read more

3 Hive treatment: In general, it will take a couple of hours, since Prednisone needs time to suppress the production of histamine. On the other hand, an antihistamine works faster: it suppresses the hives themselves. Your doctor would decide which medication works best in your situation. ...Read more

6 Prednisone: You did not mention the dose you were taking, or whether your Prednisone was tapered gradually before finally being discontinued. If it was gradually tapered, as is recommended, your metabolic recovery should be complete within about a week after the last dose. ...Read more

7 Depends: Prednisone causes a focusing instability most prominent when the dosage is either increasing or decreasing. At steady state, or once it is out of your system, it generally takes two months to stabilize. If the blurred vision is from long-term use with cataract formation, then the blur will remain until the cataracts are surgically removed. ...Read more

9 Not really: Prednisone at any dose has risks. Obviously, the size of the dose and the length of time it is taken determine the degree of risk.
Avoid long-term Prednisone at any dose if you can. ...Read more

10 Rapidly: One of prednisone's benefits is that it generally works rapidly, within 1-2 weeks. There are times, with certain diseases such as lupus or vasculitis, when it might take longer, perhaps 3-4 weeks. ...Read more
5 Low-Water Lawns That Stay Green Under Pressure There’s nothing quite like a lush, green grass lawn. Although the year’s dry winter has many homeowners switching to grass-free yards, those low-water alternatives don’t always hit the spot when what you want is soft green turf. If you love your grass lawn, but don’t love the constant watering, then you need to check out these five grass lawn varieties. Suitable for every area of the country, these grasses are specially designed to stay thick and green with little to no watering, mowing, or fertilizing. Take advantage of warming temperatures and spring rain to overseed your lawn with these varieties now. By summertime, you’ll be enjoying green grass without a care. 1. Eco Lawn Developed from a mix of native fine fescue grasses, Eco-Lawn is the most versatile low-water turf lawn variety. Once established, you can almost forget about it: it doesn’t require regular mowing to stay green, and only needs to be watered during extremely dry periods, if at all. Eco-Lawn grows well in bright sun, the dappled shade cast by trees, and the deep shade cast by fences and neighboring buildings. Underplant it with spring bulbs and wildflower seeds for a windblown meadow look, or mow it regularly for a traditional-looking lawn. 2. Pearl’s Premium Once established, Pearl’s Premium grass seed develops roots that extend a foot or more into the ground, making the turf extremely drought-tolerant. Water this lawn once a month or less, and it will still grow a thick, verdant green. This hardy grass also grows slowly, so you only need to mow every month or so. 3. Fleur de Lawn The secret to Fleur de Lawn‘s extraordinary hardiness is its mix of grass and low-growing clover. Clover fixes nitrogen from the air into the soil, which fertilizes the grass and keeps it green year-round; the clover also acts as a “weed you want”, crowding out undesirable plants such as dandelions and crabgrass. 
In addition to these practical perks, clover also adds a sprinkling of color to your lawn in the spring and summer, making this mix perfect for rustic or country-style homes. 4. UC Verde Buffalo Grass Developed by researchers at the University of California, UC Verde Buffalo Grass was designed to withstand hot, dry weather with 75% less watering than traditional turf grasses. UC Verde can be allowed to grow for a tufty, flowing lawn, or mowed every few weeks for a hardy turf lawn perfect for running, playing, and foot traffic. UC Verde does best in temperate zones with winters that stay above 20 degrees and summers that stay below 90 degrees, and will grow best in full-sun areas. 5. Bluestem Enviro-Turf Enviro-Turf is a hardy fescue mix that looks just like a traditional turf lawn, but only needs to be mown and watered every couple of weeks. This type of low-water lawn will work best in dry sites with good drainage, and has the added bonus of being naturally richer in color than most turf grass lawns. Top Image Credit: Lowe's. Does your lawn need constant watering to stay green? What do you think of these low-water grass lawns?
The demonstrators included “Black bloc” protesters, who wear masks and black clothing to present a unified front as they disrupt events, making it difficult for police to recognize individuals in the group. They are often seen at protests organized by groups such as Black Lives Matter and Occupy Wall Street, destroying property and setting fires. They torched a limousine in Washington last month on the day of Trump’s inauguration, and a group spray-painted buildings and smashed electrical boxes during a demonstration in Portland, Ore., earlier in January. When a group of them arrived at Berkeley, it swiftly changed the tenor of the peaceful demonstration. Some students organized to try to protect the campus and businesses nearby, and then to pick up broken glass, scrub graffiti off buildings and clean the campus after the violence. William Morrow, president of the Associated Students of the University of California, said in a statement that the cleanup effort showed what Berkeley students care about. “Last night was not reflective of that,” he said; students were expecting a peaceful protest, an exchange of opinions and a dance party, “but outside agitators infiltrated our community and didn’t treat it with the respect for our historic tradition of non-violence.”
What are Solar Panels, and How Do They Work? Solar panels convert sunlight into electricity, and the process they use is actually quite simple. The more sunlight that hits a panel, the more electricity it will produce. Basically, the sun's light is made up of photons. When sunlight strikes a solar panel, the photons knock loose electrons in materials inside the panel, and electricity is produced.

Inside a Solar Panel

The inside of a solar panel contains a series of chemical sandwiches known as photovoltaic cells. When sunlight is directed into these chemical sandwiches, a chemical reaction occurs that is transferred from one sandwich to another. This reaction creates an electric current, which is then converted into a type of energy that can be used to power the equipment and appliances in your home. A single photovoltaic cell produces approximately half a volt of electrical output. When 36 cells are connected in series, a module is created which produces 18 volts. Another word for module is panel. Each solar panel contains a number of interconnected photovoltaic cells housed in a finished product. A typical module (solar panel) measures around 2.5 feet wide by 5 feet long. Modules are usually bluish or black in color, and the frames are generally constructed of aluminum. The frames can either be left in their natural aluminum finish or painted black, though most homeowners choose black so the panels blend in better with whatever color tiles or shingles are on the roof. A wide range of voltage and current outputs can be achieved depending on how the solar panels are connected together. The larger the surface of the panel, the more electricity it will produce, as it will be able to absorb more sunlight. Solar panels are generally rectangular in shape and come in a variety of sizes, depending on the size of the roof or area where they will be housed, and the amount of electricity required.
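The cell-to-module arithmetic above (roughly half a volt per cell, 36 cells wired in series) can be sketched as a quick calculation. The figures are the illustrative ones from the text; real cell voltages vary with load and temperature:

```python
# Series-string arithmetic for a photovoltaic module (illustrative figures
# from the text above; real-world values vary).
CELL_VOLTAGE = 0.5       # approximate volts per photovoltaic cell
CELLS_PER_MODULE = 36    # a common series count for one module (panel)

def module_voltage(cells: int, volts_per_cell: float = CELL_VOLTAGE) -> float:
    """Cells wired in series add their voltages."""
    return cells * volts_per_cell

print(module_voltage(CELLS_PER_MODULE))  # 18.0
```

Doubling the series string to 72 cells would give a 36 V module by the same arithmetic, which is why module voltage scales directly with cell count.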
A typical residential solar panel produces around 200 watts of power. Basically, photovoltaic cells linked together are what produce a solar panel.

How Photovoltaic Solar Panels Work

How Long Do Solar Panels Last?

"How long do solar panels last?" is often one of the first questions that solar energy system contractors are asked. There is no simple, short answer to the question, as there are many variables to consider. The make of your panels, the type of panel, your geographic location, climate, and where they will be located all factor into the equation. There are solar panels that are still in operation after being installed in the 1970s. These were some of the first panels developed, and technology since has only made them better. Most modern manufacturers offer a 25-year warranty on solar panels, but often they can last much longer. Under optimum conditions that figure could almost double, but that is also dependent on the quality of materials used by an individual manufacturer. Inside each solar panel is a series of photovoltaic cells. These are basically semiconductors composed of silicon, boron, and phosphorus, as well as some metals. The quality of those materials and how they are manufactured will also play a part in a panel's longevity. Your geographic location and climate probably play a larger role than anything else in how long solar panels last. Don't get climate confused with weather. Weather is the conditions for a day; climate is the cumulative weather conditions over a substantial period of time. Though a lot of sun will help you produce lots of electricity, it will also cause a bit more wear and tear on your solar hardware. Rain in areas with lots of pollution will be acidic, and so cause corrosion on fittings and panel frames. If you live in such an area, don't let that discourage you from having solar panels installed. A bit of regular maintenance can counter those effects quite easily.
That can be as simple as hosing your panels off once or twice a week, which will also keep them free from dust and other debris that can hinder output. If you live in an area that is susceptible to high winds, you'll need to take extra precautions to make sure your panels are adequately insured. Of course winter brings its own set of problems, and you'll have to keep your panels clear of snow to avoid deterioration and to keep output up. Generally maintenance is quite simple, and a lifespan of 25 years or more is not unreasonable to expect. The professionalism with which you have them installed, and how well you maintain them, will dictate how long your solar panels will last. Take care of them and they'll take care of you.

9 Reasons Why Solar Energy is Soaring

The demand for fossil fuels in the United States is beginning to exceed the available supply. Even if newly found reserves exceed expectations or new technology significantly improves oil and gas recovery, supply and demand will become unbalanced. But that is only one of the reasons why many homeowners are converting to solar power. Solar power is versatile and can be used in a number of different ways. Each has its own complexity and costs, but in the end the benefits are indisputable. Following are nine applications of solar power that are contributing to its increasing popularity.

Generating Electricity for General Usage: You can actually install a solar energy system that will reduce your electricity bill to zero. It will require a significant initial investment, but it will end up saving you loads of cash, and it's good for the environment. Running a household is one of the most popular uses of solar energy, and solar power system installation is growing at a rate of 25% or more per year.

Cooking: There are a few easy-to-build solar powered ovens and stoves that can be constructed so that you can cook using the sun's energy.
The technology is there, you are only limited by your imagination! Space Heating: Strategically placed blinds, awnings, sunrooms, and skylights and such can help you to heat your home naturally. Heating Water: Heating the domestic water supply is another popular application of solar energy. Many homeowners have sun-warmed water pumped through compatible plumbing systems to supply their homes, and often require no electrical pumps or other moving parts to do so. Pumping Water: There are solar systems that are designed so that you can slowly pump water into a tank when the sun is shining, and then draw on it later. The tanks are also designed to absorb sunlight, thus heating the water naturally and reducing the power load on any domestic water heaters in the house. Heating of Swimming Pools: Many homeowners who have a swimming pool keep it covered with a solar blanket that heats it efficiently and cheaply. There are also hot water heating panels that can be installed on the roof of your house or garage to heat the water in your pool year round. Landscape Lighting: Solar powered garden and landscaping lights are a very effective way to light your grounds at night. Modern technology has advanced to the point where such lights are efficient and attractive, so there is no need to use expensive lighting powered by the utility companies anymore. In fact, solar lighting is the most widely used solar technology, and there is virtually no drawback to using it. Indoor Lighting: There are a number of effective solar lighting systems for inside the home that use LED (light emitting diodes) technology. These small electronic lights use little current, and entire rooms in your house can be lit using them in combination with a small off-grid solar system connected to a battery. The battery is charged during the day so that there is enough juice to do the job at night. 
Powering Remote Dwellings: An entire holiday or camping cabin, RV or boat can be powered by solar energy, a niche in which technology continues to develop and solar usage increases.

Summary

Solar power is produced domestically, and every kilowatt hour produced reduces the demand for foreign oil by the same amount. As you can see from the above nine applications, you can install a system to power your entire household, or just an aspect of it. Every little bit helps; not just your wallet, but the environment as well.
Pilot-scale in situ bioremediation of uranium in a highly contaminated aquifer. 2. Reduction of U(VI) and geochemical control of U(VI) bioavailability. In situ microbial reduction of soluble U(VI) to sparingly soluble U(IV) was evaluated at the site of the former S-3 Ponds in Area 3 of the U.S. Department of Energy Natural and Accelerated Bioremediation Research Field Research Center, Oak Ridge, TN. After establishing conditions favorable for bioremediation (Wu et al. Environ. Sci. Technol. 2006, 40, 3988-3995), intermittent additions of ethanol were initiated within the conditioned inner loop of a nested well recirculation system. These additions initially stimulated denitrification of matrix-entrapped nitrate, but after 2 months, aqueous U levels fell from 5 to approximately 1 microM and sulfate reduction ensued. Continued additions sustained U(VI) reduction over 13 months. X-ray near-edge absorption spectroscopy (XANES) confirmed U(VI) reduction to U(IV) within the inner loop wells, with up to 51%, 35%, and 28% solid-phase U(IV) in sediment samples from the injection well, a monitoring well, and the extraction well, respectively. Microbial analyses confirmed the presence of denitrifying, sulfate-reducing, and iron-reducing bacteria in groundwater and sediments. System pH was generally maintained at less than 6.2 with a low bicarbonate level (0.75-1.5 mM) and residual sulfate to suppress methanogenesis and minimize uranium mobilization. The bioavailability of sorbed U(VI) was manipulated by addition of low-level carbonate (< 5 mM) followed by ethanol (1-1.5 mM). Addition of low levels of carbonate increased the concentration of aqueous U, indicating an increased rate of U desorption due to formation of uranyl carbonate complexes. Upon ethanol addition, aqueous U(VI) levels fell, indicating that the rate of microbial reduction exceeded the rate of desorption. Sulfate levels simultaneously decreased, with a corresponding increase in sulfide.
When ethanol addition ended but carbonate addition continued, soluble U levels increased, indicating faster desorption than reduction. When bicarbonate addition stopped, aqueous U levels decreased, indicating adsorption to sediments. Changes in the sequence of carbonate and ethanol addition confirmed that carbonate-controlled desorption increased bioavailability of U(VI) for reduction.
all:
	@$(MAKE) -s -f mydump.mk
ifneq ($(wildcard /usr/include/mysql/mysql.h),)
	@$(MAKE) -s -f mydumper.mk
endif

clean:
	@$(MAKE) -s -f mydump.mk clean
	@$(MAKE) -s -f mydumper.mk clean
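The ifneq/$(wildcard …) guard above only builds mydumper.mk when the MySQL client header is installed: $(wildcard) expands to the empty string when the file is absent. An equivalent existence check in plain shell (the helper name is mine; the path is the one from the Makefile):

```shell
# Mirror the Makefile's $(wildcard /usr/include/mysql/mysql.h) test:
# print "present" if the file exists, "absent" otherwise.
check_header() {
  if [ -f "$1" ]; then
    echo present
  else
    echo absent
  fi
}

check_header /usr/include/mysql/mysql.h
```

A build script could use the same test to decide whether to invoke the MySQL-dependent makefile at all.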
###### What is already known? - Whiplash injuries and associated disorders, if not treated and managed, can cause chronic pain, suffering, disability and healthcare utilisation. - There is no standardised protocol for rehabilitation or treatment of patients with whiplash-associated disorder (WAD). - The Quebec Task Force report remains the most ambitious and comprehensive review of the management of whiplash, to date. However, this report finds the available evidence to be sparse and of poor quality. ###### What are the new findings? - Literature shows that cervical spine exercise, including strengthening, endurance and stretching, shows benefit to patients with chronic WAD. - The benefit of invasive treatments, such as surgery or pharmaceuticals, is inconclusive in the current literature. - There is a need for further studies directed towards the optimal type, protocol and duration of a cervical spine exercise programme for patients with WAD. Introduction {#s1} ============ Whiplash injuries are estimated to affect 3.8/1000 people per year in the USA.[@R1] Sports, falls, automobile collisions and other physical trauma can cause a whiplash injury.[@R2] Whiplash injuries and associated disorders, if not treated and managed, can cause chronic pain, suffering, disability and healthcare utilisation.[@R3] Whiplash injuries and the associated medical care, disability, sick leave and lost work productivity account for \$3.9 billion annually in the USA.[@R4] It has been concluded that 14%--42% of patients with whiplash injury report symptoms 6 months postinjury, and 10% of those patients have constant severe pain.[@R6] Whiplash injuries result from a forceful, rapid, back and forth movement of the neck. 
The Quebec Task Force (QTF) developed recommendations regarding the classification and treatment of whiplash, and defined whiplash as an acceleration-deceleration mechanism of energy transfer to the neck and head from indirect neck trauma.[@R1] Whiplash symptoms can occur immediately postinjury, but can have delayed onset. The impact that causes whiplash may result in bony or soft tissue injuries, which in turn may lead to a variety of clinical presentations and manifestations.[@R9] The term whiplash-associated disorder (WAD) is used to describe the clinical presentation of a whiplash injury, and to separate it from the injury mechanism.[@R8] WADs represent a class of clinical complaints associated with a whiplash injury, and are characterised by multiple physical complaints after a flexion-extension trauma to the neck. Such complaints can include, but are not limited to, headache, dizziness, neck pain and cognitive symptoms.[@R10] Neck or cervical spine pain is the leading symptom of WAD, and is addressed by almost all published studies examining the diagnosis and treatment associated with such an injury.[@R11] WAD symptoms likely result from cervical ligament sprain or cervical muscle strain. Fractures to the cervical vertebrae are not included in WAD, although they can occur as a result of whiplash.
Typically, symptoms of WAD should resolve in the first 2--3 months postinjury.[@R12] However, recent research suggests that approximately 60% of patients with whiplash injury are still reporting symptoms 3 months postinjury.[@R7] Patients remaining symptomatic for more than 3 months postinjury are considered to have chronic WAD, and present a challenging condition to treat for healthcare professionals.[@R13] WAD is difficult to treat, and often requires an interdisciplinary approach, including medical professionals, rehabilitation professionals and psychology professionals.[@R14] Interventions have to address physical complaints such as neck pain, and a cluster of physiological, cognitive and emotional symptoms that may result from WAD.[@R3] Due to the complexity of WAD, and the need for different interventions, WAD greatly impacts an individual's quality of life. Current treatment guidelines and protocols for healthcare professionals working with individuals with WAD, are vague, and have little supporting evidence. In 1995, QTF published its guidelines for the management of WADs that aimed to offer a recommendation to treatment and management for WADs.[@R8] While determining and performing an in-depth analysis of clinical, public health and financial determinates of whiplash, QTF[@R8] reviewed 10 000 publications to determine diagnosis, treatment and prognosis of WADs.[@R15] Through this process, the available evidence was found to be sparse and of poor quality.[@R16] QTF based their recommendations for WAD treatment on consensus and the expert knowledge of QTF members from various clinical fields. This report remains the most ambitious and comprehensive review of the management of whiplash, to date. 
However, the review by QTF (1995) does not include any studies investigating the effect of electrical stimulation, modalities, surgery, exercise, nerve blocks, psychological interventions or acupuncture.[@R9] All of these possible interventions may have a role in WAD treatment, and have been further investigated post-QTF guidelines. Conlin *et al* [@R16] published a systematic review of the whiplash intervention literature (included studies from 1993 to 2003), and noted that despite QTF's recommendations, 'remarkably little quality research' (Conlin, p39)[@R16] had been published in the area of whiplash treatment and management. Conlin *et al* [@R16] determined the need for further research in the area of chronic whiplash management. The findings of the QTF report, as well as the subsequent available literature on WAD treatment, continue to be inconsistent and inconclusive, and do not offer a specific treatment or rehabilitation model for WADs. Rather, the best available literature varies in scope, profession and approach to the management of WAD in adults. The literature requires further research exploration prior to implementation into evidence-based clinical practice guidelines (moving beyond clinical experience and consensus). The purpose of this narrative review is to examine the literature to determine the effect of different cervical spine interventions in adults with chronic WAD. Materials and methods {#s2} ===================== This narrative review uses the Arksey and O'Malley[@R17] methodological framework. A narrative review refers to a rapid gathering of literature in a given clinical area with the objective of gathering as much evidence as possible and mapping the results.[@R17] Thus, a narrative review is appropriate for this topic, as the goal was to summarise the current state of the literature as it relates to the proposed research question.
This narrative review aimed to address the research question: *What are current cervical spine treatment techniques for adults with chronic WAD?* The purpose of this narrative review was not to assess the quality of the intervention studies included (which would be more appropriate in a systematic review), but rather to encapsulate the current literature on cervical spine interventions for adults with a chronic WAD. Furthermore, if applicable, included studies were assessed for statistical significance and for clinical significance.[@R22] Clinical significance answers the question, 'how effective is the intervention or treatment, or how much change does the treatment cause?'.[@R18] Four of the 14 included studies commented on clinical significance. The authors of each study reporting clinical significance determined these values for clinically important changes, and therefore there is some subjectivity.[@R21] A narrative study typically unfolds in a five-step process. Step 1: identify the research question {#s2a} -------------------------------------- The aim of this narrative review was to answer the question: What is known in the existing literature regarding the best ways to approach the biological rehabilitation of the cervical spine in adults with chronic WAD? Step 2: identification and study selection {#s2b} ------------------------------------------ A structured literature search was conducted in the following academic databases until April 2017 (with no publication date limitations): MEDLINE, MEDLINE non-indexed, PsychINFO, EMBASE, CINAHL, Web of Science and Scopus. For grey literature, ProQUEST dissertations and theses, wire feeds and trade journals were also searched. These databases were searched for articles published between 2003 and April 2017.
The systematic review by Conlin *et al* [@R16] included studies published until the year 2003; therefore, the purpose of this review is to compile and review the literature published after Conlin *et al* 's review. Step 3: select the studies for detailed analysis {#s2c} ------------------------------------------------ All articles that had titles that conformed to the inclusion criteria (listed below) were evaluated for relevance by two reviewers (CA and TT). The inclusion criteria for this narrative review were: (1) the study focused on people older than 19 years of age, defined as adults by WHO[@R27]; (2) the study focused on cervical spine treatment and intervention methods; (3) the study focused on rehabilitation postwhiplash with symptoms lasting longer than 3 months (chronic symptoms)[@R28]; (4) the article was written in English and published after 2003; (5) the evaluation of the treatment effect must have included a measurable outcome. Citations were excluded from the review if they: (1) were case studies, or of non-experimental or uncontrolled design[@R29]; (2) were not written in English. The definition of adult was inconsistent across the multiple databases; therefore, the WHO definition of adult[@R27] was used to satisfy the overlap in age ranges found within the various databases. Symptoms lasting longer than 3 months are considered chronic.[@R7] For this reason, all included studies must have included symptoms lasting for longer than 3 months to be considered chronic, and thus to be included in the review. Abstracts were included to suit the broad and all-encompassing nature of narrative review methodology.
Step 4: data extraction {#s2d} ----------------------- Consistent with the Arksey and O'Malley[@R17] recommendations, a 'descriptive-analytical' data extraction tool was developed by the primary author (CA) based on common variables across studies as well as 'process' information (eg, how were whiplash symptoms measured) to contextualise the outcome of the studies. All 14 selected articles were charted according to the domains outlined in the data extraction tool. Step 5: collate and summarise the findings of the selected studies {#s2e} ------------------------------------------------------------------ All articles included in the review were charted according to the data extraction tool as outlined in step 4. These conceptual dimensions provide the basis for the discussion of results in the next section. Numerical analysis was used to document the distribution of studies according to: study design, population characteristics, study purpose and neck pain measurement.[@R17] Qualitative content analysis[@R30] was used to summarise cervical spine interventions for adults with chronic WAD, wherein codes were generated from the data through an iterative process.[@R31] Studies were coded by the first author for whiplash severity; study procedure (eg, measurement of symptoms, diagnosis, intervention, time since injury) and outcome(s) measured (eg, neck disability and functional outcome). No attempt was made to represent these data in other terms (ie, through extant theory, themes) but rather to describe regularities as they appear in the data. Results {#s3} ======= In total, 14 articles were included in the narrative review (see [figure 1](#F1){ref-type="fig"} for the article selection process). These studies can be grouped into treatment categories: exercise programme, alternative techniques (including osteopathic therapy and other alternative therapies) and invasive interventions.
![Narrative review article selection flow diagram.](bmjsem-2017-000299f01){#F1} Treatment categories {#s3a} -------------------- ### Exercise intervention {#s3a1} Seven randomised controlled trials (RCTs) evaluated the effect of an exercise programme targeted at the cervical spine to address WAD symptoms in participants reporting symptoms for longer than 3 months. In an RCT, Stewart *et al* [@R32] determined that exercise and advice (experimental group) produced better outcomes than advice alone (control group) for people who have sustained WAD beyond 3 months. Furthermore, exercise had greater benefit in people with high levels of pain and disability associated with WAD. However, these positive effects of exercise and advice compared with advice alone were small and only apparent in the short term (up to 6 weeks post-treatment: pain intensity p=0.005, pain bothersomeness p=0.003, patient-specific function p=0.006), and no significant differences were evident between the experimental and control groups on any of the included outcome measures at 12 months post-treatment. During the treatment period, 15% of participants in the exercise and advice group and 23% in the advice group reported seeking additional treatment, such as massage therapy, further physiotherapy, chiropractic treatment, hydrotherapy and osteopathic treatment. Similarly, in an RCT that included 170 participants, Michaleff *et al* [@R23] concluded that a comprehensive exercise programme (including cervical spine strengthening, stretching and postural control exercises) did not provide statistically significant benefit over advice alone for the average reported neck pain intensity of patients with chronic WAD. A significant difference in self-reported recovery and functional ability was found for the exercise group at the conclusion of the study intervention. 
However, these results were not clinically significant, indicating that the differences found were not clinically meaningful. Vikne *et al* [@R33] also investigated cervical spine exercise and proposed incorporating a ceiling-mounted sling to promote and enable cervical spine stabilising exercise. In this study, there was no benefit of adding a ceiling-mounted sling exercise programme (designed to promote neuromuscular control of the neck) to a traditional exercise programme prescribed by a physiotherapist. In another RCT, Jull *et al* [@R34] examined the effects of a supervised exercise and physiotherapy programme compared with a self-managed home exercise programme. Over a 10-week intervention, the multimodal physiotherapy group (exercise, strength training of the cervical spine and stretching of the cervical spine) attained a significantly greater reduction in neck pain and disability than the self-managed group (p=0.04). In this study, neither the treating physiotherapists nor the patients were blinded. Ludvigsson *et al* [@R35] performed a study to evaluate the effects of a cervical spine exercise programme, with or without a behavioural approach, on pain disability in patients with chronic WAD. They concluded that a physiotherapy-led exercise programme for the cervical spine resulted in superior outcomes (p\<0.01) compared with an unsupervised physical activity prescription in the WAD population after 3 months. Overall, neck-specific exercise interventions resulted in significantly greater changes in disability compared with the unsupervised group. It should be noted that compliance was lower in the unsupervised exercise group. The authors suggested that patients may have felt that this intervention was less direct and specific to their problem and may have been less motivated. In addition, a study by Treleaven *et al* [@R36] also investigated the effect of exercise with or without a behavioural approach for this population. 
The results showed that all groups had significant improvement in reported neck pain over the course of treatment. Simple contrast analysis revealed significant improvements from baseline to the 6-month (p\<0.001) and 12-month (p=0.005) follow-ups. No significant differences were found between the supervised, unsupervised and exercise-with-behavioural-approach groups. Finally, an RCT by Ryan *et al* [@R24] compared strength training of the cervical spine with cervical spine endurance training. The results indicate statistically significant and clinically meaningful reductions in pain intensity over time in both the strength training and endurance training groups, but the strength training group experienced significantly greater reductions in functional limitations. ### Conclusions regarding exercise interventions in chronic WAD {#s3a2} Exercise programmes targeting the cervical spine (strength, endurance, flexibility, postural control) appear to be effective in reducing pain and disability from chronic WAD. Further research is needed to establish the clinical significance of exercise interventions, specific exercise protocols, longevity of results and duration of treatment for exercise programmes in reducing symptoms for patients with chronic WAD. A high proportion of the studies assessing exercise interventions in chronic WAD were RCTs, which suggests good methodological design. ### Alternative interventions {#s3a3} The effectiveness of alternative interventions on the cervical spine for patients with chronic WAD was assessed in four studies. These alternative interventions included osteopathic treatment, dry needling and acupuncture as possible interventions for patients with chronic WAD. In a clinical intervention study involving 42 participants, Schwerla *et al* [@R25] evaluated the effects of osteopathic treatment on patients with chronic WAD. 
The osteopathic techniques included high-velocity thrusts to the cervical spine, myofascial release, muscle energy techniques, and indirect techniques such as balanced ligamentous tension and cranial techniques. A direct comparison between the untreated period and the treatment period revealed clinically relevant and statistically significant improvements during the osteopathic treatment period for the neck pain and disability outcome measurements. This study concluded that osteopathic treatment has a beneficial effect on the physical and mental components of WAD, and that osteopathic treatment is a complementary intervention for chronic WAD. It is important to note that because of the chosen study design (clinical intervention), the internal validity of the study is reduced and the results must be interpreted with caution. The results of this study should be confirmed by RCTs. Another study, which investigated the effects of dry needling for chronic WAD, was conducted by Sterling *et al*.[@R26] Dry needling is a technique in which a needle is used to release myofascial trigger points. In this study the researchers compared dry needling and exercise with sham dry needling and exercise over a 6-week intervention. Dry needling and exercise produced statistically significant reductions in pain-related disability and pain catastrophising at 6-month and 12-month follow-up, statistically significant reductions in post-traumatic stress symptoms at 6 months, and small increases in pressure pain thresholds over the neck at 12 weeks. Aside from this latter measure, there was no difference between the interventions at the short-term follow-ups conducted immediately after treatment and 12 weeks later. The data indicate that there was improvement in the primary outcome measures (an approximate 10% decrease in average Neck Disability Index score) from baseline to the immediate 6-week postintervention assessment. 
The study did not include an exercise-only group, so the results should be interpreted with caution and are not deemed clinically relevant. Another similar study, by Hyun *et al* [@R37], investigated the effects of acupuncture three times per week (acupuncture points based on clinical assessment) for chronic WAD. This study found that the change in Visual Analogue Scale scores in the acupuncture group was significant (−1.85 compared with −0.040 in the waiting-list group, 95% CI, p=0.001). No significant changes in secondary outcome measures, such as quality of life, were found. However, it was concluded that acupuncture treatment is associated with a significant alleviation of pain. It should be noted that the treatment period in this study was short (six treatments), and there was no additional follow-up with participants after treatment termination. This may have resulted in only a partial assessment of the effect of acupuncture on patients with WAD. Diagnosis of WAD, and subsequent recovery reports from patients, were self-reported, and no imaging was involved. ### Conclusions regarding alternative interventions in chronic WAD {#s3a4} Based on the results from studies investigating alternative treatments for chronic WAD, there is limited evidence that such treatments provide relief to this population. Further research, with more rigorous methods, and RCTs are needed to produce clinically relevant results. ### Invasive interventions {#s3a5} Three studies investigated invasive techniques for the treatment of chronic WAD. A study by Nystrom *et al* [@R38] explored treating patients with WAD reporting cervical spine segmental pain through a fusion operation based on non-radiological segment localisation. At follow-up, 67% of the patients in the surgery group and 23% in the rehabilitation group demonstrated improvements in self-reported pain (p=0.0007). 
The researchers concluded that among patients with chronic neck pain postwhiplash, there are some in whom the neck pain emanates from a motion segment in the cervical spine and who may be suitable for fusion surgery. Another invasive technique, botulinum toxin A, was investigated in an RCT by Padberg *et al*.[@R39] In this study, participants were randomly assigned to receive botulinum toxin A or a placebo (saline) injection in the cervical spine muscles presenting with increased tenderness. No significant difference (95% CI) was found between the group that received botulinum toxin A and the group that received saline injections. Finally, a study by Lemming *et al* [@R40] investigated the effect of intravenous administration of morphine, lidocaine and ketamine and its relation to the duration of chronic neck pain after whiplash trauma. The response to the pharmacological intervention did not show any relationship with pain duration. ### Conclusions regarding invasive interventions in chronic WAD {#s3a6} Results from the three studies investigating invasive interventions for symptoms related to chronic WAD were inconclusive; further studies that employ random allocation to experimental and control groups are needed to produce clinically relevant results. Discussion {#s4} ========== The soft tissue and/or vertebrae of the cervical spine are damaged in a whiplash injury due to the acceleration and deceleration of the neck.[@R9] For this reason, interventions addressing the soft tissue and/or the vertebrae of the cervical spine target the specific structures likely causing the symptoms associated with chronic WAD. Studies investigating treatments for chronic WAD are consistent in concluding that addressing the soft tissue and vertebrae of the cervical spine, whether through exercise, alternative treatments or invasive interventions, is beneficial. 
Specifically, treatments involving exercise appear to have a more consistent clinical effect and stronger statistical significance than the other studied interventions.[@R23] Up to 87% of patients with WAD have some degree of cervical spine muscle spasm.[@R6] Specific and graded exercise targeting the cervical spine muscles and postural control can help alleviate muscle spasm, thus contributing to decreased symptoms in patients with chronic WAD. Research investigating alternative intervention techniques, such as osteopathic treatment, acupuncture and dry needling, shows benefit to patients with chronic WAD, but the results may not be clinically significant. Not all studies included in this review discussed the clinical significance of their results, and statistical significance does not imply clinical significance. A calculation and discussion of clinical significance would contribute to this body of literature and to the applicability of alternative interventions for the cervical spine post-WAD. Further studies, with improved methodological processes, and RCTs must be published before clinical relevance can be established. Osteopathic treatment, acupuncture and dry needling pose very little risk of complications to a patient with WAD, and thus further research is warranted to support the use of these treatments as part of standard clinical care for adults with chronic WAD. Recommendations for invasive techniques cannot be made based on the current literature. The study of pharmaceutical interventions found no relationship between pharmacological response and pain duration, and thus further research is needed to contribute to clinical practice. The study investigating cervical fusion suggested pain relief for patients with chronic WAD; however, the study was of low quality (due to its design), and further research is needed to be conclusive. 
High-quality research, such as RCTs, together with the calculations required to determine clinical significance, is needed for surgical and injection-based interventions to prove clinical relevance and benefit for chronic WAD, especially given the cost and risk associated with invasive procedures.[@R29] However, if less invasive interventions, such as exercise and complementary medicine, produce clinically significant results, they would be preferred on the grounds of risk, cost, administration and outcome. A total of 14 studies were included in this narrative review, and although there are some statistically significant results, and results of clinical relevance (practical importance of a treatment effect), all studies concluded the need for further research to gain greater insight into cervical spine interventions for adults with chronic WAD. Conclusion {#s5} ========== In this review, all studies were RCTs, systematic reviews or clinical interventions. Exercise programmes focused on cervical spine strength and endurance appear to be the most effective treatment technique for this population. However, the optimal type, protocol and duration of these programmes remain unknown. Further research is required to inform the implementation of these interventions into the standard clinical care for adults with chronic WAD. Literature contributing to cervical spine interventions for this population would be beneficial to reduce disability, improve quality of life and lessen the burden on our healthcare system. **Contributors:** CA and TT completed the literature review for this article. CA wrote the article. NR and EY edited and provided guidance throughout the process. **Funding:** The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors. **Competing interests:** None declared. **Patient consent:** Not required. 
**Provenance and peer review:** Not commissioned; externally peer reviewed. **Data sharing statement:** The data shared in this article are available to all; no data from this study were withheld.
Introduction ============ Hyperthyroidism is a common metabolic disorder with cardiovascular manifestations. It often causes classical high-output heart disease because of decreased systemic vascular resistance and increased resting heart rate, left ventricular (LV) contractility, blood volume, and cardiac output \[[@REF1]-[@REF2]\]. However, thyrotoxic cardiomyopathy with severe LV dysfunction is rare. Heart failure (HF) is most commonly seen as a result of longstanding, often untreated, thyrotoxicosis with coexistent atrial fibrillation (AF). HF is a major cause of morbidity and mortality in Europe and the United States and is responsible for a high rate of hospitalization \[[@REF3]-[@REF4]\]. Despite advances in HF treatment over the past 15 years, the prognosis of this dysfunction remains poor \[[@REF5]\]. Thyroid dysfunction is a modifiable risk factor for patients who are at risk of HF \[[@REF6]-[@REF7]\]. Case presentation ================= A 51-year-old male with a past medical history of hypertension and hyperthyroidism presented to the emergency department with symptoms of cough, shortness of breath, palpitation, dysphagia, pedal edema, and subjective fever. He was not taking medication for hyperthyroidism. The exam showed tachycardia, an enlarged thyroid, crackles on lung auscultation, and pedal edema. 
A chest X-ray (CXR) showed pneumonia with right pleural effusion, as shown in Figure [1](#FIG1){ref-type="fig"}. ![Chest X-Ray\ Arrow showing right pleural effusion](cureus-0010-00000002410-i01){#FIG1} Electrocardiography (EKG) showed atrial flutter with a variable atrioventricular block, as shown in Figure [2](#FIG2){ref-type="fig"}. ![EKG showing atrial flutter\ Arrows showing sawtooth pattern\ EKG: electrocardiogram](cureus-0010-00000002410-i02){#FIG2} Other significant abnormal labs were as shown in Table [1](#TAB1){ref-type="table"}.

###### Thyroid function test

| TSH     | Free T4 | Free T3 |
| ------- | ------- | ------- |
| \<0.005 | \>7.5   | 25.1    |

The patient was started on Levaquin for pneumonia. For the atrial flutter, an echocardiogram was ordered and cardiology was consulted. The patient was started on propylthiouracil (PTU) 50 mg three times a day and a beta-blocker. The echocardiogram showed mild biventricular dilatation. The left ventricular systolic function was markedly reduced, with an ejection fraction of 25% to 30% and severe diffuse hypokinesis. The right ventricular systolic function was moderately reduced, with a markedly elevated right ventricular systolic pressure of 59 mmHg. There was also marked biatrial enlargement. Echocardiogram findings are shown in Figures [3](#FIG3){ref-type="fig"}-[5](#FIG5){ref-type="fig"}. ![Echocardiogram: Arrow pointing toward dilated right atrium](cureus-0010-00000002410-i03){#FIG3} ![Echocardiogram: Arrow pointing toward dilated left atrium](cureus-0010-00000002410-i04){#FIG4} ![Echocardiogram: Arrow pointing toward dilated left ventricle](cureus-0010-00000002410-i05){#FIG5} The patient was continued on the beta-blocker and started on anticoagulation. A stress test showed a small, fixed defect in the apex, but no evidence of ischemia. The patient was also diuresed due to volume overload. 
A thyroid ultrasound showed that the right and left lobes were enlarged and hyperemic, measuring 7.1 × 4.0 × 4.0 cm and 6.4 × 3.9 × 4.0 cm, respectively. The thyroid ultrasound findings are shown in Figures [6](#FIG6){ref-type="fig"}-[11](#FIG11){ref-type="fig"}. ![Right thyroid lobe with dimensions](cureus-0010-00000002410-i06){#FIG6} ![Doppler showing vascularity in the right thyroid lobe](cureus-0010-00000002410-i07){#FIG7} ![Left thyroid lobe with dimensions](cureus-0010-00000002410-i08){#FIG8} ![Doppler showing vascularity in the left thyroid lobe](cureus-0010-00000002410-i09){#FIG9} ![Isthmus of thyroid](cureus-0010-00000002410-i10){#FIG10} ![Doppler showing vascularity in the isthmus of thyroid](cureus-0010-00000002410-i11){#FIG11} Findings were consistent with goiter. General surgery recommended thyroidectomy due to the mass effect of the goiter. A thyroidectomy was performed without complications. The pathology showed multinodular goiter, and the patient had an uneventful remainder of his hospital stay. He was discharged on a beta-blocker, Synthroid, Lasix, and rivaroxaban. Discussion ========== This patient presented with severe but reversible systolic LV dysfunction due to hyperthyroidism. He was relatively young, which indicates that the development of overt congestive heart failure (CHF) due to hyperthyroidism is not limited to the elderly population but can occur in younger patients if hyperthyroidism is left untreated for a long period of time. Hyperthyroidism, usually due to Graves' disease, is commonly encountered in clinical practice and can present with a wide variety of signs and symptoms. Typically, it presents with heat intolerance, weight loss, sweating, palpitation, tremors, and hyperdefecation. If left untreated, it can cause heart failure. 
Occasionally, it presents with heart failure in the absence of any classic symptoms of hyperthyroidism, as is the case with the apathetic hyperthyroidism seen in the elderly \[[@REF8]\]. The cardiac effects of thyroid hormone have been known for more than a century. Thyroid hormone exerts its cardiac effects indirectly, through its effect on the vasculature and body metabolism, and directly, through its effect on the heart. Peripherally, tri-iodothyronine (T3) has been shown to decrease systemic vascular resistance (SVR) by promoting vasodilatation \[[@REF9]\]. This action is mediated by the direct effect of T3 on vascular smooth muscle \[[@REF10]\]. The resulting decrease in SVR activates the renin-angiotensin-aldosterone system, leading to retention of sodium (Na+) and fluid. Thyroid hormone also increases erythropoiesis. The net effect is an increase in total blood volume and stroke volume. At the myocyte level, T3 enters the cell via specific transport proteins, resulting in enhanced contractility and relaxation of the myocardial cells through transcription- and non-transcription-mediated effects. The transcriptional effects lead to increased contractility through effects on the release and uptake of sarcoplasmic reticular calcium (Ca++) and the phosphorylation of phospholamban. The nontranscriptional effects are mediated by the effect of thyroid hormone on various ion channels. These cardiac effects, coupled with a generalized increase in tissue metabolism, low SVR, and an increase in total blood volume, lead to a high cardiac output state in hyperthyroidism. Clinically, thyroid hormone can have a wide variety of effects on the heart, ranging from sinus tachycardia and AF to dilated CHF. Clinically significant CHF due to hyperthyroidism is considered a rare occurrence. Initially, in the course of the disease, the patient is in a high cardiac output state, due to the factors mentioned above, limiting only exercise tolerance. 
Later in the course of the disease, if untreated, the patient can develop severe systolic dysfunction with overt signs and symptoms of heart failure. This is more commonly seen in patients with a pre-existing heart disease, such as ischemic, hypertensive, or alcoholic cardiomyopathy, the former being more common in the elderly. Although the exact etiology of CHF in hyperthyroidism is unclear, the concept of "tachycardia-induced cardiomyopathy" secondary to prolonged sinus tachycardia or AF with a rapid ventricular response is more plausible, as LV dysfunction commonly improves with adequate control of the heart rate long before the euthyroid state is restored. The treatment of CHF should be aimed at correcting hyperthyroidism with oral antithyroid medication. The first line of treatment of CHF secondary to hyperthyroidism is a beta-blocker, except in patients with marked hypotension, reversible airway disease, and marked bradycardia, especially with a second- or third-degree atrioventricular block. Beta-blockers not only help ameliorate the noncardiac symptoms of the disease but also decrease the heart rate by controlling sinus tachycardia or decreasing the ventricular response to AF through action on the β1 receptors, in addition to other unproven actions. Conclusions =========== Hyperthyroidism, if untreated, can lead to overt dilated HF and death. Hyperthyroidism treatment should be taken seriously. Definitive treatment is radioactive iodine gland ablation vs. surgery. Initially, the patient should be stabilized with oral antithyroid medication. The treatment of hyperthyroidism has not changed greatly in the past several decades. Future research should be directed toward better understanding the pathogenesis of Graves'/goiter hyperthyroidism to direct therapy at the underlying cause of the hyperthyroidism and to obtain a cure that is safe, conservative, and definitive. 
Current treatment options are limited and include medication that needs to be taken lifelong, which is associated with toxicity; radioactive iodine ablation, which comes with the drawback of long-term replacement therapy; and surgery, which is invasive and has complications. The authors have declared that no competing interests exist. Consent was obtained from all participants in this study.
An LCD is a display apparatus in which polarized liquid crystal, a macromolecular substance, is sealed between two transparent electrodes. Information is displayed on the LCD by applying a desired voltage between the two electrodes to change the orientation of the liquid crystal molecules according to the applied voltage, thereby controlling the light transmittance between the electrodes on a pixel basis. To fabricate an LCD, therefore, a pixel part, which consists of transparent electrodes and the liquid crystal sealed between them, as well as a driver for controlling the voltage to be applied to the pixel, are required. FIG. 1 shows an equivalent circuit of an LCD. The liquid crystal sandwiched between two electrodes is represented as a pixel capacitor 1. In many cases, an auxiliary capacitor 9 is formed on the panel in order to provide sufficient capacitance. The auxiliary capacitor 9 has a constant capacitance. The pixel capacitor 1 and the auxiliary capacitor 9 are connected to a switching transistor 2, which is driven by a gate line 3. The source electrode of the switching transistor is connected to a source line 4. An address is assigned to the gate line 3 and the source line 4, respectively. When the address (Sm, Gn) is specified, the voltage on the source line 4 is applied, through the switching transistor 2 driven by the gate line 3, to the pixel capacitor 1, which consists of two electrodes and the liquid crystal sealed between them, and to the auxiliary capacitor 9 described above. This voltage causes the orientation of the liquid crystal molecules to change, controlling the light transmittance. The electrode opposed to the pixel electrode 5 is commonly called the "counter electrode" 6. In general, the tilt angle of the liquid crystal molecules is roughly proportional to the applied voltage. 
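The addressing scheme just described, in which selecting gate line Gn switches on a row of transistors so that each pixel capacitor samples and then holds the voltage on its source line Sm, can be sketched in a few lines. This is an illustrative simulation only; the class name, method names, and voltage values are hypothetical and not taken from the patent.

```python
# Hypothetical sketch of the active-matrix addressing described above.
# Selecting gate line Gn turns on every switching transistor in row n,
# and each pixel capacitor (plus auxiliary capacitor) charges to the
# voltage on its source line Sm.

class Panel:
    def __init__(self, rows, cols):
        # Each entry models the voltage held on a pixel capacitor 1
        # together with its auxiliary capacitor 9.
        self.held = [[0.0] * cols for _ in range(rows)]

    def write_row(self, n, source_voltages):
        """Drive gate line Gn high: row n samples the source lines Sm."""
        for m, v in enumerate(source_voltages):
            self.held[n][m] = v  # transistor on: capacitor charges to Sm
        # When Gn goes low again, the transistors turn off and the
        # capacitors hold these voltages until the next refresh.

panel = Panel(2, 3)
panel.write_row(0, [1.2, 3.4, 5.0])  # arbitrary gray-scale voltages
assert panel.held[0][1] == 3.4       # addressed pixel sampled its source line
assert panel.held[1][1] == 0.0       # unselected row keeps its held charge
```

Rows are refreshed one at a time in this fashion, which is why the hold capability provided by the pixel and auxiliary capacitors matters.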
In recent years, as display quality has been refined, eight or 16 levels of voltage are applied, instead of just two, and different brightness levels are represented according to the different voltage levels. That is, the voltage applied to the source line is not constant; instead, it varies according to the data to be displayed by a particular pixel. The alignment of the liquid crystal molecules could be achieved by applying a dc voltage to them. However, it is known that the liquid crystal sealed in the cell deteriorates in a very short time, or is burnt, if a dc voltage is applied. To apply a level of voltage to the liquid crystal cell, therefore, an ac voltage is generally used. That is, voltages which have the same absolute value and opposite polarity, and which correspond to a certain gray level, are usually applied alternately in order to display that gray level. Two types of such alternating-voltage driving methods are conventionally used. The first method uses a high-voltage driver. This method applies an alternating voltage to the pixel electrode while holding the voltage applied to the counter electrode at a constant level, as shown in FIG. 2. The potential applied to the cell is high, typically between 10 and 20 V. This method presents a number of problems in terms of manufacturability. For example, it is difficult to develop a driver which achieves both high voltage and high speed. Furthermore, it is not easy to integrate a high-voltage circuit which provides multiple levels of output. The second method, as shown in FIG. 3, applies a relatively low voltage (about 5 V) to the pixel electrode while applying a high alternating voltage to the counter electrode, and combines these voltages applied to the pixel and counter electrodes in order to achieve an alternating-voltage drive effect. 
This method, however, requires that the counter electrode, with its large load, be driven by a high alternating voltage; thus, the power consumption of the LCD panel is very large. Furthermore, this method is not practical because, as the pixel size becomes smaller, it is difficult to include wiring for driving the counter electrode with an alternating voltage, especially in the case where an auxiliary capacitor 9 is included in the cell. As described above, although the counter electrode potential may be maintained at a constant level using a high-voltage driver, it is difficult to achieve high speed with such a driver, and such a driver is costly. If a low-withstand-voltage driver is used, an alternating voltage must be applied to the counter electrode in order to accomplish alternating driving of the cell. The application of this voltage consumes more electric power and increases the complexity of the wiring, and the complex wiring increases the cost. Therefore, it is desirable to overcome these disadvantages.
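The trade-off between the two conventional methods can be made concrete with a small numerical sketch. This is a simplified, hypothetical model (the function names and voltage values are illustrative, not from the patent); what it shows is that both schemes place equal-magnitude, opposite-polarity voltages across the cell on alternate frames, so the dc component seen by the liquid crystal is zero.

```python
# Simplified model of the two conventional ac-driving schemes.
# The voltage across the liquid crystal cell is (pixel - counter).

def high_voltage_drive(gray_v, frames):
    """Method 1: counter electrode fixed at 0 V; the pixel electrode
    swings between +gray_v and -gray_v, so the driver must span 2x gray_v."""
    return [(gray_v if f % 2 == 0 else -gray_v) - 0.0 for f in range(frames)]

def counter_modulation_drive(gray_v, swing, frames):
    """Method 2: pixel voltages stay within a low range [0, swing] while
    the counter electrode alternates between 0 V and swing volts."""
    cell = []
    for f in range(frames):
        if f % 2 == 0:
            pixel, counter = gray_v, 0.0            # positive frame
        else:
            pixel, counter = swing - gray_v, swing  # negative frame
        cell.append(pixel - counter)
    return cell

a = high_voltage_drive(15.0, 4)            # 10-20 V class cell voltage
b = counter_modulation_drive(4.0, 5.0, 4)  # about 5 V pixel driver range

# Both produce equal-magnitude, alternating-polarity cell voltages with
# zero dc component, avoiding liquid crystal degradation or burn-in.
assert a == [15.0, -15.0, 15.0, -15.0]
assert b == [4.0, -4.0, 4.0, -4.0]
assert sum(a) == 0 and sum(b) == 0
```

Note that method 2 keeps the pixel driver within about 5 V only because the heavily loaded counter electrode is swung every frame, which is exactly the power and wiring cost the passage above describes.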
MANHATTAN — It’s that time again: the sun-meets-streets phenomenon of Manhattanhenge returns this weekend for what skygazers hope will be another spectacular showing. Each Manhattanhenge – when the sun aligns with the borough’s grid, lighting both the north and south sides of every street – is made up of two nights. During the first night, half of the sun’s disk sits above the horizon and the other half below. On the second night, the entire disk floats above the horizon. The year’s first henge will take place on Sunday; the sunset is at 8:12 p.m. An encore will happen the next night at the same time. If you miss the holiday weekend showing, there’s another in July — specifically on the 11th and 12th of that month. Skygazers should position themselves as far east in Manhattan as possible. The best viewing will be along 14th, 23rd, 34th, 42nd and 57th streets, and at the Empire State and Chrysler buildings.
--- author: - | Sizhuo Ma Brandon M. Smith Mohit Gupta\ Department of Computer Sciences,\ University of Wisconsin-Madison, USA\ {sizhuoma,bmsmith,mohitg}@cs.wisc.edu title: | Differential Scene Flow from Light Field Gradients:\ Supplementary Technical Report --- In this report we show the full 3D visualization of the recovered motion for the experiments described in the main paper. ![**Recovering non-planar motion.** A rotating spherical ornament. All methods can estimate the gradually changing Z-motion, but only our method recovers the background correctly.[]{data-label="fig:exp_sphere"}](SuppFigures/ExpSphere.pdf){width="\linewidth"} ![**Recovering non-rigid motion.** An expanding hand. The expansion is demonstrated by the different Y-motion of the fingers.](SuppFigures/ExpHand.pdf){width="\linewidth"} ![**Recovering motion in natural environments with occlusions.** The card in the center moves upward. Our method can recover the motion of the card, despite occlusions and lack of texture around the white boundaries.](SuppFigures/ExpCards.pdf){width="\linewidth"} ![**Recovering motion in natural environments with occlusions.** The mug on the left is picked up by a hand. Our method estimates the motion boundaries accurately.](SuppFigures/ExpMug.pdf){width="\linewidth"} ![**Recovering motion in natural environments with occlusions.** The top two vertical branches of the plant quiver in the wind. Our method can correctly compute the motion of the two complex-shaped branches.](SuppFigures/ExpPlant.pdf){width="\linewidth"} ![**Recovering human actions.** Handshaking. All three methods compute the joining movements of the hands correctly, while our method preserves the hand boundary best.](SuppFigures/ExpShakehand.pdf){width="\linewidth"} ![**Recovering human actions.** Waving hand. 
Our method correctly estimates the motion in spite of the reflections and textureless regions in the background, which is challenging for depth estimation algorithms.](SuppFigures/ExpWavehand.pdf){width="\linewidth"} ![**Recovering motion under challenging lighting conditions.** A figurine moves under weak, directional lighting. Our method still preserves the overall shape of the object, although its reflection on the table is also regarded as moving.](SuppFigures/ExpFlash.pdf){width="\linewidth"} ![**Recovering motion under challenging lighting conditions.** A few objects move independently. Due to shadows and lack of texture in the background, boundaries of the objects are not distinguishable in the recovered motion fields of all three methods.[]{data-label="fig:exp_desktop"}](SuppFigures/ExpDesktop.pdf){width="\linewidth"}
UK Denies Accusation of Betrayal Against Kurds ERBIL — A British Foreign Office minister, responding to criticism of his government’s position towards the Kurds, said that the UK did not betray the people of the Kurdistan Region during their independence vote. “We formed a view very early that we didn’t believe it was in the interests of the region or those who advocated it,” he said. “We weren’t alone in relation to this,” said Alistair Burt, Minister of State for the Middle East. He claimed that the position the UK had taken on the issue was “honest” advice to the Kurds and not “a matter of betrayal”. The remarks are in response to Tom Hardie-Forsyth, a former British Cabinet Office and NATO official, who said earlier this week that London had failed to support and protect the Kurds against Iran. He named the Kurds as the most important allies for the UK in the Middle East. Hardie-Forsyth, currently an informal adviser to the KRG, said that Britain had “detailed intelligence warning of the precise links” between Iran and the Shia militias that Tehran supports in Iraq, and that ministers had inadvertently helped to “neutralize” the Iraqi Kurds in efforts to limit the influence of Iran in the region, as reported by Kurdistan 24. Burt denied the statement, saying, “I’m not aware that we have taken such action that we would have neutralized the Kurds. I don’t think of the Iraqi Kurds to be a neutered body.”
[Clinical course as a risk factor for chronification of depressive disorders. Results of a 6-year follow-up]. A 6-year follow-up study of unipolar depressive inpatients shows a high percentage of patients, 26%, with a chronic course. The chronic patients have significantly longer phases of depression in the history of their illness and in the index episode, and a longer hospitalisation in the index episode. Patients with a chronic course also more often have a first episode before the age of 25 years, were less suicidal at admission to hospital, and show less improvement from admission to discharge. The study shows that, although not all aspects of the clinical course are related to chronicity, the duration of depressive phases is a good predictor of a chronic course.
The Bronx A University Heights man has been charged with murdering his roommate with a baseball bat, law-enforcement sources said Sunday. Cleto Chalche-Rivera, 39, beat the victim to death over a financial dispute, according to a police source. Israel Ramos-Lopez, 45, was found with his head split open in the bathroom of the apartment he shared with Chalche-Rivera on Morris Avenue near Commerce Avenue at about 12:30 p.m. April 13, police sources said. His roommate was arrested at around 8:15 p.m. Saturday, and faces weapons charges in addition to murder, according to police. Four people were shot Sunday after a Morris Heights party, cops said. The victims were leaving the house party on University Avenue near West Burnside Avenue at about 1:20 a.m. when a man believed to be in his 20s began to shoot at the crowd, according to police sources. A 25-year-old man was shot in the leg, another man, 55, was struck in the torso, and a woman, 22, sustained a leg wound. A 26-year-old man was grazed in the shoulder. The victims took themselves to Lincoln Hospital. No arrests have been made. Police knocked down the door of a Mott Haven apartment to rescue a woman in a domestic dispute with her boyfriend Sunday, cops said. When officers arrived on the 12th floor of the building on East 137th Street near Willis Avenue at about 7:30 a.m., they heard a woman inside crying and screaming, police sources said. Her boyfriend refused to open the door for the cops, so they smashed their way in and arrested the man, the sources said. The woman and her three children were taken to an area hospital with minor injuries. The boyfriend was also taken to the hospital for evaluation, the sources said. The name of the man has been withheld pending the filing of charges, cops said. Brooklyn A man studying in a Sheepshead Bay park was slashed and robbed Sunday morning by a pedaling pilcher, according to cops. 
The 28-year-old victim was sitting in Yak Playground at Coyle Street and Avenue Y at about 9:30 a.m. when a man on a bike approached him and asked to use his cellphone, according to police. As the victim handed over his phone, cops said, the thug reached into the man’s pocket and grabbed his wallet while proclaiming, “You are beat!” The victim tried grabbing the thief’s bike, but his attacker slashed him in the stomach with a pocket knife, police said. The thug then dropped the man’s wallet and phone and fled with just $10, cops said. The crook was described as being in his 20s, about 6-foot-2 and 160 pounds. Queens An elderly woman was killed when she smashed her car into a pillar in Ozone Park, cops said Sunday. Carmela Ruotolo, 88, crashed her 2000 Hyundai at 7:40 p.m. April 20 at Crossbay Boulevard and Liberty Avenue, according to police. Ruotolo, from Howard Beach, was rushed to Jamaica Hospital, where she died April 24, cops said. Manhattan A man was walloped in the head with a bicycle security chain in the West Village, cops said Sunday. An argument escalated into violence at West 13th Street and Washington Avenue at about 1 p.m. April 22, according to law-enforcement sources. Carl Wu, 20, picked up his bicycle lock-chain with the intent to cause physical injury and hit the victim, also 20, in the face, the sources said. The victim’s lip was split, and he was rushed to a nearby hospital, where he got several stitches, police sources said. Wu was charged with felony assault. A fight broke out in front of a Greenwich Village bar, and one man hurled a glass at a window, injuring people, cops said Sunday. Nathaniel Johnson, 40, allegedly chucked the glass at an open window in front of Author’s Tavern at Grove Street and Seventh Avenue at about 10:30 p.m. on April 23, sources said. The glass hit a bar on the window and shattered, striking two people with shards, the sources said. 
A witness followed Johnson north on Seventh Avenue for some time, and then police nabbed him, the sources said. He was charged with assault and criminal possession of a weapon. Police arrested two women who attacked each other outside an East Harlem party Sunday, cops said. The fight erupted on East 108th Street near Park Avenue at about 3:15 a.m., police said. Catianna McLeod, 30, told cops that Jessica Johnson, 38, bit her and used a Taser on her, police sources said. Johnson said McLeod punched and kicked her, leaving her with swollen eyes and a bloody nose, the sources said. It is not clear what started the dispute. Both women were treated at Metropolitan Hospital and charged with assault, the sources said.
Closing Time: Training Camp Wraps Up Ah, training camp. A time when the heady mix of a long-forgotten feeling merges with the barely-glimpsed ghost of actual on-court play to create its own admixture of hope and anxiety about the future. Take that, Deadspin. But seriously: Practice ran long yesterday, resulting in a clutch of media perched by the front window of the Taylor Center on the lovely (no, seriously, the autumn colors were lovely) campus of Mankato State University and doing things like this: By the time we got into Bresnan Arena, the Wolves still on the court were shooting spot-up 3-pointers in pairs from five positions on the court (right and left corners, right and left wings and top of the key), 20 shots at each spot, for a total of 100 shots. Kevin Martin talked about it while J.J. Barea and Ricky Rubio fed each other in sets of ten, a Wolves’ assistant writing down their makes-to-attempts on a clipboard. “I’ve never been on too many teams that do that,” Martin said. “So that’s the importance of how the 3-point shot has evolved.” Clearly the Wolves are taking 3-point shooting very seriously this year, and my guess is that they just want there to be no chance that they start the year off cold. Although the Wolves were dead last in 3-point shooting last season, they weren’t a team like Memphis that just doesn’t have shooters. They were missing two of their best in Kevin Love and Chase Budinger, and then guys that should have been better from distance just weren’t. It even baffled Martin, who said he couldn’t see how they could have been last in the league. “From what I’ve seen, everybody’s making threes now. We’ll see when the lights come on.” Something that seems to come up a lot is how Rubio, Love and Nikola Pekovic are all going to play together given that they only notched 13 minutes on the court last season, according to nbawowy.com (the fewest minutes for a 3-man lineup recorded on NBA.com for the Wolves is 18). 
But given that they played 457 minutes together in 2011-12 (eighth most of any 3-man lineup that season) and looked like a lock for the eighth seed in the West while doing so, their chemistry shouldn’t overly concern anyone. If anything, incorporating Martin and Corey Brewer into the starting lineup is a more pressing task, and that was a lot of what this training camp was about. “Just trying to read each other’s sweet spots on the court, where not to be when Ricky is doing his dribbling thing out there,” said Martin. “It’s all learning process right now, but I think everything’s coming together good.” Head coach Rick Adelman echoed this: “I think we wanted to find out about players. We’ve got to get our guys playing together, the main guys, find out about our rookies and free agents and see where they fit in. Every day you evaluate that. We still got a long ways to go.” That long way is particularly evident to Adelman on the defensive end. Although he emphasized that everyone was working hard, he said, “The concentration on the defensive end — especially the veterans — has not been good, it’s not the quality we need. But we’ll see how that goes as the weeks move on and we start playing games and see where we are. “We know from the very beginning that we have a lot of offensive players and their concentration is on that end,” he continued. “Their concentration has to be at both ends. You can drill on everything in the world and when they’re drilling they’re okay, but as soon as they get out on the court, their concentration starts slipping. So I’d say most of it is mental. As soon as we start playing other teams, we’re going to find out quick.” It may sound blindingly obvious, but it seems like the major goal for the Wolves out of the gate this year is to get better in two areas: offense and defense. The emphasis on 3-point shooting was clear in their offseason moves and in the shooting drill they were using to end practice. 
Shooting a hundred 3-pointers after you’ve already been practicing for several hours is not easy. But Adelman was equally concerned about not just playing good defense, but establishing a defensive identity, and he explained that was something that had to start with Love and Pekovic. “Those two guys really should be the anchors for us,” he said. “I believe that most of the good teams, especially defensively, their big guys set the tone. They’re the ones talking. They see everything coming at you. They’ve gotta be more vocal and in the right spot early. I think for us to get better our big people have to do that. Dante [Cunningham], he does a good job of that. We have to have those two guys do the same thing.” The problem is that while Pekovic has become a solid positional defender in terms of understanding rotating and defending the pick and roll, neither is going to affect shots. “We don’t have that advantage,” Adelman continued. “We don’t have somebody who bothers shots. G [Gorgui Dieng] can do it, he can bother shots around the basket, he’s long. But he’s also a rookie.” Adelman went on to praise new addition Ronny Turiaf as a great position player, always in the right spot and committed to helping. “That’s the type of thing that we need our big guys to do.” But as training camp closes for the Wolves in Mankato and they prepare to take on CSKA Moscow on Monday night at the Target Center, the only thing that’s clear is that training camp is a labyrinth of lessons and expectations that obscures more than it reveals. We barely got to see any action on the court and even if we had, it’s not clear what it would have told us, balanced as it is between trying new, untested things and establishing solid go-to approaches. Martin broke down what the rest of the preseason will look like. “We’re gonna go out Monday night, work on the basics, what we learned in training camp,” he said. “Then we’ll play four games with the basics. 
Then we’ve got seven days off, so we’ll watch a lot of film and then we’ll start to put in new stuff as we go. It’s just all fill-out process.” In short, the takeaway is more patience. More patience for the coaches as they learn what players are capable of both individually and as a team and as units within that team. More patience for the players as they learn each other’s rhythms and spots. And finally, more patience for fans as we wait for preseason games that are in themselves only shadows of what’s to come. But it’s coming. Steve McPherson 2 responses to Closing Time: Training Camp Wraps Up I’m really excited to see this team play. As a Wolves fan I’m trying hard to fight back the pessimism I inherently feel each year but *if* this team stays healthy (I know that’s a big if) it will be a blast to watch. After last year it feels like we’ve been waiting for two off seasons not just one to see this thing come together. Thanks for the update Steve, I didn’t mean to come off too harsh in my last post. I’m just one hungry fan waiting to see this season get under way. Let’s hope we have a lot more luck on our side this year.
Q: Balance sheet measure DAX (fill blanks) Background I'm trying to build a balance sheet in Power BI based on a transaction file. My report has a transaction table containing classic accounting transactions (account number, amount, description, date etc.), an allocation table which allocates accounts to a balance sheet, P&L or cashflow hierarchy (account, PLlvl1, PLlvl2 etc.) and a calendar table. Constructing a proper running-total measure that sums all previous transactions into a basic balance measure is pretty straightforward; see the code below.

Balance =
CALCULATE(
    SUM ( data[Amount] );
    FILTER(
        ALL( '$Calendar' );
        '$Calendar'[Date] <= MAX( '$Calendar'[Date] )
    )
)

Problem This works fine at low resolutions (year); however, when making a month-on-month overview, the summation only shows a value in periods where there was a mutation, and all other months remain empty. Desired solution In this simplified example, my desired result would be for the blanks to carry over values from the previous period: the -350 also showing in February and March, the -700 in May and June, etc., but I can't seem to figure out a way to do it properly. Attempts So far I've tried creating a huge cross table between the calendar table and the accounts table, but this makes the report grind to a halt pretty fast as soon as I import more data. Furthermore, I tried using LASTNONBLANK(), TOTALYTD() and others in several ways, even trying a more manual approach like:

Attempt 6 =
VAR LastNonBlankDate =
    CALCULATE(
        MAX('$Calendar'[Date]);
        FILTER(
            ALL('$Calendar'[Date]);
            '$Calendar'[Date] <= MAX('$Calendar'[Date])
                && SUM(data[Amount]) <> 0
        )
    )
RETURN
    CALCULATE(
        SUM(data[Amount]);
        FILTER(ALL('$Calendar'); '$Calendar'[Date] = LastNonBlankDate)
    )

Nothing seems to do what I want. Can somebody help me in the right direction? 
A fiddle is temporarily available here A: Just change your data model relations from "both" to "single": Never, ever use bi-directional relations unless you have no other choice (which almost never happens, except in some very rare situations, and this is not one of them). You can also simplify your measure a bit:

Attempt 3 =
VAR Current_Date = MAX( '$Calendar'[Date] )
RETURN
    CALCULATE(
        SUM ( data[Amount] ) ,
        '$Calendar'[Date] <= Current_Date
    )

Result:
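Independent of DAX, the carry-forward behaviour the question asks for (a running balance that repeats in months with no transactions) is easy to state as plain logic. A minimal Python sketch, using made-up transaction data that mirrors the -350/-700 example from the question:

```python
from collections import defaultdict

def monthly_balances(transactions, months):
    """Running balance per month; months with no transactions
    carry the previous balance forward instead of staying blank."""
    per_month = defaultdict(float)
    for month, amount in transactions:
        per_month[month] += amount

    balances = {}
    running = 0.0
    for month in months:           # months must be in chronological order
        running += per_month.get(month, 0.0)
        balances[month] = running  # carried forward even with no mutation
    return balances

tx = [("2023-01", -350), ("2023-04", -350)]
months = ["2023-01", "2023-02", "2023-03",
          "2023-04", "2023-05", "2023-06"]
# -350 carries through February and March; -700 from April onward.
print(monthly_balances(tx, months))
```

This is what the `'$Calendar'[Date] <= Current_Date` filter achieves in the accepted measure: every month is evaluated against all transactions up to and including that month, not just the transactions inside it.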
We Can't Stop Laughing At Harrison Ford 'Forgetting' This Detail About Ryan Gosling Ryan who? The two actors recently appeared on The Graham Norton Show, based in London, to promote their new film, Blade Runner 2049. Ford was in the middle of discussing with the show's host how he suggested Gosling be cast in the film when he suddenly forgot Gosling's first name and looked over to him for a quick reminder of what it is. Without cracking a smile, Gosling told him, prompting immediate laughter from the audience.
With recent news surrounding The DAO and the Brexit vote causing a stir in the blockchain world and beyond, it seems like a regulatory update is due. So we called up our favorite regulatory affairs specialist Siân Jones to enlighten us on some of the recent developments in Bitcoin and blockchain regulation. Topics covered in this episode include: Brexit and its potential impacts on the blockchain and fintech space in the UK; the Bank of England opening its doors to more than a thousand financial institutions and payment service providers; some of the initiatives by the UK government to potentially adopt blockchain technologies; the recent European Parliament plenary sitting on virtual currencies and the Distributed Ledger Technology Task Force; and an update on BitLicense and its impact a year and a half after being adopted in New York.
Q: Integrability of Lie algebroids In the article https://arxiv.org/pdf/math/0611259.pdf, it is defined the integrability of a Lie algebroid as follows: a Lie algebroid $A$ is integrable iff it is isomorphic to the Lie algebroid of a Lie groupoid $\mathcal{G}$. I have two questions concerning this definition: 1) The Lie groupoid $\mathcal{G}$ is required to be over the same base as $A$? 2) What is exactly an isomorphism of Lie algebroids? It is strange, because they define this notion just before defining what a morphism of algebroids is. Is it because the isomorphism is intended to be over the same base, as I'm asking in 1), and then the notion of compatibility with the anchors and the brackets is trivial and, hence, such an isomorphism is just an isomorphism of vector bundles over the same base with these compatibilities? Thanks a lot! A: 1) Yes: the Lie algebroid associated to a Lie groupoid $G\rightrightarrows M$ is a vector bundle over $M$. If we'd like a Lie groupoid to be an integration of a given Lie algebroid, then its associated Lie algebroid should be the one we started with. 2) The definition of what it means to be a (general) morphism of Lie algebroids is not obvious because a map of vector bundles does not, in general, induce a map on sections. If, however, we have a map of vector bundles whose base map is a diffeomorphism, then there is an induced map on sections - so indeed, checking that an isomorphism of vector bundles is an isomorphism of Lie algebroids is easier than the general case. To be a bit more precise, suppose we have two Lie algebroids $A\to M$ and $B\to N$, and $\varphi\,\colon A\to B$ is a map of vector bundles such that the base map $f\,\colon M\to N$ is a diffeomorphism. Then, since $f$ is invertible, there is a pushforward map $\varphi_*\,\colon\Gamma(M,A)\to\Gamma(N,B)$. 
Then it is true that $\varphi$ is a morphism of Lie algebroids if it is compatible with the anchors (which means $\rho_B\circ \varphi = df\circ \rho_A$) and $\varphi_*$ is compatible with the Lie brackets. In particular, $\varphi$ is an isomorphism as long as it's an isomorphism of vector bundles and these two conditions are satisfied.
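For reference, the two compatibility conditions described in the answer can be written out explicitly, in the notation already introduced above:

```latex
\rho_B \circ \varphi = df \circ \rho_A ,
\qquad
\varphi_*[X, Y]_A = [\varphi_* X,\; \varphi_* Y]_B
\quad \text{for all } X, Y \in \Gamma(M, A).
```

The first equation is compatibility with the anchors; the second says the pushforward $\varphi_*$ on sections (which exists because the base map $f$ is a diffeomorphism) is a Lie algebra homomorphism for the brackets.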
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
#  IkaLog
#  ======
#  Copyright (C) 2015 Takeshi HASEGAWA
#
#  Licensed under the Apache License, Version 2.0 (the "License");
#  you may not use this file except in compliance with the License.
#  You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.
#

import cv2
import os
import numpy as np

from ikalog.utils.character_recoginizer import *
from ikalog.utils import *


class NumberRecoginizer(CharacterRecoginizer):

    def __new__(cls, *args, **kwargs):
        # Singleton: create the instance once, then always return it.
        if not hasattr(cls, '__instance__'):
            cls.__instance__ = super(
                NumberRecoginizer, cls).__new__(cls, *args, **kwargs)
        return cls.__instance__

    def __init__(self):
        if hasattr(self, 'trained') and self.trained:
            return

        super(NumberRecoginizer, self).__init__()

        model_name = 'data/number.model'
        if os.path.isfile(model_name):
            self.load_model_from_file(model_name)
            self.train()
            return

        IkaUtils.dprint('Building number recoginization model.')

        # try to rebuild model
        data = [
            {'file': 'numbers2/num0_1.png', 'response': 0, },
            {'file': 'numbers2/num0_2.png', 'response': 0, },
            {'file': 'numbers2/num0_3.png', 'response': 0, },
            {'file': 'numbers2/num1_1.png', 'response': 1, },
            {'file': 'numbers2/num1_2.png', 'response': 1, },
            {'file': 'numbers2/num1_3.png', 'response': 1, },
            {'file': 'numbers2/num2_1.png', 'response': 2, },
            {'file': 'numbers2/num2_2.png', 'response': 2, },
            {'file': 'numbers2/num2_3.png', 'response': 2, },
            {'file': 'numbers2/num3_1.png', 'response': 3, },
            {'file': 'numbers2/num3_2.png', 'response': 3, },
            {'file': 'numbers2/num3_3.png', 'response': 3, },
            {'file': 'numbers2/num4_1.png', 'response': 4, },
            {'file': 'numbers2/num4_2.png', 'response': 4, },
            {'file': 'numbers2/num4_3.png', 'response': 4, },
            {'file': 'numbers2/num5_1.png', 'response': 5, },
            {'file': 'numbers2/num5_2.png', 'response': 5, },
            {'file': 'numbers2/num5_3.png', 'response': 5, },
            {'file': 'numbers2/num6_1.png', 'response': 6, },
            {'file': 'numbers2/num6_2.png', 'response': 6, },
            {'file': 'numbers2/num6_3.png', 'response': 6, },
            {'file': 'numbers2/num7_1.png', 'response': 7, },
            {'file': 'numbers2/num7_2.png', 'response': 7, },
            {'file': 'numbers2/num7_3.png', 'response': 7, },
            {'file': 'numbers2/num8_1.png', 'response': 8, },
            {'file': 'numbers2/num8_2.png', 'response': 8, },
            {'file': 'numbers2/num8_3.png', 'response': 8, },
            {'file': 'numbers2/num9_1.png', 'response': 9, },
            {'file': 'numbers2/num9_2.png', 'response': 9, },
            {'file': 'numbers2/num9_3.png', 'response': 9, },

            # bigger
            {'file': 'numbers2/num0_4.png', 'response': 0, },
            {'file': 'numbers2/num0_4.png', 'response': 0, },
            {'file': 'numbers2/num0_4.png', 'response': 0, },
            # {'file': 'numbers2/num1_4.png', 'response': 1, },
            # {'file': 'numbers2/num1_5.png', 'response': 1, },
            # {'file': 'numbers2/num1_6.png', 'response': 1, },
            {'file': 'numbers2/num2_4.png', 'response': 2, },
            {'file': 'numbers2/num2_5.png', 'response': 2, },
            {'file': 'numbers2/num2_6.png', 'response': 2, },
            {'file': 'numbers2/num3_4.png', 'response': 3, },
            {'file': 'numbers2/num3_5.png', 'response': 3, },
            {'file': 'numbers2/num3_6.png', 'response': 3, },
            {'file': 'numbers2/num4_4.png', 'response': 4, },
            {'file': 'numbers2/num4_5.png', 'response': 4, },
            {'file': 'numbers2/num4_5.png', 'response': 4, },
            {'file': 'numbers2/num5_4.png', 'response': 5, },
            {'file': 'numbers2/num5_4.png', 'response': 5, },
            {'file': 'numbers2/num5_4.png', 'response': 5, },
            {'file': 'numbers2/num6_4.png', 'response': 6, },
            {'file': 'numbers2/num6_5.png', 'response': 6, },
            {'file': 'numbers2/num6_6.png', 'response': 6, },
            {'file': 'numbers2/num7_4.png', 'response': 7, },
            {'file': 'numbers2/num7_5.png', 'response': 7, },
            {'file': 'numbers2/num7_5.png', 'response': 7, },
            {'file': 'numbers2/num8_4.png', 'response': 8, },
            {'file': 'numbers2/num8_5.png', 'response': 8, },
            {'file': 'numbers2/num8_6.png', 'response': 8, },
            {'file': 'numbers2/num9_4.png', 'response': 9, },
            {'file': 'numbers2/num9_5.png', 'response': 9, },
            {'file': 'numbers2/num9_6.png', 'response': 9, },

            {'file': 'numbers2/slash_1.png', 'response': '/', },
            {'file': 'numbers2/slash_2.png', 'response': '/', },
            {'file': 'numbers2/slash_3.png', 'response': '/', },
            {'file': 'numbers2/dot_1.png', 'response': '.', },
            {'file': 'numbers2/dot_2.png', 'response': '.', },
            {'file': 'numbers2/dot_3.png', 'response': '.', },
        ]

        for d in data:
            d['img'] = cv2.imread(d['file'])
            self.add_sample(d['response'], d['img'])

        self.save_model_to_file(model_name)
        self.train()
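The `__new__`-based singleton pattern that `NumberRecoginizer` relies on can be sketched in isolation. The class name here is hypothetical, and the `initialized` guard plays the same role as the `trained` check in `NumberRecoginizer.__init__` (since `__init__` still runs on every call, even when the cached instance is returned):

```python
class Singleton:
    def __new__(cls, *args, **kwargs):
        # Create the instance only once; later calls reuse it.
        if not hasattr(cls, '__instance__'):
            cls.__instance__ = super(Singleton, cls).__new__(cls)
        return cls.__instance__

    def __init__(self):
        # Guard against re-running expensive setup on every call.
        if getattr(self, 'initialized', False):
            return
        self.initialized = True

a = Singleton()
b = Singleton()
print(a is b)  # True: both names refer to the same instance
```

One detail worth noting: this sketch passes no extra arguments to `super().__new__`, since `object.__new__` rejects them in recent Python versions.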
Q: How do I copy an array into a dynamic array? (C) There is an array char str[] = "abcdef"; and a dynamic array: char *dstr = (char*)malloc(sizeof(char) * N); How do I copy str[] into dstr[]? P.S. Apologies in advance if this is a silly question.

A: There are 3 ways: copying the memory region directly, using a dedicated string-copy function, or iterating over the array character by character and writing into dstr.

memcpy(dstr, str, strlen(str) + 1); // Copying the memory region
strcpy(dstr, str); // Dedicated function for strings
// Iterating over the array with a loop.
int strsize = strlen(str) + 1;
for(int i = 0; i < strsize; i++) {
    dstr[i] = str[i];
}

Keep in mind that in every case you must be sure that the size of dstr is at least one greater than the length of the string str.

A: In this case (making a copy of a string that contains no embedded null bytes) the simplest approach (and, if the size of the source string is not known in advance, no less efficient than malloc followed by strlen and then strcpy or memcpy) is to call strdup. For example:

#include <string.h>
...
char str[] = "abcdef";
char *dstr = strdup(str);

Of course, the returned pointer should be checked against NULL (just as when using malloc).
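The malloc-plus-copy route from the first answer can be wrapped into a self-contained sketch (the function name `copy_string` is an illustration, not part of the question; it behaves like `strdup`):

```c
#include <stdlib.h>
#include <string.h>

/* Copy src into freshly allocated memory.
 * Written out with malloc + strcpy so the required size is explicit:
 * strlen(src) + 1, where the +1 is for the terminating '\0'.
 * Returns NULL on allocation failure; the caller must free() the result. */
char *copy_string(const char *src)
{
    char *dst = malloc(strlen(src) + 1);  /* +1 for the '\0' */
    if (dst == NULL)
        return NULL;
    strcpy(dst, src);  /* safe: dst is exactly large enough */
    return dst;
}
```

Forgetting the `+ 1` is the classic bug with this pattern: the copy then writes its terminator one byte past the end of the allocation.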
Assistant Commissioner Brett Guerin is overhauling the police complaints process. Whichello slammed Jack into the wall, grabbed him around the back of the neck, and asked him if he remembered what it felt like to be sprayed. He told Jack that if he got another call to the home he would "empty the whole can into him and shove the can up his arse". Then he punched him in the ribs, and left. Jack started to cry. When the officers were back in the police van, Whichello asked his partner – who had only been a constable for three months – if they "minded the odd kidney punch?" It was, according to Assistant Commissioner Brett Guerin, a particularly troubling case. Senior Constable John Whichello was later dismissed. Mr Guerin is overhauling the police complaints process. "That's exactly the type of behaviour we don't need: a senior constable, training a constable, punching a kid in handcuffs, at a DHS house, with staff there," he said. The Victoria Police complaints handling process has been rounded on for more than a decade: first by the Office of Police Integrity; then by Jack Rush, QC, in his inquiry on the ructions between former chief Simon Overland and his deputy Sir Ken Jones; and most recently by the Victorian Equal Opportunity and Human Rights Commission, in a report on predatory sexual behaviour in the force. Finally, progress is being made, Mr Guerin says. The process has been simplified and expedited, with complaints triaged into three categories: those which are not criminal and do not warrant dismissal, those which are not criminal but may warrant dismissal, and those which may be criminal. When Mr Guerin started about two years ago as head of the internal investigations unit, known as professional standards command, one of his first tasks was approving the criminal brief to charge Leading Senior Constable Timothy Baker with murdering a man during a routine traffic stop in 2013. The now former cop was found not guilty on Friday. 
But less serious investigations required just as much attention, to ensure they were not taking months or even years to finalise. And the force is starting to respond to the new broom. "In a sense we're leading and the rest of the organisation is catching up," he said. "The development of our people is probably lagging on what we're asking of them. We can be a bit like an aircraft carrier." Mr Guerin said the new complaints process could result in fewer police being formally disciplined. The vast majority of complaints, he says, relate to what he calls service delivery issues; police didn't show up, an officer was rude, police didn't take a report, police put handcuffs on too tight, police pushed me when they didn't need to. He is encouraging these complaints to be dealt with in seven days. "We're moving towards a less punitive discipline system but the intuitive push-back from that is that 'You're going soft on cops'. "But the sky hasn't fallen, complaints are not going up, in some areas they're going down, and community confidence is still [high]." While the number of complaints remains static, or have fallen, 32 police were charged with criminal offences last year. Serious internal investigations are ongoing, including into the Inflation nightclub shooting. Further changes could also be made to how police are investigated, with a parliamentary committee inquiring into the external oversight of police corruption and misconduct. Forty-three submissions have been made to the joint committee into the Independent Broad-based Anti-corruption Commission, including several that back the establishment of a new body to investigate police complaints. That suggestion has been dismissed by the Police Association, which also called for IBAC to no longer have powers to publicly interview officers as part of its inquiries. * Name has been changed
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/fortify_base.R
\name{fortify.matrix}
\alias{fortify.matrix}
\title{Convert \code{base::matrix} to \code{data.frame}}
\usage{
\method{fortify}{matrix}(model, data = NULL, compat = FALSE, ...)
}
\arguments{
\item{model}{\code{base::matrix} instance}

\item{data}{original dataset, if needed}

\item{compat}{Logical flag to specify the behaviour when converting a matrix
which has no column names. If \code{FALSE}, the result has character columns
like c('1', '2', ...). If \code{TRUE}, the result has character columns like
c('V1', 'V2', ...).}

\item{...}{other arguments passed to methods}
}
\value{
data.frame
}
\description{
Different from \code{as.data.frame}
}
\examples{
fortify(matrix(1:6, nrow=2, ncol=3))
}
Bitshares 0 Blocks: The Future is Now: The Affluence Network Bitshares 0 Blocks: Your Dreams. Your Future. The Affluence Network. Thank you so much for visiting us in search of “Bitshares 0 Blocks” online. In the case of a fully functioning cryptocurrency, it could actually be exchanged as a product. Promoters of cryptocurrencies say that this form of personal money isn’t controlled by a central banking system and is thus not subject to the whims of its inflation. Since there is always a limited number of coins, the currency’s value is founded on market forces, letting owners trade over cryptocurrency exchanges. Cryptocurrencies such as Bitcoin, LiteCoin, Ether, The Affluence Network, and many others have been designed as non-fiat currencies. Put simply, their backers assert that there is “real” value, even though there is no physical representation of that value. The value grows through computing power: the only way to create new coins is to allocate CPU power via computer programs called miners. Miners create a block after a period of time; each block is worth an ever-decreasing reward, so as to ensure the currency’s scarcity. Each coin consists of many smaller units. For Bitcoin, each unit is called a satoshi. The operations that take place during mining serve to authenticate other transactions, so that the currency both creates and authenticates itself: a simple and elegant solution, and one of the appealing aspects of the coin. The individual who has mined a coin holds its address and can transfer its value to another address, using a “wallet” file saved on a computer. The blockchain is where the public record of transactions resides. The fact that there is little evidence of any increase in the use of virtual money as a currency may be the reason there are minimal efforts to regulate it. 
The reason for this could be simply that the market is too small for cryptocurrencies to justify any regulatory effort. It is also possible that the regulators just don't comprehend the technology and its consequences, and are waiting for developments before acting. Here is the coolest thing about cryptocurrencies: they do not physically exist anywhere, not even on a hard drive. When you look at a particular address for a wallet containing a cryptocurrency, there is no digital information held in it in the way a bank holds dollars in a bank account. It is nothing more than a representation of worth, with no tangible form of that worth. Cryptocurrency wallets cannot be confiscated, frozen, or audited by the banks or the law. They have no spending limits or withdrawal restrictions imposed on them. No one but the owner of the crypto wallet can decide how their riches will be managed.

Mining cryptocurrencies is how new coins are put into circulation. Because there is no government control and crypto coins are digital, they cannot be printed or minted to create more; the mining process is what creates more of the coin. It may be useful to think of mining as joining a lottery group: the pros and cons are the same. Mining crypto coins alone means you really get to keep the full reward of your efforts, but it reduces your odds of being successful. Joining a pool instead means that, overall, members have a much higher chance of solving a block, but the reward is divided between all members of the pool, based on the number of "shares" won. If you are considering going it alone, it is worth noting that the software configuration for solo mining can be more complicated than with a pool, and beginners would probably be better off taking the latter path. The pool option also creates a steady stream of revenue, even if each payment is small compared to the full block reward.
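The pool payout model described above comes down to simple proportional arithmetic: each member receives a slice of the block reward matching the fraction of "shares" they contributed. A minimal sketch (the miner names, share counts, and reward value are all hypothetical):

```python
# Hypothetical illustration of proportional mining-pool payouts:
# each miner receives block_reward * (own_shares / total_shares).
def pool_payouts(block_reward, shares):
    """Split a block reward among pool members, proportional to shares won."""
    total = sum(shares.values())
    return {miner: block_reward * n / total for miner, n in shares.items()}

shares = {"alice": 600, "bob": 300, "carol": 100}  # shares won this round
payouts = pool_payouts(6.25, shares)               # example reward value
# alice gets 3.75, bob 1.875, carol 0.625; the slices sum to the full reward
```

This also illustrates the trade-off the text describes: a solo miner keeps the whole 6.25, but only on the rare occasion they solve a block themselves.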
The beauty of cryptocurrencies is that fraud is all but impossible, due to the nature of the protocol in which they are transacted. All transactions on a crypto currency blockchain are permanent: once you are paid, you stay paid. This is not something temporary where your customers can dispute the charge, demand a refund, or employ illegal sleight of hand. In practice, many merchants would be wise to use a payment processor; because crypto currency transactions are irreversible, you must make sure your security is tight. With any type of crypto currency, whether bitcoin, ether, litecoin, or the numerous other altcoins, thieves and hackers could gain access to your private keys and thus steal your money, and unfortunately you will almost certainly never get it back. It is very important to adopt good security practices when working with any cryptocurrency; doing so will guard you against these negative events.

The physical Internet backbone that carries data between the different nodes of the network is now the work of several companies called Internet service providers (ISPs), including companies that offer long-distance pipelines, sometimes at the international level, and regional local conduits, which ultimately connect households and businesses. Physical connection to the Internet can only happen through one of these ISPs, players like Level 3, Cogent, and AT&T. Each ISP manages its own network. Internet exchange points (IXPs), owned by private companies and sometimes by governments, allow each of these networks to interconnect and to pass messages across the network. Many ISPs have agreements with providers of physical Internet backbone services to offer Internet service over their networks for the "last mile": the consumers and businesses who want Internet connectivity.
Internet protocols, followed by everyone in the network, make it possible for information to flow without interruption, to the right place at the right time. While none of these organizations "owns" the Internet, together these companies decide how it operates and establish the rules and standards that everyone abides by. Contracts and a legal framework underlie everything that happens, determining how things work and what happens if something goes wrong. To get a domain name, for instance, one needs permission from a registrar, which has a contract with ICANN. To connect to the Internet, your ISP must have physical contracts with providers of Internet backbone services, and those providers have contracts with IXPs in order to attach to one another. Concern over a security issue? A working group is formed to work on the problem, and the solution developed and deployed is in the interest of all parties. If your Internet is down, you have someone to call to get it repaired; if the problem is with your ISP, they in turn have contracts and service level agreements in place that govern how these issues are resolved.

The benefit of cryptocurrency is that it uses blockchain technology. The network of nodes that make up the blockchain is not governed by any central company; no one can tell the miners to upgrade, speed up, slow down, stop, or do anything else. That is something dedicated advocates wear as a badge of honor, and it is identical to the way the Internet operates. But as you now understand, the public Internet's governance, norms, and rules present inherent problems to the consumer; blockchain technology has none of that. For most users of cryptocurrencies it is not essential to understand how the mining process works in and of itself, but it is fundamentally important to understand that there is a process of mining that creates virtual currency.
Unlike currencies as we know them today, where governments and banks can simply choose to print unlimited quantities (I am not saying they are doing so; it is just one point), cryptocurrencies are created by users running a mining application, which solves sophisticated algorithms to release blocks of coins into circulation. Ethereum is an incredible cryptocurrency platform; nevertheless, if growth is too fast, there may be difficulties. If the platform is adopted quickly, Ethereum requests could grow dramatically, at a rate that exceeds the rate at which the miners can create new coins. Under such a scenario, the whole Ethereum platform could become destabilized by the rising costs of running distributed applications. In turn, this could dampen interest in the Ethereum platform and in ether. Instability of demand for ether could lead to an adverse change in the economic parameters of an Ethereum-based business, which might leave the business unable to continue operating, or force it to cease operations.

Many people prefer a deflationary currency, especially those who want to save. Despite the criticism and skepticism, a cryptocurrency coin may be better suited to some applications than others. Financial privacy, for example, is excellent for political activists, but more debatable when it comes to political campaign financing. We need a stable cryptocurrency for use in commerce; if you are living pay check to pay check, it would make up part of your wealth, with the remainder earmarked for other currencies. You have probably heard the objection many times if you frequently spread the good word about crypto: "Isn't it volatile? What happens when the value crashes?" So far, many POS programs offer free conversion to fiat, easing the problem somewhat, but until the volatility of cryptocurrencies is resolved, many people will be reluctant to hold any. We have to find a way to fight the volatility that is inherent in cryptocurrencies.
When searching the web for Bitshares 0 Blocks, there are many things to consider. Bitshares 0 Blocks: No Credit No Problem: TAN. Click here to visit our home page and learn more about Bitshares 0 Blocks. Cryptocurrency is freeing individuals to transact money and do business on their own terms. Each user can send and receive payments in a similar way, but they can also take part in more elaborate smart contracts. Blockchain technology makes multiple signatures possible: a transaction is accepted by the network only when a certain number of a defined group of people agree to sign the deal. This allows innovative dispute arbitration services to be developed in the future; such services could let a third party approve or reject a transaction in the event of a disagreement between the other parties, without ever controlling their money. Unlike cash and other payment methods, the blockchain always leaves public evidence that a transaction occurred, which could potentially be used in an appeal against companies with deceptive practices.

Since money lending is one of the earliest ways of earning income, it is no surprise that you can do this with cryptocurrency too. Most of the earning sites currently focus on Bitcoin; on many of these sites you are required to fill in a captcha after a certain period of time and are rewarded with a small amount of coins for visiting them. You can visit the www.cryptofunds.co web site to find lists of such sites for the coin of your choice. Unlike forex, stocks, options, and so on, altcoin markets have very different dynamics. New ones are always popping up, which means they do not have much market data or historical depth for you to backtest against. Most altcoins also have quite poor liquidity, and it is hard to come up with a reasonable investment strategy. Bitcoin is the primary cryptocurrency of the internet: the digital money standard against which all other coins are compared.
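The multi-signature arrangement described above boils down to an m-of-n rule: a transaction is valid only if at least m of the n designated parties sign it. A minimal sketch of that approval logic (the party names and threshold are invented for illustration; real multisig schemes verify cryptographic signatures, not name strings):

```python
# Sketch of the m-of-n approval rule behind multisig transactions.
# Real implementations verify cryptographic signatures; names stand in here.
def is_approved(signers, authorized, m):
    """True if at least m distinct authorized parties have signed."""
    valid = set(signers) & set(authorized)  # ignore unauthorized signers
    return len(valid) >= m

authorized = {"buyer", "seller", "arbiter"}      # hypothetical 2-of-3 setup
print(is_approved({"buyer", "arbiter"}, authorized, 2))  # enough signatures
print(is_approved({"buyer"}, authorized, 2))             # one is not enough
```

This is also how the arbitration service mentioned above works without holding the money: the arbiter is simply one of the n keys, so its signature can complete a 2-of-3 quorum but can never move funds alone.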
Cryptocurrencies are distributed, international, and decentralized. Unlike conventional fiat currencies, there are no governments, banks, or regulatory agencies behind them; as such, they are more immune to runaway inflation and corrupt banks. The advantages of using cryptocurrencies as your method of transacting money online outweigh the security and privacy hazards. Security and privacy can readily be attained simply by being smart and following some basic guidelines. You would not put your whole bank ledger online for the world to see, but by its nature your cryptocurrency ledger is public. This can be addressed by removing any identification of ownership from the wallets, thereby keeping you anonymous. The mining activity validates and records transactions across the whole network, so if you are attempting to do something illegal, it is not a good idea: everything is recorded in the public ledger for the rest of the world to see, forever. If you are looking for Bitshares 0 Blocks, look no further than TAN. Bitshares 0 Blocks – The Affluence Network: One Global Coin!

It should not be difficult to get small gains (~10%) throughout the day. Learn how to read Candlestick charts! I have found these two rules to be true: first, taking small gains is more profitable than trying to hold out for the peak, and since most day traders follow Candlestick charts, it is better to study them than to wait for order confirmation when you think the price is going down. Second, there is more unpredictability, and more reward, in currencies that have not yet reached the profitability listings of sites like Coinwarz, such as Ethereum. The Ethereum platform allows the creation of a contract without having to go through a third party; the third parties avoided can include a bank or a credit card business. You can run a search on the web. First learn the models and indicators, and most importantly practice looking at old charts and picking out trends.
Then learn to keep a trading diary, with screenshots and your comments/forecasts; that, IMHO, is the best way to get confident with charts. Oh, and do not fool yourself into thinking that an uptrend you buy into will never drop. It always goes down eventually! You will discover that incremental profits are more reliable and profitable (most of the time).
1953–54 Chicago Black Hawks season

The 1953–54 Chicago Black Hawks season was the team's 28th season in the NHL, and they were coming off of a successful season in 1952–53, in which the team set team records in wins (27) and points (69), while earning their first playoff berth since the 1945–46 season. The Hawks lost to the Montreal Canadiens in seven games in the NHL semi-finals. The Black Hawks were looking to build on their newfound success, however, the club would open the season with a record of 0–7–1 to quickly fall into last place in the NHL standings. Wins would be few and far between for the club, as they won consecutive games only thrice throughout the season, and finished the year dead last in the league with a 12–51–7 record, earning 31 points. The 12 wins was Chicago's fewest since the 1938–39 season, while the 31 points was their lowest total since the 1928–29 season. Offensively, Chicago was led by Larry Wilson, who had a team high 33 assists and 42 points, while Pete Conacher scored a club best 19 goals. Defenceman and team captain Bill Gadsby had a career season, scoring 12 goals and 41 points, while getting 108 penalty minutes. Fellow defenceman Gus Mortson led the team with 132 penalty minutes. In goal, Al Rollins played in 66 games, winning 12 games, while posting a 3.23 GAA and 7 shutouts. He was awarded the Hart Trophy for his efforts.
Welcome to ROMs Universe, a brand new website that equips you with all the tools you need to indulge in the video game classics of yesteryear. That's right: at our website, users will find the most played retro games along with a long list of top rated emulators and ROMs that will make them accessible on whichever gaming device you have. But our emulators and ROMs aren't just for playing games from many different consoles. There are plenty of secondary benefits for you to enjoy, such as enhanced resolution and mini-sized downloads that are fast and easy. What we are offering, then, is every hard-core gamer's dream: the ability to play classic games from the last decades on any modern gaming console, and with enhanced quality. Our site contains 109 ROM systems and 63 emulators. With that kind of variety, you can bet there are going to be at least a few emulators and ROMs that will work with the hardware you are currently using. There is something for everyone here. So fret no more: those retro games you have been dying to play for ages are no longer off limits. Let's game!
‘Mental’ musings

“Mental,” a procedural set in a psychiatric ward premiering tonight on Fox, gives us exactly what you’d expect. Instead of House solving the disease of the week or a cop solving the crime of the week, we now have a psychiatrist solving the mental illness of the week. Of course, that’s slightly unfair. He makes the turning point of the week with his patients, because you can’t solve mental illness in a week. (In reality, I doubt there are many turning points made in the space of a week, either.) Still, if you put the ludicrous nature of the premise aside, the show is enjoyable enough. Star Chris Vance (Whistler on “Prison Break”) is charming enough (and his resemblance to a young Sting is eerie) as Dr. Jack Gallagher, who brings his maverick ways to the clinic, upsetting the status quo. Annabella Sciorra is his boss, who may have a thing for him. Oh, and Jack has some deep, dark personal secret, because you can’t be a show called “Mental” without ripping something off from “The Mentalist.” I’m thinking Fox doesn’t have much faith in this series, based on the post-season premiere.
Educational Assistant Program (Educational Support Diploma) Update your skills and knowledge by earning an Educational Assistant (Educational Support) diploma. Designed for those currently working in the field, this part-time program takes 2-3 years to complete.
WASHINGTON (Reuters) - Democratic Senator Ron Wyden thinks Republican leadership is moving away from trying to implement full-scale tax reform this year and is instead turning its attention to simply cutting rates. “If you look at the language of Mitch McConnell in the last couple of days, he’s talking about tax cuts, he’s been saying, ‘Well, I don’t know about tax reform, let’s have a tax cut,’” Wyden, the top Democrat on the Senate Finance Committee, said in an interview with Reuters on Friday. “To me that would really be, again, contrary to what the president campaigned on,” he said. McConnell is the Senate majority leader. Sweeping tax reform had been high on the Republican legislative agenda. But so far, the House of Representatives and the Senate, both controlled by Republicans, have been unable to find consensus on a tax package that could pass both chambers and be signed into law by President Donald Trump. Wyden argued there is robust bipartisan agreement that a tax overhaul is needed. “There are plenty of Republicans in Congress - because they talk to me - who would really like to do major tax reform,” he said. Last month, the White House weighed in on the tax discussion, offering only a one-page plan that included deep cuts in rates, many for businesses, but stopped well short of legislative language or providing detail on specific changes. “The tax reform proposal is shorter than drug store receipts I have,” Wyden remarked. Several lawmakers have warned that if tax reform is not completed by the end of 2017, it will become more difficult to pass in 2018, when Congressional midterm elections will be held. The House has begun holding hearings and has developed a “blueprint” - including a controversial border adjustment tax proposal that would tax imports while providing credit for exports.
Wyden, in an echo of other senators, including Republicans, said that a border adjustment tax would be unlikely to fare well in the Senate - going so far as to call it a “grocery tax” that would unfairly hit consumers in the middle class.
Q: Need a regular expression to create friendly URLs

A long time ago I asked almost the same thing, but now I need something more difficult. I need the same regex code to work for all the requests (if possible). So let's say I have the following:

$friendly = '';                                        should output /
$friendly = '/////';                                   should output /
$friendly = '///text//';                               should output /text/
$friendly = '/text/?var=text';                         should output /text/
$friendly = '/text/?var=text/';                        should output /text/var-text/
$friendly = '/?text!@#$%^&*(()_+|/#anchor';            should output /text/
$friendly = '/!@#$%^&*(()_+|text/!@#$%^&*(()_+|text/'; should output /text/text/

Hope that makes sense!

A: Seems like a combination of preg_replace(), parse_url() and rtrim() will help here.

$values = array(
    ''                                         => '/'
  , '/////'                                    => '/'
  , '///text//'                                => '/text/'
  , '/text/?var=text'                          => '/text/'
  , '/text/?var=text/'                         => '/text/var-text/'
  , '/?text!@#$%^&*(()_+|/#anchor'             => '/text/'
  , '/!@#$%^&*(()_+|text/!@#$%^&*(()_+|text/'  => '/text/text/'
);

foreach( $values as $raw => $expected ) {
    /* Remove '#'s that don't appear to be document fragments and anything
     * else that's not a letter or one of '?' or '='. */
    $url = preg_replace(array('|(?<!/)#|', '|[^?=#a-z/]+|i'), '', $raw);

    /* Pull out the path and query strings from the resulting value. */
    $path  = parse_url($url, PHP_URL_PATH);
    $query = parse_url($url, PHP_URL_QUERY);

    /* Ensure the path ends with '/'. */
    $friendly = rtrim($path, '/') . '/';

    /* If the query string ends with '/', append it to the path. */
    if( substr($query, -1) == '/' ) {
        /* Replace '=' with '-'. */
        $friendly .= str_replace('=', '-', $query);
    }

    /* Clean up repeated slashes. */
    $friendly = preg_replace('|/{2,}|', '/', $friendly);

    /* Check our work. */
    printf(
        'Raw: %-42s - Friendly: %-18s (Expected: %-18s) - %-4s'
      , "'$raw'"
      , "'$friendly'"
      , "'$expected'"
      , ($friendly == $expected) ? 'OK' : 'FAIL'
    );
    echo PHP_EOL;
}

The above code outputs:

Raw: ''                                         - Friendly: '/'                (Expected: '/'               ) - OK
Raw: '/////'                                    - Friendly: '/'                (Expected: '/'               ) - OK
Raw: '///text//'                                - Friendly: '/text/'           (Expected: '/text/'          ) - OK
Raw: '/text/?var=text'                          - Friendly: '/text/'           (Expected: '/text/'          ) - OK
Raw: '/text/?var=text/'                         - Friendly: '/text/var-text/'  (Expected: '/text/var-text/' ) - OK
Raw: '/?text!@#$%^&*(()_+|/#anchor'             - Friendly: '/text/'           (Expected: '/text/'          ) - OK
Raw: '/!@#$%^&*(()_+|text/!@#$%^&*(()_+|text/'  - Friendly: '/text/text/'      (Expected: '/text/text/'     ) - OK

Note that this code does pass based on the examples you provided, but it might not properly capture the intent of what you are trying to accomplish. I've commented the code to explain what it does so that you can adjust it where necessary.

For your reference: parse_url(), preg_replace(), printf(), rtrim(), str_replace(), substr()
Optical networks are increasingly relied upon for communications and data transfer activities. However, while many data transfer activities involve communications across large geographical distances, the spatial expanse of “proprietary networks,” or those networks controlled by individual network providers, is often somewhat more limited. As a result, some network providers have sought to implement a system in which each network provider shares access to its own proprietary network, or “domain,” with other network providers. In that case, optical signals would be passed from one domain to another, thereby expanding the spatial communications capabilities of all users. By employing such a system, network providers hope to enable national and international communications services in line with customer demands. One of the significant obstacles to the above system of network sharing is ensuring network interoperability, or the ability of one domain to effectively receive, process, and/or propagate optical signals from another domain. Specifically, in many cases, network providers are related to the communications service providers, and the various domains are configured to be consistent with specific communications methods and protocols. Components included in the network forming each domain, while well-suited for handling intra-domain optical signals, are often ill-suited to interacting with inter-domain optical signals, due to an inability to recognize degradation of the signals. Until recently, solutions to this issue focused on mainly software-implemented strategies to allow optical signals to be recognized by different domains. However, software solutions have failed to completely solve the problem, due to the requirement to reveal proprietary network information in order to create the software. As such, there is a need for an optical communications system in which domain interoperability is enhanced.
On Monday, March 11, the court in Amsterdam began looking into the appeal submitted in March 2017 by representatives of four museums in Crimea claiming their rights to a collection of Scythian gold artifacts that was lent to a Dutch museum in February 2014, one month before Russia annexed the Black Sea peninsula from Ukraine. RFE/RL reported that the claimants in the case – the “Crimean museums” – are not participating in the court hearing.

[Photo caption: Artefacts on display during the exhibition 'Crimea: Gold and Secrets of the Black Sea' at the Allard Pierson Museum in Amsterdam, August 21, 2014]

On March 11, the first day of the hearing in the Dutch capital, the “Crimean museums” published a statement on the website of the Eastern-Crimean Historical and Cultural Museum-Preserve, an organization that is part of the Russian Ministry of Culture. It said that the “unique collection” forming the “Crimea – the Golden Island in the Black Sea” exhibit is “important not only for the museums but for all of the Crimean people as well.” The statement added that the 2,111 items, including jewelry and a fourth-century helmet, should not be “torn away from their history, context and collections for political reasons.” There are “no legal, cultural or historical reasons” to grant Kyiv the items, the statement added. Ukraine said its claim over the Scythian gold collection is based on legal provisions of the 1972 UNESCO Convention concerning the Protection of the World Cultural and Natural Heritage, which says the property rights belong to the states on whose territory the treasures are situated.
Kyiv also argues the collective “Crimean museums” are legally "inappropriate claimants" in the European country’s court, since they operate on the basis of Russian law and as representatives of the Russian state, which the EU views as an occupying power in Crimea.

[Photo caption: A visitor looks at artefacts on display during the exhibition 'Crimea: Gold and Secrets of the Black Sea' at the Allard Pierson Museum in Amsterdam, August 21, 2014]

In December 2016, a Dutch court ruled that the items should be returned to Ukraine, arguing that only sovereign countries can claim objects as cultural heritage. “Ownership questions have to be settled when they have been returned to the state and in accordance with the law of the state in question,” Reuters quoted Judge Mieke Dudok van Heel as saying. “The Allard Pierson Museum [in Amsterdam] must return the treasures to Kiev.” Russian Culture Minister Vladimir Medinsky told Russian state media at the time that the decision would set a “dangerous precedent,” and he threatened to cut off museum exchanges with the Netherlands if it was upheld. “We are talking about an unprecedented alienation of museum values. This can only be compared to lootings dating back to Napoleon’s Italian campaigns, or to those during the times of Nazi aggression. I think that the Dutch court ruling was absolutely politicized. It destroys the very system of exhibition exchange,” RT quoted Medinsky as saying. Ukrainian authorities welcomed the December 2016 decision, with Ukrainian Foreign Minister Pavlo Klimkin saying: “The Scythian gold is coming back home - to Ukraine. I’m sure, it will also return to Ukrainian Crimea.” The four Crimean museums – the Central Museum of Tavrida, Kerch Historical and Cultural Preserve, Bakhchysarai Historical and Cultural Preserve and Chersonesus Historical and Cultural Preserve – were given three months to appeal the decision, which they did on March 28, 2017.
Russia’s TASS state media outlet says a court spokesman told the agency the Amsterdam Court of Appeals would deliver a verdict in the Scythian gold case on June 11, 2019. The international community, by and large, does not recognize Crimea as Russian territory, and a March 2014 United Nations General Assembly vote backed Ukraine’s territorial integrity. Elena Gagarina, director of the Kremlin Museum in Moscow, said in December 2016 that she understood the Dutch court’s decision. “In this case, when these objects were taken from the territory of Ukraine and belonged to Ukraine as a state, this decision seems perfectly reasonable to me,” Reuters quoted her as telling Russia’s Interfax news agency. Ukraine’s Ministry of Culture first agreed to transfer the collection, at that time under the auspices of the State Museum Fund of Ukraine. The Director of the Central Museum of Tavrida, Andrei Mal’gin, has argued that the Scythian artifacts are fundamentally under Crimean, and not Ukrainian, dominion. The items were excavated in Crimea, and funding for the excavation came from the local budget, not the central one, Mal’gin told the Russian-language Ukrainian newspaper Segodnya. He added that he believed the outcome would depend on the “political situation.” But Lyudmila Strokovich, head of the Museum of Historical Treasures of Ukraine, told Segodnya the Crimean museums, which “immediately recognized themselves as Russian,” did not coordinate with Kyiv when extending the contract governing the loan of the items to the Dutch museum. This, in her words, meant that after June 13, 2014, the exhibits had the status of having been “illegally exported from Ukraine.” Ralph Oman, a lecturer in intellectual property and patent law at the George Washington University Law School, told Polygraph.info that the World Intellectual Property Organization in Geneva is working on a new treaty that could protect folkloric treasures, including the Scythian gold. 
[Photo caption: An artifact on display at the exhibition 'Crimea: Gold and Secrets of the Black Sea' at the Allard Pierson Museum in Amsterdam]

Such a treaty would provide protection for “an undetermined number of years,” but it “has not yet been finalized or ratified,” Oman said. Oman said the Scythian gold case is analogous to “former colonies and materials being sent back to British museums,” although that line of reasoning is likely a dead end for the Crimean museums. “It’s perhaps a good moral argument [for returning the artifacts to Crimea], but not a strong legal one,” he said. Oman said the possibility the appellate court could uphold the lower court’s decision was “strong,” adding that “to decide otherwise would be going out on a limb.” Given the highly complicated and precedential nature of the case, which remains under appeal, Polygraph.info finds that the museums’ claim that there are “no legal, cultural or historical reasons to hand these items over to Kyiv” remains unclear.
Bedazzled by Energy Efficiency

To focus on energy efficiency is to make present ways of life non-negotiable. However, transforming present ways of life is key to mitigating climate change and decreasing our dependence on fossil fuels.

Energy efficiency policy

Energy efficiency is a cornerstone of policies to reduce carbon emissions and fossil fuel dependence in the industrialised world. For example, the European Union (EU) has set a target of achieving 20% energy savings through improvements in energy efficiency by 2020, and 30% by 2030. Measures to achieve these EU goals include mandatory energy efficiency certificates for buildings, minimum efficiency standards and labelling for a variety of products such as boilers, household appliances, lighting and televisions, and emissions performance standards for cars. [1] The EU has the world’s most progressive energy efficiency policy, but similar measures are now applied in many other industrialised countries, including China. On a global scale, the International Energy Agency (IEA) asserts that “energy efficiency is the key to ensuring a safe, reliable, affordable and sustainable energy system for the future”. [2] In 2011, the organisation launched its 450 scenario, which aims to limit the concentration of CO2 in the atmosphere to 450 parts per million. Improved energy efficiency accounts for 71% of projected carbon reductions in the period to 2020, and 48% in the period to 2035. [2] [3]

What are the results?

Do improvements in energy efficiency actually lead to energy savings? At first sight, the advantages of efficiency seem to be impressive. For example, the energy efficiency of a range of domestic appliances covered by the EU directives has improved significantly over the last 15 years. Between 1998 and 2012, fridges and freezers became 75% more energy efficient, washing machines 63%, laundry dryers 72%, and dishwashers 50%.
[4] However, energy use in the EU-28 in 2015 was only slightly below the energy use in 2000 (1,627 Mtoe compared to 1,730 Mtoe, or million tonnes of oil equivalents). Furthermore, there are several other factors that may explain the (limited) decrease in energy use, like the 2007 economic crisis. Indeed, after decades of continuous growth, energy use in the EU decreased slightly between 2007 and 2014, only to go up again in 2015 and 2016 when economic growth returned. [1] On a global level, energy use keeps rising at an average rate of 2.4% per year. [3] This is double the rate of population growth, while close to half of the global population has limited or no access to modern energy sources. [5] In industrialised (OECD) countries, energy use per head of the population doubled between 1960 and 2007. [6]

Rebound effects?

Why is it that advances in energy efficiency do not result in a reduction of energy demand? Most critics focus on so-called “rebound effects”, which have been described since the nineteenth century. [7] According to the rebound argument, improvements in energy efficiency often encourage greater use of the services which energy helps to provide. [8] For example, the advance of solid state lighting (LED), which is six times more energy efficient than old-fashioned incandescent lighting, has not led to a decrease in energy demand for lighting. Instead, it resulted in six times more light. [9] In some cases, rebound effects may be sufficiently large to lead to an overall increase in energy use. [8] For example, the improved efficiency of microchips has accelerated the use of computers, whose total energy use now exceeds the total energy use of earlier generations of computers which had less energy efficient microchips. Energy efficiency advances in one product category can also lead to increased energy use in other product categories, or lead to the creation of an entirely new product category.
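The LED example above can be put in numbers. If a technology becomes k times more efficient but usage of the service grows by some factor, net energy demand is the baseline scaled by usage growth and divided by the efficiency gain. A toy calculation (the baseline figure is illustrative only):

```python
# Illustrative rebound arithmetic: net energy use after an efficiency gain.
# net = baseline * usage_growth / efficiency_gain
def net_energy(baseline, efficiency_gain, usage_growth):
    """Energy demand after a k-fold efficiency gain and a usage-growth factor."""
    return baseline * usage_growth / efficiency_gain

# LED lighting: 6x more efficient, but 6x more light consumed -> no savings.
print(net_energy(100.0, 6.0, 6.0))  # demand unchanged at 100.0
# The same efficiency gain with constant usage would cut demand sixfold.
print(net_energy(100.0, 6.0, 1.0))  # roughly 16.7
```

When usage growth outpaces the efficiency gain, as the text says happened with microchips and computing, the ratio exceeds one and total energy use rises despite the improvement.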
For example, LED-screens are more energy efficient than LCD-screens, and could therefore reduce the energy use of televisions. However, they also led to the arrival of digital billboards, which are enormous power hogs in spite of their energy efficient components. [10] Finally, money saved through improvements in energy efficiency can also be spent on other energy-intensive goods and services, a possibility usually referred to as an indirect rebound effect.

Beyond the rebound argument

Rebound effects are ignored by the EU and the IEA, and this might partly explain why the results fall short of the projections. Among academics, the magnitude of the rebound effect is hotly debated. While some argue that “rebound effects frequently offset or even eliminate the energy savings from improved efficiency” [3], others maintain that rebound effects “have become a distraction” because they are relatively small: “behavioural responses shave 5-30% of intended energy savings, reaching no more than 60% when combined with macro-economic effects – energy efficiency does save energy”. [11]

Those who downplay rebound effects attribute the lack of results to the fact that we don’t try hard enough: “many opportunities for improving energy efficiency still go wasted”. [11] Others are driven by the goal of improving energy efficiency policy. One response is to suggest that the frame of reference be expanded and that analysts should consider the efficiency not of individual products but of entire systems or societies. In this view, energy efficiency is not framed holistically enough, nor given sufficient context. [12] [13] However, a few critics go one step further. In their view, energy efficiency policy cannot be fixed. The problem with energy efficiency, they argue, is that it establishes and reproduces ways of life that are not sustainable in the long run.
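The rebound argument above can be sketched as simple arithmetic. A minimal toy model, in which energy demand equals service delivered divided by efficiency: the sixfold LED figure is from the article, while the baseline service level is an arbitrary number chosen for illustration.

```python
# Toy rebound model: energy demand = service delivered / efficiency.
# The sixfold LED figure is from the article; the baseline service
# level is an arbitrary illustration, not measured data.

def energy_use(service_units: float, efficiency: float) -> float:
    """Energy needed to deliver a given level of service (e.g. lumen-hours)."""
    return service_units / efficiency

baseline_service = 100.0  # arbitrary units of light from incandescent bulbs
incandescent_eff = 1.0    # reference efficiency
led_eff = 6.0             # LEDs: six times more efficient

baseline = energy_use(baseline_service, incandescent_eff)

# No rebound: same service, one sixth of the energy.
no_rebound = energy_use(baseline_service, led_eff)

# Full rebound: the service grows as fast as the efficiency,
# so energy use does not fall at all (six times more light).
full_rebound = energy_use(baseline_service * 6, led_eff)

print(baseline, no_rebound, full_rebound)
```

The engineering promise is the `no_rebound` case; the lighting history the article describes is the `full_rebound` case, in which the entire efficiency gain is converted into extra service.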
[12][14]

A parallel universe

Rebound effects are often presented as “unintended” consequences, but they are the logical outcome of the abstraction that is required to define and measure energy efficiency. According to Loren Lutzenhiser, a researcher at Portland State University in the US, energy efficiency policy is so abstracted from the everyday dynamics of energy use that it operates in a “parallel universe”. [14] In a more recent paper, What is wrong with energy efficiency?, UK researcher Elizabeth Shove unravels this “parallel universe”, concluding that efficiency policies are “counter-productive” and “part of the problem”. [12]

To start with, the parallel universe of energy efficiency interprets “energy savings” in a peculiar way. When the EU states that it will achieve 20% “energy savings” by 2020, “energy savings” are not defined as a reduction in actual energy consumption compared to present or historical figures. Indeed, such a definition would show that energy efficiency doesn’t reduce energy use at all. Instead, the “energy savings” are defined as reductions compared to the projected energy use in 2020. These reductions are measured by quantifying “avoided energy” – the energy resources not used because of advances in energy efficiency.

Even if the projected “energy savings” were to be fully realised, they would not result in an absolute reduction in energy demand. The EU argues that advances in energy efficiency will be “roughly equivalent to turning off 400 power stations”, but in reality no single power station will be turned off in 2020 because of advances in energy efficiency. Instead, the reasoning is that Europe would have needed to build an extra 400 power stations by 2020, were it not for the increases in energy efficiency. In taking this approach, the EU treats energy efficiency as a fuel, “a source of energy in its own right”.
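This accounting can be made concrete in a few lines. The EU-28 figures for 2000 and 2015 are from the article; the 2020 business-as-usual projection is an assumed number, chosen only to illustrate how a large "saving" is compatible with consumption that never falls.

```python
# "Avoided energy": savings measured against a projection, not against the past.
# EU-28 figures for 2000 and 2015 (Mtoe) are from the article; the 2020
# business-as-usual projection is an assumption for this sketch.

use_2000 = 1730.0
use_2015 = 1627.0
projected_2020 = 2100.0   # assumed business-as-usual projection, Mtoe
target_savings = 0.20     # EU target: 20% savings relative to the projection

allowed_2020 = projected_2020 * (1 - target_savings)
avoided = projected_2020 - allowed_2020

print(avoided)                  # Mtoe counted as "saved"
print(allowed_2020 > use_2015)  # the "saving" still permits rising consumption
```

Under these assumed numbers, 420 Mtoe are booked as "avoided energy" even though the allowed 2020 consumption exceeds actual 2015 consumption; and if the projection were lower than present-day use, the "avoided energy" would turn negative.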
[15] The IEA goes even further when it claims that “energy avoided by IEA member countries in 2010 (generated from investments over the preceding 1974 to 2010 period), was larger than actual demand met by any other supply side resource, including oil, gas, coal and electricity”, thus making energy efficiency “the largest or first fuel”. [16] [12]

Measuring something that doesn’t exist

Treating energy efficiency as a fuel and measuring its success in terms of “avoided energy” is pretty weird. For one thing, it is about not using a fuel that does not exist. [14] Furthermore, the higher the projected energy use in 2030, the larger the “avoided energy” would be. On the other hand, if the projected energy use in 2030 were to be lower than present-day energy use (we reduce energy demand), the “avoided energy” becomes negative.

An energy policy that seeks to reduce greenhouse gas emissions and fossil fuel dependency must measure its success in terms of lower fossil fuel consumption. [17] However, by measuring “avoided energy”, energy efficiency policy does exactly the opposite. Because projected energy use is higher than present energy use, energy efficiency policy takes for granted that total energy consumption will keep rising.

That other pillar of climate change policy – the decarbonisation of the energy supply by encouraging the use of renewable energy power plants – suffers from similar defects. Because the increase in total energy demand outpaces the growth in renewable energy, solar and wind power plants are in fact not decarbonising the energy supply. They are not replacing fossil fuel power plants, but are helping to accommodate the ever growing demand for energy. Only by introducing the concept of “avoided emissions” can renewables be presented as having something of the desired effect. [18]

What is it that is efficient?
In What is wrong with energy efficiency?, Elizabeth Shove demonstrates that the concept of energy efficiency is just as abstract as the concept of “avoided energy”. Efficiency is about delivering more services (heat, light, transportation,…) for the same energy input, or the same services for less energy input. Consequently, a first step in identifying improvements depends on specifying “service” (what is it that is efficient?) and on quantifying the amount of energy involved (how is “less energy” known?). Setting a reference against which “energy savings” are measured also involves specifying temporal boundaries (where does efficiency start and end?). [12]

Shove’s main argument is that setting temporal boundaries automatically specifies the “service”, and the other way around. That’s because energy efficiency can only be defined and measured if it is based on equivalence of service. Shove focuses on home heating, but her point is valid for all other technology. For example, in 1985, the average passenger plane used 8 litres of fuel to transport one passenger over a distance of 100 km, a figure that has come down to 3.7 litres today. Consequently, we are told that airplanes have become twice as efficient.

However, if we make a comparison in fuel use between today and 1950, instead of 1985, airplanes do not use less energy at all. In the 1960s, propeller aircraft were replaced by jet aircraft, which are twice as fast but initially consumed twice as much fuel. Only fifty years later did the jet airplane become as “energy efficient” as the last propeller planes from the 1950s. [19]

What then is a meaningful timespan over which to compare efficiencies? Should propeller planes be taken into account, or should they be ignored? The answer depends on the definition of equivalent service.
If the service is defined as “flying”, then propeller planes should be included. But if the energy service is defined as “flying at a speed of roughly 1,000 km/h”, we can discard propellers and focus on jet engines. However, the latter definition assumes a more energy-intensive service. If we go back even further in time, for example to the early twentieth century, people didn’t fly at all and there’s no sense in comparing fuel use per passenger per kilometre.

Similar observations can be made for many other technologies or services that have become “more energy efficient”. If they are viewed in a larger historical context, the concept of energy efficiency completely disintegrates because the services are not at all equivalent. Often, it’s not necessary to go back very far to prove this. For example, when the energy efficiency of smartphones is calculated, the earlier generation of much less energy demanding “dumbphones” is not taken into account, although they were common less than a decade ago.

How efficient is a clothesline?

Because of the need to compare ‘like with like’ and establish equivalence of service, energy efficiency policy ignores many low energy alternatives that often have a long history but are still relevant in the context of climate change. For example, the EU has calculated that energy labels for tumble driers will be able to “save up to 3.3 TWh of electricity by 2020, equivalent to the annual energy consumption of Malta”. [20] But how much energy use would be avoided if by 2020 every European used a clothesline instead of a tumble drier? Don’t ask the EU, because it has not calculated the avoided energy use of clotheslines. Neither do the EU or the IEA measure the energy efficiency and avoided energy of bicycles, hand powered drills, or thermal underwear.
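The clothesline comparison can be put in back-of-the-envelope terms. All per-load figures and the number of loads below are assumptions for the sketch, not measured data; the point is only the structure of the comparison.

```python
# What equivalence-of-service accounting leaves out: substitution.
# All figures below are assumptions for this sketch, not measured data.

old_drier_kwh = 4.0      # assumed: older tumble drier, per load
new_drier_kwh = 2.5      # assumed: efficient labelled model, per load
clothesline_kwh = 0.0    # sun and wind
loads_per_year = 150     # assumed household

# What efficiency policy counts: drier replaced by a better drier.
efficiency_saving = (old_drier_kwh - new_drier_kwh) * loads_per_year

# What it cannot count: drier replaced by a clothesline.
sufficiency_saving = (old_drier_kwh - clothesline_kwh) * loads_per_year

print(efficiency_saving)   # kWh/yr visible to efficiency policy
print(sufficiency_saving)  # kWh/yr invisible to it
```

Under these assumptions the substitution saves more than twice as much as the efficiency upgrade, yet only the upgrade can appear in an "avoided energy" calculation, because only it preserves equivalence of service.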
Nevertheless, if clotheslines were taken seriously as an alternative, the projected 3.3 TWh of energy “saved” by more energy efficient tumble driers could no longer be considered “avoided energy”, let alone a fuel. In a similar way, bicycles and clothing undermine the very idea of calculating the “avoided energy” of more energy efficient cars and central heating boilers.

Unsustainable concepts of service

The problem with energy efficiency policies, then, is that they are very effective in reproducing and stabilising essentially unsustainable concepts of service. [12] Measuring the energy efficiency of cars and tumble driers, but not of bicycles and clotheslines, makes fast but energy-intensive ways of travel or clothes drying non-negotiable, and marginalises much more sustainable alternatives. According to Shove: “Programmes of energy efficiency are politically uncontroversial precisely because they take current interpretations of ‘service’ for granted… The unreflexive pursuit of efficiency is problematic not because it doesn’t work or because the benefits are absorbed elsewhere, as the rebound effect suggests, but because it does work – via the necessary concept of equivalence of services – to sustain, perhaps escalate, but never undermine… increasingly energy intensive ways of life.” [12]

Indeed, the concept of energy efficiency easily accommodates further growth of energy services. All future novelties can be subjected to an efficiency approach. For example, if patio heaters and monsoon showers become “normal”, they could be incorporated in existing energy efficiency policy – and when that happens, the problem of their energy use is considered to be under control. At the same time, defining, measuring and comparing the efficiency of patio heaters and monsoon showers helps make them more “normal”. As a bonus, adding new products to the mix will only increase the energy use that is “avoided” through energy efficiency.
In short, neither the EU nor the IEA capture the “avoided energy” generated by doing things differently, or by not doing them at all – while these arguably have the largest potential to reduce energy demand. [12] Since the start of the Industrial Revolution, there has been a massive expansion in the uses of energy and in the delegation of work from human to mechanical forms of power. But although these trends are driving the continuing increase in energy demand, they cannot be measured through the concept of energy efficiency. As Shove demonstrates, this problem cannot be solved, because energy efficiency can only be measured on the basis of equivalent service. Instead, she argues that the challenge is “to debate and extend meanings of service and explicitly engage with the ways in which these evolve”. [12]

Towards an energy inefficiency policy?

There are several ways to escape from the parallel universe of energy efficiency. First, while energy efficiency hinders significant long term reduction in energy demand through the need for equivalence of service, the opposite also holds true – making everything less energy efficient would reverse the growth in energy services and reduce energy demand. For example, if we were to install 1960s internal combustion engines into modern SUVs, fuel use per kilometre driven would be much higher than it is today. Few people would be able or willing to afford to drive such cars, and they would have no other choice but to switch to a much lighter, smaller and less powerful vehicle, or to drive less. Likewise, if an “energy inefficiency policy” were to mandate the use of inefficient central heating boilers, heating large homes to present-day comfort standards would be unaffordable for most people.
They would be forced to find alternative solutions to achieve thermal comfort, for instance heating only one room, dressing more warmly, using personal heating devices, or moving to a smaller home.

Recent research into the heating of buildings confirms that inefficiency can save energy. A German study examined the calculated energy performance ratings of 3,400 homes and compared these with the actual measured consumption. [21] In line with the rebound argument, the researchers found that residents of the most energy efficient homes (75 kWh/m2/yr) use on average 30% more energy than the calculated rating. However, for less energy efficient homes, the opposite – “pre-bound” – effect was observed: people use less energy than the models had calculated, and the more inefficient the dwelling is, the larger this gap becomes. In the most energy inefficient dwellings (500 kWh/m2/yr), energy use was 60% below the predicted level.

From efficiency to sufficiency?

However, while abandoning – or reversing – energy efficiency policy would arguably bring more energy savings than continuing it, there is another option that’s more attractive and could bring even larger energy savings. For an effective policy approach, efficiency can be complemented by or perhaps woven into a “sufficiency” strategy. Energy efficiency aims to increase the ratio of service output to energy input while holding the output at least constant. Energy sufficiency, by contrast, is a strategy that aims to reduce the growth in energy services. [4] In essence, this is a return to the “conservation” policies of the 1970s. [14] Sufficiency can involve a reduction of services (less light, less travelling, less speed, lower indoor temperatures, smaller houses), or a substitution of services (a bicycle instead of a car, a clothesline instead of a tumble drier, thermal underclothing instead of central heating).
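The rebound and pre-bound findings of the German study cited above can be expressed numerically. The two ratings and the 30%/60% deviations are from the article; the floor area is an assumption added only to turn the per-square-metre figures into annual totals.

```python
# Rebound vs "pre-bound", after the German study of 3,400 homes cited above.
# Ratings and percentage deviations are from the article; the floor area
# is an assumed figure for this sketch.

area_m2 = 100.0

rated_efficient = 75.0     # kWh/m2/yr, calculated rating
rated_inefficient = 500.0  # kWh/m2/yr, calculated rating

predicted_efficient = rated_efficient * area_m2
predicted_inefficient = rated_inefficient * area_m2

actual_efficient = predicted_efficient * 1.30      # rebound: 30% above rating
actual_inefficient = predicted_inefficient * 0.40  # pre-bound: 60% below rating

# The inefficient dwelling still uses more energy in absolute terms,
# but the gap between model and behaviour runs in opposite directions.
print(actual_efficient, actual_inefficient)
```

Under these assumptions the efficient home consumes 9,750 kWh/yr against a predicted 7,500, while the inefficient home consumes 20,000 kWh/yr against a predicted 50,000: the models overstate what efficiency delivers and overstate what inefficiency costs.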
Unlike energy efficiency, the policy objectives of sufficiency cannot be expressed in relative variables (like kWh/m2/year). Instead, the focus is on absolute variables, such as reductions in carbon emissions, fossil fuel use, or oil imports. [17] Unlike energy efficiency, sufficiency cannot be defined and measured by examining a single product type, because sufficiency can involve various forms of substitution. [22] Instead, a sufficiency policy is defined and measured by looking at what people actually do.

A sufficiency policy could be developed without a parallel efficiency policy, but combining them could bring larger energy savings. The key step here is to think of energy efficiency as a means rather than an end in itself, argues Shove. [12] For example, imagine how much energy could be saved if we used an energy efficient boiler to heat just one room to 16 degrees, installed an energy efficient engine in a much lighter vehicle, or combined an energy saving shower design with fewer and shorter showers.

Nevertheless, while energy efficiency is considered to be a win-win strategy, to develop the concept of sufficiency as a significant force in policy is to make normative judgments: so much consumption is enough, so much is too much. [23] This is sure to be controversial, and it risks being authoritarian, at least as long as there is a cheap supply of fossil fuels.

Low-tech Magazine makes the jump from web to paper. The first result is a 710-page perfect-bound paperback which is printed on demand and contains 37 of the most recent articles from the website (2012 to 2018). A second volume, collecting articles published between 2007 and 2011, will appear later this year.

Comments

Thank you for writing this article and explaining why the energy efficiency madness only creates more problems. Whenever people try to give me long rants about how green energy will save the planet I can refer them to this.
For too long I have argued that energy efficiency can only work if society rethinks and backtracks on modern conveniences they refuse to give up. People look at me like I have three heads for insisting on riding my bike or walking instead of driving short distances, using my effective dumb phone instead of constantly buying new ones, building a well insulated small house instead of living in a large house with air conditioning, using an integrated fish pond system to clean my dishes with carp instead of a washing machine, and growing most of my own food to avoid the fuel and plastic waste. If people are willing to be a little imaginative and less lazy we could deal with many of our environmental issues. I won't hold my breath for that.

"The advance of solid state lighting (LED), which is six times more energy efficient than old-fashioned incandescent lighting, has not led to a decrease in energy demand for lighting. Instead, it resulted in six times more light."

Ha! Caught me! I designed a lighting system using copper wire, low-voltage transformers, alligator clips and halogen lights. I had three strings with 150 watts on each. I took pride in providing such "task lighting" instead of "area lighting," and life was good. Now, 12V MR-16 LED lamps are common and cheap. My spouse wanted more light on the kitchen counter. We had a dark spot in the living room, and we needed to extend one of the strings there, possible only because we could use LEDs instead of halogens. Bottom line: we have more lights, using the same amount of electricity. Jevons' Paradox is a bitch.

There are practical benefits to doing things this way. My spouse likes "a bright house," not desiring to feel deprivation, leaving the task lighting on in the kitchen "because it brightens the dining room." So much for the benefits of task lighting. Admittedly, getting through a northern winter without a major bout of depression is aided by plenty of light.
So, do we succumb to "shivering in the dark," or do we take the gift of efficiency (with its ominous, hidden footprint) to improve our mental state?

Setting aside those virtual savings and the quest to reduce energy use in the EU, I don't understand why we have to connect energy efficiency and saving energy. To me energy efficiency means only that we can use energy more efficiently, not that we use less. Of course defining "efficient" is possible only within some kind of ideological framework, but within one I would say that the problem is not comparing services, but units. For example: going fast has greater value than going slow, taking into account limited human life, therefore we should compare fuel consumption not against distance but against speed (not J/m but J/(m/s), which translates to Ns instead of N, showing visibly that now we only compare how hard we "push" and not for how long). Having said that, I would agree that increasing energy efficiency is not a way to reduce energy use, only to use it more efficiently, and there's no miracle cure other than just restraining ourselves.

You can make a very similar argument about budgets: spending money on crap you don't need is a loss, no matter how good a deal you get on it. It seems to me like your argument is "The efficiency of random widgets isn't important, because if we met human needs more efficiently in the first place, we wouldn't need the widgets". Are you familiar with "Fundamental Human Needs and Human-Scale Development"? https://en.wikipedia.org/wiki/Fundamental_human_needs. It might be a good baseline to start from. Within your same mode of thinking, I'm also struck by the logical opposite: efficiency in reclaiming ambient energy doesn't really matter. A boat mill that is only 5% efficient wastes exactly what? You can't really waste ambient energy.

There is no need for policy to think about all those technical details. Such thinking is both difficult and authoritarian. The same applies to efficiency and sufficiency.
Thinking from a policy point of view, there is only one tool which is needed: a Pigou tax. If something (like dirty energy) generates negative externalities, then tax it proportionally. Period. Leave everything else to the market. People and companies will think about ways to avoid taxed consumption, in both quantity and quality, in a way that fits them best personally. So there is no need to compare anyone's way of life with anything else; it all becomes just a matter of style.

I'd argue that sufficiency is efficiency on a systemic level. Spatial planning, for example, can save very much energy because one's personal daily mobility can be reached with less traffic if jobs, schools and shops are close. Driving less is considered sufficiency, but providing a city with high accessibility leads to more mobility with less traffic: it's efficient! E.F. Schumacher, in his Small Is Beautiful, called this "efficiency as seen by a Buddhist economist".

I've always had an issue with the argument that increased efficiency leads to increased use. That is absolutely true up to a point of maximum utility, but after that you just have energy savings. I used to work for a solar company that installed off-grid systems and we would see exactly what Jan Steinman wrote about with lighting. However, when the room was bright enough for all parties, gains in efficiency did not result in an increase in usage. If someone had enough incandescent lights in their home to satisfy them, they didn't double (assuming double efficiency for the sake of argument) the number of bulbs they used when CFLs were introduced, and they didn't double them again when LEDs were introduced. When you have enough lumens you don't add more when technology increases efficiency.

The same argument is often made for commuter vehicles. If commuting becomes cheaper people will move farther and farther from where they work because the housing farther away is cheaper.
Someone driving half an hour each way might choose to move an hour out if commuting was cheap enough, but very very few people are going to live 2, 3 or 4 hours away from where they work regardless of how cheap it is to commute. It would not be worth the time. I had an old 25 mpg Subaru Outback. I bought a new Prius C which gets roughly 50 mpg the way I drive. I had already been driving everywhere I needed or wanted to with the Subaru, so my driving habits didn't change when I got a more efficient vehicle, and I saved money and energy. I now have a company car (also a Prius C) and I no longer pay for fuel at all, but again, since I was already going everywhere I needed or wanted to go, effectively having "free fuel" didn't increase the amount of fuel I was using.

So, this confirms again that supply side solutions are no real solutions. I have sort of come to the conclusion that people and society will use up all the energy that is available to them, for as long as it is affordable.

Interesting piece. But it continues the mistaken idea that rebound is a reason to discard energy efficiency improvements. What is missing in the analysis, though, is that rebound can be controlled with adequate "rebound policy", which would cause energy efficiency to automatically become much more effective in contributing to less energy use (or fewer carbon emissions). See this paper for more details: https://link.springer.com/article/10.1007/s10640-010-9396-z

About 'commuting', you will probably find Marchetti's constant [1] or the BREVER law ('de BREVER-wet') [2] interesting. Since Neolithic times, people have spent 1 to 1.5 hours a day on transport. So 'time spent' is indeed a constant, but not 'travel distance', which has increased a l-o-t over the last years (and so has energy use). Most people who have a 'company car' or 'salary car' *will* probably use an increased amount of fuel.
According to a study [3], 84 to 93% of those with a company car take their car to go to work, compared to 59% of people who don't have such a car. That same study also found that people with a company car drive 9,200 extra kilometers! Think carefully about your situation. Suppose you have to go somewhere and you could go by public transport, but you will have to pay for a ticket. And then you see your company car on your driveway, which drives totally for free. What will you choose? What will most people choose in the same situation?

Thanks for an extremely thought-provoking article! One quote here is very precise: "This is sure to be controversial, and it risks being authoritarian, at least as long as there is a cheap supply of fossil fuels."

Now, as a child I've seen "saving policies" being gradually introduced in my family due to an economic collapse. First we had to sell the car because we were not in a position to buy fuel. Then it became way too expensive to use public transportation. At times, there was no electricity for many hours, the temperatures indoors dropped to 16°C, and I, a thin boy of eight, was freezing at home alone, struggling to save some warmth by keeping my arms inside my sweater... it didn't really help, I was catching colds every now and then. And by the way, when the electricity went down, so did the elevators, and I had to climb to the 11th floor. And when the water was off... well, use your imagination. Still, we had food every day, even occasionally meat, "seasonal" and conserved fruits and vegetables: we had the extreme luck of having our own garden and potato field and three extremely laborious grannies (one of them never married). I even had an opportunity to eat some cheese several times in the year 1992, bought especially for me by another childless grand-auntie. My future wife was by far not that lucky. Whatever the cause, I've lived through the gradual application of these "sustainability" policies.
To everyone who thinks of them as being just "controversial": I wish you could experience it yourself.

Taking the UK as an example, domestic lighting is totally saturated thanks to halogen downlighters, which now account for half of domestic lighting demand. Consumption by downlighters will be cut by about 75% by switching to LEDs. There will doubtless be some rebound due to people being more casual about leaving lights on after the switch, but that will be dwarfed by switching from 30W-50W halogens to 5W-10W LEDs. People already have a crazy number of downlighters in their living areas; they're not going to add more. UK domestic lighting demand fell off a cliff thanks to the switch from GLS to CFL and will fall off a cliff again thanks to the switch from halogen to LED, no question.

Would making lights less efficient force people to use their lights less? I'm sure they'd be a bit more diligent about switching off when not in use, but they're not going to reconfigure their lighting set-up and they're certainly not going to use their lights 25% of the time. The idea of making energy using products less efficient in order to increase the cost of service so that people use less energy seems sub-optimal to say the least. Surely if you want to use cost-of-service as your tool to reduce energy demand (which is controversial), then increasing the price of energy whilst making energy using products more efficient would be the logical way to do it. It would have the same outcome but wouldn't mean needlessly designing inefficiency into our energy system.

Note that the above example is for an affluent country where ownership of energy using products is pretty saturated. We're seeing sustained reductions in domestic electricity consumption as a direct result of energy efficiency measures and these reductions will continue into the future for quite a while yet.
In less affluent countries where ownership of energy using products is at low levels, we would naturally expect energy demand to grow as more people start to purchase these things. Energy efficiency will allow much needed development and energy access to take place with reduced impact, which is a good thing, isn't it? The idea of foisting less efficient products on developing countries is, in my view, perverse. I completely agree that the EU target is bollocks and we need to be achieving absolute, not relative, reductions. I am also a strong advocate of low tech solutions like line drying, better body insulation and bike use, but I just don't see energy efficiency inhibiting the adoption of those solutions significantly. A far stronger factor is the mindset perpetuated by the media that these are 'hair shirt' privations that hark back to the 19th century. That is the main barrier, not energy efficiency.

"As a child I've seen the 'saving policies' being gradually introduced in my family due to an economic collapse... To everyone who thinks of them as being just 'controversial' - wish you to experience it yourself."

You raise an important issue that will be the topic of the next article. Some people have insufficient access to energy, a condition which scientists and policy makers call "energy poverty". Your description fits that definition. People living in these conditions need MORE energy, in spite of climate change and all the other environmental problems. However, there are other people who use much more energy than they "need". They should use LESS energy. Energy use needs to be redistributed to solve the problem.

@ Jan Steinman (#5): "So, do we succumb to 'shivering in the dark,' or do we take the gift of efficiency (with its ominous, hidden footprint) to improve our mental state?" That is a very difficult question. See also my comment to Nikolay above. If the absence of light makes people depressed, you could argue that it is a need and not a luxury.
But then again, thinking like this could lead to even more light, because indoor light levels are still very much lower than outdoor light levels. And we feel at our best when the sun shines.

@ Courtney C (#4): "Using an integrated fish pond system to clean my dishes with carp instead of a washing machine"... "When you have enough lumens you don't add more when technology increases efficiency." Your clients did not buy double the amount of bulbs when CFLs were introduced, because that's not how it works. They will probably only renew their bulbs when the old ones die and need to be replaced. Lighting habits change gradually over time, which makes them hard to notice unless you do historical research or observe these trends over a lifetime. Much of the rebound effects of LEDs have also crossed product categories; see for instance the giant digital screens popping up everywhere (same LED technology). Also, let's assume that you are right and that there is a level of light that is bright enough for all parties, in a domestic context. How do you know if this level of light and energy use is sustainable? It's easy to think of a world in which everything is perfect, but you also have to keep in mind that high energy use comes with a cost. Considering the environmental constraints, we probably can't have all that we can imagine.

@ JamieB (#14): "We're seeing sustained reductions in domestic consumption as a direct result of energy efficiency measures and these reductions will continue into the future for quite a while yet." Sorry, but the article has made clear that this is bullsh*t. "The idea of foisting less efficient products on developing countries is, in my view, perverse." The article is not advancing that idea. "The idea of making energy using products less efficient in order to increase the cost of service so that people use less energy seems sub-optimal to say the least." The article argues that we should combine efficiency with sufficiency.

Thanks for the link.
But in contrast to what you write, I do not argue that rebound is a reason to discard energy efficiency. I argue that the need for equivalence of service is the reason to discard energy efficiency. And in fact I don't discard energy efficiency at all; I conclude that it should be combined with sufficiency in order to reduce energy demand.

What's bullshit? That UK domestic electricity demand is reducing, or that the reduction is due to energy efficiency measures, or that it'll continue for the foreseeable future? All three statements are correct. Demand for domestic lighting and refrigeration services in the UK is saturated and since the mid-2000s efficiency has been pushing demand down. Surprisingly, even consumer electronics and ICT energy demand appear to have started to trend down too, primarily thanks to everyone having plenty of TVs and the switch to mobile devices. Looking at EU28 gross inland consumption data, I see that it is trending down quite nicely too and is 10% lower than its peak in 2006. There was a particularly big drop in 2009 due to the financial crisis, but it returned to the trend line in 2010 and has continued reducing since, even while the EU economy has been growing. Is EU28 demand reducing fast enough? Manifestly not. Re-reading your piece, I realise I was so distracted by the suggestion in the penultimate section that energy inefficiency might be a good thing that I forgot that you switch back to endorsing energy efficiency in the conclusion (to be honest I'm a bit puzzled as to why you put that penultimate section in the piece). I think we're basically coming from the same position: as long as society is demanding more of a service, efficiency can only stem (or at best modestly reduce) the increase in absolute demand. Where I think we probably diverge is the extent to which society will accept reductions in service demand.
In my view the potential for reduced service demand is big in the transport sector, and there is some potential in the domestic sector, but I think that potential will be limited by the attitude society currently has towards what is perceived as privation. Personally I'll accept saturated demand for services, because then efficiency can do its job, but if we can reduce service demand too then I'm all for it, as it will mean we'll get to where we need to be faster.

Efficiency is, as noted in the article, "inefficient" when it is the driving factor. However, the article did not recognize that this is the condition it used for the evaluation. Efficiency is, however, "efficient" in the case where it is necessary for sufficiency. That is, when the supply or availability of "something" is restricted. As Herman Daly put it: "A policy of 'frugality-first' induces efficiency as a secondary consequence. Efficiency-first does not induce frugality, it makes frugality seem less necessary." Now consider the case of the need to reduce CO2 emissions to essentially zero in western nations by about 2035 and globally by 2050 (as the carbon budget for 1.5 or 2 °C is currently understood). The increase so far in renewable energy is minuscule in comparison to any nation's overall energy consumption. Electricity and fuels are overwhelmingly from fossil sources. This means that either informal rationing (e.g. by price or taxes) or traditional rationing (by coupons or smart cards) of fossil fuels will be necessary if we are to avoid catastrophe -- until renewable energy supplies can catch up to what is considered "sufficient." Rationing as mentioned here would be with an annually declining cap (directly or indirectly) on carbon in fossil fuels. (And of the two methods, the first would be inherently unfair, while the latter would give equal access. Past rationing, e.g. in WW-II, has included price controls to keep prices low.)
The interesting thing with rationing is that it will drive efficiency, hence avoiding rebound. It will do so because efficiency becomes a necessity for sufficiency (and some luxury), instead of being primarily a means to monetary savings.

All three statements are questionable. First, because you are making exactly the kind of abstraction that the article is arguing against. It is always possible to demonstrate "energy savings" or "avoided energy" because of efficiency; you just need to set the parameters right. Second, you cannot simply attribute any reduction in energy use to energy efficiency. There are many other factors at play. For example, gas and electricity prices in the UK more than doubled between 2002 and 2016, and there was a major economic crisis that was not limited to 2009, as you seem to suggest. Energy use declined in almost all European countries between 2008 and 2014, and only went up again in 2015 and 2016, when economic growth returned. Also in the UK, by the way: "Final energy consumption increased by 2,167 ktoe (1.6%) in 2016 to 140,668 ktoe. The domestic sector saw the biggest increase in both absolute and percentage terms; by 1,249 ktoe (3.1%)." And look what happens when we define and measure energy efficiency as domestic electricity use alone: "Average gas consumption increased by 4.6 per cent to 13,801 kWh, and by 1.7 per cent on a temperature corrected basis. Average electricity consumption continued to fall, by 0.8 per cent to 3,889 kWh in 2016." We can still pretend that energy use goes down, while it actually goes up. Just set the parameters right. Looking at these figures, it could very well be that domestic electricity use in the UK has declined because people have switched from electric to gas-powered water boilers and cooking stoves.

Efficiency is always a ratio. Productive efficiency, energy efficiency, time efficiency, etc. It's all arbitrary value X divided by arbitrary value Y.
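That ratio framing can be made concrete with a toy calculation (all numbers below are invented for illustration, not taken from any statistics): an "efficiency" ratio such as energy per unit of output can improve every single year while absolute energy use keeps rising.

```python
# Toy illustration (invented numbers): the "efficiency" ratio X/Y improves
# every year, while the absolute quantity X keeps growing.
energy = 100.0   # absolute energy use, arbitrary units (the X)
output = 100.0   # economic output, arbitrary units (the Y)

intensities, absolutes = [], []
for year in range(10):
    output *= 1.03   # output grows 3% per year
    energy *= 1.01   # energy use grows 1% per year
    intensities.append(energy / output)  # the "efficiency" ratio
    absolutes.append(energy)

# The ratio falls steadily...
assert all(a > b for a, b in zip(intensities, intensities[1:]))
# ...while absolute energy use rises steadily.
assert all(a < b for a, b in zip(absolutes, absolutes[1:]))
print(f"intensity: {intensities[0]:.3f} -> {intensities[-1]:.3f}")
print(f"absolute:  {absolutes[0]:.1f} -> {absolutes[-1]:.1f}")
```

Whether this counts as "saving energy" depends entirely on whether you look at the ratio or at the absolute figure.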
Those values and those ratios have no intrinsic meaning or utility. In a world of nearly infinite numbers to collect and compare, we are guided in which efficiencies we concern ourselves with by our ethics; the numbers themselves are amoral. Numbers cannot tell us what we should do, only what we can. Our values tell us what we should do. I am sorry I keep harping on this, but it cannot be escaped: this is an ethics conversation, not a technological one. Efficiency, in and of itself, means nothing. The concern is why, not how, and if the answer to why is "So the endless growth model can continue, but with slightly less pollution per dollar of GDP growth", then that is exactly what you get, as the article points out extensively. If the answer to why is "so that billions of humans like us don't have to suffer and die from privation", we might get something else altogether. There is more than enough time and money on earth for everyone to live a decent life. We make, for instance, about 200% more food than it takes to feed everyone, and there are many more homes than homeless people. For that to change, the food and homes are going to have to be taken from their current lawful owners and given to someone else. If there is a way to do that without authoritarianism, I'm all ears.

Thanks Kris for your excellent article. It helps me greatly with what has recently been on my mind: that reducing carbon emissions can only occur by people becoming poorer (in the economic sense, not necessarily in experience). The question of efficient lighting discussed above is a good example: if LED lights allow a family to save money (instead of increasing lighting), then the savings will be spent on energy-consuming products, perhaps travelling by Prius instead of by bike, or eating more cheese. As you point out, real reductions in energy consumption are consequences of economic constraints - people getting poorer.
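Going back to the household consumption figures quoted a little earlier (average gas up 4.6% to 13,801 kWh; average electricity down 0.8% to 3,889 kWh): a few lines of arithmetic show how the choice of metric changes the story. This is a minimal sketch, assuming those quoted numbers are annual per-household averages and that gas and electricity are the only fuels counted.

```python
# Back out approximate prior-year averages from the quoted 2016 figures
# and percentage changes, then compare "electricity only" with the total.
gas_2016, elec_2016 = 13_801.0, 3_889.0   # kWh, quoted 2016 averages
gas_prev = gas_2016 / 1.046               # gas was up 4.6%
elec_prev = elec_2016 / (1 - 0.008)       # electricity was down 0.8%

total_prev = gas_prev + elec_prev
total_2016 = gas_2016 + elec_2016
total_change = (total_2016 - total_prev) / total_prev * 100

# Measured on electricity alone, energy use fell 0.8%; measured on
# gas + electricity together, it rose by roughly 3%.
print(f"electricity alone: -0.8%  |  gas + electricity: {total_change:+.1f}%")
```

Same households, same year, opposite headline, depending on which parameter you pick.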
The value of efficiency is not in reducing energy consumption or emissions, but in improving the conditions for people who are poor through low income or high energy prices.

I don't see how what I'm saying can be described as an abstraction. The EU target of X% reduction compared to some assumed counterfactual is, sure, but the electricity demand reductions that we're seeing in the UK are very real. I don't see how you can argue otherwise.

1) "UK domestic electricity demand is reducing". It really is! See Table 3.01 of ECUK. It peaked in 2005 and is 14% down in 2016, in spite of there being 10% more households over the same period.

2) "The reduction is due to energy efficiency measures". Sure, it's not *all* due to physical measures - prices have an effect as well, naturally, as they drive behaviour change (which is energy efficiency as well). But real electricity prices have been stable for the past 3 or 4 years, and yet electricity demand continues to drop, because people steadily replace their appliances as they break and are starting to switch to LED lighting.

3) "It'll continue for the foreseeable future". Well, only time will tell, but as long as we continue not to take any meaningful steps to electrify heat, I can't see how it can go anywhere but down. All end uses of domestic electricity are now flat or trending down, and there are still an awful lot of inefficient appliances and lights to swap out.

Regarding the final energy consumption data you cite, don't forget that weather has a strong effect, but temperature-corrected final energy demand in the domestic sector did increase slightly (1.4%) between 2015 and 2016. The main reason for that? We've more or less stopped insulating homes in the last couple of years.

I think the problem resides in our capitalist "self-regulating" market democracies. As we are now used to thinking, control over energy consumption takes away "freedom" from citizens' lives.
If we want to guarantee a free market, of course democracies cannot avoid letting producers sell and overproduce, and people buy and overconsume. If we base our economy on GDP, we cannot avoid increasing fossil energy use. If we do the contrary, as you stated ("...This is sure to be controversial, and it risks being authoritarian, at least as long as there is a cheap supply of fossil fuels..."), freedom as we know it - which is substantially freedom to consume energy - will end. So the only way is to change our culture, and it will be a long, long way; let's hope we have time enough. Thank you for your effort.

Talking about household appliances: your article does not mention the fact that these units are made to fail, unlike appliances of the past. We recently purchased all brand new appliances for our new home - thank God for extended warranties! We have had at least 15 visits from various repairmen to replace or repair different issues. The repairmen all say the same thing: "they don't make them like they used to". I have brought this up in conversation with many folks, who all have the same reports. The brand-name dishwasher made it to the landfill after 18 months. Talk about efficiency! Consumerism at its finest.

Thank you, thank you, Kris! I am an energy efficiency consultant for buildings, and I see all of this in my work. What a waste of time for me, and how self-deluded are my clients! The fake "savings" thinking infects most educated people I know. I would like to change careers to develop more useful cargo bicycles. Your work on this site brings me joy.

Great stuff. Thinking like this is a rare occurrence. I would imagine you are quite familiar with the works of Schumacher, but I wanted to make sure you had read or heard this lecture given by Andrew Kimbrell:

Congratulations! Very thought-provoking! Change could come quickly if we had the courage to tax energy use to include its environmental costs. See the EPA's Social Cost of Carbon before and after Trump.
Take an extreme example: increase taxes on gasoline so that the price per gallon doubles.

Thank you Kris, excellent! Right, energy saving is neither “the” solution nor the problem; it is a necessity. Unfortunately the rebound effect is disputed, but also the (your?) idea of “sufficiency”, or sticking to a “fixed supply” in the EU, simply neglects the existence of roughly 6 billion non-westerners. Population growth is the minor issue here. The real problem is lifestyle and its scalability. This includes living in huge cities, far from where food is produced. With all energy (and other) savings, but considering the world’s actually existing population, we clearly must accept _the_enemy_is_us_. We are already beyond all limits and will go further. Increasing efficiency costs a lot of time, diligence, and even more energy and resources to develop and produce efficient products. Efficiency isn’t bad. On the contrary, efficiency may be the main factor why we are still alive, given our mantra of never-ending growth. But our net growth is in energy consumption per capita (worldwide average). Only it is impossible to scale this prosperity to the already living mankind.

Excellent paper, Kris. Your paper didn't mention this, but it was implied by the examples of automobiles and aircraft. For example, the BLS in the US doesn't measure inflation by comparing the early price of a good to the price later if there has been a "hedonic quality adjustment" in the auto. Everyone is aware there has been a very large increase in car prices from, say, 30 or 40 years ago, but the BLS can prove that isn't so by virtue of improvements in the cars, which negate the increase in price. Example: I bought a new GMC Suburban for $7,800 in the late 70's. A new Suburban is about $60,000. I see that as inflation of 700%. The BLS says not so. My old one had roll-up windows, a bench seat, rubber mats, and an AM radio.
A 2019 has cruise and climate control, a backup camera, power windows, and computers, and is not repairable by a handy car owner like my 70's Suburban was. The BLS gets to decide how much to value these "hedonic improvements" to prove that there is minimal inflation. I could also make the point that fuel use is probably higher in the new Suburbans, which get 18 mpg instead of the 13-14 mpg of the old ones, which were also much heavier. Because of highway congestion the average speed today is lower, spent in low gear. My Suburban in stop-and-go can get under 10 mpg. Because of highway improvements and 75-80 mph speed limits the Suburban gets abysmal economy, whereas with the old Suburban and 50-55 mph limits, economy was around 14. Not only do these newer vehicles cost far more, they still use the same or more fuel in a year. In only 5 years or so they lose 75% or more of their value, perhaps $40,000-$50,000. Which one would you buy if you had a choice?
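The fuel-use point in the comment above can be sketched numerically: real-world consumption is a mileage-weighted harmonic mean, so low-mpg stop-and-go miles drag the average down disproportionately. The 18 mpg rating and the roughly 10 mpg stop-and-go figure come from the comment; the 12,000-mile year and the 50/50 split are assumptions added purely for illustration.

```python
# Annual fuel use under a mixed driving pattern (assumed 50/50 split).
segments = [
    (6_000, 18.0),  # highway miles at the rated 18 mpg
    (6_000, 10.0),  # stop-and-go miles at roughly 10 mpg (from the comment)
]

gallons = sum(miles / mpg for miles, mpg in segments)
total_miles = sum(miles for miles, _ in segments)
combined_mpg = total_miles / gallons  # harmonic mean, weighted by miles

print(f"{gallons:.0f} gallons/year, combined {combined_mpg:.1f} mpg")
```

On these assumptions the effective economy lands near 13 mpg, close to the old truck's 13-14 mpg despite the higher window-sticker rating.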
Editorial Reviews (Sports): The Revolution continues as D-Generation X leads the stars of Raw into another year of chaos at New Year’s Revolution. Stars Triple H and Shawn Michaels, John Cena, Edge, Randy Orton, Ric Flair, Carlito, and more. Features an elimination chamber, which is a variation of the steel cage match.

Most Helpful Customer Reviews

Normally, this PPV is RAW's version of SmackDown's Great American Bash (or ECW's December To Dismember), in which WWE only focuses on their main events and pays little to no attention to the undercard, resulting in a horrible PPV. However, with ECW having the Elimination Chamber this year, RAW focused on the entire card and gave us a show worth watching.

Steel Cage Match: Intercontinental Champion Jeff Hardy vs. Johnny Nitro - This cage match was similar to the Bret Hart vs. Owen Hart match, in which they didn't turn it into a bloody massacre but did good wrestling inside the cage. If you've seen their previous matches, you know this will not disappoint, but the ending was something unique for a cage match.

Tag Team Turmoil: Highlanders vs. World's Greatest Tag Team vs. Jim Duggan & Super Crazy vs. Cade & Murdock vs. Cryme Tyme - Just like Cyber Sunday, this was just a filler match.
It had some weird booking, like Duggan & Crazy being a tag team, along with Cryme Tyme & WGTT not fighting each other when they had been feuding for the past month on RAW.

Ric Flair vs. Kenny Dykstra - This was a solid match, as Flair did what he had to do to elevate the young talent, but if you've seen their matches on RAW in the weeks before and after this match, you aren't missing anything.

Women's Champion Mickie James vs. Victoria - I liked that they had Victoria doing the "Kill Bill" gimmick, but this match was more focused on setting up the Mickie/Melina feud and having the divas get their revenge on Victoria than on showcasing what Mickie and the most underrated diva in WWE can do.

This was the right way to start off a nice year in 2007, and was definitely a great PPV.

Steel Cage match for the Intercontinental Championship: Jeff Hardy vs. Johnny Nitro (John Morrison) - This was a great way to start off an exciting event. These two put all of their skills into this match. They had some huge moments, like a sunset flip powerbomb off the top rope, Melina getting involved, and a brutal moment where Nitro was escaping and Hardy opened the door; Nitro slipped, landed on the door, and injured his crotch (which was also the funniest moment in the match), drawing a huge laughing reaction from Jerry Lawler. 4.5/5

Tag Team Turmoil for a shot at the World Tag Team Championship - This match included Cryme Tyme, The Highlanders, The World's Greatest Tag Team (the world's lamest tag team), Jim Duggan and Super Crazy, and Lance Cade and Trevor Murdoch. This was actually a pretty nice turnout of a match, with a few high-flying moments and the most surprising moment: Cryme Tyme actually winning a match, which would be big for their careers. I enjoyed this and thought that it was surprising, especially with the tag team of Jim Duggan and SUPER CRAZY! 3.75/5

Kenny Dykstra vs. Ric Flair - Well, after two good matches in a row, this match ruined the streak.
All this was, was just Kenny being a freaking jackass and beating a legend who can barely talk anymore without spitting in someone's face, so really, big whoop, because this match should never have been on the card. 2/5

Women's Championship: Mickie James vs. Victoria - This was another waste of time. These two women are great fighters, but here they just didn't put on a good match at all; this isn't even worth watching and is boring.

As far as the New Year's Revolution series goes, this is probably the second best of the three. The first was great due to the Elimination Chamber main event; the second was terrible, with its Elimination Chamber being the second worst behind the ECW Extreme Chamber - however, it ended excellently with Edge becoming champion. This edition of NYR was watchable, but nothing amazing. The main event was better than last year's, but not as good as the first NYR's main event.

An okay match, but not as good as the Inferno match that kick-started Armageddon on the last SmackDown PPV. The Hardy/MNM feud would stay fresh and entertaining for a while to come; however, the singles feud between Nitro and Jeff Hardy had become depressing by the time this match came about. Not as good as their Ladder match on RAW several weeks previously, but it got the crowd pumped, which always helps during a PPV. Some nice spots involving the cage, and an entertaining finish, but nothing special. WINNER: INTERCONTINENTAL CHAMPION JEFF HARDY 7/10

SPECIAL MATCH - TAG TEAM TURMOIL

This was announced on the night but didn't live up to turmoils at events such as Backlash 2005. Cryme Tyme and Duggan/Super Crazy (a very unusual tag team) were the fan favourites, and there were some good performances by the World's Greatest Tag Team, but it wasn't quick enough in its pace and could've been better. The winners of the match were supposed to receive a Tag Team Championship opportunity in the future, but 6 months later the winning team still haven't received their shot.
The big money, by now, has largely been spent on the 2017 free-agent market. If you see significant multiyear deals popping up in the coming months, chances are they are going to players still under contract to prevent them from testing free agency in 2018 or 2019. How can we assess the impact of the 2017 class? One way is to look at how the contracts ranked among the most lucrative at each position. What follows is an attempt to do just that, using average per year (APY) as the benchmark while also including money that is guaranteed for injury, skill or both.

Quarterbacks
1. Andrew Luck, Colts ($24.594M APY, $87M guaranteed)
2. Drew Brees, Saints ($24.250M APY, $44M guaranteed)
3. Kirk Cousins, Redskins ($23.944M APY/guaranteed)
4. Joe Flacco, Ravens ($22.133M APY, $62M guaranteed)
5. Aaron Rodgers, Packers ($22M APY, $54M guaranteed)
6. Russell Wilson, Seahawks ($21.9M APY, $61.542M guaranteed)
7. Ben Roethlisberger, Steelers ($21.850M APY, $64M guaranteed)
8. Eli Manning, Giants ($21M APY, $65M guaranteed)
9. Philip Rivers, Chargers ($20.813M APY, $65M guaranteed)
10. Cam Newton, Panthers ($20.76M APY, $60M guaranteed)

Someone missing? Yup. Super Bowl MVP Tom Brady's perennially team-friendly contract ranks No. 22 on this list. His current deal with the Patriots, which expires in 2019, included $30 million in guarantees and averages $15 million per year in this reckoning. The other missing name here is the Falcons' Matt Ryan, whose APY ($20.75M) falls just $10,000 short of Newton's. One player to keep an eye on is the Lions' Matthew Stafford, who is in position for a contract extension that should put him in the mix for this list. It also will be interesting to see when the Packers approach Rodgers about his deal, which has three more years on it. Cousins is on the franchise tag and, barring a major regression this season, it will keep him on this list whenever he signs a longer-term deal.

Running backs
1. Le'Veon Bell, Steelers ($12.083M APY/guaranteed)
2. LeSean McCoy, Bills ($8.01M APY, $26.55M guaranteed)
3. Jonathan Stewart, Panthers ($7.3M APY, $23M guaranteed)
4. Doug Martin, Buccaneers ($7.15M APY, $15M guaranteed)
5. Lamar Miller, Texans ($6.5M APY, $14M guaranteed)
6. Chris Ivory, Jaguars ($6.4M APY, $10M guaranteed)
7. DeMarco Murray, Titans ($6.375M APY, $15.25M guaranteed)
8. Ezekiel Elliott, Cowboys ($6.239M APY, $24.956M guaranteed)
9. Kyle Juszczyk, 49ers ($5.25M APY, $9.75M guaranteed)
10. Giovani Bernard, Bengals ($5.167M APY, $5M guaranteed)

This list provides a visual look at what we already knew: The 2017 class of running backs didn't cash in. Bell received the franchise tag, but the only other recent deal was for Juszczyk -- a fullback whom the 49ers say they will use as an "offensive weapon." McCoy's deal, renegotiated in 2015, is increasingly an outlier. Stewart's contract, meanwhile, was agreed upon five years ago. Elliott's high guarantee was a function of his draft slot as the No. 4 overall pick in 2016.

Wide receivers
1. Antonio Brown, Steelers ($17M APY, $19M guaranteed)
2. A.J. Green, Bengals ($15M APY, $15M guaranteed)
3. Julio Jones, Falcons ($14.25M APY, $47M guaranteed)
4. Demaryius Thomas, Broncos ($14M APY, $43.5M guaranteed)
5. Dez Bryant, Cowboys ($14M APY, $45M guaranteed)
6. T.Y. Hilton, Colts ($13M APY, $39M guaranteed)
7. Doug Baldwin, Seahawks ($11.5M APY, $24.25M guaranteed)
8. Keenan Allen, Chargers ($11.25M APY, $24.156M guaranteed)
9. DeSean Jackson, Buccaneers ($11.167M APY, $20M guaranteed)
10. Emmanuel Sanders, Broncos ($11M APY, $26.9M guaranteed)

Brown set the market by re-signing with the Steelers, but the only free agent to sign a top-10 deal with a new team was Jackson. That's a rarity for a player of Jackson's age -- he will turn 31 during the 2017 season -- but NFL player tracking last season revealed he hasn't lost much, if any, of his high-end speed.
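The APY benchmark behind these rankings is simply total contract value divided by contract length. A minimal sketch of the ranking method (the player names and contract totals below are hypothetical placeholders, not figures from the article):

```python
# Rank players by average per year (APY) = total value / contract length.
# The names and totals below are illustrative placeholders.
contracts = [
    ("Player B", 97.000, 4),    # $97M over 4 years
    ("Player A", 122.970, 5),   # $122.97M over 5 years
]

ranked = sorted(
    ((name, total / years) for name, total, years in contracts),
    key=lambda entry: entry[1],  # sort by APY
    reverse=True,
)
for rank, (name, apy) in enumerate(ranked, start=1):
    print(f"{rank}. {name} (${apy:.3f}M APY)")
```

Guaranteed money is tracked separately in the lists above because it does not factor into the APY ordering itself.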
The next-highest paid free agent to change teams was Alshon Jeffery, whose one-year deal with the Eagles is worth $9.5 million for this season (ranking him at No. 17 overall).

The Patriots dealt a fourth-round draft pick to Indianapolis for tight end Dwayne Allen and also received a sixth-round pick in return.

Tight ends
1. Jimmy Graham, Seahawks ($10M APY, $20.9M guaranteed)
2. Travis Kelce, Chiefs ($9.368M APY, $20.017M guaranteed)
3. Jordan Reed, Redskins ($9.350M APY, $22M guaranteed)
4. Rob Gronkowski, Patriots ($9M APY, $17.92M guaranteed)
5. Zach Ertz, Eagles ($8.5M APY, $21M guaranteed)
6. Charles Clay, Bills ($7.6M APY, $24.5M guaranteed)
7. Greg Olsen, Panthers ($7.5M APY, $12M guaranteed)
8. Jason Witten, Cowboys ($7.4M APY, $19M guaranteed)
9. Dwayne Allen, Patriots ($7.35M APY, $16M guaranteed)
10. Coby Fleener, Saints ($7.2M APY, $18M guaranteed)

The Patriots acquired Allen via trade from the Colts and now have two of the NFL's highest-paid tight ends. Otherwise, the big takeaway is that none of the free agents who changed teams this spring -- and there were many -- cracked this top 10. That list includes Martellus Bennett (Packers), Jared Cook (Raiders), Dion Sims (Bears) and Rhett Ellison (Giants).

Guards
1. Kevin Zeitler, Browns ($12M APY, $31.5M guaranteed)
2. Kelechi Osemele, Raiders ($11.7M APY, $25.4M guaranteed)
3. Kyle Long, Bears ($10M APY, $30M guaranteed)
4. David DeCastro, Steelers ($10M APY, $16M guaranteed)
5. T.J. Lang, Lions ($9.5M APY, $19M guaranteed)

The market for guards, long considered a secondary position, has exploded in recent years. Two players, Zeitler and Lang, jumped into the top five this year after Osemele set the bar in 2016. Two other free agents, Ronald Leary (Broncos) and Larry Warford (Saints), cracked the top 10. So did Joel Bitonio (Browns) and Laurent Duvernay-Tardif (Chiefs), who both signed extensions to existing deals.

Centers
1. Travis Frederick, Cowboys ($9.4M APY, $28.162M guaranteed)
2. Alex Mack, Falcons ($9M APY, $28.5M guaranteed)
3. Mike Pouncey, Dolphins ($8.95M APY, $22M guaranteed)
4. Rodney Hudson, Raiders ($8.9M APY, $20M guaranteed)
5. Maurkice Pouncey, Steelers ($8.827M APY, $13M guaranteed)

No one from the 2017 class came close to the top of the market. The closest was JC Tretter, whose APY of $5.583 million ranks No. 10 among NFL centers. Generally speaking, teams work hard to keep their starting centers because it is such an important leadership position.

The Chargers brought in left tackle Russell Okung to protect Philip Rivers' blind side.

Offensive tackles
1. Russell Okung, Chargers ($13.25M APY, $25M guaranteed)
2. Trent Williams, Redskins ($13.2M APY, $41.25M guaranteed)
3. Terron Armstead, Saints ($13M APY, $38M guaranteed)
4. Tyron Smith, Cowboys ($12.2M APY, $21.118M guaranteed)
5. Eric Fisher, Chiefs ($12M APY, $40M guaranteed)
6. David Bakhtiari, Packers ($12M APY, $16M guaranteed)
7. Riley Reiff, Vikings ($11.75M APY, $26.3M guaranteed)
8. Cordy Glenn, Bills ($11.573M APY, $36M guaranteed)
9. Joe Thomas, Browns ($11.5M APY, $29.5M guaranteed)
10. Andrew Whitworth, Rams ($11.25M APY, $15M guaranteed)

Left tackles were the biggest winners in the 2017 market, with Okung, Reiff and Whitworth all securing top-10 deals. The Panthers' Matt Kalil was not far behind. The Lions, meanwhile, made Rick Wagner one of the highest-paid right tackles with a deal that averages $9.5 million per year. When you consider the left tackle and guard markets together, you realize how motivated some teams were to upgrade their offensive lines. Concerns about the transition from college offenses, as well as the depth of the 2017 draft class, were contributing factors.

Defensive tackles
1. Ndamukong Suh, Dolphins ($19.063M APY, $59.955M guaranteed)
2. Fletcher Cox, Eagles ($17.1M APY, $63.299M guaranteed)
3. Marcell Dareus, Bills ($15.850M APY, $60M guaranteed)
4. Malik Jackson, Jaguars ($14.35M APY, $14.35M guaranteed)
5. Gerald McCoy, Buccaneers ($13.6M APY, $51.5M guaranteed)

The top of the 2017 class largely stayed home, without cracking the top five of this sub-group. Kawann Short signed his franchise tag with the Panthers. Brandon Williams received an APY of $10.5 million and guarantees of $27.5 million to return to the Ravens.

Defensive ends
1. Muhammad Wilkerson, Jets ($17.2M APY, $53.5M guaranteed)
2. Olivier Vernon, Giants ($17M APY, $52.5M guaranteed)
3. J.J. Watt, Texans ($16.667M APY, $51.976M guaranteed)
4. Jason Pierre-Paul, Giants ($15.5M APY, $60M guaranteed)
5. Calais Campbell, Jaguars ($15M APY, $30M guaranteed)

This list includes both 3-4 and 4-3 defensive ends. Campbell, 30, received a pretty extraordinary deal for a player his age, even if the guarantees were lower than some others in this group. The Giants' commitment to edge pass rushing, meanwhile, is pretty clear based on this list. Pierre-Paul's numbers are approximate given his recent agreement. Pierre-Paul was never available on the market after receiving the franchise tag and, in truth, the NFL's best pass-rushers rarely reach free agency.

Dont'a Hightower is returning to New England, getting $19 million guaranteed on a four-year deal.

Linebackers
1. Von Miller, Broncos ($19.083M APY, $70M guaranteed)
2. Justin Houston, Chiefs ($16.833M APY, $52.5M guaranteed)
3. Chandler Jones, Cardinals ($16.5M APY, $53M guaranteed)
4. Melvin Ingram, Chargers ($14.55M APY/guaranteed)
5. Clay Matthews, Packers ($13.2M APY, $20.5M guaranteed)
6. Jamie Collins, Browns ($12.5M APY, $26.4M guaranteed)
7. Luke Kuechly, Panthers ($12.359M APY, $34.363M guaranteed)
8. Nick Perry, Packers ($12M APY, $18.5M guaranteed)
9. Ryan Kerrigan, Redskins ($11.5M APY, $23.788M guaranteed)
10. NaVorro Bowman, 49ers ($11M APY, $15.3M guaranteed)

Most of this list is composed of 3-4 outside linebackers, who are pass-rushers and thus the most valuable. Again, elite players in that role rarely are exposed to the open market. Jones, for example, was traded from the Patriots to the Cardinals and then given the franchise tag before he agreed to terms. Collins' recent deal stands as a bit of an outlier given his production relative to this list, but the Browns have had to pay above market at several positions to retain or acquire talent. Dont'a Hightower was the best player at this position to hit free agency, and he wound up returning to the Patriots at an APY of $8.875 million.

Cornerbacks
1. Trumaine Johnson, Rams ($16.742M APY/guaranteed)
2. Josh Norman, Redskins ($15M APY, $50M guaranteed)
3. Patrick Peterson, Cardinals ($14.01M APY, $48M guaranteed)
4. Richard Sherman, Seahawks ($14M APY, $40M guaranteed)
5. Joe Haden, Browns ($13.5M APY, $45.078M guaranteed)
6. A.J. Bouye, Jaguars ($13.5M APY, $26M guaranteed)
7. Stephon Gilmore, Patriots ($13M APY, $40M guaranteed)
8. Janoris Jenkins, Giants ($12.5M APY, $28.8M guaranteed)
9. Darius Slay, Lions ($12M APY, $23.1M guaranteed)
10. Dre Kirkpatrick, Bengals ($10.5M APY, $12M guaranteed)

Whether or not it is true, NFL teams generally operate as if there is a massive shortage of cornerbacks. Big money goes to above-average players. Johnson's status as a consecutive franchise player is one example. So is the deal that the Jaguars gave Bouye, who is a promising player but didn't become a regular starter until 2016. It didn't make this list, but the deal that brought Logan Ryan to the Titans -- $10M APY, $16M guarantees -- was also strong.

Safeties
1. Eric Berry, Chiefs ($13M APY, $40M guaranteed)
2. Tyrann Mathieu, Cardinals ($12.5M APY, $35M guaranteed)
3. Reshad Jones, Dolphins ($12M APY, $33M guaranteed)
4. Harrison Smith, Vikings ($10.25M APY, $28.578M guaranteed)
5. Earl Thomas, Seahawks ($10M APY, $25.725M guaranteed)
6. Devin McCourty, Patriots ($9.5M APY, $28.5M guaranteed)
7. Malcolm Jenkins, Eagles ($8.75M APY, $21M guaranteed)
8. Tony Jefferson, Ravens ($8.5M APY, $19M guaranteed)
9. Tashaun Gipson, Jaguars ($7.2M APY, $12M guaranteed)
10. Kam Chancellor, Seahawks ($7M APY, $17M guaranteed)

Jefferson, the top safety to hit the market, got a deal that cracked the top 10 from the Ravens. But it didn't come close to what the Chiefs gave Berry, who spent last season under the franchise tag but vowed not to do it again. (Johnathan Cyprien got $6.25M in APY and $9 million to move from the Jaguars to the Titans.) Meanwhile, the Dolphins re-signed Jones a year before he would have been in position to replace Berry atop this list.

Steven Hauschka is signing with the Bills on a four-year deal.

Specialists
1. Stephen Gostkowski, Patriots ($4.187M APY, $10.1M guaranteed)
2. Justin Tucker, Ravens ($4.076M APY, $10.8M guaranteed)
3. Mason Crosby, Packers ($4.025M APY, $5M guaranteed)
4. Sebastian Janikowski, Raiders ($3.775M APY, $8M guaranteed)
5. Dustin Colquitt, Chiefs ($3.75M APY, $4.95M guaranteed)

Kickers are paid better than punters in the NFL. The best free-agent contract of this offseason has been the deal Steven Hauschka received from the Bills. Hauschka received $4 million guaranteed and an average of $2.95 million per year, ranking him 10th among kickers at the moment.
1. Introduction =============== The attractive properties of Carbon Fiber Reinforced Polymers (CFRPs) make this family of materials suitable for a wide range of high-responsibility structural applications. CFRPs exhibit fatigue and corrosion resistance combined with light weight, high specific stiffness, and strength \[[@B1-materials-07-04442]\]. CFRP components are manufactured to be near net shape; however, dimensional and assembly requirements commonly involve some machining operations. Trimming and drilling are the main operations required. They are critical, since they are performed on high-value components susceptible to machining-induced damage. Delamination, fiber pull-out, and thermal degradation \[[@B2-materials-07-04442]\], usually observed when machining with worn tools or inappropriate cutting parameters, can affect the performance of the composite component or the mechanical joint during service life \[[@B3-materials-07-04442],[@B4-materials-07-04442],[@B5-materials-07-04442]\]. Delamination, related to further strength reduction of the component, has received considerable attention in the scientific literature, using experimental and numerical approaches; see, for instance, recent advances in \[[@B6-materials-07-04442],[@B7-materials-07-04442]\]. However, mechanical delamination is not the only risk for the surface integrity of the component. Thermal damage, related to the low glass transition temperature of the matrix (around 180 °C for a typical epoxy resin in CFRPs), can cause matrix degradation and, thus, is also involved in ply separation \[[@B8-materials-07-04442]\]. Despite the potential risk of thermal damage when machining CFRPs, it has been analyzed in only a few works in the literature, mostly experimental. The measurement of the temperature at the cutting tool has been achieved by several authors using thermocouples. Chen \[[@B9-materials-07-04442]\] obtained the temperature reached at the flank surface during drilling.
A significant influence of the cutting speed was observed, with temperature increasing from 120 to 300 °C as the cutting speed increased from 40 to 200 m/min. Brinksmeier *et al.* \[[@B10-materials-07-04442]\] embedded a thermocouple at the tool tip for temperature measurement in drilling and orbital milling of hybrid Ti/CFRP/Al components. The drilling operation involved lower surface quality and higher temperatures than orbital milling. The installation of a thermocouple inside the tool gives only indirect information about the temperature level at the workpiece. Direct measurement of workpiece temperature was achieved in \[[@B8-materials-07-04442]\], complemented with temperature measurement at the cutting tool. Milling of CFRP was conducted with a carbide endmill in dry conditions. Two different techniques were used: an infrared thermographic camera for endmill surface temperature measurement, and a K-type thermocouple embedded in the CFRP for measurement of the temperature at the machined surface. The temperature at a depth of 0.3 mm beneath the machined surface reached 104 °C (at a cutting speed of 300 m/min), much lower than that measured at the tool--chip contact point. The influence of tool wear on the temperature induced during trimming was analyzed in \[[@B11-materials-07-04442]\]. Fresh tools induced temperatures below the glass transition temperature of the composite, while a critical level was reached using worn tools. In addition, machining parameters had a significant influence on the machined surface quality and cutting forces. The cutting temperature in rotary ultrasonic machining of carbon fiber reinforced plastic has recently been analyzed \[[@B12-materials-07-04442]\] using two techniques: a fiber optic sensor and thermocouples. Relations between input variables (ultrasonic power, tool rotation speed, and feed rate) and cutting temperature were obtained from experiments.
The authors found that the maximum cutting temperature decreased as ultrasonic power and feed rate decreased. On the other hand, as tool rotation speed increased, the maximum cutting temperature first increased and then decreased. Concerning the method of temperature measurement, the fiber optic sensor gave higher temperatures than the thermocouple method. The development of modeling tools able to predict the temperature distribution during composite cutting is a desirable objective because of its relation with damage. Prediction of mechanical damage in composite cutting has commonly been achieved using finite element analyses. Although this field is not as active as metal cutting, it is possible to find works focusing on orthogonal cutting of composites, involving two-dimensional (2D) models (see, for instance, \[[@B13-materials-07-04442],[@B14-materials-07-04442],[@B15-materials-07-04442],[@B16-materials-07-04442]\]), as well as three-dimensional (3D) approaches (see, for example, \[[@B17-materials-07-04442],[@B18-materials-07-04442]\], analyzing the validity of the 2D hypotheses). The main contributions in the field of modeling of composite machining have been summarized in a recent review \[[@B19-materials-07-04442]\]. Simulation of real cutting processes, such as drilling, involves a high computational cost because of the need to simulate tool rotation and feed movement together with damage and failure criteria for the workpiece. Such complex models for drilling have recently been developed, showing good correlation between measured and predicted torque, thrust force, and delamination area \[[@B7-materials-07-04442],[@B20-materials-07-04442]\]. Although mechanical effects in composite cutting have been analyzed using simulation tools, thermal effects have been neglected in these models.
The first approach to the numerical modeling of the thermal phenomena involved in the orthogonal cutting of CFRPs was presented in a recent work of the authors \[[@B21-materials-07-04442]\]. The model included heat generation due to friction at the chip/tool interface and was used for the prediction of intralaminar damage, delamination, and thermal damage, the latter accounted for in terms of the temperature level beneath the machined surface. The heat generated by plastic work can be assumed negligible in CFRPs because they exhibit an elevated elastic modulus with small deformations, even when breakage is initiated; the deformation energy, and thus the heat it generates, can be neglected. This behavior is quite different from that observed in metal cutting, which involves a large amount of plastic work converted into heat, leading to very high temperatures at the primary shear zone and, sometimes, to the formation of adiabatic shear bands (see, for instance, \[[@B22-materials-07-04442]\]). As the deformation energy is neglected, the only heat source considered in the model was friction at the interface. Thus, the estimation of the friction heat at the tool/chip interface is crucial in drilling. Measurement of temperature during industrial drilling is not possible, since the thermocouple technique is invasive. The main objective of this paper is the estimation of the heat amount from easy-to-measure, in-process variables: torque and thrust force. A simple analysis, based on an energy balance, is presented to obtain the heat generated at the interface indirectly from experimental tests. As far as the literature review carried out revealed, this approach has not been applied to the problem of drilling. However, it is possible to find interesting analytical models of composite impact in \[[@B23-materials-07-04442]\], based on an energy balance (impact processes share characteristics with drilling, mainly the penetration of the projectile/drill into the target/workpiece).
Once the frictional heat amount is available, it is possible to establish the temperature distribution with a simple numerical model accounting for thermal conduction in the composite. The detection of critical temperature levels at certain zones can be used for assessment during the definition of the drilling process of structural components, avoiding the risk of thermal damage. The establishment of maximum levels of thermal energy, directly related to the torque and thrust force evolution recorded during cutting, could be used as an indicator of excessive tool wear. The aim of the paper and its practical application are illustrated in [Figure 1](#materials-07-04442-f001){ref-type="fig"}. ![Relationships between the experimental, analytical, and numerical steps proposed.](materials-07-04442-g001){#materials-07-04442-f001} The paper is organized as follows: after this introduction, the second section presents the estimation of the heat developed; the third section deals with the simulation of heat propagation, analyzing different cases; finally, discussion and conclusions are presented. 2. Estimation of Frictional Heat ================================ A drilling process is performed at constant feed and rotary velocity. The control of the machine tool maintains these parameters, and the resultant torque and thrust force measured at the spindle depend on the cutting parameters, the material properties, and the characteristics of the cutting tool, including the contact behavior at the interface. [Figure 2](#materials-07-04442-f002){ref-type="fig"} illustrates the entrance of the drill through the composite during cutting. A simplified method to estimate the heat generated at the interface has been developed assuming constant feed and rotary velocity of the drill. The assumption of axial symmetry was adopted. Although CFRP strength is orientation dependent, a woven laminate was selected to minimize the effect of the anisotropy. 2.1.
Energy Balance ------------------- The differential work due to torque (d*W~T~*) and thrust force (d*W~F~*) during a differential time increment, d*t*, corresponding to a differential turn angle of the drill, dθ, is the result of several contributions. These contributions are summarized in Equation (1): the energy required for breakage of the composite (d*E~f~*), the kinetic energy transferred to the chip (d*E~c~*), and the amount of heat generated at the interface due to friction (d*Q*): d*W~T~* + d*W~F~* = d*E~f~* + d*E~c~* + d*Q* (1) The kinetic energy of the chip, once separated from the workpiece, can be neglected due to the small mass of the chip and the moderate velocity involved in cutting. A first estimation showed that this term was negligible compared with the rest of the terms of Equation (1) (around 0.005% of the energy required for breakage of the composite, d*E~f~*). Thus, the heat generated at the interface can be estimated as: d*Q* = d*W~T~* + d*W~F~* − d*E~f~* (2) The terms corresponding to the work developed by torque and thrust force were obtained from experiments. The torque and thrust force recorded at each time increment were derived from the discretization of the evolution of both variables with cutting time, measured during the drilling test. The term corresponding to the energy involved in composite breakage can be calculated considering the differential volume removed by the drill during a differential time, d*t*, corresponding to a differential drill turn, dθ: d*E~f~* = *w~f~*d*V~f~* (3) where *w~f~* is the specific energy of the woven composite breakage; and d*V~f~* is the differential volume associated with a differential turn of the drill, dθ. The specific energy can be estimated as *w~f~* = 2(1/2*X*ε*~f~*), where *X* is the strength of the woven composite (the same in directions 1 and 2 because of the woven architecture of the composite); and ε*~f~* is the ultimate strain of the composite. It is worth noting that the composite strength is orientation dependent, but the hypothesis of axial symmetry was necessary to avoid unaffordable computational costs.
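The balance above can be checked numerically. The sketch below is illustrative only (the function name, array-based discretization, and sample units are assumptions, not the authors' implementation): it discretizes the measured torque and thrust signals and applies the balance described in the text, i.e., frictional heat equals the work of torque and thrust minus the composite breakage energy, with the chip kinetic energy neglected.

```python
import numpy as np

def frictional_heat_increments(t, torque, thrust, omega, v_feed, w_f, dV_f):
    """Discretized energy balance: heat generated at the interface per time
    increment, computed as the work of torque and thrust minus the composite
    breakage energy (chip kinetic energy neglected, as in the text).

    t       : time samples [s], length N+1
    torque  : torque samples [N m], length N+1
    thrust  : thrust force samples [N], length N+1
    omega   : spindle angular velocity [rad/s]
    v_feed  : feed velocity [m/s]
    w_f     : specific breakage energy of the woven composite [J/m^3]
    dV_f    : volume removed per increment [m^3], length N
    """
    dt = np.diff(t)
    dW_T = torque[:-1] * omega * dt    # work done by the torque
    dW_F = thrust[:-1] * v_feed * dt   # work done by the thrust force
    dE_f = w_f * np.asarray(dV_f)      # energy consumed in composite breakage
    return dW_T + dW_F - dE_f          # heat generated by friction
```

With constant signals, the cumulative heat reduces to (T·ω + F·v~f~)·t~total~ − w~f~·V~total~, which offers a quick sanity check on measured data.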
The approach was similar to that used by Artero-Guerrero *et al.* \[[@B24-materials-07-04442]\] when modeling impact on a woven composite. The mechanical and thermal problems are uncoupled in the present, simplified model. Thus, the mechanical properties are considered temperature independent below 180 °C, and thermal damage is assumed for temperatures above 180 °C. The differential volume considered is presented in [Figure 2](#materials-07-04442-f002){ref-type="fig"}. From this figure the volume can be calculated as: where *L*~cut~ is the effective cutting edge length; and *f*~cut~ is the feed rate. It is worth noting that the volume of material removed depends on the stage of the drilling process. It is possible to distinguish the three different stages illustrated in [Figure 2](#materials-07-04442-f002){ref-type="fig"}: the entrance of the conical zone, the cut performed with the complete edge, and the exit of the drill. For the geometry of the drill used in the machining tests (described in the next section), these stages correspond to the following values of time and effective cutting edge length. where *R*~cut~ is the drill radius. The expression presented in Equation (2) was applied to real experiments involving the drilling of woven carbon composite. The specific conditions of the experiments, involving both new and worn tool geometries, are presented in the next subsection. ![Scheme of the drilling process: differential volume removed during the differential time d*t* and effective cutting edge at the different stages of drilling, from entrance to drill exit.](materials-07-04442-g002){#materials-07-04442-f002} 2.2. Application to Experiments ------------------------------- In order to apply the energy balance formulated in Equation (2) to a real case of drilling, experimental tests were performed on a woven CFRP composite. The plies, consisting of AS-4 carbon fiber and epoxy matrix, were manufactured by Hexcel Composites.
The specimens, with a stacking sequence of 10 plies with the same fiber orientation in all of them and a total thickness of 2.2 mm, were cut into plates. The mechanical properties of this material were obtained from the scientific literature \[[@B24-materials-07-04442]\]; see [Table 1](#materials-07-04442-t001){ref-type="table"}. The cutting tests were carried out in a machining center (B500 KONDIA, Kondia, Elgoibar, Spain), shown in [Figure 3](#materials-07-04442-f003){ref-type="fig"}. The machining center was equipped with a dynamometer (Kistler 9123C, Winterthur, Switzerland) for measurement of cutting forces and torque (see [Figure 3](#materials-07-04442-f003){ref-type="fig"}). The drill (diameter, 6 mm; point angle, 118°) was recommended by the manufacturer GUHRING (Albstadt-Ebingen, Germany) for CFRP drilling. Drilling tests were performed with a new drill and with a worn tool exhibiting flank wear (the wear mode commonly observed to be dominant in the drilling of CFRPs). A fresh tool and severe wear (flank wear = 0.3 mm \[[@B25-materials-07-04442]\]) were tested in order to study different conditions of cutting forces and torque and, in consequence, different levels of generated heat. Obtaining controlled worn geometries directly from wear tests is not easy; thus, the flank wear land at the clearance surface was generated artificially by grinding. materials-07-04442-t001_Table 1 ###### Mechanical properties of AGP 193-PW/8552 composite material \[[@B24-materials-07-04442]\].
Property | Value
--- | ---
Density, ρ (kg/m^3^) | 1570
Resin content (%) | 55.29
Longitudinal modulus, *E*~1~ (GPa) | 68
Transverse modulus, *E*~2~ (GPa) | 68
Major Poisson's ratio, ν~21~ | 0.21
Longitudinal tensile strength, *X*~T~ (MPa) | 880
Longitudinal compressive strength, *X*~C~ (MPa) | 880
Transverse tensile strength, *Y*~T~ (MPa) | 880
Transverse compressive strength, *Y*~C~ (MPa) | 880
In-plane shear strength, *S*~T~ (MPa) | 84

![The machining center used in the experiments was equipped with the dynamometer and acquisition system, and also with a system for chip aspiration.](materials-07-04442-g003){#materials-07-04442-f003} Several drilling experiments were performed in the ranges of cutting speed (25, 50, 100 m/min) and feed (0.05, 0.1, 0.15 mm/rev). From the observation of the force and torque evolution, the tests corresponding to cutting speed 50 m/min and feed 0.1 mm/rev, with fresh and worn tools, were selected. In the selected cases, it was possible to identify steady values of force and torque in the different stages of the drilling process, and energy levels large enough to consider the possibility of thermal damage in the matrix in the case of the worn tool. The characteristics of the workpiece and drill stated for the experiments allowed the calculation of the time intervals defined in Equations (5)--(7). Accounting for the drill tip angle of 118°, the thickness of the composite plate, and the drill diameter of 6 mm, *t*~1~ = 0.38 s; *t*~2~ = 0.88 s; and *t*~3~ = 1.26 s. As the feed and rotary velocities are known, the evolution of thrust force and torque allowed the calculation of the power consumed in the penetration and rotation movements. In [Figure 4](#materials-07-04442-f004){ref-type="fig"}, the total power consumed in both thrust and cutting movement (for cutting speed 50 m/min and feed 0.1 mm/rev) is presented.
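The quoted stage times can be checked from the drill geometry. In the sketch below, the function and the conical-tip formula are assumptions reconstructing the role of Equations (5)--(7), which are not reproduced in this extract; it yields values slightly above the quoted *t*~1~ = 0.38 s, *t*~2~ = 0.88 s, and *t*~3~ = 1.26 s, presumably because geometric details such as the chisel edge are ignored here.

```python
import math

def drilling_stage_times(d_mm, point_angle_deg, thickness_mm, v_c, f_rev):
    """Approximate stage boundaries of drilling (conical-tip entrance,
    full-edge cutting, drill exit), modeling the drill point as a cone.
    d_mm: drill diameter [mm]; v_c: cutting speed [m/min]; f_rev: feed [mm/rev].
    """
    n = v_c * 1000.0 / (math.pi * d_mm)        # spindle speed [rpm]
    v_f = f_rev * n / 60.0                     # feed velocity [mm/s]
    h_tip = (d_mm / 2.0) / math.tan(math.radians(point_angle_deg / 2.0))
    t1 = h_tip / v_f                           # conical tip fully engaged
    t2 = t1 + thickness_mm / v_f               # tip reaches the back face
    t3 = t2 + h_tip / v_f                      # drill point clears the plate
    return t1, t2, t3

t1, t2, t3 = drilling_stage_times(6.0, 118.0, 2.2, 50.0, 0.1)
```

For the conditions of the tests (cutting speed 50 m/min, feed 0.1 mm/rev), this also recovers the spindle speed of about 2653 rpm quoted in the figure captions.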
The curves were obtained with a fresh tool ([Figure 4](#materials-07-04442-f004){ref-type="fig"}a) and a worn tool ([Figure 4](#materials-07-04442-f004){ref-type="fig"}b), respectively. From the recorded signal, the amount of heat generated due to friction was obtained. The power consumed by peripheral friction was calculated as the average value of the total power once the exit stage is reached (*t* \> *t*~3~). During the entrance stage (*t* \< *t*~1~), the peripheral friction power (due to contact between the drill body and the hole wall) is null, while during the steady stage (*t*~1~ \< *t* \< *t*~2~) it varies linearly from 0 to the exit stage value. The power consumed by cutting edge friction was estimated as the total power minus the power consumed by composite breakage (computed with Equation (3), using the stages defined in Equations (5)--(7)) and minus the peripheral friction. Negative values at some points are the result of noise and of subtracting the peripheral power from the total power: the peripheral power is considered constant once the whole drill nose has passed through the specimen thickness, so when it is subtracted from the total power, negative values can appear; these values are not applied to the model, since there is no material left in the nose direction. ![(**a**) Power due to the drilling operation (spindle velocity 2653 rpm and feed 0.1 mm/rev) for a new tool; (**b**) power due to the drilling operation (spindle velocity 2653 rpm and feed 0.1 mm/rev) for a worn tool (with flank wear = 0.3 mm).](materials-07-04442-g004){#materials-07-04442-f004} The discretization of the curve of power *vs.* cutting time allows the thermal flux to the workpiece to be stated and the heat propagation to be analyzed in the finite element code. The heat propagation in the workpiece is analyzed in the next section. 3.
Numerical Modeling of Temperature Distribution ================================================= The numerical model was developed using the commercial finite element code ABAQUS/Explicit \[[@B26-materials-07-04442]\]. The aim of the model is the analysis of heat propagation during the drilling process in order to identify critical zones with an excessive temperature level. Simplifying hypotheses have been formulated. First of all, the model does not account for chip removal; in fact, it is an uncoupled thermal model. The assumption of axial symmetry was adopted in order to create a simplified model with a reasonable computational cost. It is worth noting that CFRP strength and thermal conductivity are orientation dependent, but the availability of low-computational-cost models to evaluate heat generation from experimental data requires the assumption of strong hypotheses. Unidirectional tape laminates present high anisotropy in mechanical and thermal properties; thus, a woven laminate was selected to minimize the effect of the anisotropy in the axisymmetric model. Complex models of composite drilling recently developed in the literature (see, for instance, \[[@B7-materials-07-04442]\]) involve elevated computational costs. These models account for chip removal and are able to predict mechanical damage at the machined surface, both intralaminar damage and delamination. However, up to the present, the mechanical analysis has not been coupled with heat generation and propagation. The main objective of the model developed in this paper is the prediction of thermal issues during drilling; mechanical damage, however, cannot be predicted. The model has been divided into zones, each corresponding to one drill revolution, i.e., a penetration equal to the feed (this depth of penetration per revolution is used as the time increment per step when applying the loads). The frictional heat generated in each time increment was calculated from the analysis explained in the previous section.
The proportion of the frictional heat energy allocated to the chip is characterized by the coefficient of heat partition. In composite cutting the chip is highly fragmented, and the adhesion mechanism at the tool/chip interface found in metal cutting, which characterizes the sticking/sliding zones (see, for instance, \[[@B22-materials-07-04442]\]), is not observed. The amount of heat transferred to the chip is therefore neglected, and the frictional heat is assumed to be transferred 50%/50% to the workpiece and the tool. The present paper is one of the earliest works dealing with thermal effects in composite cutting. Further improvements of the research in this field are desirable, and the heat partition in composite cutting should be analyzed soundly. It is worth noting that the nature of the two materials in contact (composite and tool material) could increase the amount of heat transferred to the tool; the extension of the thermally affected zone of the workpiece would then diminish. Simulations with lower percentages of heat transferred to the workpiece (25% and 40%, respectively) were carried out. In the first case, critical temperatures were not reached, even with the worn tool. In the second case, thermal damage appeared, but the affected zone was smaller than in the case corresponding to 50%. The 50%/50% heat partition can be treated as a starting point for the analysis of thermal problems in composite cutting and as an upper limit for the generation of damage. The scheme of the model is shown in [Figure 5](#materials-07-04442-f005){ref-type="fig"}, including boundary conditions and geometry. The model was meshed with 70,000 linear triangular elements with a size of 25 μm. With the element size used in the model, each simulation takes around 2 h, a reasonable computational cost.
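As a feel for the heat-propagation step, the sketch below is a minimal one-dimensional explicit finite-difference analogue, not the axisymmetric ABAQUS model of the paper: a constant frictional flux enters the machined surface and conduction carries it into the laminate. The geometry, flux value, and boundary treatment are assumptions; the density and the thermal properties follow the values quoted in the text (ρ = 1570 kg/m³, k = 5 W/mK, c~p~ = 1100 J/kgK).

```python
import numpy as np

def heat_1d(q_flux, t_end=1.0, L=5e-3, n=100, k=5.0, rho=1570.0, cp=1100.0):
    """Explicit 1-D conduction: flux boundary at x=0, ambient (293 K) far end.
    q_flux [W/m^2] models the frictional heat entering the machined surface."""
    dx = L / n
    alpha = k / (rho * cp)                   # thermal diffusivity [m^2/s]
    dt = 0.4 * dx * dx / alpha               # below the explicit stability limit
    T = np.full(n + 1, 293.0)                # initial/ambient temperature [K]
    for _ in range(int(t_end / dt)):
        Tn = T.copy()
        # interior nodes: standard explicit update
        T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2.0 * Tn[1:-1] + Tn[:-2])
        # half-cell energy balance at the heated surface node
        T[0] = Tn[0] + 2.0 * dt / (rho * cp * dx) * (q_flux + k * (Tn[1] - Tn[0]) / dx)
        T[-1] = 293.0                        # far boundary held at ambient
    return T

T = heat_1d(q_flux=1e5)   # 0.1 W/mm^2 of frictional flux for 1 s
```

For a semi-infinite solid, the analytical surface temperature rise after time t is 2q√(αt/π)/k, about 38 K for this illustrative flux, so this level alone stays far below the 180 °C damage threshold; the worn-tool cases in the paper involve much higher local fluxes concentrated at the cutting edge.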
The element size was stated after several iterations; no change in the temperature distribution was observed when the element size was lower than 25 μm, while the computational cost increased. The mechanical properties of the workpiece are summarized in [Table 1](#materials-07-04442-t001){ref-type="table"}. Thermal properties for CFRPs are reported over a wide range in the literature; thus, the values used in this work (thermal conductivity 5 W/mK and specific heat 1100 J/kgK) were averaged from several references covering different applications \[[@B4-materials-07-04442],[@B8-materials-07-04442],[@B21-materials-07-04442],[@B27-materials-07-04442],[@B28-materials-07-04442]\]. ![Scheme of the numerical model.](materials-07-04442-g005){#materials-07-04442-f005} The procedure of the simulation is described in the following. In a generic time step (equal to the time of one drill revolution), the amount of heat along the cutting edge and lateral wall is calculated by applying the analytical model to the measured thrust force and torque, as explained in the previous section. At the end of each step, the layer of elements corresponding to the chip area removed in one revolution of the drill is eliminated from the model (the layer thus becomes inactive for heat propagation), and the heat corresponding to the subsequent step is applied. 4. Results and Discussion ========================= The numerical model was applied to the analysis of heat propagation and used to study the effect of tool wear. The effect of wear on torque and thrust force was included in the estimation of the heat generated, and the numerical model allowed the estimation of the temperature distribution and the establishment of critical levels. [Figure 6](#materials-07-04442-f006){ref-type="fig"} shows the evolution of the temperature fields as the entrance of the drill progresses.
The cases shown in [Figure 6](#materials-07-04442-f006){ref-type="fig"} correspond to drilling tests performed at cutting speed 50 m/min and feed rate 0.1 mm/rev. [Figure 6](#materials-07-04442-f006){ref-type="fig"}a,b present, respectively, a fresh drill and a worn tool. As observed at the three stages of drill entrance, the maximum wall temperature occurred at the exit of the hole. It is worth noting that this zone also experiences mechanical delamination \[[@B7-materials-07-04442]\]. Both effects would superpose, inducing combined thermal and mechanical damage. For a typical epoxy-based CFRP material, the initiation of resin degradation can occur at a temperature of approximately 180 °C. Thermal damage at this temperature can create cracks leading to the onset of delamination and strength reduction \[[@B8-materials-07-04442],[@B29-materials-07-04442]\]. The maximum temperature at the wall was significantly higher in the case of the worn tool, with a temperature level above 180 °C (453 K) over a more extended area (reaching a depth of 275 μm below the machined wall). It is clear that this value of wear produces an unacceptable temperature level. In the case of a new tool, the area reaching this temperature is also significant (150 μm beneath the machined surface). Although not directly comparable, the temperature distribution predicted with the model is of the order of that measured in milling CFRP \[[@B8-materials-07-04442]\], where the temperature at 0.3 mm beneath the machined surface reached 104 °C; drilling is thus more critical from the thermal point of view. ![Predicted temperature (K) for tests developed at cutting speed 50 m/min and feed 0.1 mm/rev (grey zone represents temperature higher than 180 °C, 453 K): (**a**) fresh tool; (**b**) worn tool.](materials-07-04442-g006){#materials-07-04442-f006} 5.
Conclusions ============== This work focuses on the prediction of the workpiece temperature during drilling of woven CFRP composites. The approach combines experimental testing (to establish the evolution of thrust and torque with cutting time), analytical modeling (to estimate the heat flux at the tool/workpiece interface), and numerical simulation (to analyze the heat propagation and the maximum temperature level in the workpiece). The main contribution of the work is the development of this combined approach for the prediction of thermal damage. The model was applied to two different real cases of drilling: a fresh tool and a worn tool (with a significant level of flank wear). The numerical model showed the maximum temperature occurring at the hole wall, close to the exit of the drill, a zone where mechanical delamination is commonly observed. The occurrence of thermal damage, in the case of excessive wear, enhances the risk of defects at the exit of the hole. The model is simple and very efficient from the computational point of view. The problem with realistic models of drilling, including penetration and cutting movement and element erosion, is their computational cost. The implementation of simulation tools in industry for assistance during manufacturing requires a rapid response. The model proposed could easily be implemented to detect excessive levels of thermal power due to inappropriate cutting parameters or excessive tool wear. The authors acknowledge the financial support for the work from the Ministry of Economy and Competitiveness of Spain under the project DPI2011-25999. Carlos Santiuste and Alvaro Olmedo developed the analytical model for the estimation of heat. José Díaz-Álvarez and María Henar Miguélez developed the numerical model and the drilling experiments for the real application. All authors participated in the manuscript writing and figure preparation. The authors declare no conflict of interest.
Q: Can I increase the thickness of an outlined path in Illustrator? I have downloaded some SVG icons from the web which are all outlined paths. In Illustrator I would like to make those paths thicker. Is there a way to do this? I know that when a line is still in stroke mode you can just adjust the size of the stroke, but once it has been converted to outlines I don't know if this is still possible. A: Yes, you can make the outlined path thicker. The simplest way is to just apply a stroke to the outlines. This will then be added to your existing shape (so remember it needs to be 1/2 the additional weight you want, since a centered stroke grows the shape on both sides of each edge). Closed outlines may need this done to both sides. A somewhat cleaner way would be to offset the outline. I suggest using Effect → Path → Offset Path... as it's nondestructive, so you can change your mind later (as opposed to Object → Path → Offset Path...). You can then later expand this if you need to bake the effect in. Image 1: Offsetting the path to create thicker outlines (use negative values for thinner ones). It is also possible to reduce the outlined stroke back to a stroke. To do this, measure the distance between the outlines, then delete one side and offset the remaining path by half the distance. This is slightly less work for closed paths, as you don't need to clean up after yourself. Image 2: Reversing the expanded path back to a stroke. A: Since the outlined art consists simply of filled objects, the intuitive solution would be to add an exterior stroke of half the amount by which you want to increase the initial stroke weight. For example, if the outlined stroke was 1pt and you want to change it to a 2pt line, you would add a 0.5pt exterior stroke to the outline. However, some caveats I can think of off the top of my head: adding strokes to unjoined outlines will add weight to the ends, whereas with normal path stroke behaviour the stroke is capped at the endpoints by default; and it's a lot harder to reduce the weight of the stroke after it's outlined.
Luckily, you only want to thicken it, so you don't really have to worry about this. There are probably other caveats; working with outlines when you want paths is never ideal. Nonetheless, the short answer is yes, it is entirely possible. You just have to be a little craftier.
Şahika Tekand

Şahika Tekand (born 1959) is a Turkish actress. She has appeared in more than fifteen films since 1987.

Biography

Tekand graduated from the Fine Arts Faculty, Department of Theatre and Acting, at Dokuz Eylül University in 1984 and received her PhD in 1986.

Selected filmography

References

External links

Category:1959 births Category:Living people Category:Turkish film actresses
1. Field of the Invention

This invention relates to improvements in a mechanism for locking both the opened and closed states of a foldable baby carriage.

2. Description of the Prior Art

An example of a foldable baby carriage to which this invention is applicable is disclosed in Japanese Patent Application No. 132136/1979 (Japanese Patent Application Laid-Open No. 57574/1981, Japanese Patent Publication No. 50705/1982; substantially corresponding to U.S. Pat. No. 4,317,581). The baby carriage disclosed therein comprises, as parts which enable the folding thereof, a pair of rear legs extending rearwardly downward from the middle of the baby carriage, a pair of support angle members turnably connected respectively to the middle portions of said rear legs so that they are turnable to extend along the upper or lower halves of the rear legs, a pair of push rods turnably connected at their lower ends to the middle portions of the support angle members, a pair of sleeves axially slidably installed adjacent the lower ends of the push rods, and springs for urging the sleeves to slide downward. Further, each support angle member is formed at its end (the other end) opposite to one end thereof connected to the rear leg with an engaging portion engageable by the sleeve for locking the opened state. In such a baby carriage, when it is opened, the support angle members are positioned along the upper halves of the rear legs while the other ends of the support angle members are aligned with the push rods, whereupon the sleeves slide downward until they engage the open state locking-purpose engaging portions of the support angle members. Thereby, the aligned state of the push rods and support angle members is maintained, with the result that the opened state of the baby carriage is locked. Further, Japanese Patent Application No. 84159/1980 (Japanese Patent Application Laid-Open No. 11168/1982, Japanese Patent Publication No.
32065/1983) discloses a baby carriage of the same basic construction as in said first application, with the following improvements applied thereto. In the baby carriage of this second application, to lock both the opened and closed states, the support angle member is additionally formed with a closed state locking-purpose engaging portion, while the sleeve is designed to be engageable with both the opened and closed state locking-purpose engaging portions of the support angle member. In the opened state of the baby carriage, the situation is the same as in the first application described above, but in the closed state of the baby carriage, each support angle member is positioned along the lower half of the rear leg while the middle portion of the support angle member intersects the push rod, and in this state, each sleeve engages the closed state locking-purpose engaging portion of the support angle member, thereby locking this closed state. According to this second application, both the opened and closed states of the baby carriage are locked by using such common members as the sleeves, so that a reduction in the number of parts can be expected. However, in said second application, when the simplicity of operation is further investigated, it is seen that there is still room for improvement. That is, when it is desired to change the opened or closed state of the baby carriage to its closed or opened state, respectively, it is necessary to first remove the locking. This removal of the locking is attained by sliding the sleeve along the push rod to escape from the opened or closed state locking-purpose engaging portion, but since the sleeve is urged by a spring to move in a direction for engagement with the opened or closed state locking-purpose engaging portion, it is necessary that the force required to cause the sleeve to escape from said engaging portion be applied continuously at least in the early period of operation for opening or closing the baby carriage.
On the other hand, in the case of means being provided for maintaining the sleeve in its state escaping from the engaging portion against the force of the spring, it is necessary that at the end of the opening or closing operation, said means for maintaining said state of escape be operated again to allow the sleeve to engage the engaging portion. As for a technique for eliminating said inconvenience in operation, there is one disclosed in Japanese Patent Application No. 66274/1981 (Japanese Patent Application Laid-Open No. 182566/1982). In this third application, which is an improvement on said second application, the arrangement for maintaining the state of the sleeve escaping from the engaging portion is adopted on the one hand and on the other hand means is provided for canceling the escape state of the sleeve in the course of operation from the opened to the closed state of the baby carriage and in the course of operation from the closed to the opened state thereof, so that finally, when the baby carriage is brought to its opened or closed state, the sleeve automatically engages the opened or closed state locking-purpose engaging portion. In this third application, there is provided a lock start member arranged like a cam adapted to rotate together with a foldable push rod connecting member for connecting a pair of push rods, said lock start member being adapted to act on an operating lever which controls the sleeve movement to prepare for locking the opened or closed state of the baby carriage. The lock start member is adapted to turn in response to the folding movement of the push rod connecting member, and the folding movement of the push rod connecting member is attained because the distance between the pair of push rods changes depending on whether the baby carriage is in its opened or closed state.
One of the most common knee surgeries currently being performed is the anterior cruciate ligament transplant. In this operation the anterior cruciate ligament is replaced by a graft from the patient's patellar tendon. The graft includes bone plugs at both ends of the tendon which are typically removed from the patient's own patella and tibia using a hammer and chisel and perhaps also a standard reciprocating saw. The bone plugs of the resultant graft must then be trimmed to an approximately circular cross section, typically either 9 millimeters or 10 millimeters in diameter, for insertion into drilled-out femoral and tibial tunnels. Since the chiseled bone plugs are approximately triangular or rectangular in cross section and the desired plugs are to be circular, obviously more bone must be removed from the patient than if the means were available to cut a circular cross-section plug from the donor site to begin with. Also, the bone-trimming process can be tedious, inexact, and quite costly, since the patient must remain under anesthesia and the remainder of the operating team must stand by as one person trims the bone plugs. More time is lost if the bone plugs must be re-trimmed after the initial attempt at insertion into the transplant site. Clearly, a substantial improvement to this operation could be realized if a tool were available to initially cut properly sized circular cross-section plugs from the donor sites. Attempts have been made to use a conventional hole saw for cutting the plugs, but the results have not been satisfactory. A conventional hole saw cannot easily cut out a plug which must be parallel to and congruent with the surface of the bone because of the presence of the hole saw shaft. The proper entry angle could only be achieved if the bone portion opposite to the direction of the cut were first removed.
This is an unsatisfactory solution since the patient is also the donor and it becomes therefore quite important to conserve intact as much of the donor's bone as possible. What is needed is a shaftless hole saw, which is the object of this invention.
Launched in 1974, Gentlemen by Givenchy is a classic, masculine scent that has helped define what it means to be a true gentleman. It is a carefully crafted blend of notes of tarragon, bright cinnamon, exotic patchouli, vetiver, civet and Russian leather. It is an ideal scent for the man of impeccable taste and high standards who aspires to be a true gentleman. The smooth elegance of Gentlemen makes it a perfect scent for the daytime, while its intriguing elements help it transition to the evening. Gentlemen by Givenchy has top notes of tarragon and fiery cinnamon. It has heart notes of bright patchouli and vetiver. It finishes with masculine base notes of civet and Russian leather. O-G643

Get carried away by this luxurious-feeling and luxurious-smelling Shea butter. 100% unrefined and all-natural, all in a convenient 1 oz take-anywhere-size jar. Never be without the Shea butter you love so much! M-P910
Believe those who are seeking the truth. Doubt those who find it. Andre Gide Thursday, March 29, 2018 Inflation and Unemployment (Part 2) In my previous post (Inflation and Unemployment), I reviewed what I thought was a fair characterization of the way the Federal Reserve Board staff organize their thinking about inflation and unemployment, as well as how this view of the world was at least partly responsible for the "hawkish" overtone of current Fed policy. I also suggested that the inflation and unemployment dynamic might be better understood through the lens of an alternative theory that emphasized the supply and demand for money (broadly defined to include U.S. treasury debt). I want to thank Paul Krugman for taking the time to critique my post and draw attention to an important issue that concerns U.S. monetary policy makers today (see: Immaculate Inflation Strikes Again). I was only a little disappointed to learn that I agreed with almost all of what he wrote in his column. But if this is the case, then what are we debating? And more importantly, how does it matter, if at all, for monetary policy? The amount of disagreement in macroeconomics is often exaggerated and I think this has definitely been the case here. While we may disagree on some things, we seem to agree on the most important part, namely, on the present conduct of U.S. monetary policy. Krugman begins his piece by stating three questions. Let me state the questions, followed by my own answers and comparisons. 1. Does the Fed know how low the unemployment rate can go? I have quipped before that this is one case in which the Fed can definitely count on a zero-lower-bound being in effect (and this is not just in theory, Switzerland had virtually zero unemployment throughout the 1960s, with low inflation I might add). But what this question is really asking is how low can the "natural" rate of unemployment go? I agree with Krugman: we don't know. 
But I'll further add: we don't even know if a "natural" rate of unemployment exists in the first place. It's just a theory, after all (which is not to say it shouldn't be taken seriously, only that we need to keep that important caveat in mind). 2. Should the Fed begin tightening now, even though inflation is still low? This is legitimately debatable--and the FOMC is presently debating it. My own view, on balance, seems presently more aligned with the "doves" on the committee. And so I also agree with Krugman on this score, namely, that the Fed could be tightening too aggressively. Krugman suggests that there are several reasons supporting his view and he mentions a few of them. They are all legitimate reasons, in my view. But I could add more reasons, based on my own preferred theory of inflation. The demand for money (broadly defined to include U.S. treasuries) appears to remain elevated. This disinflationary force has been in place for a long time and could, as I explain here, account for the lowflation phenomenon (see also here). If you follow that link, you'll note that I quote Krugman approvingly in regard to his view about monetary policy in a liquidity trap. Perhaps I am wrong, but I read Krugman here as not recommending that the Japanese lower their unemployment rate to raise inflation. Instead, he appeals to the model I alluded to in my post: a monetary-fiscal theory of inflation. If the Japanese want inflation, just cut taxes and finance social security spending by printing JGBs (as I recommended here). I'm not sure what this has to do with "immaculate" inflation. (I did learn from Nick Rowe that "maculate" is indeed a word.) 3. Is there any relationship between inflation and unemployment? I think it would be odd for any macroeconomist schooled in general equilibrium to suggest that the answer to this question is unequivocally no. The answer is yes. The real question is what type of relationship? 
In labor market search theories of unemployment, where firms and workers bargain over a joint surplus, a low unemployment rate can result in a higher real wage because workers have greater bargaining power. If a decrease in the unemployment rate leads to a rise in the real wage, it could, ceteris paribus, have an effect on the price-level (and, if prices are temporarily sticky, the adjustment could come along other margins). But an increase in the price-level is not the same thing as an increase in the inflation rate (though short-run price-adjustment costs can transform a price-level effect into a short-term rise in measured inflation). For workers to afford to buy goods in the presence of ever-rising prices, their bank accounts are going to have to grow accordingly. Ultimately, this can only happen in aggregate if the aggregate quantity of money is growing, either through the banking sector or through an increase in the supply of outside money (including treasury debt). This is the sense in which I think inflation has to be a monetary phenomenon and that, moreover, the actual rate of inflation is ultimately not governed by whether unemployment is living above or below its "natural" rate, whatever that is. So perhaps there's some room for debate on point 3. But we should be careful not to portray the question as an "either/or" issue. We could just be two blind men, feeling different parts of the elephant--the two interpretations are not necessarily inconsistent with each other. We should try to work this out. On the plus side, it seems we are led to the same policy recommendation. This is something worth noting. (I plead guilty on the score of needlessly antagonizing people who "believe in" the Phillips curve. Rather than suggesting we abandon the theory, I could instead have suggested we supplement it with the monetary view.) Does the debate over question 3 matter?
Yes, it could, because different interpretations of how the world works usually--though not always--imply something different about optimal policy. The Phillips curve theory of inflation suffers from a free parameter problem: the natural rate is unobservable and hence, one can always appeal to a shift in the natural rate to explain away discrepancies with the data. However, the monetary theory I prefer also suffers from a free parameter problem: money demand is not directly observable either. I can always appeal to some unobserved shift in money demand to explain away discrepancies with the data. For this reason, it would be useful for economists to identify "robust" policies--policies that can be expected to deliver good results regardless of which theory best describes the world we are living in. Is the Phillips curve view of inflation contributing to a policy mistake? I wanted to suggest in my post that it is, although this is not necessarily a fault of the theory as much as of how it is applied. That is, there may be no policy mistake in the making if the FOMC simply lets its estimate of the natural rate fall freely as evidence of impending inflation fails to materialize. However, this is not what is happening. As Jim Bullard explained to me, he believes that Phillips curve proponents have a (strictly positive) lower bound on their estimate of the natural rate. The unemployment rate is so low now -- how can it possibly go any lower -- this has to lead to inflation in the near future -- it just has to. We'd better start raising now, before we find ourselves behind the curve. Here is where the "monetarist" view could temper such resolve. Granted, the global outlook is looking relatively rosy, and fiscal policy seems expansionary--these are both inflation risks from a monetarist perspective. On the other hand, there is considerable uncertainty in this outlook, not the least of which is presently being fueled by talk of a global trade war.
In uncertain times, consumers and investors are likely to lower their demand for goods and services--increasing their demand for safe assets, like U.S. dollars and U.S. treasuries. We can see these concerns weigh on long-bond yields. Market-based inflation expectations (like the 5yr-5yr forward) seem well-anchored. Current inflation is running below target. All of this suggests that the Fed can afford not to move aggressively at this time (to be fair, the FOMC regularly emphasizes the "data dependent" nature of its policy path). And yes, this is consistent with PC advocates who are willing to let their estimate of the NRU decline in line with the evidence. This post is already getting too long and so I wouldn't blame you if you stopped reading here: the main points I wanted to make have been made. Still, I have a bit more to say, so in case you are interested... Krugman presents the following data for Spain. He writes "Consider, for example, the case of Spain. Inflation in Spain is definitely not driven by monetary factors, since Spain hasn't even had its own money since it joined the euro. Nonetheless, there have been big moves in both Spanish inflation and Spanish unemployment:" Krugman asserts that because Spain doesn't have its own monetary policy, monetary factors were not responsible for swings in Spanish inflation and unemployment. But my interpretation of the great crash and subsequent rise in unemployment is that it was caused by a large positive money demand shock (where again, I stress, by money I include safe government debt). This positive money demand shock (flight to safety) is just the opposite side of what Krugman and others would label a negative aggregate demand shock. So once again, I think Krugman is digging moats (perhaps unintentionally) where he could be building bridges.
The other thing I should like to point out about the Spanish data is whether it suggests that low unemployment forecasts future inflation (which is really what my post was about). A naive reading of the data above suggests that low unemployment actually seems to forecast low inflation. Again, this suggests caution in using the unemployment rate to forecast inflation. Finally, on Krugman's broader point: "economics is about what people do, and stories about macrobehavior should always include an explanation of the micromotives that make people change what they do. This isn’t the same thing as saying that we must have “microfoundations” in the sense that everyone is maximizing; often people don’t, and a lot of sensible economics involves just accepting some limits to maximization. But incentives and motives are still key." 10 comments: Loved these last two posts, David. I felt, though, that you walked right up to the edge of what would have truly aced your argument at the end and then left me sighing on the issue of relative demand for nominal wealth and its connection to inflation. There is one other factor involved in the hoarding or release of nominal wealth aggregates, and that is HOW nominal wealth is held (or accrued) in the first place. As you increase wealth/income inequality, nominal wealth aggregates are less likely to see disposal in a boom (or if it does, it is in Minsky's second price setting mechanism - that of real and financial assets, not goods and services). Thus, the mistake being made, even by those correctly focused on broad money, is in ignoring how nominal wealth is held and the fact that - of course - the rich are far less likely to release their excess nominal holdings into the consumption of goods and services. This, then, is the most profound difference between the impact of excess nominal wealth on inflation in this century versus the last. 
Those most likely to consume do not possess much in the way of excess assets (or any net worth at all, in many cases). Finally, low U3 needs to be expressed in the context of job quality, not just quantity. The quality of U.S. jobs has deteriorated markedly in favor of more low wage/low hours positions since 1990. This trend, which I am building into an index at Cornell, is an enormous offset to the positive effect of improvement in the headcount of those with a job. And, of course, it is an additional expression of wealth and income polarization. In the op-ed, when the CPI was at 3%, Friedman chastised the Fed for being too tight, despite the fact that the Fed had cut the funds rate in the previous three years to 3% from 10%. Through the Reagan years, the WSJ pushed the Fed to loosen up, as did the Reaganauts. Indeed, Reagan himself suggested placing the Fed into the Treasury (where it would answer to the Oval Office, obviously). The White House loathed Volcker, and packed the FOMC against him. Today, no right-wing economist, nor the bulk of macroeconomists, would dare utter such heresy as, "3% inflation is okay," or "The Fed is too tight." Indeed, the Fed recently adopted a 2% inflation goal, which evidently is actually an inviolate and perhaps radioactive ceiling (judging from Fed policies). So, decades after the inflation era, long after the PCE sank into low single digits--only then did academics, central bankers and related parties start fervently genuflecting to the sub-2% inflation totem, or even zero inflation (Charles Plosser has rhapsodized about deflation, usually with a seraphic expression on his face). Why this inflation obsession now, as we recede further and further from the inflation days? Well, I hope that others will chime in as well, but here's what I think. The current generation of monetary policy makers still have the 70s in their minds. You are right that official measures of inflation did not break double digits for long, but inflation was high.
I have a vivid recollection, as a kid, of buying potato chips at 5 cents a bag, then 10 cents, then 15 (with the bags getting smaller at the same time). But perhaps even more importantly, high inflations tend to be volatile inflations. In any case, rightly or wrongly, the general impression was that this high and volatile inflation was bad and that Volcker ended it at a terrible price. The Fed never wants to be placed in that position again. And I think that this, more than anything else, explains the extreme aversion to inflation. History shows that when the inflation anchor is let loose, it becomes hard to control. As for an agenda behind the anti-inflation extremists, who do you have in mind exactly? I'm pretty sure there's no political agenda on the FOMC. These people are committed to having the Fed fulfill its Congressional mandates, and that's it. Price stability (interpreted as 2% average long run inflation) is one of the Fed's important mandates. Well, I think your answer is right in some measure...but it does not explain the increasing squeamishness about inflation that developed AFTER the 1990s, as the high single-digit inflation of the late 1970s-early 1980s receded further into the past. As I said, in 1992, a Milton Friedman could accept 3% inflation...and that acceptance was only 12 years after the peak, 9% PCE inflation of 1980. The Reaganaut-WSJ crowd basically said 4% inflation was good enough, right after the worst inflation. They lived through your potato-chip days too. (Being the literate sort, I noticed comic book prices went up.) But who today can say, "Sure, 3% inflation. Piece of cake--worth it to get more output."? So a building inflation-phobia happened long AFTER the high single-digit inflation days...but what? Your memory of potato-chip bags suggests prices more than tripled in your youth. Maybe you had a long youth (you have company in that case). Unfortunately, the only FRED series I can find starts in 1985.
It shows potato-chip prices not doing much until 2000, then curiously rising sharply through the Great Recession. https://fred.stlouisfed.org/series/PCU3119193119191 BTW, slicing and then pan-frying potatoes in salt, pepper, butter and oil is cheaper and delivers even more satisfaction than potato chips. Even more calories. Not advised. As for agendas at the Fed or FOMC...well, that is a long answer. There might be industry capture at a federal regulatory agency. You can fight deflation through money-financed tax cuts on FICA levies (yay!), or you could stuff commercial banks full of reserves, and then pay banks to do nothing with the reserves. No wry comments? There might be class bias in a 4.75% natural rate of unemployment target (which works out to having about 1.3 people looking for work for every job opening). Oh, how nice. The Fed Beige Books are obsessed with worker shortages, but much, much less so with housing scarcity (induced by property zoning). Commercial banks lend heavily on property. Zoned property. There are 12 regional bank seats on the FOMC, but no labor seats, no manufacturing seats, no construction seats. Do bondholders want interest rates to go up? I wonder if the Fed is a lot like the USDA. It works closely with one industry... In terms of broad representation, you should remember that the 12 regional Feds have their own boards of directors drawn from a variety of local sectors. Moreover, many of the regional Feds have branch offices with their own boards of directors (the St. Louis Fed, for example, has branches in Louisville, Memphis and Little Rock). True about the regional bank presidents, although they often turn out to be the extremists regarding inflation fighting. In general I do not think the system of regional bank presidents is a sterling example of transparency, accountability, and democracy in action.
There is actually something to be said in favor of Ronald Reagan's idea of placing Federal Reserve policy within the Treasury Department. This would leave accountability for monetary policy in the president's office, and citizens could vote accordingly. The opaque nature of the Federal Reserve results in all but a minute percentage of the US public understanding monetary policy. If anyone does. The fact that the monetary policy-making apparatus is so incomprehensible aids and abets industry capture of the Federal Reserve, and provides a cocoon for the zany ministries that survive for decades after empirical results vanish. As far as I know, some people argue that the U.S. has a low inflation rate now because of AMAZON. Amazon has been simplifying the product distribution process, so buying and selling cost less. That might be one of the reasons; of course, fixed or lower real wages driven by deteriorated job quality could be included as well. Now I wonder what you think about AMAZON as one of the reasons, because you didn't mention it. I think I know where this is going. There is a theory which does away with non-accelerating inflation completely and offers up trade-offs inherent to government-subsidised ELR programs. What do you make of the talk surrounding these theories?
Transgenic mouse model for breast cancer: induction of breast cancer in novel oncogene HCCR-2 transgenic mice. Transgenic mice containing novel oncogene HCCR-2 were generated to analyse the phenotype and to characterize the role of HCCR-2 in cellular events. Mice transgenic for HCCR-2 developed breast cancers and metastasis. The level of p53 in HCCR-2 transgenic mice was elevated in most tissues including breast, brain, heart, lung, liver, stomach, kidney, spleen, and lymph node. We examined whether stabilized p53 is functional in HCCR-2 transgenic mice. Defective induction of p53 responsive genes including p21WAF1, MDM2, and bax indicates that stabilized p53 in HCCR-2 transgenic mice exists in an inactive form. These results suggest that HCCR-2 represents an oncoprotein that is related to breast cancer development and regulation of the p53 tumor suppressor.
This all starts – as these things often do – with Harry Redknapp. “Guardiola’s going to leave Bayern Munich at the end of the season,” Redknapp said in December 2015. “I’d like to see him go to Dagenham and Redbridge. I think that would be a challenge for him. Let’s see if he can get them up to the Premier League; if he does that, we’ll all say he’s the greatest manager we’ve ever seen.” As ever with Redknapp, the insult was hardly veiled. As well as questioning Guardiola’s reputation as a coach, he was implying that it is easier to manage an elite club because you get to spend lots of money on players. The irony of that accusation coming from perhaps the most famous – or infamous – English exponent of the transfer market should not be missed. How could it be? Sandra would bury that. That same trope has become the albatross around Guardiola’s neck since it became clear that Manchester City would win the Premier League title. During 2016/17, Guardiola was told that he must adapt to English football. As soon as it became clear that the vice versa was actually true, the qualifiers began to appear. ‘Oh well done Pep, you won the league while spending the most money. How hard that must be.’ Perhaps these are merely the groans of green-eyed monsters. Redknapp was miffed to have missed out on elite club jobs, while supporters of Manchester City’s rivals appease their own disappointment by deflecting attention away from their own club’s failings. Welcome to tribalism, where a supporter must live their life through their football club. Focus on the money spent by a manager is a relatively new concept, which is odd given that ‘biggest clubs spend the most money’ is hardly unfamiliar. When Arsene Wenger joined Arsenal and revolutionised the club’s coaching system before winning the league, very few people said “the thing is Dave, he’s brought in Vieira, Overmars and Petit and inherited a great squad anyway. 
Bespectacled fraud, if you ask me.” The focus was on what Wenger changed and improved, not what he bought. Firstly, there is an element of ‘old man shouting at clouds’ to this debate. Heavy spending is a reality of top-level football, and a prerequisite of elite club management. Pretending anything else could happen is farcical. Why would we ever see what Guardiola could win without spending, when he is the manager of Manchester City and Manchester City have vast wealth to call upon? Everton have bought nine players for more than £20m each since the start of last season, for goodness’ sake. Everyone is loaded. Yet there is an extra level of nonsense here: the assumption that spending money makes success come easy. In reality, high spending narrows the margins for success and lowers the patience of owners, supporters and the media, particularly as selling clubs push up prices thanks to the increased transparency over enormous transfer budgets. Spending money is easy; spending money well is far more difficult. Elite club management is not more straightforward than managing Dagenham, to use Redknapp’s example, merely different. The higher the spend, the brighter the spotlight, the further the fall. Let’s take two examples: Manchester United and Chelsea. In the last three years, United have signed nine different players for £25m or more. That list in full: Memphis Depay (gone), Morgan Schneiderlin (gone), Anthony Martial (going?), Paul Pogba (going?), Henrikh Mkhitaryan (gone), Eric Bailly, Romelu Lukaku, Victor Lindelof and Nemanja Matic. How many successes? Three? If the (perfectly reasonable) argument is that Jose Mourinho inherited a squad far lower on talent than Guardiola did, that only emphasises how hard it is to spend money well. Louis van Gaal, David Moyes and even Alex Ferguson – in his latter days – struggled to do it well.
And so to the list of players arriving at Chelsea for £25m or more, this time only since the start of last season: David Luiz (going?), Michy Batshuayi (going?), N’Golo Kante, Danny Drinkwater (going?), Tiemoue Bakayoko (going?), Alvaro Morata (going?) and Antonio Rudiger. Drop the cut-off fee, and Ross Barkley, Emerson Palmieri and Davide Zappacosta can be included. How many successes on that long list? One? The same is repeated elsewhere. Tottenham have spent far less money than Chelsea, Manchester United or Manchester City, but their recent history is littered with players who haven’t worked out (Fernando Llorente, Vincent Janssen, Moussa Sissoko, Georges-Kévin N’Koudou, Clinton N’Jie, Kevin Wimmer). Everton’s transfer activity last summer has been shown to be catastrophically bad in hindsight. Arsenal spent more than £30m on both Shkodran Mustafi and Granit Xhaka, and we’ve seen precious little from either to suggest they were good value at half the price. Finally, look at the last three players to break the British transfer record: Angel di Maria, Fernando Torres and Pogba. One is considered to be one of the worst signings in Premier League history, another left England after one season of struggle and the third is now reportedly fighting for his Manchester United future. Importantly, these are not bad players; they flourished elsewhere. But ‘very good player + very good club = very good transfer’ is a simplistic fallacy. Time and again it is proven, so to repeat: spending money well is damn hard. And yet Guardiola and Manchester City have bucked that trend. The list of ten players that the Spaniard has signed for £20m or more: John Stones, Leroy Sane, Gabriel Jesus, Ilkay Gundogan, Kyle Walker, Ederson, Danilo, Benjamin Mendy, Aymeric Laporte, Bernardo Silva. Are there any on that list who could be considered failures? Flip that on its head, and at least five have been raging successes, integral parts of City’s title canter.
Look too at the ages of the players Guardiola has spent City’s money on. Seven of that ten-man list joined the club at the age of 23 or under. The oldest was Walker, who arrived at Manchester City weeks after his 27th birthday. In contrast, consider the ages of three of Manchester United’s major signings under Mourinho: Mkhitaryan (27), Sanchez (28), Matic (29). Guardiola bought a team for tomorrow and knitted them together today. Crucial to all this is Guardiola’s increased power within Manchester City’s structure, and the influence he has over player recruitment. Txiki Begiristain made a number of high-profile errors before Guardiola’s arrival, when neither Roberto Mancini nor Manuel Pellegrini enjoyed anything approaching omnipotence: Eliaquim Mangala, Wilfried Bony, Fernando and Alvaro Negredo were four of them. Guardiola’s increased sway since joining his former Barcelona colleague has vastly improved that record. The arrival of Carles Planchart as a video analyst with a significant role in the assessment of potential transfer targets helps too; Planchart worked with Guardiola at Barcelona and Bayern Munich. Guardiola has done many things well; a 16-point lead over a competitive top six proves that. It is tempting to view this as a victory for his coaching methods, obsession with minutiae and exemplary man-management, and on all three counts the manager has indeed been a game-changer for City. But arguably Guardiola’s greatest achievement lies in his transfer market activity, and his eye for finding the right fit for his needs. While Guardiola’s critics might use his outlay on players as a stick with which to beat him, the opposite deserves to be true. Rather than ignoring his spending, embrace it. At a time when the clubs directly below them are preparing their own transfer warchests, it’s just another thing Guardiola and Manchester City are doing smarter and more efficiently than their rivals.
Daniel Storey
President Trump’s commitment to building a wall along the U.S.-Mexican border continues to polarize both Congress and bilateral ties with our southern neighbor. While Democrats argue walls don’t work and even many Republicans question the $21.6 billion price tag, both concerns are overwrought. While critics say there is no utility in a border wall, countries around the globe have come to rely on them. Consider the latest: On Jan. 7, Turkey announced it had completed half of a more than 100-mile wall along its border with Iran in terrain far more difficult than the Rio Grande Valley. Of course, the United States may not want to be like Turkey or Iran, both anti-American dictatorships with some of the world’s worst human rights records. But border walls exist in Africa, Asia, and even Europe, and they are not simply the tool of dictatorships. Democracies, too, embrace walls. Consider all the countries which have turned to walls to increase security: India and Pakistan: The two nuclear powers, with 1.5 billion people between them, have fought four wars since 1947, and continue to face each other down in Kashmir, a territory both countries dispute. In order to prevent Pakistani terrorists from striking inside India, the Indian government built a series of fences and walls to keep them at bay. Had it not, it is quite possible that the two countries might be at war right now. Morocco and Algeria: Morocco built a 1,700-mile system of berms, fences, and ditches to stop the Polisario Front, an Algerian-sponsored terrorist group, from infiltrating the Western Sahara. It took seven years to build, but the result was so effective that Algeria agreed to a cease-fire, ending the Western Sahara war that had raged since 1975.
Israel and the West Bank: The Israeli border wall — well, actually more of a fence in most places — remains hugely controversial because many journalists and United Nations officials condemn anything Israel does, no matter how much precedent exists outside Israel. But Israel’s fence reduced terror attacks by more than 90 percent, something decades of diplomacy failed to do. Cyprus: The irony of so many United Nations officials condemning Israel or Trump’s demands for a wall is that the United Nations itself built a wall dividing Cyprus in order to separate Turkish and Greek combatants. While Cyprus remains divided, the wall ended the fighting. Northern Ireland: Against the backdrop of a decades-long terror campaign by the Irish Republican Army and Unionist violence, the British government and the government of Northern Ireland built several so-called “Peace Lines,” fences and walls up to 25 feet tall and sometimes running for miles to separate Protestant and Catholic neighborhoods. Saudi Arabia and Yemen: While the Iranian-backed Houthi militia has launched missiles at Riyadh, why hasn’t it sent terrorists to conduct hit-and-run attacks in Saudi Arabia? The answer is easy. After a series of Yemeni attacks in the late 1990s, Saudi Arabia demarcated the border and built a 1,100-mile border wall. Saudi Arabia and Iraq: After the Islamic State steamrolled through northern Iraq, Saudi Arabia scrambled to build a 600-mile border fence and ditch system stretching from Jordan to Kuwait. It worked. Turkey and Syria: During the 1990s, the Syrian government supported the Kurdish insurgency inside Turkey. Turkey responded by reinforcing its border with fences and minefields. The result? Fifteen years of quiet. It was only after Turkey’s President Recep Tayyip Erdogan cleared many of the mines and loosened restrictions that security declined in both countries. Today, as a result, Turkey is building a new, fortified wall stretching more than 500 miles.
Kenya and Somalia: Over the last two years, Kenya has made good on its promise to build a barrier along its 440-mile border. It may not look like much — as between Israel and the West Bank, it is more barbed wire fence than concrete wall — but Kenyan authorities have said it has reduced infiltrations by Somali terrorists. Of course, not all countries utilize walls for security. Many others use walls and border fences to prevent illegal immigration. India and Bangladesh: In the 1980s, India began construction of almost 1,800 miles of fencing along its border with its neighbor. While India justifies the fence as an effort to curb illegal immigration, it has also cut down cross-border crime. Spain and Morocco: Spain has long maintained two enclaves — Ceuta and Melilla — on the Moroccan side of the Strait of Gibraltar. Both are surrounded by high fences to keep African migrants out of Spain and therefore the European Union. Greece and Turkey: The land border between the two countries is little more than 100 miles, but this is marked by barbed wire fences and, in places, minefields. While the mines are a vestige of military conflicts between the two countries, the European Union has been fine with them remaining to deter illegal immigration from the Middle East into Europe. Hungary and Serbia/Croatia: Hungary isn’t shy about justifying its border fence in its desire to prevent illegal immigration by those originating outside of Europe. Greece’s land border may be well-defended, but African and Middle Eastern migrants simply make the first leg of their journey by sea, before moving north through the Balkans. Other European states might tolerate such a flow; Hungary sees no need. After all, migrants and asylum-seekers are supposed to remain in their first country of entry, which land-locked Hungary never would be. Just because other countries have invested in walls and fences, of course, does not necessarily make them a panacea.
But as debates in Congress once again turn toward immigration and the status of illegal (or, in politically correct parlance, “undocumented”) aliens, critics of the border wall are more uninformed than the president they dispute if they believe Trump’s proposal is inconsistent with international norms. Indeed, every year, more countries resort to walls after more liberal policies fail. Michael Rubin (@Mrubin1971) is a contributor to the Washington Examiner's Beltway Confidential blog. He is a resident scholar at the American Enterprise Institute and a former Pentagon official. If you would like to write an op-ed for the Washington Examiner, please read our guidelines on submissions here.
Q: How to specify a database schema in PowerDesigner

I want to export my model to a PostgreSQL database. If I do so as is, the objects are built in the Public schema, because the model doesn't specify a schema, and Public happens to be the default. Does anyone know a way to specify a schema in PowerDesigner? I can change the default schema in the database, but that seems a little cheesy to me. I ought to be able to control that in my modeling tool, it seems to me.

A:

1. Go to the Tools menu.
2. Go to Model Options.
3. Under Category > Model Settings > Table & View, you'll see Default owner on the right side.

Response to comment

PD is a great tool because it's very easy to try out simple cases. Follow these steps:

1. Create a new PDM for PostgreSQL.
2. Add a table_1 (to it add columns_1, columns_2, columns_3).
3. Add a new user called DBO (make sure to set the NAME and the CODE to DBO).
4. Make the change I describe to the model options.
5. Add a table_2 (to it add columns_1, columns_2, columns_3).

Now right-click on the PDM in the browser pane and choose the preview tab. You'll see: Notice how the preview for table_2 has DBO. in front of the table name EXACTLY as you desire. I've also included in the screencap the screen for the List of Tables. You get to that via the Model menu. Notice how the owner is set to the DBO user for table_2, exactly like in the previewed DDL. If you go into the properties for table_1 or use this screen to change ALL of your tables en masse, all of your DDL will work the way you want.

XDB file:

create [%Temporary% ]table [%QUALIFIER%]%TABLE% (
   %TABLDEFN%
) [%OPTIONS%]

Not sure what the %QUALIFIER% variable gets filled with but it seems to work.
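To illustrate the effect of the Default owner setting, here is a sketch of the DDL the preview would show for the two tables in the steps above. The column types are hypothetical (PowerDesigner generates whatever types you assigned in the model); the point is only how `%QUALIFIER%` expands:

```sql
-- table_1: created before Default owner was set, so %QUALIFIER% is empty
-- and the table lands in the database's default schema (public on PostgreSQL)
create table table_1 (
   columns_1 integer,
   columns_2 varchar(50),
   columns_3 date
);

-- table_2: created after Default owner was set to DBO, so %QUALIFIER%
-- expands to the owner name followed by a dot
create table DBO.table_2 (
   columns_1 integer,
   columns_2 varchar(50),
   columns_3 date
);
```

One caveat when running the generated script: the target schema must already exist in PostgreSQL (e.g. `CREATE SCHEMA dbo;`), and PostgreSQL folds unquoted identifiers to lower case, so unquoted `DBO.table_2` ends up in schema `dbo`.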
Competing interest statement
============================

Conflict of interest: the authors declare no potential conflict of interest.

Abstract {#sec1-1}
========

Autologous hematopoietic stem cell transplant (AHSCT) is the standard of care in the treatment of multiple myeloma worldwide. Infections are one of the most common complications of the chemotherapy regimen and AHSCT. Thrombotic microangiopathies are one of the rare but potentially life-threatening complications of infections associated with AHSCT. Thrombotic thrombocytopenic purpura and hemolytic uremic syndrome (HUS) are the two most common types of thrombotic microangiopathy. HUS is classically related to diarrheal illness such as with *E. coli* strain O157:H7, which produces Shiga-like toxins, but it has never been described with *Enterococcus raffinosus* urinary tract infections (UTI). Here we describe a case of atypical HUS associated with *Enterococcus raffinosus* UTI in a patient with multiple myeloma after AHSCT. The management of atypical HUS, especially after AHSCT, is challenging. Eculizumab, a humanized monoclonal antibody against complement protein C5, and thrombomodulin have an emerging role in the management of some cases, but more studies are needed to define evidence-based management of this condition.

Introduction {#sec1-2}
============

Autologous hematopoietic stem cell transplant (AHSCT) is the standard of care in the treatment of multiple myeloma worldwide. Infections are commonly associated with the chemotherapy regimen and AHSCT. Thrombotic microangiopathies such as atypical HUS are rare but potentially life-threatening complications of infections associated with AHSCT, leading to increased morbidity and mortality after stem cell transplant. Atypical HUS is caused by endothelial toxicity related to chemotherapeutic agents and infections.
Complement activation and dysregulation lead to the clinical hallmarks of hemolytic anemia and thrombocytopenia that are also seen in other thrombotic microangiopathies. Plasmapheresis, intravenous immunoglobulins (IVIG) and steroids have been used with variable success. The C5 complement antibody eculizumab, as well as thrombopoietin agonists, are new emerging agents which have been successfully used in some studies. Here we describe a case of atypical HUS associated with *Enterococcus raffinosus* UTI in a multiple myeloma patient after AHSCT.

Case Report {#sec1-3}
===========

The patient, a 62-year-old female with a past medical history of hypertension and gastroesophageal reflux disease, was diagnosed with IgG-lambda multiple myeloma with an initial presentation of acute renal insufficiency. Her bone marrow had 60% plasma cells on bone marrow aspirate at the time of diagnosis. She was treated with VDT-ACE (bortezomib, dexamethasone, thalidomide, adriamycin, cyclophosphamide, etoposide) induction chemotherapy and her renal function normalized. Repeat bone marrow showed 5% plasma cells on aspirate. The second induction chemotherapy cycle was done with VDT-PACE (bortezomib, dexamethasone, thalidomide, adriamycin, cyclophosphamide, etoposide, cisplatin) with stem cell mobilization and collection. After the second induction cycle of chemotherapy, her bone marrow was negative for plasma cells. For the consolidation phase, she received a melphalan 200 mg/m^2^-based autologous stem cell transplant as an outpatient. Subsequently, she was admitted to the hospital for neutropenic fever with severe mucositis, diarrhea and *Enterococcus raffinosus*-associated urinary tract infection. Testing for infectious causes including HHV6/HHV8, HSV1/2, EBV, CMV, adenovirus, parvovirus and stool for *C. difficile* was negative. The patient was started on the broad-spectrum antibiotics and antifungals imipenem, vancomycin, and micafungin.
Repeated blood cultures and urine cultures were negative, and antibiotics were stopped after completion of their two-week course. During her hospital course, she developed hypertension and acute renal insufficiency with elevated creatinine, LDH and liver enzymes including bilirubin ([Figure 1](#fig001){ref-type="fig"}). On the peripheral smear, she had features of microangiopathy including thrombocytopenia and hemolytic anemia with schistocytes \>6/HPF. The patient became drowsy and, though afebrile, was transferred to the intensive care unit for suspicion of thrombotic thrombocytopenic purpura (TTP). She was also started on plasmapheresis as the clinical features were suggestive of TTP. However, her plasma ADAMTS13 level was around 46%. Classical TTP was therefore excluded, and plasmapheresis was stopped after three sessions. The autoimmune profile, including ANA and dsDNA, was negative. Complement C3 and C4 were within normal limits, but CH50 levels were elevated. The patient engrafted and recovered WBC counts, but microangiopathic hemolytic anemia with thrombocytopenia persisted. We made a working diagnosis of atypical HUS associated with *Enterococcus raffinosus* UTI after AHSCT in a patient with multiple myeloma. Gene rearrangement testing for atypical HUS was negative, though, as mentioned in the discussion below, it is positive in only approximately 50% of patients with atypical HUS. Among other coagulation parameters, the direct Coombs test was negative and the coagulation profile (PT/aPTT, INR) was within normal limits; however, D-dimers were persistently high, and fibrinogen levels continued to be at the lower limit of normal. Since atypical HUS has a complement-mediated autoimmune pathology, it was decided to give her eculizumab. Meanwhile, pending approval of this medication, we decided to give her high-dose IVIG (0.5 g/kg × 3 days followed by 1 g/kg × 3 days) in combination with 1 mg/kg prednisone for the underlying autoimmune pathology of the disease.
The liver function tests and LDH came down drastically after initiation of IVIG and prednisone; however, her thrombocytopenia and hemolytic anemia persisted. Subsequently, she was started on eculizumab. She received the meningococcal vaccine before starting eculizumab. She received three weekly doses of eculizumab, along with ciprofloxacin prophylaxis. Her hemolytic anemia improved after eculizumab but her thrombocytopenia persisted. A repeat bone marrow examination showed decreased megakaryocytes. She was started on eltrombopag. Her platelet levels stabilized and she was subsequently discharged home ([Figure 2](#fig002){ref-type="fig"}).

Discussion and Conclusions {#sec1-4}
==========================

Hemolytic uremic syndrome (HUS) is a rare thrombotic microangiopathy characterized by microangiopathic hemolytic anemia, thrombocytopenia, and renal injury.^[@ref1]^ Typical HUS primarily affects children.^[@ref2]^ Approximately 90% of cases are preceded by an *Escherichia coli* infection, typically with *E. coli* strain O157:H7, which produces Shiga-like toxins, and are thus classified as Shiga toxin-induced HUS (STEC-HUS).^[@ref1],[@ref2]^ Typical HUS is generally associated with a good prognosis and low mortality. Atypical HUS also has features of microangiopathic hemolytic anemia, thrombocytopenia, and renal injury. Atypical HUS is associated with increased mortality, and about 50-60% of cases progress to end-stage renal disease.^[@ref1]^ The thrombotic microangiopathies occurring after hematopoietic stem cell transplant are particularly devastating and have very high mortality rates.^[@ref2],[@ref3]^ In our patient, clinical features were suggestive of atypical HUS with *Enterococcus raffinosus* UTI after AHSCT for multiple myeloma.
The species *Enterococcus raffinosus* was first recognized in 1989;^[@ref4]^ since then it has been associated with various infections in immunosuppressed patients, especially in acute care hospital settings.^[@ref4],[@ref5]^ The natural habitat of *Enterococcus raffinosus* is unknown, but the organism has been found in the oropharyngeal flora of cats. It has been associated with wound infections, abscesses, urinary tract infections, vertebral osteomyelitis and endocarditis.^[@ref4]^ However, microangiopathic hemolytic anemia has never been described in the literature with Enterococcus UTI, though hemolytic uremic syndrome has been described with urinary tract infections with *E. coli*.^[@ref10],[@ref11]^ Chiurchiu *et al*.^[@ref12]^ first reported that hemolytic uremic syndrome could be associated with non-diarrheal Shiga toxin-producing *Escherichia coli* O157:H7 causing bacteremia and urinary tract infection. Later, Park *et al.*^[@ref13]^ studied the association between thrombotic microangiopathy and UTI. They found that 23% of visits for thrombotic thrombocytopenic purpura involved UTI and concluded that occult bacterial infections could cause alterations in the coagulation pathways, probably resulting from molecular mimicry between antibodies directed against infectious agents and the ADAMTS13 protein moiety.^[@ref12]^ Over-activation of the complement system is the most common etiology of atypical HUS.^[@ref14]^ The dysregulation of the alternative complement pathway plays a central role in the pathogenesis of atypical HUS ([Figure 3](#fig003){ref-type="fig"}).
Regulation of the complement pathway is critical in preventing thrombosis, and genetic mutations have been described in up to 60% of adult aHUS cases.^[@ref14],[@ref15]^ Alterations in the alternative complement pathway, especially mutations in complement factor I, complement factor H, and membrane cofactor protein, are among the commonly seen abnormalities and account for approximately 50% of aHUS cases.^[@ref14],[@ref15]^ Drugs such as calcineurin inhibitors and sirolimus have also been shown to cause endothelial injury and decrease VEGF expression, leading to thrombotic microangiopathy.^[@ref16]^ Sepsis may cause an alteration in ADAMTS13 activity, most likely due to cleavage by circulating proteolytic enzymes.^[@ref17]^ Until recently, treatment of aHUS was accomplished with plasma exchange, although the efficacy of this treatment was variable and approximately 50% of patients progressed to ESRD.^[@ref18]^ Eculizumab, a monoclonal anti-C5 antibody, is a promising new treatment option for aHUS.^[@ref19]^ Eculizumab has also been shown to be useful in treating typical HUS caused by STEC and thrombotic thrombocytopenic purpura (TTP) in which ADAMTS13 levels are below acceptable ranges.^[@ref20],[@ref21]^ Patients who develop atypical HUS or any thrombotic microangiopathy following AHSCT have a high likelihood of succumbing to it and progressing to multi-organ failure. This makes it especially important to identify candidates with the underlying genetic anomaly that predisposes them to this condition, as well as the patients who will respond to eculizumab.
Recently, recombinant human soluble thrombomodulin has also been used successfully to treat thrombotic microangiopathy after hematopoietic stem cell transplantation.^[@ref22]^ Lastly, we need more studies to determine the exact pathophysiology of all these relatively uncommon disorders which can help proper diagnosis and management of these conditions especially in the setting of autologous stem cell transplants in hematological malignancies. ![The graph showing the laboratory results in the patient with multiple myeloma with *Enterococcus raffinosus* infection](hr-9-3-7094-g001){#fig001} ![Timeline of the atypical HUS after *Enterococcus raffinosus* UTI infection after AHSCT in a multiple myeloma patient.](hr-9-3-7094-g002){#fig002} ![Classical and alternative pathways and role in pathogenesis of atypical HUS.](hr-9-3-7094-g003){#fig003} [^1]: Contributions: the authors contributed equally.
The Lincolnshire

The Lincolnshire is a grand mansion at 22 Hidden Road and 28 Hidden Way in Andover, Massachusetts.

History

The mansion was built between 1897 and 1898 for Henry Bradford Lewis (1868-1951), who made a considerable fortune from his scouring mills in Lawrence. Lewis was in charge of the three mills of the E. Frank Lewis Co. and the American Lanolin Co., which employed 500 persons at their peak. The manufacturer spared no expense—or whimsy—in creating and furnishing this estate. The architect of the house was Otis A. Merrill, of the Lowell firm of Merrill & Cutler. It was formerly attributed to George G. Adams of Lawrence, who designed a similar mansion for Lewis' father in that city. At his death in 1951 Lewis left $80,750 in real estate (including this house and its greenhouse at $35,000 and No. 17 Hidden Rd, bought for his daughter) and more than $97,000 in personal estate, most of which was left to his wife Lillian. As Lewis anticipated, the house was too large to be maintained for only one occupant, and the building was sold in 1953 to C. Lincoln Giles and converted into apartments. The carriage house is now a single-family residence. The mansion was added to the National Register of Historic Places because it is distinguished by its association with a wealthy Lawrence manufacturer and as an excellent example of turn-of-the-20th-century architecture on a grand scale.

Architecture

The Colonial Revival style structure is characteristically voluminous with such classical motifs as Palladian and oculus windows, but it also retains a Queen Anne playfulness. Unexpected features, as large as the cross-gambrel pavilions and as small as an oriel window in an exterior chimney, give a cheerful picturesqueness to the imposing mass. The original Colonial Revival carriage house at the rear is similarly large in scale, with wood shingles and a large gambrel roof that complements the main house.
Specifications

Exterior wall fabric: Wood Shingles
Outbuildings: Carriage House
Other features: Pavilions; irregular mass; classical porches; bay windows; Palladian window; decorative chimneys, 1 with oriel window; decorative leaded glass
Lot Size: Over One Acre
Approximate Frontage: 305'
Approximate Distance of building from street: 200'
Altered: Apartment Conversion, Late 20th Century

See also

National Register of Historic Places listings in Andover, Massachusetts
National Register of Historic Places listings in Essex County, Massachusetts

References

Further reading

Essex County Registry of Deeds, Lawrence & Salem (1130/205; 1075/56; 780/315; 770/532; 770/534; 154/483; Probate Docket No. 233195)
Bulletin of the National Association of Wool Manufacturers, 1932, ed. by Walter Humphreys, Vol. LXII. Boston, MA: Murray Printing Co., 1932.
History of Massachusetts Industries, Orra L. Stone, Vol. III. Boston, MA: S.J. Clarke Publishing Co., 1930.
Lawrence Up to Date, 1845-1895. Lawrence, MA: Rushforth & Donoghue, 1895.
Andover Townsman, "Obituary: H. Bradford Lewis, Industrialist, Dies." January 25, 1951.

Category:Buildings and structures in Andover, Massachusetts
Category:Houses completed in 1898
Category:National Register of Historic Places in Andover, Massachusetts
Category:Houses on the National Register of Historic Places in Essex County, Massachusetts
Ultrastructure of endomorphin-1 immunoreactivity in the rat dorsal pontine tegmentum: evidence for preferential targeting of peptidergic neurons in Barrington's nucleus rather than catecholaminergic neurons in the peri-locus coeruleus. Endomorphins are opioid tetrapeptides that have high affinity and selectivity for mu-opioid receptors (muORs). Light microscopic studies have shown that endomorphin-1 (EM-1) -containing fibers are distributed within the brainstem dorsal pontine tegmentum. Here, immunoelectron microscopy was conducted in the rat brainstem to identify potential cellular interactions between EM-1 and tyrosine hydroxylase (TH) -labeled cellular profiles in the locus coeruleus (LC) and peri-LC, an area known to contain extensive noradrenergic dendrites of LC neurons. Furthermore, sections through the rostral dorsal pons, from colchicine-treated rats, were processed for EM-1 and corticotropin releasing factor (CRF), a neuropeptide known to be present in neurons of Barrington's nucleus. EM-1 immunoreactivity was identified in unmyelinated axons, axon terminals, and occasionally in cellular profiles resembling glial processes. Within axon terminals, peroxidase labeling for EM-1 was enriched in large dense core vesicles. In sections processed for EM-1 and TH, approximately 10% of EM-1-containing axon terminals (n=269) targeted dendrites that exhibited immunogold-silver labeling for TH. In contrast, approximately 30% of EM-1-labeled axon terminals analyzed (n = 180) targeted CRF-containing somata and dendrites in Barrington's nucleus. Taken together, these data indicate that the modulation of nociceptive and autonomic function as well as stress and arousal responses attributed to EM-1 in the central nervous system may arise, in part, from direct actions on catecholaminergic neurons in the peri-LC. 
However, the increased frequency with which EM-1 axon terminals form synapses with CRF-containing profiles in Barrington's nucleus suggests a novel role for EM-1 in the modulation of functions associated with Barrington's nucleus neurons such as micturition control and pelvic visceral function.
Health Information Technology Adoption and Clinical Performance in Federally Qualified Health Centers. A national sample (N = 982) of federally qualified health centers (FQHCs) for the period 2011-2016 was examined for the relationship between the age and extent of health information technology (HIT) use and clinical performance. We found that each additional year of HIT use was associated with an approximately 4 percent increase in both process and outcome measures of clinical performance. Furthermore, FQHCs that fully adopted HIT had 7 percent higher clinical performance on hypertension control than those that did not adopt HIT. This study's findings can assist stakeholders in making informed decisions to improve care and sustain a competitive advantage.
Which clinical features differentiate progressive supranuclear palsy (Steele-Richardson-Olszewski syndrome) from related disorders? A clinicopathological study. The difficulty in differentiating progressive supranuclear palsy (PSP, also called Steele-Richardson-Olszewski syndrome) from other related disorders was the incentive for a study to determine the clinical features that best distinguish PSP. Logistic regression and classification and regression tree (CART) analysis were used to analyse data obtained at the first visit from a sample of 83 patients with a clinical history of parkinsonism or dementia confirmed neuropathologically, including PSP (n = 24), corticobasal degeneration (n = 11), Parkinson's disease (PD, n = 11), diffuse Lewy body disease (n = 14), Pick's disease (n = 8) and multiple system atrophy (MSA, n = 15). Supranuclear vertical gaze palsy, moderate or severe postural instability and falls during the first year after onset of symptoms classified the sample with 9% error using logistic regression analysis. The CART identified similar features and was also helpful in identifying particular attributes that separate PSP from each of the other disorders. Unstable gait, absence of tremor-dominant disease and absence of a response to levodopa differentiated PSP from PD. Supranuclear vertical gaze palsy, gait instability and the absence of delusions distinguished PSP from diffuse Lewy body disease. Supranuclear vertical gaze palsy and increased age at symptom onset distinguished PSP from MSA. Gait abnormality, severe upward gaze palsy, bilateral bradykinesia and absence of alien limb syndrome separated PSP from corticobasal degeneration. Postural instability distinguished PSP from Pick's disease. The present study may help to minimize the difficulties neurologists experience when attempting to classify these disorders at early stages.
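As a toy illustration, the three discriminating features reported above (supranuclear vertical gaze palsy plus moderate-to-severe postural instability and falls within the first year) can be sketched as a simple screening rule. This is only a hedged sketch: the study fitted logistic regression and CART models, not this hand-written conjunction, and the parameter names are my own shorthand.

```python
# Illustrative screening rule based on the abstract's three features.
# Note: the paper fitted logistic regression / CART models; this simple
# AND-rule is only a sketch of the reported discriminators.

def looks_like_psp(gaze_palsy: bool,
                   postural_instability: bool,
                   falls_first_year: bool) -> bool:
    """Flag a PSP-like profile when all three discriminating features co-occur."""
    return gaze_palsy and postural_instability and falls_first_year

# Two hypothetical patient profiles (invented for illustration).
psp_like = looks_like_psp(gaze_palsy=True, postural_instability=True,
                          falls_first_year=True)
pd_like = looks_like_psp(gaze_palsy=False, postural_instability=True,
                         falls_first_year=False)
print(psp_like, pd_like)  # True False
```

In practice CART produces a tree of such binary splits rather than a single conjunction, which is how the study separated PSP from each disorder individually.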
Q: IndexError: index 3 is out of bounds for axis 0 with size 3 I am running K-means clustering, and when plotting with the following code I get an error:

fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('Componente 1 ', fontsize=15)
ax.set_ylabel('Componente 2 ', fontsize=15)
ax.set_title('Componentes principales', fontsize=20)
color_theme = np.array(['blue','green','orange'])
ax.scatter(x=pca_nombres_data.Componente_1, y=pca_nombres_data.Compononte_2, c=color_theme[pca_nombres_data.KMeans_Clusters], s=50)
plt.show()

As far as I can tell, the error comes from this line:

ax.scatter(x=pca_nombres_data.Componente_1, y=pca_nombres_data.Compononte_2, c=color_theme[pca_nombres_data.KMeans_Clusters], s=50)

A: The indexing expression color_theme[pca_nombres_data.KMeans_Clusters] is reading past the end of the color_theme array. color_theme has only 3 entries (valid indices 0 through 2), but at least one of your cluster labels is 3, which means K-means produced more clusters than you have colors. Check that len(color_theme) is greater than or equal to the number of clusters, i.e. at least max(pca_nombres_data.KMeans_Clusters) + 1, and add colors as needed.
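To make the failure and the fix concrete, here is a minimal self-contained sketch (NumPy only; the label array below is an illustrative stand-in for pca_nombres_data.KMeans_Clusters):

```python
import numpy as np

# Suppose K-means produced 4 clusters, so labels run 0..3.
labels = np.array([0, 1, 2, 3, 1, 0])

colors3 = np.array(['blue', 'green', 'orange'])  # only 3 entries: indices 0..2
try:
    colors3[labels]  # label 3 has no corresponding color
except IndexError as exc:
    print(exc)  # index 3 is out of bounds for axis 0 with size 3

# Fix: provide one color per cluster, so len(colors) >= labels.max() + 1.
colors4 = np.array(['blue', 'green', 'orange', 'red'])
mapped = colors4[labels]
print(mapped.tolist())  # ['blue', 'green', 'orange', 'red', 'green', 'blue']
```

The mapped array is exactly what scatter's c= argument expects: one color string per plotted point.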
Q: How do I prove that the map $ \varphi: Aut(F_n) \to GL_n(\mathbb{Z})$ is a homomorphism? I'm trying to prove that the map $ \varphi: Aut(F_n) \to GL_n(\mathbb{Z})$ is a homomorphism, but I can't pin down exactly which function to work with. The map $ \varphi $ is defined so that for any $ \alpha \in Aut(F_n) $, the $(i,j)$-th entry of the matrix $ \varphi(\alpha) $ is the sum of the exponents of the letter $x_j$ in $ \alpha(x_i)$. Here $F_n$ denotes the non-abelian free group generated by $n$ elements. A: The easiest way to see this is to note that this map factors through the abelianization of $F_n$. Namely, if we quotient by the commutator subgroup then an element $x_{i_1}x_{i_2}\cdots x_{i_n}$ reduces to $x_1^{m_1}x_2^{m_2}\cdots x_n^{m_n}$, where $m_i$ is the sum of the exponents of $x_i$ in the expression. For any automorphism $\alpha:F_n\to F_n$ there is a corresponding automorphism $\alpha':\mathbb{Z}^n\to\mathbb{Z}^n$, where $\alpha'(x_i)$ is the element obtained by combining the powers of each generator. But $\alpha'$ is exactly the element of $\mathrm{GL}_n(\mathbb{Z})$ that $\alpha$ is mapped to, and composing automorphisms of $F_n$ corresponds to composing the induced linear maps, which is matrix multiplication.
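As a sanity check, the exponent-sum map can be computed in a few lines of pure Python. The word encoding and the two sample automorphisms of $F_2$ are my own illustrative choices; note that with the row convention used here ($M_{ij}$ = exponent sum of $x_j$ in $\alpha(x_i)$), one gets $\varphi(\beta\circ\alpha)=\varphi(\alpha)\varphi(\beta)$, so transposing the matrices (or using the column convention) gives the homomorphism in the usual order.

```python
# Words in F_2 are lists of signed generator indices:
# 1 means x1, -2 means x2^{-1}, etc.  A map is a dict sending each
# generator index to a word.  This encoding is illustrative, not canonical.

def invert(word):
    return [-g for g in reversed(word)]

def apply_map(f, word):
    out = []
    for g in word:
        out.extend(f[g] if g > 0 else invert(f[-g]))
    return out

def compose(f, g):
    """(f o g): apply g first, then f."""
    return {i: apply_map(f, g[i]) for i in g}

def phi(f, n):
    """Exponent-sum matrix: M[i][j] = sum of exponents of x_{j+1} in f(x_{i+1})."""
    return [[sum(1 if s > 0 else -1 for s in f[i + 1] if abs(s) == j + 1)
             for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Two automorphisms of F_2: a Nielsen move and the generator swap.
alpha = {1: [1, 2], 2: [2]}   # x1 -> x1 x2, x2 -> x2
beta  = {1: [2],    2: [1]}   # x1 -> x2,    x2 -> x1

# Row convention reverses composition: phi(beta o alpha) == phi(alpha) phi(beta).
print(phi(compose(beta, alpha), 2))                 # [[1, 1], [1, 0]]
print(matmul(phi(alpha, 2), phi(beta, 2)))          # [[1, 1], [1, 0]]
```

Commutators vanish under phi (the exponent sums of $ghg^{-1}h^{-1}$ are all zero), which is the concrete content of "factoring through the abelianization".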
Q: Jquery resizable shifts my DOM element I have the following code <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Untitled Document</title> <style type="text/css"> #main-content { width:300px; height:500px; background:#00F; overflow:auto; } #top-name , #top-ip { background:#000; color:#FFF; width:80%; position:relative; left:10%; margin-bottom:5px; margin-top:5px; } </style> <link href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/themes/base/jquery-ui.css" rel="stylesheet" type="text/css"/> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.5/jquery.min.js"> </script> <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/jquery-ui.min.js"></script> <script> $(function() { $("#top-ip").resizable({ handles: 'n, s' }); $("#extrainfo").hide(); $("#top-name").mouseenter(function() { $("#extrainfo").fadeIn(); }); $("#top-name").mouseleave(function() { $("#extrainfo").fadeOut(); }); var stop = false; $( "#accordion h2" ).click(function( event ) { if ( stop ) { event.stopImmediatePropagation(); event.preventDefault(); stop = false; } }); $( "#accordion" ) .accordion({ header: "> div > h2" }) .sortable({ axis: "y", handle: "h2", stop: function() { stop = true; } }); }); $(function() { $( "#accordion" ).resizable({ maxHeight: 100, resize: function() { $( "#accordion" ).accordion( "resize" ); } }); }); </script> </head> <body> <div id="main-content"> <div id="top-name"> Name here <div id="extrainfo"> blanl</div> </div> <div id="top-ip"> Resizalbe element </div> <div id="accordion"> <div> <h2>Player List</h2> <div style="background:#F00"> OMG OMG OMG OMG </div> </div> <div> <h2>Configs</h2> <div> OMG OMG OMG OMG sdg<br /> SDF sDsag sdzh z<br /> zh<br /> zh </div> </div> <div> <h2>Comming Soon</h2> <div> Comming Soon Zong OMG OMG OMG OMG </div> </div> <div> <h2>Server Disscussion</h2> <div> Server Disscussion Server Disscussion </div> </div> <div> <h2>comming Soon</h2> <div> comming Soon comming Soon </div> 
</div> </div> </div> </body> </html> When I resize the ip element, it shifts a bit to the left. Why? demo here Demo A: It's the percentage left property on the #top-name, #top-ip selector. I was surprised to find the jQuery UI #2421 enhancement request has been around for 3 years! Until that is fixed, if you make the left property a non-percentage value (30px seems about right), the resize works as expected. Edit: I've found a workaround. You can use the resize function to keep setting the left value during the resize. If you change the code to this, the resize works as expected and does not alter the left position of the element. $("#top-ip").resizable({ handles: 'n, s', resize: function(event, ui) { $(this).css({left:'10%'}); } });
Posted By John Zappe On February 25, 2013 @ 2:19 am In Featured, News and Features | 3 Comments [1]Even as the number of tech jobs in the U.S. was steadily climbing in 2012, hitting an all-time high last month, workers were feeling just a bit less confident in their ability to find a new job. In the last few days, several reports and forecasts have come online, all of them showing that tech workers, despite their own confidence issues, will be hard to recruit. TechServe Alliance[2] declared that the number of tech workers in the U.S. grew in 2012 by 4.14% to an estimated 4,339,800 workers. That’s more than two-and-a-half times the national rate of job growth (1.52%) and exceeds even the growth in the health sector, which increased by 2.26% between January 2012 and last month. Yet a Randstad Technologies[3] poll conducted in the fourth quarter of last year found 44% of the participating IT workers confident of their ability to find a new job. In the third quarter, 55% were confident. Workers’ confidence could have been shaken by the showdown over the fiscal cliff issues. Another possibility is that with only 275 IT workers in the survey, a few real worriers could have skewed the results. In any case, employers aren’t buying into that fear. CareerBuilder’s 2013 job forecast[4] says 27% of hiring managers plan to hire permanent, full-time IT workers this year. That’s also what TechServe Alliance expects. Noting IT employment got off to a “strong start” last month, adding 15,800 jobs, TechServe CEO Mark Roberts said, “Despite the lingering uncertainty with the U.S. and global economies, I anticipate demand for IT professionals will remain robust throughout 2013.” With unemployment in the sector below 4%, the hunt for IT talent is only going to get harder this year.
Rona Borre, CEO of Chicago’s recruiting and tech-staffing firm Instant Technology LLC, told Crain’s Chicago Business[5] that the market for tech talent is “the tightest I’ve seen it since the tech boom in the late 1990s.” That view is shared by executives of startups in sectors from software to life sciences. Nine out of 10 companies are hiring, but facing difficulty in finding and keeping the talent they need. [6]Silicon Valley Bank surveyed[7] 750 company leaders, learning that 87% of them find recruiting talent with the skills they need to be “somewhat” or “extremely challenging.” Two-thirds of them say the biggest challenge to retaining talent is “finding and competing for the people with the right skills.” The cost of salaries and benefits, though a concern, came in a distant second. Most critical are the STEM skills, the executives said.
Jens Klok Jens Christian Jensen Klok (25 January 1889 – 16 June 1974) was a Danish architect. Biography Jens Klok was born at Vinderslev Parish in Viborg, Denmark. He was the son of Laurits Klok and Karoline Adolfsen. He was first a mason's apprentice and a construction manager before attending technical school and the Royal Danish Academy of Fine Arts, School of Architecture, from which he graduated in 1929. He received the Academy Bursary award in 1925 and K. A. Larssens Legat in 1927, and travelled to Italy, France and England to study key architectural influences. As an employee of the Royal Danish Naval Building Service and, from 1935, Head of the Naval Architecture section, he designed a number of Navy buildings in a unique style. Designs Jens Klok designed the Marine Air Station in Avnsø in 1937 and in Holmen, Copenhagen in 1939. He later designed the Royal Danish Naval Academy office building in 1940, with Holger Sorensen, and had a special exhibition of his works at the Charlottenborg Spring Exhibition in 1942 and 1943. He also designed the Motor Torpedo Workshop in 1953. Personal life In 1932, he married Marie Augusta Elisabeth Bech (1887-1977). He died in Varde in 1974. References Note This biography is a translation of the Danish Wikipedia version, which has additional references. Category:Danish architects Category:1889 births Category:1974 deaths Category:Royal Danish Academy of Fine Arts alumni Category:People from Viborg Municipality
Monday, August 15, 2005 They must decide by midnight today about using the NBA's new luxury-tax amnesty provision and waiving Finley. Or they could trade him for usable assets. Either way, Finley, 32, will be a former Maverick by 12:01 a.m. Tuesday. On the floor, he was a two-time All-Star and a key figure in the Mavericks' rise from the dark days of the 1990s. Off the court, he was a solid citizen who held a charity golf tournament each summer. I admit it. It bums me out. I hate to see a guy’s run in a city where he played so hard for so long end like this. I guess the Finley critics will rejoice, but I would like to think that he will be considered one of the greatest Mavericks of all-time, and maybe have his #4 raised to the rafters someday. This is the right move right now, but I wish it didn’t have to end this way for a guy who was a fine professional. "If somebody came in here and dismantled the clubhouse, I think it would only make matters worse," outfielder David Dellucci said. "Everybody is doing what they can to win. "This is our job. This is our life. It's not a hobby. We play hard every day, and it's incredibly frustrating to come in the clubhouse after every game knowing you've done all you can and you've still lost." “It's OK if they think that, but I just don't want to hear it come out of their mouths," Brocail said. "We know we're not doing the job. We know they are scoring five and six and seven runs a game and still losing. "But we're pointing the finger at ourselves. We feel like if we'd just allow a run every other inning – what's that, almost five a game? – we'd win." I visited the new stadium of FC Dallas Saturday night, as they lost to New England 2-1… But the stadium is going to be perfect. I really enjoyed the setting, and just about everything about the place. They still have plenty of construction to complete before it will be totally finished, but it is pretty promising from what I saw Saturday.
Before we go too far on other issues from Cowboys-Cardinals, please allow me a moment to direct you to the blog from last Wednesday, which references other blog entries earlier. Basically, what I am trying to do is explain that despite the fact that I could have written about many other things this summer when it comes to the Cowboys, I have been banging the “RT sucks” drum all along. After Exhibition Game #1, my reviews of the RT:

Rob Pettiti = D (abused on several occasions on run blocking – they cannot run right with him in the game evidently)
Jacob Rogers = F (played 2 plays, looked horrendous on the first. Left game with injury that may or may not exist)
Torrin Tucker = F (what a clinic of poor blocking/penalties)

Here is what I wrote last week: Perhaps this is a good time for me to go on record and suggest that I might be the only media member around here who is not buying all of this “Jacob Rogers for RT” baloney. There are real stories early in training camp, and there are stories from camp that, when reviewed a few months later, are quite humorous because the media believed it when the mighty Tuna floated it out there. Yet every paper is lapping up the information that Jacob Rogers, a guy who could not get on the field for even 1 offensive down last season, is the best option at Right Tackle. OK, sure, he is a lot stronger, and he is a lot smarter, but c’mon. Now, Rogers has a right shoulder issue, which matches up with his injury history, and also his toughness questions. But we are to believe that Rogers will be ready to go head to head with Jevon Kearse and Michael Strahan this fall? Right. Not buying it. At this point, I should likely admit that they don’t have very good options, and as I wrote before, I think Larry Allen might be the best option. But for now, that is not something they will look at, and we can all ponder whether Rogers, Tucker, or Vollers will be the bad player at RT.
Other things:

How come when the QB is liked by the coach, we get to say that the WR’s can’t get open; but when we don’t like the QB, he is not accurate enough and he is missing open receivers?

Andre Gurode is no better at Center than he was at Guard. If they had any depth, I would cut him and Rogers right now.

Kurt Warner looked like Kurt Warner used to look. Perfect example of what happens when you put a QB in a system that makes sense. Warner with 3 WR’s and quick passes shows his strengths and hides his weaknesses. Denny Green may not be good in the playoffs, but strategically, he is a fine coach.

Terrence Newman and Anthony Henry both showed some good things.

DeMarcus Ware did not show much to confirm my high hopes, as he was hard to notice.

Never seen a 3rd and 50 before, but I really enjoyed it.

Tony Romo sure looks awfully confident out there.

I think #68, Pepper Johnson, makes this team.

Tyson Thompson looks pretty fast against 3rd team defenders. I want to see more of that.

Here is where I would normally type a paragraph pounding Drew Bledsoe. I just can’t get too carried away yet. I think it will be difficult to survive a season with the combination of that QB and that Offensive Line. But, it is also early August, so let’s all take a deep breath and see what next week holds.

7 comments: Anonymous said... If the 'Boys have a glaring cesspool at RT, but a fine player at a different position who doesn't quite fit the system (Glover), shouldn't they try to work out a trade? D-Line depth is one of the things that many teams often complain about. It seems likely that somebody has a good young RT behind a solid starter. If the 'Boys could trade for the solid starter, it'd help both teams. I didn't note a Parcells press conference on the BaD website...does that mean we don't have to listen to that crap at 1:30 today? By the way, there was a T.O. sighting today; he was applying to be the Affirmative Action Director with the Philly chapter of the Rainbow Coalition.
His motto is "If I ain't getting paid, then football ain't getting played". To save us an hour of a$# whipping, here is the transcript of the Hansen show.
1. Offense looks like crap
2. Bledsoe looked slow
3. RT is gonna get someone hurt
4. Vinny will be signed by October
5. Cowboy fans are used to being average
6. Defense still can't get pressure on the QB
7. I'm God, bow down before me
8. The lovely Mrs. Hansen said that I'm a stud
If you listened to me this weekend, which you didn't, you would realize that Finley is trash and we can only trade him for garbage. I referred to Finley as Mavs trash at least 5 times in one segment alone! How can you be so bummed? If Finley was any good, he would have at least started for a playoff team last year. I am also right about soccer and the world cup is boring and, like the holocaust, a total media fiction. The fans in those stadia (not such a hick after all) are all computer generated.
I'm so disappointed in things at the temple. I guess this is how the Tampa Bays of the world get along with life. My family has owned season tix since the year before the temple, and I just may cancel them. It's very disappointing that last year at the deadline they couldn't get one SP or bat, when they were getting great bullpen pitching, I mean great bullpen pitching, way above average starting pitching for Texas that is, and good offensive production. Yet Hicks and company thought that this is just one of many, many years they will be 1 game out with a week to play, so they did nothing. When you catch lightning you GO FOR IT, PERIOD. No excuse, I don't care if DVD turns into 3 Pedros. I'm sorry, but FU*K them for doing this to us.
The gp130-stimulating designer cytokine hyper-IL-6 promotes the expansion of human hematopoietic progenitor cells capable of differentiating into functional dendritic cells. Hyper-IL-6, a fusion protein of interleukin-6 and its specific receptor, together with stem cell factor leads to the proliferation of primitive hematopoietic progenitor cells. Based on these findings, the current study examined whether hyper-IL-6 promotes the growth of precursor cells that can be further differentiated into dendritic cells in the presence of additional cytokines. Dendritic cell cultures were generated from CD34(+) hematopoietic progenitor cells derived either from bone marrow or from peripheral blood. CD34(+) cells were cultured in the presence of cytokines for 2 weeks and then used for phenotyping and T-cell stimulation assays. Hyper-IL-6 in the presence of stem cell factor induced a 60- to 80-fold expansion of CD34(+) progenitor cells following 2 weeks of culture in serum-free medium. The addition of granulocyte-macrophage colony-stimulating factor to hyper-IL-6 and stem cell factor was essential for the differentiation of expanded progenitor cells into antigen-presenting cells capable of inducing a primary T-cell response to soluble protein, which is a typical feature of dendritic cells. Phenotypic analyses confirmed the expansion of immature dendritic cells, which could be further differentiated into mature CD83(+) dendritic cells under the influence of interleukin-4, interleukin-1beta, tumor necrosis factor-alpha, and prostaglandin E(2). The capacity of the expanded dendritic cells to stimulate protein-specific CD4(+) T cells was confirmed by inducing a primary T-helper cell response to the recombinant hepatitis-B core antigen protein in healthy donors.
The expansion and differentiation of functional dendritic cells from CD34(+) progenitor cells under serum-free culture conditions allow for the possibility to develop more effective ways to immunize against viral infections and tumor diseases.
Glaucoma is an ocular disorder associated with elevated intraocular pressure that is too high for normal function and may result in irreversible loss of visual function. If untreated, glaucoma may eventually lead to blindness. Ocular hypertension, i.e., the condition of elevated intraocular pressure without optic nerve head damage or characteristic glaucomatous visual field defects, is now believed by many ophthalmologists to represent the earliest phase of glaucoma. Many of the drugs formerly used to treat glaucoma proved not entirely satisfactory. Indeed, few advances were made in the treatment of glaucoma since pilocarpine and physostigmine were introduced. Only recently have clinicians noted that many β-adrenergic blocking agents are effective in reducing intraocular pressure. While many of these agents are effective in reducing intraocular pressure, they also have other characteristics, e.g. membrane-stabilizing activity, that are not acceptable for chronic ocular use. (S)-1-tert-Butylamino-3-[(4-morpholino-1,2,5-thiadiazol-3-yl)oxy]-2-propanol, a β-adrenergic blocking agent, was found to reduce intraocular pressure and to be devoid of many unwanted side effects associated with pilocarpine and, in addition, to possess advantages over many other β-adrenergic blocking agents, e.g. to be devoid of local anesthetic properties, to have a long duration of activity, and to display minimal tolerance. Although pilocarpine, physostigmine and the β-blocking agents mentioned above reduce intraocular pressure, none of these drugs manifests its action by inhibiting the enzyme carbonic anhydrase and, thereby, impeding the contribution to aqueous humor formation made by the carbonic anhydrase pathway. Agents referred to as carbonic anhydrase inhibitors block or impede this inflow pathway by inhibiting the enzyme carbonic anhydrase.
While such carbonic anhydrase inhibitors are now used to treat elevated intraocular pressure by oral, intravenous or other systemic routes, they thereby have the distinct disadvantage of inhibiting carbonic anhydrase throughout the entire body. Such a gross disruption of a basic enzyme system is justified only during an acute attack of alarmingly elevated intraocular pressure, or when no other agent is effective. Despite the desirability of directing the carbonic anhydrase inhibitor only to the desired ophthalmic target tissue, no topically effective carbonic anhydrase inhibitors are available for clinical use. However, topically effective carbonic anhydrase inhibitors are reported in U.S. Pat. Nos. 4,386,098; 4,416,890; 4,426,388; and 4,668,697, where the compounds reported therein are 5 (and 6)-hydroxy-2-benzothiazole-sulfonamides and acyl esters thereof and 5 (and 6)-hydroxy-2-sulfamoyl-benzothiophenes and esters thereof, and in U.S. Pat. No. 4,677,115, where the compounds are reported to be 5,6-dihydro-thieno-thiophensulfonamides.
/* * Generated by class-dump 3.3.4 (64 bit). * * class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2011 by Steve Nygard. */ #import <DTGraphKit/DTNetworkGraphVelocitySpreadLayout.h> @interface DTNetworkGraphCycleLayout : DTNetworkGraphVelocitySpreadLayout { } - (void)layoutGraph:(id)arg1; @end
What makes folk tales unique: content familiarity, causal structure, scripts, or superstructures? Requiring readers to re-order randomly ordered sentences into a coherent text significantly enhances recall relative to that in a read-only control condition for non-folk-tale texts but not for folk tales (Einstein, McDaniel, Owen, & Coté, 1990). Experiments 1-3 showed that embedding components of folk tales (e.g., causal structure, conventional scripts, content related to background knowledge) in non-folk-tale texts did not render sentence unscrambling ineffective for increasing recall. In Experiments 4a-4c, a folk tale was presented either as a fairy tale or as part of a newspaper article. Significant sentence unscrambling effects (in free recall) were not obtained in either presentation format, which implies that a story superstructure (a story grammar) does not contribute to the absence of the sentence unscrambling effect. It is suggested that understanding why the sentence unscrambling effect is absent for folk tales may require considering the functional role that narrative plays in socioculturally situated cognition.