Sardinian Autonomist Populars The Sardinian Autonomist Populars (Popolari Autonomisti Sardi, PAS) was a minor regionalist Christian-democratic Italian political party based in Sardinia. The party was founded on 1 March 2008 by Sardinian splinters from the UDEUR led by Satta, who was at the time the party's deputy national secretary. All eight provincial sections of the UDEUR followed Satta into the new party. After trying to form an alliance for the 2008 general election with either the Union of Christian and Centre Democrats or the Movement for Autonomy, in June the PAS joined forces with splinters of The Rose for Italy led by Mario Baccini in order to form the Federation of Christian Populars (FCP). Satta became vice president of the new party and its regional leader in Sardinia. In August 2009, however, Satta left the FCP in order to launch the Christian Popular Union. References Category:Political parties in Sardinia Category:2008 establishments in Italy Category:Political parties established in 2008
Field of the Invention The present invention relates to display control that utilizes a virtual three-dimensional space, and more particularly to display control processing for flying a player object in a virtual three-dimensional space. Description of the Background Art Conventionally, there are known so-called flight simulation games. In a flight simulation game, a player operates and flies a player object (e.g., an airplane) in a virtual three-dimensional space, which allows the player to enjoy the feeling of flying freely in the sky. However, simply flying in the sky (within the virtual space) does not by itself make for an amusing game. Therefore, some attempts have been made to add amusing game characteristics. As one example, there is a game in which a player acquires predetermined points by sequentially acquiring ring-shaped objects that are placed in the air. This adds amusing characteristics to the game (e.g., “Pilotwings 64 Toriatsukai Setsumeisho (Instruction Manual of Pilotwings 64)”, Nintendo Co., Ltd., Jun. 23, 1996, p. 21; and “64 Books 100 Percent Asobu Pilotwings 64 (64 Books—Play Pilotwings 64 to 100 percent)”, GEIBUNSYA PUBLISHING Co., Ltd., Sep. 28, 1996, p. 54). In the above game, the player can acquire a ring by simply causing the player object to come into contact with the ring (in this game, when the player has acquired a ring, the player may be informed that the player object has “passed through” the ring). Since the player is required to steer the player object into contact with a ring in order to acquire it, the game's amusing characteristics are improved to some extent. However, the game, in which the player simply aims to fly the player object to the position of a ring for the purpose of acquiring the ring, is still monotonous and lacks variety.
In this respect, the applicant of the present application discovered that there was still room for improvement: for example, by allowing a player to freely control a flying player object and perform more difficult flight operations (e.g., an operation of causing the player object to pass through a ring) with a moderate degree of difficulty (i.e., neither too difficult nor too easy), the amusement of the game could be enhanced.
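The ring-acquisition test described in the background above (detecting that the player object has "passed through" a ring) can be sketched as a segment-versus-disc intersection. The following is a minimal illustration in Python; modelling the ring as a centre point, facing normal, and radius is an assumption made for this sketch, not a detail taken from the cited games:

```python
import numpy as np

def passed_through_ring(prev_pos, cur_pos, ring_center, ring_normal, ring_radius):
    """Return True if the movement segment prev_pos -> cur_pos crosses the
    ring's plane at a point within the ring's radius, i.e. the player object
    flew through the ring during this frame."""
    prev_pos = np.asarray(prev_pos, dtype=float)
    cur_pos = np.asarray(cur_pos, dtype=float)
    ring_center = np.asarray(ring_center, dtype=float)
    ring_normal = np.asarray(ring_normal, dtype=float)
    ring_normal = ring_normal / np.linalg.norm(ring_normal)

    # Signed distances of the two positions from the ring's plane.
    d_prev = float(np.dot(prev_pos - ring_center, ring_normal))
    d_cur = float(np.dot(cur_pos - ring_center, ring_normal))
    if d_prev * d_cur > 0:
        return False          # both endpoints on the same side: no crossing
    denom = d_prev - d_cur
    if denom == 0:
        return False          # segment lies in the plane: treat as no crossing
    t = d_prev / denom        # interpolation factor of the plane crossing
    hit = prev_pos + t * (cur_pos - prev_pos)
    return bool(np.linalg.norm(hit - ring_center) <= ring_radius)
```

Running the check once per frame against each active ring is enough for the mechanic described; a real game would also handle fast-moving objects and ring thickness.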
External fixation of distal femoral fractures in adults: a multicentre retrospective study of 43 patients. A multicenter cohort of 43 adults with distal femoral fractures (DFFs) managed with external fixation was evaluated to determine the potential of this treatment. The patients were young adults (mean age: 39.6 years) with high-energy trauma; 12 had polytrauma and 41 had multiple fractures. Most patients (38/43) had compound DFFs. Fracture types were A in 3 patients, B in 3 patients, and C in 37 patients. A tibio-femoral construct was required in 11 patients and a femoro-femoral construct in 32 patients. The normal femoral axis was restored within 5° in the coronal plane in 34 (79%) patients and in the sagittal plane in 22 (51%) patients. Axis restoration within 5° in both planes was achieved in 19 (44.7%) patients. After femoro-femoral external fixation, mean malalignment was 4.2° in the coronal plane and 8.6° in the sagittal plane; corresponding values after tibio-femoral external fixation were 1.3° and 8.6°. In 23 patients (of whom 1 was lost to follow-up), external fixation was intended as the only and definitive treatment; among them, 1 required amputation after a failed revascularization procedure, 10 achieved fracture healing within a mean of 21.2 weeks, 6 required conversion to another technique, and 5 underwent non-conservative procedures (total knee arthroplasty in 3 and arthrodesis in 2). In the remaining 20 patients, conversion to internal fixation was intended initially and performed within a mean of 4.7 weeks; 1 of these patients required amputation for ischemia, 3 did not achieve fracture healing, 12 achieved primary fracture healing, and 4 achieved fracture healing after repeated grafting (n=3) or osteotomy (n=1). At last follow-up (at least 1 year), the mean International Knee Society (IKS) Function Score was 67.3 and the mean IKS Knee Score was 68.5.
Range of active flexion was 85.75° overall: 62.3° in the group with intended definitive external fixation and 101° in the group with intended conversion to internal fixation. Healing without complications was achieved in 10 patients (43%) in the former group and 12 (60%) in the latter group. Our data support provisional external fixation followed by early conversion to internal fixation in patients with extensively compound DFFs; in patients with multiple fractures requiring several surgical procedures; and in polytrauma patients awaiting hemodynamic, respiratory, or neurological stabilization. Level of evidence: IV, retrospective study.
Rituximab in combination with platinum-containing chemotherapy in patients with relapsed or primary refractory diffuse large B-cell lymphoma. The aim of the study was to evaluate the efficacy of a regimen consisting of rituximab and platinum-containing chemotherapy with either Ifosfamide, Carboplatin and Etoposide (ICE) or Cisplatin, high-dose Ara-C and Dexamethasone (DHAP) in patients with relapsed or primary refractory diffuse large B-cell lymphoma. Ten patients with relapsed or primary refractory diffuse large B-cell lymphoma were treated from June 2000 until May 2001 with a platinum-containing chemotherapy regimen according to the ICE or DHAP protocol in combination with rituximab at the University of Muenster. Two cycles of ICE or DHAP plus rituximab were given. If at least a minor response was achieved after 2 cycles, 2 additional cycles of the same combination were applied. Response rate, remission duration and duration of survival were evaluated. All 10 patients could be analysed with respect to these endpoints. No treatment-related mortality was observed. The response rate (CR/PR) was 60% (10%/50%). Twenty percent of the patients had progressive disease. The median duration of remission and survival was 3 and 3.5 months, respectively (range: 1-6 and 1-7 months, respectively), and the survival rate was 10%. Eight of 10 patients died of their underlying disease after a short remission duration, and 1 patient died of complications of allogeneic transplantation while in CR. In conclusion, the combination of platinum-containing chemotherapy (ICE or DHAP) with rituximab demonstrates significant activity in intensively pretreated patients with relapsed or primary refractory diffuse large B-cell lymphoma. Considering the short duration of remission and survival, other experimental therapeutic approaches (e.g. allogeneic stem cell transplantation, radioimmunotherapy) should be pursued following this treatment in order to induce long-term remission.
Synthesis and evaluation of analogues of N-phthaloyl-l-tryptophan (RG108) as inhibitors of DNA methyltransferase 1. DNA methyltransferases (DNMT) are promising drug targets in cancer provided that new, more specific, and chemically stable inhibitors are discovered. Among the non-nucleoside DNMT inhibitors, N-phthaloyl-l-tryptophan 1 (RG108) was first identified as an inhibitor of DNMT1. Here, analogues of 1 were synthesized to understand its interaction with DNMT. The indole, carboxylate, and phthalimide moieties were modified. Homologated and conformationally constrained analogues were prepared. The latter were synthesized from prolinohomotryptophan derivatives through a methodology based on amino-zinc-ene-enolate cyclization. All compounds were tested for their ability to inhibit DNMT1 in vitro. Among them, constrained compounds 16-18 and NPys derivatives 10-11 were found to be at least 10-fold more potent than the reference compound. The cytotoxicity of the most potent inhibitors on the DU145 tumor cell line was correlated to their inhibitory potency. Finally, docking studies were conducted in order to understand their binding mode. This study provides insights for the design of the next generation of DNMT inhibitors.
President Trump reportedly dictated a misleading statement about his son's meeting with a Russian lawyer that was ultimately issued to The New York Times by Donald Trump Jr., The Washington Post reported Monday evening. Trump dictated the statement on July 8, while he was en route back to the United States from the Group of 20 summit in Germany, to director of strategic communications Hope Hicks, the Post said. The statement about a meeting Trump Jr. had with a Russian lawyer during the 2016 presidential race emphasized that it was "not a campaign issue at the time." Instead, it said the topic had primarily been Russian adoption policy. But a few days later, news broke that Trump Jr. had arranged the meeting believing he would obtain harmful information about Democratic presidential candidate Hillary Clinton. The New York Times first disclosed details of the meeting, which took place in July 2016 and also included Jared Kushner and then-Trump campaign chairman Paul Manafort. According to the report, Ivanka Trump and Kushner, her husband, worked with advisers during breaks at the G-20 summit to craft a response to questions from the Times. Hicks and another aide pushed for transparency, the Post said. But the president reportedly overruled the consensus his advisers had reached on how to respond to inquiries about the meeting. Trump Jr. did not respond to the Post's requests for comment Monday, while his attorney said he and his client "were fully prepared and absolutely prepared to make a fulsome statement" about the details surrounding the meeting. His lawyer also said he has "no evidence to support" the "theory" that Trump was involved in writing the statement. The president's attorney, Jay Sekulow, declined to discuss details of Trump's involvement with the statement. "Apart from being of no consequence, the characterizations are misinformed, inaccurate, and not pertinent," Sekulow said in his statement to the Post.
Rare amoeba eats Seattle woman's brain – KXLY Spokane A frightening story out of Seattle today comes with a warning. A woman there died after an amoeba ate her brain. She got it using a common health tool, and doctors say it could happen again if someone else is doing the same thing. The International Journal of Infectious Diseases put together a case study on the 69-year-old woman after she died earlier this year. She was not identified in the study, but the Seattle Times reports she was a Seattle resident. Doctors believe she got the brain-eating amoeba after using a neti pot to rinse her sinuses. Instead of using sterile water, or a saline solution, she used tap water. The amoeba reached her brain after entering her nose and had been feeding on her brain for a whole year before it killed her. There have been more than 100 cases of this kind of amoeba, Balamuthia mandrillaris, in the United States since the 1970s. 90 percent of the time, the infection kills the patient. Doctors say people who rinse their sinuses should not use tap water to do so.
/* ====================================================================
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
==================================================================== */

package org.apache.poi.hemf.draw;

import java.awt.Color;
import java.awt.geom.AffineTransform;
import java.awt.geom.Path2D;
import java.awt.geom.Rectangle2D;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

import org.apache.poi.hemf.record.emfplus.HemfPlusBrush.EmfPlusHatchStyle;
import org.apache.poi.hwmf.draw.HwmfDrawProperties;
import org.apache.poi.sl.draw.ImageRenderer;

public class HemfDrawProperties extends HwmfDrawProperties {

    enum TransOperand {
        left(AffineTransform::concatenate),
        right(AffineTransform::preConcatenate);

        BiConsumer<AffineTransform,AffineTransform> fun;

        TransOperand(BiConsumer<AffineTransform,AffineTransform> fun) {
            this.fun = fun;
        }
    }

    /** Path for path bracket operations */
    protected Path2D path = null;
    protected boolean usePathBracket = false;
    private EmfPlusHatchStyle emfPlusBrushHatch;
    private ImageRenderer emfPlusImage;
    private final List<AffineTransform> transXForm = new ArrayList<>();
    private final List<TransOperand> transOper = new ArrayList<>();
    private Rectangle2D brushRect;
    private List<? extends Map.Entry<Float,Color>> brushColorsV;
    private List<? extends Map.Entry<Float,Color>> brushColorsH;

    public HemfDrawProperties() {
    }

    public HemfDrawProperties(HemfDrawProperties other) {
        super(other);
        path = (other.path != null) ? (Path2D)other.path.clone() : null;
        usePathBracket = other.usePathBracket;
        emfPlusBrushHatch = other.emfPlusBrushHatch;
        // TODO: check how to clone
        clip = other.clip;
        emfPlusImage = other.emfPlusImage;
        transXForm.addAll(other.transXForm);
        transOper.addAll(other.transOper);
        if (other.brushRect != null) {
            brushRect = (Rectangle2D)other.brushRect.clone();
        }
        if (other.brushColorsV != null) {
            brushColorsV = new ArrayList<>(other.brushColorsV);
        }
        if (other.brushColorsH != null) {
            brushColorsH = new ArrayList<>(other.brushColorsH);
        }
    }

    /**
     * @return the current path used for bracket operations
     */
    public Path2D getPath() {
        return path;
    }

    /**
     * Un-/Sets the bracket path
     * @param path the bracket path
     */
    public void setPath(Path2D path) {
        this.path = path;
    }

    /**
     * Use path (bracket) or graphics context for drawing operations
     * @return {@code true}, if the drawing should go to the path bracket,
     *   if {@code false} draw directly to the graphics context
     */
    public boolean getUsePathBracket() {
        return usePathBracket;
    }

    public void setUsePathBracket(boolean usePathBracket) {
        this.usePathBracket = usePathBracket;
    }

    public EmfPlusHatchStyle getEmfPlusBrushHatch() {
        return emfPlusBrushHatch;
    }

    public void setEmfPlusBrushHatch(EmfPlusHatchStyle emfPlusBrushHatch) {
        this.emfPlusBrushHatch = emfPlusBrushHatch;
    }

    public ImageRenderer getEmfPlusImage() {
        return emfPlusImage;
    }

    public void setEmfPlusImage(ImageRenderer emfPlusImage) {
        this.emfPlusImage = emfPlusImage;
    }

    public void addLeftTransform(AffineTransform transform) {
        addLRTransform(transform, TransOperand.left);
    }

    public void addRightTransform(AffineTransform transform) {
        addLRTransform(transform, TransOperand.right);
    }

    private static <T> T last(List<T> list) {
        return list.isEmpty() ? null : list.get(list.size()-1);
    }

    private void addLRTransform(AffineTransform transform, TransOperand lr) {
        if (transform.isIdentity() ||
            (transform.equals(last(transXForm)) && lr.equals(last(transOper)))) {
            // some EMFs add duplicated transformations - ignore them
            return;
        }
        transXForm.add(transform);
        transOper.add(lr);
    }

    public void clearTransform() {
        transXForm.clear();
        transOper.clear();
    }

    List<AffineTransform> getTransXForm() {
        return transXForm;
    }

    List<TransOperand> getTransOper() {
        return transOper;
    }

    public Rectangle2D getBrushRect() {
        return brushRect;
    }

    public void setBrushRect(Rectangle2D brushRect) {
        this.brushRect = brushRect;
    }

    public List<? extends Map.Entry<Float, Color>> getBrushColorsV() {
        return brushColorsV;
    }

    public void setBrushColorsV(List<? extends Map.Entry<Float, Color>> brushColorsV) {
        this.brushColorsV = brushColorsV;
    }

    public List<? extends Map.Entry<Float, Color>> getBrushColorsH() {
        return brushColorsH;
    }

    public void setBrushColorsH(List<? extends Map.Entry<Float, Color>> brushColorsH) {
        this.brushColorsH = brushColorsH;
    }
}
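A quick numerical note on why the class above tracks the operand side at all: AffineTransform.concatenate(t) computes this = this × t, while preConcatenate(t) computes this = t × this, and because matrix multiplication is non-commutative the two generally give different results. A minimal illustration in Python with plain 2×2 matrices (the variable names are illustrative, standing in for AWT's AffineTransform state):

```python
import numpy as np

# Two non-commuting linear maps: a 90-degree rotation and a non-uniform scale.
rotate = np.array([[0.0, -1.0],
                   [1.0,  0.0]])
scale = np.array([[2.0, 0.0],
                  [0.0, 1.0]])

current = rotate.copy()

# AffineTransform.concatenate(t)    ~ current = current @ t
after_concatenate = current @ scale
# AffineTransform.preConcatenate(t) ~ current = t @ current
after_preconcatenate = scale @ current

# The order matters, so replaying recorded transforms on the wrong side
# would render the EMF content incorrectly.
print(np.array_equal(after_concatenate, after_preconcatenate))  # -> False
```

This is why addLRTransform records both the transform and its TransOperand, so the stored operations can later be replayed in the exact order and on the exact side the EMF record specified.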
Q: So I can't do gameObject.AddComponent(StringVariable); anymore... What's the alternative? Been searching the web, and haven't really gotten a straight answer. What I want to do is this: class Weapon1 { public string Ability = "example"; } class Weapon2 { public string Ability = "other"; } class Character : MonoBehaviour { Weapon1 foo = new Weapon1(); Weapon2 boo = new Weapon2(); void useAbility () { gameObject.AddComponent(foo.Ability); gameObject.AddComponent(boo.Ability); } } But it tells me it's outdated. Any help is appreciated. A: You'll have to use types now. See if this works: class Weapon1 { public Type Ability = typeof(ExampleAbilityClassName); } class Weapon2 { public Type Ability = typeof(OtherAbilityClassName); } class Character : MonoBehaviour { Weapon1 foo = new Weapon1(); Weapon2 boo = new Weapon2(); void useAbility () { gameObject.AddComponent(foo.Ability); gameObject.AddComponent(boo.Ability); } } This is in any case much better code than using strings, which are more error-prone.
Q: DevExpress pivot grid: send exported Excel file as mail attachment. How can I send a mail with a DevExpress PivotGrid attached as an Excel file? fileName = "GridView.xls"; this.ASPxGridViewExporter1.WriteXlsToResponse(); Response.Clear(); Response.AddHeader("Content-Type", contentType); Response.AddHeader("Content-Transfer-Encoding", "binary"); Response.AddHeader("Content-Disposition", disposition + "; filename=" + fileName); Response.BinaryWrite(buffer); A: There is no such built-in capability. You need to export the PivotGrid and send the email using standard methods. There is a corresponding ticket in the DX Support Center: Q422021. Note also that you are using a GridView exporter; you need to use ASPxPivotGridExporter instead.
In these times of economic meltdown, who wouldn't like to get free things like Drake concert tickets? It is a truth that a good many people would rather spend their hard-earned money on food or other more practical things than on concert tickets. But there are a few happy individuals who get to watch concerts for free. Sales reports are looking better than expected, but retailers are still nervous about post-Christmas sales, so it is definitely a buyer's market. If you are scrambling to find some last-minute gifts, don't fall into the trap of just buying the first thing you see. Do your research, check online deals for pricing (use your smartphone to compare prices), negotiate with salespeople, and so on, and you'll find that the perfect gift for your family member may not hurt your pocketbook nearly as much as you thought possible. Many men assume that if they manage to get a date with a seriously attractive woman, they should be prepared to spend generously: dinners at nice restaurants, rounds of drinks at expensive nightclubs, concert tickets, gifts, etc. A general lack of love is a sure sign of things in a downward spin. When a woman decides that she's not interested anymore, it is time to really keep your distance. Give her space; remember that women love space. When a woman loses the desire to be affectionate, do something nice and simple. Maybe leave a nice note on her car before she goes to work, or have a bath drawn for her, with some candles, when she returns from work. The key is not to expect anything in return. Do it because you care, not because you want something. She will pick up on that like a hawk and will appreciate you for it. No fees/low fees: finding a card without fees is another way to cut down the amount of money you are spending. Most financial institutions have a no-fee or low-fee credit card option.
Got some additional tickets to the next big concert on Friday evening? Why not sell them on StubHub? You can also buy tickets there, and all purchases are guaranteed. There you can get sports tickets, Cirque du Soleil tickets, concert tickets -- you name it, they have it. For the avid golfers in your life comes this product: it allows golfers to dispense both hot and cold beverages on the golf course, and it takes the form of a golf club. One of the best Christmas gifts 2010 for men, it is also a great conversation piece. It is easy to store and clean, making it one of the best holiday gifts 2010 for men. While wedding plans can be extremely busy and a little hectic, that does not mean you need to stress out about the bridesmaid presents. Remember to keep things simple and work on getting something that will be appropriate for the bridesmaids. Your bridesmaids will appreciate your effort.
Another trendy establishment that’s opened up in Shoreditch, Yuu Kitchen serves up tapas-style dishes from South-East Asia and the Pacific Rim. Trying our luck, we rocked up without a reservation which turned out well when the friendly waiter shifted some bookings around to cater for our group of 5 – kudos to you mister! Cocktails are nice, albeit on the strong side (I had the feng shui £8.00) so this saw me through the entire meal as I sipped it very slowly. We ordered a range of dishes from the menu starting with the edamame (£3.50), pork chicharron (£3.70) and taro and lotus root chips (£4.00). The bao buns were the only items we didn’t share, not just because of the size but the taste alone meant given the choice, you wouldn’t want to share anyway. The buns were only big enough for one portion and were light, fluffy and the filling – full of intense flavour. A great place to try, especially if you’re in a group and want something different to your standard Asian cuisine. Recommended dishes are the octopus, chicken wings, ribs and the soft shell crab bao buns.
It is fairly common for a business owner to want to exit their business rapidly after completing a sale. Most owners are accustomed to being the boss and aren’t sure how they will feel after the new owner has stepped in to run the business, or they are simply ready to move on. However, one of the biggest buyer fears is that performance will decline after ownership is transferred because of the absence of the former owner's knowledge, experience, and relationships. For a business to be attractive, buyers will need to feel confident that there can be a smooth transition with little or no business interruption, and if problems do arise that they will be able to depend on the seller for help and advice to resolve the issues. The length of time for training and transition will vary from one business to another, and the seller’s compensation for such will depend on the nature of the business and the structure of the sale. Generally, the less dependent that a business is on the owner’s knowledge and relationships, the shorter the training and transition period. Conversely, the more dependent a business is on an owner’s knowledge and relationships, the longer the training and transition period. For most business sales, Codiligent is able to negotiate a training and transition period of 2-4 weeks as part of the agreed upon sale price and structure, with promised seller availability for limited paid consulting hours for an additional 1-11 months. With businesses that have a high degree of dependence on the seller’s specialized knowledge or relationships, training and transition could last as long as 2-3 years. With any sale, a business will attract more buyer interest if an owner is willing to be available for a longer transition period. Accommodating a longer-than-typical transition period can be accomplished using an agreed upon employment contract or by making paid consulting hours available. 
While a buyer may not use consulting hours offered by a seller, the fact that they are available will lower a buyer’s perception of risk and provide greater confidence of success after the sale has closed. If you are planning for a sale a few years in advance, you may want to ask yourself, “How dependent is the business on me? Are there important relationships that I have which may not transfer to a new owner? Do I have unique knowledge and skills that a buyer may not have? Are there things that I know that my employees don’t?” If the answer to any of these questions is “yes”, you may want to work on some changes to the business which could include: Recording your knowledge in written form showing step-by-step instructions on how you address particular issues; Training and educating employees to provide them with your knowledge; Increasing employee involvement with clients, vendors, and consultants to shift your personal relationships to the business. Making these changes will help minimize the amount of post-sale training and transition and will create a greater sense that the business’ performance isn’t dependent on you, which will lower the perception of risk, increase marketability, and ultimately increase the price you may receive. Want some advice from someone who has recently been through the M&A process? Location Labs was recently acquired by antivirus company AVG for $220 million. The founder of Location Labs, Tasso Roumeliotis, shared some insights on issues to consider when pursuing a sale or merger of your company in an article on VentureBeat.com, "The 5 things you can't forget when preparing for M&A". I concur with all of his comments except for one: he says that your VP of Finance will need to be extremely good: strategically savvy, incredibly organized, and willing to work 100 hours a week for 3 to 4 months straight.
That very well may be true for companies the size of Location Labs - particularly if they did not use an investment banker to help with the deal. However, Codiligent takes on smaller clients, usually under $20 million in annual revenue. So while we do hope that clients will be well organized and that the person dealing with assembling financial information will work hard, as the intermediary we usually play the role that Tasso describes of financial modeling, negotiations, term-setting, and other transaction aspects that fall under the financial category. In fact, some of our clients' key financial persons are not aware that the owner is selling the business until the end of the deal. It is accurate that this financial role, which Codiligent primarily plays for its clients, is extremely time consuming. Many would-be business sellers mistakenly assume that their investment banker or business broker is simply matching up a buyer and seller, and does little else. If you are using a high-quality intermediary the work they do behind the scenes is substantial. That's not to say that there aren't plenty of business brokers and investment bankers that take short-cuts and don't do all that they should - but that's why it's important to select the right broker. 1. General Marketing / Negotiated Deal: This business sale model is when a business broker places ads on websites or does other types of general marketing in hopes of attracting strategic and financial buyers, who then call or email in response to ads. When a buyer is interested they will not move forward as part of an auction process with a deadline, but rather will submit a Letter of Intent (LOI) which will then be negotiated with the seller. This is most appropriate for businesses where it would be difficult to identify logical strategic buyers (usually more main-street type businesses - for example, a single-location restaurant or retail business). 2. 
Active Search / Negotiated Deal: Some businesses wouldn't likely be a good acquisition for a general buyer due to the specialized nature of the business. Other businesses with high confidentiality requirements may be so unique that it would be hard to create an accurate description of the business in an advertisement that wouldn't raise suspicions about the true identity of the business. Under such circumstances, general advertising may be inappropriate, and instead a business broker may conduct an Active Search for logical strategic buyers and for financial buyers with specific matching acquisition criteria. When using this business sale model there is not an auction with a deadline, rather buyers submit a LOI after they become comfortable with the business. While the business broker strives to create competition for the business, the absence of an offer deadline may lower the probability of receiving multiple concurrent offers. Some business sellers assume that means this is not a good business sale model. However, many business buyers refuse to devote the time and energy necessary to participate in an auction, so a seller who is depending on an auction process could find that they don't receive ANY offers. An active search with a negotiated deal is more common for small and lower mid-market businesses than an auction. 3. General Marketing & Active Search / Negotiated Deal: This is the business sale model that Codiligent most often utilizes. It involves a combination of general marketing and an active search for logical strategic and financial buyers, culminating in a negotiated offer, rather than an auction process. The goal is to get multiple parties to move forward at roughly the same time to create competition but without the risk of having low participation in an auction. This business sale model will tend to be most productive in attracting a broader group of both financial and strategic buyers. 
While the broker will take great care to protect the confidentiality of the seller, if there is a reason why an unusually high level of confidentiality is required and the business is fairly unique and thus difficult to accurately describe without raising suspicions about its identity, this model may be uncomfortable for some sellers. 4. Active Search / Auction Process: This business sale model involves locating logical strategic buyers and financial buyers who have acquisition criteria that match the characteristics of the business, and then getting them to agree to participate in an auction where they are provided with information about the business and then are given a date by which they must submit their best offer. After receiving the bids the broker clarifies the deal terms and then helps the seller choose the one that best meets the seller's goals. This model can be effective for businesses for which a seller and their broker are confident there will be high demand (often because there are multiple parties who have previously made inquiries or let their interest be known, and there are other acquisitive industry buyers for whom the business would be an extremely good match). For businesses that may not have as strong of demand, an auction can be problematic for the following reasons: The timing may not be exactly right for the companies that would be ideal bidders (where even just one or two months later they would have the bandwidth to devote to an acquisition); Some buyers refuse to participate in an auction because they are unwilling to devote the time and energy necessary to evaluate the business if they perceive themselves to be competing directly with multiple other buyers; and Many of the best buyers are not anticipated by the seller and business broker - Axial Market suggests that 40%-45% of actual business buyers are not on a seller's top tier buyer list. 
If you are working with a quality business broker or investment banker, they will help you determine which business sale model is best for meeting your exit objectives. If you are not already working with a business broker and would like to explore this further, please contact Codiligent for a confidential consultation at 888-324-5888. Many business owners underestimate the nuances and complexity of exit planning. Unfortunately, this is often exacerbated by professional advisors who try to overly control the process or who want to be the sole representative of the business owner throughout it. For example, a contract CFO, wealth manager, CPA, or attorney may be keenly interested in exit and transition planning and may have significant experience, formal education, seminar training, and reading on the subject. However, I would argue that it would be the rare individual who has expertise and skill in all of the areas necessary to help a business owner optimize their exit. Rather, a business owner should look for advisors who play well with others. So, yes, an attorney plays a vital role in the exit process, but an attorney is not an accountant and may not possess a CPA's knowledge. Likewise, while the attorney may be good at protecting clients through appropriate legal documents, and the CPA may have invaluable advice on the tax implications of the deal structure, neither professional may have the skill set an investment banker or business broker has for analyzing, valuing, and packaging the business, or for finding and screening the best buyer and negotiating a win-win deal. And what about planning for what a business owner will do after the sale? Is a business broker, investment banker, attorney, or CPA going to be the best person to advise on what types of assets you need to own to securely provide for your long-term financial needs and reach your goals in the next chapter of your life?
Or might a wealth manager be better equipped to provide this type of guidance?

How important is economic freedom? Are less free and more regulated people, as a whole, just as well off as those with more economic freedom? Are all types of economic organization equally good, just with different pros and cons? Or does economic freedom truly lead to a higher quality of life? Which countries' people have the lowest quality of life, and what is their level of economic freedom? Do property rights really matter? Is a smaller or larger role for government better? Check out the following 60-second video that provides a perspective on this topic. What do you think?
Zenzero has a great position at the beach at Camps Bay. However, the Italian food we had was very average and below par compared to what's on offer in Cape Town. Suggest this is a nice place for a coffee, light lunch, drinks etc., but perhaps not for a meal.

I visited Zenzero with my husband as a treat on our honeymoon. The location is beautiful, but the experience was seriously disappointing. We did not feel welcomed, nor could we relax during our meal. We had a number of waiters pushing add-ons to our order throughout the meal despite consistently stating we were happy with what we ordered. The food was disappointing.

My wife loved the baby squid, highly recommended; she would return just for that again. The Parma ham and melon starter was a very generous portion and yummy. Sea bass cooked perfectly; my wife had the mussels and loved her dish. Great views as you're overlooking the beach and sea... would go back.

Buzzy atmosphere and delicious food. The mussels in white wine sauce were divine and the sea bass was amazing - very fresh. So good that we came back again - would definitely recommend. Fantastic sea view over Camps Bay.

Great seafood, terrific location in Camps Bay. We enjoyed the grilled calamari as well as the grilled linefish (can't remember what it was called). Best drink was the yummy pineapple and kiwi daiquiri. Loved it here, thanks.

Stopped in here for a late lunch; there was a great buzz, beach views and delicious food. I had a fresh Caesar salad and my husband had seafood linguine. We had a glass of the Lady Warwick Unwooded Chardonnay, and it was delicious! The fresh food, view and atmosphere make this a must visit!
We ate in this restaurant at least three times while we were at Camps Bay and can certainly recommend the prawn and asparagus risotto (one of their signature dishes). My husband enjoyed the fresh lobster and prawns. I would also like to say the young lady on the front desk was excellent and couldn't do enough for us. Well done to you.

Italian flavor and style in Camps Bay. Cool, resplendent with a splash of Kove Collection abundance. This is Zenzero. Zenzero wraps you shamelessly in 'La Dolce Vita' whilst gently caressing you with a cool Atlantic breeze. Italian cuisine is all about fresh ingredients and homemade, full flavour. At Zenzero we pride ourselves on just this: the freshest ingredients with zesty, spicy and feisty touches. Alongside Zenzero's signature dishes and spirited vibe, an abundant selection of seafood, pastas and homemade desserts will surprise and delight any palate throughout the year.
1. Field of the Invention The present invention relates to novel hydrazone compounds, to processes for the preparation thereof and to electrophotographic photoreceptors comprising them, especially to electrophotographic photoreceptors each of which comprises one of the novel hydrazone compounds as a charge-transporting material in a photosensitive layer on an electrically-conductive base. 2. Description of the Prior Art Inorganic photosensitive materials such as selenium, cadmium sulfide and zinc oxide have heretofore been used widely as photosensitive materials for electrophotographic photoreceptors. Photoreceptors using these photosensitive materials do not, however, fully provide the properties required of electrophotographic photoreceptors, such as sensitivity, light stability, moisture resistance and durability. For example, although photoreceptors based on a selenium material have excellent sensitivity, they have numerous drawbacks: they are prone to crystallize under heat or to deposit smears, so their characteristics tend to deteriorate; they are costly, as they are fabricated by vacuum deposition; and they cannot satisfactorily be formed into a belt-like configuration due to their lack of flexibility. Photoreceptors using cadmium sulfide involve problems of moisture resistance and durability, while those employing zinc oxide have a durability problem. With a view toward overcoming these drawbacks of photoreceptors which use such inorganic photosensitive materials, various photoreceptors using organic photosensitive materials have been investigated. Among photoreceptors developed to improve upon such drawbacks, function-separated photoreceptors, in which the charge-generating function and the charge-transporting function are assigned to different materials, have attracted interest.
Since function-separated photoreceptors permit the selection of a material having one of the above two functions and another material having the remaining function from wide ranges of materials, and their use in combination, it is possible to fabricate photoreceptors having both high sensitivity and high durability. Electrophotographic characteristics required of a charge-transporting material include: (1) A sufficiently high ability to receive charges generated by an associated charge-generating material. (2) An ability to rapidly transport the charges thus generated. (3) An ability to fully transport charges even in a low electric field, so that residual charges do not remain. In addition, the charge-transporting material is also required to have high durability, so that it remains stable to the light, heat and the like to which it is repeatedly exposed as a photoreceptor during the repeating steps of charging, exposure, development and transfer upon copying, and can thus provide reproduced pictures having high fidelity to the original and good reproducibility. A variety of compounds have been proposed as charge-transporting materials. For example, poly-N-vinylcarbazole has been known as a photoconducting material for many years. Photoreceptors using this compound as a charge-transporting material have been used commercially. However, this material itself has poor flexibility, is brittle and therefore tends to develop cracks. Accordingly, it has inferior durability with respect to repeated use. When it is used in combination with a binder to improve its flexibility, another problem arises: the electrophotographic characteristics deteriorate. On the other hand, low molecular weight compounds generally are not film-formers. Therefore, they are generally mixed with a binder at desired ratios to form photosensitive layers. Many charge-transporting materials based on low molecular weight compounds have been proposed.
For example, hydrazone compounds, including those disclosed by way of example in Japanese Patent Laid-Open Nos. 46761/1980, 52064/1980, 58156/1982 and 58157/1982, have high sensitivity as charge-transporting materials. However, they suffer from decomposition by the ozone given off upon corona charging, and from instability to light and heat. Although they have excellent initial performance, low-contrast or heavily fogged pictures are obtained after repeated use because of a reduction in charge-holding ability or an accumulated residual potential. Many other charge-transporting materials have also been proposed. However, there is no charge-transporting material which fully satisfies the performance required of an electrophotographic photoreceptor in actual use. There is hence an outstanding demand for the development of still better photoreceptors.
The place of odds ratios in the study of place, race and differential occupational opportunities. During the last few decades the study of racial differences in occupational and economic attainment has progressed rapidly, both in terms of theory and research. The ecological study of the racial and ethnic organization of local labor markets has not. The use of odds ratios and occupation-specific categories may provide a fruitful avenue for future research in the study of inequality across space and time.
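The odds ratio the abstract proposes can be made concrete with a 2x2 table of group membership by occupational category. The sketch below uses invented counts for a hypothetical local labor market; neither the function nor the numbers come from the study itself.

```python
# Odds ratio for a 2x2 table of group membership by occupational category.
# All counts are invented for illustration only.

def odds_ratio(a, b, c, d):
    """Return (a/b) / (c/d) = (a*d) / (b*c) for the table:
    a = group 1, in occupation;  b = group 1, not in occupation;
    c = group 2, in occupation;  d = group 2, not in occupation."""
    return (a * d) / (b * c)

# Hypothetical market: 120 of 1,000 group-1 workers and
# 60 of 1,000 group-2 workers hold managerial jobs.
ratio = odds_ratio(120, 880, 60, 940)
print(round(ratio, 3))  # → 2.136
```

An odds ratio of 1 would indicate no difference between groups; values above 1 indicate that group 1 has greater odds of holding the occupation, which is what makes the measure useful for comparing labor markets across place and time.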
Retinal vein occlusion: evaluation of "classic" and "emerging" risk factors and treatment. Retinal vein occlusion (RVO) is the second most common retinal vascular disease and an important cause of blindness and visual morbidity. Systemic risk factors are commonly associated with RVO, while the role of thrombophilic and coagulation disorders remains unclear. Our aims were to evaluate "classic" and "emerging" risk factors and to establish a good treatment for RVO. Fifty patients, 31 males and 19 females, with RVO were selected for our study. RVO patients were divided into two groups: those with central retinal vein occlusion (CRVO) and those with branch retinal vein occlusion (BRVO). All patients underwent an anamnestic investigation and were tested for thrombophilia, coagulation disorders and hyperlipidemia. Treatment and prophylaxis were evaluated. We have called "classic" the systemic risk factors associated with RVO and "emerging" those haemostasis-related risk factors not clearly associated with RVO. RVO occurs more commonly in patients aged over 50. "Emerging" risk factors were more frequent in CRVO, "classic" in BRVO. Hyperhomocysteinemia is the most common "emerging" risk factor related to RVO; 71.4% of tested patients had hypercholesterolemia. Treatment with LMWH would appear to be safe and effective, but the small number of patients considered does not allow a definitive evaluation of its efficacy. Although our study has shown the correlation between RVO and the "emerging" risk factors, more studies are necessary to better understand the real role of thrombophilic and coagulation disorders in this disease and to determine a specific protocol for the treatment and prophylaxis of RVO.
development:
  adapter: sqlite3
  database: db/development.sqlite3
  pool: 5
  timeout: 5000

test:
  adapter: sqlite3
  database: db/test.sqlite3
  pool: 5
  timeout: 5000
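This looks like a Rails `config/database.yml` fragment. As a sketch of what these settings mean at the driver level (not the Rails loader itself), Python's standard sqlite3 module can open a database with an equivalent busy timeout; note that Rails expresses `timeout` in milliseconds while `sqlite3.connect` takes seconds. The in-memory path is a stand-in so the sketch runs without the project's database file.

```python
import sqlite3

# Settings mirrored from the development block above.
config = {"adapter": "sqlite3",
          "database": "db/development.sqlite3",
          "pool": 5,
          "timeout": 5000}  # Rails: milliseconds

# sqlite3.connect expects the busy timeout in seconds.
timeout_seconds = config["timeout"] / 1000

# ":memory:" stands in for config["database"] so this runs anywhere.
conn = sqlite3.connect(":memory:", timeout=timeout_seconds)
conn.execute("CREATE TABLE schema_migrations (version TEXT)")
conn.close()
```

The `pool` setting has no sqlite3-module equivalent; in Rails it caps how many connections the application holds open concurrently.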
Inhibition of metastasis and growth of breast cancer by pH-sensitive poly(β-amino ester) nanoparticles co-delivering two siRNAs and paclitaxel. Breast cancer is one of the most vicious killers of women's health, and metastasis is the main culprit, leading to treatment failure by increasing the relapse rate. In this work, new complex nanoparticles loading two siRNAs (Snail siRNA (siSna) and Twist siRNA (siTwi)) and paclitaxel (PTX) were designed and constructed by self-assembly from two new amphiphilic polymers, polyethyleneimine-block-poly[(1,4-butanediol)-diacrylate-β-5-hydroxyamylamine] (PEI-PDHA) and polyethylene glycol-block-poly[(1,4-butanediol)-diacrylate-β-5-hydroxyamylamine] (PEG-PDHA). The experimental results showed that in 4T1 tumor-bearing mouse models, the PEI-PDHA/PEG-PDHA/PTX/siSna/siTwi complex nanoparticles (PPSTs) raised the accumulation and retention of both PTX and siRNA in the tumor after intravenous administration, resulting in strong, simultaneous inhibition of tumor growth and metastasis. It was found that co-delivery of siSna and siTwi had a more significant anti-metastasis effect than delivering a single siRNA, as a result of simultaneously inhibiting the motility of cancer cells and the degradation of the ECM. Therefore, PPSTs could be a promising co-delivery vector for effective therapy of metastatic breast cancer.
Predicting mortality rates: Comparison of an administrative predictive model (hospital standardized mortality ratio) with a physiological predictive model (Acute Physiology and Chronic Health Evaluation IV) - a cross-sectional study. Direct comparison of mortality rates has limited value because most deaths are due to the disease process. Predicting the risk of death accurately remains a challenge. A cross-sectional study compared the expected mortality rate as calculated with an administrative model to that of a physiological model, Acute Physiology and Chronic Health Evaluation IV. The combined cohort and stratified samples (<0.1, 0.1-0.5, or >0.5 predicted mortality) were considered. A total of 47,982 patients were scored from 1 July 2013 to 30 June 2014, and 46,061 records were included in the analysis. A moderate correlation was shown for the combined cohort (Pearson correlation index, 0.618; 95% confidence interval [CI], 0.380-0.779; R(2) = 0.38). There was a very good correlation for the less than 0.1 stratum (Pearson correlation index, 0.884; R(2) = 0.78; 95% CI, 0.79-0.937) and a moderate correlation for the 0.1 to 0.5 predicted mortality stratum (Pearson correlation index, 0.782; R(2) = 0.61; 95% CI, 0.623-0.879). There was no significant positive correlation for the greater than 0.5 predicted mortality stratum (Pearson correlation index, 0.087; R(2) = 0.007; 95% CI, -0.23 to 0.387). At less than 0.1 predicted mortality, the models are interchangeable; but in spite of a moderate correlation above 0.1, the hospital standardized mortality ratio cannot be used to predict mortality.
As you know from other episodes of Qigong Radio and other interviews, I always try to track down authoritative sources when I want to learn more about a subject and share it with you. Now that my teacher Bruce Frantzis is releasing two more DVD sets on Xingyi's Five Elements, I wanted to talk to someone about these practices. To the best of my knowledge, Isaac Kamins is the only person actively teaching the Energy Arts Xingyi curriculum who also trained with Bruce Frantzis in weekly classes for several years in the Bay Area in the '90s. Isaac has shared his deep knowledge of the Energy Arts system in past episodes of Qigong Radio and I think you'll find that he doesn't hold back on his training insights in this one. We discuss: The unique way Xingyi forges a strong mind-body connection. How the 5 Elements are manifested in the simple, repetitive forms of Xingyi's 5 Fists — and how this gives you a direct experience of Water, Earth, Metal, Fire, and Wood. What it was like to go through the entire 5 Element cycle with Bruce three different times, each time over a two-year period. How, even if you don't "major" in Xingyi, you can gain insight into Tai Chi, Bagua, or even qigong, with the direct, experiential quality of 5 Element practice. Be sure to visit http://dankleiman.com/?p=4959 for a free Xingyi practice download! When you set out to learn Taoist Energy Arts like Tai Chi, qigong, or meditation, you come across the lore of masters with supernatural abilities or techniques too deadly to teach openly. Or, more insidiously, we grasp after images of unattainable perfection, always slightly beyond reach, unless we just find the right technique or are initiated into a secret practice. And even if we've given up silly kung fu fantasies of flying through the bamboo reeds, on a subtle level we still chase ideas and dreams that only live in the mental realm.
The reality – and I'm not trying to disappoint you, but stay with me, because the reality is deeply rewarding, meaningful, and rich too – is that Taoist Energy Arts must be lived-through and practiced-through to be truly discovered. In this episode of Qigong Radio, Paul Cavel and I explore the disconnect you can sometimes feel between the day-to-day of your practice and the idea of where you ought to be in your practice – it's not because you must be a chosen one or special to get it, but rather because you can't directly perceive or experience energy when you operate solely in the mental realm. We'll give you some guidelines for how to recognize when you slip back into the mental realm of fantasy practice and talk about the struggles, as a student and as a teacher, of communicating your practice experience. Paul also shares specific neigong techniques that will keep you present and engaged, and help you cultivate a direct experience of your natural internal energy that can be applied to any energy practice. In his new book, the Harvard Medical School Guide to Tai Chi, Dr. Peter Wayne lays out the "8 Active Ingredients of Tai Chi" to help us understand the interface between traditional Tai Chi practice and the Western biomedical paradigm. As the Research Director of the Osher Center for Integrative Medicine, jointly based at Brigham and Women's Hospital and Harvard Medical School, and the founder of the Tree of Life Tai Chi Center, Peter blends more than three decades of teaching experience with ongoing inquiry into what makes Tai Chi an effective medical intervention. In this episode of Qigong Radio, we explore the development of the 8 Active Ingredients and how they help translate Tai Chi into a Western context. For the last couple of years, I've been teaching regular workshops in Farmington, Maine. When I went up again last week, I had a fascinating conversation with one of the students.
She was telling me how the core group had been coming along and that other people have come in and out of practicing with them. She said, "you know, it's not really for everyone." In this one casual statement, she was really saying that her motivation to practice now comes from inside of herself. Along with a small group of dedicated classmates, she gets everything she needs from the practice itself. That's a remarkable attitude. Every day, we get bombarded by messages that tell us to buy something to become something or fit in with other people. To be able to have a practice that lives inside of you, that's validated by performing it for yourself, and grows because you feed it with energy and time, is really incredible. In this episode of Qigong Radio, we're going to talk about how you discover your intrinsic motivation to practice. Paul explains what to focus on at each level and how your learning spiral takes you back through them over time. I found Paul's explanation of the relationships between the sets particularly useful. Specifically, understanding Energy Gates as direct preparation for Spiraling -- first you clear the downward flow of chi, then you strengthen the upward flow. Similarly, in Heaven and Earth you learn to expand the body (through opening/closing the joints and lengthening the soft tissue) before you really go into deep internal compression in Bend the Bow. As we discussed the meaning of "integration," Paul explained how neigong is the basis for Taoist meditation and where neigong shows up inside your meditation practice. In this episode of Qigong Radio, I answer some questions about different sensations readers have been experiencing when they practice. In the Dragon and Tiger Medical Qigong Instruction Manual, Bruce Frantzis lays out important guidelines for what kinds of "chi reactions" to expect. I want to show you how to apply these guidelines to your practice. 
Expect Chi Reactions

Dragon and Tiger is a powerful tool for awakening your body on physical, energetic, emotional, mental and spiritual levels. As you practice these movements and begin to move your body in ways that may be different for you, energy and fluids in your body are stirred up and begin to move more vigorously. At some point you may experience reactions that may seem either positive or negative to you. These are called chi reactions: the body's response to the effects of energy beginning to flow more freely through previously blocked places. These reactions may show up immediately, hours after practicing, or even a day or two later. Although many people will not begin to feel either negative or positive reactions without practicing a lot, others, particularly if they have done other forms of personal development work, may notice reactions almost immediately.

Positive Chi Reactions

Positive reactions can range from feeling less pain and having more energy to being more centered, relaxed and comfortable with your body. Some people report that they sleep much better; others report greater flexibility and balance. You may also notice that you are calmer and have fewer mood swings. You may experience an overall reduction in stress and tension. A transformative effect that most people consider positive is an increase in, and awareness of, sexual energy. This is entirely normal, as it is the most fundamental energy in your body, and practice of Dragon and Tiger will increase it. Finally, a transformative effect that confuses many people is what we call "good pain." Dragon and Tiger is designed to gradually work more and more deeply into your body, to release muscles and other tissues and joints that have been restricted or blocked. When an area of your body that has been frozen begins to loosen and realign, more energy moves through that area than you are used to.
But if the energy cannot flow freely or fully, you may experience temporary pain in the area. The Chinese medical theory of the body holds that pain in an area is a sign that the energy there is not flowing freely. You feel "bad" pain when an area is newly injured or hurt. In general, "good" pains tend to be temporary (lasting from a minute to at most a couple of days) and are usually dull, rather than sharp. As you practice you will learn to recognize such pains as signs of progress. Treat them with great care and keep within the 40 to 50 percent rule when you have pain, illness or injury (see p. 7). Back off practicing and be sure to consult your healthcare provider if you begin experiencing either significant pain or pain that does not go away quickly.

Negative Chi Reactions

As your body wakes up on various levels, it may do so the same way as when aroused from a deep slumber — cranky, sore and confused. You may experience some negative chi reactions. These can range from relatively mild but confusing aches, nausea, light-headedness, tingles, fatigue, unsteadiness, body temperature shifts or mood changes, to strong emotional releases and mood swings, to unusual dreams or shifts in perception. You may also experience physical discharges, such as stronger body odors or more frequent bowel movements. As blocked and stagnant energy moves or leaves the body, energetic memories associated with the problem, stored in either your energy channels or physical tissue, can awaken and cause you to relive the underlying and often repressed causes of the problem — especially if you have a severe condition. You might experience what doctors refer to as a "healing crisis." The term refers to that time during healing when a patient's body temporarily feels worse before it feels better. For example, when the body burns out infections, the patient often has a high fever. When the fever breaks, the symptoms of the disease pass.
The fever may cause the patient to feel terrible until the stored toxins or blocked energy are released. Afterwards the individual feels better as the illness passes. All these reactions are common to many natural forms of healing and are often a sign that your body is cleansing itself. Many people have a healing crisis when they fast or switch to a cleansing or vegetarian diet. The practice of Dragon and Tiger may often trigger such effects; they are fairly normal reactions. What is important to remember is that these reactions are temporary and usually pass when your body begins to rebalance itself. If you begin to experience strong or uncomfortable sensations, immediately sit down, put your hands on your belly and gently breathe with your belly to ground and center yourself. Such sensations will usually pass within minutes. Then suspend or reduce your practice for a while. Start again by following the 20 percent or 40 percent rule and very gently explore your body's reactions to these practices. Remember that you are not alone in such experiences; almost everyone who practices will experience some of these reactions at some time. If the symptoms are intense, pull back your practice to 30 percent or 40 percent of what you consider your normal practice and consult with your teacher. Remember to drink plenty of water. Water helps accelerate the release of toxins. Taking some vitamin C also helps that process. Make sure you rest after practicing. Be sure to consult a healthcare professional immediately if you have any symptoms that might be a sign of a medical or psychological problem. Listen to the episode to find out how to apply these principles to rising energy, activating the lower tantien, and differentiating between nerve flow, blood flow, and normal physical movement. In this episode of Qigong Radio, Energy Arts Senior Instructor Eric Peters describes what it's like to work with the energy of the spine, using Bend the Bow Spinal Qigong.
Bend the Bow is an advanced qigong set that requires precise alignment and refined sensitivity, but it gives you access to a much deeper level of internal connection and coordination than standard ways of moving your body through space. I'm posting this episode on my annual summer retreat/vacation/recharge, where I assess my practice and teaching from the past year and plan courses for the coming year. This year, in the midst of big changes at Brookline Tai Chi, I've been wondering a lot about the way qigong practice informs your encounters with change in other areas of your life. Of course, I always like to think that there's a strong connection, but this year everyone at Brookline Tai Chi is truly testing whether the art of smooth change in the classroom manifests itself in real life as well. What do we mean when we say we "put our mind inside our body" when we meditate, do qigong, or Tai Chi? Dr. Cathy Kerr helps us understand this question from the perspective of modern neuroscience. In addition to being a Tai Chi practitioner, Cathy is the Director of Translational Neuroscience at Contemplative Studies Initiative and an Assistant Professor in the Department of Family Medicine at Brown University. Drawing on a growing body of research from mindfulness meditation, her own work on sensory processes, and ancient texts, Cathy explains these Eastern practices develop your Western brain in areas that span physical health as well as mental and emotional well-being. The Spiraling Energy Body Qigong set is one of the toughest in the Energy Arts system, so I asked Energy Arts Instructor Isaac Kamins to tell us about his experiences with this practice. 
In this episode of Qigong Radio, Isaac explains how as a teenager he came to appreciate the counter-intuitive approach the internal martial arts take to fighting, and especially how developing the energetic sensitivity cultivated by Spiraling has influenced how he interacts with other people and the world around him, far beyond the martial arts. If you've learned many different meditation, qigong, breathing, and movement practices over the years, you may be faced with the problem of trying to decide what to practice each day. In this episode of Qigong Radio, I'll show you a framework for thinking about your different modes of practice and show you the single most important goal of an energy arts practice, regardless of the mode or specific techniques. When learning qigong or Tai Chi, people are often either more tuned in to energy or to their physical bodies. I asked my first qigong and Tai Chi teacher, Energy Arts Senior Instructor and founder of Brookline Tai Chi, Bill Ryan, to explain why this is. More importantly, Bill teaches you how to navigate the experiences of developing your internal energy, regardless of how you first become aware of them. It turns out, you're probably already more tuned in than you think. If you are very sensitive to energy, Bill also has some great advice about how to become more grounded and balanced. One thing that's been on my mind since Bruce was here teaching a Push Hands Intensive is what it means to follow instructions in your training. He basically laid out a year-or-longer curriculum during a week. What do you come away with? What should you practice? How do you reconcile "downloading" a whole curriculum at once vs. really unpacking it and learning to use it over the next year? When you spend a month training Tai Chi 10-12 hours a day, what happens when you go home? What does it feel like several months later when your life has returned to normal? Is it a let down? Do you need to be inspired again to continue your training?
What have you continued to discover about your practice? How has the intensive training infused your teaching? I sat down with several Energy Arts Tai Chi instructors to discuss these issues for Episode 3 of Qigong Radio. Here's what they had to say. Since we were all together for a Push Hands training, designed to be a follow-up to the Short Form training, we also talked about the ways we were seeing connections between the two topics.

I sat down with Energy Arts Senior Instructor Paul Cavel to discuss the developmental process of the soft tissue of the body - muscles, fascia, ligaments - in the internal arts. Paul talks about the different stages that you will go through and how to recognize when you are ready to move on to the next one. Learn more from Paul at CircleWalking.com or RelaxationMeditation.co.uk, and you can always visit me at DanKleiman.com.

Why is the Marriage of Heaven and Earth - and its primary neigong technique of "opening and closing" - considered the bridge between beginner-level and advanced practice? I asked Energy Arts Senior Instructor Eric Peters, who first learned this exercise 30 years ago, to explain. We are joined by several other Energy Arts Instructors for the conversation.
The opinions expressed by columnists are their own and do not necessarily represent the views of Townhall.com. There is a biblical principle that everything produces after its own kind. Cats produce cats. Dogs produce dogs. Apple trees produce apples. Orange trees produce oranges. Love produces love. Hate produces hate. What does identity politics produce? It produces bigotry. It produces division. It produces a victim mentality. We’re seeing it today in the Democratic party. Joe Biden is now a racist. Nancy Pelosi is a racist. The party is reaping what it sowed. Everything is seen through the lens of identity politics. There can be no disagreement about issues. Instead, the disagreement is based on race or ethnicity or skin color or social status. If I’m a Hispanic woman and you differ with my position, it has nothing to do with my position. It has nothing to do with my ideology. No, you differ with me because you’re anti-woman or anti-Hispanic (or, probably, both!). If I’m a Muslim man and you take issue with my views, it has nothing to do with differences in philosophy. It’s because you are an anti-Islamic bigot, plain and simple. Where does all this lead? It leads to Kamala Harris portraying Joe Biden as an anti-busing racist. A friend of segregationists. More like a Republican than a Democrat. This is what happens to the trusted running-mate of America’s first African American president. This is the fruit of identity politics. It leads to Rep. Alexandria Ocasio-Cortez implying that Nancy Pelosi is racist for targeting the so-called “squad,” referring to AOC, Ilhan Omar (MN), Rashida Tlaib (MI) and Ayanna Pressley (MA). Forget the fact that AOC is espousing hopelessly extreme, completely unworkable socialist positions. No. Pelosi is pushing back against AOC because she doesn’t like non-whites. Forget the fact that Omar and Tlaib are making openly anti-Semitic statements. No. Pelosi has a problem with women of color. 
In AOC’s own words, “When these comments first started, I kind of thought that she was keeping the progressive flank at more of an arm's distance in order to protect more moderate members, which I understood. But the persistent singling out . . . it got to a point where it was just outright disrespectful . . . the explicit singling out of newly elected women of color.” This is what identity politics has created. It must ultimately eat its own. This is what happens when you build a movement based on victimology. On division. On class warfare. This is what happens when you focus on race more than issues. Skin becomes more important than substance and outward appearance than material differences. As expressed by Rep. Dan Crenshaw, “Madam Speaker, welcome to the true nature of identity politics — where you’re accused of being racist for no reason at all, and where intellectually lazy insults are used against you as a way to replace substantive debate of your argument or idea.” The sword always cuts both ways. And so, when you produce a mentality of self-entitlement, in the end, it’s all about me. “I deserve more! Give me more!” You cannot cater to self-entitlement and then expect any kind of self-sacrifice. In the same way, when you build a movement based on identity politics, which by nature divides, you cannot expect to cultivate unity. You cannot expect to bring people together across their ideological fissures. You can only expect more and more fragmentation. And if you ride the wave of being a victim, you will soon be attacked as the victimizer. The formerly oppressed quite quickly become oppressors. A Boston Herald headline warned, “Progressive firebrand Ayanna Pressley needs to think about her priorities.” Unfortunately, the story was written by Hillary Chabot, who is a white woman herself. Perhaps she too is bigoted? Condescending? Perhaps she writes from the viewpoint of the superior class? This is the way identity politics thinks. 
This is the trap it creates for itself, the pit that it digs before its own feet. Ironically, presidential candidate Pete Buttigieg recently condemned President Trump’s alleged embrace of “peak white identity politics,” claiming that such politics are “designed to drive apart people with common interests.” Yet he made these comments at “a fundraiser in Las Vegas for the LGBTQ advocacy group Human Rights Campaign,” one of the classic examples of an organization based on identity politics. The irony is beyond rich. In short, if you conquer by dividing you will ultimately be conquered by division. The fragmentation has only begun.
When faced with the problem of sticky summer skin, we so often turn to a thick coat of absorbent powder or extreme mattifying makeup to undo the damage that heat and humidity wreak. But lately, we’ve found ourselves drawn to a more delicate fix in the form of blotting papers, which, when used correctly, turn into the queen of stealth skin saviors. They’re elegant in their simplicity—whisper-thin sheets drawn out, one by one, to take in oil before it consumes your face. During Japan’s Edo period, the aburatorigami, as they were called, were beloved by kabuki actors and geisha because they left makeup untouched. As that sort of light-handed gesture, blotting papers are best used preventatively—to soak up moisture before it’s truly visible—and modern iterations stay true to that subtle spirit, looking entirely chic when pulled from your clutch. There’s Morihata’s shining black leaflets, kept in a slick flip-book and steeped in purifying charcoal from Kyotan bamboo. Tatcha’s cult abaca leaf sheets come packed in a sleek palm-sized tube, a smattering of gold flakes pressed into each one, and so do the peach-colored, petal-scented papiers from Serge Lutens. Those from Mai Couture are infused with essential oils and other ingredients (calming lavender, brightening Vitamin C) that lend extra skin care benefits and a pretty pastel tint, while Milk Makeup’s Roll + Blot papers, made with unbleached hemp fibers, are a no-fuss alternative to blanching powder—the cool-girl secret to looking always dewy, never drenched.
Q: How to know if a graphics object was clicked

I have encountered a problem. I have a mouse event:

@Override
public void mouseClicked(MouseEvent e) {
}

And it works fine. But I have shapes:

Rectangle r = new Rectangle();
r.setSize(50, 50);
r.setLocation(200, 200);
g2d.draw(r);

And when the mouse event is triggered I need something to check whether the click was on my rectangle, not just anywhere on the screen:

if (e.ClickedOnRectangle) {
    // Do stuff
}

Something like that. Any solutions? How do I check if my mouse was clicked on an AWT (graphics) object?

A: It depends. If the Shapes are contained within the same container in which the mouse events are occurring, then it should be a simple case of using the Shape#contains method:

@Override
public void mouseClicked(MouseEvent e) {
    if (rect.contains(e.getPoint())) {
        // Was clicked...
    }
}

Take a closer look at the Shape JavaDocs for more details.
New year (Kabul)

I am a new arrival and I am shocked by the open-prison-like social life here, with its strict security. I think only social life can overcome this. Are there a few guys who would like to go out and meet on the 31st? The restaurants I am allowed to go to are: Bella Italia, Bocacio, Raven Rae Villa.
Q: meta_value timestamp older than now

In my wp_posts table I want to select the rows where post_type == event. I also want to make a connection with wp_postmeta for each matching row. In my wp_postmeta table I want to do a where like this: where post_id == (post id from wp_posts) and the meta_value for meta_key == event_date_end_timestamp is older than now. But how can I form my query to do this?

A:

<?php
$args = array(
    'posts_per_page' => -1,
    'post_type'      => 'event',
    'meta_query'     => array(
        array(
            'key'     => 'event_date_end_timestamp',
            'value'   => time(),
            'type'    => 'numeric',
            'compare' => '<'
        )
    )
);
$programs = new WP_Query($args);
?>
<?php while ($programs->have_posts()): $programs->the_post(); ?>
    <h1><?php the_title(); ?></h1>
<?php endwhile; ?>
Q: Understanding the results of Promise vs Observables

Promise implementation:

getDataPromise(): any {
    let promise = new Promise((resolve, reject) => {
        resolve([
            { brand: 'iPhone', model: 'Xmax', price: '$1000' },
            { brand: 'Samsung', model: 'S10', price: '$850' }
        ]);
    });
    return promise;
}

// promise call
getDataPromise().then((data) => {
    console.log("Result: ", data);
});

The result is an array:

Result: [ { brand: 'iPhone', model: 'Xmax', price: '$1000' }, { brand: 'Samsung', model: 'S10', price: '$850' } ]

Observable implementation:

import { from } from 'rxjs';

getDataObservable(): any {
    return from([
        { brand: 'iPhone', model: 'Xmax', price: '$1000' },
        { brand: 'Samsung', model: 'S10', price: '$850' }
    ]);
}

// observable call
getDataObservable().subscribe((data) => {
    console.log("Result: ", data);
});

The result is 2 objects in sequence:

Result: { brand: 'iPhone', model: 'Xmax', price: '$1000' }
Result: { brand: 'Samsung', model: 'S10', price: '$850' }

I was trying to understand the difference and went through online materials, but I still cannot answer the two questions below:

1. Why does the result differ?
2. How do I get the result from the observable as an array (similar to the promise)?

A: A promise ends: it resolves (then) or rejects (catch), and that's it. An observable streams values until it ends (lots of times, it never ends; you just "lose interest" and unsubscribe from the stream of values). I like the keypress example: if you use a promise to resolve/reject based on a keypress event, you'll only ever listen to "the next" keypress, and then you'll have to create a new promise to listen for the next one, and so on. With observables, you only ever need to create one and it will just continually stream values to you until you are no longer interested (unsubscribe).
Promise:

const subscribeToKeyPress = () => {
    let resolve;
    document.addEventListener('keydown', function listener(e) {
        resolve(e);
        // unsubscribe from the event listener, otherwise you've got a memory leak
        document.removeEventListener('keydown', listener);
    });
    return new Promise(r => resolve = r);
}

subscribeToKeyPress().then(e => {
    // done, never to receive another result
});

##### Observable:

const subscribeToKeyPress$ = () => fromEvent(document, 'keypress');

const subscription = subscribeToKeyPress$().subscribe(e => {
    // called continually on each keypress until you unsubscribe
});

// your callback function won't be called anymore; if you don't unsubscribe
// when you're done, you've got a memory leak
subscription.unsubscribe();

##### Callback Function - the simplest "observable":

const subscribeToKeyPress = callback => {
    // this handler will get called over and over until you unsubscribe
    const fn = e => callback(e);
    document.addEventListener('keypress', fn);
    return () => document.removeEventListener('keypress', fn);
}

const unsubscribe = subscribeToKeyPress(e => {
    // called continually on each keypress until you unsubscribe
});

// your callback function won't be called anymore; if you don't call unsubscribe
// when you're done, you've got a memory leak
unsubscribe();

How to get the result from the observable as an array (similar to the promise)? Check out https://www.learnrxjs.io/operators/utility/topromise.html

Also relevant about toPromise: https://github.com/ReactiveX/rxjs/issues/2536

If your observable doesn't "complete" (like a keypress listener), toPromise will never work (unless you pipe(take(1)) first, but that's a lot for a newbie, no offence; coming from a promise world, observables are really hard to grok, from my personal experience, until they aren't anymore).

A: Both promises and observables are asynchronous.
Either way, you will need to handle them accordingly. However, the from operator emits your array as a sequence of values, hence the results that you see. As stated in the documentation, the from operator:

Turn an array, promise, or iterable into an observable.

This is why getDataObservable prints the array of objects as separate emissions. If you wish to emit them as a single value, use the of operator instead:

Emit variable amount of values in a sequence and then emits a complete notification.

import { of } from 'rxjs';

const getDataObservable = () => {
    return of([
        { brand: 'iPhone', model: 'Xmax', price: '$1000' },
        { brand: 'Samsung', model: 'S10', price: '$850' }
    ]);
}

getDataObservable().subscribe((data) => {
    console.log("Result: ", data);
});

Here is a demo.
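For the second question, rxjs also provides a `toArray` operator, which buffers every value the source emits and delivers them as a single array once the source completes. Below is a minimal plain-JavaScript sketch of that buffering idea; note that `makeObservable` and `toArray` here are illustrative helper names written for this sketch, not the actual rxjs APIs:

```javascript
// Sketch of what rxjs's toArray() does: buffer each emitted value
// and deliver them all at once when the source completes.
function makeObservable(values) {
  return {
    subscribe({ next, complete }) {
      values.forEach(v => next(v)); // emit each value in sequence
      complete();                   // then signal completion
    }
  };
}

function toArray(source) {
  return {
    subscribe({ next, complete }) {
      const buffer = [];
      source.subscribe({
        next: v => buffer.push(v), // collect instead of forwarding
        complete: () => {          // emit the whole buffer once, then complete
          next(buffer);
          complete();
        }
      });
    }
  };
}

const phones$ = makeObservable([
  { brand: 'iPhone', model: 'Xmax', price: '$1000' },
  { brand: 'Samsung', model: 'S10', price: '$850' }
]);

let result;
toArray(phones$).subscribe({
  next: data => { result = data; console.log('Result: ', data); },
  complete: () => {}
});
```

With real rxjs the equivalent would be `from(data).pipe(toArray()).subscribe(...)`. Keep in mind that `toArray` only emits once the source completes, so it will never fire for a never-ending stream like the keypress listener above.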
Some people felt it was too gory for children and inappropriate with an elementary school right down the street. Barrett said they never had any plans to take it down, but she said she had to consider her kids' safety. As for whether they'll ever decorate for Halloween again, she said probably not. "I don't think we even want to do anything. We don't even want to put up a pumpkin or any of what some people would think are typical Halloween decorations. That's not the way that we want to express it," she said. "I think, by us decorating what maybe some people think that we should would be almost caving in, and this is not a caving in thing. This is a safety thing for our family."
Trump Tariffs Would Impact American Firms' Results

06/13/2019

A number of major American consumer companies warned that the import tariffs imposed on Chinese products by the Trump administration of the United States would weigh down their results. These statements came after US President Donald Trump increased import tariffs on Chinese goods worth $200 billion from 10 per cent to 25 per cent on May 10. In addition, Trump has also threatened to impose fresh tariffs on $300 billion worth of Chinese goods, which would include virtually everything exported to the US from China.

Reacting to the situation, Hubert Joly, CEO of Best Buy Co Inc, told the media: "the impact of tariffs at 25% (proposed to be enacted) will result in price increases and will be felt by US consumers."

Home Depot Inc said that its annual cost of goods sold would increase by $1 billion if the current round of tariffs remains in place. This would be in addition to a $1 billion hit to the company from import tariffs imposed in 2018. However, Edward Decker, executive vice president of merchandising, said that because the products that would be covered by the new US tariffs on China comprise only about 1 per cent of the company's total sales, the company would be able to manage the impact of the new tariffs.

"We do anticipate a more meaningful impact on both our private and national brands if the potential fourth tranche of tariffs does go into effect," said Jill Soltau, the CEO of J.C. Penney Co Inc.

The department store Kohl's Corp said it sources just 20 per cent of its products from China; the new tariffs would hit its imports of home goods and accessories from China but currently exclude apparel and footwear.
The world's largest retailer, Walmart, has said it could face some challenges: because it is known for its low prices, it would have to manage the effect of the increased costs of its Chinese imports between its customers and the company itself. "Higher tariffs will lead to higher prices for customers," the company's CFO Brett Biggs told the media in an interview last week.

"The increase of the third tranche from 10% to 25% on May 10 does have some impact, particularly on our furniture business. However, the team anticipates that this can be mitigated," Jeffrey Gennette, the CEO of Macy's Inc, said to investors on a conference call. "It's too early to comment on what we think that's going to mean in terms of potential price increases and what categories are going to be more affected than others," he added.

The CFO of Ralph Lauren Corp said that the company has faced a limited impact from the tariffs implemented to date but is also preparing for various eventualities. The company has sped up the diversification of its global supply chain so that it can avoid the long-term impact of the tariffs.

Footwear maker Crocs Inc expects a hit of about $5 million in 2019 if the threatened 25 per cent tariff is imposed, but expects to reduce the share of products it imports from China from the current 30 per cent to 20 per cent in 2020. "Our current sourcing mix reflects our need to balance ramping up incremental supply to meet the growing demand for our product and continuing our multi-year effort to reduce our sourcing from China," the company said in a statement.

The food products company Del Monte also raised concerns about increases in transportation and labor costs, in addition to the tariffs, that are troubling the company. "It's an inflationary environment. A lot of that's going to have to be passed on.
The consumer is going to have to pay more for a lot of critical goods,” said the company’s CEO Greg Longstreet at a conference last week.
Modern computer applications may be stored on computers in geographically remote locations, or on multiple computers around the world, in order to provide end users with convenient access to the applications. In addition, such applications may be heavily relied upon by end users such that an interruption in access to such applications may cause considerable social and financial frustrations. Typical computer applications are made unavailable to end users for a period of time when a new version of the application is deployed. Further, changes made to typical computer applications may only be viewed once the application has been redeployed after the changes have been made.
globalEDGE Blog: Keeping an Open Mind will Open New Markets

Approaching business challenges requires an open mind and a willingness to take a fresh look. This couldn't apply better than to opening new export markets. An interview with the U.S. Ambassador to the Kingdom of Saudi Arabia shed light on the opportunities opening in the Middle East. The Ambassador provided an insightful view of Saudi Arabia, which presents great opportunities for businesspeople to view the country as a place to export. It all began in 1945 with a renewed relationship between the United States and the King of Saudi Arabia, King Abdul Aziz. Since then, the country has experienced significant population growth as well as improved social and political climates. In fact, Saudi Arabia is in the process of producing 250,000 vocational school graduates. Education is paramount to the country's development and will fundamentally change it from a purely oil-driven economy to one that is more diverse moving ahead into the future. The next question on your mind: but isn't it hard to get started in the Middle East? Actually, Saudi Arabia is a capital-rich environment where companies feel comfortable both manufacturing and selling products that are highly sensitive to intellectual property (IP) rights protection. Under the current administration, IP protection has improved because of the educational initiatives. Instead of stealing technology, the country is able to create its own. If you are considering new countries in which to export your products, keep an open mind. Saudi Arabia is an example of a great opportunity for the right company to come in and provide solutions. China, India and Russia have recently earned over $19 billion in business that has traditionally gone to American firms. This is the time to regain confidence in the partnership that originated over 50 years ago.
We may be in the off-season, but the news just keeps on coming: the Game of Thrones season seven box sets will arrive quite soon, first for digital download on September 25, and then for Blu-ray and DVD on December 12. Read below the cut for the details!

IGN just revealed the date, along with all the information about the bonus features. As usual, the digital and Blu-ray versions bring some exclusives not available on DVD. The download will feature the bonus "Creating the North and Beyond," a behind-the-scenes look at the creation of the Frozen Lake in "Beyond the Wall." The Blu-ray will include in-episode guides and seven "Histories and Lore" animated features, including "The Dragonpit," "Highgarden," "Prophecies of the Known World," and "The Rains of Castamere," among others, narrated by Nikolaj Coster-Waldau, Aidan Gillen, and others.

Both the Blu-ray and DVD versions will feature the following bonuses:

Conquest & Rebellion: An Animated History of the Seven Kingdoms — Cast members Pilou Asbæk (Euron Greyjoy), Nikolaj Coster-Waldau (Jaime Lannister), Aidan Gillen (Littlefinger), Conleth Hill (Varys), Harry Lloyd (Viserys Targaryen), and Sophie Turner (Sansa Stark) narrate an animated series focusing on Aegon Targaryen's attempts to conquer the Seven Kingdoms, written by Dave Hill.

From Imagination to Reality: Inside the Art Department — This 2-part featurette details production designer Deborah Riley and her department's work, dissecting the process behind the creation of this season's incredible new sets, including Dragonstone, Casterly Rock, Highgarden, the Dragonpit, and more.

Fire & Steel: Creating the Invasion of Westeros — This BTS feature dives into the season's biggest moments via interviews with the cast and crew.

Audio Commentaries — Each episode will feature commentaries with various cast and crew members, such as David Benioff, D.B. Weiss, Jacob Anderson, Gwendoline Christie, Liam Cunningham, Kit Harington and Lena Headey.

Also, IGN offers a preview of Conquest & Rebellion: An Animated History of the Seven Kingdoms, focusing on its first chapter: Valyria's Last Scion: House Targaryen. Just like in the case of season six, the Blu-rays are coming early this year. In just a few months, we will have season seven in our hands, as well as all those juicy extras!
The Christian Science Monitor (CSM) is an international news organization that delivers global coverage via its website, weekly magazine, daily news briefing, email newsletters, Amazon Kindle subscription, and mobile site. It was started in 1908 by Mary Baker Eddy, the founder of the Church of Christ, Scientist. As of 2011, the print circulation was 75,052.[1] The Monitor is a newspaper that covers international and United States current events. The paper includes a daily religious feature on "The Home Forum" page, but states the publication is not a platform for evangelizing.[2] In 2008 the Monitor discontinued its daily print version to focus on web-based publishing, replacing its daily print edition with a weekly news magazine with an international focus.[3] Since late 2013, the Editor-in-chief has been Marshall Ingwerson.[4] Despite its name, the Monitor does not claim to be a religious-themed paper, and says it does not promote the doctrine of its patron church. However, at its founder Eddy's request, a daily religious article has appeared in every issue of the Monitor. Eddy also required the inclusion of "Christian Science" in the paper's name, over initial opposition by some of her advisors who thought the religious reference might repel a secular audience.[2] The Monitor's inception was, in part, a response by Eddy to the journalism of her day, which relentlessly covered the sensations and scandals surrounding her new religion with varying degrees of accuracy. In addition, Joseph Pulitzer's New York World was consistently critical of Eddy, and this, along with a derogatory article in McClure's, furthered Eddy's decision to found her own media outlet.[2] Eddy also saw a vital need to counteract the fear often spread by media reporting: Looking over the newspapers of the day, one naturally reflects that it is dangerous to live, so loaded with disease seems the very air.
These descriptions carry fears to many minds, to be depicted in some future time upon the body. A periodical of our own will counteract to some extent this public nuisance; for through our paper, at the price at which we shall issue it, we shall be able to reach many homes with healing, purifying thought.[5] Eddy declared that the Monitor's mission should be "to injure no man, but to bless all mankind."[2] The Monitor was for several decades published in broadsheet form but in 1975 switched to tabloid format. The paper's overall circulation has ranged widely, from a peak of over 223,000 in 1970, to just under 56,000 shortly before the suspension of the daily print edition in 2009.[6] Partially in response to declining circulation and the struggle to earn a profit, the church's directors and the manager of the Christian Science Publishing Society were purportedly forced to plan cutbacks and closures (later denied), which led in 1989 to the mass protest resignations by its chief editor Kay Fanning (an ASNE president and former editor of the Anchorage Daily News), managing editor David Anable, associate editor David Winder, and several other newsroom staff. These developments also presaged administrative moves to scale back the print newspaper in favor of expansions into radio, a magazine, shortwave broadcasting, and television. Expenses, however, rapidly outpaced revenues, contradicting predictions by church directors. On the brink of bankruptcy, the board was forced to close the broadcast programs in 1992. 
The paper has been known for avoiding sensationalism, producing a "distinctive brand of nonhysterical journalism".[7][8] In 1997, the Washington Report on Middle East Affairs, a publication critical of United States policy in the Middle East, praised the Monitor for its objective and informative coverage of Islam and the Middle East.[9] In 2016, Christian Science Monitor Washington bureau chief Dave Cook irrevocably barred the entire Daily Caller from his newsmaker breakfasts because columnist Evan Gahr mocked him as acting like Congressman Sandy Levin's press secretary by trying to impede his questioning when the veteran Democrat was a guest.[10] Gahr asked Levin to define the difference between a Democrat and socialist since other prominent Democrats had recently refused to answer the same question.[11] In April 2003, after being provided documents by a former Iraqi General, several news organizations (including the Monitor) reported that George Galloway was accused by a U.S. Senate Committee led by Norm Coleman of personally profiting from corruption within the United Nations Oil-for-Food program. The Monitor investigated the matter, concluding that the documents were "almost certainly forgeries," and, in response to a lawsuit by Galloway, apologized in court.[19] In 2006, Jill Carroll, a freelance reporter for the Monitor, was kidnapped in Baghdad, and released safely after 82 days. Although Carroll was initially a freelancer, the paper worked tirelessly for her release, even hiring her as a staff writer shortly after her abduction to ensure that she had financial benefits, according to Bergenheim.[20] Beginning in August 2006, the Monitor published an account[21] of Carroll's kidnapping and subsequent release, with first-person reporting from Carroll and others involved. The print edition continued to struggle for readership, and, in 2004, faced a renewed mandate from the church to earn a profit. 
Subsequently, the Monitor began relying more on the Internet as an integral part of its business model. The Monitor was one of the first newspapers to put its text online in 1996, and was also one of the first to launch a PDF edition in 2001. It was also an early pioneer of RSS feeds.[22] In October 2008, citing losses of $US18.9 million per year versus $US12.5 million in annual revenue, the Monitor announced that it would cease printing daily and instead print weekly editions starting in April 2009.[24] The last daily print edition was published on March 27, 2009. The Monitor continues to offer daily news online on its website and via email.[25] Yemma stated that the move to go digital was made because they recognized that the Monitor's reach would be greater online than in print. He has also stated that in the next five years the Monitor would work to increase their online readership fivefold, from 5 million page-views to 25 million.[26] As the paper has turned its attention to online storytelling, it has been breaking ground with multimedia projects like "Little Bill Clinton", a narrative serial following a year in the life of a young refugee. The weekly magazine follows on from the Monitor's London edition, also a weekly, launched in 1960 and the weekly World Edition which replaced the London edition in 1974.[27] MonitoRadio was a radio service produced by the Church of Christ, Scientist between 1984 and 1997. It featured several one-hour news broadcasts a day, as well as top of the hour news bulletins. The service was widely heard on public radio stations throughout the United States. The Monitor later launched an international broadcast over shortwave radio, called the World Service of the Christian Science Monitor. Weekdays were news-led, but weekend schedules were exclusively dedicated to religious programming. 
That service ceased operations on June 28, 1997.[28] In 1986, the Monitor started producing a current affairs television series, The Christian Science Monitor Reports, which was distributed via syndication to television stations across the United States. In 1988, the Christian Science Monitor Reports won a Peabody Award[29] for a series of reports on Islamic fundamentalism. That same year, the program was canceled and the Monitor created a daily television program, World Monitor, anchored by former NBC correspondent John Hart, which was initially shown on the Discovery Channel. In 1991, World Monitor moved to the Monitor Channel, a 24-hour news and information channel.[28] The only religious programming on the channel was a five-minute Christian Science program early each morning.[30] In 1992, after eleven months on the air, the service was shut down amid huge financial losses.[31]
If you remember, Tom Hanks was busted last month in a photograph taking up two seats on the subway. It was the height of the anti-manspreading media frenzy, and Hanks unwittingly became the "face of manspreading". In a recent interview on the Late Late Show with James Corden, Hanks sets the story straight, defending his posture and explaining what was really going on in that train car.
Variable selection for accelerated lifetime models with synthesized estimation techniques. We develop variable selection approaches for accelerated failure time models, consisting of a group of algorithms based on a synthesis of two widely used techniques in the area of variable selection for survival analysis: the Buckley-James method and the Dantzig selector. Two algorithms are based on proposed modified Buckley-James estimating methods that are designed for high-dimensional censored data. Another two algorithms are based on a two-stage weighted Dantzig selector method where weights are obtained from the two proposed synthesis-based algorithms. The methods are easy to understand, and they perform estimation and variable selection simultaneously. Furthermore, they can deal with collinearity among the covariates. We conducted several simulation studies and one empirical analysis with a microarray dataset; these studies demonstrated satisfactory variable selection performance. In addition, the microarray data analysis shows the methods performing similarly to three other correlation-based greedy variable selection techniques in the literature: sure independence screening, tilted correlation screening (TCS), and PC-simple (partial correlation simple). This empirical study also found that the sure independence screening technique considerably improves the performance of most of the proposed methods.
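The Buckley-James component of these algorithms admits a compact illustration. The sketch below is not the authors' implementation; it is a minimal NumPy version of the classical low-dimensional Buckley-James iteration for right-censored responses (all function and variable names are my own), alternating between Kaplan-Meier-based imputation of the censored responses and least-squares refitting:

```python
import numpy as np

def buckley_james(X, y, delta, n_iter=30, tol=1e-6):
    """Classical Buckley-James estimator for the AFT model y = X @ beta + e
    with right-censored responses (delta = 1 observed, 0 censored)."""
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # start from naive least squares
    for _ in range(n_iter):
        e = y - X @ beta                             # current residuals
        order = np.argsort(e, kind="stable")
        e_s = e[order]
        d_s = delta[order].astype(float).copy()
        d_s[-1] = 1.0                                # convention: largest residual treated as observed
        at_risk = n - np.arange(n)
        S = np.cumprod(1.0 - d_s / at_risk)          # Kaplan-Meier survival of residuals
        jump = np.concatenate(([1.0], S[:-1])) - S   # KM mass at each ordered residual
        # E[e | e > e_i]: KM-weighted mean of the strictly larger residuals
        tail = np.cumsum((jump * e_s)[::-1])[::-1] - jump * e_s
        with np.errstate(divide="ignore", invalid="ignore"):
            cond = np.where(S > 0, tail / S, e_s)
        imputed = np.where(delta[order] == 1, e_s, cond)
        y_star = np.empty(n)
        y_star[order] = imputed
        y_star += X @ beta                           # back to the response scale
        beta_new = np.linalg.lstsq(X, y_star, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Synthetic right-censored data with known coefficients (hypothetical, for illustration)
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))
beta_true = np.array([2.0, -1.0])
latent = X @ beta_true + rng.normal(scale=0.5, size=n)
censor = X @ beta_true + rng.normal(loc=1.0, scale=1.0, size=n)
y = np.minimum(latent, censor)
delta = (latent <= censor).astype(int)
beta_hat = buckley_james(X, y, delta)
```

In the paper's algorithms this estimating step is modified for high-dimensional data and combined with a (weighted) Dantzig selector; the sketch shows only the low-dimensional backbone they build upon.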
Mechanisms to prevent caspase activation in rotenone-induced dopaminergic neurodegeneration: role of ATP depletion and procaspase-9 degradation. The evidence implicating a mode of cell death that either favors or argues against caspase-dependent apoptosis is available in studies that used experimental models of Parkinson's disease. We sought to investigate the mechanisms by which release of cytochrome c is not linked to caspase activation during rotenone-induced dopaminergic (DA) neurodegeneration. Unlike caspase activation in 6-hydroxydopamine-treated cells, both MN9D DA neuronal cells and primary cultures of mesencephalic neurons showed no obvious signs of caspase activation upon exposure to rotenone. We found that intracellular levels of ATP significantly decreased at the early phase of neurodegeneration (<~24 h) and therefore external addition of ATP to the lysates obtained at this stage reconstituted caspase-3 activity. At a later phase of cell death (>~24 h), both decreased levels of ATP and procaspase-9 contributed to the lack of caspase-3 activation. Under this condition, calpain and the proteasome system were responsible for the degradation of procaspase-9. Consequently, external addition of ATP and procaspase-9 to the lysates harvested at the later phase was required for activation of caspase-3. Similarly, caspase-3 activity was also reconstituted in the lysates harvested from cells co-treated with inhibitors of these proteases and incubated in the presence of external ATP. Taken together, our findings provided a sequential mechanism underlying how DA neurons may undergo caspase-independent cell death, even in the presence of cytoplasmic cytochrome c following inhibition of mitochondrial complex I.
Ema Druavesi Ema Druavesi is a former Fijian political organizer, who was General Secretary of the Soqosoqo ni Vakavulewa ni Taukei (SVT) at the time of the 2006 Fijian coup d'état. References Category:Year of birth missing (living people) Category:Living people Category:Fijian politicians Category:Soqosoqo ni Vakavulewa ni Taukei politicians Category:I-Taukei Fijian people
1. Technical Field The present invention relates to a component placing head and a component placing method that have a plurality of component holding members, capture an image of a component held by each component holding member, recognize a holding posture of the component, and place the component on a circuit board on the basis of a result of the recognition. 2. Background Art In recent years, markets have been increasing their demands for miniaturization, high performance, and reduction in cost of electronic equipment that contains electronic circuits formed by placement of electronic components as a plurality of components on circuit boards. In an electronic component placing apparatus having a head, as an example of a component placing head, the plurality of electronic components are placed by the head on the circuit boards held on a stage and electronic circuits are thereby manufactured. In such an electronic component placing apparatus, holding postures of the electronic components held by the head, placement positions of the electronic components on the circuit board, and the like, are recognized by use of image-pickup devices provided on the stage or on the head, or the like, and the electronic components are placed on the circuit board on the basis of a result of the recognition (see Japanese unexamined Patent Publication No. 9-307297, for example). In order to meet the demands from the markets, on the other hand, electronic component placing apparatus have been desired to cope with persistent miniaturization of the electronic components and the circuit boards, to perform placement of the electronic components on the circuit boards with high density and high accuracy, and to achieve a decrease in the time span required for the placement so as to fulfill efficient placement and a reduction in the manufacturing cost of electronic circuits.
Hereinbelow, an image-pickup device 210 provided in a head 200 in such a conventional electronic component placing apparatus will be described with reference to a fragmentary enlarged schematic explanatory view of the head 200 shown in FIG. 7. The head 200 has eight suction nozzles 201 as component holding members, arranged in a row, and FIG. 7 shows a section of the head 200 taken along a plane orthogonal to a direction of the arrangement. As shown in FIG. 7, the head 200 has the eight suction nozzles 201 capable of sucking and holding electronic components 1 at extremities of the nozzles, and each suction nozzle 201 is supported by a head frame 202 so as to be capable of moving up and down along a central axis of the nozzle (in vertical directions in FIG. 7) and capable of rotating about the central axis. As shown in FIG. 7, the image-pickup device 210 has a camera 211 that is provided to the left of the suction nozzle 201 in the drawing and that is capable of capturing an image of the electronic component 1 sucked and held by the suction nozzle 201, from underneath the electronic component in the drawing via two reflecting mirrors 212 and 213 placed on an optical axis of the camera. The image-pickup device 210 also has a linear guide rail 214 that is provided along the direction of the arrangement of the suction nozzles 201 to the upper left of the suction nozzle 201 in the drawing and that is fixed to the head frame 202. The camera 211 is supported by the head frame 202 through the medium of the linear guide rail 214 so as to be capable of sliding along the linear guide rail 214, i.e., along the direction of the arrangement of the suction nozzles 201. A sliding device 215 for sliding the camera 211 along the linear guide rail 214 is fixed to the head frame 202 in the neighborhood of a location where the linear guide rail 214 is installed.
When images of the electronic components 1 held by the suction nozzles 201 are captured by the image-pickup device 210, an image of the electronic component 1 held by each suction nozzle 201 is sequentially captured from underneath via the reflecting mirrors 212 and 213 while the camera 211 is slid by the sliding device 215 along the linear guide rail 214. Each image captured in this manner is subjected to recognition processing in a control unit, or the like, provided in the head 200, and the suction holding posture of each electronic component 1 relative to each suction nozzle 201 is thereby recognized. The suction holding posture is then corrected by rotation of the suction nozzle 201, or the like, so that the recognized suction holding posture coincides with a placement posture relative to a circuit board, and the electronic component 1 is thereafter placed on the circuit board. In the head 200 having the above structure, however, an image of the electronic component 1 held by the suction nozzle 201 is captured from underneath the electronic component 1, and it is therefore impossible to recognize a suction holding posture of the electronic component 1 with respect to the direction along the central axis of the suction nozzle 201 (i.e., the vertical direction in FIG. 7). For example, an electronic component 1 that is a minute electronic component, such as a chip component, is prone to be sucked and held in a position angled to the extremity of a suction nozzle 201 (a so-called angled position); it is difficult to recognize such a position on the basis of an image captured from underneath, and placement on a circuit board with such a position unrecognized may cause an error in the placement of the electronic component 1 on the circuit board or may cause a problem in that high-accuracy placement of electronic components cannot be achieved even if the placement error is avoided.
In the head 200, the sliding device 215 is provided on the head frame 202 in the neighborhood of the linear guide rail 214 and of the camera 211. Vibrations accompanying operation of the sliding device 215 are therefore prone to be transmitted through the linear guide rail 214 to the camera 211, and this causes a problem in that the camera 211 influenced by the vibrations cannot capture a high-accuracy image of an electronic component 1. An increase in the sliding velocity of the camera 211 slid by the sliding device 215, for the purpose of a decrease in a time span required for the placement of an electronic component 1 by the head 200, strengthens the transmitted vibrations and makes the above problem more noticeable, while a decrease in the sliding velocity for the purpose of a reduction in the vibrations fails to allow the decrease in the time span required for the placement and fails to allow efficient operation for placing electronic components. In a head 200 provided with a board recognizing device for recognizing placement positions, or the like, for electronic components 1 on a circuit board, for example, the electronic components 1 can be placed with reliable recognition of the placement positions on the circuit board; however, recognition accuracy required of the board recognizing device differs with the required accuracy in placement of electronic components 1. Though a head 200 that is provided with a board recognizing device having a high recognizing accuracy so as to address the high-accuracy placement of electronic components is capable of addressing high-accuracy placement, a narrowed recognizable field of view of the device causes a problem, for example, in that placement of an electronic component 1 which does not require high-accuracy placement may rather increase a time span required for recognition and may lower a placing efficiency. 
In order to address such high-accuracy placement of electronic components, it is necessary to capture a clear image of a placement surface of a component sucked and held by a suction nozzle. Though simple capture of the image with illumination of the placement surface of the component may address capture of images of conventional general-purpose components, the simple capture for miniaturized components, components with diversified shapes, and the like, may cause non-uniform illuminance, or the like, on their placement surfaces having miniaturized shapes, special shapes, and the like, and may thereby cause a problem in that images of the components cannot be captured clearly and in that such electronic components cannot be placed with a high accuracy. Therefore, an object of the present invention is to solve the above-mentioned problems and to provide a component placing head and a component placing method that have a plurality of component holding members, capture an image of a component held by each component holding member, recognize a holding posture of the component, and place the component on a circuit board on the basis of a result of the recognition, the component placing head and the component placing method being capable of performing the recognition with a high efficiency and a high accuracy.
Introduction {#s1} ============ The prevalence of obesity and type 2 diabetes mellitus (T2DM) is increasing dramatically. However, the underlying mechanisms of the development of T2DM are still unclear. Insulin resistance and T2DM are strongly influenced by both genetic and environmental factors (Hossain et al., [@B14]; Doria et al., [@B10]). It has been shown that low-grade chronic inflammation plays an important role in the pathogenesis of insulin resistance and T2DM (Wellen and Hotamisligil, [@B35]). Recently, it has been indicated that environmental factors and host genetics can interact to control gut microbiota composition, which can contribute to the development of insulin resistance and T2DM by triggering the immune response (Macdonald and Monteleone, [@B21]). Gut microbiota can be regulated by the innate immune system, and the overall balance in gut microbiota composition is an important factor ensuring normal host functions. Gut microbiota dysbiosis can contribute to an expanding list of chronic and metabolic diseases (Spiller and Sloan, [@B30]). Tripartite motif (TRIM) family proteins are implicated in the negative regulation of innate immune responses (Versteeg et al., [@B34]). We recently found that TRIM31, an E3 ubiquitin ligase of the TRIM family proteins, may directly bind to the nucleotide-binding oligomerization domain-like receptor (NLR) family pyrin domain-containing 3 (NLRP3) and negatively regulate NLRP3 inflammasome activity (Song et al., [@B29]). The NLRP3 inflammasome is a multi-protein platform comprising NLRP3, ASC, and caspase-1, and has a fundamental role in host defense against microbial pathogens (Guo et al., [@B13]). Subsequent evidence suggests that NLRP3 inflammasome activation can increase the susceptibility to several diseases, including obesity, insulin resistance, T2DM, and some autoimmune disorders (Wen et al., [@B36]).
Given that TRIM31 plays a central role in regulating NLRP3 inflammasome activity and that NLRP3 inflammasome activation can increase the risks of metabolic diseases, we aimed to determine glucose metabolic health, gut microbiota composition, and inflammatory cytokine levels in TRIM31^−/−^ mice, and to further investigate whether certain gut microbiota taxa correlate with specific metabolic parameters and inflammatory cytokines in TRIM31^−/−^ mice. Materials and methods {#s2} ===================== Study approval -------------- The study was approved by the Ethics Committee and the Scientific Investigation Board of Shandong University Qilu Hospital (Jinan, Shandong Province, China). All experimental procedures were performed in accordance with the recommendations found in the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH publication no. 85--23, revised 1996). Mice and study design --------------------- Ten TRIM31^−/−^ mice on a C57BL/6J background were generated by Cyagen Biosciences Company (Guangzhou, China) with transcription activator-like effector nuclease (TALEN) technology, as we previously described (Song et al., [@B29]; Liu et al., [@B19]). The genotyping of TRIM31^−/−^ mice was confirmed by sequencing PCR fragments (250 bp) in the TALEN-targeting region, amplified from genomic DNA isolated from mouse tail tips with the following primers: forward 5′-GGCCTTGGATTTCTGTACTTTCACATC-3′ and reverse 5′-TGGGCCTGAACGTATTCTTATTCACAG-3′. Age- and weight-matched male C57BL/6J wild-type (WT) mice served as controls (*n* = 10). All the mice had the same origin and were raised under the same conditions. Experimental mice were genotyped by genomic DNA sequencing. The sequence analysis of WT and TRIM31^−/−^ mice involved the sequences CATTGACTGTGGGCACAACTTCTGCCTG and CATTGACTGTGGG-ACAACTTCTGCCTG (-1). The sequence peaks are shown in Figure [S1](#SM2){ref-type="supplementary-material"}.
The mice were bred in the same room with 12/12-h light-dark cycles at the Animal Facility of Shandong University Qilu Hospital (Jinan, China). Mice were fed *ad libitum* with normal chow food and sterile water throughout the experimental period. In our study, only male mice were used to prevent potential confounding factors associated with the hormone profile of female mice. Glucose tolerance test and insulin tolerance test ------------------------------------------------- We performed the intraperitoneal glucose tolerance test (IPGTT) and insulin tolerance test (ITT) in both 16- and 20-week-old TRIM31^−/−^ and WT mice. Animals were given glucose (2 mg dextrose/g body weight) or insulin (1 U/kg body weight) by intraperitoneal injection, respectively. Then, blood glucose was measured before the injection (time 0) and at 15-, 30-, 60-, and 120-min intervals after injection. Blood glucose responses to the IPGTT and ITT were calculated as the area under the curve (AUC) for each mouse according to the trapezoidal method. Homeostatic model assessment-insulin resistance (HOMA-IR) --------------------------------------------------------- Fasting serum insulin concentrations were measured in 20-week-old TRIM31^−/−^ and WT mice by using an insulin ELISA kit (Abcam, Cambridge, UK). Insulin resistance was evaluated by the HOMA-IR score, calculated as fasting serum insulin (μU/ml) × fasting plasma glucose (mmol/l)/22.5. Biochemical analysis -------------------- Serum inflammatory cytokines, including IL-6 (interleukin-6), TNF-α (tumor necrosis factor α), IL-1β (interleukin-1β), and IL-10 (interleukin-10), were measured by the U-PLEX Assay Platform (Meso Scale Discovery, Rockville, MD), according to the manufacturer\'s instructions. All samples were tested in duplicate. Quantitative real-time RT-PCR ----------------------------- Total RNA was extracted from freshly isolated caecal samples by using TRIzol Reagent (Invitrogen, Carlsbad, CA).
RNA was reverse transcribed from each sample using the Applied Biosystems cDNA Reverse Transcription kit (Applied Biosystems, Life Technologies). The cDNA was amplified with a SYBR® Green PCR Master Mix (RR420A, Takara Bio Inc., Otsu, Shiga, Japan). We used Oligo 7.0 software (Molecular Biology Insights, Inc., Cascade, USA) to design the sequences of the primers. The primer sequences for the TNF-α, IL-1β, and β-actin genes for real-time RT-PCR are in Table [S1](#SM1){ref-type="supplementary-material"}. Western blot analysis --------------------- Protein extracts were separated by SDS-PAGE and transferred onto polyvinylidene fluoride membranes for incubation overnight at 4°C with the corresponding primary antibodies for TNF-α (1:500; Abcam, Cambridge, UK), IL-1β (1:500; Abcam, Cambridge, UK), caspase-1 (1:1000; Abcam, Cambridge, UK), NLRP3 (1:1000; AdipoGen, CA), total IRS-1 (insulin receptor substrate-1) (1:1000; Abcam, Cambridge, UK), phosphorylated-IRS-1 (Ser307) (p-IRS-1) (1:200; Santa Cruz Biotechnology, Santa Cruz, CA, USA), total Akt and phosphorylated-Akt (Thr308) (p-AKT) (1:1000; Cell Signaling Technology, MA, USA), β-actin (1:1000; Abcam, Cambridge, UK), and tubulin (1:1000; ProteinTech, Wuhan, China), followed by the appropriate secondary antibodies (1:5000; Abcam, Cambridge, UK) for 1 h at room temperature. Protein levels of TNF-α, IL-1β, caspase-1, NLRP3, IRS-1, and AKT were normalized to that of β-actin or tubulin. Gut microbiota analysis ----------------------- ### PCR amplification and sequencing Genomic DNA was extracted from the caecal content of mice by using the EZNA DNA Kit (Omega Bio-tek, Norcross, GA); then the bacterial 16S ribosomal RNA (rRNA) gene targeting the V3-V4 region was amplified by using bar-coded universal primers, with 338F 5′-ACTCCTACGGGAGGCAGCA-3′ and 806R 5′-GGACTACHVGGGTWTCTAAT-3′.
PCR reactions were performed in triplicate with the following mixture: 10 ng template DNA, 2 μl dNTPs (2.5 mmol/l), 0.8 μl forward primer (5 μmol/l) and 0.8 μl reverse primer (5 μmol/l), 0.4 μl FastPfu Polymerase, 4 μl 5 × FastPfu Buffer, and PCR-grade water in a final volume of 20 μl. Reactions were performed with the following cycling conditions: 95°C for 3 min, followed by 25 cycles at 95°C for 30 s, 57°C for 30 s, and 72°C for 45 s, with a final extension of 10 min at 72°C. Replicate amplicons were pooled and bead-purified by using the AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA). Reaction products were pair-end sequenced by using Illumina MiSeq technology (Illumina Inc., San Diego, CA) (Caporaso et al., [@B6]). ### Sequence analysis The 16S rRNA gene raw reads were analyzed by using Quantitative Insights Into Microbial Ecology (QIIME, <http://bio.cug.edu.cn/qiime/>) (Caporaso et al., [@B5]). Operational taxonomic units (OTUs) were created by clustering the reads at 97% similarity by using UPARSE v7.1 (<http://drive5.com/uparse/>). The rarefaction analysis and Shannon diversity index were calculated with representative sequences of OTUs and their relative abundance determined by QIIME (Caporaso et al., [@B5]). The RDP Classifier (<http://rdp.cme.msu.edu/>) was used to analyze the phylogenetic affiliation of each 16S rRNA gene sequence, with a confidence threshold of 70%, against the Silva (SSU115) 16S rRNA database (Amato et al., [@B1]). PCoA plots were generated according to the matrix of distance calculated by using the weighted UniFrac algorithm (Lozupone et al., [@B20]). The heatmap profile was generated by using R (<http://www.r-project.org/>), and bacterial taxa differences were elucidated by the linear discriminant analysis (LDA) effect size (LEfSe).
The LEfSe algorithm was used to draw the cladogram, with the Huttenhower Galaxy web application (The Huttenhower Lab, Boston, MA; <http://huttenhower.sph.harvard.edu/lefse/>) (Segata et al., [@B27]). Statistical analysis -------------------- Data were expressed as mean ± standard deviation (S.D.). Non-parametric variables were mathematically transformed to improve symmetry. The unpaired *t*-test was used to study differences in continuous variables between groups. The Mann-Whitney *U*-test was performed to examine differences in bacterial composition between TRIM31^−/−^ and WT mice. The correlations between microbial composition and metabolic and inflammatory parameters were assessed using Spearman\'s analysis. Statistical significance was determined with SPSS 21.0 (SPSS Inc., Chicago, IL), and *P* \< 0.05 was considered statistically significant. Results {#s3} ======= TRIM31^−/−^ mice exhibited glucose intolerance and insulin resistance --------------------------------------------------------------------- There were no differences in body weight and food intake between TRIM31^−/−^ and WT mice from birth to the end of the experimental period (Figures [1E,F](#F1){ref-type="fig"}). Glucose metabolism status was evaluated in TRIM31^−/−^ mice from 8 weeks of age. However, no difference in glucose tolerance and insulin tolerance was observed between TRIM31^−/−^ and WT mice until 16 weeks of age. For 16-week-old mice, the blood glucose level was higher at 15 min after intraperitoneal glucose administration in TRIM31^−/−^ mice, compared with WT mice (*P* \< 0.001; Figure [1A](#F1){ref-type="fig"}). Consistently, the AUC for IPGTT was greater in TRIM31^−/−^ mice (*P* \< 0.05; Figure [1B](#F1){ref-type="fig"}). No difference in glucose levels of ITT was observed between TRIM31^−/−^ and WT mice (Figures [1C,D](#F1){ref-type="fig"}).
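The glucose-handling indices used in these comparisons reduce to simple arithmetic: the AUC is the trapezoidal area under the glucose curve over the IPGTT sampling times, and HOMA-IR is the product formula given in the Methods. A minimal sketch (the glucose values below are hypothetical, not the study's data):

```python
def auc_trapezoid(times, values):
    """Area under the curve by the trapezoidal method (times in minutes)."""
    return sum((t1 - t0) * (v0 + v1) / 2.0
               for t0, t1, v0, v1 in zip(times, times[1:], values, values[1:]))

def homa_ir(fasting_insulin_uU_ml, fasting_glucose_mmol_l):
    """HOMA-IR = fasting insulin (uU/ml) x fasting glucose (mmol/l) / 22.5."""
    return fasting_insulin_uU_ml * fasting_glucose_mmol_l / 22.5

# IPGTT sampling scheme used in the study: 0, 15, 30, 60, 120 min
times = [0, 15, 30, 60, 120]
glucose = [5.2, 15.1, 12.3, 9.8, 6.4]   # hypothetical blood glucose values, mmol/l
print(auc_trapezoid(times, glucose))
print(homa_ir(10.0, 4.5))
```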
Fasting serum insulin concentration and HOMA-IR were higher in TRIM31^−/−^ mice, indicating insulin resistance (*P* \< 0.05, Figures [1G,H](#F1){ref-type="fig"}). ![Glucose metabolism of TRIM31^−/−^ and WT mice at age 16 weeks. **(A)** Glucose tolerance test, **(B)** AUC for glucose tolerance, **(C)** Insulin tolerance test, **(D)** AUC for insulin tolerance test, **(E)** Body weight at age 16 weeks, **(F)** Food intake, **(G)** Serum insulin level, **(H)** HOMA-IR level. *n* = 10, in each group. ^\*^*P* \< 0.05 and ^\*\*\*^*P* \< 0.001 TRIM31^−/−^ vs. WT mice. AUC, area under the curve; HOMA-IR, Homeostasis model assessment of insulin resistance; WT, wild-type; TRIM31, tripartite motif-containing protein 31.](fphys-09-00024-g0001){#F1} For 20-week-old mice, there was no difference in body weight between TRIM31^−/−^ and WT mice (Figure [2E](#F2){ref-type="fig"}); the blood glucose value was higher at 30 min (*P* \< 0.001) and 60 min (*P* \< 0.01) after intraperitoneal glucose administration in TRIM31^−/−^ mice, indicating that impaired glucose tolerance was exacerbated in TRIM31^−/−^ mice with aging (Figure [2A](#F2){ref-type="fig"}). Consistently, the AUC for IPGTT was greater in TRIM31^−/−^ mice (*P* \< 0.01; Figure [2B](#F2){ref-type="fig"}). However, no difference in blood glucose levels of ITT was found between TRIM31^−/−^ and WT mice (Figures [2C,D](#F2){ref-type="fig"}). Fasting serum insulin concentration (*P* \< 0.01) and HOMA-IR (*P* \< 0.05) were significantly higher in TRIM31^−/−^ mice (Figures [2F,G](#F2){ref-type="fig"}). ![Metabolic parameters of TRIM31^−/−^ and WT mice at age 20 weeks. **(A)** Glucose tolerance test, **(B)** AUC for glucose tolerance, **(C)** Insulin tolerance test, **(D)** AUC for insulin tolerance test, **(E)** Body weight at age 20 weeks, **(F)** Serum insulin level, **(G)** HOMA-IR level. *n* = 10, in each group.
^\*^*P* \< 0.05, ^\*\*^*P* \< 0.01, ^\*\*\*^*P* \< 0.001 TRIM31^−/−^ vs. WT mice. AUC, area under the curve; HOMA-IR, Homeostasis model assessment of insulin resistance; WT, wild-type; TRIM31, tripartite motif-containing protein 31.](fphys-09-00024-g0002){#F2} Characteristics of 16S rRNA gene sequencing ------------------------------------------- To profile gut microbiota structure differences between TRIM31^−/−^ and WT mice, the bacterial 16S rRNA gene V3-V4 region was sequenced on the Illumina MiSeq platform. A total of 515,839 high-quality sequences were obtained, with an average of 36,846 sequences per sample. The Good\'s coverage of each group was \>97%, suggesting that the sequences identified represent most of the bacteria present in the samples. Taxonomy assignment showed the correlation between the duplicates to be \>99.5% at any taxonomy level, indicating that the accuracy and reproducibility of sequencing were reliable for further analysis. The OTU count was similar in TRIM31^−/−^ and WT mice (498.8 ± 67.6 vs. 511.9 ± 98.5). The diversity index (Shannon) and estimator of community richness (Chao) were comparable between the two groups, indicating parallel community richness and diversity of gut microbiota between TRIM31^−/−^ and WT mice. Detailed information on these characteristics is shown in Table [S2](#SM1){ref-type="supplementary-material"}. Overall microbial structures of gut microbiota ---------------------------------------------- The overall microbiota structure differed between TRIM31^−/−^ and WT mice at the phylum, family, and genus levels, respectively (Figure [S2A](#SM3){ref-type="supplementary-material"}--[C](#SM3){ref-type="supplementary-material"}). Principal coordinate analysis (PCoA) showed an overview of gut microbial dynamics associated with genotype.
Bacterial structures differed between TRIM31^−/−^ and WT mice, as demonstrated by the first three principal component (PC) scores accounting for PC1 = 25.36%, PC2 = 18.16%, and PC3 = 12.84% of total variation, respectively. These findings indicate a statistically significant clustering by genotype (Figure [S3](#SM4){ref-type="supplementary-material"}). Phylotypes in TRIM31^−/−^ and WT mice ------------------------------------- A cladogram represents the differential structure of gut microbiota from the phylum level to the bacteria level (Figure [3A](#F3){ref-type="fig"}). The figure includes a list of the predominant bacteria in TRIM31^−/−^ and WT mice as determined by LEfSe. The greatest differences between TRIM31^−/−^ and WT mice at the family level are shown in Figure [3B](#F3){ref-type="fig"}. The differences in gut microbiota composition at the family level between TRIM31^−/−^ and WT mice are shown in Figure [4](#F4){ref-type="fig"}, indicating significant variations in gut microbiota composition between the two groups. The proportions of Prevotellaceae (a family in the phylum Bacteroidetes) and Veillonellaceae (a family in the phylum Firmicutes) were both higher in TRIM31^−/−^ than WT mice. ![Different profiles of gut microbiota between TRIM31^−/−^ and WT mice. **(A)** Cladogram representation of gut microbiota taxa, from the phylum level to the bacteria level. Red indicates taxa enriched in TRIM31^−/−^ mice, and green indicates taxa enriched in WT mice. The diameter of each circle is proportional to the taxon\'s abundance. **(B)** Histogram of the LDA scores for differentially abundant taxa (red: TRIM31^−/−^ mice; green: WT mice). LDA scores were calculated by LDA effect size, using linear discriminant analysis. *n* = 7, in each group. LDA, linear discriminant analysis; WT, wild-type.](fphys-09-00024-g0003){#F3} ![Significantly different phylotypes between TRIM31^−/−^ and WT mice at the family level.
Data for mice are shown as relative abundance (%) of families in each group. *n* = 7, in each group. ^\*\*^*P* \< 0.01, TRIM31^−/−^ vs. WT mice by Mann-Whitney *U*-test.](fphys-09-00024-g0004){#F4} Inflammatory cytokines in TRIM31^−/−^ and WT mice ------------------------------------------------- We further measured serum inflammatory cytokine levels in 20-week-old TRIM31^−/−^ and WT mice, including three pro-inflammatory cytokines (IL-6, TNF-α, and IL-1β) and one anti-inflammatory cytokine (IL-10). We found that serum IL-1β and TNF-α levels were higher in TRIM31^−/−^ mice, compared with WT mice. However, no statistical difference in IL-6 levels was observed between the groups. In addition, IL-10, an anti-inflammatory cytokine, showed a tendency to be lower in TRIM31^−/−^ mice (Figures [5A--D](#F5){ref-type="fig"}). We then found that TNF-α and IL-1β expressions were significantly higher in caecal samples from 20-week-old TRIM31^−/−^ than WT mice (Figures [5E,F,I,J](#F5){ref-type="fig"}). The NLRP3 inflammasome is a multi-protein platform which comprises NLRP3, ASC, and caspase-1. We detected the protein expression of NLRP3 and caspase-1 in caecal tissue of TRIM31^−/−^ and WT mice and found that NLRP3 and caspase-1 protein expressions were significantly upregulated in TRIM31^−/−^ mice, compared with WT mice, indicating that TRIM31 deficiency could lead to activation of the inflammasome (Figures [5F--H](#F5){ref-type="fig"}). ![Elevated levels of pro-inflammatory cytokines in TRIM31^−/−^ mice. **(A--D)** Serum IL-6, IL-10, TNF-α, and IL-1β concentrations, **(E)** qRT-PCR of mRNA levels of TNF-α and IL-1β. **(F--J)** Western blot analysis of caecal protein levels of TNF-α, IL-1β, caspase-1, and NLRP3. **(K--L)** Western blot analysis of p-IRS-1 and p-Akt protein levels in visceral adipose tissue. *n* = 10, in each group. ^\*^*P* \< 0.05, TRIM31^−/−^ vs. WT mice.
IL-6, interleukin-6; IL-10, interleukin-10; TNF-α, tumor necrosis factor-α; IL-1β, interleukin-1β; p-IRS-1, phosphorylated insulin receptor substrate-1; p-Akt, phosphorylated-Akt.](fphys-09-00024-g0005){#F5} Decreased p-Akt/Akt protein levels in TRIM31^−/−^ mice ------------------------------------------------------ The insulin receptor signaling pathway is a major mechanism underlying the development of glucose intolerance and insulin resistance. The IRS-1/phosphatidylinositol 3-kinase (PI3K)/Akt axis plays a key role in insulin receptor signaling transduction. Thus, we further examined the total and phosphorylation levels of IRS-1 (Ser307) and Akt (Thr308) in visceral adipose tissue of TRIM31^−/−^ and WT mice. As shown in Figure [5](#F5){ref-type="fig"}, elevated p-IRS-1/IRS-1 protein expression and decreased Akt Thr308 phosphorylation were found in TRIM31^−/−^ mice (Figures [5K,L](#F5){ref-type="fig"}). Correlation analysis between gut microbiota composition and inflammatory cytokines and insulin level ---------------------------------------------------------------------------------------------------- We then further investigated whether specific phylotypes, such as Prevotellaceae and Veillonellaceae, were associated with fasting serum insulin concentration and inflammatory cytokine levels. Strikingly, the proportion of Prevotellaceae, composed of four genera (Prevotella, Alloprevotella, Hallella, and Paraprevotella), was positively correlated with the caecal IL-1β mRNA level (*r* = 0.59, *P* = 0.04) (Figure [6A](#F6){ref-type="fig"}). In addition, Veillonellaceae was associated with higher serum insulin concentration and caecal TNF-α mRNA expression (Figures [6B,C](#F6){ref-type="fig"}). These data showed that specific phylotypes were significantly correlated with serum insulin and inflammatory cytokine levels, and may play an important role in the insulin resistance and activated inflammation status of TRIM31^−/−^ mice.
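The group differences and correlations reported here rest on two standard non-parametric procedures, the Mann-Whitney *U*-test and Spearman's rank correlation. A self-contained sketch of both (normal approximation for the two-sided *p*-value, no tie correction; illustrative only, not the study's SPSS workflow):

```python
import math
import numpy as np

def rankdata(a):
    """Ranks 1..n, with tied values replaced by their average rank."""
    a = np.asarray(a, dtype=float)
    order = np.argsort(a, kind="stable")
    sa = a[order]
    r = np.arange(1, len(a) + 1, dtype=float)
    i = 0
    while i < len(a):
        j = i
        while j + 1 < len(a) and sa[j + 1] == sa[i]:
            j += 1
        r[i:j + 1] = r[i:j + 1].mean()   # average rank over the tie group
        i = j + 1
    ranks = np.empty(len(a))
    ranks[order] = r
    return ranks

def mann_whitney_u(x, y):
    """U statistic for sample x, plus a two-sided normal-approximation p-value."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    u = rankdata(np.concatenate([x, y]))[:nx].sum() - nx * (nx + 1) / 2
    mu = nx * ny / 2.0
    sd = math.sqrt(nx * ny * (nx + ny + 1) / 12.0)
    p = math.erfc(abs(u - mu) / sd / math.sqrt(2))
    return u, p

def spearman_r(x, y):
    """Spearman's rho: Pearson correlation of the ranks."""
    return float(np.corrcoef(rankdata(x), rankdata(y))[0, 1])
```

For real analyses, library implementations (e.g., `scipy.stats.mannwhitneyu` and `scipy.stats.spearmanr`) additionally offer exact p-values and tie corrections.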
![Gut microbiota taxa correlated with insulin and inflammation parameters in TRIM31^−/−^ mice. **(A)** Correlation of Prevotellaceae proportion with caecal IL-1β mRNA level. **(B)** Correlation of Veillonellaceae proportion with caecal TNF-α mRNA level. **(C)** Correlation of Veillonellaceae proportion with serum insulin level. Correlations were assessed by Spearman\'s correlation analysis. *n* = 7 in each group. TNF-α, tumor necrosis factor-α; IL-1β, interleukin-1β.](fphys-09-00024-g0006){#F6} Discussion {#s4} ========== Environment, host genetics, and microbiota are considered to interact to maintain gut homeostasis, weight control, glucose tolerance, and insulin sensitivity (Spor et al., [@B31]). Modifications of these three components may trigger the development of obesity, insulin resistance, and diabetes mellitus. Regarding genetic factors, our study showed that TRIM31 knockout mice had glucose intolerance and insulin resistance. These metabolic disorders were characterized in 20-week-old TRIM31^−/−^ mice, with no difference in body weight between TRIM31^−/−^ and control animals. As demonstrated in other animal models (Barnard et al., [@B3], [@B4]), insulin resistance precedes the development of obesity, at least up to 20 weeks of age. Consistent with our findings, several other knockout mouse models have also shown glucose intolerance and insulin resistance. In NLRP3 knockout mice, elimination of NLRP3 ameliorated obesity-induced inflammation and insulin resistance (Vandanmagsar et al., [@B33]), and caspase-1 knockout mice also showed better glucose tolerance and insulin sensitivity than wild-type mice after high-fat diet feeding (Stienstra et al., [@B32]). Vitamin D3 1α-hydroxylase knockout mice and prion protein knockout mice have likewise been shown to develop insulin resistance (Cui et al., [@B8]; de Brito et al., [@B9]). Thus, impaired glucose metabolism has been observed and well investigated in many transgenic animal models. 
In recent decades, chronic inflammatory responses and oxidative stress have been associated with the development of several metabolic disorders, such as obesity, insulin resistance, and T2DM. We found that TRIM31^−/−^ mice had impaired glucose tolerance and decreased insulin sensitivity, accompanied by moderate inflammation activation. Serum TNF-α and IL-1β concentrations and caecal TNF-α and IL-1β expression were higher in TRIM31^−/−^ mice. The serum inflammatory cytokine levels differed from those in our previous work (Song et al., [@B29]) because the mouse model was different: in our previous work, mainly female mice were investigated, and the mice were 6 weeks old. More importantly, the previous study focused on an alum-induced peritonitis model and found that TRIM31 could inhibit NLRP3 inflammasome activity in mouse peritonitis *in vivo*. Many inflammatory cytokines have been reported to be associated with the development of metabolic disorders (Hotamisligil, [@B15]). Consistent with previous studies, clinical trials suggest that increased TNF levels result in impaired glucose homeostasis and insulin resistance in patients with T2DM (Gonzalez-Gay et al., [@B12]). IL-1β, a prominent pro-inflammatory cytokine, can efficiently contribute to the generation of many inflammatory mediators (Arend et al., [@B2]). Our previous study demonstrated the potential functions of TRIM31 in the innate immune response, and TRIM31 deficiency facilitated NLRP3 inflammasome activation (Song et al., [@B29]). Consistent with a previous study (Nie et al., [@B22]), elevated p-IRS-1/IRS-1 protein expression and decreased Akt Thr308 phosphorylation were observed in TRIM31^−/−^ mice with impaired glucose tolerance and insulin resistance. Therefore, we speculate that TRIM31 deficiency could facilitate NLRP3 inflammasome activation and thereby play an important role in the development of metabolic disorders. 
The gut microbiota consists of trillions of commensal micro-organisms residing within our intestines (Human Microbiome Project Consortium, [@B16]). The gut microbiome contains at least 100-fold more unique genes than the human genome (Qin et al., [@B23]). The gut microbiota, considered an "external" organ, participates in several aspects of host physiology and metabolism. Normal host functions depend on the overall balance in the composition of the gut microbiota. Dysbiosis of the gut microbiota has been found to contribute to an expanding list of chronic diseases including obesity (Seganfredo et al., [@B26]), diabetes mellitus (Qin et al., [@B24]), inflammatory bowel disease (Sartor, [@B25]), and systemic inflammatory response syndrome (Shimizu et al., [@B28]). Given the key role of the gut microbiota in the innate immune system and the potential functions of TRIM31 in the innate immune response, we aimed to profile the gut microbiota in TRIM31^−/−^ mice. Consistent with previous studies, our study showed that the dominant phyla in the mice were Firmicutes, Bacteroidetes, and Proteobacteria (Karlsson et al., [@B17]; Evans et al., [@B11]). The relative abundance of Bacteroidetes was increased and that of Firmicutes and Proteobacteria decreased in TRIM31^−/−^ mice. Consistent with our results, Kellermayer et al. found a lower proportion of Firmicutes in Toll-like receptor 2 (TLR2)-deficient mice, with an increased proportion of Bacteroidetes (Kellermayer et al., [@B18]), and another study likewise showed that the proportion of Bacteroidetes was greater in TLR2^−/−^ mice (Caricilli et al., [@B7]). Bacteroidetes are gram-negative bacteria, with lipopolysaccharide (LPS) in the outer membrane. Since we found an increased relative abundance of Bacteroidetes in TRIM31^−/−^ mice, the microbiota of these mice could produce more LPS. 
Activation of TLR4 by LPS promotes glycolysis, which contributes to nucleotide biosynthesis and enhanced ATP production, thus leading to NLRP3 inflammasome activation. NLRP3 inflammasome activation can then promote the release of IL-1β from macrophages, and activated IL-1β can activate c-Jun N-terminal kinase (JNK), IκB kinase (IKK), and the IRS-1/PI3K/Akt axis via the IL-1 receptor (IL-1R). Engagement of the insulin receptor by IRS-1 is impaired, with downregulation of the PI(3)K--Akt signaling pathway (Wen et al., [@B36]). Thus, based on our findings and previous studies, a hypothetical model is tentatively proposed to illustrate the potential mechanism by which the gut microbiota regulates glucose metabolism in TRIM31^−/−^ mice (Figure [7](#F7){ref-type="fig"}). This may be the underlying mechanism by which TRIM31^−/−^ mice showed impaired glucose tolerance and insulin resistance. ![A hypothetical model for the pathogenesis of glucose intolerance and insulin resistance in TRIM31^−/−^ mice. Gut microbiota may generate LPS, inducing NLRP3 inflammasome activation and the release of IL-1β. TRIM31 binds to NLRP3 to promote proteasomal degradation of NLRP3. TRIM31 expression is absent in TRIM31^−/−^ mice, which promotes the release of IL-1β. The activated IL-1β induces insulin resistance via IRS1 and the PI(3)K--Akt signaling pathway. LPS, lipopolysaccharide; IL-1β, interleukin-1β; IRS1, insulin receptor substrate 1.](fphys-09-00024-g0007){#F7} Conclusion {#s5} ========== In conclusion, our study is novel in showing that TRIM31 deficiency is associated with impaired glucose metabolism and a disrupted gut microbiota in mice, characterized by clear differences in gut microbiota composition and inflammation activation; moreover, gut microbiota composition is correlated with metabolic and inflammatory parameters. 
Our study provides a critical theoretical foundation for the putative roles of gut microbiota in the complicated molecular and cellular networks, which can contribute to building bridges between genotypes and phenotypes. A better understanding of the connections between TRIM31 deficiency and the development of impaired glucose metabolism and disrupted gut microbiota would be of great benefit and have potential implications for a wide range of common human metabolic disorders involving glucose intolerance, insulin resistance and diabetes mellitus. Author contributions {#s6} ==================== JC, FX, CC, MZ, LQ, JM, WS, XX, MZ (second author), and PH had substantial contributions to data curation, investigation, and methodology of the study. PH, JC, and MZ (second author) wrote, reviewed and edited the manuscript before submission. CG and YZ reviewed and edited the manuscript before submission. PH and JC had substantial contributions to conceptualization and formal analysis in the study. PH, MZ (second author), YZ, and CG made substantial contributions to supervision and validation. PH, CG, and YZ made substantial contributions to funding acquisition. Conflict of interest statement ------------------------------ The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. This work was supported by the National 973 Basic Research Program of China (No. 2013CB530703, No. 2015CB553604), the Key Research and Development Plan of Shandong Province (No. 2015GSF118133), National key R & D plan (No. 2016YFC1300403), National Natural Science Foundation of China (No. 31770977, 81425004, 81770442, 61331001, 81530014, 81320108004, No. 81400284, 91339109, 81270350, 31400771), Program of Introducing Talents of Discipline to Universities (No. B07035), State Key Program of National Natural Science of China (No. 61331001, No. 
81530014), International Collaboration and Exchange Program of China (No. 81320108004), Natural Science Foundation of Shandong Province (No. ZR2014CQ004), Clinical Medicine Science and Technology Innovation Plan of Jinan Science and Technology Bureau (No. 201602157 and 201506002), Shenzhen Science and Technology Research and Development Fund (No. JCYJ20160331183804137), and Science and Technology Project of Guangdong Province (No. 2017A020215005). Supplementary material {#s8} ====================== The Supplementary Material for this article can be found online at: <https://www.frontiersin.org/articles/10.3389/fphys.2018.00024/full#supplementary-material> ###### Click here for additional data file. ###### Click here for additional data file. ###### Click here for additional data file. ###### Click here for additional data file. [^1]: Edited by: Gabriele Giacomo Schiattarella, University of Naples Federico II, Italy [^2]: Reviewed by: Rui Curi, University of São Paulo, Brazil; Cristina M. Sena, University of Coimbra, Portugal [^3]: This article was submitted to Clinical and Translational Physiology, a section of the journal Frontiers in Physiology
The natural history of long-term cardiac pacing. During the past ten years, 504 patients have received one or more pacemakers for complete heart block or other arrhythmia. Of these patients, 306 (61%) are alive. Actuarial analysis shows a steady attrition of 9.4% per year for the first five years, decreasing to 7% per year for the second five years. The overall survival was decreased for patients with congestive heart failure and advanced age and was not affected by the history of Stokes-Adams attacks, initial pulse rate below 50 per minute, or a QRS duration greater than 0.12 second prior to pacing. Cardiac problems were the primary cause of death in 71% of the patients. The natural history of patients with permanent pacemakers depends, more than any other factor, on the function of the left ventricle.
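The attrition figures above imply a ten-year actuarial survival that can be checked with simple arithmetic. The sketch below assumes the quoted rates act as constant annual attrition applied multiplicatively within each five-year block — an interpretation for illustration, not something the abstract states explicitly.

```python
# Assumption: 9.4%/year attrition for years 1-5 and 7%/year for years 6-10,
# compounded annually. This is one reading of "steady attrition per year".
survival = (1 - 0.094) ** 5 * (1 - 0.07) ** 5
print(f"Implied 10-year actuarial survival: {survival:.1%}")  # roughly 42.5%
```

That figure is broadly in line with the 61% of the cohort alive at varying (shorter) follow-up times.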
The present invention is directed to an optical waveguide that provides both weak and strong photon confinement in a unitarily formed waveguide device. Optical data transmission offers various advantages over other forms of data transmission, primarily with regard to bandwidth and size of the transmission medium (e.g., fiber-optic cables, waveguides, etc.). Additionally, recent developments have made more attractive the fabrication of integrated optical devices suitable for use in optical data transmission systems. Examples of such developments can be found in U.S. Pat. Nos. 5,790,583; 5,825,799; 5,878,070; 5,926,496; and 6,009,115, the contents of each of which are incorporated by reference herein. Those references describe various optical devices such as lasers, resonators and waveguides, which are well-suited for use in constructing data and telecommunication optical networks. Heretofore, optical networks have routed or otherwise controlled the transmission of light (i.e., of an optical signal) by converting the light signal into an electrical signal, manipulating the converted electrical signal using electronic components, and then converting the electrical signal back into a light signal. Such conversion-intensive signal processing is, however, undesirable because it slows and complicates data flow. It is therefore desirable, whenever a light signal is to be manipulated, to avoid converting light signals to electrical signals. Rather, it is preferable to instead use optical devices to manipulate the light signal directly, and thereby simplify and speed operation of the optical network. Eliminating many of the electronic components from optical networks also facilitates the integration of very small (i.e., nanometer scale) optical components in the optical networks. 
In some cases such optical components may comprise a plurality of integrated devices formed on a single substrate much in the same manner as the integrated electrical semiconductor devices which are today in widespread use. The waveguides currently used in optical networking may vary in their size and construction because different waveguide configurations are preferred for different uses. A new generation of optical waveguide devices now employed in optical data systems uses nanostructure (i.e., nanometer scale) deeply etched waveguides to control light pulses. Such nanostructure deeply etched waveguides strongly confine the light transmitted therein, and offer benefits such as reduced overall linear insertion losses, and maximized optical power coupling efficiency into the nanostructure waveguides. Other optical components may include waveguides which weakly confine the light transmitted therethrough, such as, for example, shallow etched waveguides. By way of example, conventional shallow etched waveguides transmit light efficiently and so are suited for use whenever light is to be sent a substantial distance. For various reasons dictated by the laws of optics, it is eventually preferable to transmit an optical signal through weakly-confining, rather than strongly-confining waveguides. Such weakly-confining waveguides are known, and may be generally characterized as two-dimensional strip waveguides. Weakly-confining waveguides typically have a core width of at least 2 μm. In contrast, strongly-confining waveguides may be deeply etched and have a width of not more than 1 μm. The deeply-etched structure of such waveguides minimizes leakage of optical power carried by the tail of the guided mode into the substrate. 
Although nanostructure optical devices employ nanostructure deeply etched waveguides, the light pulses eventually will, because of signal transmission issues, pass into weakly-confining conventional shallow etched waveguides, which have lower propagation losses than strongly-confining waveguides. Such weakly-confining waveguides may take the form of shallow etched waveguides and are preferable for transmission of light pulses because they are single mode, and because they are relatively easy to fabricate. Arranging for the efficient passage of light between the two types of waveguides is, however, difficult. For example, light transmitted between weakly-confining and strongly-confining waveguides will be subject to losses, such as reflection loss, which occurs when light propagates from one waveguide to another. Although light can be transferred from a conventional weakly-confining strip waveguide to a nanostructure deeply etched waveguide at a butt joint, such a connection is undesirable because it is subject to losses. The small cross-section of the nanostructure deeply etched waveguide makes its coupling efficiency to the conventional weakly-confining strip waveguide poor. This occurs because the required deep etch of the nanostructure strongly-confining waveguide makes such a structure multi-mode, while the weakly-confining waveguide is single-mode. This means that a significant part of the coupled optical power transmitted into these sections will be carried by the higher order modes and will be radiated when it arrives at the devices which are served by the nanostructure deeply etched waveguide. This effect increases the linear insertion loss of such devices. The term "waveguide" will be understood by those skilled in the art to refer to optical components having a core of material surrounded by cladding, with both the core and cladding being transparent to light and having a respective index of refraction. 
The core may be a buried structure, in which case it is completely surrounded by cladding. Alternatively, the core may be a ridge or strip structure, in which case it is partially surrounded by cladding, and partially surrounded by another medium such as, for example, air or a vacuum having a respective index of refraction. To "strongly-confine" generally refers to a difference in refractive indices (Δn) between the waveguide core, cladding, and surrounding medium (if provided) of at least a particular amount. To "weakly-confine" refers to a waveguide in which the difference in refractive indices between the waveguide core, cladding, and surrounding medium (if provided) is less than that particular amount. A waveguide may be a photonic-wire waveguide, which provides a waveguide core surrounded in all directions transverse to photon propagation direction, such as, for example, both in a width and thickness direction, by a relatively low refractive index (compared with the core) medium such as air, silica, or other relatively low refractive index material, to provide strong photon confinement in all directions perpendicular to their propagation direction in and through the waveguide core. A waveguide may also be a photonic-well waveguide, which provides a waveguide core surrounded on opposite sides in a direction transverse to photon propagation direction, such as, for example, in a width direction, by a relatively low refractive index medium or material, to provide strong photon confinement in a direction perpendicular to their propagation direction in and through the waveguide core. Thus, there exists a need in the art for an optical component that overcomes the above-described shortcomings of the prior art. 
In particular, there is a need for devices which increase the coupling efficiency between weakly-confining and strongly-confining waveguides, and which reduce insertion losses at such junctions by decreasing the scattering loss from the side walls of the input and output sections which is due to a shallow etch. The present invention is directed to a novel waveguide structure that provides, in a unitarily formed waveguide, weak photon confinement and strong photon confinement along a propagation direction defined by a core through the waveguide. In an embodiment of the present invention, an optical waveguide through which an optical signal may propagate in a propagation direction and along an optical path comprises a first waveguide section providing weak confinement of the optical signal in a direction generally transverse to the propagation direction and a second waveguide section providing strong confinement of the optical signal in all directions relative to the propagation direction. A tapered neck is provided between the first and said second waveguide sections and a core is defined through the first and second waveguide sections and the tapered neck, and through which the optical signal may propagate in the propagation direction. The present invention is also directed to a method of fabricating a waveguide having a weakly-confining waveguide section and a strongly-confining waveguide section optically coupled by a neck that simultaneously tapers in two directions. The invention accordingly comprises the features of construction, combination of elements, and arrangement of parts which will be exemplified in the disclosure herein. The scope of the invention will be indicated in the claims.
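The Δn-based distinction between strong and weak confinement described above can be sketched numerically. Note that the patent deliberately leaves "a particular amount" unspecified, so the 0.1 threshold below is purely an illustrative assumption, as are the example index values.

```python
# Illustrative sketch only: the text does not fix the delta-n threshold,
# so the 0.1 used here is an assumption for demonstration purposes.
STRONG_CONFINEMENT_DELTA_N = 0.1  # hypothetical threshold

def confinement(n_core: float, n_cladding: float,
                threshold: float = STRONG_CONFINEMENT_DELTA_N) -> str:
    """Classify a waveguide as strongly or weakly confining by delta-n."""
    delta_n = n_core - n_cladding
    return "strong" if delta_n >= threshold else "weak"

# Example index values (assumed, typical textbook numbers): a silicon core
# (n ~ 3.48) in silica cladding (n ~ 1.44) gives a large delta-n, while a
# doped-silica core differs from its cladding by only a fraction of a percent.
print(confinement(3.48, 1.44))     # strong
print(confinement(1.4457, 1.444))  # weak
```

The point of the sketch is simply that the classification depends on the index contrast, not on any one material system.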
is n? -73, -1/4, 0, 1 Factor 5*w**2 + 175*w - 1470. 5*(w - 7)*(w + 42) Suppose -2*f**2 + 54*f = 0. Calculate f. 0, 27 Solve j**3/6 + 5*j**2/6 - j = 0. -6, 0, 1 Factor -3*r**3 - 111*r**2 + 483*r - 369. -3*(r - 3)*(r - 1)*(r + 41) Factor 15*w**3/8 + 159*w**2/2 + 867*w/8 + 123/4. 3*(w + 1)*(w + 41)*(5*w + 2)/8 Factor -3*i**3 - 63*i**2 - 216*i - 204. -3*(i + 2)**2*(i + 17) Factor -2*q**3 + 12*q**2 + 320*q. -2*q*(q - 16)*(q + 10) Factor -c**5/4 - c**4/2 + c**3 + 2*c**2. -c**2*(c - 2)*(c + 2)**2/4 Determine h so that -2*h**2/19 - 136*h/19 - 2312/19 = 0. -34 Determine y, given that -45*y**3 - 23880*y**2 + 31900*y - 10640 = 0. -532, 2/3 Factor x**3/9 - 109*x**2/9 + 1085*x/3 - 3283. (x - 67)*(x - 21)**2/9 Solve -p**2 + 191*p = 0. 0, 191 Factor 4*s**4 - 248*s**3 + 1344*s**2. 4*s**2*(s - 56)*(s - 6) Factor z**2 + 132*z - 133. (z - 1)*(z + 133) Factor -3*n**2 - 396*n - 1536. -3*(n + 4)*(n + 128) What is t in -3*t**3 - 1803*t**2 - 269997*t + 271803 = 0? -301, 1 Suppose -w**4 + 8*w**3 - 5*w**2 - 14*w = 0. What is w? -1, 0, 2, 7 Factor 4*z**2 + 56*z - 828. 4*(z - 9)*(z + 23) Factor -2*m**3/13 + 56*m**2/13 + 14*m + 124/13. -2*(m - 31)*(m + 1)*(m + 2)/13 Factor -2*x**3 + 1914*x**2 - 3822*x + 1910. -2*(x - 955)*(x - 1)**2 Solve -v**2/2 + 11*v/2 + 513 = 0 for v. -27, 38 Solve -u**4/3 - 17*u**3/3 - 56*u**2/3 - 52*u/3 = 0 for u. -13, -2, 0 Factor -4*r**2 - 1880*r. -4*r*(r + 470) Suppose 2*w**2/13 - 1314*w/13 = 0. What is w? 0, 657 Solve -a**4/5 + 192*a**3/5 - 7943*a**2/5 - 25500*a + 135252/5 = 0. -13, 1, 102 Determine n so that n**4 + 9*n**3 - 31*n**2 - 81*n + 198 = 0. -11, -3, 2, 3 Let 4*b**3 - 544*b**2 + 5140*b - 12600 = 0. What is b? 5, 126 Find s such that 338*s**2/7 + 7176*s/7 + 38088/7 = 0. -138/13 Determine m, given that 2*m**5/3 + 6*m**4 - 14*m**3/3 - 46*m**2 - 36*m = 0. -9, -2, -1, 0, 3 Let -18*t**5 + 3066*t**4 - 127982*t**3 - 219474*t**2 - 103200*t - 14792 = 0. Calculate t. -1, -1/3, 86 Find c such that -4*c**3 - 44*c**2 - 136*c - 96 = 0. 
-6, -4, -1 Factor -4*w**2 + 132*w + 280. -4*(w - 35)*(w + 2) Factor -2*s**2 - 70*s + 72. -2*(s - 1)*(s + 36) Factor -i**2/10 + 18*i/5 + 41/2. -(i - 41)*(i + 5)/10 Find x, given that 2*x**4 + 12*x**3 - 174*x**2 + 128*x + 312 = 0. -13, -1, 2, 6 Factor z**4/3 + 14*z**3/3 - 16*z**2/3 - 14*z/3 + 5. (z - 1)**2*(z + 1)*(z + 15)/3 Factor -l**5/6 + 10*l**4/3 - 14*l**3 - 272*l**2/3 + 2176*l/3 - 1024. -(l - 8)**3*(l - 2)*(l + 6)/6 Solve -t**5 - 83*t**4 - 1582*t**3 + 6326*t**2 + 17071*t + 9245 = 0 for t. -43, -1, 5 Factor 4*p**2/5 - 3216*p/5 - 6448/5. 4*(p - 806)*(p + 2)/5 Factor -4*x**2/5 + 4174*x/5 - 2086/5. -2*(x - 1043)*(2*x - 1)/5 Suppose -16*k**3 - 2608*k**2 + 7932*k - 5976 = 0. What is k? -166, 3/2 Factor -272*s**2 + 300*s - 28. -4*(s - 1)*(68*s - 7) Find s such that -s**2/4 + 29*s/2 - 841/4 = 0. 29 What is k in -8*k**5/5 + 26*k**4/5 + 64*k**3/5 + 6*k**2 = 0? -1, -3/4, 0, 5 Determine v, given that -2*v**3/9 - 2*v**2/3 + 8*v/9 + 8/3 = 0. -3, -2, 2 Determine s, given that 2*s**2/9 + 352*s/9 - 712/9 = 0. -178, 2 Find z such that -14*z**4/3 + 178*z**3/3 + 107*z**2/6 - 13*z/2 = 0. -1/2, 0, 3/14, 13 Factor 4*g**4 + 44*g**3 + 180*g**2 + 324*g + 216. 4*(g + 2)*(g + 3)**3 Factor -2*l**5/11 - 8*l**4/11 - 2*l**3/11 + 20*l**2/11 + 8*l/11 - 16/11. -2*(l - 1)**2*(l + 2)**3/11 Solve 5*d**2 + 115*d + 450 = 0 for d. -18, -5 Factor -15*z**4 - 18*z**3 + 9*z**2 + 12*z. -3*z*(z + 1)**2*(5*z - 4) Determine p, given that -2*p**4 - 14*p**3 - 34*p**2 - 34*p - 12 = 0. -3, -2, -1 Factor 2*g**4 + 104*g**3 + 1350*g**2 - 104*g - 1352. 2*(g - 1)*(g + 1)*(g + 26)**2 Factor -g**3/7 - 43*g**2/7 - 83*g/7 - 41/7. -(g + 1)**2*(g + 41)/7 Factor -4*x**2/3 + 4*x/3. -4*x*(x - 1)/3 Solve 2*o**5/9 - 14*o**4/9 + 22*o**3/9 - 10*o**2/9 = 0 for o. 0, 1, 5 Factor -m**4 - 24*m**3 - 69*m**2 + 214*m - 120. -(m - 1)**2*(m + 6)*(m + 20) Factor -2*g**5/7 - 16*g**4/7 + 40*g**3/7. -2*g**3*(g - 2)*(g + 10)/7 Factor -5*l**4 + 5980*l**3 + 5985*l**2. 
-5*l**2*(l - 1197)*(l + 1) Solve -3*k**4 - 468*k**3 - 21126*k**2 - 217260*k - 196599 = 0 for k. -71, -13, -1 Factor -g**5/5 - 9*g**4/5 - 29*g**3/5 - 43*g**2/5 - 6*g - 8/5. -(g + 1)**3*(g + 2)*(g + 4)/5 Let 2*o**4/7 - 16*o**3/7 - 68*o**2/7 + 16*o/7 + 66/7 = 0. Calculate o. -3, -1, 1, 11 Factor -4*y**2 + 33424*y - 69822736. -4*(y - 4178)**2 Let -3*p**2 - 147*p - 540 = 0. Calculate p. -45, -4 What is g in g**4/5 + 6*g**3 + 324*g**2/5 + 1458*g/5 + 2187/5 = 0? -9, -3 Factor 4*b**3 - 108*b**2 - 496*b. 4*b*(b - 31)*(b + 4) Determine u, given that 1372*u**3/3 - 490*u**2 + 175*u - 125/6 = 0. 5/14 Suppose 5*i**3 + 70*i**2 + 285*i + 360 = 0. What is i? -8, -3 Factor -3*w**4 + 45*w**3 + 54*w**2 - 96*w. -3*w*(w - 16)*(w - 1)*(w + 2) Let 65*y**5 + 306*y**4 - 652*y**3/5 - 934*y**2/5 - 249*y/5 - 4 = 0. Calculate y. -5, -4/13, -1/5, 1 What is i in -2*i**2/19 + 54*i/19 + 56/19 = 0? -1, 28 Solve 4*m**5/11 + 2*m**4 - 180*m**2/11 - 324*m/11 - 162/11 = 0. -3, -3/2, -1, 3 Factor -2*s**4 - 46*s**3 + 174*s**2 + 478*s + 260. -2*(s - 5)*(s + 1)**2*(s + 26) Let -27*r**5/5 + 711*r**4/5 + 591*r**3/5 - 2763*r**2/5 - 1932*r/5 - 324/5 = 0. Calculate r. -2, -1/3, 2, 27 Determine y, given that 5*y**3 - 15*y**2 = 0. 0, 3 Factor 2*x**3/15 - 112*x**2 + 31360*x - 8780800/3. 2*(x - 280)**3/15 Determine w, given that w**3/8 - 3*w**2/2 + 9*w/2 = 0. 0, 6 Factor 2*s**3/7 + 20*s**2/7 + 2*s - 36/7. 2*(s - 1)*(s + 2)*(s + 9)/7 Let -7*o**5/3 - 128*o**4/3 - 617*o**3/3 - 628*o**2/3 - 44*o = 0. Calculate o. -11, -6, -1, -2/7, 0 Solve 4*u**3 - 8*u**2 - 32*u = 0. -2, 0, 4 Factor -25*n**3 + 315*n**2 + 480*n + 140. -5*(n - 14)*(n + 1)*(5*n + 2) Solve 3*a**4 + 7884*a**3 + 7769682*a**2 + 3403120716*a + 558962577603 = 0. -657 What is p in 2*p**3/3 + 100*p**2/3 = 0? -50, 0 Factor -20*l**3 - 12*l**2 + 608*l - 240. -4*(l - 5)*(l + 6)*(5*l - 2) Factor 36*s**4 + 2316*s**3 + 35269*s**2 - 63690*s + 27225. (s + 33)**2*(6*s - 5)**2 What is b in -2*b**4 - 10*b**3 + 94*b**2 - 78*b - 180 = 0? 
-10, -1, 3 Find a, given that 3*a**3 - 78*a**2 + 333*a - 378 = 0. 2, 3, 21 Solve 3*q**4/8 + 261*q**3/2 - 1047*q**2/8 = 0. -349, 0, 1 Factor 5*s**4 - 15*s**3 - 35*s**2 + 75*s + 90. 5*(s - 3)**2*(s + 1)*(s + 2) Solve -4*s**5 - 32*s**4 - 16*s**3 + 88*s**2 + 20*s - 56 = 0 for s. -7, -2, -1, 1 Factor 25*j**2 - 690*j + 1040. 5*(j - 26)*(5*j - 8) Factor -5*k**2 + 110*k - 105. -5*(k - 21)*(k - 1) Find q, given that -q**4/2 - 116*q**3 - 6843*q**2 - 13340*q - 13225/2 = 0. -115, -1 Factor z**5/7 + 18*z**4/7 + 97*z**3/7 + 144*z**2/7 + 64*z/7. z*(z + 1)**2*(z + 8)**2/7 Factor 54*w**3 - 15*w**2/2 - 3*w/2. 3*w*(4*w - 1)*(9*w + 1)/2 Factor 5*p**2 - 150*p + 945. 5*(p - 21)*(p - 9) Factor -4*z**4 + 56*z**3 + 64*z**2 - 56*z - 60. -4*(z - 15)*(z - 1)*(z + 1)**2 Solve 16*x**5/7 + 148*x**4/7 - 332*x**3/7 - 236*x**2/7 + 316*x/7 + 88/7 = 0 for x. -11, -1, -1/4, 1, 2 Let -t**2 + 20*t = 0. Calculate t. 0, 20 Find g, given that -g**5/3 + g**4/3 + g**3/3 - g**2/3 = 0. -1, 0, 1 What is r in -33157124*r**4/3 - 8214752*r**3/3 - 148512*r**2 - 9088*r/3 - 64/3 = 0? -2/11, -2/91 Suppose 4*p**2 + 72*p - 160 = 0. What is p? -20, 2 Factor -4*d**2 + 276*d + 864. -4*(d - 72)*(d + 3) Solve 2*s**5/13 + 6*s**4/13 - 74*s**3/13 - 30*s**2 - 504*s/13 = 0. -4, -3, 0, 7 Factor 2*v**2/11 + 200*v/11 + 768/11. 2*(v + 4)*(v + 96)/11 Factor 2*w**4/7 + 44*w**3/7 + 80*w**2/7. 2*w**2*(w + 2)*(w + 20)/7 Factor -3*m**3/7 + 12*m**2/7 + 36*m/7. -3*m*(m - 6)*(m + 2)/7 Determine q so that -2*q**5 - 18*q**4 - 40*q**3 - 24*q**2 = 0. -6, -2, -1, 0 Determine g, given that 12*g**5 - 68*g**4 - 684*g**3 - 220*g**2 = 0. -5, -1/3, 0, 11 Suppose 3*i**3/2 - 23*i**2/4 - i = 0. What is i? -1/6, 0, 4 Solve -4*o**3 + 28128*o**2 - 65932032*o + 51514894336 = 0 for o. 2344 Factor 4*z**4 - 20*z**3 + 16*z**2. 4*z**2*(z - 4)*(z - 1) Factor -10*p**3 + 65*p**2 - 110*p + 40. -5*(p - 4)*(p - 2)*(2*p - 1) Factor t**5 + 18*t**4 + 108*t**3 + 216*t**2. t**2*(t + 6)**3 Determine a so that -a**5/5 + 9*a**4/5 + a**3/5 - 9*a**2/5 = 0. 
-1, 0, 1, 9 Let -126025*s**4/2 - 183535*s**3/2 - 30225*s**2 - 1490*s - 20 = 0. What is s? -1, -2/5, -2/71 Factor y**3/9 + 304*y**2/9 + 23405*y/9 + 45602/9. (y + 2)*(y + 151)**2/9 Determine h, given that 2*h**2/7 - 162*h + 1132/7 = 0. 1, 566 Determine i, given that i**5 - 20*i**4 + 48*i**3 + 62*i**2 - 193*i + 102 = 0. -2, 1, 3, 17 Suppose -10*n**4 - 9442*n**3 - 9408*n**2/5 + 60
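As a quick sanity check (not part of the original problem set), one of the factorizations above — `Factor -3*n**2 - 396*n - 1536` with answer `-3*(n + 4)*(n + 128)` — can be verified mechanically by comparing the two forms at several points:

```python
# Sanity check for: Factor -3*n**2 - 396*n - 1536  ->  -3*(n + 4)*(n + 128)
def expanded(n):
    return -3 * n**2 - 396 * n - 1536

def factored(n):
    return -3 * (n + 4) * (n + 128)

# Two degree-2 polynomials that agree at 3 or more points are identical,
# so checking a handful of integer values proves the factorization.
for n in range(-5, 6):
    assert expanded(n) == factored(n)

# The roots of the factored form are where each linear factor vanishes.
assert expanded(-4) == 0 and expanded(-128) == 0
```

The same point-evaluation trick works for any of the polynomial identities listed, as long as more points are checked than the degree of the polynomial.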
Q: How to go about freeing an HTML5 video from memory? I have a page that uses a lot of videos, and I would like to clear the videos from memory when they are paused, but I am unsure how to do it. I tried using: function pausevideo1(){ var video = document.getElementById("video1"); video.pause(); video.src ="" } but this clears the video from the page entirely so it cannot be played again. Is it possible to clear a video when it's paused and reload it again when it is activated? A: Before you clear the src of the video, save the src in the element's dataset, and retrieve it when the user clicks play: function pausevideo1(){ var video = document.getElementById("video1"); video.pause(); video.dataset.vidSrc = video.src; // save src video.src =""; } function loadVideo(id) { var video = document.getElementById(id); video.src = video.dataset.vidSrc; // retrieve src } Note: dataset is supported in IE > 10. If you need a solution that is compatible with old browsers, then you can use a similar solution, but use setAttribute and getAttribute to save the src into the element. 
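For older browsers without `dataset` support, the setAttribute/getAttribute variant mentioned at the end can be sketched like this. The helpers take the element directly (in a page you would pass `document.getElementById("video1")`), and the `data-vid-src` attribute name is an arbitrary choice:

```javascript
// Fallback sketch using setAttribute/getAttribute instead of dataset.
// Pass in the <video> element, e.g. pauseAndUnload(document.getElementById("video1")).
function pauseAndUnload(video) {
  video.pause();
  video.setAttribute("data-vid-src", video.src); // save src
  video.src = ""; // free the video buffer for release
}

function reloadVideo(video) {
  video.src = video.getAttribute("data-vid-src"); // restore src
}
```

Note that clearing `src` only makes the buffered data eligible for release; browsers differ in when the decoded data is actually freed.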
Q: "I made my own altcoin" The Bitcoin source is easy to fork, and people can make their own altcoins with relative ease. This site regularly sees questions from people who are having trouble with their custom coins, such as https://bitcoin.stackexchange.com/questions/21731/my-scrypt-altcoin-has-no-mainnet-hashrate. My feeling is that such questions should be closed, as they are effectively asking the community to debug code that they can't see. I don't see how anyone could answer except to make wild guesses about what might be wrong, or to just say "Fire up gdb". But I'd like to see if there's a community consensus on this. If so, this meta question could become the canonical place to point posters of such questions, to explain why their question is being closed. Edit: If nothing else, I propose a tag for such questions. Perhaps altcoin-development? A: These are a bit like the finance questions, professional accountants are probably not going to understand the currency sufficiently to be of use. I say that in most cases, because you can go out of your way to make the ramifications of your question clear, without the reader needing deep knowledge of how crypto currencies work. Analogies are great for this, but I digress. I closed the question you linked as unclear, because it is. There's not nearly enough information there to answer it. Any time debuggers / lints / segfaults / etc come in to play, one might immediately think of Stack Overflow but .. like accountants, do experts there understand the currency enough to sufficiently answer a well asked programming question? I'd say, if answerable, and obvious that a deep understanding of the currency sets the stage for the kind of expert that could answer it, they're probably okay here. But I think that would be rare - 'go find your debugger' is a strong indication of a poor question both here and on Stack Overflow.
Monday, January 17, 2011 Manga Monday: Wow, your friend really IS Jumbo! Slice-of-life comics often have a wide appeal. So much so, in fact, that it's surprising that the smaller publishers are just now realizing this and starting to bring them over to the U.S. Kiyohiko Azuma, however, is a known author here, with his anime and manga series Azumanga Daioh being a staple of college anime clubs for years. So it makes perfect sense that Yen Press would rescue the license for Yotsuba&! and continue to bring it out for American audiences. Yotsuba&! (pronounced Yot-su-ba-to and meaning Yotsuba and...!) Volume 1 starts with the introduction of our titular character. A five-year-old girl with a vague resemblance to a four leaf clover, she's recently been adopted by Koiwai, whom she calls 'dad,' and the two of them have just moved to town. Yotsuba's origins are a little bit vague, as she says she lived with her grandparents on an island 'to the left.' From that sketchy backstory comes one of the most charming, naïve and just plain odd little girls in manga. One has to wonder just what island she came from that she doesn't know what doorbells, air conditioners, or department stores are, and yet speaks Japanese perfectly well and reads at an age-appropriate level. Beyond that oddness, though, is the wonder of being a young child. Somewhere between preschool and adulthood, things like playing in the rain tend to lose their charm. And that's a shame. Along the way, we meet a slew of recurring characters from the neighborhood. The Ayase sisters take Yotsuba in as either another sister or perhaps as a pet, depending on the personality of the sister. Their mother too joins in, becoming something of a surrogate mom to Yotsuba. We also get to meet Jumbo, who probably isn't all that tall by Western standards, but leaves people in shock and awe of his 'Jumbo-ness' at first meeting. All in all, Yotsuba&! has a certain 'Andy Griffith Show' feel to the neighborhood. 
The people in Yotsuba's life are essentially good people, and even the ones who tend to play jokes aren't mean-spirited about it. And that makes for a nice, reliably sweet comic suitable for all ages.

Highs: A five-year-old's interpretation of global warming

Lows: Of course, it's a manga, so Jumbo has a crush on the eldest Ayase sister
Q: Custom Options Value of Views Grouped Filter? How do I change the select values of a Views grouped filter? By default, they are 1, 2, 3...

Default:

<select>
  <option value="all">all</option>
  <option value="1">Group A</option>
  <option value="2">Group B</option>
</select>

Desired result:

<select>
  <option value="all">all</option>
  <option value="mycustomname">Group A</option>
  <option value="mycustomname2">Group B</option>
</select>

A: I just did something similar for some data I'm exposing in Views for my custom module (it's kinda the same process no matter where Views is getting its data). To do this in the most efficient/easy way possible, you should create your own filter handler. For a really simple example:

<?php

class custom_handler_filter_countries extends views_handler_filter_string {

  /**
   * Shortcut to display the exposed options form.
   */
  function value_form(&$form, &$form_state) {
    $form['value'] = array(
      '#type' => 'select',
      '#title' => t('Countries'),
      '#options' => function_that_returns_options_you_want(),
    );
    return $form;
  }

}
?>

Then, in your module's .info file, add the file containing this class (in this example, 'custom_handler_filter_countries.inc') to the .info file's files array (files[] = includes/custom_handler_filter_countries.inc).

To make your particular view use this views handler instead of the default handler, you should be able to change the handler in hook_views_pre_view(), I think. (On my particular site, since I defined the views data table myself, I just set my custom handler for fields that needed it. For your site, since the normal handler is already defined by Address Field, you need to override the filter handler.) This discussion may also have a possible solution: Custom Options Value of Views Grouped Filter?
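To make the handler swap concrete, here is a hedged, untested sketch using hook_views_data_alter() rather than hook_views_pre_view(). The table and field keys below ('field_data_field_address', 'field_address_country') are placeholders invented for illustration, as is the module name 'mymodule'; inspect your own field's keys in the Views data (for example via views_fetch_data()) before relying on anything like this:

<?php

/**
 * Implements hook_views_data_alter().
 *
 * Point the field's filter at the custom handler class defined above.
 * The table/field keys here are hypothetical examples, not the real
 * names Address Field exposes on your site.
 */
function mymodule_views_data_alter(&$data) {
  if (isset($data['field_data_field_address']['field_address_country'])) {
    $data['field_data_field_address']['field_address_country']['filter']['handler'] = 'custom_handler_filter_countries';
  }
}
?>

After adding this, clear caches so Views re-reads the data definitions and picks up the new handler.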
(The following is the prepared testimony that Nick Hanauer delivered before the wage board in New York City on Monday, June 15th. Yesterday, the wage board embraced Hanauer’s suggestions, recommending a $15 minimum wage in New York City by 2018 and throughout New York state by 2021.) I’d like to start with a quote that may be familiar to you: From our perspective, raising the minimum wage is a job killer…If the minimum wage were increased, there would be price inflation for consumers or we would likely employ fewer people. —Domino’s Pizza CEO David Brandon. Raising the wage above $5.15 is a “job killer” at Domino’s? According to their 10-K filings, Domino’s and their franchisees currently employ 220,000 people—an increase of more than 70,000 (almost 30%) since 2004. “When wages go up, employment goes down.” This so-called “theory” is presented by industry and the economists we employ, as if it is an immutable law of physics describing the real world. The focus of my testimony this morning is to show that it is not. It’s not just that this claim isn’t always true. Or that it isn’t even usually true. It’s more or less never true. The claim that when wages rise, employment shrinks does not describe how the real world works. It is a scam and an intimidation tactic. The only thing true about this claim is that if business owners like me can get workers to believe it is true, that will be very advantageous to business owners like me. Which is why we say it again and again and again, even though it’s not true. It is really just a polite way of saying “I am rich, you are poor. I prefer to keep it that way.” Saying that if wages go up, the economic sky will fall is what I call “Chicken Little Economics.” According to the U.S. 
Department of Labor, “a review of 64 studies on minimum wage increases found no discernible effect on employment.” And contrary to popular belief, relatively large minimum-wage hikes like those recently passed in Seattle, San Francisco, and Los Angeles are not unprecedented. For example, the federal minimum wage jumped 88% in one year, from 40 cents an hour in 1949 to 75 cents in 1950. Yet despite the usual warning from the Chicken Littles at the National Association of Manufacturers that the hike would prove “a reckless jolt to the economic system,” unemployment plummeted, from 5.9% in 1949 to 2.9% in 1953. Likewise, my home state of Washington raised the minimum wage for tipped workers by 85% between 1988 and 1990—yet over the following decade restaurant employment growth somehow managed to outpace the nation as a whole.

I live in Seattle, the first major city in the US to enact a $15 minimum wage. But a high minimum wage was not a departure for us or something new. Seattle already had the highest minimum wage in the country. Rather, $15 was a continuation of an economic strategy that already was allowing our city to outperform yours. Our current state minimum wage is $9.47—30% higher than the federal minimum. Seattle’s minimum wage is now $11.00, 52% higher than the national minimum. But we have no tip penalty in our state, so our tipped workers make $11 plus tips, more than five times the federal tipped minimum of $2.13, and more than twice the $5 still paid here in NY.

So, if the good people from the industry were right that a higher minimum wage kills jobs, then we should have no restaurants in Seattle, right? You would have to bring food and cooking equipment when you came to visit us in the hinterlands. How could it be otherwise, with these stratospherically high wages? But here’s a really odd thing. Not only do we still have some restaurants in Seattle, we have a lot of them. In fact, we have more of them per capita than even—wait for it—New York City. 
According to a Bloomberg analysis, of all major cities in the US, Seattle ranks second in restaurants per capita. New York is number four. Read it and weep, New York. OK, so surely the number one spot will be held by some low-wage paradise, right? Not hardly. The number one spot is San Francisco, the only place in America where restaurant workers are paid $12.25 an hour, even more than Seattle. Why? How can this be? They told us that high wages killed jobs!! And business! And the economy! Nonsense.

Seattle has more restaurants than New York because that’s how capitalism works. The fundamental law of capitalism is: when workers have more money, businesses have more customers, and need to hire more workers. In places where wages are high, business is good—particularly for restaurants. Let me say that another way. When restaurants pay restaurant workers enough so that even they can afford to eat in restaurants, that isn’t bad for the restaurant business—it’s great for it, despite what the good folks at the National Restaurant Association may tell you.

With the highest minimum wage in the country, my state somehow manages to outpace the rest of the country in small business job growth. According to the Paychex IHS Small Business Jobs Index, Washington, after leading for most of the last two years, is still number two in small business job creation. Why? Because a person earning $7.25 an hour, or $2.13 plus tips, isn’t eating in restaurants. Or visiting the hair salon. Or taking piano lessons. Or sending mom flowers on Mother’s Day.

So why the disconnect? Why do so many people—good people—claim that when wages go up it will be bad for employment and the economy? And yet, it never is. The answer is simple. From the point of view of the individual business owner, paying workers more is bad. Paying them less is good. But only from the point of view of the individual business owner—which is as far as most business owners think. Here’s how that thinking goes. 
I’ll run my business and pay poverty wages and make high profits. And hopefully, everyone else who runs a business will pay their workers well, so your workers have the money to buy what my company makes. But, sadly, my workers will not be able to reciprocate and buy the products your company makes. Your workers will pay taxes. Sadly, my workers won’t be able to afford taxes. In fact, they will need taxpayer-funded services like food stamps and Medicaid that your workers’ taxes will pay for.

So I ask you: who wouldn’t want that deal? But the problem with that deal is that it is both morally questionable and economically unsustainable. It’s what we call a free rider problem. Because while it is awesome if I can get you to go along with that deal, it won’t work out if everyone gets that deal. Because if every company owner paid every worker poverty wages, then who would buy the stuff? And who would pay the taxes?

All human endeavors depend on solving obvious collective action problems. Should we put our own fires out? Or should we have a fire department? Should we hack our way through the forest to visit and trade? Or should we build roads? Should only some businesses pay workers enough to support themselves without taxpayer assistance? Or should all businesses be required to do so? It’s pretty obvious to me that the latter is the only answer.

And it’s also really obvious why business people and industries who currently have this incredible deal—they pay poverty wages, while being supported by businesses who pay decent wages—want to keep that deal. Who wouldn’t? But wake up, New York. That is what is going on here. That is what is going on every place industry objects to paying workers fair wages. And it always has. Raising wages for fast food workers won’t reduce employment or harm business. The people saying it will are simply trying to scare you and intimidate their workers. 
In fact, paying your fast food workers decent wages will be great for those workers, great for business and great for New York’s taxpayers too. And it may finally give you New Yorkers a shot at having a restaurant industry as robust as the one we enjoy in the hinterlands like Seattle and San Francisco.
Edward William Carlson

Edward William Carlson was an American painter known specifically for his miniature portraits. He exhibited works at the Art Institute of Chicago, Arts Club of Chicago, Royal Academy (Kungliga Akademien för de fria konsterna) in Stockholm, Sweden, National Academy of Design in New York, Swedish Club of Chicago, and the Cincinnati Museum for Art, among others.

Childhood

Edward William Carlson (May 4, 1883 – July 26, 1932) was an American miniature portraitist. His parents were Swedish immigrants Minnie and John. Carlson spent most of his childhood in Chicago, Illinois, where his parents owned and operated the Englewood Home Laundry. At four years of age, circa 1887, Carlson fell ill with scarlet fever and, as a result, lost both his hearing and eventually his speech. Carlson was one of eight siblings, though two died young. His remaining brothers and sisters, of whom he was the oldest, were Enoch, Amanda, Esther, Arvid and John. Circa 1900 the Carlson family moved near Grovertown, Indiana, where they bought or leased a farm near those of his mother's brothers. At this time Edward Carlson was working as a farmhand; he was seventeen years old.

Adulthood

As an adolescent Carlson showed an aptitude for painting. Later, after working on the family farm in Indiana, he returned to Chicago and attended the School of the Art Institute of Chicago. Also attending the Art Institute was Eva Randolph Dorchester (August 28, 1880 – October 14, 1926) of Sherman, Texas, who had been deaf-mute since birth. Eva had been born in Kentucky. Her father, C. B. Dorchester, was a banker. Before attending the Art Institute (1898-1901), Eva had been a student at the Texas School for the Deaf in Austin, Travis County, Texas, from 1888 to 1900. Carlson and Eva met at the Art Institute and their relationship grew. Between 1907 and 1910 Carlson boarded at various residences in Chicago, probably while he was attending the School of the Art Institute of Chicago. 
By 1910 he was living with his uncle August Holmquist, his aunt Hanna, and his young cousins Alma, Ebba, Alice and Violet at their home at 2700 West 23rd Street, Chicago. Eva graduated from the School of the Art Institute of Chicago on June 23, 1911, having taken the three-year course in drawing, painting and sculpture. That same year, after visiting Edward's family in Indiana, Edward and Eva were married on Wednesday, October 4, 1911, in Hopkins, Texas. Afterward they returned to Chicago. At an exhibit at the Swedish Club of Chicago in 1912 Edward won the prize for miniature painting. On December 3, 1913 Eva gave birth to their daughter, Marjorie Nellie. In 1915 Carlson again exhibited some of his portraits at the Swedish Club of Chicago. "Among the miniature portraits there are six by Edward Carlson of Chicago, which are the pride of the exhibit." An accomplished miniature portraitist, Edward showed his work often at the Art Institute of Chicago and in other venues. In 1920 a number of his portraits were included in an exhibition of one hundred pieces by forty artists, which traveled first to New York and then to the cities of Stockholm, Göteborg (Gothenburg) and Malmö, Sweden. The art critic Elisabeth Luther Cary, the first full-time art critic for the New York Times, wrote, "The small group of miniatures by Ed. W. Carlson sets a neat period [to the end] of the exhibition. They are careful and expert in execution and show unremitting interest in essential character [of his subjects] which is the best gift Sweden has sent to the art of America." On October 14, 1926, at 46 years of age, Eva died of stomach cancer in her mother's home in Texas. Marjorie was twelve years old. In July of the following year, Edward and Marjorie took the train to the National Fraternal Society of the Deaf (NFSD) convention, attended by about 800 people, in Denver, Colorado. After the convention Edward and Marjorie headed west. "Ed. W. 
Carlson and 13-year-old daughter, Marjorie, left Denver for Spokane, Portland, California and Texas points [on] a two months' trip. It was very touching to see the tender care with which Carlson consoled his little girl [whose] mother died in Texas...." In 1929 Carlson was awarded first prize in the miniature category from the Swedish Club of Chicago. By 1930 Edward and Marjorie were boarders with the Hooper family, who were originally from Texas, at 7143 Evans Avenue, Chicago. Edward died nearly six years after the death of Eva, on Tuesday, July 26, 1932, after a long struggle with asthma, at his sister's home in Chikaming, Berrien County, Michigan, USA. He was forty-nine years of age. Marjorie was now eighteen years old.

Memberships

Carlson was a founding member of the Chicago Society of Miniature Painters, a member of the Society of Western Artists, and the Chicago Society of Artists.

Paintings

Paintings by Edward William Carlson include:

A Man
A Priest
A Study of Profile
Baby
Beatrice
Edward Hines, Jr.
Grace
Lady in Green
Lieutenant Barsanti
Little Nellie
Margaret
Miss Annie Page
Miss Ruth Larson
Miss S.
Mother
Mr. John Olson
Mr. William S. Taylor
Mrs. A. W. Loeb
Mrs. A. X. Schmitt
Mrs. Emil Wetten
Mrs. H.
Mrs. Mabel Sykes
Mrs. Manff and Two Children
My Daughter Marjorie
Portrait of a Boy in Uniform
Portrait of an Artist
Portrait of Arthur
Portrait of Elsie
Portrait of Esther
Portrait of Katherine Wilson
Portrait of Kathryn S.
Portrait of Mabel Sykes
Portrait of Miss E. H.
Portrait of Miss M.
Portrait of Miss R.
Portrait of Miss S.
Portrait of Mother
Portrait of Mr. C. S. Peterson
Portrait of Mr. J. F. B.
Portrait of Mrs. C. B. Dorchester
Portrait of Mrs. C. S. Terry
Portrait of Mrs. F. C. Dillard
Portrait of Mrs. S.
Portrait of Mrs. W.
Portrait of My Baby
Portrait of My Sister
Portrait of Reverend H.
Portrait of the Late Mr. L. T. W.
Rev. P. J. Hasenstat
Robert Winslow Winchell
Teddy Lindstrom
The Late Mr. Frederick Waskow
The Late Mr. Lindsay F. Woodcock
The Late Mr. 
Tom Randolph
Virginia

Footnotes

References

Category:American male painters
Category:Portrait miniaturists
Category:American portrait painters
Category:20th-century American painters
Category:Artists from Illinois
Category:Artists from Chicago
Category:1883 births
Category:1932 deaths
Category:People from Chicago
Category:American people of Swedish descent
# -*- coding: utf-8 -*-
"""
Single-Machine Model Parallel Best Practices
============================================
**Author**: `Shen Li <https://mrshenli.github.io/>`_

Model parallel is widely-used in distributed training techniques. Previous
posts have explained how to use
`DataParallel <https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html>`_
to train a neural network on multiple GPUs; this feature replicates the same
model to all GPUs, where each GPU consumes a different partition of the input
data. Although it can significantly accelerate the training process, it does
not work for some use cases where the model is too large to fit into a single
GPU. This post shows how to solve that problem by using **model parallel**,
which, in contrast to ``DataParallel``, splits a single model onto different
GPUs, rather than replicating the entire model on each GPU (to be concrete,
say a model ``m`` contains 10 layers: when using ``DataParallel``, each GPU
will have a replica of each of these 10 layers, whereas when using model
parallel on two GPUs, each GPU could host 5 layers).

The high-level idea of model parallel is to place different sub-networks of a
model onto different devices, and implement the ``forward`` method accordingly
to move intermediate outputs across devices. As only part of a model operates
on any individual device, a set of devices can collectively serve a larger
model. In this post, we will not try to construct huge models and squeeze them
into a limited number of GPUs. Instead, this post focuses on showing the idea
of model parallel. It is up to the readers to apply the ideas to real-world
applications.

.. note::

    For distributed model parallel training where a model spans multiple
    servers, please refer to
    `Getting Started With Distributed RPC Framework <rpc_tutorial.html>`__
    for examples and details.
Basic Usage
-----------
"""

######################################################################
# Let us start with a toy model that contains two linear layers. To run this
# model on two GPUs, simply put each linear layer on a different GPU, and move
# inputs and intermediate outputs to match the layer devices accordingly.
#

import torch
import torch.nn as nn
import torch.optim as optim


class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.net1 = torch.nn.Linear(10, 10).to('cuda:0')
        self.relu = torch.nn.ReLU()
        self.net2 = torch.nn.Linear(10, 5).to('cuda:1')

    def forward(self, x):
        x = self.relu(self.net1(x.to('cuda:0')))
        return self.net2(x.to('cuda:1'))

######################################################################
# Note that, the above ``ToyModel`` looks very similar to how one would
# implement it on a single GPU, except the five ``to(device)`` calls which
# place linear layers and tensors on proper devices. That is the only place in
# the model that requires changes. The ``backward()`` and ``torch.optim`` will
# automatically take care of gradients as if the model is on one GPU. You only
# need to make sure that the labels are on the same device as the outputs when
# calling the loss function.

model = ToyModel()
loss_fn = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)

optimizer.zero_grad()
outputs = model(torch.randn(20, 10))
labels = torch.randn(20, 5).to('cuda:1')
loss_fn(outputs, labels).backward()
optimizer.step()

######################################################################
# Apply Model Parallel to Existing Modules
# ----------------------------------------
#
# It is also possible to run an existing single-GPU module on multiple GPUs
# with just a few lines of changes. The code below shows how to decompose
# ``torchvision.models.resnet50()`` to two GPUs. The idea is to inherit from
# the existing ``ResNet`` module, and split the layers to two GPUs during
# construction.
# Then, override the ``forward`` method to stitch two
# sub-networks by moving the intermediate outputs accordingly.


from torchvision.models.resnet import ResNet, Bottleneck

num_classes = 1000


class ModelParallelResNet50(ResNet):
    def __init__(self, *args, **kwargs):
        super(ModelParallelResNet50, self).__init__(
            Bottleneck, [3, 4, 6, 3], num_classes=num_classes, *args, **kwargs)

        self.seq1 = nn.Sequential(
            self.conv1,
            self.bn1,
            self.relu,
            self.maxpool,
            self.layer1,
            self.layer2
        ).to('cuda:0')

        self.seq2 = nn.Sequential(
            self.layer3,
            self.layer4,
            self.avgpool,
        ).to('cuda:1')

        self.fc.to('cuda:1')

    def forward(self, x):
        x = self.seq2(self.seq1(x).to('cuda:1'))
        return self.fc(x.view(x.size(0), -1))

######################################################################
# The above implementation solves the problem for cases where the model is too
# large to fit into a single GPU. However, you might have already noticed that
# it will be slower than running it on a single GPU if your model fits. It is
# because, at any point in time, only one of the two GPUs is working, while
# the other one is sitting there doing nothing. The performance further
# deteriorates as the intermediate outputs need to be copied from ``cuda:0`` to
# ``cuda:1`` between ``layer2`` and ``layer3``.
#
# Let us run an experiment to get a more quantitative view of the execution
# time. In this experiment, we train ``ModelParallelResNet50`` and the existing
# ``torchvision.models.resnet50()`` by running random inputs and labels through
# them. After the training, the models will not produce any useful predictions,
# but we can get a reasonable understanding of the execution times.
import torchvision.models as models

num_batches = 3
batch_size = 120
image_w = 128
image_h = 128


def train(model):
    model.train(True)
    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(model.parameters(), lr=0.001)

    one_hot_indices = torch.LongTensor(batch_size) \
                           .random_(0, num_classes) \
                           .view(batch_size, 1)

    for _ in range(num_batches):
        # generate random inputs and labels
        inputs = torch.randn(batch_size, 3, image_w, image_h)
        labels = torch.zeros(batch_size, num_classes) \
                      .scatter_(1, one_hot_indices, 1)

        # run forward pass
        optimizer.zero_grad()
        outputs = model(inputs.to('cuda:0'))

        # run backward pass
        labels = labels.to(outputs.device)
        loss_fn(outputs, labels).backward()
        optimizer.step()

######################################################################
# The ``train(model)`` method above uses ``nn.MSELoss`` as the loss function,
# and ``optim.SGD`` as the optimizer. It mimics training on ``128 X 128``
# images which are organized into 3 batches where each batch contains 120
# images. Then, we use ``timeit`` to run the ``train(model)`` method 10 times
# and plot the execution times with standard deviations.

import matplotlib.pyplot as plt
plt.switch_backend('Agg')
import numpy as np
import timeit

num_repeat = 10

stmt = "train(model)"

setup = "model = ModelParallelResNet50()"
# globals arg is only available in Python 3.
# In Python 2, use the following
# import __builtin__
# __builtin__.__dict__.update(locals())
mp_run_times = timeit.repeat(
    stmt, setup, number=1, repeat=num_repeat, globals=globals())
mp_mean, mp_std = np.mean(mp_run_times), np.std(mp_run_times)

setup = "import torchvision.models as models;" + \
        "model = models.resnet50(num_classes=num_classes).to('cuda:0')"
rn_run_times = timeit.repeat(
    stmt, setup, number=1, repeat=num_repeat, globals=globals())
rn_mean, rn_std = np.mean(rn_run_times), np.std(rn_run_times)


def plot(means, stds, labels, fig_name):
    fig, ax = plt.subplots()
    ax.bar(np.arange(len(means)), means, yerr=stds,
           align='center', alpha=0.5, ecolor='red', capsize=10, width=0.6)
    ax.set_ylabel('ResNet50 Execution Time (Second)')
    ax.set_xticks(np.arange(len(means)))
    ax.set_xticklabels(labels)
    ax.yaxis.grid(True)
    plt.tight_layout()
    plt.savefig(fig_name)
    plt.close(fig)


plot([mp_mean, rn_mean],
     [mp_std, rn_std],
     ['Model Parallel', 'Single GPU'],
     'mp_vs_rn.png')

######################################################################
#
# .. figure:: /_static/img/model-parallel-images/mp_vs_rn.png
#    :alt:
#
# The result shows that the execution time of the model parallel implementation
# is ``4.02/3.75-1=7%`` longer than the existing single-GPU implementation. So
# we can conclude there is roughly 7% overhead in copying tensors back and
# forth across the GPUs. There is room for improvement, as we know one of the
# two GPUs is sitting idle throughout the execution. One option is to further
# divide each batch into a pipeline of splits, such that when one split reaches
# the second sub-network, the following split can be fed into the first
# sub-network. In this way, two consecutive splits can run concurrently on two
# GPUs.

######################################################################
# Speed Up by Pipelining Inputs
# -----------------------------
#
# In the following experiments, we further divide each 120-image batch into
# 20-image splits.
# As PyTorch launches CUDA operations asynchronously, the
# implementation does not need to spawn multiple threads to achieve
# concurrency.


class PipelineParallelResNet50(ModelParallelResNet50):
    def __init__(self, split_size=20, *args, **kwargs):
        super(PipelineParallelResNet50, self).__init__(*args, **kwargs)
        self.split_size = split_size

    def forward(self, x):
        splits = iter(x.split(self.split_size, dim=0))
        s_next = next(splits)
        s_prev = self.seq1(s_next).to('cuda:1')
        ret = []

        for s_next in splits:
            # A. s_prev runs on cuda:1
            s_prev = self.seq2(s_prev)
            ret.append(self.fc(s_prev.view(s_prev.size(0), -1)))

            # B. s_next runs on cuda:0, which can run concurrently with A
            s_prev = self.seq1(s_next).to('cuda:1')

        s_prev = self.seq2(s_prev)
        ret.append(self.fc(s_prev.view(s_prev.size(0), -1)))

        return torch.cat(ret)


setup = "model = PipelineParallelResNet50()"
pp_run_times = timeit.repeat(
    stmt, setup, number=1, repeat=num_repeat, globals=globals())
pp_mean, pp_std = np.mean(pp_run_times), np.std(pp_run_times)

plot([mp_mean, rn_mean, pp_mean],
     [mp_std, rn_std, pp_std],
     ['Model Parallel', 'Single GPU', 'Pipelining Model Parallel'],
     'mp_vs_rn_vs_pp.png')

######################################################################
# Please note, device-to-device tensor copy operations are synchronized on
# current streams on the source and the destination devices. If you create
# multiple streams, you have to make sure that copy operations are properly
# synchronized. Writing the source tensor or reading/writing the destination
# tensor before finishing the copy operation can lead to undefined behavior.
# The above implementation only uses default streams on both source and
# destination devices, hence it is not necessary to enforce additional
# synchronizations.
#
# .. figure:: /_static/img/model-parallel-images/mp_vs_rn_vs_pp.png
#    :alt:
#
# The experiment result shows that, pipelining inputs to model parallel
# ResNet50 speeds up the training process by roughly ``3.75/2.51-1=49%``.
# It is
# still quite far away from the ideal 100% speedup. As we have introduced a new
# parameter ``split_size`` in our pipeline parallel implementation, it is
# unclear how the new parameter affects the overall training time. Intuitively
# speaking, using a small ``split_size`` leads to many tiny CUDA kernel
# launches, while using a large ``split_size`` results in relatively long idle
# times during the first and last splits. Neither is optimal. There might be an
# optimal ``split_size`` configuration for this specific experiment. Let us try
# to find it by running experiments using several different ``split_size``
# values.

means = []
stds = []
split_sizes = [1, 3, 5, 8, 10, 12, 20, 40, 60]

for split_size in split_sizes:
    setup = "model = PipelineParallelResNet50(split_size=%d)" % split_size
    pp_run_times = timeit.repeat(
        stmt, setup, number=1, repeat=num_repeat, globals=globals())
    means.append(np.mean(pp_run_times))
    stds.append(np.std(pp_run_times))

fig, ax = plt.subplots()
ax.plot(split_sizes, means)
ax.errorbar(split_sizes, means, yerr=stds, ecolor='red', fmt='ro')
ax.set_ylabel('ResNet50 Execution Time (Second)')
ax.set_xlabel('Pipeline Split Size')
ax.set_xticks(split_sizes)
ax.yaxis.grid(True)
plt.tight_layout()
plt.savefig("split_size_tradeoff.png")
plt.close(fig)

######################################################################
#
# .. figure:: /_static/img/model-parallel-images/split_size_tradeoff.png
#    :alt:
#
# The result shows that setting ``split_size`` to 12 achieves the fastest
# training speed, which leads to a ``3.75/2.43-1=54%`` speedup. There are
# still opportunities to further accelerate the training process. For example,
# all operations on ``cuda:0`` are placed on its default stream. This means
# that computations on the next split cannot overlap with the copy operation of
# the previous split. However, as the previous and next splits are different
# tensors, there is no problem overlapping one's computation with the other
# one's copy.
# The
# implementation needs to use multiple streams on both GPUs, and different
# sub-network structures require different stream management strategies. As no
# general multi-stream solution works for all model parallel use cases, we will
# not discuss it in this tutorial.
#
# **Note:**
#
# This post shows several performance measurements. You might see different
# numbers when running the same code on your own machine, because the result
# depends on the underlying hardware and software. To get the best performance
# for your environment, a proper approach is to first generate the curve to
# figure out the best split size, and then use that split size to pipeline
# inputs.
#
Conspiracy theorist Alex Jones arrested for DWI in Texas Authorities in Texas say conspiracy theorist Alex Jones was arrested in Texas on a misdemeanor charge of driving while intoxicated AUSTIN, Texas -- Conspiracy theorist Alex Jones was arrested in Texas on a misdemeanor charge of driving while intoxicated, authorities said Tuesday. Jones was booked into an Austin jail shortly after midnight and released on bond a few hours later, Travis County Sheriff's Office spokeswoman Kristen Dark said. She said she had no further details and an arrest report was not immediately available. Jones is being sued in Austin by the parents of a 6-year-old victim of the 2012 Sandy Hook massacre who claim the Infowars host used his show to promote falsehoods that the shooting was a hoax. His attorney in that case did not immediately respond to a Tuesday message seeking comment about his arrest. Jones founded Infowars and produces his radio show in Austin. An article posted on his Infowars website said Jones discussed the arrest on his show Tuesday and claimed that his blood-alcohol level was under the legal limit of .08 percent.
Island County Sheriff's Office releases sketch of burglar

The Island County Sheriff's Office released this sketch Tuesday of the man suspected of breaking into a South Whidbey home on Dec. 21. — image credit: Island County Sheriff's Office

The Island County Sheriff's Office has released a sketch of the man who broke into a home on Kolia Place and assaulted a homeowner during a burglary last week. A police sketch artist met with the crime victims of the daylight break-in to create a drawing of the burglar. An initial report described the prowler as a thin white man, with long reddish hair in a ponytail, who was wearing a khaki-colored shirt, Dockers-style pants and a blue baseball cap at the time of the break-in. The burglar assaulted a resident who discovered the prowler in his home near South Whidbey State Park just after 10:30 a.m. Dec. 21. The prowler, who told the homeowner his name was "Jeff," escaped from the home after hitting the homeowner in the face and then ran off into the nearby woods. A police search for the intruder came up empty. Anyone with information on the burglary can contact Detective Mark Plumberg via the ICOM dispatch center at 360-679-9567.
/**
 *
 * WARNING! This file was autogenerated by:
 *  _   _ _   _ __   __
 * | | | | | | |\ \ / /
 * | | | | |_| | \ V /
 * | | | |  _  | /   \
 * | |_| | | | |/ /^\ \
 *  \___/\_| |_/\/   \/
 *
 * This file was autogenerated by UnrealHxGenerator using UHT definitions.
 * It only includes UPROPERTYs and UFUNCTIONs. Do not modify it!
 * In order to add more definitions, create or edit a type with the same name/package, but with an `_Extra` suffix
 **/
package unreal;

@:glueCppIncludes("Classes/Engine/SkeletalMeshLODSettings.h")
@:noCopy @:noEquals @:uextern @:ustruct extern class FSkeletalMeshLODGroupSettings {

  /** The optimization settings to use for the respective LOD level **/
  @:uproperty public var ReductionSettings : unreal.FSkeletalMeshOptimizationSettings;

  /** Pose which should be used to reskin vertex influences for which the bones will be removed in this LOD level, uses ref-pose by default **/
  @:uproperty public var BakePose : unreal.UAnimSequence;

  /** Weight of how much consider for BonesToPrioritize. 0 means nothing, and 1 means take all source **/
  @:uproperty public var WeightOfPrioritization : unreal.Float32;

  /** Bones which should be prioritized for the quality, this will be weighted toward keeping source data. **/
  @:uproperty public var BonesToPrioritize : unreal.TArray<unreal.FName>;

  /** Bones which should be removed from the skeleton for the LOD level **/
  @:uproperty public var BoneList : unreal.TArray<unreal.FBoneFilter>;

  /** Bones which should be removed from the skeleton for the LOD level **/
  @:uproperty public var BoneFilterActionOption : unreal.EBoneFilterActionOption;

  /** Used to avoid 'flickering' when on LOD boundary. Only taken into account when moving from complex->simple. **/
  @:uproperty public var LODHysteresis : unreal.Float32;

  /** The screen sizes to use for the respective LOD level **/
  @:uproperty public var ScreenSize : unreal.FPerPlatformFloat;

}
Identification of Poly(ADP-Ribose) Polymerase Macrodomain Inhibitors Using an AlphaScreen Protocol. Macrodomains recognize intracellular adenosine diphosphate (ADP)-ribosylation resulting in either removal of the modification or a protein interaction event. Research into compounds that modulate macrodomain functions could make important contributions. We investigated the interactions of all seven individual macrodomains of the human poly(ADP-ribose) polymerase (PARP) family members PARP9, PARP14, and PARP15 with five mono-ADP-ribosylated (automodified) ADP-ribosyltransferase domains using an AlphaScreen assay. Several mono-ADP-ribosylation-dependent interactions were identified, and they were found to be in the micromolar affinity range using surface plasmon resonance (SPR). We then focused on the interaction between PARP14 macrodomain-2 and the mono-ADP-ribosylated PARP10 catalytic domain, and probed a ~1500-compound diverse library for inhibitors of this interaction using AlphaScreen. Initial hit compounds were verified by concentration-response experiments using AlphaScreen and SPR, and they were tested against PARP14 macrodomain-2 and -3. Two initial hit compounds and one chemical analog each were further characterized using SPR and microscale thermophoresis. In conclusion, our results reveal novel macrodomain interactions and establish protocols for identification of inhibitors of such interactions.
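The screening workflow above relies on concentration-response experiments to verify hits. As a loose illustration only (not the study's actual data or analysis code), the sketch below generates a hypothetical four-parameter-logistic concentration-response curve of the kind an AlphaScreen readout might produce, then recovers the IC50 by log-linear interpolation at the half-maximal signal; every name and value here is invented for the example.

```python
import numpy as np

def four_pl(conc, bottom, top, ic50, hill):
    """4-parameter logistic: signal falls as the inhibitor blocks the interaction."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical AlphaScreen counts for a serial dilution (values invented for illustration)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])  # µM
signal = four_pl(conc, bottom=500.0, top=50000.0, ic50=2.0, hill=1.0)

# Estimate the IC50 by log-linear interpolation at the half-maximal signal
half = (signal.max() + signal.min()) / 2.0
i = int(np.argmax(signal < half))  # first dilution point below half-maximal signal
frac = (signal[i - 1] - half) / (signal[i - 1] - signal[i])
ic50_est = 10 ** (np.log10(conc[i - 1])
                  + frac * (np.log10(conc[i]) - np.log10(conc[i - 1])))
print(round(float(ic50_est), 2))
```

In practice one would fit all four parameters to replicate measurements (e.g. nonlinear least squares) rather than interpolate, but the interpolation shows why a half-maximal crossing between two dilution points pins down the IC50 on a log-concentration axis.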
Graveyardbride (Administrator, Posts: 5,582)

Post by Graveyardbride: Cemetery Vandalized, Infant's Body Pulled from Grave

Deputies in Clay County are trying to figure out who dug up an infant's grave at the Fowler Cemetery, located at 4770 Witch Hazel Road in Middleburg, Florida. The graveyard is privately owned. The family found several headstones vandalized, and the remains of Makayla Merriweather, who died before birth in May 2007, had been removed from the grave.

The child's uncle discovered the vandalism while he and his mother, Maude Burroughs Jackson, were visiting the graves of relatives. "I saw him crying," Jackson told reporters. "He said, 'I can't look at it! I can't look at it!' I said, 'Look at what? What's going on?' And he said, 'Makayla is on the ground.'"

Fowler Cemetery is sacred ground for Jackson's family. Her husband is buried there, as are many other black residents of Middleburg's Hill Top community. In addition to the disinterment, vandals overturned several headstones and shattered some of the monuments. "I watched so many people today pass by and just cry," Jackson continued.

At this point, authorities have not been able to determine whether the child's body was removed by the vandals or by an animal. A spokesperson explained the grave was too shallow to determine who, or what, may have pulled the tiny corpse from its resting place. "I just hope whoever did this will be brave enough to stand up and say, 'I made a mistake. I did it,'" Jackson added.
"The desecration of a grave is a second-degree felony and a personal offense to me as the Sheriff of our county," Sheriff Darryl Daniels said. "I will exhaust every resource to find out who perpetrated this crime and will follow this case to its conclusion in court."

Anyone with information is encouraged to contact the Clay County Sheriff's non-emergency line at (904) 264-6512.

Source: WJAX/WAWS, October 30, 2019.
Mortgage Fraud Ringleader Jailed for Scam Involving 70 Homes

Michael Anthony Prieskorn, 37, Ellendale, Minnesota, was sentenced on charges stemming from a mortgage fraud scheme that resulted in losses of at least $18 million for mortgage lenders. On May 10, 2012, United States District Court Judge Paul A. Magnuson sentenced the defendant for orchestrating the scheme, which involved the purchase of approximately 70 residential properties in Florida and Minnesota between December 2006 and April 2007. Prieskorn was sentenced to 72 months in prison on one count of conspiracy to commit wire fraud and one count of engaging in an illegal monetary transaction. He was indicted on January 20, 2010, and pleaded guilty on March 23, 2010.

As previously reported by Mortgage Fraud Blog, Prieskorn admitted he and others conspired to obtain mortgage loan proceeds by luring buyers to purchase properties. In return, Prieskorn promised the buyers $5,000 for every property purchased. He also promised to make all mortgage payments and pay all other bills associated with the properties for a specific term, after which he would sell the properties at no cost to the original buyers, or "investors." Prieskorn maintained that the mortgage loans were risk free to the investors, knowing all the while that the 20 investors were responsible for the loans.

Following the closing of these real estate transactions, many investors defaulted on their mortgage loans and were forced into short sales or foreclosure. Yet Prieskorn admitted receiving at least $1 million in gross receipts as a result of the scam. In pleading guilty, Prieskorn also admitted concealing from mortgage lenders that he temporarily deposited funds into the bank accounts of some investors to misrepresent the true financial status of those buyers, thereby inducing lender approval of the mortgage loans. He also concealed from the mortgage lenders that he paid the down payments and closing costs for the investors.
In furtherance of the scheme, Prieskorn transferred money, by wire, into investors' bank accounts and caused the faxing of fraudulent mortgage loan applications to potential mortgage lenders. He also caused lenders to make wire transfers of mortgage loan proceeds on related real estate transactions. Specific to the monetary transaction count, Prieskorn structured financial transactions to conceal that he was the recipient of funds from the fraud. Those transactions included a $225,000 transfer on May 7, 2007.

On February 8, 2011, Judge Magnuson sentenced Prieskorn's co-defendant Richard Matthew Laho, 55, Buffalo, Minnesota, to five years of probation on one count of mail fraud. He was also indicted on January 20, 2010, and pleaded guilty on July 8, 2010. In his plea agreement, Laho admitted that in March and April of 2007, he took part in the scheme by participating in a real estate purchase in Naples, Florida. In that transaction, the buyer was given $5,000 for purchasing the property and falsely told that all mortgage payments and other bills associated with the property would be paid for him. He also was told that the property eventually would be sold as an investment. Laho admitted misleading the lender into believing, however, that the buyer intended to be the true owner and resident of the home. The property eventually went into foreclosure, resulting in a loss to the mortgage lender of between $490,000 and $690,000.

"Mortgage fraud creates so much harm to individuals, businesses, and our economy, but today's sentencing is a strong reminder how serious our courts consider this criminal activity," said Kelly R. Jackson, Special Agent in Charge, IRS-Criminal Investigation (IRS-CID), St. Paul Field Office.
"IRS-CID is committed to 'following the money trail' to ensure that those who engage in these illegal activities are vigorously investigated and brought to justice."

This case was the result of an investigation by the Internal Revenue Service-Criminal Investigation Division, the Eagan Police Department, the Minnesota Department of Commerce, the U.S. Secret Service, the Minnesota Financial Crimes Task Force, and the Minnesota Bureau of Criminal Apprehension. It was prosecuted by Assistant U.S. Attorneys Tracy L. Perzel and Robert M. Lewis.

This law enforcement action is in part sponsored by the interagency Financial Fraud Enforcement Task Force. The task force was established to wage an aggressive, coordinated and proactive effort to investigate and prosecute financial crimes. It includes representatives from a broad range of federal agencies, regulatory authorities, inspectors general, and state and local law enforcement who, working together, bring to bear a powerful array of criminal and civil enforcement resources. The task force is working to improve efforts across the federal executive branch and, with state and local partners, investigate and prosecute significant financial crimes, ensure just and effective punishment for those who perpetrate financial crimes, combat discrimination in the lending and financial markets, and recover proceeds for victims of financial crimes.

Allison Tussey

Comment: The task force is a joke! He scammed $18 million that year; think that was the first time? He plea-bargained to 72 months on one count of mail fraud, and his buddy only got probation? A lot of people got screwed here, and I hope that guy gets screwed every day in jail. What a prick!

Legal Disclaimer. The information and notices contained on Mortgage Fraud Blog are intended to summarize recent developments in mortgage fraud cases and mortgage banking matters nationwide.
The posts on this site are presented as general research and information and are expressly not intended, and should not be regarded, as legal advice. Much of the information on this site concerns allegations made in civil lawsuits and in criminal indictments. All persons are presumed innocent until convicted of a crime. Readers who have particular questions about mortgage banking, mortgage fraud matters or who believe they require legal counsel should seek the advice of an attorney. The creators, editors and sponsors of Mortgage Fraud Blog do not intend to create a confidential relationship or an attorney-client relationship by communication via or arising from this site.
class Errors {
	static var instance : Errors;
	static public var dontlog : Bool;

	static public function get() {
		if (instance == null) {
			instance = new Errors();
		}
		return instance;
	}

	public function new() {
		callBack = null;
		doTrace = true;
		count = 0;
	}

	static public function report(text : String) {
		get().add(text);
		get().count++;
		#if js
		untyped console.error("Error: " + text);
		#else
		print(text);
		#end
		addToLog(text);
	}

	static public function warning(text : String) {
		get().add(text);
		#if js
		untyped console.warn("Warning: " + text);
		#else
		print(text);
		#end
	}

	static public function print(text : String) {
		if (!Errors.get().doTrace) {
			return;
		}
		#if flash
		try {
			var esc = StringTools.replace(text, "\\", '/');
			flash.external.ExternalInterface.call("console.log", esc);
		} catch (e : Dynamic) {
			trace(text);
		}
		#elseif (js && (flow_nodejs || nwjs))
		Util.println(text);
		#elseif js
		untyped console.log(text);
		#else
		Sys.println(text);
		#end
	}

	function add(text : String) {
		if (callBack != null) {
			// To prevent infinite recursion, we block out recursive callbacks
			// by clearing the callback for the duration of the call.
			var cb = callBack;
			callBack = null;
			cb(text);
			callBack = cb;
		}
	}

	static public function getCount() : Int {
		return get().count;
	}

	static public function resetCount() : Void {
		get().count = 0;
	}

	static function addToLog(m : String) {
		if (dontlog) return;
		#if sys
		if (logFile == null) {
			logFile = sys.io.File.append(".compile-errors");
			logFile.writeString(Date.now().toString() + "\n");
			logFile.writeString('neko flow.n ' + Sys.args().join(' ') + "\n");
		}
		logFile.writeString(m + "\n");
		#end
	}

	static public function closeErrorLog() {
		#if sys
		if (logFile != null) {
			logFile.writeString("\n");
			logFile.close();
		}
		#end
	}

	#if sys
	static var logFile : sys.io.FileOutput;
	#end

	public var callBack : String -> Void;
	public var doTrace : Bool;
	private var count : Int;
}
BFA Appointed Lead Counsel in Teva Pharmaceuticals

July 11, 2017

Judge Stefan Underhill of the United States District Court for the District of Connecticut appointed Ontario Teachers' Pension Plan Board ("Ontario Teachers") as Lead Plaintiff in Galmi v. Teva Pharmaceutical Industries Ltd., approving its choice of Bleichmar Fonti & Auld LLP ("BFA") as Lead Counsel. BFA most recently represented Ontario Teachers in In re Computer Sciences Corp. Securities Litigation in the United States District Court for the Eastern District of Virginia, achieving a $97.5 million settlement. At the time, it was the second-largest all-cash recovery in the Eastern District of Virginia and represented as much as 38% of recoverable damages at trial.
2000–01 PAOK F.C. season

In the 2000–01 season, PAOK F.C. competed in the Super League Greece, the Greek Cup and the UEFA Cup.

Players

Squad

Transfers

Players transferred in

Players transferred out

Pre-season

Competitions

Overview

Alpha Ethniki

League table

Results summary

Results by round

Matches

Greek Cup

First round

Group 4

Note: For the first time, the group phase was played over double matches, so each team played 10 matches. Because the phase began very early, in a period when teams would normally play preparation friendlies, the Hellenic Football Federation (EPO) allowed a maximum of 7 substitutions in the first 5 matches, something unusual in Greece, and very probably internationally, for matches of an official competition.

Second round

Quarter-finals

Semi-finals

Final

UEFA Cup

First round

PAOK won 6–4 on aggregate.

Second round

PAOK won 3–1 on aggregate.

Third round

PAOK lost 4–0 on aggregate.

Statistics

Squad statistics

[Table: squad statistics by position (Goalkeepers, Defenders, Midfielders, Forwards); table data not recovered]

Source: Match reports in competitive matches, uefa.com, epo.gr, rsssf.com

Goalscorers

Source: Match reports in competitive matches, uefa.com, epo.gr, rsssf.com

Category:PAOK FC seasons
Mike Riley (referee) Michael Riley (born 17 December 1964) is an ex-professional football referee, who has refereed matches in the English Football League, Premier League, and for FIFA. Riley currently serves as the general manager of the Professional Game Match Officials Limited. Career Riley was born in Leeds in West Yorkshire. He became a national Football League referee in 1994, having previously served five years on their assistant referees' list. He was later granted FIFA status in 1999 allowing him to officiate international fixtures. In 2002, Riley refereed the English FA Cup Final between Arsenal and Chelsea, which he later stated was "the highlight of my career". Riley took charge of the 2004 Football League Cup Final, between Bolton and Middlesbrough, in a game that saw all three goals scored within the first 25 minutes. He awarded a penalty to Middlesbrough after seven minutes and cautioned five players during the course of the game. Riley also headed England's refereeing team alongside assistants Philip Sharp and Glenn Turner at the UEFA Euro 2004 finals. Riley refereed the controversial 2004 match between Manchester United and Arsenal, also known as the Battle of the Buffet, with the result ending Arsenal's record-breaking 49 match unbeaten run. Riley officiated the Football League Championship playoff final between West Ham United and Preston North End in 2005. West Ham ran out 1–0 victors, seeing them promoted to the FA Premier League. Riley was invited to go to Hong Kong to take charge of the 2006–07 Hong Kong FA Cup final between South China and Happy Valley in 2007. South China won by 3–1, allowing them to achieve a treble in local competitions (First Division League, Senior Shield and FA Cup). Riley gave three penalty kicks in the match, two for South China and one for Happy Valley. Mike Riley was appointed manager of the Professional Game Match Officials Board (PGMOB) in June 2009, replacing Keith Hackett. 
This effectively ended his career in refereeing matches. Career statistics References External links Mike Riley Referee Statistics at soccerbase.com Category:English football referees Category:Living people Category:1964 births Category:Sportspeople from Leeds Category:Premier League referees Category:FA Cup Final referees Category:UEFA Euro 2004 referees
/*
 *  linux/arch/alpha/mm/init.c
 *
 *  Copyright (C) 1995  Linus Torvalds
 */

/* 2.3.x zone allocator, 1999 Andrea Arcangeli <andrea@suse.de> */

#include <linux/pagemap.h>
#include <linux/signal.h>
#include <linux/sched.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/ptrace.h>
#include <linux/mman.h>
#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/init.h>
#include <linux/bootmem.h>	/* max_low_pfn */
#include <linux/vmalloc.h>

#include <asm/system.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/hwrpb.h>
#include <asm/dma.h>
#include <asm/mmu_context.h>
#include <asm/console.h>
#include <asm/tlb.h>

DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);

extern void die_if_kernel(char *, struct pt_regs *, long);

static struct pcb_struct original_pcb;

pgd_t *
pgd_alloc(struct mm_struct *mm)
{
	pgd_t *ret, *init;

	ret = (pgd_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
	init = pgd_offset(&init_mm, 0UL);
	if (ret) {
#ifdef CONFIG_ALPHA_LARGE_VMALLOC
		memcpy (ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
			(PTRS_PER_PGD - USER_PTRS_PER_PGD - 1)*sizeof(pgd_t));
#else
		pgd_val(ret[PTRS_PER_PGD-2]) = pgd_val(init[PTRS_PER_PGD-2]);
#endif

		/* The last PGD entry is the VPTB self-map. */
		pgd_val(ret[PTRS_PER_PGD-1]) =
			pte_val(mk_pte(virt_to_page(ret), PAGE_KERNEL));
	}
	return ret;
}

pte_t *
pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address)
{
	pte_t *pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO);
	return pte;
}

/*
 * BAD_PAGE is the page that is used for page faults when linux
 * is out-of-memory. Older versions of linux just did a
 * do_exit(), but using this instead means there is less risk
 * for a process dying in kernel mode, possibly leaving an inode
 * unused etc..
 *
 * BAD_PAGETABLE is the accompanying page-table: it is initialized
 * to point to BAD_PAGE entries.
 *
 * ZERO_PAGE is a special page that is used for zero-initialized
 * data and COW.
 */
pmd_t *
__bad_pagetable(void)
{
	memset((void *) EMPTY_PGT, 0, PAGE_SIZE);
	return (pmd_t *) EMPTY_PGT;
}

pte_t
__bad_page(void)
{
	memset((void *) EMPTY_PGE, 0, PAGE_SIZE);
	return pte_mkdirty(mk_pte(virt_to_page(EMPTY_PGE), PAGE_SHARED));
}

#ifndef CONFIG_DISCONTIGMEM
void
show_mem(void)
{
	long i, free = 0, total = 0, reserved = 0;
	long shared = 0, cached = 0;

	printk("\nMem-info:\n");
	show_free_areas();
	printk("Free swap:       %6ldkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
	i = max_mapnr;
	while (i-- > 0) {
		total++;
		if (PageReserved(mem_map+i))
			reserved++;
		else if (PageSwapCache(mem_map+i))
			cached++;
		else if (!page_count(mem_map+i))
			free++;
		else
			shared += page_count(mem_map + i) - 1;
	}
	printk("%ld pages of RAM\n", total);
	printk("%ld free pages\n", free);
	printk("%ld reserved pages\n", reserved);
	printk("%ld pages shared\n", shared);
	printk("%ld pages swap cached\n", cached);
}
#endif

static inline unsigned long
load_PCB(struct pcb_struct *pcb)
{
	register unsigned long sp __asm__("$30");
	pcb->ksp = sp;
	return __reload_thread(pcb);
}

/* Set up initial PCB, VPTB, and other such nicities. */

static inline void
switch_to_system_map(void)
{
	unsigned long newptbr;
	unsigned long original_pcb_ptr;

	/* Initialize the kernel's page tables.  Linux puts the vptb in
	   the last slot of the L1 page table. */
	memset(swapper_pg_dir, 0, PAGE_SIZE);
	newptbr = ((unsigned long) swapper_pg_dir - PAGE_OFFSET) >> PAGE_SHIFT;
	pgd_val(swapper_pg_dir[1023]) =
		(newptbr << 32) | pgprot_val(PAGE_KERNEL);

	/* Set the vptb.  This is often done by the bootloader, but
	   shouldn't be required. */
	if (hwrpb->vptb != 0xfffffffe00000000UL) {
		wrvptptr(0xfffffffe00000000UL);
		hwrpb->vptb = 0xfffffffe00000000UL;
		hwrpb_update_checksum(hwrpb);
	}

	/* Also set up the real kernel PCB while we're at it. */
	init_thread_info.pcb.ptbr = newptbr;
	init_thread_info.pcb.flags = 1;	/* set FEN, clear everything else */
	original_pcb_ptr = load_PCB(&init_thread_info.pcb);
	tbia();

	/* Save off the contents of the original PCB so that we can
	   restore the original console's page tables for a clean reboot.

	   Note that the PCB is supposed to be a physical address, but
	   since KSEG values also happen to work, folks get confused.
	   Check this here. */

	if (original_pcb_ptr < PAGE_OFFSET) {
		original_pcb_ptr = (unsigned long)
			phys_to_virt(original_pcb_ptr);
	}
	original_pcb = *(struct pcb_struct *) original_pcb_ptr;
}

int callback_init_done;

void * __init
callback_init(void * kernel_end)
{
	struct crb_struct * crb;
	pgd_t *pgd;
	pmd_t *pmd;
	void *two_pages;

	/* Starting at the HWRPB, locate the CRB. */
	crb = (struct crb_struct *)((char *)hwrpb + hwrpb->crb_offset);

	if (alpha_using_srm) {
		/* Tell the console whither it is to be remapped. */
		if (srm_fixup(VMALLOC_START, (unsigned long)hwrpb))
			__halt();		/* "We're boned."  --Bender */

		/* Edit the procedure descriptors for DISPATCH and FIXUP. */
		crb->dispatch_va = (struct procdesc_struct *)
			(VMALLOC_START + (unsigned long)crb->dispatch_va
			 - crb->map[0].va);
		crb->fixup_va = (struct procdesc_struct *)
			(VMALLOC_START + (unsigned long)crb->fixup_va
			 - crb->map[0].va);
	}

	switch_to_system_map();

	/* Allocate one PGD and one PMD.  In the case of SRM, we'll need
	   these to actually remap the console.  There is an assumption
	   here that only one of each is needed, and this allows for 8MB.
	   On systems with larger consoles, additional pages will be
	   allocated as needed during the mapping process.

	   In the case of not SRM, but not CONFIG_ALPHA_LARGE_VMALLOC,
	   we need to allocate the PGD we use for vmalloc before we start
	   forking other tasks. */

	two_pages = (void *)
		(((unsigned long)kernel_end + ~PAGE_MASK) & PAGE_MASK);
	kernel_end = two_pages + 2*PAGE_SIZE;
	memset(two_pages, 0, 2*PAGE_SIZE);

	pgd = pgd_offset_k(VMALLOC_START);
	pgd_set(pgd, (pmd_t *)two_pages);
	pmd = pmd_offset(pgd, VMALLOC_START);
	pmd_set(pmd, (pte_t *)(two_pages + PAGE_SIZE));

	if (alpha_using_srm) {
		static struct vm_struct console_remap_vm;
		unsigned long vaddr = VMALLOC_START;
		unsigned long i, j;

		/* Set up the third level PTEs and update the virtual
		   addresses of the CRB entries. */
		for (i = 0; i < crb->map_entries; ++i) {
			unsigned long pfn = crb->map[i].pa >> PAGE_SHIFT;
			crb->map[i].va = vaddr;
			for (j = 0; j < crb->map[i].count; ++j) {
				/* Newer console's (especially on larger
				   systems) may require more pages of PTEs.
				   Grab additional pages as needed. */
				if (pmd != pmd_offset(pgd, vaddr)) {
					memset(kernel_end, 0, PAGE_SIZE);
					pmd = pmd_offset(pgd, vaddr);
					pmd_set(pmd, (pte_t *)kernel_end);
					kernel_end += PAGE_SIZE;
				}
				set_pte(pte_offset_kernel(pmd, vaddr),
					pfn_pte(pfn, PAGE_KERNEL));
				pfn++;
				vaddr += PAGE_SIZE;
			}
		}

		/* Let vmalloc know that we've allocated some space. */
		console_remap_vm.flags = VM_ALLOC;
		console_remap_vm.addr = (void *) VMALLOC_START;
		console_remap_vm.size = vaddr - VMALLOC_START;
		vmlist = &console_remap_vm;
	}

	callback_init_done = 1;
	return kernel_end;
}

#ifndef CONFIG_DISCONTIGMEM
/*
 * paging_init() sets up the memory map.
 */
void
paging_init(void)
{
	unsigned long zones_size[MAX_NR_ZONES] = {0, };
	unsigned long dma_pfn, high_pfn;

	dma_pfn = virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
	high_pfn = max_pfn = max_low_pfn;

	if (dma_pfn >= high_pfn)
		zones_size[ZONE_DMA] = high_pfn;
	else {
		zones_size[ZONE_DMA] = dma_pfn;
		zones_size[ZONE_NORMAL] = high_pfn - dma_pfn;
	}

	/* Initialize mem_map[]. */
	free_area_init(zones_size);

	/* Initialize the kernel's ZERO_PGE. */
	memset((void *)ZERO_PGE, 0, PAGE_SIZE);
}
#endif /* CONFIG_DISCONTIGMEM */

#if defined(CONFIG_ALPHA_GENERIC) || defined(CONFIG_ALPHA_SRM)
void
srm_paging_stop (void)
{
	/* Move the vptb back to where the SRM console expects it. */
	swapper_pg_dir[1] = swapper_pg_dir[1023];
	tbia();
	wrvptptr(0x200000000UL);
	hwrpb->vptb = 0x200000000UL;
	hwrpb_update_checksum(hwrpb);

	/* Reload the page tables that the console had in use. */
	load_PCB(&original_pcb);
	tbia();
}
#endif

#ifndef CONFIG_DISCONTIGMEM
static void __init
printk_memory_info(void)
{
	unsigned long codesize, reservedpages, datasize, initsize, tmp;
	extern int page_is_ram(unsigned long) __init;
	extern char _text, _etext, _data, _edata;
	extern char __init_begin, __init_end;

	/* printk all informations */
	reservedpages = 0;
	for (tmp = 0; tmp < max_low_pfn; tmp++)
		/*
		 * Only count reserved RAM pages
		 */
		if (page_is_ram(tmp) && PageReserved(mem_map+tmp))
			reservedpages++;

	codesize = (unsigned long) &_etext - (unsigned long) &_text;
	datasize = (unsigned long) &_edata - (unsigned long) &_data;
	initsize = (unsigned long) &__init_end - (unsigned long) &__init_begin;

	printk("Memory: %luk/%luk available (%luk kernel code, %luk reserved, %luk data, %luk init)\n",
	       (unsigned long) nr_free_pages() << (PAGE_SHIFT-10),
	       max_mapnr << (PAGE_SHIFT-10),
	       codesize >> 10,
	       reservedpages << (PAGE_SHIFT-10),
	       datasize >> 10,
	       initsize >> 10);
}

void __init
mem_init(void)
{
	max_mapnr = num_physpages = max_low_pfn;
	totalram_pages += free_all_bootmem();
	high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);

	printk_memory_info();
}
#endif /* CONFIG_DISCONTIGMEM */

void
free_reserved_mem(void *start, void *end)
{
	void *__start = start;
	for (; __start < end; __start += PAGE_SIZE) {
		ClearPageReserved(virt_to_page(__start));
		init_page_count(virt_to_page(__start));
		free_page((long)__start);
		totalram_pages++;
	}
}

void
free_initmem(void)
{
	extern char __init_begin, __init_end;

	free_reserved_mem(&__init_begin, &__init_end);
	printk ("Freeing unused kernel memory: %ldk freed\n",
		(&__init_end - &__init_begin) >> 10);
}

#ifdef CONFIG_BLK_DEV_INITRD
void
free_initrd_mem(unsigned long start, unsigned long end)
{
	free_reserved_mem((void *)start, (void *)end);
	printk ("Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
}
#endif
Rheumatoid arthritis: from autoimmunity to synovitis and joint destruction. Rheumatoid arthritis is an autoimmune disease characterized by the production of two known antibodies - rheumatoid factor and anti-citrullinated peptide antibody (ACPA) - against common autoantigens that are widely expressed within and outside the joints. The interactions between genes and environment are crucial in all stages of the disease, notably involving genes from the major histocompatibility complex locus, and antigens such as tobacco or microbes (e.g. Porphyromonas gingivalis). T and B cells are activated from the earliest phases of the disease, with rheumatoid arthritis appearing as a Th1 and Th17 disease. Inflammatory cytokines have considerable importance in the hierarchy of the processes involved in RA. The joint destruction seen in RA is caused not only by cytokine imbalances, but also by specific effects of the Wnt system and osteoprotegerin on osteoclasts, and by matrix production dysregulation responsible for cartilage damage. Both innate and adaptive immunity have demonstrated their respective cornerstone positions in rheumatoid arthritis, since targeted treatments have been efficiently developed against TNF-α, the IL-6 receptor, IL-1β, CD20 B cells and T-cell/dendritic cell interactions.
---
abstract: 'We show how we can globally edit images using textual instructions: given a source image and a textual instruction for the edit, generate a new image transformed under this instruction. To tackle this novel problem, we develop three different trainable models based on RNN and Generative Adversarial Network (GAN). The models (bucket, filter bank, and end-to-end) differ in how much expert knowledge is encoded, with the most general version being purely end-to-end. To train these systems, we use Amazon Mechanical Turk to collect textual descriptions for around 2000 image pairs sampled from several datasets. Experimental results evaluated on our dataset validate our approaches. In addition, given that the filter bank model is a good compromise between generality and performance, we investigate it further by replacing RNN with Graph RNN, and show that Graph RNN improves performance. To the best of our knowledge, this is the first computational photography work on global image editing that is purely based on free-form textual instructions.'
author:
- 'Hai Wang [[^1]]{} Jason D. Williams [[^2]]{} Sing Bing Kang [[^3]]{}'
bibliography:
- 'egbib.bib'
title: |
    Learning to Globally Edit Images\
    with Textual Description
---

Introduction {#intro}
============

Consumers are increasingly relying on portable embedded devices such as smartphones and tablets for their everyday activities. These devices tend to have small form factors that preclude fine-grain spatial control using the display. Adding voice-based instruction (systems such as Siri, Cortana, and Alexa) significantly enhances the capabilities of such devices. An application that would significantly benefit is photo editing. With few exceptions, interactive photo editing systems are primarily manual and often require significant display real estate for the controls.
To allow a photo editing system to be voice-controlled, the mapping of voice to text to invocation of image operations requires domain-specific conversion of text to APIs. One solution is to handcraft this conversion by manually defining rules for editing effects (as was done in [@pixeltone]). However, this approach is hard to scale.

![Overview of our system. The inputs are an image and textual command, with the output being the result of applying the command to the input image.[]{data-label="fig:intro"}](figs/intro.jpg){width="75.00000%"}

![Our GAN-based system.[]{data-label="fig:overall"}](figs/overall.jpg){width="90.00000%"}

In this paper, we demonstrate global image editing through text, as illustrated in Figure \[fig:intro\]. Compared to other work [@imgsp; @pixeltone], our system is end-to-end trainable and easier to extend, since it does not require significant handcrafting of rules. We designed three different models based on Generative Adversarial Network (GAN) [@GAN]. Our main contributions are:

- We believe our work is the first to tackle the general image editing problem under free-form text descriptions.

- We collected a database of image transformation pairs and their corresponding textual descriptions.

- We designed three different models: handcrafted bucket-based model, pure end-to-end model, and filter bank based model. Experimental results demonstrate the effectiveness of our approaches.

- Ours is the first method to apply graph RNN to text-image synthesis, and we demonstrate its effectiveness.

In our work, we limit image editing to global transforms[^4].

Related work {#relatedwork}
============

In this section, we briefly describe two voice-assisted systems, namely, PixelTone [@pixeltone] for image editing, and Image Spirit [@imgsp] for refining a parsed image.
We also survey representative approaches for automatic image editing (specifically, image enhancement and style transfer), joint image-language analysis, and techniques that use attention or graph RNN.

#### **PixelTone and Image Spirit.**

From the application side, PixelTone [@pixeltone] is the system most related to ours. It allows the user to edit the image through voice commands such as “change the t-shirt to blue”, after the t-shirt region is tagged. The system contains a speech recognition engine, a text analysis module, and an execution module. After converting the user’s voice command to text, the text analysis module produces atomic operations that can be run by the execution module. The text analysis module is based on predefined rules and NLP techniques such as tokenization and part-of-speech tagging. One limitation is that the predefined rules are manually constructed.

Image Spirit [@imgsp] is a system that parses an image into regions with semantic labels, and allows the user to verbally refine the result. Typical verbal commands include correcting an object label and refining a specific label. Based on the initial image parsing result, Image Spirit updates the local relationships between different objects in an MRF [@mrf] in response to the utterance input, yielding an improved result. As with PixelTone, the commands are also predefined, and the scenario of refining the image parsing result is different from our image editing scenario. Unlike PixelTone and Image Spirit, our approach does not rely on predefined commands or rules; rather, it learns an end-to-end model which takes arbitrary text and learns corresponding transformations, based on a corpus.

#### **Image Manipulation with Language.**

Concurrent with our work, there are several techniques that address end-to-end trainable models for image manipulation with language [@chen2017language; @Seitaro2017; @nlp4seg]. Chen et al.
[@chen2017language] developed attentive models capable of combining text and image to produce a new image. To extract meaningful information from text, they use the attention mechanism [@luong2015effective]. They demonstrate the two editing tasks of image segmentation and colorization with natural language; different losses are used for training the different tasks. The work of Seitaro et al. [@Seitaro2017] is similar to Chen et al. [@chen2017language], but they only consider the MNIST dataset with instructions related to position movement such as “moving 6 to the bottom.” By creating the artificial dataset, they explored what the model can learn; as a side effect, training on an artificial dataset limits practicality. By comparison, our model focuses on general textual instructions, which makes it extensible to different kinds of instructions. Further, instead of using the attention mechanism [@luong2015effective; @chen2017language], we use graph RNN [@peng2017cross]. Finally, our model is trained on a real dataset collected from Amazon Mechanical Turk.

#### **Automatic Image Editing: Enhancement and Style Transfer.**

There are a number of approaches for automatic image enhancement. Machine learning techniques have been used to train on original-enhanced image pair databases for enhancing images [@Vladimir; @S.Hwang; @A.Kapoor; @jianzhou]. The approach of [@Zhicheng] uses a trained deep neural network to predict the enhanced image. Another form of image editing is style transfer (exemplified by [@gatys; @Youngbae; @JoonYoungLee; @Yiming]). Here, given an image and a reference image, the goal is to generate a new image according to the reference image’s style. The mapping is purely image-based. Unlike our work, none of these techniques acts on a textual description.

#### **Joint Image and Language Analysis.**

A significant amount of work has been done on joint image-language analysis.
Topics in this space include image caption generation [@st; @minde; @jeff; @chenxi; @bodai], video story telling [@Ting; @Venugopalan], visual question answering [@antol; @yezhou; @Nasrin; @Das1], image retrieval under natural language [@nlp4image], object retrieval under language [@nlor; @gtp; @nlp4seg], image synthesis from text [@GATIS; @att2img; @xu2017attngan; @hong2018inferring], and referring expression generation [@gcuod; @mohit2017; @cvpr2017]. The topics of image retrieval under natural language, object retrieval, image synthesis from text, and referring expression generation are most relevant to our work.

Ulyanov et al. [@nlp4image] use natural language to guide image retrieval; image-text correspondence is used to find a common embedding space. There are techniques that, given text and image, localize a target object as a bounding box [@nlor; @gtp] or segment [@nlp4seg] within the image. The work of Mirza and Osindero [@gcuod] addresses the problem of referring expression generation, i.e., given an image and a bounding box, generate an expression that describes it. The techniques of [@cvpr2017] and [@mohit2017] generalize this problem in the context of reinforcement learning. Given image objects and text, the approaches of [@inlpali] and [@dvsa] find the alignment between them. Kong et al. [@ttico] find the alignment between text and RGB-D image, and use the text description to guide 3D semantic parsing. They show that image information helps to improve the language analysis result.

In [@GATIS; @att2img], the output image is synthesized from a noise vector and a text description, but in our work, we begin from the original image and try to transform the image under the text description. The technique of [@GATIS] generates a fixed-size image while our output image size depends on the input size, which complicates the image generation problem.
Additionally, we rely on basic image concepts such as saturation and brightness while the techniques of [@GATIS; @att2img; @xu2017attngan] analyze image content (with the only image concept involved being color). In summary, none of these joint image-language techniques is designed to transform the image using text. However, if we were to transform the image locally, as in “change the color of the dog on the right to white”, we would need to first localize the dog before applying the color change. Here, techniques such as [@nlor; @gtp; @nlp4seg] would be good candidate components to add to our system.

#### **Attention and graph RNN.**

Attention has been used in various joint image and text problems, and generally there are two different attention mechanisms: attention between different tokens in text [@chen2017language], and attention between tokens in text and pixels in an image [@chenxi; @xu2017attngan]. Graph RNN was first used for cross-sentence $N$-ary relation extraction [@peng2017cross], and it subsumes plain and tree RNNs [@tai2015improved]. Briefly, a graph RNN generalizes a linear-chain RNN by incorporating arbitrary long-ranged dependencies besides word adjacency. A word might have precedents other than the prior word, and its LSTM unit is expanded to include one forget gate for each precedent. For efficient training, the graph is decomposed into a forward pass and a backward pass, each consisting of edges pointing forward and backward, respectively. Backpropagation is then conducted on these two directed acyclic graphs, similarly to BiLSTM. (If a graph LSTM contains no edges other than word adjacency, it reduces to BiLSTM.) Additional dependencies include syntactic dependencies, discourse relations, coreference, and connections between roots of adjacent sentences [@peng2017cross].

In our work, we handle only global editing; given our limited training data, how to extract meaningful semantics from text is crucial.
As such, graph RNN might be a more natural choice than attention since it can utilize the graph structure provided by a parser [@manning2014stanford]. To the best of our knowledge, this is the first time that a graph-structured RNN is used in a joint text and image analysis problem.

Model
=====

Our goal is to use text and an image as input and generate a new image globally transformed under the text description. This problem is well-suited to the adversarial framework provided by the Generative Adversarial Network [@GAN; @c_gan]. The GAN objective function is a min-max problem, which is typically optimized in an alternating manner: $$\begin{aligned}
\min_{\theta_{G}} \max_{\theta_{D}} \ ( &\mathbb{E}_{x \sim p_{data}(x)} [\log D_{\theta_{D}}(x)] + \mathbb{E}_{z \sim p_{z}(z)} [\log ( 1-D_{\theta_{D}} (G_{\theta_{G} }(z) ) )] ),\end{aligned}$$ where $D_{\theta_{D}}$ is the discriminator with parameters $\theta_{D}$ and $G_{\theta_{G}}$ is the generator with parameters $\theta_{G}$. The generator tries to confuse the discriminator while the discriminator differentiates between samples from the true data distribution $p_{data}$ and samples from the generator given the noise distribution $p_{z}(z)$.

In our work, the image transformation is achieved by the generator, which consists of an encoder-decoder architecture and a Recurrent Neural Network (RNN). As for the discriminator, $p_{z}(z)$ is the original image while $p_{data}$ is the corresponding edited image. The system is depicted in Figure \[fig:overall\]. We design three models, and each model handles the text information differently:

1. Hand-crafted bucket-based model, where similar image transformations are grouped prior to training as buckets. Each bucket has its own encoder-decoder architecture.

2. End-to-end model, with a single encoder-decoder architecture to handle the image and an RNN to handle text.

3. Filter-bank model, where transformations are specified as trained convolution filters.
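To make the alternating optimization concrete, the two per-step losses implied by the min-max objective above can be sketched with toy numbers (a pure-Python illustration only; the scalar arguments stand in for sigmoid discriminator outputs, not our actual networks):

```python
import math

def discriminator_loss(d_real, d_fake):
    # D ascends log D(x) + log(1 - D(G(z))); written here as a
    # loss to be minimized by negating the sum.
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    # G descends log(1 - D(G(z))), i.e., it pushes D(G(z)) toward 1.
    return math.log(1.0 - d_fake)

# A confident discriminator (0.9 on real, 0.1 on fake) incurs a lower
# loss than a confused one (0.5 on both).
print(discriminator_loss(0.9, 0.1) < discriminator_loss(0.5, 0.5))  # True
```

In practice, $D$ and $G$ are deep networks updated by stochastic gradient steps in alternation; the sketch only shows the direction in which each loss pushes its player.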
All these models have exactly the same discriminator; they differ only in the generator.

Discriminator
-------------

We describe the discriminator first since it is the same for all three models. We consider the conditional GAN (c-GAN) [@c_gan] where the loss is also conditioned on the input image. However, compared with [@c_gan], our discriminator also needs to be text-aware because the image is enhanced under the corresponding text description. Our discriminator takes as input the original image $I_{input}$, either the ground truth image $I_{gt}$ or the generated image $I_{g}$, and the corresponding text $T_{des}$. As with [@GATIS], for each image pair, we also consider sampling random texts (see the supplementary file) to make the discriminator more text-aware. Let $h(x)$ be an encoding function (e.g., an RNN) which can encode text $x$ into a vector. We first encode the text and down-sample the image before we depth-concatenate $I_{input}$, $I_{g}$, and $h(T_{des})$; then we feed the resulting vector to the discriminator with a negative label. In contrast, we feed the triple $I_{input}$, $I_{gt}$, and $h(T_{des})$ to the discriminator with a positive label. Additionally, the triples $I_{input}$, $I_{gt}$, $h(T_{random})$ and $I_{input}$, $I_{g}$, $h(T_{random})$ are treated as negative instances. The discriminator loss is simply summed over all instances.

![Our discriminator architecture.[]{data-label="fig:discriminator"}](figs/discriminator.jpg){width="80.00000%"}

![Our end-to-end model.[]{data-label="fig:end-to-end"}](figs/end-to-end.jpg){width="\textwidth"}

The discriminator architecture is depicted in Figure \[fig:discriminator\]. It has fewer layers, and each layer contains convolution, instance normalization [@instancenorm], and an activation function. Compared with [@GATIS], we use the sampled text in the context of c-GAN while [@GATIS] uses the sampled text in a basic GAN. More details on the discriminator are provided in the supplementary file.
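The positive and negative instances described above can be assembled as in the following minimal sketch (illustrative only; `encode` is a hypothetical stand-in for the text encoder $h(\cdot)$, and the strings stand in for image tensors):

```python
def discriminator_instances(i_input, i_gt, i_gen, t_des, t_random, encode):
    # One positive instance: (input, ground truth, matching text).
    # Three negatives: the generated image with the matching text,
    # and either image paired with a randomly sampled (mismatched) text.
    h_des, h_rand = encode(t_des), encode(t_random)
    return [
        ((i_input, i_gt,  h_des),  1),
        ((i_input, i_gen, h_des),  0),
        ((i_input, i_gt,  h_rand), 0),
        ((i_input, i_gen, h_rand), 0),
    ]

insts = discriminator_instances("I_in", "I_gt", "I_g",
                                "increase brightness", "reduce saturation",
                                encode=lambda t: t.split())
print([label for _, label in insts])  # [1, 0, 0, 0]
```

The discriminator loss is then the sum of the per-instance losses over these labeled triples.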
Generators {#transformation_text}
----------

All three generators take an image and text as input and generate a corresponding transformed image. Before describing our three models, we define the loss function. Given the original image $I_{input}$, generated image $I_{g}$, and ground truth $I_{gt}$, we use the following losses to train the generator:

- **Content loss**: $$l_{\text{content}} = \frac{1}{WHC}\sum_{c=1}^{C}|I_{g}^{c}-I_{gt}^{c}|_{1} ,$$ where $W$, $H$, and $C$ are the image width, image height, and channel number, respectively. $l_{\text{content}}$ measures the $l_{1}$ loss between the generated and ground truth images.

- **Adversarial loss**: $$l_{\text{adversarial}} = 1 - \log (D_{\theta_{D}}(I_{input}, I_{g}, h(T_{des}))) ,$$ where $h(T_{des})$ is defined in (\[eq:Tdes\]). $l_{\text{adversarial}}$ comes from the discriminator; it measures the similarity of the generated image with respect to the ground truth, conditioned on the input image. By minimizing it, the generator tries to fool the discriminator.

- **Perceptual loss**: $$l_{\text{perceptual}} = \frac{1}{L}||F_{vgg\_19}(I_{gt}) - F_{vgg\_19}(I_{g})||_{2} ,$$ where $F_{vgg\_19}(I) = \text{Concat}(RELU_{2}(I), RELU_{3}(I), RELU_{4}(I))$, $RELU_{i}(I)$ is the feature after the ReLU activation function [@Relu] in the $i$th layer of the VGG-19 network for image $I$, and $L$ is the length of the concatenated feature. As with [@Perceptual_feifei; @Photo_Realistic], we use the pre-trained VGG-19 network [@vgg] to extract high-level visual features.

The final loss for the generator is a weighted combination of these three losses: $$l_{G} = l_{\text{content}} + \alpha l_{\text{adversarial}} + \beta \ l_{\text{perceptual}},$$ where $\alpha=1$ and $\beta=0.02$ based on tuning on the validation set.
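A minimal sketch of how these losses combine with the weights above (pure Python; the adversarial and perceptual terms are passed in as precomputed scalars, and an image is a flattened list of its $W \times H \times C$ values):

```python
def content_loss(i_gen, i_gt):
    # Mean absolute (l1) difference over all W*H*C values.
    return sum(abs(a - b) for a, b in zip(i_gen, i_gt)) / len(i_gen)

def generator_loss(l_content, l_adversarial, l_perceptual,
                   alpha=1.0, beta=0.02):
    # l_G = l_content + alpha * l_adversarial + beta * l_perceptual
    return l_content + alpha * l_adversarial + beta * l_perceptual

# Toy 4-value "images": absolute differences are 0, 0.1, 0, 0.2.
l_c = content_loss([0.2, 0.4, 0.6, 0.1], [0.2, 0.5, 0.6, 0.3])
print(round(l_c, 4))                            # 0.075
print(round(generator_loss(l_c, 0.7, 0.5), 4))  # 0.785
```

In training, the adversarial term comes from the discriminator's output on $(I_{input}, I_{g}, h(T_{des}))$ and the perceptual term from VGG-19 features, as defined above.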
![Bucket model.[]{data-label="fig:bucket"}](figs/bucket_model.jpg){width="70.00000%"}

![Filter bank model.[]{data-label="fig:filterbank"}](figs/filterbank_model.jpg){width="70.00000%"}

#### **Bucket Model.**

One design is the bucket model, which is based on the idea that similar image transformations should be grouped as buckets. Each bucket represents a different image transformation (e.g., one for increasing the brightness, another for reducing the contrast). The disadvantage is that the grouping is manual. The architecture of the bucket model is shown in Figure \[fig:bucket\]. Given some text, we train an RNN to generate a distribution over buckets, and the final generated image is a weighted linear combination of different buckets. Let $$h(T_{des}) = \overrightarrow{RNN}(t_{1}, \ldots ,t_{n}) || \overleftarrow{RNN}(t_{1}, \ldots ,t_{n}) ,
\label{eq:Tdes}$$ where $\overrightarrow{RNN}$ and $\overleftarrow{RNN}$ are the last hidden state vectors when the text is fed to the RNN in opposite directions. Let weight $\alpha = \text{softmax}(h(T_{des}))$, with $K_b$ buckets and the output of each bucket being $I_{k}$. (In our work, $K_b = 5$.) The final output image $I_{g}$ is a weighted linear combination from the different buckets, i.e., $I_{g} = \sum_{k=1}^{K_b} \alpha_{k}I_{k}$.

In this model, each bucket has its own encoder-decoder architecture. The encoder is a down-sampling procedure which contains a series of convolution, batch normalization [@batchnorm; @instancenorm], and Leaky ReLU units [@Relu]. The decoder is similarly constructed, except in reverse order to constitute an up-sampling procedure. In our implementation, the down-sampling and up-sampling networks have the same depth, and optionally we can use skip connections [@resnet], i.e., we concatenate the $i$th layer in the down-sampling network with the $(N-i)$th layer in the up-sampling network.
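The weighting step above can be sketched as follows (illustrative only; a toy $h(T_{des})$ vector and flat pixel lists stand in for the real RNN encoding and the per-bucket output images):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def combine_buckets(h_text, bucket_images):
    # I_g = sum_k alpha_k * I_k with alpha = softmax(h(T_des)).
    alpha = softmax(h_text)
    n = len(bucket_images[0])
    return [sum(a * img[i] for a, img in zip(alpha, bucket_images))
            for i in range(n)]

# Toy K_b = 2 case; the first bucket dominates the softmax weights
# (softmax([2, 0])[0] is about 0.88).
out = combine_buckets([2.0, 0.0], [[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]])
print(all(0.88 < v < 0.89 for v in out))  # True
```

The filter bank model described later blends its $K_f$ per-filter outputs with the same kind of text-derived softmax weighting.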
To group the image transformations, several methods can be used: surface-form word matching, clustering over word (or sentence) embeddings, or manually designing the buckets. In our work, however, we manually designed the buckets based on the bigram distribution shown in the supplementary file. We have tried using automatic grouping methods, but they appear to be less effective.

#### **End-to-End Model.**

The bucket model, while straightforward, requires some handcrafting of the buckets. Inspired by [@GATIS], we also design an end-to-end model. We use another RNN (which is different from that used in the discriminator) to encode the text to a vector; this vector is then concatenated with the image vector. As with the bucket model, we also use the encoder-decoder framework to encode the image. The overall architecture of the end-to-end model is shown in Figure \[fig:end-to-end\]. Given an image $I_{img}$, we first encode it through a deep convolutional neural network as $\text{Encode}(I_{img})$, followed by a depth concatenation between this image vector and the text vector: $$h(T_{des}, I_{img}) = \text{DepthConcat} (\text{Encode}(I_{img}), h(T_{des} )) .$$ Subsequently, we feed $h(T_{des}, I_{img})$ to the decoder.

#### **Filter Bank Model.**

An end-to-end model is conceptually elegant. However, making it work is difficult due to limited expressive power, especially if we consider that image transformations can be bidirectional. An example is with respect to brightness, where the user can specify to “increase the brightness” or “decrease the brightness”. The bucket model is easy to understand, but it requires pre-designing the buckets; incorporating additional data will likely require changes to the bucket design. Our third model, the filter bank model, is designed to combine the advantages of these two models. The architecture for the filter bank model is depicted in Figure \[fig:filterbank\].
Given a description, an RNN is used to encode it and generate a distribution over different filters, which is conceptually the same as the bucket model. Each filter $F_{k}$ is a $k \times k \times c_{in} \times c_{out}$ convolution filter, and the final image is a weighted linear combination of different images. Given an image $I_{img}$, to generate the enhanced image based on filter $F_{k}$, we first use the encoder to encode the image as a hidden vector. We then convolve this hidden vector with filter $F_{k}$, and the result is fed to the decoder. With $K_f$ filters, we have $K_f$ different generated images. (In our work, $K_f = 5$.) For the $k$th filter, we have $$I_{k} = \text{Decoder}(\text{Conv}(\text{Encode}(I_{img}), F_{k})) .$$ The final output image $I_{g}$ is obtained using $I_{g} = \sum_{k=1}^{K_f} \alpha_{k}I_{k}$.

This model is similar to that described in [@stylebank], but there is a major difference: the model in [@stylebank] is used for style transfer, and each filter corresponds to one pre-determined style. During training, each training instance contains an image pair and the corresponding filter id, and it only optimizes the corresponding filter and the shared encoder-decoder parameters. By comparison, for our filter bank model, the filters are jointly trained automatically from image pairs and the model learns how to decompose the transformation automatically (the only manual step is specifying the number of filters).

Data Collection {#datacollect}
===============

To train our models, we need original-edited image pairs with associated text descriptions. To the best of our knowledge, there is no such existing dataset. The MIT-Adobe 5k dataset [@Vladimir] consists of original-edited image pairs generated by five professional photographers, but it does not contain text that describes the image transformation (such as brightness change and color balance).
For each image pair, the list of operations used to generate the edited image is given; an operation consists of a software editing command and its associated parameters. This fine granularity of information is not useful for associating a general casual description of the image transformation with the original-edited image pair. Other publicly available text-image datasets such as MS-COCO [@ms_coco], ReferIt [@referit], and Flickr30k Entities [@ijcv2016] contain text that describes the image content, but such text is not related to image editing or style. In addition, these datasets do not contain edited versions of the original.

As such, we ran a user study to collect our own dataset through Amazon Mechanical Turk. We use a random subset of the MIT-Adobe 5k dataset; for a given original-edited image pair, we ask the subject to type in a phrase to describe the image transformation. We also flip the order of the image pair to sample the reverse transformation. Each task (“hit” in AMT parlance) involves describing transformations for 8 pairs. For each image pair, the subject was asked to rate the image transformation and describe the image operations that are applied to the original to produce the edited version.

Procuring reliable data from such a user study is difficult because most Turkers are not highly familiar with concepts of photography, and as such, have only rudimentary vocabularies to describe visual changes. Initially, to assist with the task, we provided several example image pairs with plausible responses as guidelines. This unfortunately resulted in subjects copying and pasting example responses regardless of relevance. Even when they did not copy and paste responses, many users were not familiar with imaging concepts and provided inappropriate text.
In response to these issues, we made the following changes: (1) disabled copy and paste, (2) added examples (with explanations) that would cause their work to be rejected, (3) added a qualification test to see if the subject understands color and contrast, and (4) used heuristics to manually filter out “bad” responses. The new data are significantly better than those obtained through the trial run. By disabling cut-and-paste, the responses are much more varied. By explaining why responses may be rejected and enforcing a qualification test, data noise is significantly reduced. The interface with examples is shown in Figure \[fig:example\]. (See the supplementary file for additional examples, the qualification test, and the task interface.)

![Guidelines and example responses provided in the user study.[]{data-label="fig:example"}](figs/mturk2.jpg){width="80.00000%"}

  [**Model**]{}             [**$p$-value**]{}
  ------------------------- ----------------------
  End2end vs. GT            $7.4 \times 10^{-6}$
  GT vs. Bucket(a)          0.007
  End2end vs. Bucket(f)     0.02
  GT vs. Bucket(f)          0.02
  End2end vs. Bucket(a)     0.06
  FB vs. End2end            0.09
  FB vs. GT                 0.09
  FB vs. Bucket(f)          0.53
  FB vs. Bucket(a)          0.58
  Bucket(a) vs. Bucket(f)   0.67

Once the data had been collected, we further manually checked the responses. We removed responses that are obviously inconsistent with the actual image operation, too generic (e.g., “beautify the image”), or are not descriptions (e.g., “the edited image need to be brighten” in response to the edited image being a darkened version of the original). In total, 370 responses were removed in this way, accounting for 15% of all raw responses. We end up with 1884 image pairs and annotations, with each image pair having on average 1.6 text annotations. 1378 image pairs are used for training, 252 for validation, and 252 for testing.
These image pairs cover multiple image transformation directions, e.g., improving the color balance, increasing/decreasing the image brightness, increasing/reducing the image saturation, deepening the colors, and keeping the image the same. For additional statistics on the collected data, see the supplementary file.

Implementation
==============

We use PyTorch to implement our models. We tried two different versions of the encoder-decoder: one a typical encoder-decoder without skip connections, the other with skip connections [@resnet; @pix2pix]. We find the version with skip connections has better performance and faster training. We use Adam [@Adam] as the optimizer, with an initial learning rate of 0.001, and halve the learning rate if we do not observe a loss reduction on the validation set. Our RNN is a one-layer bidirectional Gated Recurrent Unit (GRU) [@gru] with a hidden size of 128. The vocabulary size is around 4k and the word embedding size is 200. We initialize the word embeddings with the pre-trained GloVe word embeddings [@glove].

For our bucket and filter bank models, due to memory constraints, we limit the numbers of buckets and filters to 5 each, i.e., $K_b = K_f = 5$. For our bucket model, we have $K_b$ encoder-decoder pairs without any shared parameters. As a result, the optimization process requires a large amount of memory; in addition, it is slow, especially during back propagation. To overcome this problem, we pre-train the $K_b$ independent encoder-decoder pairs separately and fix them when training the bucket model. Additionally, for the bucket model, after training, we compute two image outputs, one with the highest weight (“argmax”, i.e., Bucket(a)) and another being a weighted average (“fusion”, i.e., Bucket(f)). On the other hand, the filter bank and end-to-end models are trained from scratch, since their memory requirements are not as severe and the training is faster.
Specifically, the bucket model, including the parallel pre-training of the different buckets, takes 40 hours in total and needs 4 GPUs, while the filter bank model takes 25 hours and needs only 1 GPU, and the end-to-end model needs only 20 hours and 1 GPU. More implementation details are given in the supplementary file.

Experimental Results {#result}
====================

In this section, we first report results for automatic image enhancement (without text) as a sanity check. We then describe the results of a user study to evaluate the performance of our models in producing the edited image given an input image and text description. Finally, we show the effects of the trained filters from our filter bank model.

Automatic Image Enhancement
---------------------------

We first investigate the performance of c-GAN with the encoder-decoder architecture in the context of automatic image enhancement, without any text used. We randomly selected 1200 image pairs (results from one expert) from MIT-Adobe 5k and trained the model; a representative result on the validation set is shown in Figure \[fig:autoenhancment\]. We quantitatively evaluate our automatic image enhancement performance. Table \[table:score\_auto\] lists the L2 error (in L\*ab space) for our method. Even though the randomly selected dataset in [@S.Hwang] is not the same as ours[^5], we believe that in general our results are representative. The results indicate that c-GAN with an encoder-decoder as generator is suitable for our problem.
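Table \[table:score\_auto\] reports the mean per-pixel L2 error; a minimal sketch of that metric (assuming both images have already been converted to lists of L\*ab triples):

```python
import math

def mean_l2_error(img_a, img_b):
    # Average Euclidean distance between corresponding L*ab pixels.
    dists = [math.dist(pa, pb) for pa, pb in zip(img_a, img_b)]
    return sum(dists) / len(dists)

# Toy 2-pixel images: per-pixel distances are 5.0 and 0.0, mean 2.5.
print(mean_l2_error([(50, 0, 0), (70, 10, -10)],
                    [(50, 3, 4), (70, 10, -10)]))  # 2.5
```

The RGB-to-L\*ab conversion itself is omitted here; in the reported results, the error is computed after converting both the predicted and ground truth images to L\*ab space.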
[.33]{} ![Examples of automatic image enhancement.[]{data-label="fig:autoenhancment"}](figs/result_im_en_1_input.jpg "fig:"){width=".7\linewidth"} \[fig:sfig1\]

[.33]{} ![Examples of automatic image enhancement.[]{data-label="fig:autoenhancment"}](figs/result_im_en_1_output.jpg "fig:"){width=".7\linewidth"} \[fig:sfig2\]

[.33]{} ![Examples of automatic image enhancement.[]{data-label="fig:autoenhancment"}](figs/result_im_en_1_gt.jpg "fig:"){width=".7\linewidth"} \[fig:sfig3\]

[.33]{} ![Examples of automatic image enhancement.[]{data-label="fig:autoenhancment"}](figs/result_im_en_2_input.jpeg "fig:"){width=".7\linewidth"}

[.33]{} ![Examples of automatic image enhancement.[]{data-label="fig:autoenhancment"}](figs/result_im_en_2_output.jpeg "fig:"){width=".7\linewidth"}

[.33]{} ![Examples of automatic image enhancement.[]{data-label="fig:autoenhancment"}](figs/result_im_en_2_gt.jpeg "fig:"){width=".7\linewidth"}

          Input            Hwang et al. [@S.Hwang]   Ours
  ------- ---------------- ------------------------- ----------------
  Error   17.1 $\pm$ 0.9   15.0 $\pm$ 0.8            12.1 $\pm$ 0.9

  : Comparisons of average L2 error on test sets, with standard error of 95%. \[table:score\_auto\]

Image Transformation from Text Description
------------------------------------------

Since there is no existing benchmark, we design a user study for such an evaluation. We are specifically interested in how well the edited image fits the text description given an input image, for all the models and the ground truth[^6]. We want to extract metrics that are both absolute (through standalone rating) and relative (pairwise comparison). One representative result is given in Figure \[fig:hancmentwithtext\].
[.33]{} ![Example of image editing under textual description “enhance white balance and contrast.” More examples are in the supplementary material.[]{data-label="fig:hancmentwithtext"}](figs/real_A.jpg "fig:"){width=".7\linewidth"}

[.33]{} ![Example of image editing under textual description “enhance white balance and contrast.” More examples are in the supplementary material.[]{data-label="fig:hancmentwithtext"}](figs/fake_B.jpg "fig:"){width=".7\linewidth"}

[.33]{} ![Example of image editing under textual description “enhance white balance and contrast.” More examples are in the supplementary material.[]{data-label="fig:hancmentwithtext"}](figs/real_B.jpg "fig:"){width=".7\linewidth"}

#### **Standalone rating:**

The subject is shown an original-edited image pair with text that describes the image transformation, and is asked to rate (on a scale of one to five stars) based on the instruction “how well does the edited image follow the instructions?”. There are five different pair versions, with the original image the same throughout and the edited image from the ground truth, bucket model (fusion and argmax), filter bank model, and end-to-end model, respectively. The order of appearance is randomized. Each subject is shown eight image pairs corresponding to two different original images. For this portion of the user study, each image pair gets five ratings, and the rating for that pair is averaged.

#### **Pairwise comparison:**

The subject is shown two image pairs as well as the text description, and is asked to pick the pair that fits the text better. The same four versions are used. Each subject makes eight comparisons. We obtained responses from 120 subjects; the reward for each task or “hit” is US\$0.20. (The user study interface is shown in the supplementary file.) Results of the user study are listed in Table \[table:stat\].
  [**Model**]{}   [**Mean**]{}   [**Std Dev**]{}
  --------------- -------------- -----------------
  Ground Truth    3.53           1.22
  Bucket(f)       3.36           1.25
  Bucket(a)       3.33           1.21
  Filter Bank     3.31           1.22
  End-to-End      3.19           1.30

  : Standalone rating for the different models.[]{data-label="table:stat"}

![Pairwise rating between different models. The red dashed line represents the equal rating of 0.5. []{data-label="fig:pairwise"}](figs/pairwise.jpg){width="90.00000%"}

Table \[table:stat\] shows that the bucket model has the highest rating among all three models. This is not surprising since the bucket model is customized, with the disadvantage of being less scalable. The filter bank model is next best, with the end-to-end model being third. While the end-to-end model is the most conceptually elegant with the least amount of user specification, it has only one encoder-decoder, which limits its expressive power. It is less able to learn multiple directional transformations. While the filter bank model also has only one encoder-decoder, it has filters between them; the filters can be interpreted as a type of bucket model with shared encoder-decoder parameters among the buckets. Compared with the ground truth, however, the differences are not significant. Please note that the ground truth version has a score of only 3.53; this may be due to most users not being familiar with imaging concepts.

Figure \[fig:pairwise\] shows pairwise ratings between different models, which is consistent with Table \[table:stat\]. Table \[table:pairwise\] lists the $p$-values between the scores of different models, where smaller $p$-values imply a larger difference between models[^7]. Based on this table, the filter bank model is very close to the bucket model while the end-to-end model is less similar to the bucket model or filter bank model.
From a practical point of view, the filter bank model appears to be the best choice since its performance is good while not requiring much manual effort (apart from selecting the number of filters). Additionally, it requires less memory than the bucket model. For the same encoder-decoder architecture with $K_b$ buckets, it needs only $1/K_b$ of the memory of the bucket model. In addition, the filter bank model does not require pre-training for different buckets, making it much more efficient. For the same amount of memory, the filter bank model can afford to incorporate more filters than there are buckets.

Effects of Automatically Trained Filters {#filters_anaylysis}
----------------------------------------

A significant advantage of the filter bank model is that we do not need to manually design the filters. In this section, we show some results of applying the automatically trained filters. Interestingly, each filter appears to correspond to a specific transformation. For example, filter $F_{1}$ corresponds to brightness reduction while filter $F_{2}$ corresponds to brightness increase. This is consistent with [@stylebank], except that we do not explicitly specify each filter’s function. Figure \[fig:bank\] shows images generated by different filters.
[0.28]{} ![Effect of different filters.[]{data-label="fig:bank"}](figs/filter_input.jpg "fig:"){width="0.7\linewidth"}

[.28]{} ![Effect of different filters.[]{data-label="fig:bank"}](figs/filter_0.jpg "fig:"){width="0.7\linewidth"}

[.28]{} ![Effect of different filters.[]{data-label="fig:bank"}](figs/filter_1.jpg "fig:"){width="0.7\linewidth"}

[.28]{} ![Effect of different filters.[]{data-label="fig:bank"}](figs/filter_2.jpg "fig:"){width="0.7\linewidth"}

[.28]{} ![Effect of different filters.[]{data-label="fig:bank"}](figs/filter_3.jpg "fig:"){width="0.7\linewidth"}

[.28]{} ![Effect of different filters.[]{data-label="fig:bank"}](figs/filter_4.jpg "fig:"){width="0.7\linewidth"}

Observations on Learned Transformation {#transformation_anaylysis}
--------------------------------------

Even though our models were trained on global tonal adjustments, they are able to learn local transformations. The RGB remapping distributions in Figure \[fig:rgb\_remapping\] for two representative images show that our transformation, unlike its counterpart for expert A in the MIT-Adobe 5k dataset, is local. This is evident from the significantly more spread-out distributions: for our method, each RGB input is mapped to a wider range of outputs than for expert A. These results demonstrate that our models learn a much more complicated mapping than a single global one.

[0.23]{} ![RGB remapping distributions. For the expert enhanced image (expert A in MIT-Adobe 5k dataset), the mappings are almost one-to-one (in blue), while those for our edited images (in red) are not, demonstrating our editing is local.[]{data-label="fig:rgb_remapping"}](figs/transforming/R_channel_a4658.jpg "fig:"){width="0.8\linewidth"}

[.23]{} ![RGB remapping distributions.
For the expert enhanced image (expert A in MIT-Adobe 5k dataset), the mappings are almost one-to-one (in blue), while those for our edited images (in red) are not, demonstrating our editing is local.[]{data-label="fig:rgb_remapping"}](figs/transforming/G_channel_a4658.jpg "fig:"){width="0.9\linewidth"} [.23]{} ![RGB remapping distributions. For the expert enhanced image (expert A in MIT-Adobe 5k dataset), the mappings are almost one-to-one (in blue), while those for our edited images (in red) are not, demonstrating our editing is local.[]{data-label="fig:rgb_remapping"}](figs/transforming/B_channel_a4658.jpg "fig:"){width="0.9\linewidth"} [.23]{} ![RGB remapping distributions. For the expert enhanced image (expert A in MIT-Adobe 5k dataset), the mappings are almost one-to-one (in blue), while those for our edited images (in red) are not, demonstrating our editing is local.[]{data-label="fig:rgb_remapping"}](figs/transforming/expertA-original_shotzero-a4658-Duggan_090201_4929_real_B.jpg "fig:"){width="1.0\linewidth"} [0.23]{} ![RGB remapping distributions. For the expert enhanced image (expert A in MIT-Adobe 5k dataset), the mappings are almost one-to-one (in blue), while those for our edited images (in red) are not, demonstrating our editing is local.[]{data-label="fig:rgb_remapping"}](figs/transforming/R_channel_a4697.jpeg "fig:"){width="0.8\linewidth"} [.23]{} ![RGB remapping distributions. For the expert enhanced image (expert A in MIT-Adobe 5k dataset), the mappings are almost one-to-one (in blue), while those for our edited images (in red) are not, demonstrating our editing is local.[]{data-label="fig:rgb_remapping"}](figs/transforming/G_channel_a4697.jpeg "fig:"){width="0.8\linewidth"} [.23]{} ![RGB remapping distributions. 
For the expert enhanced image (expert A in MIT-Adobe 5k dataset), the mappings are almost one-to-one (in blue), while those for our edited images (in red) are not, demonstrating our editing is local.[]{data-label="fig:rgb_remapping"}](figs/transforming/B_channel_a4697.jpeg "fig:"){width="0.85\linewidth"}

[.23]{} ![RGB remapping distributions. For the expert enhanced image (expert A in MIT-Adobe 5k dataset), the mappings are almost one-to-one (in blue), while those for our edited images (in red) are not, demonstrating our editing is local.[]{data-label="fig:rgb_remapping"}](figs/transforming/original_shotzero-expertA-a4967-kme_2360_real_A.jpg "fig:"){width="1.03\linewidth"}

Using Graph RNN on Filter Bank Model {#graph_rnn}
------------------------------------

The filter bank model is a good compromise between manual effort and performance. To further investigate the influence of text encoding, we also replaced the RNN with a Graph RNN, more specifically a Graph GRU (Gated Recurrent Unit). The graph structure is obtained from [@manning2014stanford]; the last hidden states of the Graph RNN in both directions are used to represent the text. Note that a conventional RNN is still used in the discriminator. Table \[table:score\_rnns\] shows that the Graph GRU performed better than the plain GRU. One explanation is that the Graph GRU can utilize the dependency structures between tokens and ignore the less important words in the textual instructions. We also found that the Graph GRU is more effective when the text description is long and ambiguous. For more analysis with the Graph RNN, see the supplementary file.

                      Graph GRU         GRU
  ------------------- ----------------- -----------------
  Standalone rating   3.35 $\pm$ 1.24   3.31 $\pm$ 1.22
  Pairwise rating     0.52              0.48

  : Performance comparison between different RNNs in our filter bank model.

\[table:score_rnns\]

Concluding Remarks {#conclusion}
==================

We show how we train a system to globally edit an image given a general textual command.
To this end, we propose three models (bucket, filter bank, and end-to-end), which have different requirements in terms of initialization, memory, and amount of training. Given the lack of a database of image pairs with text descriptions, we collected one of our own. Experimental results validate our models, and we believe our work is the first to address the general computational photography application of editing images purely through textual description.

One current limitation is that we handle only editing based on global transformations (even though the learned transformation is local). To allow object-based editing, we would need to integrate object segmentation with a natural language module [@nlp4seg; @hong2018inferring] into our system, or design a joint model that can simultaneously segment and transform [@hong2018inferring]. We also found it difficult to collect data that is both large-scale and sufficiently diverse; one possible way to alleviate this issue is data augmentation [@dong2017i2t2i]. Given that users usually prefer a series of simple, consecutive, and coherent textual descriptions, another interesting direction is to extend our work to a chat environment [@Das1; @sharma2018chatpainter]. Finally, we can also investigate the multi-DSSM [@huang2013learning] loss in our model [@xu2017attngan]. We leave the development of all this functionality for future work, and we believe this will be an exciting and important research topic.

[^1]: TTIC, Chicago, IL, 60637, USA. Email: haiwang@ttic.edu; work done at MSR.

[^2]: Apple, Cupertino, CA, 95014, USA. Email: jdw@alumni.princeton.edu; work done at MSR.

[^3]: Microsoft Research, Redmond, WA, 98052, USA. Email: sbkang@microsoft.com

[^4]: Code is available at <https://github.com/sohuren/Img_edit_with_text>. The supplementary file is online at the author’s homepage.

[^5]: The random dataset from [@S.Hwang] is not publicly available, so we instead randomly select the same number of images.
[^6]: Please note that we are less interested in measuring how close the generated edited image is to the ground truth, because such a metric takes the text description out of the loop.

[^7]: For a given model, we calculate the average score for each image pair and then evaluate the $p$-values between the scores of different models.
Moglea ornaments with Neon cord I have been a big admirer of Moglea stationery since the very beginning of the company, and I'm excited to collaborate with them on the packaging of Moglea ornaments, with our studio's metallic and Neon cord. I love seeing Moglea grow and develop such a distinctive style of hand-painted cards and paper products. Photos by Moglea.
If the ``input`` option is set to ``string``, this option specifies the format of the date. This must be a valid `PHP date format`_. .. _`PHP date format`: https://secure.php.net/manual/en/function.date.php
Update, July 5: On Thursday afternoon, President Trump confirmed that he had accepted EPA Administrator Scott Pruitt's resignation: …on Monday assume duties as the acting Administrator of the EPA. I have no doubt that Andy will continue on with our great and lasting EPA agenda. We have made tremendous progress and the future of the EPA is very bright! — Donald J. Trump (@realDonaldTrump) July 5, 2018 This is reportedly Pruitt's resignation letter, which cites the "unrelenting attacks on me": Previously: Monday night brought another round of damaging revelations for the embattled head of the EPA, Scott Pruitt. The Washington Post reported that Pruitt, who somehow ​still has his job, used his position to try to get his wife a job with at least a $200,000 salary and that he had employees book his hotel rooms on their personal cards — and then didn't reimburse them. The story is the latest in a long string of revelations about how Pruitt is using his office to make money for himself. Since this has been going on for a while, we've rounded up some of the worst things Pruitt has been accused of over the last year and a half. Using EPA Employees To Get His Wife A Job July 2, 2018: Pruitt Had An EPA Employee Try To Find His Wife A High-Paying Job [Former EPA associate administrator for the Office of Policy, Samantha] Dravis, who The Post recently reported had helped seek employment for Pruitt's wife, Marlyn, told investigators that the administrator wanted his spouse to find a post with an annual salary of more than $200,000, according to one individual familiar with the matter. [Washington Post] June 5, 2018: Pruitt Tasked EPA Employee With Securing His Wife A Chick-fil-A Franchise Three months after Scott Pruitt was sworn in as head of the Environmental Protection Agency, his scheduler emailed Dan Cathy, chief executive of the fast-food company Chick-fil-A, with an unusual request: Would Cathy meet with Pruitt to discuss "a potential business opportunity"? 
A call was arranged, then canceled, and Pruitt eventually spoke with someone from the company's legal department. Only then did he reveal that the "opportunity" on his mind was a job for his wife, Marlyn. "The subject of that phone call was an expression of interest in his wife becoming a Chick-fil-A franchisee," company representative Carrie Kurlander told The Washington Post via email. [Washington Post] Renting A Discounted DC Condo From A Lobbyist's Wife March 30, 2018: Pruitt Got A Sweet Deal On A Condo — From The Wife Of A Top Energy Lobbyist The head of the Environmental Protection Agency paid just $50 a night to stay in a Capitol Hill condominium linked to a prominent Washington lobbyist whose firm represents a roster of fossil fuel companies… [Lobbyist J. Steven Hart's] firm's clients include Exxon Mobil Corp. and the major liquefied natural gas exporter Cheniere Energy Inc. — companies that have billions at stake in regulatory decisions over which Pruitt presides. [NBC News] It emerged in April that Pruitt overstayed his welcome in the condo, and his landlords changed the locks on him. And it was later revealed that: April 21, 2018: Turns Out Pruitt Did Meet With The Lobbyist Scott Pruitt, the head of the Environmental Protection Agency, met personally last year with J. Steven Hart, the lobbyist whose wife had rented him a $50-a-night Capitol Hill condo, a disclosure that contradicts earlier statements that E.P.A. lobbying by Mr. Hart had not occurred. The meeting was set up on behalf of an executive associated with Smithfield Foods, the world's largest pork processor and hog producer. Previously, Mr. Hart and his lobbying firm, Williams & Jensen, had maintained that Mr. Hart never lobbied Mr. Pruitt in 2017, when Mr. Pruitt was living in a condo co-owned by Mr. Hart's wife, or in the time since then. 
[New York Times] Crazy Spending March 20, 2018: $105,000 Spent On First-Class Flights Pruitt has drawn criticism for regularly booking first-class flights rather than the coach tickets recommended by EPA protocol. The agency has said the expensive flights were necessary because of the high number of security threats Pruitt has received. That $105,000 figure doesn't include an additional $58,000 Pruitt rang up on charter flights and a military jet to carry him and his staff from an event with President Donald Trump in Cincinnati to catch a connecting flight to Europe out of New York, according to previously released records. [Politico] April 7, 2018: Pruitt Obliterating His Budget With Huge Security Detail And First-Class Travel Environmental Protection Agency chief Scott Pruitt's concern with his safety came at a steep cost to taxpayers as his swollen security detail blew through overtime budgets and at times diverted officers away from investigating environmental crimes. Altogether, the agency spent millions of dollars for a 20-member full-time detail that is more than three times the size of his predecessor's part-time security contingent. [CNBC] April 16, 2018: A $43,000 Soundproof Booth For His Office! The $43,000 purchase of a soundproof booth for Environmental Protection Agency Administrator Scott Pruitt's office violated federal law, the Government Accountability Office concluded Monday…. The $24,570 "privacy booth for the administrator" was ordered in August from a Virginia-based company that specializes in soundproofing materials. The total price of around $43,000 included renovations to prepare space for the booth, including removing a closed circuit camera system.
[CNN] Miscellaneous Bad Behavior July 2, 2018: Pruitt Had Employees Put His Hotels On Personal Credit Cards, Did Not Reimburse Them According to a current and former EPA official, Pruitt routinely asked his assistants — including then-executive scheduler Sydney Hupp — to put hotel reservations on their personal credit cards rather than his own. In one instance, according to former deputy chief of staff Kevin Chmielewski, Hupp was stuck with a bill of roughly $600 for a booking she had made for the administrator's family during the transition. Chmielewski said in an interview last month that he was in Jackson's office when Hupp approached Pruitt's chief of staff to explain that the period for transition reimbursements had expired and that Pruitt had not covered the bill. [Washington Post] June 7, 2018: Pruitt Used Staffers To Fetch Him Fancy Food And Drink Beyond the protein bars, Pruitt also has a well-known sweet tooth, and often tells staffers to make a grocery run to get his preferred sweets, cookies, and Greek yogurt, among other items, sources say. Pruitt's tastes in snacks are rather refined, according to former aides. He is particularly fond of finger food from the upscale eatery Dean & Deluca, according to a former EPA official. Pruitt is also particular about his coffee tastes, the former official said, and would often direct an aide to brew him pour-over coffee, which he prefers to more run-of-the-mill brewing methods. [The Daily Beast] April 9, 2018: Pruitt Lied About Giving His Closest Aides Big Raises The EPA administrator has said he "didn't know" about unusual salary bumps given to a pair of trusted aides, but a message from one of those staffers claims otherwise. [The Atlantic]
A group of Utica Academy for International Studies students are bringing their world to elementary students each week. The students – seniors at the International Baccalaureate high school – are continuing a weekly after-school foreign language club at Beck Elementary. The goal, according to UAIS senior Riya Mathews, is to give the 20 students participating in the program a taste of different cultures and languages throughout the school year. “We want them to be more open-minded about what’s in the world,” she said. UAIS seniors Mathews, Binsu Varughese, Amna Wani and Magda Wojtara will be introducing students to Arabic, Polish, German, Spanish, French, sign language and general internationalism every Friday afternoon. Beck Elementary sixth-grader Brandon Bizze is participating for the second time in the club “because it is a lot of fun.” He said he plans to continue studying languages. “If you meet someone who speaks another language and you want to talk to them, it is important to know their language,” he said. This is the third year that UAIS students have offered the club. The program was created and sponsored by UAIS graduates Giuliana Cusumano and Kaelyn Fife. Fife is studying Neuroscience at Lyman Briggs at Michigan State University and Cusumano is studying International Relations at James Madison College at MSU. For the students, the project also fulfills their high school’s “CAS” requirement. As part of their graduation requirements, students at the International Baccalaureate high school are required to perform 150 hours of CAS (Creativity, Action and Service).
/* * Copyright 2018 Jos van den Oever <jos@vandenoever.info> * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License as * published by the Free Software Foundation; either version 2 of * the License or (at your option) version 3 or any later version * accepted by the membership of KDE e.V. (or its successor approved * by the membership of KDE e.V.), which shall act as a proxy * defined in Section 14 of version 3 of the license. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program. If not, see <http://www.gnu.org/licenses/>. */ #include "test_list_types_rust.h" #include <QTest> #include <QSignalSpy> class TestRustListTypes : public QObject { Q_OBJECT private slots: void testConstructor(); void testStringGetter(); void testStringSetter(); void testBool(); void testOptionalBool(); void testInt8(); void testUint8(); void testInt16(); void testUint16(); void testInt32(); void testUint32(); void testInt64(); void testUint64(); void testFloat(); void testDouble(); void testString(); void testOptionalString(); void testByteArray(); void testOptionalByteArray(); }; template <typename V, typename Set, typename Get> void testSetter(const V v, Set set, Get get) { // GIVEN List list; QSignalSpy spy(&list, &List::dataChanged); // WHEN bool ok = (list.*set)(0, v); QVERIFY(ok); // THEN QVERIFY(spy.isValid()); QCOMPARE(spy.count(), 1); QCOMPARE((V)(list.*get)(0), v); } int getRoleFromName(const QAbstractItemModel& model, const char* name) { auto names = model.roleNames(); auto i = names.constBegin(); while (i != names.constEnd()) { if (i.value() == name) { return i.key(); } ++i; } return -1; } template <typename V> void 
testDataSetter(const char* roleName, const V v) { // GIVEN List list; QSignalSpy spy(&list, &List::dataChanged); // WHEN int role = getRoleFromName(list, roleName); auto index = list.index(1, 0); const QVariant vv = QVariant::fromValue(v); QVERIFY(!vv.isNull()); bool ok = list.setData(index, vv, role); QVERIFY(ok); // THEN QVERIFY(spy.isValid()); QCOMPARE(spy.count(), 1); QCOMPARE(list.data(index, role), vv); } template <typename V> void testOptionalDataSetter(const char* roleName, const V v) { // GIVEN List list; QSignalSpy spy(&list, &List::dataChanged); int role = getRoleFromName(list, roleName); auto index = list.index(1, 0); QVERIFY(list.data(index, role).isNull()); // WHEN QVariant vv = QVariant::fromValue(v); if (vv.isNull()) { vv = QVariant(); } bool ok = list.setData(index, vv, role); QVERIFY(ok); // THEN QVERIFY(spy.isValid()); QCOMPARE(spy.count(), 1); QCOMPARE(list.data(index, role), vv); } template <typename V, typename Set, typename Get> void test(const V v, Set set, Get get, const char* roleName) { testSetter(v, set, get); testDataSetter(roleName, v); } template <typename V, typename Set, typename Get> void testOptional(const V v, Set set, Get get, const char* roleName) { testSetter(v, set, get); testOptionalDataSetter(roleName, v); } void TestRustListTypes::testConstructor() { List list; } void TestRustListTypes::testBool() { test(true, &List::setBoolean, &List::boolean, "boolean"); test(false, &List::setBoolean, &List::boolean, "boolean"); } void TestRustListTypes::testOptionalBool() { testOptional(QVariant(), &List::setOptionalBoolean, &List::optionalBoolean, "optionalBoolean"); testOptional(QVariant(true), &List::setOptionalBoolean, &List::optionalBoolean, "optionalBoolean"); testOptional(QVariant(false), &List::setOptionalBoolean, &List::optionalBoolean, "optionalBoolean"); } void TestRustListTypes::testInt8() { test(0, &List::setI8, &List::i8, "i8"); test(1, &List::setI8, &List::i8, "i8"); test(std::numeric_limits<int8_t>::min(), &List::setI8, 
&List::i8, "i8"); test(std::numeric_limits<int8_t>::max(), &List::setI8, &List::i8, "i8"); } void TestRustListTypes::testUint8() { test(0, &List::setU8, &List::u8, "u8"); test(1, &List::setU8, &List::u8, "u8"); test(std::numeric_limits<uint8_t>::min(), &List::setU8, &List::u8, "u8"); test(std::numeric_limits<uint8_t>::max(), &List::setU8, &List::u8, "u8"); } void TestRustListTypes::testInt16() { test(0, &List::setI16, &List::i16, "i16"); test(1, &List::setI16, &List::i16, "i16"); test(std::numeric_limits<int16_t>::min(), &List::setI16, &List::i16, "i16"); test(std::numeric_limits<int16_t>::max(), &List::setI16, &List::i16, "i16"); } void TestRustListTypes::testUint16() { test(0, &List::setU16, &List::u16, "u16"); test(1, &List::setU16, &List::u16, "u16"); test(std::numeric_limits<uint16_t>::min(), &List::setU16, &List::u16, "u16"); test(std::numeric_limits<uint16_t>::max(), &List::setU16, &List::u16, "u16"); } void TestRustListTypes::testInt32() { test(0, &List::setI32, &List::i32, "i32"); test(1, &List::setI32, &List::i32, "i32"); test(std::numeric_limits<int32_t>::min(), &List::setI32, &List::i32, "i32"); test(std::numeric_limits<int32_t>::max(), &List::setI32, &List::i32, "i32"); } void TestRustListTypes::testUint32() { test(0, &List::setU32, &List::u32, "u32"); test(1, &List::setU32, &List::u32, "u32"); test(std::numeric_limits<uint32_t>::min(), &List::setU32, &List::u32, "u32"); test(std::numeric_limits<uint32_t>::max(), &List::setU32, &List::u32, "u32"); } void TestRustListTypes::testInt64() { test(0, &List::setI64, &List::i64, "i64"); test(1, &List::setI64, &List::i64, "i64"); test(std::numeric_limits<int64_t>::min(), &List::setI64, &List::i64, "i64"); test(std::numeric_limits<int64_t>::max(), &List::setI64, &List::i64, "i64"); } void TestRustListTypes::testUint64() { test(0, &List::setU64, &List::u64, "u64"); test(1, &List::setU64, &List::u64, "u64"); test(std::numeric_limits<uint64_t>::min(), &List::setU64, &List::u64, "u64"); 
test(std::numeric_limits<uint64_t>::max(), &List::setU64, &List::u64, "u64"); } void TestRustListTypes::testFloat() { test(0, &List::setF32, &List::f32, "f32"); test(1, &List::setF32, &List::f32, "f32"); test(std::numeric_limits<float>::min(), &List::setF32, &List::f32, "f32"); test(std::numeric_limits<float>::max(), &List::setF32, &List::f32, "f32"); } void TestRustListTypes::testDouble() { test(0, &List::setF64, &List::f64, "f64"); test(1, &List::setF64, &List::f64, "f64"); test(std::numeric_limits<double>::min(), &List::setF64, &List::f64, "f64"); test(std::numeric_limits<double>::max(), &List::setF64, &List::f64, "f64"); } void TestRustListTypes::testString() { test(QString(""), &List::setString, &List::string, "string"); test(QString("Konqi"), &List::setString, &List::string, "string"); test(QString("$𐐷𤭢"), &List::setString, &List::string, "string"); } void TestRustListTypes::testOptionalString() { testOptional(QString(), &List::setOptionalString, &List::optionalString, "optionalString"); testOptional(QString(""), &List::setOptionalString, &List::optionalString, "optionalString"); testOptional(QString("Konqi"), &List::setOptionalString, &List::optionalString, "optionalString"); testOptional(QString("$𐐷𤭢"), &List::setOptionalString, &List::optionalString, "optionalString"); } void TestRustListTypes::testByteArray() { const char data[10] = {0x0,0x1,0x2,0x3,0x4,0x5,0x6,0x7,0x8,0x9}; test(QByteArray(data, 0), &List::setBytearray, &List::bytearray, "bytearray"); test(QByteArray(data, 10), &List::setBytearray, &List::bytearray, "bytearray"); } void TestRustListTypes::testOptionalByteArray() { testOptional(QByteArray(), &List::setOptionalBytearray, &List::optionalBytearray, "optionalBytearray"); const char data[10] = {0x0,0x1,0x2,0x3,0x4,0x5,0x6,0x7,0x8,0x9}; testOptional(QByteArray(data, 0), &List::setOptionalBytearray, &List::optionalBytearray, "optionalBytearray"); testOptional(QByteArray(data, 10), &List::setOptionalBytearray, &List::optionalBytearray, 
"optionalBytearray"); } void TestRustListTypes::testStringGetter() { List list; QCOMPARE(list.rowCount(), 10); QVariant value = list.data(list.index(0,0)); // value should be empty string in default implementation QVERIFY(value.isValid()); QCOMPARE(value.type(), QVariant::String); QCOMPARE(value.toString(), QString()); } void TestRustListTypes::testStringSetter() { // GIVEN List list; QSignalSpy spy(&list, &List::dataChanged); // WHEN const QModelIndex index(list.index(0,0)); const bool set = list.setData(index, "Konqi"); // THEN QVERIFY(set); QVERIFY(spy.isValid()); QCOMPARE(spy.count(), 1); QVariant value = list.data(list.index(0,0)); QCOMPARE(value.toString(), QString("Konqi")); } QTEST_MAIN(TestRustListTypes) #include "test_list_types.moc"
Staff Picks: EdTech, Blended Learning, Summer Learning Karen says, “Deborah’s heart-warming story of turning a painful life event into a chance to make a difference is powerful. She was relentless in her pursuit to open schools in tough neighborhoods for students that needed them most. She built schools of culture and pride and shares all about them in her first book. Definitely worth a read!” Tom says, “We had great guest contributors this week including Bryan & Emily Hassel’s summary of their work investigating strategies to leverage great teachers.” Caroline says, “I truly appreciate Bryan and Emily’s post about great teachers. Teaching is a tough job, especially with the conditions at most schools. Technology can greatly benefit teachers when it’s designed and implemented with both student and teacher in mind. It’s all about expanding the impact and reach.” Carri says, “A piece like this one is really helpful for readers that are past the point of needing to be convinced about the merits of blended learning and are ready to get down into the nuts and bolts of making the shift.” Sarah says, “This is another great post from our Smart Teacher Susan Lucille Davis, who is always thinking ahead of the curve with tech in her classroom. I used to love making electronic portfolios in high school and college. They’re such a great resource for internships, jobs, etc., and a way to track student progress.” Allison says, “I read constantly during the summer as a kid! That was my favorite thing to do! I think it’s great they put together this list for parents to help guide them with book choices to make reading enjoyable. Kids are more likely to read something they enjoy.”
Bowden Glacier Bowden Glacier () is a glacier lying on the southeast flank of Salient Ridge that flows northeast to Blue Glacier, Victoria Land. It was named by the New Zealand Geographic Board in 1994 for Charles Bowden, first chairman of the Ross Dependency Committee during Sir Edmund Hillary's South Pole Expedition, part of the Commonwealth Trans-Antarctic Expedition in 1957. Bowden also served as a member of the Parliament of New Zealand until 1955. References Category:Glaciers of Scott Coast
Find Me At Screen Rant Sunday, May 20, 2012 Battleship BATTLESHIP ** SPOILERS ** What's the opposite of pride that you paid money to watch a movie in a movie theater? Shame? What's worse than shame? Battleship. In the noisy and excruciating Battleship, we humans built giant beacons on top of the mountains of Oahu (the Hawaiians must have been delighted to have their natural wonders defaced) to send a beacon to an Earth-like planet detected millions of miles away. The aliens took us up on our invitation to come to Earth and play Battleship with us, but these aliens are so incompetent, they crashed their communications ship into one of our satellites while entering the atmosphere; the debris of their ship lands in and wrecks Hong Kong while the rest of their fleet lands in the Pacific near Hawaii with no radar or means of communications. I believe this is what the movie says is what happened. Meanwhile, the US Navy is playing war games and gets caught in a force field the aliens erect over Hawaii, rendering their fleet without radar either. So now, both fleets "can't see" each other - except they can because their ships are usually close enough to shoot each other - and thus can play the game Battleship with real ships, real guns, real explosions and real dying. Playing for the humans are the Dillon Panthers, led by Tim Riggins himself, Taylor Kitsch, a screw-up of a lieutenant commander with poor character, terrible decision-making skills, and a yellow streak down his back. Landry is there too. So is Rihanna for some reason. There's also Admiral Liam Neeson, but he's largely benched; Neeson is in the movie for about ten minutes tops. Meanwhile, Kitsch's supermodel girlfriend Brooklyn Decker is inexplicably in a movie of her own; she's a physical therapist trudging through the Oahu mountains with The Man With No Legs, and they run afoul of the aliens, who are humanoids dressed like Master Chief from Halo.
And get this, the aliens came all the way to Earth but they're vulnerable to sunlight. Battleship assaults the audience with relentless, mind-numbing nonsense, as our Navy sailors lose the game and get all their ships sunk, until the last hurrah when Kitsch and his crew re-commission the ancient USS Missouri, complete with the original crew from World War II and Korea manning the steam engines and guns, to sink the alien ships and save the world. No one ever says "You sank my battleship!", but that sinking feeling lingers long after one leaves the theater.
Q: Changing IdentityUser Type in ASP.NET Core 2.1 I want my User to have a Guid as the primary key, but when I create my own user type, my website throws an exception on startup. Does anyone know how to change the IdentityUser type? I did this: services.AddIdentity<MyUser, MyRole>() .AddEntityFrameworkStores<UBContext>() .AddDefaultUI() .AddDefaultTokenProviders(); But when my program starts up I get this error: InvalidOperationException: No service for type 'Microsoft.AspNetCore.Identity.UserManager`1[Microsoft.AspNetCore.Identity.IdentityUser]' has been registered. Does that have something to do with the fact that the identity UI is now in a separate lib and a controller in that lib is expecting a UserManager<IdentityUser>? How can I override that? A: OK, I found the problem. _LoginPartial.cshtml injects @inject SignInManager<IdentityUser> @inject UserManager<IdentityUser> so you need to make sure that they are updated to @inject SignInManager<MyUser> @inject UserManager<MyUser>
Q: Save data through session in C# web application I am trying to create a web application that has a button that changes an image. This is my code: public partial class _Default : System.Web.UI.Page { private bool test ; protected void Page_Load(object sender, EventArgs e) { } protected void Button1_Click(object sender, EventArgs e) { if (test) { Image1.ImageUrl = @"~/images/il_570xN.183385863.jpg"; test = false; }else { Image1.ImageUrl = @"~/images/BAG.png"; test = true; } } } My problem is that the page reloads every time; after I click the button, "test" returns to its initial value. How can I have a variable that I can access throughout the session? Please note, I don't want to solve this specific image problem, but to know how to keep data until the user closes the page. A: You can store arbitrary values in Session Session["someKey1"] = "My Special Value"; Session["someKey2"] = 34; Or more complex values: Session["myObjKey"] = new MyAwesomeObject(); And to get them back out: var myStr = Session["someKey1"] as String; var myInt = Session["someKey2"] as Int32?; var myObj = Session["myObjKey"] as MyAwesomeObject;
CRPS and Hyperacusis

Hyperacusis is a hearing condition which results in the sounds of everyday life becoming uncomfortably loud and often painful. The condition is surprisingly common, affecting around 2% of the general adult population. For many people hyperacusis is a minor annoyance that they learn to live with, but for some the condition affects them so greatly that they become isolated, largely withdrawing from interpersonal contact. In some sufferers, their sensitivity is limited to a particular sound or sounds and in those cases the terms phonophobia or misophonia may be applied.

CRPS, Dystonia and Hyperacusis

Research has shown that people suffering CRPS-related dystonia are substantially more likely to suffer hyperacusis than the general adult population. One study put the number at more than one in three.

Dystonia

Dystonia is a movement disorder which causes uncontrollable contractions of the muscles in one or more parts of the body. This results in the painful twisting and distortion of those parts of the body affected. Dystonia is the most common movement disorder suffered by people with CRPS and is often a sign that they have reached stage 3 of the condition. In one study, of 185 CRPS patients studied, 121 of them were found to be suffering a movement disorder and of those, 91% were diagnosed with dystonia.

Evidence of spread

It is thought that the prevalence of hyperacusis among people with CRPS-related dystonia may reflect the spreading of central sensitisation to the auditory circuitry connecting the ear to the brain; further evidence of how CRPS can gradually invade the body.

Treatment for hyperacusis

There is no ‘cure’ as such for hyperacusis. However, people suffering with the condition are often referred for sound therapy with an audiologist or ENT specialist and/or cognitive behavioural therapy with a psychologist. Both are often effective at helping people to adapt to life with the condition.
A report from ProPublica states that the Republican National Committee is omitting information about how voters feel about President Trump.
Semiconductors are used in integrated circuits for a wide range of applications, including personal computers, music and/or video devices, multimedia devices, digital assistants, communications devices, and so forth. In general, integrated circuits manufactured using modern fabrication processes may be extremely consistent, with individual integrated circuits from a single wafer being substantially identical to one another in terms of performance. However, fabrication process variations (or simply, process variations) may occur. Process variations may impact field effect transistor channel widths and lengths, gate oxide thicknesses, doped material concentrations, and so forth. A fairly common side-effect due to variations in the fabrication process used to create integrated circuits may be changes in threshold voltage (VTH) of transistors in the integrated circuits. A change in threshold voltage may alter leakage current, which may impact dynamic random access memory (DRAM) charge retention times, transistor operating speeds, and so forth.

FIG. 1a is a diagram of a prior art ring oscillator 100 used to characterize process variations in an integrated circuit. Ring oscillator 100 comprises an odd number of inverters 105-109 arranged serially in a loop. When an integrated circuit containing ring oscillator 100 is powered on, ring oscillator 100 will also be energized and automatically produce a clock signal at a frequency that is a function of inverters 105-109. The frequency of the clock signal may be measured to determine global process variations. For example, if the frequency of the clock signal is greater than an expected frequency based on nominal values for inverters 105-109, then the threshold voltage of at least one of the inverters may have decreased below an expected value.
Similarly, if the frequency of the clock signal is smaller than the expected frequency, then the threshold voltage of at least one of the inverters may have increased beyond the expected value.

FIG. 1b is a diagram of a prior art single stage of a ring oscillator 150. Rather than having only inverters arranged serially in a loop, each stage of ring oscillator 150 comprises an inverter 155 and a pass gate 160. Each stage also includes an effective load 165, modeled as a capacitor. Effective load 165 may be representative of a subsequent stage coupled to pass gate 160. Pass gate 160 may be used to make or break the loop. Pass gate 160 may be implemented using a field effect transistor (FET), such as an NFET or a PFET. Preferably, each stage of ring oscillator 150 includes a pass gate formed from the same type of FET. The use of a particular type of FET may allow for a characterization of process variations for that particular type of FET. For example, if NFETs are used to implement pass gate 160, then it may be possible to determine global process variations for NFETs. Similarly, if PFETs are used, then it may be possible to determine global process variations for PFETs.

FIG. 2 is a diagram of an integrated circuit 200. Integrated circuit 200 includes integrated circuitry 205 that implements the functionality of integrated circuit 200. Integrated circuit 200 also includes several ring oscillators, such as ring oscillator 210 arranged along a top side of integrated circuit 200, ring oscillators 215-216 arranged along left and right edges of integrated circuit 200, ring oscillator 220 arranged on a lower right hand corner of integrated circuit 200, ring oscillator 225 formed in an interior of integrated circuit 200, and so forth. A ring oscillator may also be formed along more than one edge of integrated circuit 200. Using the ring oscillators may allow for a measurement of process variations throughout integrated circuit 200.
In general, it is desirable to have multiple ring oscillators or a large ring oscillator distributed over different portions of integrated circuit 200 so that the elements of the ring oscillators may encounter process variations like the circuitry in integrated circuit 200. FIG. 2 may illustrate an exaggerated use of ring oscillators in an integrated circuit.
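The relationship the passage relies on, between per-stage delay and the measured clock frequency, can be sketched with the standard ring-oscillator formula (the formula and the delay values below are textbook assumptions for illustration, not taken from the patent text):

```python
def ring_oscillator_frequency(num_stages, stage_delay_s):
    """Ideal ring-oscillator frequency: a logic edge must traverse the
    loop twice (once inverted, once restored) per full period, so
    T = 2 * N * t_pd and f = 1 / (2 * N * t_pd)."""
    if num_stages % 2 == 0:
        raise ValueError("a simple inverter ring needs an odd stage count")
    return 1.0 / (2.0 * num_stages * stage_delay_s)

# Five inverters (as in FIG. 1a) with an assumed 20 ps propagation delay
# per stage; a slower process corner (e.g. higher threshold voltage)
# lengthens the per-stage delay and lowers the measured frequency.
nominal = ring_oscillator_frequency(5, 20e-12)      # 5 GHz
slow_corner = ring_oscillator_frequency(5, 25e-12)  # 4 GHz
print(nominal / 1e9, slow_corner / 1e9)
```

Comparing the measured frequency against the nominal one is exactly the characterization step the passage describes: a deviation in either direction points at a threshold-voltage shift.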
After AT&T announced that it would be sending out notices to iPhone owners and taking action against those who are illegally tethering without an approved plan (in the interest of fairness, as the carrier claims), it appears that long-time rival carrier Verizon Wireless will be doing the same and joining AT&T in its quest to stop illegal tethering.

Both carriers currently offer a tethering add-on plan for users who need to share the mobile broadband data connection on their Android or iOS smartphone over WiFi, Bluetooth, and/or a USB connection, for an added monthly premium. Users who try to circumvent the data policies by utilizing a jailbreak app on an iPhone, or by rooting their Android device to run a non-approved app that exploits the smartphone's native tethering capabilities without subscribing to the optional plan, are now being asked (or forced) into a tethering plan. AT&T says that users who continue to violate the data policy will automatically be switched to the appropriate plan. In Verizon's case, however, it seems that those who use illegal tethering apps are now being redirected to a webpage when they try to connect to the Internet, telling them that they need to be on the appropriate data tethering plan for $20 more per month.

The change was noticed by some Verizon customers just a day after AT&T announced its policy against tethering and its more hardline approach to the practice. In the past, Verizon Wireless has been proactive against tethering without the appropriate data plan: the carrier removed apps that enable the feature without the appropriate data package from the Android Market in an attempt to steer users toward subscribing to a tethering add-on.

Via: Electronista
Q: MySQL slow on SELECT with "OR" on multiple columns

I have a table that has about 3 million rows. When I use this query:

    SELECT order_code, store_debit, total_price
    FROM orders
    WHERE 4624603 IN (id, pid) AND `status` = -6;

or this query (using OR):

    SELECT order_code, store_debit, total_price
    FROM orders
    WHERE (id = 4624603 OR pid = 4624603) AND `status` = -6;

it takes more than 17 seconds. But when I separate it into two queries:

    SELECT order_code, store_debit, total_price
    FROM orders
    WHERE id = 4624603 AND `status` = -6;

and

    SELECT order_code, store_debit, total_price
    FROM orders
    WHERE pid = 4624603 AND `status` = -6;

it returns results almost instantly. How can I optimize the first query to make it run as fast as the other two? Thank you all!

UPDATE: These are the indexes of the table:

    cop_type            payment_type, cod_type, cash_transfer, pid, status, created                           Normal BTREE
    all_type            payment_type, cash_transfer, pid, status, created                                     Normal BTREE
    theodoi             user_id, pid, payment_type, cash_transfer, status, report_atm, pay_status, created    Normal BTREE
    item_lib            item_id, status, pay_status, payment_type, created                                    Normal BTREE
    search_phone        phone                                                                                 Normal BTREE
    search_order_code   order_code                                                                            Normal BTREE
    search_order_email  email                                                                                 Normal BTREE
    order_work          cod_id, status, district, payment_type, pay_status, cash_transfer, cod_type, free_ship  Normal BTREE
    select_hot          province, status, type, created, payment_type, pay_status, item_id                    Normal BTREE
    coupon_after_buy    id, pid, status, free_ship                                                            Normal BTREE
    search_ofice        office, created, status, ship_status                                                  Normal BTREE
    search_item         item_id, office, created, status, ship_status                                         Normal BTREE
    idx_ward_id         ward_id                                                                               Normal BTREE
    idx_street          street_id                                                                             Normal BTREE
    idx_group_code      group_code, pid                                                                       Normal BTREE
    idx_paytime         payment_time                                                                          Normal BTREE
    search_all          pid, status, created                                                                  Normal BTREE
    idx_country_type    country_type                                                                          Normal BTREE
    idx_book_time       book_time                                                                             Normal BTREE

A: How about using a UNION of the last two queries? The IN/OR condition is probably forcing a table scan.
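A sketch of the UNION rewrite the answer suggests, shown with SQLite purely so it is runnable end to end; the table contents are invented, and SQLite's planner differs from MySQL's, so this demonstrates only that the two query forms return the same rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    pid INTEGER,
    status INTEGER,
    order_code TEXT,
    store_debit REAL,
    total_price REAL)""")
# An index usable by the pid-based lookup (id is already the primary key).
cur.execute("CREATE INDEX idx_pid_status ON orders (pid, status)")

rows = [
    (4624603, 1, -6, "A1", 0.0, 10.0),        # matches on id
    (5000000, 4624603, -6, "B2", 1.0, 20.0),  # matches on pid
    (6000000, 7, 0, "C3", 2.0, 30.0),         # matches neither
]
cur.executemany("INSERT INTO orders VALUES (?, ?, ?, ?, ?, ?)", rows)

# Slow form: OR across two different columns often prevents index use.
or_rows = cur.execute(
    "SELECT order_code FROM orders "
    "WHERE (id = 4624603 OR pid = 4624603) AND status = -6").fetchall()

# Fast form: two individually indexable lookups combined with UNION
# (which also de-duplicates rows matching on both columns).
union_rows = cur.execute(
    "SELECT order_code FROM orders WHERE id = 4624603 AND status = -6 "
    "UNION "
    "SELECT order_code FROM orders WHERE pid = 4624603 AND status = -6"
).fetchall()

print(sorted(or_rows) == sorted(union_rows))  # same result set
```

In MySQL, each branch of the UNION can use its own index (the primary key for `id`, a `pid`-leading index such as `search_all` for `pid`), which is why the two separate queries in the question were fast.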
<?php
/**
 * Genesis Sample.
 *
 * Onboarding config shared between Starter Packs.
 *
 * Genesis Starter Packs give you a choice of content variation when activating
 * the theme. The content below is common to all packs for this theme.
 *
 * @package Genesis Sample
 * @author  StudioPress
 * @license GPL-2.0-or-later
 * @link    https://www.studiopress.com/
 */

return [
	'plugins' => [
		[
			'name'       => __( 'Atomic Blocks', 'genesis-sample' ),
			'slug'       => 'atomic-blocks/atomicblocks.php',
			'public_url' => 'https://atomicblocks.com/',
		],
		[
			'name'       => __( 'Simple Social Icons', 'genesis-sample' ),
			'slug'       => 'simple-social-icons/simple-social-icons.php',
			'public_url' => 'https://wordpress.org/plugins/simple-social-icons/',
		],
		[
			'name'       => __( 'Genesis eNews Extended (Third Party)', 'genesis-sample' ),
			'slug'       => 'genesis-enews-extended/plugin.php',
			'public_url' => 'https://wordpress.org/plugins/genesis-enews-extended/',
		],
		[
			'name'       => __( 'WPForms Lite (Third Party)', 'genesis-sample' ),
			'slug'       => 'wpforms-lite/wpforms.php',
			'public_url' => 'https://wordpress.org/plugins/wpforms-lite/',
		],
	],
	'content' => [
		'blocks' => [
			'post_title'     => 'Block Content Examples',
			'post_content'   => require dirname( __FILE__ ) . '/import/content/block-examples.php',
			'post_type'      => 'page',
			'post_status'    => 'publish',
			'comment_status' => 'closed',
			'ping_status'    => 'closed',
			'meta_input'     => [
				'_genesis_layout' => 'full-width-content',
			],
		],
		'about' => [
			'post_title'     => 'About Us',
			'post_content'   => require dirname( __FILE__ ) . '/import/content/about.php',
			'post_type'      => 'page',
			'post_status'    => 'publish',
			'featured_image' => CHILD_URL . '/config/import/images/about.jpg',
			'comment_status' => 'closed',
			'ping_status'    => 'closed',
			'meta_input'     => [
				'_genesis_layout'              => 'full-width-content',
				'_genesis_hide_singular_image' => true,
			],
		],
		'contact' => [
			'post_title'     => 'Contact Us',
			'post_content'   => require dirname( __FILE__ ) . '/import/content/contact.php',
			'post_type'      => 'page',
			'post_status'    => 'publish',
			'comment_status' => 'closed',
			'ping_status'    => 'closed',
		],
		'landing' => [
			'post_title'     => 'Landing Page',
			'post_content'   => require dirname( __FILE__ ) . '/import/content/landing-page.php',
			'post_type'      => 'page',
			'post_status'    => 'publish',
			'page_template'  => 'page-templates/landing.php',
			'comment_status' => 'closed',
			'ping_status'    => 'closed',
			'meta_input'     => [
				'_genesis_layout'              => 'full-width-content',
				'_genesis_hide_breadcrumbs'    => true,
				'_genesis_hide_singular_image' => true,
				'_genesis_hide_footer_widgets' => true,
			],
		],
	],
	'navigation_menus' => [
		'primary' => [
			'homepage' => [
				'title' => 'Home',
			],
			'about' => [
				'title' => 'About Us',
			],
			'contact' => [
				'title' => 'Contact Us',
			],
			'blocks' => [
				'title' => 'Block Examples',
			],
			'landing' => [
				'title' => 'Landing Page',
			],
		],
	],
	'widgets' => [
		'footer-1' => [
			[
				'type' => 'text',
				'args' => [
					'title'  => 'Design',
					'text'   => '<p>With an emphasis on typography, white space, and mobile-optimized design, your website will look absolutely breathtaking.</p><p><a href="#">Learn more about design</a>.</p>',
					'filter' => 1,
					'visual' => 1,
				],
			],
		],
		'footer-2' => [
			[
				'type' => 'text',
				'args' => [
					'title'  => 'Content',
					'text'   => '<p>Our team will teach you the art of writing audience-focused content that will help you achieve the success you truly deserve.</p><p><a href="#">Learn more about content</a>.</p>',
					'filter' => 1,
					'visual' => 1,
				],
			],
		],
		'footer-3' => [
			[
				'type' => 'text',
				'args' => [
					'title'  => 'Strategy',
					'text'   => '<p>We help creative entrepreneurs build their digital business by focusing on three key elements of a successful online platform.</p><p><a href="#">Learn more about strategy</a>.</p>',
					'filter' => 1,
					'visual' => 1,
				],
			],
		],
	],
];
Background {#Sec1}
==========

Rectal cancer radiotherapy is a complex problem because of the shape of the target volumes and the need to minimize the involvement of organs at risk (OAR) such as the small bowel, bladder and femur heads \[[@CR1]\]. Many planning studies have demonstrated the advantages of intensity-modulated radiation therapy (IMRT) in target coverage and normal tissue sparing over three-dimensional conformal radiotherapy (3D-CRT) for rectal cancer patients \[[@CR2]-[@CR4]\]. However, drawbacks of the IMRT technique have also been reported. The prolonged delivery time per fraction may worsen the accuracy of treatment because of increased intra-fractional patient motion. In addition, more monitor units (MU) and a bigger volume of normal tissue exposed to low radiation doses would increase the possibility of radiation-induced secondary malignancies \[[@CR5],[@CR6]\].

Volumetric-modulated arc therapy (VMAT) is a technique enabling an intensity-modulated dose to be delivered during a continuous gantry rotation. With its dynamically moving multileaf collimator (MLC), variable dose rate and variable gantry rotation speed during the rotation, VMAT can achieve highly conformal dose distributions and is essentially an alternative form of IMRT \[[@CR7],[@CR8]\]. Moreover, the improvement in treatment delivery efficiency and the reduction in MU usage of this novel technique could overcome the reported shortcomings of IMRT \[[@CR9]\].

Most planning studies in various tumor sites have compared VMAT with either fixed-field IMRT or 3D-CRT \[[@CR10]-[@CR13]\]. However, the efficacy of VMAT may be organ-site dependent. In rectal cancer, VMAT has clear superiority over 3D-CRT with regard to improving dose conformity and OAR sparing \[[@CR14],[@CR15]\]. However, the distinction between VMAT and fixed-field IMRT is not well documented. To our knowledge, there is only one planning study comparing IMRT and VMAT in rectal cancer to date.
This study was done by Cilla et al. \[[@CR16]\] in Italy. However, the difference between single-arc (SA) VMAT and double-arc (DA) VMAT for rectal cancer patients was not examined in Cilla's study. In the present study, we compared the dosimetric parameters among fixed-field IMRT, SA-VMAT and DA-VMAT for rectal cancer patients and evaluated the efficacy of the VMAT technique in rectal cancer treatment.

Methods {#Sec2}
=======

Patient and simulation {#Sec3}
----------------------

Fifteen patients with pathologically proven, locally advanced rectal cancer, subjected to radical postoperative radiotherapy, were selected for this study. There were nine males and six females. The median age of these patients was 59 (range, 38-79). The research protocol was reviewed and approved by the Ethics Committee of the General Hospital of Ningxia Medical University. All patients were simulated in the prone position with a full bladder and immobilized with a belly board to displace the intestinal loops of the small bowel anteriorly as much as possible. Computed tomography (CT) scans were acquired with 3-mm slice thickness from the L1 vertebral body to 5 cm below the perineum.

Target volume definition {#Sec4}
------------------------

Target volumes were outlined on the planning CT scan by the treating radiation oncologist. The clinical target volume (CTV) was delineated according to published consensus guidelines \[[@CR17]\]. The planning target volume (PTV) was defined with margins around the CTV of 0.5 cm laterally, 1 cm superior-inferior and 0.8 cm anterior-posterior. The bladder, small bowel and femur heads were contoured as OAR. The small bowel loops were outlined 3 cm above and below the PTV, and the bladder and femur heads were fully outlined. In addition, the healthy tissue was defined as the patient's volume included in the CT dataset minus the PTV volume.
Dose constraints for PTV and normal tissue {#Sec5}
------------------------------------------

Dose prescription to the PTV was 50 Gy in 2 Gy per fraction. Dose constraints for the PTV were as follows: (1) ≥ 98% of the PTV receives ≥ 93% of the prescribed dose; (2) ≤ 10% of the PTV receives ≥ 105% of the prescribed dose; (3) ≤ 5% of the PTV receives ≥ 110% of the prescribed dose; (4) none of the PTV receives ≥ 115% of the prescribed dose. For the OAR: small bowel V35 Gy \< 180 cc, V40 Gy \< 100 cc, V45 Gy \< 65 cc, and no small bowel volume should receive 50 Gy; bladder D40% \< 40 Gy, D15% \< 45 Gy, and no bladder volume should receive 50 Gy; femur heads D40% \< 40 Gy, D25% \< 45 Gy, and no femur head volume should receive 50 Gy.

Planning techniques {#Sec6}
-------------------

Three sets of plans (IMRT, SA-VMAT and DA-VMAT) were created on the Eclipse Treatment Planning System (Version 11.0; Varian Medical Systems) and calculated using the Anisotropic Analytical Algorithm with a 2.5 mm calculation grid; a tissue heterogeneity correction was applied. The same dose constraint parameters for the PTV and normal tissue were used for IMRT and VMAT planning.

IMRT planning: IMRT plans were optimised with the Direct Machine Parameter Optimization (DMPO) approach using seven coplanar beams (0°, 50°, 100°, 150°, 210°, 260° and 310°) with a dose rate of 400 MU/min and a beam energy of 6-MV photons. The maximal number of segments was set to 100, with a minimal number of MU per segment equal to 3.

VMAT planning: VMAT plans were calculated using 6-MV photons with a maximum variable dose rate of 600 MU/min. Single-arc plans corresponded to a single 360° rotation, and double-arc plans to two coplanar 360° arcs sharing the same isocenter and optimised independently. The two arcs were delivered with opposite rotations (clockwise and counter-clockwise), which minimized the off-treatment time between the two beams to about 25 seconds.
For SA-VMAT, field size and collimator rotation were determined by the automatic tool from Eclipse to encompass the PTV. We ensured that the collimator was always rotated to a value different from zero in order to avoid the tongue-and-groove effect. For DA-VMAT, the first arc was similar to the one defined in the single-arc process, except for the collimator rotation of the second arc, which was 360° minus X (where X was the collimator rotation of the first arc).

Plan evaluation and comparison {#Sec7}
------------------------------

Dosimetric parameters used to analyze target coverage and dose distribution in the PTV were as follows: (1) mean dose; (2) Vn Gy, the percentage of the volume receiving ≥ n Gy; (3) D98%, the minimum dose to 98% of the PTV; (4) D2%, the maximum dose to 2% of the PTV; (5) the conformity index (CI), defined as the volume encompassed by the 95% isodose divided by the PTV volume. For the OAR, the analysis included the mean dose, the maximum dose expressed as D2%, and a set of appropriate Vn Gy and Dn values. For healthy tissue, we report the volume of the body minus the PTV receiving low doses (V5, V10 and V20 Gy). The number of MU per fraction required by each plan and the treatment delivery time (from beam-on to beam-off) were recorded.

Statistical analysis {#Sec8}
--------------------

To appraise the differences between the techniques, the Wilcoxon non-parametric two-sample test was applied. Differences were considered statistically significant at *p* \< 0.05.

Results {#Sec9}
=======

PTV volumes, target coverage, conformity, and dose homogeneity {#Sec10}
--------------------------------------------------------------

The median PTV volume outlined in the 15 patients was 1317.2 cc (range, 1048.4-1587.3). The IMRT, SA-VMAT and DA-VMAT plans met the prescription requirements for the PTV in all cases. Dosimetric parameters for the PTV in these three plans are listed and compared in Table [1](#Tab1){ref-type="table"}.
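As an illustration of the conformity index defined in this section (a sketch, not code from the study; the isodose volume below is an invented example, while the PTV volume matches the study's reported median):

```python
def conformity_index(v95_isodose_cc, ptv_volume_cc):
    """CI = volume enclosed by the 95% isodose / PTV volume.
    Values near 1.0 indicate a conformal plan; larger values mean
    more surrounding tissue receives near-prescription dose."""
    return v95_isodose_cc / ptv_volume_cc

# Illustrative numbers only: a 1317.2 cc PTV (the study's median)
# with an assumed 1620 cc enclosed by the 95% isodose.
ci = conformity_index(1620.0, 1317.2)
print(round(ci, 2))
```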
DA-VMAT achieved the highest minimum PTV dose and the lowest maximum dose, resulting in the most homogeneous PTV dose distribution. DA-VMAT also yielded the best CI, although the difference was not statistically significant. Between SA-VMAT and IMRT, the target dose coverage was largely comparable; however, SA-VMAT was able to achieve better V95 and V107 values. The dose distribution and DVH of the PTV for a typical patient are shown in Figures [1](#Fig1){ref-type="fig"} and [2](#Fig2){ref-type="fig"}.

Table 1. **Dosimetric parameters comparison among IMRT, SA-VMAT and DA-VMAT technique for PTV (mean ± standard deviation)**

| | IMRT | SA-VMAT | DA-VMAT | *P* (IMRT vs SA-VMAT) | *P* (IMRT vs DA-VMAT) |
|---|---|---|---|---|---|
| Dmean (Gy) | 52.0 ± 0.2 | 52.0 ± 0.4 | 51.6 ± 0.2 | 0.167 | 0.001 |
| D2% (Gy) | 54.1 ± 0.2 | 54.0 ± 0.8 | 53.1 ± 0.3 | 0.366 | 0.001 |
| D98% (Gy) | 49.2 ± 0.3 | 49.3 ± 0.2 | 49.5 ± 0.2 | 0.230 | 0.015 |
| V95 (%) | 99.6 ± 0.3 | 99.9 ± 0.1 | 99.9 ± 0.1 | 0.001 | 0.001 |
| V107 (%) | 10.2 ± 5.1 | 5.3 ± 3.9 | 0.6 ± 0.4 | 0.017 | 0.001 |
| CI | 1.23 ± 0.05 | 1.28 ± 0.07 | 1.21 ± 0.07 | 0.006 | 0.305 |
| MU | 499.0 ± 41.6 | 418.5 ± 41.9 | 438.8 ± 31.7 | 0.001 | 0.001 |
| Treatment time (min) | 8.0 ± 0.7 | 1.5 ± 0.2 | 3.0 ± 0.3 | 0.001 | 0.001 |

IMRT: intensity modulated radiotherapy, SA-VMAT: single-arc volumetric modulated arc therapy, DA-VMAT: double-arc volumetric modulated arc therapy, PTV: Planning Target Volume, CI: conformity index, MU: monitor units.

Figure 1. **Representation of isodose distribution in axial, coronal and sagittal views for fixed IMRT (the first line) and DA-VMAT (the second line).** IMRT: intensity modulated radiotherapy, DA-VMAT: double-arc volumetric modulated arc therapy.

Figure 2. **Dose-volume histograms for PTV of an individual patient in the present study.** IMRT: intensity modulated radiotherapy, DA-VMAT: double-arc volumetric modulated arc therapy.

OAR {#Sec11}
---

Table [2](#Tab2){ref-type="table"} shows the dosimetric parameters of the OAR, including the small bowel, bladder, femur heads and healthy tissue. For the small bowel, no sparing effort was devoted to this organ. The DVH parameters Dmean, D2%, V15 and V30 were increased with the VMAT technique. For the bladder, planning objectives were met by all techniques and no relevant difference was observed between DA-VMAT and IMRT, except for the maximum dose. In addition, SA-VMAT increased the Dmean, V40 and V50 of the bladder compared with IMRT. Concerning the femur heads, SA-VMAT and DA-VMAT both showed a significant increase in mean dose and D10%. For healthy tissue, although V20 was significantly lower for VMAT than for IMRT, V5 and V10 were significantly larger for VMAT than for IMRT. DVHs of normal tissue for a typical patient are shown in Figures [3](#Fig3){ref-type="fig"} and [4](#Fig4){ref-type="fig"}.

Table 2. **Dosimetric parameters comparison for OAR (mean ± standard deviation)**

| | IMRT | SA-VMAT | DA-VMAT | *P* (IMRT vs SA-VMAT) | *P* (IMRT vs DA-VMAT) |
|---|---|---|---|---|---|
| **Small bowel** | | | | | |
| Dmean (Gy) | 12.1 ± 8.0 | 15.5 ± 7.7 | 15.3 ± 7.6 | 0.001 | 0.002 |
| D2% (Gy) | 35.7 ± 12.1 | 40.0 ± 10.2 | 39.6 ± 10.0 | 0.005 | 0.009 |
| V15 (cc) | 132.0 ± 110.2 | 185.8 ± 130.0 | 186.4 ± 128.8 | 0.004 | 0.005 |
| V30 (cc) | 48.8 ± 20.0 | 77.5 ± 21.0 | 78.8 ± 21.5 | 0.001 | 0.002 |
| **Bladder** | | | | | |
| Dmean (Gy) | 39.0 ± 3.3 | 42.2 ± 4.0 | 40.0 ± 3.6 | 0.001 | 0.074 |
| D2% (Gy) | 52.9 ± 0.6 | 53.2 ± 0.7 | 52.4 ± 0.5 | 0.132 | 0.005 |
| V30 (cc) | 437.1 ± 156.5 | 479.4 ± 176.5 | 463.2 ± 191.2 | 0.061 | 0.460 |
| V40 (cc) | 281.9 ± 118.9 | 366.2 ± 155.8 | 321.1 ± 167.1 | 0.002 | 0.140 |
| V50 (cc) | 112.0 ± 55.6 | 185.7 ± 107.4 | 151.1 ± 115.6 | 0.001 | 0.112 |
| **Femur heads** | | | | | |
| Dmean (Gy) | 19.9 ± 4.1 | 23.2 ± 3.4 | 24.3 ± 2.8 | 0.025 | 0.012 |
| D10% (Gy) | 28.2 ± 5.5 | 29.6 ± 4.3 | 31.5 ± 4.1 | 0.173 | 0.047 |
| **Healthy tissues** | | | | | |
| V5 (cc) | 9454.9 ± 1999.3 | 9827.4 ± 2153.3 | 9815.3 ± 2142.0 | 0.001 | 0.001 |
| V10 (cc) | 7833.9 ± 1591.8 | 8167.6 ± 1789.4 | 8331.8 ± 1733.7 | 0.013 | 0.001 |
| V20 (cc) | 5224.2 ± 1084.5 | 4721.8 ± 884.3 | 4750.7 ± 876.5 | 0.002 | 0.005 |

OAR: organ at risk, IMRT: intensity modulated radiotherapy, SA-VMAT: single-arc volumetric modulated arc therapy, DA-VMAT: double-arc volumetric modulated arc therapy.

Figure 3. **Dose-volume histograms for small bowel and bladder of an individual patient in the present study.** IMRT: intensity modulated radiotherapy, DA-VMAT: double-arc volumetric modulated arc therapy.

Figure 4. **Dose-volume histograms for femur head and healthy tissue of an individual patient in the present study.** IMRT: intensity modulated radiotherapy, DA-VMAT: double-arc volumetric modulated arc therapy.

MU and treatment delivery time {#Sec12}
------------------------------

The MU was significantly reduced by the use of VMAT. The lower MU, combined with fewer beam mode-up procedures, resulted in a much shorter treatment time with VMAT. Compared to a delivery time of 8 min for IMRT, the treatment delivery time with VMAT was definitely shorter: 1.5 and 3.0 minutes for single and double arcs, respectively (Table [1](#Tab1){ref-type="table"}).

Discussion {#Sec13}
==========

Arc therapy was initially reported in rectal cancer by Duthoy et al. \[[@CR14]\] in a planning study comparing 3D-CRT and intensity-modulated arc therapy (IMAT). They found that IMAT plans were deliverable within a 5-10-minute time slot and resulted in a lower dose to the small bowel than 3D-CRT plans, without creating significant underdosage in the PTV. Richetti et al. \[[@CR15]\] reported their technical and clinical experience with 25 patients with locally advanced rectal cancer treated with VMAT and performed a planning comparison with a matched cohort of patients who underwent conventional conformal radiotherapy. VMAT improved the conformity of doses, presented similar target coverage with lower maximum doses, significant sparing of the femur heads and a significant reduction in the integral and mean dose to healthy tissue. Acute toxicity was limited to Grade 1-2 diarrhea in 40% and Grade 3 in 8% of VMAT patients, versus 45% and 5% of conventional conformal radiotherapy patients, compatible with the known effects of concomitant chemotherapy. To our knowledge, there is only one planning study comparing IMRT and VMAT in rectal cancer to date. This study was done by Cilla et al. \[[@CR16]\] in Italy.
In their study, VMAT had the highest level of conformity, but the dose distribution across the PTV was less homogeneous than with IMRT and 3D-CRT. With respect to the V15 objective, they found small bowel irradiation to be significantly reduced to 171.2 cc with VMAT, compared with 199.5 cc and 227.4 cc with IMRT and 3D-CRT, respectively.

In the present study, we evaluated the feasibility and efficiency of IMRT, SA-VMAT and DA-VMAT for the treatment of rectal cancer. The major advantage of the VMAT technique is the significant reduction in treatment time, together with its lower MU. This improvement comes mainly from the elimination of all the non-beam-on time, such as MLC movements to realize the various segments of IMRT beams or gantry motion to reach the fixed positions. The reduction in treatment delivery time is clinically relevant considering patient comfort and intra-fraction motion. The higher delivery efficiency also allows more time for image-guided radiotherapy to further reduce the treatment margin and toxicity \[[@CR18]\].

Although there is a clear advantage of VMAT in terms of faster delivery and lower MU, this needs to be balanced against the dosimetric differences. Compared with IMRT, DA-VMAT provided better coverage of the target but not better normal tissue sparing, especially for the small bowel, which differs from the conclusion made in Cilla's study. The first reason for this difference is that the volume of healthy tissue receiving doses below 20 Gy was increased when using the VMAT technique in the present analysis. Although this finding is supported by other dosimetric studies \[[@CR19]-[@CR21]\], the low-dose bath may enlarge the V15 of the small bowel. Another reason for the different small bowel sparing effect is that we enrolled postoperative rectal cancer patients in the present study. It has been shown that patients who receive postoperative radiotherapy have a larger portion of small bowel in the pelvis \[[@CR22]\].
Lastly, we followed the dose-volume constraints of the RTOG 0822 protocol and did not impose lower dose-volume constraints in the planning procedure, which may have contributed to the enlargement of the small bowel V15.

Conclusion {#Sec14}
==========

VMAT is a new radiation technique that combines the ability to achieve highly conformal dose distributions with highly efficient treatment delivery. Considering its inferior normal tissue sparing, especially for the small bowel, VMAT needs further investigation in rectal cancer treatment.

**Competing interests**

The authors declare that they have no competing interests.

**Authors' contributions**

JS and WK participated in creating the IMRT and VMAT plans and in data collection. YYW participated in acquiring data, data analysis and data interpretation, and drafted the manuscript. ZD, GY and HZ participated in patient enrollment. All authors read and approved the final manuscript.

This work was supported by the Ningxia Science & Technology Supporting Plan Program (2012).
Amyotrophic lateral sclerosis (ALS; Lou Gehrig's disease) is a fatal neurodegenerative disorder that leads to rapidly progressive paralysis and respiratory failure. ALS is the third most common neurodegenerative disease in the Western world, and there are currently no effective therapies. Frontotemporal dementia (FTD) is the most common form of dementia in the population under the age of 65. An overlap between these two clinically distinct neurological diseases has long been recognized, but the molecular basis of this intersection was unknown.

In 2011, the Neuromuscular Diseases Research Section (NDRS), part of the Laboratory of Neurogenetics at the National Institute on Aging, identified the major genetic cause of both ALS and FTD. To do this, Dr. Traynor (chief of NDRS) organized a worldwide consortium, bringing together groups that had previously been competitors to focus their efforts on identifying this gene. This was made possible by the next-generation sequencing technologies available at the NIH. This innovative approach worked, and his group published the cause of chromosome 9-linked ALS/FTD in the journal Neuron in September 2011.

In these cases, the disease is caused by a six-base-pair segment of DNA that is pathologically repeated over and over again, up to several thousand times. This so-called large hexanucleotide repeat disrupts the C9ORF72 gene located on chromosome 9. This is the most common genetic cause of both ALS and FTD identified to date, accounting for approximately 40% of all familial cases of ALS and FTD in European and North American populations. Further, Dr. Traynor's group has shown that this mutation underlies about 8% of cases of sporadically occurring ALS and FTD that lack a family history. This represents the first time that a common genetic cause has been identified for the sporadic form of these diseases.
In a separate publication in The New England Journal of Medicine, they have also shown that the same large hexanucleotide repeat expansion underlies 1% of patients clinically diagnosed with Alzheimer's disease. A one percent reduction in the number of AD cases would represent approximately $1 billion in healthcare cost savings annually. The discovery of the C9ORF72 hexanucleotide repeat expansion is a landmark discovery in our understanding of neurodegenerative disease. It has already greatly affected how these diseases are diagnosed, investigated and perceived, and provides a mechanistic link between two clinically distinct disorders, ALS and FTD. It also provides a distinct therapeutic target for gene therapy efforts aimed at ameliorating the disease, and such efforts are already well underway. In 2018, we published the largest genome-wide association study of ALS in collaboration with John Landers of the University of Massachusetts. This effort identified mutations in the KIF5A gene as a cause of familial and sporadic disease. In 2019, we published a data-driven Mendelian randomization paper in which we identified elevated cholesterol as a risk factor for ALS. Ongoing projects in the laboratory include: (1) genome sequencing of additional familial ALS samples to look for causative genes underlying motor neuron degeneration. DNA for these cases was obtained from our collaborators, Adriano Chio (Italy), Michael Sendtner (Germany), Ekaterina Rogaeva (Canada), and Vivian Drory (Israel), as well as our own efforts to recruit subjects locally and nationally; (2) genetic studies of myasthenia gravis, a common form of neuromuscular disease in the general population. In summary, the current year has been incredibly successful in identifying genetic variants important in the pathogenesis of ALS using next-generation sequencing technologies.
Each of these studies employed large cohorts of research subjects, and utilized the sequencing and genotyping facilities available within the Laboratory of Neurogenetics, NIA. By understanding the cellular mechanisms underlying late-onset motor neurodegeneration, we also hope to shed light on the role of aging in the CNS and in age-related decline in mobility.
.c-custom-checkbox {
  $d: 14px;
  display: flex;
  align-items: center;

  label {
    @include userSelectNone();
    display: flex;
    align-items: center;
  }

  &__box {
    @include nice-input();
    display: flex;
    align-items: center;
    justify-content: center;
    line-height: $d;
    width: $d;
    height: $d;
    margin-right: $interiorMarginSm;
  }

  input {
    opacity: 0;
    position: absolute;

    &:checked + label > .c-custom-checkbox__box {
      background: $colorKey;

      &:before {
        color: $colorKeyFg;
        content: $glyph-icon-check;
        font-family: symbolsfont;
        font-size: 0.6em;
      }
    }

    &:not(:disabled) + label {
      cursor: pointer;
    }

    &:disabled + label {
      opacity: 0.5;
    }
  }
}
'I have a small but bulky Pug cross Jack Russell who has had a problem with fleas. I have been using Frontline every four weeks but still find the odd flea on him, and have also treated the whole house with Acclaim household flea spray. I am thinking of perhaps changing to Advocate spot-on. My dog weighs in at 10.5 kilos, so should I use the up-to-10kg size or change to the 10-25kg one? This probably sounds like a silly question, but I would hate to think that if I used the higher dose it would have a detrimental effect on my dog. Can you advise please?'

Answer from: Shanika Winters

Thanks Marilyn for your question regarding flea control on your dog. It sounds as though you are doing all the correct things by treating your pet with a flea preparation regularly as well as having treated your home. It is really frustrating for both dog and owner when the fleas just do not seem to be going away.

What are fleas and where have they come from?

Fleas are parasites that live on our pets and in our homes; the adult fleas need to feed on blood from your pet in order to survive. It is important to be aware that adult fleas are not the only thing we need to get rid of: the flea life cycle involves eggs, which hatch into larvae; these then turn into pupae, from which emerge the adult fleas. Unfortunately the fleas we see on a pet are only the tip of the iceberg, as most of the flea population is in the form of eggs, larvae and pupae. It is good to hear that you have used the household spray to treat your home; the household spray is aimed at killing the flea population that is not on your pet and therefore helps to break the flea life cycle. Make sure you read all the instructions on household flea sprays, use the correct amount and take care if you have pet fish or caged birds, as the chemicals can be toxic to them. It is also important not to use the household flea spray on your dog itself.
If the flea infestation in your home is severe and humans are also getting bitten, then in some cases a professional household flea treatment may be needed. The flea eggs, larvae and pupae can survive in the nooks and crannies of your home: down between floorboards, skirting and soft furnishings.

Changing flea treatment product?

If you are finding that a product is not working for you and your pet, then it is definitely a topic to discuss with your vet. The correct product to fit your pet's needs can then be found. Some of the flea preparations also cover various worms, such as round, tape, heart and lung worms. Correct administration of the product and the correct dosage should also help ensure success in the battle against the fleas. If you are planning on bathing your pet, then do this before applying the flea product, and make sure your pet has a dry coat before applying its next dose. Make sure that your pet has been weighed accurately, ideally on the weighing scales at your vet's, to decide on which dose to use. The safety margins on the products your vet dispenses to you have been tried and tested, so provided you use the correct dose the product should be effective.

Could the fleas be resistant to the drug I am using?

Resistance is when the drug is no longer effective; in the case of a flea product it would no longer kill most of the fleas on your pet. It is possible that the fleas are becoming resistant to certain drugs, but until any official data is released, as vets we can't say that fleas are definitely resistant to a particular product. However, as an owner, if you are not happy with a particular product then you should ask your vet for alternatives.

Have all in-contact animals been treated for fleas?

It is very important to know that fleas can be coming onto your pet from other animals, from your own dogs and cats through to those of neighbours and even wildlife that passes through your garden.
We can't treat the wildlife, but we can ensure our own pets are treated for fleas; flea products are available in sprays, spot-ons and even oral forms, so even a semi-feral pet cat could be treated through its food. It is tricky when the other in-contact animals are not your own; a chat with the neighbours is always worth a try.

I hope that I have managed to answer your question, which was not at all silly. If you are ever in doubt as to whether a product is working and which dose should be used, please discuss this with your vet; this is what we are here to help you with.

Shanika Winters MRCVS (online vet)

If you have any worries about your pet, please make an appointment to see your vet – or try our online Symptom Guide.
SQL> @example
SQL>
SQL> drop table sales purge;

Table dropped.

SQL>
SQL> create table sales (id number(10), num1 number(10), num2 number(10), num3 number(10), num4 number(10), txt1 varchar2(10));

Table created.

SQL>
SQL> insert into sales
  2  select rownum,rownum,mod(rownum,1000),mod(rownum,10),null,dbms_random.string('U',10) from dual connect by rownum<10000;

9999 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> create unique index salesi on sales (id);

Index created.

SQL>
SQL> exec dbms_stats.gather_table_stats (ownname=>user,tabname=>'sales',method_opt=>'for all columns size 1');

PL/SQL procedure successfully completed.

SQL>
SQL> var t1 varchar2(40)
SQL> var t2 varchar2(40)
SQL>
SQL> exec dbms_lock.sleep(2);

PL/SQL procedure successfully completed.

SQL>
SQL> @hist
SQL> set linesize 1000
SQL> set trims on
SQL> set pagesize 1000
SQL> column table_name format a30
SQL> column column_name format a30
SQL> column histogram format a30
SQL>
SQL> select column_name,histogram from user_tab_col_statistics
  2  where table_name = 'SALES'
  3  order by 1
  4  /

COLUMN_NAME                    HISTOGRAM
------------------------------ ---------------
ID                             NONE
NUM1                           NONE
NUM2                           NONE
NUM3                           NONE
NUM4                           NONE
TXT1                           NONE

6 rows selected.

SQL>
SQL> select count(*) from sales where txt1 > 'A';

  COUNT(*)
----------
      9999

SQL> select count(*) from sales where num3>1;

  COUNT(*)
----------
      8000

SQL>
SQL> insert into sales
  2  select rownum+100000,rownum,mod(rownum,2000),mod(rownum,20),null,dbms_random.string('U',10) from dual connect by rownum<10000;

9999 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'sales',method_opt=>'for all columns size auto')

PL/SQL procedure successfully completed.
SQL>
SQL> @hist
SQL> set linesize 1000
SQL> set trims on
SQL> set pagesize 1000
SQL> column table_name format a30
SQL> column column_name format a30
SQL> column histogram format a30
SQL>
SQL> select column_name,histogram from user_tab_col_statistics
  2  where table_name = 'SALES'
  3  order by 1
  4  /

COLUMN_NAME                    HISTOGRAM
------------------------------ ---------------
ID                             NONE
NUM1                           NONE
NUM2                           NONE
NUM3                           FREQUENCY
NUM4                           NONE
TXT1                           HYBRID

6 rows selected.

SQL>
SQL> select count(*) from sales where txt1 > 'A';

  COUNT(*)
----------
     19998

SQL> select count(*) from sales where num3>1;

  COUNT(*)
----------
     17000

SQL>
SQL> insert into sales
  2  select rownum+200000,rownum,mod(rownum,2000),mod(rownum,40),null,dbms_random.string('U',10) from dual connect by rownum<10000;

9999 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'sales',method_opt=>'for all columns size auto')

PL/SQL procedure successfully completed.

SQL>
SQL> @hist
SQL> set linesize 1000
SQL> set trims on
SQL> set pagesize 1000
SQL> column table_name format a30
SQL> column column_name format a30
SQL> column histogram format a30
SQL>
SQL> select column_name,histogram from user_tab_col_statistics
  2  where table_name = 'SALES'
  3  order by 1
  4  /

COLUMN_NAME                    HISTOGRAM
------------------------------ ---------------
ID                             NONE
NUM1                           NONE
NUM2                           NONE
NUM3                           FREQUENCY
NUM4                           NONE
TXT1                           HYBRID

6 rows selected.

SQL>
SQL> exec dbms_lock.sleep(2);

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'sales',method_opt=>'for all columns size 1')

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'sales',method_opt=>'for columns size 254 num2')

PL/SQL procedure successfully completed.
SQL>
SQL> @hist
SQL> set linesize 1000
SQL> set trims on
SQL> set pagesize 1000
SQL> column table_name format a30
SQL> column column_name format a30
SQL> column histogram format a30
SQL>
SQL> select column_name,histogram from user_tab_col_statistics
  2  where table_name = 'SALES'
  3  order by 1
  4  /

COLUMN_NAME                    HISTOGRAM
------------------------------ ---------------
ID                             NONE
NUM1                           NONE
NUM2                           HYBRID
NUM3                           NONE
NUM4                           NONE
TXT1                           NONE

6 rows selected.

SQL>
SQL> set echo off

Table : SCOMP.SALES

Column: ID
Last analyzed: 2019-14-06 11:05:42 [Current: No Histogram]
- 2019-14-06 11:05:36  0 buckets
- 2019-14-06 11:05:39  0 buckets
- 2019-14-06 11:05:39  0 buckets
- 2019-14-06 11:05:42  0 buckets

Column: NUM1
Last analyzed: 2019-14-06 11:05:42 [Current: No Histogram]
- 2019-14-06 11:05:36  0 buckets
- 2019-14-06 11:05:39  0 buckets
- 2019-14-06 11:05:39  0 buckets
- 2019-14-06 11:05:42  0 buckets

Column: NUM2
Last analyzed: 2019-14-06 11:05:42 [Current: HYBRID]
- 2019-14-06 11:05:36  0 buckets
- 2019-14-06 11:05:39  0 buckets
- 2019-14-06 11:05:39  0 buckets
- 2019-14-06 11:05:42  0 buckets
- 2019-14-06 11:05:42  0 -> 254 buckets CHANGE

Column: NUM3
Last analyzed: 2019-14-06 11:05:42 [Current: No Histogram]
- 2019-14-06 11:05:36  0 buckets
- 2019-14-06 11:05:39  0 -> 20 buckets CHANGE
- 2019-14-06 11:05:39  20 -> 40 buckets CHANGE
- 2019-14-06 11:05:42  40 -> 0 buckets CHANGE

Column: NUM4
Last analyzed: 2019-14-06 11:05:42 [Current: No Histogram]
- 2019-14-06 11:05:36  0 buckets
- 2019-14-06 11:05:39  0 buckets
- 2019-14-06 11:05:39  0 buckets
- 2019-14-06 11:05:42  0 buckets

Column: TXT1
Last analyzed: 2019-14-06 11:05:42 [Current: No Histogram]
- 2019-14-06 11:05:36  0 buckets
- 2019-14-06 11:05:39  0 -> 254 buckets CHANGE
- 2019-14-06 11:05:39  254 buckets
- 2019-14-06 11:05:42  254 -> 0 buckets CHANGE

PL/SQL procedure successfully completed.

SQL> spool off
We're delighted to hear that you enjoyed our beautiful casino, Heather.

L'Auberge Casino Hotel Baton Rouge New Years Eve Video
Room Review: L'Auberge Casino Resort, Baton Rouge, LA

His basic response was to shrug and say "what can I do?" The clientele, at least that night, left much to be desired, and I would highly recommend you avoid this place for your own safety. I have tried my luck there numerous times, and I can only remember one time when I went away with any winnings. And to top that, not even receiving an offer of a drink. Should have spent my money somewhere else.

While we have visited Lauberge Lake Charles for many years, our first time to B. Unfortunately, both times we didn't arrive until the evening so did not get to see the Mississippi view during daylight. We did not eat here so can't comment. However, I do agree with another reviewer about the street approach to the entrance, esp. Then, to make it even worse, the casino street entrance is confusing with dimly lit signs, esp. The 2nd time we parked in the garage and it is a bit tight. The layout of the casino is not easy to navigate, with the main walkway in the middle with the table games, which makes it near impossible to navigate on a weekend. You are always in someone's way, so we had to find alternative routes to get around. There were a number of questionable "ladies" around on both visits. About an hour is all we could take with no luck, so took the long trek back to the car. As I write this, Tropical Storm Gordon is approaching, so hope they ride it out with no problems.

Only standing, or sitting on the floor with back to wall, is allowed. This 62-year-old bad-back music lover cannot stand to stand more than a few minutes. I know I am not alone here!
Why do they not pull out one row of bleachers with those barely cushioned cheap chairs? Plenty of room for that, as the floor is wide open with most people around the stage. They will not even let you bring in your own chairs! Cannot understand this and cannot get an answer as to why. Truthfully, I have not been to an event center local acts concert in a couple of years for the reason stated, so their policy on chairs may have changed. I would certainly make all the shows I could!

We were at the buffet and the line was long. There were people calling others to come break in line, not just one or two. A couple of us complained to the manager on duty, who saw what was going on, but it was clear he was scared for his job. Just another dip in the way management is run at this casino.

We're disappointed to hear about your waiting experience in Bon Temps Buffet. We have shared your review with management in hopes to better improve future guest experiences.

Casino drink service sucked, even when I went to the bar; nonexistent while playing slots! Will never go back. Have been to the Lake Charles location and loved it. Expected better of this one.

We're disappointed to hear about your dining experience, Mary. We hope you will give us a second chance in the future!

Casino lacked people and typical casino noise. Beautiful casino lobby area. Machines seemed a little stingy, causing me to move on to another casino.

Reviewed August 19, Great New Years Eve.

At the stroke of midnight, the event celebrates with fireworks and the lowering of a custom-designed, LED-lighted red stick in Town Square. Doors open at The Radio Bar, at Government St.
A DJ will play jazzy beats throughout the night, and there will be a Champagne toast at midnight. Guests will have a great view of the Red Stick Revelry fireworks and will receive complimentary Champagne. Meghan Montgomery will be performing. Festivities begin at 8 p. There will also be a balloon drop giveaway and performances by LFR. Festivities begin at 9 p. The bar will also offer a special menu of Champagne, chocolate and specialty cocktails as well as a free Champagne toast at midnight.
Lost in translation Interpreting from one language to another can seem like a daunting task, achievable only by professionals with years of training and advanced language skills. That's not always so. The reality is that anyone can act as an interpreter — even you! — if the situation requires it. All it takes is basic language skills, a willingness to help and the courage to make a difference. My first experience as an interpreter took place many years ago in Kobe, where I was learning Japanese. I was taking a walk one day when I came upon a group of American tourists talking to some local Japanese residents. They seemed to be having problems communicating. I approached to see what was happening. "We want to try a Japanese bath," said the Americans. "Where can we find a bath?" The Japanese had no idea what they were talking about. The only English word they caught was "bath." Unfortunately, they misunderstood and thought the Americans wanted a bus. Things went from bad to worse. The Americans began shouting "Bath! Bath!" and pretended to take their clothes off to convey their meaning. Naturally, the Japanese were startled by this strange behavior. In reply, they shouted "Bus! Bus!" and pointed to a nearby bus stop. The problem was clear. The Americans wanted to go to a sento, a public bath, but didn't understand Japanese. The Japanese wanted to help but couldn't understand English. Both groups were trying to communicate but couldn't connect. The frustration on both sides was mounting. It was obvious that they needed a translator to solve their problem. I looked around desperately but there was no professional interpreter at hand. What to do? Slowly, it dawned on me. Maybe I could translate... The idea seemed absurd. I was only a beginner who spoke basic Japanese. Everybody knows that translators are professionals with official certificates and high-level skills. Yet the need was clear and the timing was urgent. 
I stepped forward, positioned myself between the two groups and got to work. "You want to visit a Japanese bath, right?" I confirmed with the Americans. They nodded their heads in agreement. I conveyed this to the Kobe residents in basic Japanese. "So that's what they wanted," they replied. "We thought they wanted a bus!" In two short minutes, the problem was solved. Soon, the Americans were happily on their way to a nearby sento guided by their helpful Japanese hosts. Professional interpreters play an invaluable role on the world stage. But the world also needs individuals who can use foreign languages to promote communication in daily life. You don't have to be an expert to become a bridge of understanding between people. If you see a need, don't be shy. Step up and start translating!
Recently, in step with the rapid development of vehicle-related technology, vehicle systems that significantly improve driver convenience have been commercialized in various forms. A typical example of these convenience functions is smart cruise control (SCC). SCC is a convenience apparatus that gives the driver freedom in the longitudinal direction by automatically accelerating and braking the vehicle so that its speed is maintained at the speed set by the driver. In addition, as a convenience function for sensing the driving path, there is the lane departure warning system. The lane departure warning system, a kind of vision system using a camera sensor, recognizes the lane of a road using a camera and warns the driver when the vehicle departs from the lane due to drowsiness or carelessness. These two systems are currently commercialized and have been mounted as driver convenience or safety systems in high-end vehicles. More recently, technology has appeared that lets the driver set a predetermined speed or that constantly holds the vehicle at the road speed limit. Further, there is demand for technology that allows the driver to set a desired time of arrival at a destination, or that automatically adjusts the vehicle's speed to reflect traffic volume and similar factors.
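The speed-holding behavior described above can be illustrated with a minimal sketch. This is not the patent's implementation; it is a hedged example assuming a simple proportional controller, and all names (`cruise_control_step`, `kp`, the actuator limits) are hypothetical:

```python
def cruise_control_step(current_speed, set_speed, kp=0.5, max_accel=2.0, max_brake=3.0):
    """One control step of a simple proportional speed controller.

    Returns the commanded acceleration (m/s^2): positive to drive,
    negative to brake, clamped to hypothetical actuator limits.
    """
    error = set_speed - current_speed   # distance from the driver's set speed
    command = kp * error                # proportional response to the error
    return max(-max_brake, min(max_accel, command))

# Vehicle at 20 m/s, driver set 25 m/s: command is clamped to max_accel
print(cruise_control_step(20.0, 25.0))  # 2.0
```

A production SCC would add integral/derivative terms, radar-based distance keeping, and safety arbitration; this sketch only shows the core idea of driving or braking toward the set speed.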
This article is from the archive of our partner. Stick with us on this. There is now a preponderance of evidence to suggest that maybe (just maybe, with an infinitesimally small chance of happening) President Obama will suddenly release secret files proving that aliens exist. Exhibit A: Former Clinton Chief of Staff John Podesta is returning to the White House. At first glance, Podesta signing on with the White House isn't terribly suggestive. Having worked as chief of staff to President Clinton from 1998 to 2001 and as head of the Obama administration transition team after 2008, he's no stranger to the executive branch. That is, until you read this piece from Slate's Dave Weigel. "Ever since he left the White House," Weigel writes, Podesta has "wanted public disclosure of what we know about alien life." Weigel points to a speech at the National Press Club in 2002, a clip of which is at right. "It's time to find out what the truth really is, that's out there," Podesta says. "We ought to do it, quite frankly, because the American people really can handle the truth." Again, this is a guy who was chief of staff to the president of the United States for three years. Do you think that a guy who has access to that level of information and also has an interest in UFOs isn't going to poke around a little? It reminds us of the repeated line of questioning from Sen. Ron Wyden of Oregon on the NSA's surveillance activity. He knew that the NSA was hiding something and did what he could for it to come out. Until Edward Snowden, nothing happened. Now a skeptic of the official UFO story is headed back to the White House.
Bitcoin widget mac

Hi everyone: as I said in another post, I created a small bitcoin OS X dashboard widget. Stay up to date with the latest Bitcoin price movements and forum discussion. Looking for something to accurately display the Gemini BTC price in my Mac status bar. Place a simple bit of code in the HTML of your website in each location where you want a widget. The easiest place to buy, use, and accept bitcoin, ethereum, and litecoin. Find all you need to know and get started with Bitcoin on bitcoin.org. With our new cryptocurrency price widgets you can get the best data directly into your app or site. Powerful and easy-to-use bitcoin wallet allowing users to easily control their own bitcoin. Check out the daily app ranking, rank history, ratings, features and reviews of top apps like Bitcoin Ticker Widget on the Mac Store. The Hive wallet software is available as a free Mac OS X download. Currency Converter latest version: convert currencies from the Dashboard.
/*
 * U-boot - setup.h
 *
 * Copyright (c) 2005-2007 Analog Devices Inc.
 *
 * SPDX-License-Identifier: GPL-2.0+
 */

#ifndef _SHARED_RESOURCES_H_
#define _SHARED_RESOURCES_H_

void swap_to(int device_id);

#define FLASH    0
#define ETHERNET 1

#endif /* _SHARED_RESOURCES_H_ */
Smart Speakers From Google And Amazon Could Turn Into Phone Replacements

Amazon Echo and Google Home may transform into home phones. (Photo: The Tech Chap / YouTube)

The tech giants Amazon and Google are reportedly working on adding voice-calling features to their Echo and Google Home smart speakers. The two companies are considering turning their gadgets into phone replacements.

Google Home And Amazon Echo To Turn Into Phones

According to The Wall Street Journal, Amazon and Alphabet's Google are working to develop a new use for their popular home speakers. Google Home and Amazon Echo may become home phones, allowing them to be used to make or receive calls. Since smart home products have become a staple in the lives of many people, this new ability to make phone calls directly from the device may be seen as a really valuable addition. However, digital privacy advocates argue that this new functionality would give smart gadgets further control over consumers' digital lives at home. Google and Amazon appear to be working to overcome telecom regulations and concerns about privacy and emergency services. But another issue to overcome is the "inherent awkwardness" of having a phone conversation via a speaker. It is still uncertain whether consumers would want to speak on a device able to record conversations. It is known that both the Google Home and Amazon Echo continuously record audio in order to enable AI responses. According to MacRumors, rather than conversations themselves, Amazon would only collect metadata from phone calls. It is still uncertain what Google would retain; a Home-based call service would probably resemble Google Voice, which does not record phone calls. Amazon is reportedly still considering multiple options for how the phone feature would work. It is possible that the Echo would get its own phone number. Calls to that number could be forwarded to be answered remotely on a mobile phone.
Another option is to sync the Echo with a user's contacts and existing phone number.

Rumors On Similar Project From Apple

Google and Amazon could allow external providers such as Vonage or Skype onto their platforms, or they could develop the calling tool themselves. Since Apple was also rumored to be working on a connected smart home device that aims to compete with Google's Home and Amazon's Echo, the news is of interest to Apple followers. When Apple releases its rumored home hub, it will likely be on par with products already on the market. If the Google Home and Amazon Echo gain calling capabilities, it can be expected that an Apple product will come with the same features. Rumors also suggest that Apple's smart home device will be powered by the company's voice-based personal assistant, Siri. The AI digital assistant already built into Macs and iPhones would reportedly offer advanced microphone and speaker technology and be used to control HomeKit-enabled accessories. On top of serving as a hub for smart products, the device would also be able to answer queries, respond to typical Siri questions, play music and more. Apple's smart home device is rumored to be in the prototype testing phase of development. An official finalized plan for release has not been divulged yet. If the testing phase does not give the expected results, it is also possible that Apple could decide not to move forward with the project.
Molecular identification of Salp15, a key salivary gland protein in the transmission of lyme disease spirochetes, from Ixodes persulcatus and Ixodes pacificus (Acari: Ixodidae). Salp15 is a multifunctional protein, vital to the tick in its need to obtain vertebrate host blood without stimulating a host inflammatory and immune response. The Salp15 protein from both Ixodes scapularis Say and Ixodes ricinus (L.), the principal vectors of the Lyme disease spirochete in eastern North America and Europe, respectively, has been well characterized and found to bind the murine CD4 receptor, DC-SIGN, and the OspC protein of Borrelia burgdorferi. In the current study, we characterized the full salp15 gene in Ixodes pacificus Cooley & Kohls and Ixodes persulcatus Schulze, the principal vectors of Lyme disease spirochetes in western North America and Asia, respectively. In comparing the Salp15 protein of all four principal vector ticks of public health importance for the transmission of Lyme disease spirochetes, we find the 53 C-terminal amino acids to have a high degree of similarity. There are at least three clades in the tree of Salp15 and its homologues, probably representing a multigene family.
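The kind of C-terminal comparison described above can be sketched in a few lines. This is a hedged illustration, not the authors' analysis pipeline: the function name is hypothetical, the toy sequences are invented (NOT real Salp15 data), and it assumes a gap-free alignment of the terminal residues:

```python
def c_terminal_identity(seq_a, seq_b, tail=53):
    """Percent identity over the last `tail` aligned residues of two sequences."""
    a, b = seq_a[-tail:], seq_b[-tail:]
    if len(a) != len(b):
        raise ValueError("tails must align to equal length")
    # Count positions where the aligned residues match
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# Toy aligned sequences differing at one C-terminal position
print(c_terminal_identity("MKTAACDEFG", "MQTAACDEFG", tail=4))  # 100.0
```

Real comparisons of the four vector species' Salp15 proteins would use a proper multiple sequence alignment (e.g. with an alignment tool) before scoring identity.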
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Test UiManager classes</title>
<link rel="stylesheet" type="text/css" href="../css/jsUnitStyle.css">
<script type="text/javascript" src="../app/jsUnitCore.js"></script>
<script type="text/javascript" src="../app/jsUnitTestManager.js"></script>
<script type="text/javascript" src="../app/BaseUiManager.js"></script>
<!--<script type="text/javascript" src="../app/ClassicUiManager.js"></script>-->
<script type="text/javascript" src="../app/jsUnitParams.js"></script>
<script type="text/javascript">
function testProblemDetailMessageFor_WithException() {
    var manager = new JsUnit.BaseUiManager();
    var exception = new JsUnit.Error("some error");
    exception.stack = "this is the stack trace";
    assertEquals(
        "Error message is:\n" +
        "\"some error\"\n" +
        "\n" +
        "Stack trace follows:\n" +
        "this is the stack trace",
        manager.problemDetailMessageFor(exception));
}

function testProblemDetailMessageFor_WithAssertionFailure() {
    var manager = new JsUnit.BaseUiManager();
    var exception = new JsUnit.Failure("some assertion failure", "accompanying message");
    exception.stackTrace = "this is the stack trace";
    assertEquals(
        "\"some assertion failure\"\n" +
        "accompanying message\n" +
        "\n" +
        "Stack trace follows:\n" +
        "this is the stack trace",
        manager.problemDetailMessageFor(exception));
}
</script>
</head>
<body>
</body>
</html>
Cold & Flu Wellness Tonic

Cold and flu season…YUCK! Chances are, you or someone around you is sick, which is no fun, no fun at all. This spicy detoxifying tonic, made simply with water, cayenne, apple cider vinegar and honey, is an excellent way to hydrate, fight inflammation and boost your immune system all at the same time. Each ingredient has impactful benefits (listed below the recipe) to help cure a common cold, alleviate sinus pressure and fight an oncoming flu. If you're feeling any of those, get a sippin'! Here's how to make it:

BENEFITS:

Cayenne Pepper: Cayenne pepper aids in breaking up and moving congested mucus. Once mucus begins to leave the body, relief from flu symptoms generally follows. It also increases the pulse of our lymphatic and digestive rhythms. By heating the body, the natural process of detoxification is streamlined, causing us to sweat, another important process of detoxification.

Honey: Honey contains antioxidant, antibacterial, and antimicrobial properties that fight against the virus, bacteria, and fungus to treat the cold and its underlying symptoms. It helps to soothe a sore or scratchy throat naturally and relieves irritation.
To · Talk · of · Many · Things · ...

Cops

While I'm harping about TV slogans, there's another one I take issue with. Not surprisingly, it comes from another Fox program.

"...suspects are innocent until proven guilty..."

Shouldn't that be "...innocent unless proven guilty..."?
This invention concerns cabinet structures having storage compartments or members, such as drawers or racks, mounted for translational movement in the cabinet and, more particularly, a stabilizing arrangement for such movable storage compartments in which there is a tendency of the member to be skewed in the plane of movement upon the application of forces transverse to the direction of movement. In mounting a drawer or other member for relative translational movement with respect to a support structure, the member is generally mounted for in and out movement in the cabinet along a fixed line of movement, and some means is provided for controlling the lateral orientation of the member during its movement. This lateral guidance, in the plane of movement, is commonly achieved by drawer guides which limit the degree of skew or tilt of the drawer occurring due to sideward directed components of the drawer pulling force. Such guide surfaces must provide some clearance in order to allow free movement, a clearance which increases as the precision of the guides decreases, and accordingly some looseness may be perceived in the opening and closing of the drawer. Such looseness may also increase the opening and closing effort required, in that the skew may produce jamming or relatively tight frictional drag, greatly increasing the effort required in opening or closing the drawer. Similarly, the drawer mounting components may include guides in which the lifting of the drawer during its in and out movement is controlled, and similarly the clearance usually provided for free movement allows some degree of looseness. If the drawer is relatively heavy, the weight loading of the drawer is sufficient to provide adequate vertical stability of the drawer.
However, for unloaded lightweight drawer structures, such as dishwasher racks, the looseness, both lateral and vertical, produces an objectionable sensation of sloppiness in moving the drawer in and out of the supporting cabinetry or other structure. While precision mounting components could substantially eliminate such looseness, in many applications, such as mass-produced appliances, the cost of precision components would be prohibitive. In most front-loading dishwashers, there are provided dishracks which are movable into and out of the interior of the dishwasher cabinet in order to enable loading of the dishware items. Such racks are typically supported by means of simple individually mounted plastic rollers or guides mounted on the dishrack on the interior of the cabinet. Simple roller mounting arrangements are necessitated both in the interest of minimizing manufacturing costs and because the cabinet interior is subjected to the washing water spray. Such simple support arrangements, however, result in a tendency for the rack to be skewed slightly upon uneven application of the pulling forces, creating a feeling of objectionable looseness to the person manipulating the rack. U.S. Pat. No. 3,323,853 discloses a torque equalizing arrangement in which an axle shaft is provided, supported on rollers bearing the weight of the drawer. The axle shaft has pinion gears secured at either end, in engagement with gear racks located on the underside of the drawer. The torsional interconnection of the pinion gears defeats the tendency to skew. This arrangement, while basically achieving the anti-skew effect, involves a relatively elaborate structure, i.e., the pinion gears and racks, which would add considerably to the cost of manufacture of the drawer. In addition, the bottom-mounting of the axle would not be suitable in upper dishwasher rack applications, as it would interfere with the cleaning action of the dishwasher spray.
The lightweight characteristic of such dishwasher racks also tends to produce a certain vertical looseness in the movement of the rack, since the rollers are only lightly loaded, a looseness which would not be alleviated by the arrangement disclosed in U.S. Pat. No. 3,323,853. Accordingly, it is an object of the present invention to provide a motion stabilizer arrangement for members mounted for relative translational movement with respect to a supporting structure, such as drawers or dishwasher racks mounted for in and out movement in a cabinet. It is a further object of the present invention to provide a motion stabilizer arrangement which is adaptable for the upper rack of a dishwasher. It is yet another object of the present invention to provide a motion stabilization arrangement which also vertically stabilizes the member in its movement by preloading the member, such that even lightweight racks are vertically stabilized in their movement.
Renal interstitial fibrosis and urothelial carcinoma associated with the use of a Chinese herb (Aristolochia fangchi). A new renal disease called 'Chinese-herb nephropathy' (CHN) has been reported to occur in women who have ingested slimming pills containing powdered extracts of the Chinese herb Stephania tetrandra (ST). Moderate to end-stage renal disease developed, requiring renal replacement therapy by dialysis or transplantation. Phytochemical analyses of the pills revealed the presence of aristolochic acids (AA) instead of tetrandrine, suggesting the substitution of ST (Han fang ji) by Aristolochia fangchi containing nephrotoxic and carcinogenic AA. A typical histological feature of CHN is a progressive interstitial fibrosis leading to a severe atrophy of the proximal tubules, as documented by the urinary excretion rates of markers of tubular integrity (reduction of neutral endopeptidase enzymuria and high levels of microproteinurias). Removal of the native kidneys and ureters in end-stage CHN patients revealed a high prevalence of urothelial carcinoma (46%). Tissue samples contained AA-related DNA adducts, which are not only specific markers of prior exposure to AA but are also directly involved in tumorigenesis. Exposure to Aristolochia species (spp.) is associated with the development of renal interstitial fibrosis (CHN) and urothelial cancer in humans. Health professionals should be aware that in traditional Chinese medicine, Aristolochia spp. are considered interchangeable with certain other herbal ingredients and are also sometimes mistaken for ST, Akebia, Asarum, Clematis spp. and Cocculus spp. in herbal remedies.
A real-time proximity querying algorithm for haptic-based molecular docking. Intermolecular binding underlies every metabolic and regulatory process of the cell, as well as the therapeutic and pharmacological properties of drugs. Molecular docking systems model and simulate these interactions in silico and allow us to study the binding process. Haptic-based docking provides an immersive virtual docking environment in which the user can interact with and guide the molecules to their binding pose. Moreover, it allows human perception, intuition and knowledge to assist and accelerate the docking process, and reduces the number of incorrect binding poses. Crucial for interactive docking is the real-time calculation of interaction forces. For smooth and accurate haptic exploration and manipulation, force-feedback cues have to be updated at a rate of 1 kHz; hence, force calculations must be performed within 1 ms. To achieve this, modern haptic-based docking approaches often utilize pre-computed force grids and linear interpolation. However, such grids are time-consuming to pre-compute (especially for large molecules), memory hungry, can induce rough force transitions at cell boundaries, and cannot be applied to flexible docking. Here we propose an efficient proximity querying method for computing intermolecular forces in real time. Our motivation is the eventual development of a haptic-based docking solution that can model molecular flexibility. Uniquely for a haptics application, we use octrees to decompose the 3D search space in order to identify the set of interacting atoms within a cut-off distance. Force calculations are then performed on this set in real time. The implementation constructs the trees dynamically and computes the interaction forces of large molecular structures (i.e. those consisting of thousands of atoms) within haptic refresh rates. We have implemented this method in an immersive, haptic-based, rigid-body, molecular docking application called Haptimol_RD.
The user can use the haptic device to orientate the molecules in space, sense the interaction forces on the device, and guide the molecules to their binding pose. Haptimol_RD is designed to run on consumer-level hardware, i.e. there is no need for specialized or proprietary hardware.
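The octree decomposition and cut-off query described above can be sketched as follows. This is a minimal illustrative sketch, not the Haptimol_RD implementation: the class name `OctreeNode`, the `CAPACITY`/`MAX_DEPTH` parameters, and the query interface are all hypothetical.

```python
import math

CAPACITY = 8     # max atoms stored in a leaf before it subdivides (hypothetical tuning value)
MAX_DEPTH = 10   # guard against pathological recursion

class OctreeNode:
    """Point octree for fixed-radius neighbour queries over atom positions."""

    def __init__(self, center, half, points, depth=0):
        self.center, self.half = center, half
        self.children = None
        if len(points) <= CAPACITY or depth >= MAX_DEPTH:
            self.points = points          # leaf: keep the atoms directly
            return
        self.points = []
        buckets = [[] for _ in range(8)]  # one bucket per octant
        for p in points:
            buckets[self._octant(p)].append(p)
        q = half / 2.0
        self.children = []
        for i, bucket in enumerate(buckets):
            child_center = (
                self.center[0] + (q if i & 1 else -q),
                self.center[1] + (q if i & 2 else -q),
                self.center[2] + (q if i & 4 else -q),
            )
            self.children.append(OctreeNode(child_center, q, bucket, depth + 1))

    def _octant(self, p):
        # 3-bit octant index: one bit per axis.
        return ((p[0] > self.center[0])
                | ((p[1] > self.center[1]) << 1)
                | ((p[2] > self.center[2]) << 2))

    def query(self, probe, cutoff, out):
        """Append to `out` every stored point within `cutoff` of `probe`."""
        # Prune: if the probe lies farther than half + cutoff from the node
        # center along any axis, no point in this cube can be in range.
        for axis in range(3):
            if abs(probe[axis] - self.center[axis]) - self.half > cutoff:
                return
        if self.children is None:
            out.extend(p for p in self.points if math.dist(p, probe) <= cutoff)
        else:
            for child in self.children:
                child.query(probe, cutoff, out)
```

In a haptic loop, each probe atom would be queried against the tree every tick, and only the returned neighbour set would enter the pairwise force sum; constructing the tree dynamically each frame is what would let such a scheme extend to flexible structures, as the abstract suggests.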
(Photo by Drew Angerer/Getty Images) The US Department of Homeland Security (DHS) is expanding the kinds of information that it collects on immigrants to include social media information and search results. The new policy, which covers immigrants who have obtained a green card and even naturalized citizens, will take effect on October 18th. First spotted by Buzzfeed News, the announcement from the Trump regime was published in the Federal Register. The new policy will not only allow DHS to collect information about an immigrant’s Twitter, Instagram, and Facebook accounts, but it also mentions all “search results.” It’s not immediately clear if that means the agency will have access to things such as Google search histories, nor is it clear how that would be obtained. The new policy includes 12 points of expansion on what DHS is allowed to collect, but numbers 5 and 11 seem to be the most alarming in their ability to reach inside the digital lives of immigrants to the US and anyone who interacts with those immigrants. From the announcement (emphasis mine): The Department of Homeland Security, therefore, is updating the “Department of Homeland Security/U.S. Citizenship and Immigration Services, U.S. Immigration and Customs Enforcement, U.S. Customs and Border Protection-001 Alien File, Index, and National File Tracking System of Records notice to: [...] (5) expand the categories of records to include the following: country of nationality; country of residence; the USCIS Online Account Number; social media handles, aliases, associated identifiable information, and search results; and the Department of Justice (DOJ), Executive Office for Immigration Review and Board of Immigration Appeals proceedings information [...] 
(11) update record source categories to include publicly available information obtained from the internet, public records, public institutions, interviewees, commercial data providers, and information obtained and disclosed pursuant to information sharing agreements; The term “information sharing agreements” isn’t defined in the policy, but it could conceivably cover both the types of surveillance agreements that the US has with countries like the UK, Canada, Australia, and New Zealand under Five Eyes, as well as the agreements that DHS has with companies like Google and internet service providers. As Buzzfeed points out, collecting this kind of information would also have a dramatic impact on every single person that interacts with immigrants to the US, since it would seemingly make all of their conversations on social media subject to surveillance. In the interest of full disclosure, yours truly is married to a US green card holder, so not only will my wife be subjected to this new rule, conceivably I will as well. The Department of Homeland Security’s US Citizenship and Immigration Services (USCIS) did not immediately respond to Gizmodo’s request for comment. We will update this post when we hear back. [Federal Register and Buzzfeed News] Update, September 27th, 11:10am: After trying to get somebody to talk to me, I finally got an email from the US Department of Homeland Security this morning. “This amendment does not represent a new policy. DHS, in its law-enforcement and immigration-process capacity, has and continues to monitor publicly-available social media to protect the homeland,” wrote Joanne F. Talbot from DHS Office of Public Affairs. “In an effort to be transparent, to comply with existing regulations, and due to updates in the electronic immigration system, DHS decided to update its corresponding Privacy Act system of records,” Talbot continued. “DHS published this notice in the Federal Register on Sept. 
18 to comply with the administrative requirements of the Privacy Act to help address these requirements, not launch a new policy initiative.” But Ms. Talbot didn’t answer any of the questions that I sent to DHS, including questions about why this policy covers naturalized citizens. I responded by asking how it could be that this wasn’t a new policy if this notice was just now being published. And if it truly wasn’t a new policy, when was it enacted? I’ll update this post further if I hear back, but I’m not holding my breath. Update, September 28th, 8:00am: DHS has gotten back to us with assurances that search histories won’t be examined, and hints that naturalized citizens who went through the process before 2012 won’t have their social media monitored. You can read the full statement from DHS here.