reconciliation songs My morning introspection had a catalyst. Barenaked Ladies’ new song ‘War On Drugs.’ The song verbalizes the exact changes I’m making in myself. Letting the tug-of-war relationships of my past go, ridding myself of the guilt and shame… saying goodbye to the demons haunting me, that kept me such company. Maybe it will be dull without all this drama, and maybe it will be odd to make myself happy, like I always thought I was supposed to feel, but never seemed to be. So one point for me. I’m listening to Coldplay’s The Scientist as I write this. Any song where a man starts his sentence with “I’m sorry” is a good one. A song about reconciliation. Well done. I like men who show up in the middle of the night if you’re fighting on the phone, or the guy who, when you sneak out of his apartment, chases you into the street, finds you in a cab and pleads for you to please come back inside… “there are things that need to be said.” I guess I love people who can realize they’re making mistakes before they make them. Romantics who know what to do if they ever are in a relationship. Nobody said it was easy. It’s such a shame for us to part. Nobody said it was easy. No one ever said it would be this hard. I love reconciliation. I just always thought it would be with a boy; I never thought I’d be reconciling with myself.
{ "pile_set_name": "Pile-CC" }
Canada Wood Today | The Canada Wood Group Construction market statistics can be used to make informed decisions on market development Construction market statistics provide government, industry, and associations with a basis for making decisions on strategic direction and investments. For example, in the past a strong understanding of residential and non-residential statistics in Canada and the US contributed to the successful development of the wood mid-rise construction sector. Today, market statistics help guide the approach to tall wood construction. Unfortunately, there is not adequate construction data in China at this time to put forward a similar data-based strategy for market development. Better information on building size, height, and material would provide key insights into where the opportunities for wood construction are. This enhanced market knowledge would not only benefit industry but could also provide Chinese planners with insights on how a shift to wood construction could potentially reduce the carbon footprint of the built environment. What do we know? China already has good data available to provide us insights on the health of the construction market. For example, in 2014, China started 1.25 billion square metres of residential construction. In the same year 1.05 billion square metres of new residential construction was sold, accounting for 10.1 million housing units. When building common areas are accounted for (assuming 20% of floor area is not saleable), this puts the Chinese real estate market as a whole in relative balance in 2014. We are also able to break this information down on a more granular level (provincial/regional/municipal). Using this approach, we know that there are vast regional differences, as the ratio of construction sold-to-started varies. In Shanghai, during 2014, 1.44 square metres of completed and future residential construction was sold for every 1 square metre that was started, indicating high future construction activity.
On the other hand, in Shanxi the overall ratio of sales to starts was only 0.65, indicating low demand. However, while this type of data is a good indicator of market activity, it provides little information for targeting wood building systems to specific markets. Figure 1 – Ratio of residential floor area sold to started, 2013/2014 (assumes 20% of floor area started is common area). Figure 2 – Number of Housing Units Sold, 2014. What do we need to develop an understanding of the wood construction sector? There are a number of key statistics that would help in the development of strategies around wood construction. It is hoped that in the coming months Canada Wood, with assistance from FPInnovations, will conduct an investigation with MOHURD to determine the feasibility of collecting additional statistics. Examples of statistics that would be useful include: number of buildings started; statistics by number of storeys (floor area and number of units); units started; and primary construction material. In China the challenge is that this data lies with thousands of local permitting offices across the country. A system needs to be developed to funnel the data into a centralized location. The next step is to collaborate with a single municipal department to learn what statistics they currently collect and to see what they are willing to share. Once we understand what is possible to do with a single jurisdiction, we can start to consider how to consolidate construction data from across the country. By addressing statistics of common interest to both MOHURD and Canada Wood, this initiative is most likely to move forward.
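The sold-to-started comparison above reduces to a simple calculation. A minimal sketch, using the article's 2014 national figures and its assumption that 20% of started floor area is non-saleable common area (function and variable names are illustrative, not from any official dataset):

```python
def sold_to_started_ratio(floor_area_sold, floor_area_started, common_area_share=0.20):
    """Ratio of floor area sold to the saleable portion of floor area started.

    common_area_share is the fraction of started floor area that is not
    saleable (the article assumes 20%). A ratio near 1.0 indicates a market
    in relative balance; well below 1.0 indicates weak demand.
    """
    saleable_started = floor_area_started * (1 - common_area_share)
    return floor_area_sold / saleable_started

# National 2014 figures from the article, in billions of square metres:
# 1.05B sq m sold against 1.25B sq m started.
ratio = sold_to_started_ratio(1.05, 1.25)
print(f"Sold-to-started ratio: {ratio:.2f}")  # prints "Sold-to-started ratio: 1.05"
```

A ratio of about 1.05 is what underlies the article's conclusion that the 2014 market was "in relative balance" once common areas are accounted for.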
{ "pile_set_name": "Pile-CC" }
Gösta Frykman Gösta Oskar Vilhelm Frykman (11 March 1909 – 26 February 1974) was a Swedish Army officer. Career Frykman was born in Vilhelmina, Sweden, the son of chief park ranger (överjägmästare) Dan Frykman and his wife Emy (née Forsgrén). He passed studentexamen in 1929 and became a second lieutenant in Älvsborg Regiment (I 15) in 1933. Frykman attended the Royal Swedish Army Staff College from 1940 to 1942, was press officer at the Defence Staff from 1943 to 1946 and was captain in the General Staff Corps in 1944. In 1946 he served as press officer in the camp staff during the Swedish extradition of Baltic soldiers. He was military organizer at the defense exhibition in Gävle in 1946 and became major at the Swedish Infantry Combat School in 1954. Frykman was lieutenant colonel at Skaraborg Regiment (P 4) in 1957 and was commander of the Swedish UN battalion in Gaza in 1961, which was part of the United Nations Emergency Force. The same year his battalion was redeployed to the Congo during the Congo Crisis, where he was commander of the Swedish UN battalion XI G from April 1961 to November 1961. Other work Frykman was a member of the inquiry within the Swedish National Board of Information (Statens informationsstyrelse) from 1941 to 1943 and chairman of the board of Fastigheter AB Bergslagen. Personal life On 4 April 1936 he married Ingrid Schollin-Borg (1914–2004), the daughter of captain Peter Schollin-Borg and Märtha (née Liedberg). He was the father of Jan Christer (born 1939), Jan Peter (born 1942), Åke (born 1944), Eva (born 1946) and Ingrid (born 1954). He died on 26 February 1974 in Saltsjöbaden and was buried in Galärvarvskyrkogården in Stockholm.
Awards and decorations Knight of the Order of the Sword United Nations Medal References Category:1909 births Category:1974 deaths Category:Swedish Army lieutenant colonels Category:People of the Congo Crisis Category:People from Vilhelmina Municipality Category:Knights of the Order of the Sword Category:Burials at Galärvarvskyrkogården
{ "pile_set_name": "Wikipedia (en)" }
Starting this spring, five corporate giants — Anthem, Cigna, CVS Health, Humana and UnitedHealth Group — will control health insurance and pharmacy benefits for more than 125 million Americans. Why it matters: Most of this happened through rapid consolidation, and now the pressure is on these companies to prove they can better control both medical and drug spending with everything under the same roof. Driving the news: Anthem has been working for over a year to create its own pharmacy benefit manager, called IngenioRx, so it could sever ties with Express Scripts. Anthem's new prescription drug negotiator is now ready to go live by March, 10 months ahead of schedule, the company said Wednesday. This is the new landscape. These 5 companies will handle both drug and medical bills for millions of people across Medicare, Medicaid and employer-based insurance. UnitedHealth Group is the largest entity combining health insurance and pharmacy benefits, with UnitedHealthcare and OptumRx (a PBM that got significantly bigger after it absorbed Catamaran in 2015). CVS acquired Aetna to pair with its existing PBM, Caremark. Cigna now owns Express Scripts. Anthem will be moving millions of people onto IngenioRx this year. Humana also has its own PBM, and it's the fourth-largest by prescription volume. It's worth noting that several Blue Cross Blue Shield companies also own a PBM, Prime Therapeutics. What they're saying: PBMs "don't need to be independent entities with their own profit margins ... that adds costs," former Aetna CEO Mark Bertolini said in 2017. Some research says combining health care services and prescriptions under one benefit (not necessarily one common owner) could save money, if the insurer helps people manage their diseases. But insurers and PBMs have lived under the same roof before, and these companies have been doing the same work while U.S. health care spending has continued to rise. 
Reality check: These companies would not have pursued merging medical and drug plan offerings if they didn't think there was a lot of money to retain.
{ "pile_set_name": "OpenWebText2" }
Sunday, February 28, 2010 Tattoo Crosses Here we have a clean and simple, and recently tattooed, Gothic iron cross tattoo. Many tattoo crosses have clutter and other images mixed in with them, but I tend to admire the cleanliness of this one. This simple cross is basic to the Gothic Iron Cross but is rendered in a thick-and-thin beveled line that lends itself well to the tattoo medium. The stark simplicity is its most memorable attribute. It is also meant to stand for a dagger of a type, a dagger for God’s work if you will, and that is why it has a point at its bottom. This is a very sharp looking gothic cross and I like it very much. I am the type of person who likes very slick, clean lines, and for me the simpler something is, the better. If I were out and about and looking to get any kind of cross tattoo put on myself, this is exactly what I would get. Because after seeing all of the tattoo crosses I have seen, and believe me, I have seen a lot of them, this one is what appeals to me the most. This is a whole selection of different tattoo crosses. The style that a person may be interested in depends on their personality and the meaning they attribute to the designs. Since there are so many tattoo crosses to choose from, let me give you a rundown of the two most popular ones. It’s a good starting point for your research. The Latin Cross is highly recognizable in the world of tattoos, as it is comprised of a vertical line intersected at right angles by a shorter horizontal line positioned around 1/3 of the way from the top. This uncomplicated design is often associated with Christianity and is frequently used to pay tribute to others. One of the most attractively designed crosses is the Celtic selection, where a knot is created in the space where both lines cross. The Gothic Cross mirrors the German style of elaborate wrought iron work displayed during the Edwardian and Victorian periods.
This type of cross is often used to express pain and anger, and to represent Goth culture. Many designs associated with the Gothic Cross utilize dark imagery, such as barbed wire and daggers. Now that you have a little background, you can do a search on the internet and see if those are the designs you like, and maybe even find a few more.
{ "pile_set_name": "Pile-CC" }
Kathy, As you know we are trying to eliminate all the plugs on the Gas Benchmark. Therefore we would like to begin pulling your positions from the GRMS system. I have attached a file that shows the position that I got when I queried GRMS for the June 14 position. I have also included the detail and a copy of what you reported. As you will see, these positions are significantly different. Please look at the detail and let me know if you see the reason for the discrepancy (i.e. books that we have omitted). Let me know if you need more information. Thanks, Robin 713-345-7478
{ "pile_set_name": "Enron Emails" }
/* Copyright 2012 Yaqiang Wang,
 * yaqiang.wang@gmail.com
 *
 * This library is free software; you can redistribute it and/or modify it
 * under the terms of the GNU Lesser General Public License as published by
 * the Free Software Foundation; either version 2.1 of the License, or (at
 * your option) any later version.
 *
 * This library is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser
 * General Public License for more details.
 */
package org.meteoinfo.data.mapdata.geotiff;

import java.util.HashMap;

/**
 * A TIFF/GeoTIFF tag: a symbolic name paired with its numeric tag code.
 *
 * @author yaqiang
 */
public class Tag implements Comparable<Tag> {
    // <editor-fold desc="Variables">
    private static final HashMap<Integer, Tag> map = new HashMap<>();
    public static final Tag NewSubfileType = new Tag("NewSubfileType", 254);
    public static final Tag ImageWidth = new Tag("ImageWidth", 256);
    public static final Tag ImageLength = new Tag("ImageLength", 257);
    public static final Tag BitsPerSample = new Tag("BitsPerSample", 258);
    public static final Tag Compression = new Tag("Compression", 259);
    public static final Tag PhotometricInterpretation = new Tag("PhotometricInterpretation", 262);
    public static final Tag FillOrder = new Tag("FillOrder", 266);
    public static final Tag DocumentName = new Tag("DocumentName", 269);
    public static final Tag ImageDescription = new Tag("ImageDescription", 270);
    public static final Tag StripOffsets = new Tag("StripOffsets", 273);
    public static final Tag Orientation = new Tag("Orientation", 274);
    public static final Tag SamplesPerPixel = new Tag("SamplesPerPixel", 277);
    public static final Tag RowsPerStrip = new Tag("RowsPerStrip", 278);
    public static final Tag StripByteCounts = new Tag("StripByteCounts", 279);
    public static final Tag XResolution = new Tag("XResolution", 282);
    public static final Tag YResolution = new Tag("YResolution", 283);
    public static final Tag PlanarConfiguration = new Tag("PlanarConfiguration", 284);
    public static final Tag ResolutionUnit = new Tag("ResolutionUnit", 296);
    public static final Tag PageNumber = new Tag("PageNumber", 297);
    public static final Tag Software = new Tag("Software", 305);
    public static final Tag ColorMap = new Tag("ColorMap", 320);
    public static final Tag TileWidth = new Tag("TileWidth", 322);
    public static final Tag TileLength = new Tag("TileLength", 323);
    public static final Tag TileOffsets = new Tag("TileOffsets", 324);
    public static final Tag TileByteCounts = new Tag("TileByteCounts", 325);
    public static final Tag SampleFormat = new Tag("SampleFormat", 339);
    public static final Tag SMinSampleValue = new Tag("SMinSampleValue", 340);
    public static final Tag SMaxSampleValue = new Tag("SMaxSampleValue", 341);
    public static final Tag ModelPixelScaleTag = new Tag("ModelPixelScaleTag", 33550);
    public static final Tag IntergraphMatrixTag = new Tag("IntergraphMatrixTag", 33920);
    public static final Tag ModelTiepointTag = new Tag("ModelTiepointTag", 33922);
    public static final Tag ModelTransformationTag = new Tag("ModelTransformationTag", 34264);
    public static final Tag GeoKeyDirectoryTag = new Tag("GeoKeyDirectoryTag", 34735);
    public static final Tag GeoDoubleParamsTag = new Tag("GeoDoubleParamsTag", 34736);
    public static final Tag GeoAsciiParamsTag = new Tag("GeoAsciiParamsTag", 34737);
    public static final Tag GDALNoData = new Tag("GDALNoDataTag", 42113);
    private String name;
    private int code;
    // </editor-fold>

    // <editor-fold desc="Constructor">
    /**
     * Look up a registered tag by its numeric code, or null if unknown.
     */
    static Tag get(int code) {
        return map.get(code);
    }

    private Tag(String name, int code) {
        this.name = name;
        this.code = code;
        map.put(code, this);
    }

    Tag(int code) {
        this.code = code;
    }
    // </editor-fold>

    // <editor-fold desc="Get Set Methods">
    /**
     * Get name
     *
     * @return Name
     */
    public String getName() {
        return this.name;
    }

    /**
     * Get code
     *
     * @return Code
     */
    public int getCode() {
        return this.code;
    }
    // </editor-fold>

    // <editor-fold desc="Methods">
    /**
     * To string
     *
     * @return String
     */
    @Override
    public String toString() {
        return this.code + " (" + this.name + ")";
    }

    /**
     * Compare to
     *
     * @param o Tag to compare with
     * @return Int
     */
    @Override
    public int compareTo(Tag o) {
        return this.code - o.getCode();
    }
    // </editor-fold>
}
{ "pile_set_name": "Github" }
Had a great day here, it's a proper little sun trap and out of the wind. Loads to climb over and rummage round. I tried looking for stamped brick but they are thin on the ground; saw one from the high harbour as the tide was coming in, a flimsy excuse to go back again at low tide. I do recommend this one if you're in the area. Lots of cast wheels and bits and bobs along the immediate shoreline also. The works as you see it today is not as it was in the late 19th Century. Most of the existing site was constructed at the turn of the 20th Century and evolved from there onwards according to refinement of the techniques for making the bricks. Mining by manual endeavour lasted from around 1850 to 1914, even though the Porth Wen brick works was taken over by a German named Herr Steibel in 1906. Now Herr Steibel would never have undertaken the management of the works unless he believed there would be a profitable return for his endeavours. Poor Misguided Fool. Herr Steibel’s tenure only lasted until 1908, when a Charles Tidy took over. Progress in manufacture revealed itself in how the individual bricks were made. Unlike Herr Steibel, who had continued with the traditional moulding and wire cutting into shape, Mr. Tidy used a simple shape pressing technique that cut out a time-consuming and therefore expensive stage of production. However, because of technical – or was it personality issues, as alleged elsewhere - the quality of the final product declined markedly and the work at Porth Wen came to an end in 1914. The beginning of the First World War should have been a profitable time serving the wartime steel industry; nonetheless, it failed. The brickworks remained unused until 1924. Production struggled on until 1949, when they finally gave up the ghost.
{ "pile_set_name": "Pile-CC" }
Soil contamination from waste impoundments or ponds, leakage of buried waste, or dumping of waste directly onto the ground, has heretofore been recognized as a serious problem both in this country and abroad. Many techniques have been proposed for addressing this problem, ranging from removal of contaminated soil for redisposal or treatment, to in-situ treatment by chemical reaction in an effort to neutralize contaminants or encase the contaminated soil in solid concrete or the like. One particular method for in-situ treatment of contaminated soil heretofore proposed in the art involves driving one or more augers into the earth while simultaneously injecting treatment fluid through nozzles in or associated with the auger drill bits. The auger is carried by apparatus suitable for movement between successive drill positions, so that holes are drilled and soil is treated in a pattern that ultimately includes an entire contaminated field. While this technique and theory have the significant advantages of economy, and of not requiring removal of contaminated soil, with consequent danger of dispersing gaseous contaminants and dust, these theoretical advantages have not heretofore been realized in fact. One disadvantage of auger-type devices heretofore proposed lies in the small surface area and depth that can be treated in a single drilling operation. In an effort to increase coverage area and treatment efficiency, it has been proposed to provide multiple parallel augers rotated in an interlocking pattern. However, such multiple-auger systems still only cover a surface area of up to about thirty square feet in each penetration, and typically have a maximum penetration depth of about thirty-five feet. Furthermore, a rock or other obstruction can become wedged between the auger blades, locking the drill system and causing significant downtime for removal and repair.
It is therefore a general object of the present invention to provide an apparatus and method for contaminated soil treatment that obtain the advantages of auger-type techniques heretofore theorized but not in fact obtained in the art. A more specific object of the invention is to provide an apparatus and method of the described character that are capable of enhanced depth of soil penetration as compared with techniques heretofore proposed, that cover greater surface area on each drilling operation, and that thus may be operated more efficiently than techniques heretofore proposed. Another and related object of the invention is to provide a single-bit apparatus of the described character in which the drill bit is driven with enhanced power as compared with prior art devices, thereby enabling both greater surface area coverage and greater depth of soil penetration. A further object of the present invention is to provide an apparatus and method that are adapted for enhanced control of fluid injection for soil treatment for obtaining greater drilling speed, increased treatment efficiency and more efficient treatment control than prior art techniques of a similar character.
{ "pile_set_name": "USPTO Backgrounds" }
Q: HQL with parameters NoSuchMethodError I am sure I am overlooking something obvious. The following static query works fine:
hqlQuery = "select user from User as user where user.id = 'userid' ";
but when I parametrize the query
hqlQuery = "select user from User as user where user.id = :me ";
Query query = session.createQuery(hqlQuery);
I get a nasty stack dump from building the query. What am I overlooking?
Exception in thread "main" java.lang.NoSuchMethodError: antlr.collections.AST.getLine()I
 at org.hibernate.hql.ast.HqlSqlWalker.generateNamedParameter(HqlSqlWalker.java:940)
 at org.hibernate.hql.antlr.HqlSqlBaseWalker.parameter(HqlSqlBaseWalker.java:4997)
 at org.hibernate.hql.antlr.HqlSqlBaseWalker.expr(HqlSqlBaseWalker.java:1413)
 at org.hibernate.hql.antlr.HqlSqlBaseWalker.exprOrSubquery(HqlSqlBaseWalker.java:4471)
 at org.hibernate.hql.antlr.HqlSqlBaseWalker.comparisonExpr(HqlSqlBaseWalker.java:3947)
 at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:2047)
 at org.hibernate.hql.antlr.HqlSqlBaseWalker.whereClause(HqlSqlBaseWalker.java:831)
 at org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:617)
 at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectStatement(HqlSqlBaseWalker.java:301)
 at org.hibernate.hql.antlr.HqlSqlBaseWalker.statement(HqlSqlBaseWalker.java:244)
 at org.hibernate.hql.ast.QueryTranslatorImpl.analyze(QueryTranslatorImpl.java:254)
 at org.hibernate.hql.ast.QueryTranslatorImpl.doCompile(QueryTranslatorImpl.java:185)
 at org.hibernate.hql.ast.QueryTranslatorImpl.compile(QueryTranslatorImpl.java:136)
 at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:101)
 at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:80)
 at org.hibernate.engine.query.QueryPlanCache.getHQLQueryPlan(QueryPlanCache.java:124)
 at org.hibernate.impl.AbstractSessionImpl.getHQLQueryPlan(AbstractSessionImpl.java:156)
 at org.hibernate.impl.AbstractSessionImpl.createQuery(AbstractSessionImpl.java:135)
 at org.hibernate.impl.SessionImpl.createQuery(SessionImpl.java:1770)
A: The problem is a conflict between two ANTLR jar files in your project (namely antlr-2.7.6.jar from the Hibernate library and antlr-2.7.2.jar from Struts 1.3). This appears to be a peculiar problem with Struts 1.3 and Hibernate applications. Please remove antlr-2.7.2.jar from your project (/WEB-INF/lib folder) and it should work fine. Let me know if it works.
A: Looks like you mixed incompatible versions of Hibernate jars (probably the ANTLR jar has a wrong version).
{ "pile_set_name": "StackExchange" }
Burpee 350 for MS ABOUT THE EVENT Welcome all to the official Burpee 350 W.O.D. site, where we will all come together on Oct 4 or 5th, 2014, to do 350 burpees for time to benefit Multiple Sclerosis. This event is being held by burpees4ms and is our main fundraiser of the year. Register as an individual (firebreather), a 2, 3, 4 or seven person team relay (where each participant does 50 burpees till they reach 350), or a partner team (175 each). Come show your support at this event and make all the donation goals a reality for those who live with the affliction of Multiple Sclerosis. Buy-in for the event is $15 per person plus a small charge to cover the credit card fees. The $15 goes straight to the National Multiple Sclerosis Society. Our 2014 goal is to surpass the $100,000 mark. Let's make a difference! $66,636 has been raised to date!!!!
{ "pile_set_name": "Pile-CC" }
In Vivo Induced Antigen Technology (IVIAT) has been well documented as a sensitive, fast, and inexpensive method for identifying novel genes of pathogenic bacteria that are specifically expressed during an actual infectious process. However, the use of IVIAT is limited to analysis of diseases where the pathogen infects a host that is capable of mounting an antibody response. In this application, we describe a modification of IVIAT called Change Mediated Antigen Technology (CMAT) that allows identification of both pathogen and host genes specifically expressed during infection. Proof of principle has been accomplished using Xanthomonas campestris infection of bean plants. In general, CMAT is potentially capable of identifying any gene that is expressed by any cell when it undergoes any sort of change. We intend to expand the application of CMAT to use it as a tool to study genes that are specifically expressed during oncogenesis in colorectal cancer. Genes that are discovered will potentially serve as excellent biomarkers for studying the efficacy of therapy of this disease and provide novel targets for new diagnostic and vaccine strategies. There are 3 Specific Aims. In Specific Aim 1, phage display libraries of human genomic DNA and cDNA from colorectal cancer tissue samples of subjects will be constructed in bacteriophage T7. Sufficient independent clones will be obtained to assure complete coverage of these libraries. In Specific Aim 2, a CMAT IgY probe will be created by immunizing chickens with colorectal cancer tissue samples from subjects. Antibodies produced in response to the immunogens will be purified from eggs and adsorbed with lysates made from healthy tissue of the same subjects. In Specific Aim 3, the CMAT IgY probe will be used to biopan the T7 libraries to enrich for clones expressing proteins made by colorectal cancer cells that are not made by healthy cells. A verification step employing Western blotting will be used to eliminate false positives. 
The cloned inserts in verified clones will be sequenced and the results subjected to genomic and proteomic analyses to generate a list of genes and their proteins that are expressed by transformed cells during different stages of colorectal cancer and not by healthy cells. This list will provide the starting material for the Phase II work, which will entail screening of the expressed proteins for their potential to serve as biomarkers in diagnosis of colorectal cancer and as possible targets for early diagnosis and vaccine strategies. Change Mediated Antigen Technology (CMAT) is a new method for identifying genes that are expressed when a cell undergoes a change. This project will use CMAT to identify proteins expressed by colorectal cancer cells that are not expressed by healthy cells. Such proteins are likely to be useful in monitoring treatment, diagnosing this disease, and in preventing it.
{ "pile_set_name": "NIH ExPorter" }
Effect of dendrimer generation on electron self-exchange kinetics between metal tris(bipyridine) core dendrimers. Here we report the first measurement of homogeneous electron transfer between oxidized and reduced metal tris(bipyridine) core dendrimers by NMR line-broadening; the results indicated that, as the generation of the dendrimer increased, the rate of self-exchange decreased.
{ "pile_set_name": "PubMed Abstracts" }
Gallamine triethiodide selectively blocks voltage-gated potassium channels in Ranvier nodes. The effects of gallamine on ionic currents in single intact Ranvier nodes of the toad Xenopus were investigated. The following fully reversible effects were observed: 1. With a test concentration of 1 mmol/l, the current-voltage relation of the steady-state potassium current, IK,ss, exhibited a complete block up to about V = 110 mV; with stronger depolarisations the block was incomplete. The peak sodium currents, in contrast, were not affected. 2. At the same test concentration the potassium permeability constant PK was reduced by 92% from its normal value, while the sodium permeability constant PNa decreased by only 8%. 3. Concentration-response relations of the block of PK yielded an apparent dissociation constant of 30 micromol/l and a steepness parameter of unity. Patch-clamp experiments on cloned Kv1.1, Kv1.2, Kv1.3 and Kv3.1 channels yielded apparent dissociation constants of 86, 19, >>100 and 121 micromol/l, respectively. Our findings show that gallamine is particularly well suited for separating potassium and sodium currents in axonal current ensembles. They also strongly suggest that potassium currents in Ranvier nodes of Xenopus are mainly carried by an ensemble of Kv1.1 and Kv1.2 channels.
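A concentration-response relation with an apparent dissociation constant and a steepness parameter, as reported above, is conventionally described by a Hill-type binding curve. A minimal sketch under that textbook assumption (the functional form is standard pharmacology, not taken from the paper itself):

```python
def fractional_block(conc_umol, kd_umol=30.0, hill=1.0):
    """Fraction of potassium current blocked at blocker concentration conc_umol.

    Hill-type concentration-response curve (assumed functional form):
    kd_umol = apparent dissociation constant (30 micromol/l in the abstract),
    hill    = steepness (Hill) parameter (unity in the abstract).
    """
    return conc_umol**hill / (conc_umol**hill + kd_umol**hill)

# Half-maximal block occurs by definition at the dissociation constant:
print(f"{fractional_block(30.0):.2f}")     # prints "0.50"
# At the 1 mmol/l (1000 micromol/l) test concentration used in the study,
# predicted block is large, broadly consistent with the ~92% reduction in PK:
print(f"{fractional_block(1000.0):.2f}")   # prints "0.97"
```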
{ "pile_set_name": "PubMed Abstracts" }
Daily Pilot Commentary: High-speed rail project is a train wreck By Keith Curry 1:55 PM PDT, March 21, 2014 As a financial advisor to governments, your job is to tell sometimes hard truths about the financial implications of public plans. Governments ignore these financial consequences at the peril of taxpayers and long-term financial health. Unfortunately, the California High-Speed Rail Authority's financial plan, as it is currently conceived, ignores some hard truths. It would be a slow-moving train wreck that would do economic damage to California for generations. I should know. Fifteen years ago, I was the financial advisor to California High-Speed Rail. In 1999, the system was estimated to cost $24 billion to $34 billion and take 10 years to construct from downtown San Francisco to downtown Los Angeles. In looking at the various means available to the state to finance the project, it quickly became apparent to us that system revenues were too speculative and too far off in the future to be the basis of a financial strategy. At that time, train fares were estimated to be 60% of airfares between L.A. and the Bay Area. It was a time when air costs were higher (pre-Southwest expansion), and the key assumption was a less-than-three-hour travel time from city center to city center. Now, flight costs are actually lower in real terms, the travel time significantly longer, and the proposed system does not come anywhere close to connecting the city centers. In order to actually generate the construction funding required, we determined that the system must seek its own dedicated funding source. Trying to use existing funding sources would simply take funds away from schools and other important state priorities. We specifically determined that trying to use state general obligation bonds would not be feasible and would exhaust the state's borrowing capacity and unfairly crowd out funding for water, highways and schools.
If the state wanted to build this project, we said, it needed to be honest with Californians and ask them to consider enacting a dedicated funding source. We advised the rail authority to give voters the ability to decide for themselves, with all the financial pros and cons on the table, whether this project was worthy. So-called "manna from heaven" strategies, where assumptions were made about magic "private investment," would have been a dishonest approach to real project funding. At that time, a statewide quarter-cent sales tax, among other options, would have been sufficient, given the engineering estimates. Our job as financial advisors was to identify a strategy that would actually work, not one that would deceive the voters.

Of course the state ignored our advice. Then-state Sen. Jim Costa passed a general obligation (GO) bond authorization for $9.9 billion that was narrowly approved by the voters. The Obama administration briefly offered a small amount of high-speed rail grant funding. California alone bit on this offer of federal support.

State GO bonds are not a funding source; they are debt proceeds that must be paid back. The annual debt service on $9.9 billion would add approximately $650 million to the state budget each year for 30 years. While the state could receive a little over $3 billion in federal grant funds, federal requirements require repayment if the project is not completed as promised.

Today's project business plan does indeed rely on "manna from heaven" in the form of imagined "private investment" once the project is underway. In the meantime, project costs have soared as the route has been adjusted to respond to political pressure. Passenger revenue estimates drop when travel time and end-to-end accessibility are sacrificed. Even if the state was to finance the full $9.9 billion and get all the federal grants, the project would still be $55 billion short of completion, and that is an optimistic number.
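The roughly $650-million-a-year figure is consistent with standard level-debt-service (annuity) arithmetic. A quick sketch, assuming a 30-year term and an illustrative 5% borrowing cost (the interest rate is my assumption; the column states only the $9.9 billion principal, the 30-year horizon, and the approximate annual figure):

```python
# Level annual payment on a bond issue: P * r / (1 - (1 + r)**-n).
# The 5% rate is illustrative, not taken from the article.

def annual_debt_service(principal: float, rate: float, years: int) -> float:
    """Standard annuity formula for level debt service."""
    return principal * rate / (1 - (1 + rate) ** -years)

payment = annual_debt_service(9.9e9, 0.05, 30)
print(f"${payment / 1e6:.0f} million per year")   # about $644 million, near the ~$650M cited
print(f"${payment * 30 / 1e9:.1f} billion repaid over 30 years")
```

At plausible municipal borrowing rates the formula lands close to the article's estimate, which also makes the broader point: the bond repays roughly twice its face value over the life of the debt.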
Ridership constrained to just the Central Valley would produce an operating deficit, which would further exacerbate the state-funding drain. If the project was not completed as projected (a likely scenario), the federal government would seek reimbursement of its $3 billion.

The project could not be built without a tax increase in 1999, and it cannot be built today without raising taxes, which I strongly oppose. Gov. Jerry Brown should level with the voters about this hard fact before this project becomes a financial train wreck that would saddle future generations with substantial debt and no real improvements in mobility or economic growth. Let's pull the switch before it is too late.

KEITH CURRY is a Newport Beach councilman and candidate for the 74th Assembly district.
FILED
United States Court of Appeals, Tenth Circuit
June 9, 2011
Elisabeth A. Shumaker, Clerk of Court

UNITED STATES COURT OF APPEALS FOR THE TENTH CIRCUIT

CLARENCE E. GRISSOM, JR., Plaintiff-Appellant,
v.
RAY ROBERTS, Warden, El Dorado Correctional Facility; DANIEL A. JACKSON, CSI, El Dorado Correctional Facility; (FNU) BOKOR, A.R.N.P., Correct Care Solutions, El Dorado Correctional Facility; GEORGE MCNICKLE, M.D., El Dorado Correctional Facility; DON THOMPKINS, El Dorado Correctional Facility; R. SHERMAN, CSII, El Dorado Correctional Facility; C. CASTLMAN, COII, El Dorado Correctional Facility, Defendants-Appellees.

No. 10-3245 (D.C. No. 5:09-CV-03128-SAC) (D. Kan.)

ORDER AND JUDGMENT [*]

Before TYMKOVICH and BALDOCK, Circuit Judges, and BRORBY, Senior Circuit Judge.

[*] After examining the briefs and appellate record, this panel has determined unanimously that oral argument would not materially assist the determination of this appeal. See Fed. R. App. P. 34(a)(2); 10th Cir. R. 34.1(G). The case is therefore ordered submitted without oral argument. This order and judgment is not binding precedent, except under the doctrines of law of the case, res judicata, and collateral estoppel. It may be cited, however, for its persuasive value consistent with Fed. R. App. P. 32.1 and 10th Cir. R. 32.1.

Clarence E. Grissom, Jr., a Kansas state prisoner proceeding pro se, appeals from the dismissal of his civil rights action. We have jurisdiction under 28 U.S.C. § 1291 and affirm.

I. BACKGROUND

Mr. Grissom filed an action under 42 U.S.C. § 1983. The district court screened his form complaint and numerous other filings under 28 U.S.C. § 1915A and entered a screening order. In that order, the district court identified three claims in his form complaint: (1) use of excessive force on August 27, 2008, at the El Dorado Correctional Facility; (2) denial of medical care for injuries sustained during that incident; and (3) creation of a false disciplinary report to cover up the incident.

These claims were based on the following allegations. Defendants Daniel A. Jackson and C. Castlman, both correctional officers, told Mr. Grissom to come to his cell door to be restrained while they removed his wheelchair. Mr. Grissom resisted the order, responded obscenely, and threw water at Officer Jackson. Officer Jackson, who knew that Mr. Grissom suffers from chronic obstructive pulmonary disease, used pepper spray on him. Officer Jackson then called a "Condition 30," which resulted in the arrival of a team of correctional officers. Unidentified members of that team hit Mr. Grissom with an electric shield while he was in his wheelchair, then forcibly removed him from his cell and carried him to the shower, where they held him under hot water. He sustained a broken nose and facial bruises. Thereafter, Mr. Grissom was laid down, his underwear was cut off, and he was rolled onto his side so that defendant Bokor, an advanced registered nurse practitioner (A.R.N.P.), could administer an albuterol inhaler. A.R.N.P. Bokor looked at his face but provided no treatment. The next day, both of his eyes were black and blue, and his right eye was swollen shut. He requested medical treatment but was denied. Later, Officer Jackson, Officer Castlman, and A.R.N.P. Bokor created an allegedly false disciplinary report to justify their actions, charging Mr. Grissom with battery and disobeying orders. Mr. Grissom was found guilty and given sixty days of disciplinary segregation, forty dollars in fines, and ninety days "'L.G.T.'" R. at 167. [1] Based on these allegations, Mr. Grissom requested damages and the termination of defendants' employment.

[1] Apparently, "L.G.T." means "loss of good time."

In its screening analysis, the district court first concluded that it lacked power to order that any defendants be fired. The court also determined that Mr. Grissom's request that he be permitted to use his wheelchair while in segregation, which was set forth in an attachment to his form complaint, was improperly joined, identified no named defendant, and stated no supporting facts. The court further concluded that for the same reasons, still other claims, scattered throughout the attachments to his complaint and other filings, were improperly raised. The court informed Mr. Grissom that it would not consider any claims referred to only in his attachments, and that instead, he must file an amended complaint in order to add claims or defendants; motions, exhibits, or other papers were not proper for that purpose. The court also provided him an overview of joinder under the Federal Rules of Civil Procedure.

The district court then dismissed two defendants, Correct Care Solutions and the El Dorado Minimum Clinic, because neither was a "person" for § 1983 purposes, a necessary element of a § 1983 claim. Id. at 176 (citing Will v. Mich. Dep't of State Police, 491 U.S. 58, 66, 71 (1989)). Further, the court pointed out that Mr. Grissom failed to adequately identify the personal participation of defendants Roberts, McNickle, Thompkins, or Sherman. See R. at 176 (citing, inter alia, Trujillo v. Williams, 465 F.3d 1210, 1227 (10th Cir. 2006)). Thus, the court gave Mr. Grissom an opportunity to file a supplemental complaint alleging the necessary participation.

The district court also instructed Mr. Grissom that a supplemental complaint was necessary to correct other shortcomings in his pleadings. As to his excessive force claim, the court reasoned that Mr. Grissom's own statements and exhibits showed that "he was combative, disruptive, and very disrespectful"; he "refused to obey orders"; he "had a history of battering or attempting to batter correctional officers"; and he "refused to be restrained and had thrown a cup of water on Jackson." R. at 179-80. "Under such circumstances," the court concluded, "the use of some physical force such as pepper spray can hardly be considered repugnant to the conscience of mankind." Id. at 180. [1]

[1] The district court apparently drew this standard from a line of Supreme Court cases discussed in Estelle v. Gamble, 429 U.S. 97, 105-06 (1976).

Moreover, the court noted that Mr. Grissom had not alleged severe pain or lasting injury as a result of the pepper spray, as required under Sampley v. Ruettgers, 704 F.2d 491, 495 (10th Cir. 1983). Therefore, the court concluded, Mr. Grissom had not advanced sufficient factual allegations to show an Eighth Amendment violation based on Officer Jackson's use of pepper spray or his call for a Condition 30.

Turning to the physical injuries Mr. Grissom alleged were caused by the forced removal from his cell, the district court observed that he had not described acts by any specific defendant that caused those injuries. Rather, he alleged he was beaten by a team of correctional officers. Therefore, the court permitted him to file a supplemental complaint to provide additional factual allegations of personal participation by named defendants.

The district court next concluded that Mr. Grissom's allegations did not support his claim that he was denied medical treatment in violation of the Eighth Amendment. Mr. Grissom's filings indicated that A.R.N.P. Bokor immediately gave him an albuterol inhaler and examined his broken nose and facial injuries. Mr. Grissom did "not describe any additional treatment as having been prescribed or obviously necessary for his broken nose or facial abrasions" or "any 'substantial harm' suffered as a result of any delay in treating his broken nose or facial injuries." Id. at 184 (applying standards set out in Ramos v. Lamm, 639 F.2d 559, 575 (10th Cir. 1980), and Garrett v. Stratman, 254 F.3d 946, 950 (10th Cir. 2001)). Nor had he identified the officer who denied his request for medical treatment the next day as one of the named defendants. Again, the district court instructed Mr. Grissom that he could file a supplemental complaint to remedy these deficiencies.

The district court further concluded that Mr. Grissom's claim that Officers Jackson and Castlman and A.R.N.P. Bokor filed a false disciplinary report could be raised only in a writ of habeas corpus because it "involve[d] good time and the possibility of entitlement to a speedier release." R. at 185 (citing Preiser v. Rodriguez, 411 U.S. 475 (1973)). The court also reasoned that Mr. Grissom could not recover damages on this claim unless he could show that his conviction of the charged offenses was "'invalidated.'" R. at 186 (quoting Heck v. Humphrey, 512 U.S. 477, 487 (1994), and citing Edwards v. Balisok, 520 U.S. 641 (1997), for its extension of Heck to loss of good time credit in prison setting)). [2]

[2] Mr. Grissom also filed two motions requesting an injunction or a temporary restraining order with regard to the conditions of his later confinement at a different correctional facility. The court denied the motions on the ground that neither one provided a sufficient factual or legal basis for such relief. Mr. Grissom has not challenged these denials on appeal.

Based on its analysis, the district court gave Mr. Grissom thirty days in which to file a supplemental complaint. Mr. Grissom filed a timely supplement, and he also filed numerous other papers outside the allotted time. The district court reviewed all of these filings and concluded that Mr. Grissom had not remedied the deficiencies in his complaint and had ignored most of the directions in the court's screening order. The court found no mention of defendants McNickle, Thompkins, or Sherman in any of the additional filings, and no allegation that defendant Roberts had personally participated in any of the events underlying the three claims set out in the initial form complaint. Thus, the court dismissed the claims as to these defendants.

Similarly, the court could find no specification of "which named defendant, if any, took acts that actually caused the injuries to his nose and face during [the] forced [cell] move." R. at 490-91. Nor did the court find any additional allegations showing "either that [Officer] Jackson used more force than was reasonably necessary under the[] circumstances or that [he] applied the pepper spray and called a Condition 30 other than in a 'good faith effort' to restore institutional order." Id. at 491. [3] Accordingly, the district court dismissed the excessive force claim without prejudice for failure to state a claim on which relief may be granted.

[3] The district court apparently was relying on Sampley, 704 F.2d at 495, which it had cited in its screening order, see R. at 178.

The district court next concluded that Mr. Grissom failed to remedy the deficiencies in his claim that he was denied medical treatment. Although he appeared to claim that Officer Jackson had denied his request to see A.R.N.P. Bokor, he also stated that he had another inmate contact A.R.N.P. Bokor, who said there was nothing she could do "'to fix [his] broken nose because it ha[d] been broken twice before and it wouldn't do any good to fix it.'" Id. at 493 (quoting Supplement to Complaint, id. at 192). The court determined that this concession, read in light of Mr. Grissom's continued failure "to allege that any particular treatment was prescribed or medically necessary for his broken nose beyond the immediate examination that was provided," id. at 493, indicated nothing more than a difference of opinion on a matter of medical judgment, which is not actionable under the Eighth Amendment, see Estelle v. Gamble, 429 U.S. 97, 106-07 (1976). Consequently, Officer Jackson's alleged denial of Mr. Grissom's request for treatment also failed to state a claim under the Eighth Amendment. Therefore, the district court dismissed the claim of denial of medical treatment.

The court next considered a multitude of claims described in the numerous filings Mr. Grissom submitted in response to the screening order, concluding that none of them were included in the original complaint, none had been added by a proper amendment, and none were properly joined because Mr. Grissom did not show they were related to the incident underlying the claims in his original complaint. Accordingly, the court dismissed all those claims without prejudice. This appeal followed.

II. DISCUSSION

We review de novo the district court's dismissal for failure to state a claim. Young v. Davis, 554 F.3d 1254, 1256 (10th Cir. 2009). "We review the complaint for plausibility; that is, to determine whether the complaint includes enough facts to state a claim to relief that is plausible on its face." Id. (quotation omitted).

In his appellate brief, Mr. Grissom provides a selective restatement of his factual allegations, but the full extent of his legal argument is that he thought what he did "was fair," and he disagrees with "the way [he] was judged for not understanding the procedure." Aplt. Br. at 4. These limited "arguments" are insufficient to merit appellate review, even taking into account that Mr. Grissom is not represented by an attorney. See Garrett v. Selby Connor Maddux & Janer, 425 F.3d 836, 840-41 (10th Cir. 2005) (concluding that pro se appellant forfeited right to appellate review of dismissal of complaint because he did not present any reasoned arguments supported by record citations or legal authority).

Nonetheless, we have exercised our discretion to review the record and the applicable law, see id. at 841, and we see no error in the district court's handling of this case. The court is commended for its considerable patience in providing Mr. Grissom a detailed explanation, in plain English, of the deficiencies in his complaint, and in providing him an opportunity to cure those deficiencies. Accordingly, we AFFIRM the judgment of the district court for substantially the same reasons set out in its screening order and its dismissal order. Mr. Grissom's motion to proceed on appeal without prepayment of fees is granted, and we remind him that he is obligated to continue making partial payments until the entire fee has been paid.

Entered for the Court
Wade Brorby
Senior Circuit Judge
Q: Powered rail on a slope powering glitch?

I have placed a powered rail on a slope with a redstone torch underneath it, as shown in the following two pictures. As you can see, the powered rail is in fact unpowered, despite the redstone torch underneath it. However, if I place a redstone torch next to the powered rail, it is powered; but when I remove it, it continues to remain powered, as it should have done in the first place.

Is this a glitch? Or some behaviour of redstone/minecart tracks that I am unaware of?

A: This is a glitch relating to the updating of blocks. In short, the issue is that when the powered rail is placed, it does not check to see if the block it is on is already powered, but keeps its state until a nearby block updates, such as might happen when you place a redstone torch that would power it. When that redstone torch is removed again, the track checks to see if it is still powered and, realizing that it is, stays on (cf. the glitch that would give free power to tracks placed on a slope when one was removed from a chain of powered rails).

Anything that causes the track to be updated should make the track powered, such as placing the tracks sequentially from bottom to top, meaning that the initial state of the powered rail will be flat: when the next rail is then placed, pulling the end of the powered track up, it updates and gets power. Another alternative, as Dan F mentioned, is to simply place the torch after the rail.
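The mechanism in the answer, state frozen at placement and re-checked only on a neighbor update, can be sketched as a toy model. This is purely illustrative (class and method names are made up), not Minecraft's actual code:

```python
# Toy model of the update-driven glitch described above: the rail samples
# its power sources only at placement and on neighbor updates, never
# continuously. Illustrative only -- not real Minecraft internals.

class PoweredRail:
    def __init__(self, placed_on_slope: bool, block_below_powered: bool):
        self.block_below_powered = block_below_powered
        # The bug being modeled: a rail placed on a slope skips the
        # initial power check, so it starts unpowered even over a torch.
        self.powered = False if placed_on_slope else block_below_powered

    def neighbor_update(self, adjacent_power: bool = False):
        # Any nearby block change forces a re-check of all power sources.
        self.powered = adjacent_power or self.block_below_powered

rail = PoweredRail(placed_on_slope=True, block_below_powered=True)
print(rail.powered)                          # False: stale state despite the torch below

rail.neighbor_update(adjacent_power=True)    # place a torch next to the rail
rail.neighbor_update(adjacent_power=False)   # remove that torch again
print(rail.powered)                          # True: the re-check found the torch below
```

Running it prints `False` then `True`, matching the in-game sequence: placing and removing the adjacent torch triggers updates, and the second update rediscovers the torch underneath, so the rail stays on.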
Cycling induced by functional electrical stimulation in children affected by cerebral palsy: case report.

Recently, the efficacy of functional electrical stimulation (FES) cycling has been demonstrated in improving strength and motor control in adults with stroke. FES-cycling, providing a repetitive goal-oriented task, could facilitate cortical reorganization and utilization of residual cortico-spinal pathways. These benefits could be even greater in children because of the greater plasticity and flexibility of their central nervous system. The aim of the present case report study was to explore the feasibility of FES-cycling in children with cerebral palsy (CP) and to provide a set of instrumental measures able to evaluate the effects of this novel treatment on cycling and walking ability. Interventional study. Two ambulant outpatient children with diplegic CP were recruited by the "E. Medea" Scientific Institute. Patients followed a FES-cycling treatment for 30 minutes a day, 3 days a week, for 7 weeks. Pre- and post-treatment tests were performed, namely clinical measures and electromyographic, kinematic and oxygen expenditure analysis during gait and cycling. The treatment was safe, feasible and well accepted by the two children. After treatment, both patients achieved a more symmetrical muscular strategy during voluntary cycling and gait and a significant reduction of muscle co-contractions during cycling. These improvements were corroborated by a decrease in oxygen expenditure during the post-test for one of the two children, the less impaired one, implying better exploitation of bi-articular muscles. FES-cycling is feasible and safe, and it may be an alternative rehabilitation method for diplegic CP patients. The set of instrumental measurements proposed seems to be a valuable tool for functional assessment to identify subclinical anomalies and improvements in cycling and gait in CP patients.
For Courses

A written feature highlighting the services offered and the course layout; this helps build value for the golfer.

Golf Guide Digital Platform

WHAT IS DIGITAL ADVERTISING AND WHY IS IT IMPORTANT

Digital advertising is the tactic of leveraging the internet and its properties to deliver promotional ads to consumers on various channels.

Like its predecessor—traditional advertising—a digital ad can help tell the story of your brand. Unlike traditional advertising, digital advertising is universal and flexible, enabling you to tell your business story or feature your golf course on the channels that your buyers frequent—through text, images, video, and more.

Digital advertising has evolved considerably since the first clickable ad hit the internet in 1994. Today, instead of creating noise that distracts from the content your buyers want to read, digital advertising can be part of an ongoing conversation that your golf course or business has with its customers.

Digital ads are everywhere. They can be seen on the websites your buyer visits, on his or her mobile phone, on social media channels, and on his or her smartwatch. Because advertising proliferates across so many channels, including very personal channels, you need to be more cognizant than ever before about providing useful, engaging content. Luckily, due to behavioral targeting technologies and our engaging drone video platforms, these continuous conversations are possible. And by leveraging these technologies at scale, you can nurture your buyers in a very personalized way until they are ready to become customers.

As Golf Guide marketers, we may feel like we have come a long way with digital advertising, but we are still in the early stages. With digital advertising continuing to gain momentum, it is more vital than ever before to make it an integral part of your holistic marketing mix.
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

using System;
using System.Collections.Generic;

using Aliyun.Acs.Core.Transform;
using Aliyun.Acs.Iot.Model.V20180120;

namespace Aliyun.Acs.Iot.Transform.V20180120
{
    public class CreateProductTagsResponseUnmarshaller
    {
        public static CreateProductTagsResponse Unmarshall(UnmarshallerContext context)
        {
            CreateProductTagsResponse createProductTagsResponse = new CreateProductTagsResponse();
            createProductTagsResponse.HttpResponse = context.HttpResponse;
            createProductTagsResponse.RequestId = context.StringValue("CreateProductTags.RequestId");
            createProductTagsResponse.Success = context.BooleanValue("CreateProductTags.Success");
            createProductTagsResponse.ErrorMessage = context.StringValue("CreateProductTags.ErrorMessage");
            createProductTagsResponse.Code = context.StringValue("CreateProductTags.Code");

            List<CreateProductTagsResponse.CreateProductTags_ProductTag> createProductTagsResponse_invalidProductTags = new List<CreateProductTagsResponse.CreateProductTags_ProductTag>();
            for (int i = 0; i < context.Length("CreateProductTags.InvalidProductTags.Length"); i++)
            {
                CreateProductTagsResponse.CreateProductTags_ProductTag productTag = new CreateProductTagsResponse.CreateProductTags_ProductTag();
                productTag.TagKey = context.StringValue("CreateProductTags.InvalidProductTags[" + i + "].TagKey");
                productTag.TagValue = context.StringValue("CreateProductTags.InvalidProductTags[" + i + "].TagValue");

                createProductTagsResponse_invalidProductTags.Add(productTag);
            }
            createProductTagsResponse.InvalidProductTags = createProductTagsResponse_invalidProductTags;

            return createProductTagsResponse;
        }
    }
}
DaVinci: a TypePad blog by Michal Jelinek

WoodStEx 2011 — European Engineering and Design Students At Work (2012-04-20)
via www.youtube.com

Sir Jonathan Ive: The iMan cometh - London Life - Life & Style - Evening Standard (2012-03-12)
As Apple's Senior Vice President of Industrial Design, he is the driving force behind the firm's products, from the Mac computer to the iPod, iPhone and, most recently the iPad. He spoke exclusively to the Evening Standard at the firm's Cupertino headquarters.
via www.thisislondon.co.uk

An alternative way of reverse surfacing - Forza Motorsport (2012-02-21)

Car Design News - Maya webinar (2012-02-15)
This webinar is focused on examining the benefits of using polygon modeling for concept development. In particular it will focus on the unique interaction that is possible between Autodesk Maya 2012 and Alias Automotive 2012 while exploring the workflow that moves data between both applications. This session will also explain best practices to achieve realistic Automotive Interior parts in Autodesk Maya and will conclude with the final design being completed in Autodesk Alias Automotive.

Tatra 603 - a bit of history ... (2012-02-02)
... with new music and new edit. Could you believe that the original movie is 50 years old? It was made to promote the Tatra 603. Pretty long for a commercial, but great to watch.
/**
 * Marlin 3D Printer Firmware
 * Copyright (c) 2020 MarlinFirmware [https://github.com/MarlinFirmware/Marlin]
 *
 * Based on Sprinter and grbl.
 * Copyright (c) 2011 Camiel Gubbels / Erik van der Zalm
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org/licenses/>.
 *
 */

#include "../../inc/MarlinConfigPre.h"

#if HAS_UI_320x240

#include "ui_320x240.h"

#include "../ultralcd.h"
#include "../menu/menu.h"
#include "../../libs/numtostr.h"

#include "../../sd/cardreader.h"
#include "../../module/temperature.h"
#include "../../module/printcounter.h"
#include "../../module/planner.h"
#include "../../module/motion.h"

#if DISABLED(LCD_PROGRESS_BAR) && BOTH(FILAMENT_LCD_DISPLAY, SDSUPPORT)
  #include "../../feature/filwidth.h"
  #include "../../gcode/parser.h"
#endif

#if ENABLED(AUTO_BED_LEVELING_UBL)
  #include "../../feature/bedlevel/bedlevel.h"
#endif

#if !HAS_LCD_MENU
  #error "Seriously? High resolution TFT screen without menu?"
#endif

static bool draw_menu_navigation = false;

void MarlinUI::tft_idle() {
  #if ENABLED(TOUCH_SCREEN)
    if (draw_menu_navigation) {
      add_control(48, 206, PAGE_UP, imgPageUp, encoderTopLine > 0);
      add_control(240, 206, PAGE_DOWN, imgPageDown, encoderTopLine + LCD_HEIGHT < screen_items);
      add_control(144, 206, BACK, imgBack);
      draw_menu_navigation = false;
    }
  #endif

  tft.queue.async();
  TERN_(TOUCH_SCREEN, touch.idle());
}

void MarlinUI::init_lcd() {
  tft.init();
  tft.set_font(MENU_FONT_NAME);
  #ifdef SYMBOLS_FONT_NAME
    tft.add_glyphs(SYMBOLS_FONT_NAME);
  #endif
  TERN_(TOUCH_SCREEN, touch.init());
  clear_lcd();
}

bool MarlinUI::detected() { return true; }

void MarlinUI::clear_lcd() {
  #if ENABLED(TOUCH_SCREEN)
    touch.reset();
    draw_menu_navigation = false;
  #endif

  tft.queue.reset();
  tft.fill(0, 0, TFT_WIDTH, TFT_HEIGHT, COLOR_BACKGROUND);
}

#if ENABLED(SHOW_BOOTSCREEN)
  #ifndef BOOTSCREEN_TIMEOUT
    #define BOOTSCREEN_TIMEOUT 1500
  #endif

  void MarlinUI::show_bootscreen() {
    tft.queue.reset();

    tft.canvas(0, 0, TFT_WIDTH, TFT_HEIGHT);
    tft.add_image(0, 0, imgBootScreen);  // MarlinLogo320x240x16
    #ifdef WEBSITE_URL
      tft.add_text(4, 188, COLOR_WEBSITE_URL, WEBSITE_URL);
    #endif

    tft.queue.sync();
    safe_delay(BOOTSCREEN_TIMEOUT);
    clear_lcd();
  }
#endif // SHOW_BOOTSCREEN

void MarlinUI::draw_kill_screen() {
  tft.queue.reset();
  tft.fill(0, 0, TFT_WIDTH, TFT_HEIGHT, COLOR_KILL_SCREEN_BG);

  tft.canvas(0, 60, TFT_WIDTH, 20);
  tft.set_background(COLOR_KILL_SCREEN_BG);
  tft_string.set(status_message);
  tft_string.trim();
  tft.add_text(tft_string.center(TFT_WIDTH), 0, COLOR_KILL_SCREEN_TEXT, tft_string);

  tft.canvas(0, 120, TFT_WIDTH, 20);
  tft.set_background(COLOR_KILL_SCREEN_BG);
  tft_string.set(GET_TEXT(MSG_HALTED));
  tft_string.trim();
  tft.add_text(tft_string.center(TFT_WIDTH), 0, COLOR_KILL_SCREEN_TEXT, tft_string);

  tft.canvas(0, 160, TFT_WIDTH, 20);
  tft.set_background(COLOR_KILL_SCREEN_BG);
  tft_string.set(GET_TEXT(MSG_PLEASE_RESET));
  tft_string.trim();
  tft.add_text(tft_string.center(TFT_WIDTH), 0, COLOR_KILL_SCREEN_TEXT, tft_string);

  tft.queue.sync();
}

void draw_heater_status(uint16_t x, uint16_t y, const int8_t Heater) {
  MarlinImage image = imgHotEnd;
  uint16_t Color;
  float currentTemperature, targetTemperature;

  if (Heater >= 0) { // HotEnd
    currentTemperature = thermalManager.degHotend(Heater);
    targetTemperature = thermalManager.degTargetHotend(Heater);
  }
  #if HAS_HEATED_BED
    else if (Heater == H_BED) {
      currentTemperature = thermalManager.degBed();
      targetTemperature = thermalManager.degTargetBed();
    }
  #endif // HAS_HEATED_BED
  #if HAS_TEMP_CHAMBER
    else if (Heater == H_CHAMBER) {
      currentTemperature = thermalManager.degChamber();
      #if HAS_HEATED_CHAMBER
        targetTemperature = thermalManager.degTargetChamber();
      #else
        targetTemperature = ABSOLUTE_ZERO;
      #endif
    }
  #endif // HAS_TEMP_CHAMBER
  else return;

  TERN_(TOUCH_SCREEN, if (targetTemperature >= 0) touch.add_control(HEATER, x, y, 64, 100, Heater));
  tft.canvas(x, y, 64, 100);
  tft.set_background(COLOR_BACKGROUND);

  Color = currentTemperature < 0 ? COLOR_INACTIVE : COLOR_COLD;

  if (Heater >= 0) { // HotEnd
    if (currentTemperature >= 50) Color = COLOR_HOTEND;
  }
  #if HAS_HEATED_BED
    else if (Heater == H_BED) {
      if (currentTemperature >= 50) Color = COLOR_HEATED_BED;
      image = targetTemperature > 0 ? imgBedHeated : imgBed;
    }
  #endif // HAS_HEATED_BED
  #if HAS_TEMP_CHAMBER
    else if (Heater == H_CHAMBER) {
      if (currentTemperature >= 50) Color = COLOR_CHAMBER;
      image = targetTemperature > 0 ? imgChamberHeated : imgChamber;
    }
  #endif // HAS_TEMP_CHAMBER

  tft.add_image(0, 18, image, Color);

  tft_string.set((uint8_t *)i16tostr3rj(currentTemperature + 0.5));
  tft_string.add(LCD_STR_DEGREE);
  tft_string.trim();
  tft.add_text(tft_string.center(64) + 2, 72, Color, tft_string);

  if (targetTemperature >= 0) {
    tft_string.set((uint8_t *)i16tostr3rj(targetTemperature + 0.5));
    tft_string.add(LCD_STR_DEGREE);
    tft_string.trim();
    tft.add_text(tft_string.center(64) + 2, 8, Color, tft_string);
  }
}

void draw_fan_status(uint16_t x, uint16_t y, const bool blink) {
  TERN_(TOUCH_SCREEN, touch.add_control(FAN, x, y, 64, 100));
  tft.canvas(x, y, 64, 100);
  tft.set_background(COLOR_BACKGROUND);

  uint8_t fanSpeed = thermalManager.fan_speed[0];
  MarlinImage image;

  if (fanSpeed >= 127)
    image = blink ? imgFanFast1 : imgFanFast0;
  else if (fanSpeed > 0)
    image = blink ? imgFanSlow1 : imgFanSlow0;
  else
    image = imgFanIdle;

  tft.add_image(0, 10, image, COLOR_FAN);

  tft_string.set((uint8_t *)ui8tostr4pctrj(thermalManager.fan_speed[0]));
  tft_string.trim();
  tft.add_text(tft_string.center(64) + 6, 72, COLOR_FAN, tft_string);
}

void MarlinUI::draw_status_screen() {
  const bool blink = get_blink();

  TERN_(TOUCH_SCREEN, touch.clear());

  // heaters and fan
  uint16_t i, x, y = POS_Y;

  for (i = 0 ; i < ITEMS_COUNT; i++) {
    x = (320 / ITEMS_COUNT - 64) / 2 + (320 * i / ITEMS_COUNT);
    switch (i) {
      #ifdef ITEM_E0
        case ITEM_E0: draw_heater_status(x, y, H_E0); break;
      #endif
      #ifdef ITEM_E1
        case ITEM_E1: draw_heater_status(x, y, H_E1); break;
      #endif
      #ifdef ITEM_E2
        case ITEM_E2: draw_heater_status(x, y, H_E2); break;
      #endif
      #ifdef ITEM_BED
        case ITEM_BED: draw_heater_status(x, y, H_BED); break;
      #endif
      #ifdef ITEM_CHAMBER
        case ITEM_CHAMBER: draw_heater_status(x, y, H_CHAMBER); break;
      #endif
      #ifdef ITEM_FAN
        case ITEM_FAN: draw_fan_status(x, y, blink); break;
      #endif
    }
  }

  // coordinates
  tft.canvas(4, 103, 312, 24);
  tft.set_background(COLOR_BACKGROUND);
  tft.add_rectangle(0, 0, 312, 24, COLOR_AXIS_HOMED);

  uint16_t color;
  uint16_t offset;
  bool is_homed;

  tft.add_text( 10, 3, COLOR_AXIS_HOMED , "X");
  tft.add_text(127, 3, COLOR_AXIS_HOMED , "Y");
  tft.add_text(219, 3, COLOR_AXIS_HOMED , "Z");

  is_homed = TEST(axis_homed, X_AXIS);
  tft_string.set(blink & !is_homed ? "?" : ftostr4sign(LOGICAL_X_POSITION(current_position.x)));
  tft.add_text( 68 - tft_string.width(), 3, is_homed ? COLOR_AXIS_HOMED : COLOR_AXIS_NOT_HOMED, tft_string);

  is_homed = TEST(axis_homed, Y_AXIS);
  tft_string.set(blink & !is_homed ? "?" : ftostr4sign(LOGICAL_Y_POSITION(current_position.y)));
  tft.add_text(185 - tft_string.width(), 3, is_homed ? COLOR_AXIS_HOMED : COLOR_AXIS_NOT_HOMED, tft_string);

  is_homed = TEST(axis_homed, Z_AXIS);
  if (blink & !is_homed) {
    tft_string.set("?");
    offset = 25; // ".00"
  }
  else {
    const float z = LOGICAL_Z_POSITION(current_position.z);
    tft_string.set(ftostr52sp((int16_t)z));
    tft_string.rtrim();
    offset = tft_string.width();

    tft_string.set(ftostr52sp(z));
    offset += 25 - tft_string.width();
  }
  tft.add_text(301 - tft_string.width() - offset, 3, is_homed ? COLOR_AXIS_HOMED : COLOR_AXIS_NOT_HOMED, tft_string);

  // feed rate
  tft.canvas(70, 136, 80, 32);
  tft.set_background(COLOR_BACKGROUND);
  color = feedrate_percentage == 100 ? COLOR_RATE_100 : COLOR_RATE_ALTERED;
  tft.add_image(0, 0, imgFeedRate, color);
  tft_string.set(i16tostr3rj(feedrate_percentage));
  tft_string.add('%');
  tft.add_text(32, 6, color , tft_string);
  TERN_(TOUCH_SCREEN, touch.add_control(FEEDRATE, 70, 136, 80, 32));

  // flow rate
  tft.canvas(170, 136, 80, 32);
  tft.set_background(COLOR_BACKGROUND);
  color = planner.flow_percentage[0] == 100 ?
COLOR_RATE_100 : COLOR_RATE_ALTERED; tft.add_image(0, 0, imgFlowRate, color); tft_string.set(i16tostr3rj(planner.flow_percentage[active_extruder])); tft_string.add('%'); tft.add_text(32, 6, color , tft_string); TERN_(TOUCH_SCREEN, touch.add_control(FLOWRATE, 170, 136, 80, 32, active_extruder)); // print duration char buffer[14]; duration_t elapsed = print_job_timer.duration(); elapsed.toDigital(buffer); tft.canvas(96, 176, 128, 20); tft.set_background(COLOR_BACKGROUND); tft_string.set(buffer); tft.add_text(tft_string.center(128), 0, COLOR_PRINT_TIME, tft_string); // progress bar const uint8_t progress = ui.get_progress_percent(); tft.canvas(4, 198, 312, 9); tft.set_background(COLOR_PROGRESS_BG); tft.add_rectangle(0, 0, 312, 9, COLOR_PROGRESS_FRAME); if (progress) tft.add_bar(1, 1, (310 * progress) / 100, 7, COLOR_PROGRESS_BAR); // status message tft.canvas(0, 216, 320, 20); tft.set_background(COLOR_BACKGROUND); tft_string.set(status_message); tft_string.trim(); tft.add_text(tft_string.center(TFT_WIDTH), 0, COLOR_STATUS_MESSAGE, tft_string); #if ENABLED(TOUCH_SCREEN) add_control(256, 130, menu_main, imgSettings); TERN_(SDSUPPORT, add_control(0, 130, menu_media, imgSD, card.isMounted() && !printingIsActive(), COLOR_CONTROL_ENABLED, card.isMounted() && printingIsActive() ? COLOR_BUSY : COLOR_CONTROL_DISABLED)); #endif } // Draw a static item with no left-right margin required. Centered by default. 
void MenuItem_static::draw(const uint8_t row, PGM_P const pstr, const uint8_t style/*=SS_DEFAULT*/, const char * const vstr/*=nullptr*/) { menu_item(row); tft_string.set(pstr, itemIndex, itemString); if (vstr) tft_string.add(vstr); tft.add_text(tft_string.center(TFT_WIDTH), MENU_TEXT_Y_OFFSET, COLOR_YELLOW, tft_string); } // Draw a generic menu item with pre_char (if selected) and post_char void MenuItemBase::_draw(const bool sel, const uint8_t row, PGM_P const pstr, const char pre_char, const char post_char) { menu_item(row, sel); uint8_t *string = (uint8_t *)pstr; MarlinImage image = noImage; switch (*string) { case 0x01: image = imgRefresh; break; // LCD_STR_REFRESH case 0x02: image = imgDirectory; break; // LCD_STR_FOLDER } uint8_t offset = MENU_TEXT_X_OFFSET; if (image != noImage) { string++; offset = 32; tft.add_image(0, 0, image, COLOR_MENU_TEXT, sel ? COLOR_SELECTION_BG : COLOR_BACKGROUND); } tft_string.set(string, itemIndex, itemString); tft.add_text(offset, MENU_TEXT_Y_OFFSET, COLOR_MENU_TEXT, tft_string); } // Draw a menu item with a (potentially) editable value void MenuEditItemBase::draw(const bool sel, const uint8_t row, PGM_P const pstr, const char* const data, const bool pgm) { menu_item(row, sel); tft_string.set(pstr, itemIndex, itemString); tft.add_text(MENU_TEXT_X_OFFSET, MENU_TEXT_Y_OFFSET, COLOR_MENU_TEXT, tft_string); if (data) { tft_string.set(data); tft.add_text(TFT_WIDTH - MENU_TEXT_X_OFFSET - tft_string.width(), MENU_TEXT_Y_OFFSET, COLOR_MENU_VALUE, tft_string); } } // Low-level draw_edit_screen can be used to draw an edit screen from anyplace void MenuEditItemBase::draw_edit_screen(PGM_P const pstr, const char* const value/*=nullptr*/) { ui.encoder_direction_normal(); TERN_(TOUCH_SCREEN, touch.clear()); uint16_t line = 1; menu_line(line++); tft_string.set(pstr, itemIndex, itemString); tft_string.trim(); tft.add_text(tft_string.center(TFT_WIDTH), MENU_TEXT_Y_OFFSET, COLOR_MENU_TEXT, tft_string); TERN_(AUTO_BED_LEVELING_UBL, if 
(ui.external_control) line++); // ftostr52() will overwrite *value so *value has to be displayed first menu_line(line); tft_string.set(value); tft_string.trim(); tft.add_text(tft_string.center(TFT_WIDTH), MENU_TEXT_Y_OFFSET, COLOR_MENU_VALUE, tft_string); #if ENABLED(AUTO_BED_LEVELING_UBL) if (ui.external_control) { menu_line(line - 1); tft_string.set(X_LBL); tft.add_text(52, MENU_TEXT_Y_OFFSET, COLOR_MENU_TEXT, tft_string); tft_string.set(ftostr52(LOGICAL_X_POSITION(current_position.x))); tft_string.trim(); tft.add_text(144 - tft_string.width(), MENU_TEXT_Y_OFFSET, COLOR_MENU_VALUE, tft_string); tft_string.set(Y_LBL); tft.add_text(176, MENU_TEXT_Y_OFFSET, COLOR_MENU_TEXT, tft_string); tft_string.set(ftostr52(LOGICAL_X_POSITION(current_position.y))); tft_string.trim(); tft.add_text(268 - tft_string.width(), MENU_TEXT_Y_OFFSET, COLOR_MENU_VALUE, tft_string); } #endif extern screenFunc_t _manual_move_func_ptr; if (ui.currentScreen != _manual_move_func_ptr && !ui.external_control) { #define SLIDER_LENGHT 224 #define SLIDER_Y_POSITION 140 tft.canvas((TFT_WIDTH - SLIDER_LENGHT) / 2, SLIDER_Y_POSITION, SLIDER_LENGHT, 16); tft.set_background(COLOR_BACKGROUND); int16_t position = (SLIDER_LENGHT - 2) * ui.encoderPosition / maxEditValue; tft.add_bar(0, 7, 1, 2, ui.encoderPosition == 0 ? COLOR_SLIDER_INACTIVE : COLOR_SLIDER); tft.add_bar(1, 6, position, 4, COLOR_SLIDER); tft.add_bar(position + 1, 6, SLIDER_LENGHT - 2 - position, 4, COLOR_SLIDER_INACTIVE); tft.add_bar(SLIDER_LENGHT - 1, 7, 1, 2, int32_t(ui.encoderPosition) == maxEditValue ? 
COLOR_SLIDER : COLOR_SLIDER_INACTIVE); #if ENABLED(TOUCH_SCREEN) tft.add_image((SLIDER_LENGHT - 8) * ui.encoderPosition / maxEditValue, 0, imgSlider, COLOR_SLIDER); touch.add_control(SLIDER, (TFT_WIDTH - SLIDER_LENGHT) / 2, SLIDER_Y_POSITION - 8, SLIDER_LENGHT, 32, maxEditValue); #endif } #if ENABLED(TOUCH_SCREEN) add_control(32, 176, DECREASE, imgDecrease); add_control(224, 176, INCREASE, imgIncrease); add_control(128, 176, CLICK, imgConfirm); #endif } // The Select Screen presents a prompt and two "buttons" void MenuItem_confirm::draw_select_screen(PGM_P const yes, PGM_P const no, const bool yesno, PGM_P const pref, const char * const string/*=nullptr*/, PGM_P const suff/*=nullptr*/) { uint16_t line = 1; if (string == NULL) line++; menu_line(line++); tft_string.set(pref); tft_string.trim(); tft.add_text(tft_string.center(TFT_WIDTH), MENU_TEXT_Y_OFFSET, COLOR_MENU_TEXT, tft_string); if (string) { menu_line(line++); tft_string.set(string); tft_string.trim(); tft.add_text(tft_string.center(TFT_WIDTH), MENU_TEXT_Y_OFFSET, COLOR_MENU_TEXT, tft_string); } if (suff) { menu_line(line); tft_string.set(suff); tft_string.trim(); tft.add_text(tft_string.center(TFT_WIDTH), MENU_TEXT_Y_OFFSET, COLOR_MENU_TEXT, tft_string); } #if ENABLED(TOUCH_SCREEN) add_control(48, 176, CANCEL, imgCancel, true, yesno ? HALF(COLOR_CONTROL_CANCEL) : COLOR_CONTROL_CANCEL); add_control(208, 176, CONFIRM, imgConfirm, true, yesno ? COLOR_CONTROL_CONFIRM : HALF(COLOR_CONTROL_CONFIRM)); #endif } #if ENABLED(SDSUPPORT) void MenuItem_sdbase::draw(const bool sel, const uint8_t row, PGM_P const, CardReader &theCard, const bool isDir) { menu_item(row, sel); if (isDir) tft.add_image(0, 0, imgDirectory, COLOR_MENU_TEXT, sel ? 
COLOR_SELECTION_BG : COLOR_BACKGROUND); tft.add_text(32, MENU_TEXT_Y_OFFSET, COLOR_MENU_TEXT, theCard.longest_filename()); } #endif #if ENABLED(ADVANCED_PAUSE_FEATURE) void MarlinUI::draw_hotend_status(const uint8_t row, const uint8_t extruder) { #if ENABLED(TOUCH_SCREEN) touch.clear(); draw_menu_navigation = false; touch.add_control(RESUME_CONTINUE , 0, 0, 320, 240); #endif menu_line(row); tft_string.set(GET_TEXT(MSG_FILAMENT_CHANGE_NOZZLE)); tft_string.add('E'); tft_string.add((char)('1' + extruder)); tft_string.add(' '); tft_string.add(i16tostr3rj(thermalManager.degHotend(extruder))); tft_string.add(LCD_STR_DEGREE); tft_string.add(" / "); tft_string.add(i16tostr3rj(thermalManager.degTargetHotend(extruder))); tft_string.add(LCD_STR_DEGREE); tft_string.trim(); tft.add_text(tft_string.center(TFT_WIDTH), MENU_TEXT_Y_OFFSET, COLOR_MENU_TEXT, tft_string); } #endif // ADVANCED_PAUSE_FEATURE #if ENABLED(AUTO_BED_LEVELING_UBL) #define GRID_OFFSET_X 8 #define GRID_OFFSET_Y 8 #define GRID_WIDTH 144 #define GRID_HEIGHT 144 #define CONTROL_OFFSET 8 void MarlinUI::ubl_plot(const uint8_t x_plot, const uint8_t y_plot) { tft.canvas(GRID_OFFSET_X, GRID_OFFSET_Y, GRID_WIDTH, GRID_HEIGHT); tft.set_background(COLOR_BACKGROUND); tft.add_rectangle(0, 0, GRID_WIDTH, GRID_HEIGHT, COLOR_WHITE); for (uint16_t x = 0; x < GRID_MAX_POINTS_X ; x++) for (uint16_t y = 0; y < GRID_MAX_POINTS_Y ; y++) if (position_is_reachable({ ubl.mesh_index_to_xpos(x), ubl.mesh_index_to_ypos(y) })) tft.add_bar(1 + (x * 2 + 1) * (GRID_WIDTH - 4) / GRID_MAX_POINTS_X / 2, GRID_HEIGHT - 3 - ((y * 2 + 1) * (GRID_HEIGHT - 4) / GRID_MAX_POINTS_Y / 2), 2, 2, COLOR_UBL); tft.add_rectangle((x_plot * 2 + 1) * (GRID_WIDTH - 4) / GRID_MAX_POINTS_X / 2 - 1, GRID_HEIGHT - 5 - ((y_plot * 2 + 1) * (GRID_HEIGHT - 4) / GRID_MAX_POINTS_Y / 2), 6, 6, COLOR_UBL); const xy_pos_t pos = { ubl.mesh_index_to_xpos(x_plot), ubl.mesh_index_to_ypos(y_plot) }, lpos = pos.asLogical(); tft.canvas(216, GRID_OFFSET_Y + (GRID_HEIGHT - 32) / 2 - 
32, 96, 32); tft.set_background(COLOR_BACKGROUND); tft_string.set(X_LBL); tft.add_text(0, MENU_TEXT_Y_OFFSET, COLOR_MENU_TEXT, tft_string); tft_string.set(ftostr52(lpos.x)); tft_string.trim(); tft.add_text(96 - tft_string.width(), MENU_TEXT_Y_OFFSET, COLOR_MENU_VALUE, tft_string); tft.canvas(216, GRID_OFFSET_Y + (GRID_HEIGHT - 32) / 2, 96, 32); tft.set_background(COLOR_BACKGROUND); tft_string.set(Y_LBL); tft.add_text(0, MENU_TEXT_Y_OFFSET, COLOR_MENU_TEXT, tft_string); tft_string.set(ftostr52(lpos.y)); tft_string.trim(); tft.add_text(96 - tft_string.width(), MENU_TEXT_Y_OFFSET, COLOR_MENU_VALUE, tft_string); tft.canvas(216, GRID_OFFSET_Y + (GRID_HEIGHT - 32) / 2 + 32, 96, 32); tft.set_background(COLOR_BACKGROUND); tft_string.set(Z_LBL); tft.add_text(0, MENU_TEXT_Y_OFFSET, COLOR_MENU_TEXT, tft_string); tft_string.set(isnan(ubl.z_values[x_plot][y_plot]) ? "-----" : ftostr43sign(ubl.z_values[x_plot][y_plot])); tft_string.trim(); tft.add_text(96 - tft_string.width(), MENU_TEXT_Y_OFFSET, COLOR_MENU_VALUE, tft_string); tft.canvas(GRID_OFFSET_X + (GRID_WIDTH - 32) / 2, GRID_OFFSET_Y + GRID_HEIGHT + CONTROL_OFFSET - 1, 32, 32); tft.set_background(COLOR_BACKGROUND); tft_string.set(ui8tostr3rj(x_plot)); tft_string.trim(); tft.add_text(tft_string.center(32), MENU_TEXT_Y_OFFSET, COLOR_MENU_VALUE, tft_string); tft.canvas(GRID_OFFSET_X + GRID_WIDTH + CONTROL_OFFSET, GRID_OFFSET_Y + (GRID_HEIGHT - 27) / 2, 32, 32); tft.set_background(COLOR_BACKGROUND); tft_string.set(ui8tostr3rj(y_plot)); tft_string.trim(); tft.add_text(tft_string.center(32), MENU_TEXT_Y_OFFSET, COLOR_MENU_VALUE, tft_string); #if ENABLED(TOUCH_SCREEN) touch.clear(); draw_menu_navigation = false; add_control(GRID_OFFSET_X + GRID_WIDTH + CONTROL_OFFSET, GRID_OFFSET_Y + CONTROL_OFFSET, UBL, ENCODER_STEPS_PER_MENU_ITEM * GRID_MAX_POINTS_X, imgUp); add_control(GRID_OFFSET_X + GRID_WIDTH + CONTROL_OFFSET, GRID_OFFSET_Y + GRID_HEIGHT - CONTROL_OFFSET - 32, UBL, - ENCODER_STEPS_PER_MENU_ITEM * GRID_MAX_POINTS_X, 
imgDown); add_control(GRID_OFFSET_X + CONTROL_OFFSET, GRID_OFFSET_Y + GRID_HEIGHT + CONTROL_OFFSET, UBL, - ENCODER_STEPS_PER_MENU_ITEM, imgLeft); add_control(GRID_OFFSET_X + GRID_WIDTH - CONTROL_OFFSET - 32, GRID_OFFSET_Y + GRID_HEIGHT + CONTROL_OFFSET, UBL, ENCODER_STEPS_PER_MENU_ITEM, imgRight); add_control(224, GRID_OFFSET_Y + GRID_HEIGHT + CONTROL_OFFSET, CLICK, imgLeveling); add_control(144, 206, BACK, imgBack); #endif } #endif // AUTO_BED_LEVELING_UBL #if ENABLED(TOUCH_SCREEN_CALIBRATION) void MarlinUI::touch_calibration() { static uint16_t x, y; calibrationState calibration_stage = touch.get_calibration_state(); if (calibration_stage == CALIBRATION_NONE) { defer_status_screen(true); clear_lcd(); calibration_stage = touch.calibration_start(); } else { tft.canvas(x - 15, y - 15, 31, 31); tft.set_background(COLOR_BACKGROUND); } x = 20; y = 20; touch.clear(); if (calibration_stage < CALIBRATION_SUCCESS) { switch(calibration_stage) { case CALIBRATION_POINT_1: tft_string.set("Top Left"); break; case CALIBRATION_POINT_2: y = TFT_HEIGHT - 21; tft_string.set("Bottom Left"); break; case CALIBRATION_POINT_3: x = TFT_WIDTH - 21; tft_string.set("Top Right"); break; case CALIBRATION_POINT_4: x = TFT_WIDTH - 21; y = TFT_HEIGHT - 21; tft_string.set("Bottom Right"); break; default: break; } tft.canvas(x - 15, y - 15, 31, 31); tft.set_background(COLOR_BACKGROUND); tft.add_bar(0, 15, 31, 1, COLOR_TOUCH_CALIBRATION); tft.add_bar(15, 0, 1, 31, COLOR_TOUCH_CALIBRATION); touch.add_control(CALIBRATE, 0, 0, TFT_WIDTH, TFT_HEIGHT, uint32_t(x) << 16 | uint32_t(y)); } else { tft_string.set(calibration_stage == CALIBRATION_SUCCESS ? 
"Calibration Completed" : "Calibration Failed"); defer_status_screen(false); touch.calibration_end(); touch.add_control(BACK, 0, 0, TFT_WIDTH, TFT_HEIGHT); } tft.canvas(0, (TFT_HEIGHT - tft_string.font_height()) >> 1, TFT_WIDTH, tft_string.font_height()); tft.set_background(COLOR_BACKGROUND); tft.add_text(tft_string.center(TFT_WIDTH), 0, COLOR_MENU_TEXT, tft_string); } #endif // TOUCH_SCREEN_CALIBRATION void menu_line(const uint8_t row, uint16_t color) { tft.canvas(0, 2 + 34 * row, TFT_WIDTH, 32); tft.set_background(color); } void menu_pause_option(); void menu_item(const uint8_t row, bool sel ) { #if ENABLED(TOUCH_SCREEN) if (row == 0) { touch.clear(); draw_menu_navigation = TERN(ADVANCED_PAUSE_FEATURE, ui.currentScreen != menu_pause_option, true); } #endif menu_line(row, sel ? COLOR_SELECTION_BG : COLOR_BACKGROUND); TERN_(TOUCH_SCREEN, touch.add_control(sel ? CLICK : MENU_ITEM, 0, 2 + 34 * row, 320, 32, encoderTopLine + row)); } #endif // HAS_UI_320x240
What would happen if Batman, Superman and Wonder Woman faced off against the villainous likes of Freddy Krueger and Jason? That’s exactly the plot the folks from Stryder HD and Adeel of Steel cooked up in their mashup trailer Justice League vs Monsters. The trailer takes scenes from Batman v Superman: Dawn of Justice and combines them with clips from Halloween, Friday the 13th, Nightmare on Elm Street and a whole slew of other horror movies. The result is a new take on the recent DC film that will leave fans longing for more. Check out the epic mashup trailer below! [ H/T YouTube / Stryder HD ]
With the cooperation of the Director of the centers, specific procedures will be established to identify the patients and assure that their care conforms to the specific standards for diagnosis, treatment, and management that we identified in year 1 of the project. Two sets of activities will be conducted during the second year of this grant. First, a descriptive field study using approximately 60 diabetic and 60 hypertensive patients will be conducted in order to: 1. Describe the interrelationships among the following patient contributions to care: a) patients' knowledge of disease and therapy; b) patients' beliefs regarding the severity of the disease, the benefits of therapy, and the barriers to implementing therapy; c) patient compliance with prescribed therapy. 2. Produce a prediction model of clinical health states using knowledge, beliefs, compliance, and intake severity of disease. 3. Produce a prediction model of psychosocial health states using knowledge, beliefs, compliance, and intake severity of disease. 4. Classify patients according to the pattern of scores on measures of knowledge, beliefs, and compliance. Second, based on the results of this descriptive phase, protocols will be developed for the experimental nursing intervention phase of the study. Nurse interviewers will be trained for the experimental phase of the study during the latter part of this second year of funding. In addition, patient interviewers and record auditors will be trained to conduct the functions necessary for the conduct of the experiment.
SAN JOSE (KPIX 5) – A Bay Area rabbi is helping his Jewish community celebrate Passover while social distancing from home. Rabbi Mendel Weinfeld of the Chabad House-Almaden Valley and his wife, Mussi, are creating “Seder to-go” boxes. Like many other goods, local grocery stores have been running out of kosher foods typically eaten during Passover. The Weinfelds are now making sure many families have what they need to have a proper Seder at home. “Normally at this time we have thousands of people around the Bay Area that come, tens of thousands of people, celebrate Passover,” Weinfeld said. “Today it’s going to be hard, and we’re forced to be at home but that’s not going to stop us and we’re all going to be leaders in our own homes.” The Chabad of San Francisco is also offering Seder to-go boxes. They still had dozens left for purchase as of Monday night.
AGO4 is specifically required for heterochromatic siRNA accumulation at Pol V-dependent loci in Arabidopsis thaliana. In plants, 24 nucleotide long heterochromatic siRNAs (het-siRNAs) transcriptionally regulate gene expression by RNA-directed DNA methylation (RdDM). The biogenesis of most het-siRNAs depends on the plant-specific RNA polymerase IV (Pol IV), and ARGONAUTE4 (AGO4) is a major het-siRNA effector protein. Through genome-wide analysis of sRNA-seq data sets, we found that AGO4 is required for the accumulation of a small subset of het-siRNAs. The accumulation of AGO4-dependent het-siRNAs also requires several factors known to participate in the effector portion of the RdDM pathway, including RNA POLYMERASE V (POL V), DOMAINS REARRANGED METHYLTRANSFERASE 2 (DRM2) and SAWADEE HOMEODOMAIN HOMOLOGUE 1 (SHH1). Like many AGO proteins, AGO4 is an endonuclease that can 'slice' RNAs. We found that a slicing-defective AGO4 was unable to fully recover AGO4-dependent het-siRNA accumulation from ago4 mutant plants. Collectively, our data suggest that AGO4-dependent siRNAs are secondary siRNAs dependent on the prior activity of the RdDM pathway at certain loci.
Early tumor necrosis factor-alpha release from the pulmonary macrophage in lung ischemia-reperfusion injury. Tumor necrosis factor-alpha is a proinflammatory mediator required for the development of experimental lung ischemia-reperfusion injury. The alveolar macrophage is a rich source of tumor necrosis factor-alpha in multiple models of acute lung injury. The present study was undertaken to determine whether the alveolar macrophage is an important source of tumor necrosis factor-alpha in lung ischemia-reperfusion injury and whether suppression of its function protects against injury. Left lungs of Long-Evans rats underwent normothermic ischemia for 90 minutes and reperfusion for up to 4 hours. Treated animals received gadolinium chloride, a rare earth metal that inhibits macrophage function. Injury was quantitated via lung tissue neutrophil accumulation (myeloperoxidase content), lung vascular permeability, and bronchoalveolar lavage fluid leukocyte, cytokine, and chemokine content. Separate samples were generated for immunohistochemistry. Tumor necrosis factor-alpha secretion occurred at 15 minutes of reperfusion and was localized to the alveolar macrophage by immunohistochemistry. In gadolinium-treated animals, lung vascular permeability was reduced by 66% at 15 minutes (P <.03) of reperfusion and by 34% at 4 hours (P <.02) of reperfusion. Suppression of macrophage function resulted in a 35% reduction in lung myeloperoxidase content (P <.03) and similar reductions in bronchoalveolar lavage leukocyte accumulation. Tumor necrosis factor-alpha and microphage inflammatory protein-1alpha protein levels were markedly reduced in the bronchoalveolar lavage of gadolinium-treated animals by enzyme-linked immunosorbent assay. The alveolar macrophage secretes tumor necrosis factor-alpha protein by 15 minutes of reperfusion, which orchestrates the early events that eventually result in lung ischemia-reperfusion injury at 4 hours. 
Gadolinium pretreatment markedly reduces tumor necrosis factor-alpha elaboration, resulting in significant protection against lung ischemia-reperfusion injury.
Effects of a new mild shampoo for preventing hair loss in Asian by a simple hand-held phototrichogram technique. This study was conducted to evaluate the effects of a commercially available shampoo in Korean subjects with alopecia using a simple hand-held phototrichogram technique. Forty-four subjects with alopecia were enrolled and forty subjects continued for 16 weeks. In the test group, total hair counts increased significantly at weeks 8 and 16, and the number of shedding hair significantly decreased at week 16. Terminal hair counts significantly increased at week 8. In the control group, hair thickness and the number of vellus hairs significantly decreased at week 16. The number of total hairs significantly increased in the test group than in the control group at weeks 8 and 16. The number of shedding hairs significantly decreased in the test group than in the control group at week 16. Visual assessment using clinical digital images showed that the number of total hairs appeared to increase although there was no statistical significance. In this study, it was found that the test shampoo could prevent hair loss.
You know how it is. You're a power user, an alpha nerd. You just aren't happy without multiple screens - a puny one-screen desktop isn't enough for the multiple video feeds, apps and so forth that are essential to your working life. But that's too bad, because you are also a deadly US Navy SEAL supertrooper. Your video feeds aren't CNN - well, maybe some of them are actually - they're live video from surveillance drones prowling overhead, or from robothopter bat-bugs you have sent into the bad guys' stronghold. You're normally working up to your chest in the snows of the Hindu Kush or the stinking mud of the Euphrates delta, generally resting your ruggedised laptop on the dead body of a terrorist you have just killed in total silence with no more than a piece of string, using a little known Oriental grappling technique. So no multiple screens for you - or at least, not until now. Because now the era of the waterproof, shockproof, dustproof, dual screen laptop has dawned. The machine in this case is a ruggedised version of the G400 dual-screen machine from Alaskan startup gScreen, which says the G400 is "the first true dual-screen laptop with identical 15-inch LED backlit displays", though others might dispute that. The G400 isn't actually on sale yet - customers can reserve one from the 25th. Meanwhile, however, certain privileged customers are jumping the queue. Last week, Naval Special Warfare Group 2 based at Little Creek, Virginia issued a notice of intent to award a small-biz set aside contract for "Titan M1 Dual screen Laptop Workstations". The gScreen corporate blog confirms that "this product is being built specifically to specs requested by the US NAVY for extreme environments". Apart from dual displays, the frogman-commando IT types will get an Intel CORE 2 Quad QX9300 processor, 4GB of RAM, 500GB hard drive and standard MIL-STD 810F ruggedisation. 
The SEALs have also specified dual GeForce 8600M graphics cards, Blu-Ray drive, Gigabit networking, WiFi and Bluetooth. It seems they're no fans of Vista - the machines are to come with Windows XP Pro. It does occur that machines of this sort would also be capable of other than strictly work-related tasks. ®
Earlier this morning, we reported that other sources had said FC Utrecht midfielder Gianluca Nijholt was currently on trial with the Chicago Fire. After reaching out to Nijholt's agent Bart Baving, we received this response (it has been cleaned up slightly as it doesn't appear English is Baving's first language, but the full context remains): "Spoke with Frank Klopas yesterday; Frank was very positive and wants to have him with the team. But I have to negotiate with (Chicago Fire Vice President of Soccer Operations) Guillermo (Petrei) first about the numbers...both parties are interested in working with each other." The Fire potentially picked up an international slot yesterday when Colombian side Millonarios reported that Fire midfielder Rafael Robayo would rejoin his old club. Datos Millionarios also reported there was a transfer fee involved. There is no word as to whether Nijholt would be recognized as a designated player. The article that was referenced this morning indicated Nijholt could be moved on a free transfer, which makes the chance of him taking up a DP spot less likely. When reached by Hot Time in Old Town, a Fire spokesperson said they had no comment on Nijholt at this time. Here are some highlights of the Fire's potential next signing: Gianluca Nijholt Highlights (via baradona73)
On construction of single-arm two-stage designs with consideration of both response and toxicity. When establishing a treatment in clinical trials, it is important to evaluate both effectiveness and toxicity. In phase II clinical trials, multinomial data are collected in m-stage designs, especially in the two-stage (m = 2) design. Exact tests on two proportions, p_r for the response rate and p_t for the nontoxicity rate, should be employed due to limited sample sizes. However, existing tests use certain parameter configurations at the boundary of the null hypothesis space to determine rejection regions without showing that the maximum Type I error rate is achieved at the boundary of the null hypothesis. In this paper, we show that the power function for each test in a large family of tests is nondecreasing in both p_r and p_t; identify the parameter configurations at which the maximum Type I error rate and the minimum power are achieved and derive level-α tests; provide optimal two-stage designs with the least expected total sample size and the optimization algorithm; and extend the results to the case of m > 2. Some R-codes are given in the Supporting Information.
NHAI to develop Paradip-Daitary highway. With the Paradip-Daitary expressway in Odisha in pathetic shape, the National Highway Authority of India (NHAI) has decided to develop the highway on an 'Operate-Maintain-Transfer' (OMT) mode, NHAI officials said. The tender-bidding process is on to hand over the maintenance work of the highway to a private firm. The contract would be for six years to repair and maintain the highway. As the highway is in bad shape in certain patches, a local construction company has been entrusted to undertake repairs till the OMT bid is finalised, the officials said. The busy highway, otherwise called the Paradip-Daitary expressway, happens to be the principal road between the mineral-rich areas and Paradip port, a stretch of 77 km. Replete with potholes, the highway has also come under roadside encroachment at strategic traffic junctions. A 10-km stretch of the road (Bhutamundai to Atharbanki) is not in a motorable condition, and one has to undergo a bitter experience at many places as the road is riddled with hundreds of potholes. 'We have urged upon the officials to set right the road in perfect condition so as to ensure speedy travel,' said Paradip Private Truck Operators' Union President Sumant Biswal.
Mind Your Manners Mind Your Manners may refer to: "Mind Your Manners" (Chiddy Bang song) (2011) "Mind Your Manners" (Pearl Jam song) (2013) Mind Your Manners (film), a 1953 short film by Olivier Megaton "Mind Your Manners" (Arthur), an episode of Arthur See also Manners or etiquette
Q: How and when to use Ember.Application register and inject methods? I'm trying to understand how to use the Ember.Application register & inject methods. What use case are these functions designed for? How are they to be used and when? I'd really like to know!

A: Ember by default does dependency injection when it boots your application, using mostly conventions. For example, if you use ember-data then an instance of the store class is injected in every route and controller in your application, so you can later get a reference by simply doing this.get('store') inside any route or controller.

For example, here is a code extract where the default store gets registered (taken from the source):

  Ember.onLoad('Ember.Application', function(Application) {
    Application.initializer({
      name: "store",
      initialize: function(container, application) {
        application.register('store:main', application.Store);
        ...
      }
      container.lookup('store:main');
    }
  });

And then injected (source):

  Application.initializer({
    name: "injectStore",
    initialize: function(container, application) {
      application.inject('controller', 'store', 'store:main');
      application.inject('route', 'store', 'store:main');
      application.inject('dataAdapter', 'store', 'store:main');
    }
    ...
  });

In other words, register and inject are methods to register dependencies and inject them yourself.
Let's assume you have a Session object which you populate after a server request on application start, and which you want to have a reference to in every controller. You could do something like this:

var App = Ember.Application.create({
  ready: function(){
    this.register('session:current', App.Session, {singleton: true});
    this.inject('controller', 'session', 'session:current');
  }
});

App.Session = Ember.Object.extend({
  sessionHash: ''
});

This code would set the session property of every controller instance to a singleton instance of App.Session, so in any controller you could do this.get('session') and get a reference to it; and since it's defined as a singleton, it would always be the same session object.

With register you can register controllers, models, views, or any arbitrary object type. inject, on the other hand, can inject onto all instances of a given class. For example, inject('model', 'session', 'session:current') would also inject the session property with the session:current instance into all models. To inject the session object onto, say, the IndexView, you could do inject('view:index', 'session', 'session:current').

Although register and inject are very powerful, you should use them wisely and only when you really know there is no other way to achieve your goal; I guess the lack of documentation is an indicator of discouragement.

Update: no good explanation without a working example, so here it goes: http://jsbin.com/usaluc/6/edit. Notice how in the example we can simply access the mentioned sessionHash by referring to the current controller's session object with {{controller.session.sessionHash}} in every route we are in. This is the merit of what we have done by registering and injecting the App.Session object into every controller in the application. Hope it helps.
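To see mechanically what register and inject do, here is a minimal, self-contained container sketch in plain JavaScript. This is not Ember's actual implementation, and the names (Container, Session, IndexController) are purely illustrative; it only mimics the behaviour described above: register maps a full name to a factory, inject records a per-type injection rule, and lookup builds instances with those rules applied.

```javascript
// Toy dependency-injection container mimicking Ember's register/inject
// semantics. Illustrative only -- not Ember's real code.
function Container() {
  this.factories = {};   // 'session:current' -> {factory, singleton}
  this.injections = {};  // 'controller' -> [{property, fullName}]
  this.singletons = {};  // cache for singleton instances
}

// register('type:name', Factory, {singleton: true})
Container.prototype.register = function (fullName, factory, options) {
  this.factories[fullName] = {
    factory: factory,
    singleton: !!(options && options.singleton)
  };
};

// inject('controller', 'session', 'session:current'): every looked-up
// 'controller:*' instance gets a `session` property.
Container.prototype.inject = function (type, property, fullName) {
  (this.injections[type] = this.injections[type] || [])
    .push({ property: property, fullName: fullName });
};

Container.prototype.lookup = function (fullName) {
  var entry = this.factories[fullName];
  if (entry.singleton && this.singletons[fullName]) {
    return this.singletons[fullName];
  }
  var instance = new entry.factory();
  var type = fullName.split(':')[0];
  (this.injections[type] || []).forEach(function (inj) {
    instance[inj.property] = this.lookup(inj.fullName);
  }, this);
  if (entry.singleton) this.singletons[fullName] = instance;
  return instance;
};

// Usage, mirroring the App.Session example above:
function Session() { this.sessionHash = 'abc123'; }
function IndexController() {}

var container = new Container();
container.register('session:current', Session, { singleton: true });
container.register('controller:index', IndexController);
container.inject('controller', 'session', 'session:current');

var controller = container.lookup('controller:index');
console.log(controller.session.sessionHash); // -> abc123
```

Every controller looked up from such a container shares the same Session instance, which is exactly the "singleton injected everywhere" behaviour described above.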
A: A common use case is to provide the current logged-in user property to controllers and routes, as in https://github.com/kelonye/ember-user/blob/master/lib/index.js and https://github.com/kelonye/ember-user/blob/master/test/index.js
{ "pile_set_name": "StackExchange" }
--- author: - 'Armeen Taeb[^1]' - 'Arian Maleki[^2]' - 'Christoph Studer[^3]' - 'Richard G. Baraniuk' bibliography: - 'references2.bib' title: Maximin Analysis of Message Passing Algorithms for Recovering Block Sparse Signals --- Group sparsity; group LASSO; approximate message passing; phase transition. Introduction ============ Background ========== Main results ============ Proofs of the main results ========================== [^1]: Dept. of Electrical, Computer, and Energy Engineering, University of Colorado at Boulder. [^2]: Dept. of Statistics, Columbia University. [^3]: Dept. of Electrical and Computer Engineering, Rice University.
{ "pile_set_name": "ArXiv" }
OMGOMGOMGOMGOMGOMGOMGOMGOMG! 8D!!!! *FANGASM* Oh, this is great, this is wonderful, this is amazing! When I saw this in my inbox, I thought to myself, "Ah Sweet! He drew Vinyl! Kinda ironic since he and I were talking about Vinyl..." Then I clicked on it, saw it was AMAZINGLY AWESOME, and on top of that, it was for me, pffft, I exploded with happiness. I LOVE the atmosphere, and the lighting is very appealing. I also like Vinyl's pose, and the fact that you did Vinyl's shades like separate shades, it looks VERY cool. I admit that the dot in the middle instead of a proper connection is a bit odd, but I still like it C: This is now my PSP's, PS3's and laptop's wallpaper, I seriously love this, thanks a million man.
{ "pile_set_name": "OpenWebText2" }
Subbaramiah Minakshisundaram

Subbaramiah Minakshisundaram (12 October 1913, Trichur – 13 August 1968, Kerala) was an Indian mathematician who worked on heat kernels and parabolic partial differential equations and introduced the Minakshisundaram–Pleijel zeta function.

External links
S. Minakshisundaram memorial society

Category:1913 births
Category:1968 deaths
Category:20th-century Indian mathematicians
Category:Scientists from Kerala
{ "pile_set_name": "Wikipedia (en)" }
American Biographical Institute

The American Biographical Institute (ABI) was a paid-inclusion vanity biographical reference directory publisher based in Raleigh, North Carolina, which had been publishing biographies since 1967. It generated revenue from sales of fraudulent certificates and books. Each year the company awarded hundreds of "Man of the Year" or "Woman of the Year" awards at between $195 and $295 each. Its awards were frequently denounced as scams by politicians, journalists, and others. The Government of Western Australia's ScamNet service considers the American Biographical Institute to be a scam vanity publisher "who appeals to people who want a plaque on their wall or see their name in a book, even if the honour has no real credibility—in effect, they have purchased the honour." The company went bankrupt in 2012. The company's owner, Arlene Calhoun, also ran another purveyor of for-profit awards called the United Cultural Convention and another vanity press called the Pentland Press or Ivy House Publishing Group.

Operations

The ABI invited individuals to purchase various honors commemorating their inclusion in a specific biography. One former employee explained that the company bought mailing lists from organizations and, using those names, sent out blanket mailings inviting individuals to be in biographical books or accept awards. Such honors included "International Man of the Year," "Most Admired Man of the Decade" or "Outstanding Man of the 21st Century" (see list below); individuals could also be included in ABI publications, such as 500 Leaders of Science or The World Book of Knowledge, in exchange for a contribution fee. Those who accepted, who sometimes wrote their own biographies, were offered books or certificates at prices as high as US $795.
On its website, the publisher describes itself as "one of the world's leading biographical reference publishers and authorities on global contemporary achievement" and claims that "inclusion in an ABI reference title is based on personal achievement alone and is not available for purchase." The ABI shares an address and P.O. box with the United Cultural Convention, another purveyor of for-profit awards. The Chairman of the ABI, Arlene Calhoun, also runs another vanity press, Pentland Press (d/b/a Ivy House Publishing Group).

"World Forum"

The ABI is also the co-host, with the International Biographical Centre, of a yearly World Forum (previously the International Congress on Arts and Communications), which invites a group for a week of professional seminars, artistic displays and performances, and culture sharing. Host cities over the 31 yearly meetings have included: New York; Washington, D.C.; New Orleans; San Francisco; Edinburgh; Cambridge, UK; Nairobi; Madrid; Lisbon; Cambridge, Mass., USA; Oxford, UK; Singapore; and Sydney. The Maitre Artiste of Ethiopia, Afewerk Tekle, was a regular attendee. No proceedings of these forums are produced, except by the ABI, which includes them in a newsletter. The often prestigious location is then quoted in their literature as if to add gravitas. In 2007, referring to the International Biographical Centre, the American Biographical Institute and Marquis Who's Who, Jan Margosian, consumer information coordinator for the Oregon Department of Justice, warned consumers to be wary and called the companies "pretty tacky", adding: "I don't know why they would put you in there if they weren't hoping to get you to buy the book. You truly have to look at how they are marketing and what the spin is. It's something you might want to watch out for."

Awards and titles

New awards are continually created and marketed.
Most awards are available for between US $195 and $495, payable by the recipient, depending on their level of prestige and the quality of the printing on the certificate and the material in the frame or mount. In 2005 the Institute awarded 200 "Man of the Year" awards at between $195 and $295 each. The American Biographical Institute gives awards such as "Man of the Year" or "Scientific Award of Excellence" to many people each year; every award can be purchased from them. The ABI does not provide a consolidated list of all the awards, medals, diplomas and certificates it issues, but the titles of the honors may be identified through the recipients' use of them in their résumés.

See also
Author mill
Who's Who scam

References

External links
Company homepage
First-hand account that exposes fraudulent who's who publishers

Category:Companies based in Raleigh, North Carolina
Category:Publishing companies established in 1967
Category:Book publishing companies based in North Carolina
Category:Résumé frauds and controversies
Category:Self-publishing companies
Category:1967 establishments in North Carolina
{ "pile_set_name": "Wikipedia (en)" }
Q: Struts JavaScript AJAX onsuccess page update

I have a problem with updating an HTML page after an AJAX request succeeds. In my project I'm using the old Struts 1 framework with a coolmenus JS component that produces a menu. After a form submit, the server returns a block of JS code within <script> tags (among the HTML), and these create a menu on each page load. Now recently I had to implement a solution that makes an AJAX request to update my model on the server side. Everything up to that point is OK, and the model is being updated, but the problem starts on swapping in the received HTML (using prototypejs):

$$('html')[0].innerHTML = t.responseText;

or

$$('html')[0].innerHTML.update(t.responseText);

It breaks my menu creation (there is no menu after the update). I tried to get all 'script' tags from the body and invoke them from the evalScripts() function, but it doesn't work at all. I mean, the scripts are invoked but they don't create the menu. Any ideas?

A: I was trying to update the whole page, but I didn't have to. I ended up extracting the 'content' part of the page without touching the menu.
{ "pile_set_name": "StackExchange" }
Decrease of DNA methyltransferase 1 expression relative to cell proliferation in transitional cell carcinoma. In many common cancers such as transitional cell carcinoma (TCC), specific genes are hypermethylated, whereas overall DNA methylation is diminished. Genome-wide DNA hypomethylation mostly affects repetitive sequences such as LINE-1 retrotransposons. Methylation of these sequences depends on adequate expression of DNA methyltransferase I (DNMT1) during DNA replication. Therefore, DNMT1 expression relative to proliferation was investigated in TCC cell lines and tissue as well as in renal carcinoma (RCC) cell lines, which also display hypomethylation, as indicated by decreased LINE-1 methylation. Cultured normal uroepithelial cells or normal bladder tissue served as controls. In all tumor cell lines, DNMT1 mRNA as well as protein was decreased relative to the DNA replication factor PCNA, and DNA hypomethylation was present. However, the extents of hypomethylation and DNMT1 downregulation did not correlate. Reporter gene assays showed that the differences in DNMT1 expression between normal and tumor cells were not established at the level of DNMT1 promoter regulation. Diminished DNMT1:PCNA mRNA ratios were also found in 28/45 TCC tissues but did not correlate with the extent of DNA hypomethylation. In addition, expression of the presumed de novo methyltransferases DNMT3A and DNMT3B mRNAs was investigated. DNMT3B overexpression was observed in about half of all high-stage TCC (DNMT3B vs. tumor stage, chi(2): p = 0.03), whereas overexpression of DNMT3A was rarer and less pronounced. Expression of DNMT3A and DNMT3B in most RCC lines was higher than in TCC lines. Our data indicate that DNMT1 expression does not increase adequately with cell proliferation in bladder cancer. This relative downregulation probably contributes to hypomethylation of repetitive DNA but does not determine its extent alone.
{ "pile_set_name": "PubMed Abstracts" }
The UK Qualifications and Curriculum Authority is considering introducing a new A’level course (in Britain, A’level is the exam that is taken at the end of high school) called “Use of Mathematics”. As one might expect, this idea has not met with universal approval, and there is now a campaign to stop the idea in its tracks. (I should warn you that the preceding link is to a Word file rather than to a web page.) The General Secretary of the National Association of Headteachers has this to say to the campaigners: They should get down from their ivory towers. They should be out in the world where young people live and exist and they should be appreciative that young people have great skills in the use of technology and we have to latch on to that. We cannot continue teaching an out dated 19th century curriculum. This is simply turning many children off education because it is completely not relevant to them at all. Some sample papers for the new course have been made available, so let’s have a look at the up-to-date 21st-century curriculum that will enthuse a new generation of British schoolchildren. I’ll concentrate on one or two questions but if you want to see more, then the sample papers can be found at the bottom of this page. (Update: unfortunately, these sample papers have been taken down. I can’t help wondering why. Further update: at least some sample papers are now available at the bottom of this page.) Before I discuss any of the actual questions, let’s imagine that we are sitting in a committee trying to devise a use-of-maths syllabus. What could be on it? Perhaps the most obvious place where mathematics is used is science, but that kind of use of mathematics is covered in mechanics questions and also in physics. Another place is statistics, but that too is on the existing mathematics syllabus. To help us, let us remember that we are looking for something that is relevant to schoolchildren. 
One might think that statistics was pretty relevant, but we had better suppress that thought and look for something else. Here is an idea. Many children will one day find themselves taking out a mortgage. Perhaps we could devise a question that will help them think about how mortgages work. I’m not saying in advance that this will turn out to be a good idea, but let us at least try.

For later reference, I want to discuss an obvious problem about repayment mortgages and to do so in some detail. Suppose for simplicity that the interest rate for an interest-only mortgage would be 5% and that this rate never changes. If I take out a repayment mortgage of £50,000 and pay £500 a month, then roughly how long will it take me to pay off the mortgage?

Let me answer that in a way that comes naturally to me, and, I’m guessing, to most mathematicians. To start with, I would replace a discrete problem (payments once a month) by a continuous one (money leaking out of my bank account at a constant rate). Next, I would forget the numerical values, which obscure what is going on (for instance, they make it harder to keep track of units) and rephrase the problem like this: at time $t=0$ I take out a loan of $D_0$, and thereafter the amount I owe, $D(t)$, changes in two ways. On the one hand there is a constant-rate decrease of $c$ (because of my repayments) but superimposed on this is an increase (the interest I have to pay) that is proportional to the current amount I owe, which we can denote by $\lambda D(t)$. In other words, $D$ satisfies the differential equation $\frac{dD}{dt}=\lambda D-c$. We can solve this by turning it upside down and getting $\frac{dt}{dD}=\frac{1}{\lambda D-c}$, which we can solve easily since all we have to do is integrate with respect to $D$. From this we get $t=\lambda^{-1}\log(\lambda D-c)+A$ for some constant $A$. Rearranging, we find that $\lambda D-c=e^{\lambda(t-A)}$, so that $D=\lambda^{-1}(c+e^{\lambda(t-A)})$. When $t=0$ this is supposed to give us $D_0$. From that it follows that $e^{-\lambda A}=\lambda D_0-c$, so $D(t)=\lambda^{-1}\bigl(c+(\lambda D_0-c)e^{\lambda t}\bigr)$. Therefore, the amount of time it will take until $D(t)=0$ is the value of $t$ such that $(\lambda D_0-c)e^{\lambda t}=-c$, or $e^{\lambda t}=\frac{c}{c-\lambda D_0}$, or $t=\lambda^{-1}\log\frac{c}{c-\lambda D_0}$.
From this expression we can see that I will never pay off the mortgage unless $c>\lambda D_0$, though in fact it is more sensible to deduce that from the differential equation: if $c\leq\lambda D_0$ then $D$ will not decrease. (This is telling us the rate at which repayments must be made in order to keep up with interest payments.) Also, if we know a little about the shape of the exponential function, we can see that if $\lambda D_0-c$ is negative, then $D$ will decrease rather slowly at first and much more quickly later on. This is a well-known phenomenon with repayment mortgages: initially most of the repayments are interest repayments (because the amount owing is large so the interest is large) but later on they are mostly capital repayments (because now the amount owing is small so the interest is small).

Of course, there is one final stage, which is to plug in some numbers. I won’t do it completely here, but I will point out that at least some thought is required if we want to work out what value of $\lambda$ corresponds to an interest rate of 5% and what value of $c$ corresponds to a monthly repayment of £500. If we measure $t$ in years, then we need to choose $\lambda$ such that $e^\lambda=1.05$ and we should take $c$ to be 6000. We are given that $e^\lambda=1.05$, and a reasonable approximation for $\lambda$ is 0.05 (since $\log(1+x)\approx x$ for small $x$), so the time taken will be around $\lambda^{-1}\log\frac{c}{c-\lambda D_0}$, where $\lambda D_0\approx 0.05\times 50000=2500$. So $\frac{c}{c-\lambda D_0}=\frac{6000}{3500}=\frac{12}{7}$, which is slightly under 2, so we get a bit less than $20\log 2\approx 14$ years, and could get a better estimate with the help of a calculator (which is not just allowed but actually required in the use-of-maths exam).

Now what skills did I need in order to do that calculation? (Apologies if I’ve made a mistake in it — I have not checked it carefully.) There were basically two. The first was to transform the original real-world problem into a purely mathematical one. The second was to solve the mathematics problem, which involved solving a fairly simple differential equation and then doing some routine algebraic manipulations.
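As a sanity check on the arithmetic, the continuous model can be compared with a literal month-by-month simulation. This is just a sketch (the function names are mine, not part of the original discussion), using the £50,000 loan, 5% annual interest and £500-a-month figures from the question:

```javascript
// Continuous model: D'(t) = lambda*D - c, paid off at
// t = (1/lambda) * log(c / (c - lambda*D0)).
function continuousPayoffYears(D0, annualRate, monthlyPayment) {
  var lambda = Math.log(1 + annualRate); // e^lambda = 1.05 => lambda ~ 0.0488
  var c = 12 * monthlyPayment;           // repayment rate per year
  if (c <= lambda * D0) return Infinity; // repayments never beat the interest
  return Math.log(c / (c - lambda * D0)) / lambda;
}

// Discrete reality: add a month's interest, subtract a month's payment.
function simulatedPayoffMonths(D0, annualRate, monthlyPayment) {
  var r = Math.pow(1 + annualRate, 1 / 12) - 1; // equivalent monthly rate
  var balance = D0, months = 0;
  while (balance > 0) {
    balance = balance * (1 + r) - monthlyPayment;
    months += 1;
  }
  return months;
}

var T = continuousPayoffYears(50000, 0.05, 500);
var n = simulatedPayoffMonths(50000, 0.05, 500);
console.log('continuous: ' + T.toFixed(1) + ' years; ' +
            'simulated: ' + n + ' months (' + (n / 12).toFixed(1) + ' years)');
```

Both come out at a little under 11 years, i.e. the better estimate one would get with the help of a calculator, and they confirm the qualitative point: with these numbers the repayments comfortably beat the interest, so the balance does eventually reach zero.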
I would guess that an average A’level student would find the first task quite hard, because it involves a bit of thought: it seems that most of the A’level syllabus nowadays consists of learning to do certain algorithmic tasks such as differentiating compositions of basic functions, and not much is devoted to solving problems or to solving the kind of modelling problem that constituted the first stage of the above calculation. But perhaps this is where the use-of-maths A’level would come into its own. One could do a bit less of the pure stuff, but by way of compensation one would learn how to take a real-world problem, transform it into mathematics, and solve the mathematics. I’m not sure I like that idea, but it could perhaps be justified, so let us now have a look at some questions in the sample papers.

Candidates are to be given something called a “data sheet”, which you might think consisted of tables of data that you then had to use your modelling skills to analyse and comment on mathematically. But actually the name “data sheet” is rather misleading: it’s more like a couple of pages with a few mathematical concepts explained. I think the idea is that the data sheet explains the mathematical principles and then your job as candidate is to use the principles. For the question I want to talk about, which is number 2 on this paper, the data sheet is called “Waves as models”. I’ll let you read that if you want, but here’s the question.

2. The article states that, for the case of the simple pendulum, the angle, $\theta$, that the string makes with the vertical, $t$ seconds after release, can be modelled by a function such as $\theta=\theta_0\cos\omega t$.

(a) What does this model suggest for the angle that the string makes with the vertical when it is first released?

(b) For this model, show that $\frac{d^2\theta}{dt^2}=-\omega^2\theta$.

Ah, so my guess was wrong. You aren’t asked to model anything. Instead, you are given the model!
(The equivalent for my mortgage question would be to be told what the formula was for the amount owing at time $t$ and to be asked to draw various conclusions from the formula.) So what exactly are the skills you need to solve the above question? Again, there are two. The first is the ability to solve what in the US are called word problems: this means that instead of being asked to solve something like $x+2=5$ you are given an equivalent wordy problem such as “I have some apples in a bag, put two more in, count them, and find that I have five; how many did I have originally?” When you get used to these, they are rather easy: you just cut out all those stupid words and leave the maths. (However, interestingly, they were found very difficult when I taught them in the US. I was a PhD student at the time and the course was College Algebra 1021 at Louisiana State University, taken by people who would typically not go on to major in mathematics.)

What do we have to do for part (a) above? Well, the underlying maths problem is very simple indeed: what is $\theta$ when $t=0$? To get to that problem, we had to interpret “when it is first released” as “at $t=0$” and we had to interpret “the angle that the string makes with the vertical” as $\theta$. The second of these tasks is trivial, since the question has just said that that is what $\theta$ is, and the first is, well, not very challenging.

On to the second part. It’s telling us to differentiate twice, but it just about qualifies as a word problem because it starts “For this model”. Luckily, we can turn this “word problem” into maths by simply ignoring the words “For this model”. The setter of this question seems to have a touching belief that if the word “model” is splashed around a bit, then candidates are learning how to use mathematics. But they are doing nothing of the kind. They are learning how to solve word problems, and very easy ones at that.
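For what it’s worth, part (b) really is just two differentiations. One can even check the generic identity numerically: for a pendulum model of the form $\theta(t)=\theta_0\cos\omega t$, a central finite difference recovers $\frac{d^2\theta}{dt^2}=-\omega^2\theta$. The values of $\theta_0$ and $\omega$ below are arbitrary illustrative choices, not the exam paper’s own numbers:

```javascript
// Finite-difference check that theta(t) = theta0*cos(omega*t)
// satisfies theta'' = -omega^2 * theta. Constants are illustrative.
var theta0 = 0.1, omega = 4.4, h = 1e-4;
function theta(t) { return theta0 * Math.cos(omega * t); }

var maxErr = 0;
for (var t = 0; t < 2; t += 0.1) {
  // central second difference: (f(t+h) - 2f(t) + f(t-h)) / h^2
  var second = (theta(t + h) - 2 * theta(t) + theta(t - h)) / (h * h);
  maxErr = Math.max(maxErr, Math.abs(second + omega * omega * theta(t)));
}
console.log('max |second derivative + omega^2 * theta| =', maxErr);
```

The discrepancy is at the level of truncation and floating-point error, nothing more.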
(For extreme examples of questions where the word model is used a lot, but plays absolutely no role in the questions, see the Calculus sample paper.) And the irony of it is that you learn how to do word problems in a conventional A’level syllabus, and you learn more mathematics in the process. Even worse, the problems in the use-of-maths sample papers aren’t real word problems: in a word problem you normally have to say something like, “Let’s represent this quantity by $x$.” Even that step seems to be regarded as too challenging on these papers. There are in fact some questions about interest rates and the like on another paper, but the “data sheet” gives you formulae for everything, so that all you are required to do is take the values given to you in the question and plug them into the formula. (The data sheet, by the way, is made available before the exam.) Let’s just think for a moment about whether this is likely to be a valuable life skill. Suppose, for instance, that you have a choice of two long-term savings accounts. One of them has a higher interest rate, but the other one has a bonus if you keep your money in for five years. Luckily you took use-of-maths A’level a few years ago, so you should be in a good position to decide which one to go for. Ah, but you’ve lost your data sheet, and in any case the data sheet didn’t tell you a formula for the amount you end up with when there is a bonus involved. What can you do? There isn’t an obvious internet search: the problem is that you have to think a bit, and unfortunately what you’ve been trained to do is plug numbers into formulae that are just given to you. There are so many ironies to this. Those who propose the use-of-maths A’level will no doubt say that it is not a dumbing down of mathematics A’level (with its out-of-date nineteenth-century syllabus) but rather an equally rigorous exam that tests different skills. They will also say that the syllabus is more interesting and relevant. 
But it is blindingly obvious from the sample papers that it is not testing different skills (except perhaps the skill of understanding what the data sheets, which unfortunately don’t seem to be available in real life, are on about), and is deeply boring, and not even all that relevant to the people who are actually taking the exam, who should be enjoying their last few years of not having to think about mortgages, income tax returns and the like. (Does anyone seriously think that teenagers will be filled with enthusiasm by personal finance, when for adults, who are directly affected by it, it is an awful chore?) A conventional A’level student will do plenty of word problems and more mathematics, and will also solve modelling problems when they do statistics and mechanics. Who will end up better at solving mathematical problems that arise in the real world? You do the math.
{ "pile_set_name": "OpenWebText2" }
He’s the most beautiful being, angel man. With a heart as big as the moon. The most loving, adorable man. I’ve never seen anybody as beautiful as he. Perfect perfect. We’re in love with each other, completely. He’s the strongest leader on earth today, a messenger of Allah. Most beautiful.

Islamic thought has experienced much advancement since the time of Prophet Muhammad (peace be upon him), with the emergence of new schools of thought such as the Mu’tazila, the bewildering world of Sufism, etc. At the same time, a Muslim is inclined to ask the question: Was it true advancement? After all, knowledge, as it were, was complete with the Holy Qur’an. Why search for more? The answer is provided by the Qur’an, as Allah says to the Prophet of Islam, “Say: O Lord! Give me more knowledge!” (Surah 20). The Prophet Muhammad had said, “Seek knowledge even if you must go to China.” Hence, the scientists and scholars of Islam left no stone unturned to seek greater knowledge about the heavens and the earth. The ordinary believers, who may not spend as much time with knowledge as do the saints, scientists, and scholars, were left to answer the essential question for themselves: Who is more correct – the scientist, the scholar, or the saint?

Also in Islam there are various schools of thought teaching dissimilar albeit similar modes of the five daily prayers, e.g. Hanafi, Shafi, etc. To add to the variety of practices from which to choose, the modern-day cosmopolitan youth is confronted by a host of global religions, some of which remind him about truth that is intrinsic to the soul. Allah says in the Qur’an: “Those who believe and those who are Jews, Christians and Sabeans, [in fact] anyone who believes in God and the Last Day, and acts honorably will receive their earnings from their Lord: no fear will lie upon them nor need they feel saddened” (Surah 2). The Qur’an insists on belief in oneness of God.
And so, all saints, scholars and scientists of Islam, in addition to the ordinary believers, have one belief in common: belief in One God, Allah Supreme, Lord of the universe. This is the heart of Islam. Belief in Muhammad as the prophet of Islam, and beliefs in the Day of Judgment as well as the unseen world of angels, are also essential.

Politicians are among the richest people on earth today, virtually selling the lives of the innocent to get richer by the minute. There are serious human rights abuses around the world, thanks to political corruption. And it’s not just Zardari who people think is out to sell Pakistan. We know the United States is extremely corrupt. When a Harvard University professor gets away with rape in the U.S. just because the person complaining of rape is a student, you know the U.S. can be considered equivalent to a jungle in Africa. See http://owurapist.blogspot.com

Here’s God’s take on things:

They say: “If the Mercy-giving had so wished, we would not have worshipped them.” No matter what knowledge they may have about that, they are still merely guessing. Or have We given them a book already which they try to hold on to? Moreover they say: “We found our forefathers following such a community and we are merely being guided along their tracks.” Just the same We have not sent any warner into a town previous to you unless its high-livers said: “We found our forefathers with such a community, and are merely being led along their tracks.” He said: “Even if I should bring you better guidance than what you found your forefathers had?”, they would say: “We reject anything you are sent with!” We have been avenged on them; see what the outcome was for those who denied [it all]! (Qur’an)

Basically, the “high-livers” are too complacent to believe in truth when falsehood is working for them so well in the worldly sense. Are the innocent destroyed with the corrupt? No, provided that they’ve consistently opposed corruption.
They should be praying against the corrupt if nothing else. We have never acted as punishers until We have despatched some messenger: yet whenever We want to wipe out some town, We order its high-livers so they act depraved in it; thus the Sentence about it is proven to be right and We utterly annihilate it. How many generations did We destroy since Noah? Sufficient is it for your Lord to be Informed, Observant of His servants’ sins! (Qur’an) Now both the Christians and the Muslims are expecting Jesus Christ or the Mahdi to emerge during this time. Do you think he’d side with the corrupt here or there? Would he shake hands with Obama or Bush, both engaged in perpetual war and under whose leadership countless innocents are killed day after day? Or would he side with the likes of Zardari or bin Laden? We hurry good things up for them, yet they do not even notice it! Those who feel anxious out of awe for their Lord, and those who believe in their Lord’s signs, and those who do not associate anything with their Lord, and those who give away anything they may give while their hearts feel wary lest they should return to their Lord; [all] those compete in doing good deeds and they will soon attain them. We only assign a soul something it can cope with. Before Us lies a Book which speaks up for Truth; they will not be dealt with unjustly. Instead their hearts are full of excitement because of this. They have other deeds besides those which they are committing, so that whenever We seize their high-livers with torment, just imagine how they bellow! “Do not roar [so loud] today; you will not be supported by Us. My signs have already been recited to you while you proudly turned on your heels away from it, sitting up nights to chatter on and on about it.” (Qur’an) God says He would not argue with the dumb. 
We have sent no town a warner unless its high-livers said: “We are disbelievers in what you have been sent with.” They say: “We have more wealth and children, and will not be tormented!” SAY: “My Lord extends sustenance to anyone He wishes and budgets it, even though most men do not realize it.” It is not your wealth nor your children that will bring you close to Us in patronage; only someone who believes and acts honorably (will do so). Those will have a double reward because of what they have done. They will feel secure in mansions while those who were attempting to thwart Our signs will be paraded forth into torment. SAY: “My Lord extends sustenance to anyone He wishes among His servants and He budgets it out. He will compensate you for anything you have spent since He is the best Provider.” Some day He will summon them all together; then He will say to the angels: “Are those the ones who were worshipping you?” They will say: “Glory be to You! You are our Patron rather than they. Instead they have been worshipping sprites; most of them even believe in them!” (Qur’an)

Corrupt governments, political leaders and/or rapist professors from Harvard University or elsewhere do not wish to believe in the devil. They are devils. In fact, a book called “UFO’s in the Qur’an” claims that the U.S. government is being run by aliens or evil spirits. That’s Skull & Bones. They tell their members they’d always be protected from corruption charges. So a Harvard University professor, just because he goes to Harvard, is told he’d always be protected from rape accusations, and no matter what crime he commits, it’s going to be ok. But this is not the way God’s government works. This is not something Jesus would stand for. Corrupt people are going to hell, and nothing can change this fact. It’s written on their foreheads, it’s their destiny. Hell? Well, they’d see it when they get there. Jesus is not their Savior. Salvation is only for good people.
Some readers may remember the 1961 film “The Day the Earth Caught Fire”. It could be viewed as the original “climate alarmist” film, as it contains all of the plot elements of our current climate alarmism scenarios: exaggerated images of a dying planet, a mainstream media newspaper reporter, technology that is feared, the Met Office, and last but not least, junk science. A new study out of MIT predicts “a 90% probability that worldwide surface temperatures will rise at least 9 degrees by 2100.” This is more than twice what was expected in 2003. The Telegraph reports “Global warming of 7C ‘could kill billions this century’. Global temperatures could rise by more than 7C this century killing billions of people and leaving the world on the brink of total collapse, according to new research.” A similar 2003 study had predicted a mere, but still significant, 4 degree increase in global temperatures by 2100, but those models weren’t nearly as comprehensive, and they didn’t take into consideration economic factors. So what has changed since 2003 to cause the scientists at MIT’s “Centre for Global Climate Change” to believe the world is going to boil over this century and send billions of us directly to a toasty demise similar to our featured movie? Antarctic ice has broken the record for greatest extent ever recorded: http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/current.area.south.jpg January 2008 broke the record for the most snow-covered area ever measured in the Northern Hemisphere: http://climate.rutgers.edu/snowcover/png/monthlyanom/nhland01.png I added a red line below showing the reported projected rise in temperatures from the MIT models, compared with the actual observed temperature trends since the previous 2003 report.
Their projections show a correlation of essentially zero. Given that the observed trends are exactly opposite what the MIT models have predicted, one might have to ask what they have observed since 2003 to more than double their warming estimates, and where their 90% confidence value comes from. The study, carried out in unprecedented detail, projected that without “rapid and massive action” temperatures worldwide will increase by as much as 7.4C (13.3F) by 2100, from levels seen in 2000. This study has a strong scent of GIGO (garbage in, garbage out). MIT has one of the world’s preeminent climatologists, Dr. Richard Lindzen, in its Department of Earth, Atmospheric and Planetary Sciences. I wonder if the scientists at the “Centre for Global Climate Change” checked with him before firing this remarkable piece off to the press? During the Phanerozoic, CO2 levels have at times been more than 1,500% higher than present, but temperatures have never been more than 10C higher than present. So how does a projected 30% increase in CO2 produce a 7C temperature rise in their models? During the late Ordovician, there was an ice age with CO2 levels about 1000% of current levels. Hopefully the newspaper headlines don’t accurately represent the content of the article.

The United States should not have used the atomic bomb to stop the Japanese militaristic threat during World War II, seeing that it was unnecessary to cause untold suffering unto hundreds of thousands of people in Hiroshima and Nagasaki. Seeing that the United States is nowadays a champion of nuclear disarmament, it would be ironic if the nation were to continue to agree with the logic of using an atomic bomb to end war.
Certainly, the atomic bombs used during World War II – Little Boy (the bomb used on Hiroshima), and Fat Man (detonated over Nagasaki three days after the Hiroshima bombing) – were deadly, to say the least.[1] The bombs used by the United States served to terrify the Japanese people, and therefore ended the war more quickly than previously believed possible. However, today the United States knows that the cost paid by the Japanese people for the sake of ending a war was enormous. It should not have happened. What if it happens in our homeland? The photographs that have arrived from Hiroshima and Nagasaki are enough to convince us that the bombing was actually unnecessary (See Appendix). If Mr. Truman were to be asked his opinion today, he might agree, although he might add that it was necessary to check the potency of those bombs for the world to stop using them altogether. While it is a fact that the world has stopped using atomic bombs after the Hiroshima-Nagasaki bombing, it remains true that it was unnecessary to use the atomic bombs in the first place. It was unnecessary because we knew all along that those bombs are powerfully dangerous. Indeed, the Hiroshima-Nagasaki bombings were a crime against humanity. Needless to say, it is essential to stop such crimes. Thankfully, still, the U.S. has realized its mistakes and today acts as a spokesperson for ‘freedom from nuclear proliferation and explosions,’ which Mr. Truman had thought were actually equivalent to the harnessing of universal energies, if not the powers of God, as of the Big Bang. Even as the atomic bombing of 1945 acted as a revolution for humanity, and the marriage between technology and human beings – it was a “terrible” disaster. In the words of the then-unapologetic Mr. Truman, the extent of the disaster was also expected: We have discovered the most terrible bomb in the history of the world.
It may be the fire destruction prophesied in the Euphrates Valley Era, after Noah and his fabulous Ark. Anyway we “think” we have found the way to cause a disintegration of the atom. An experiment in the New Mexico desert was startling – to put it mildly. Thirteen pounds of the explosive caused the complete disintegration of a steel tower 60 feet high, created a crater 6

The experiment was not essential to conduct upon the lives of countless civilians who ended up losing their existence to Mr. Truman’s whim. The United States should simply have shown the New Mexico desert example to the Japanese, and warned them thereby. Science allows for such examples to serve as warnings. In any case, Mr. Truman was successful in that he managed to warn the Japanese all right.[3] As a matter of fact, the Americans promised the Japanese more ruin to come from the air if the latter failed to concede subsequent to the Hiroshima explosion. Would it not have been reasonable for the U.S. to have waited more than three days before it also bombed Nagasaki – for the effects of the bomb to show up in greater intensity in Hiroshima, or for the Japanese to simply look upon their damages and surrender? The effects of the bomb were evident from the very first day.[4] Unfortunately, the Japanese did not concede until after the Nagasaki bombing.[5] According to the Americans, by bombing Hiroshima and Nagasaki, they terrified the Japanese into surrender. However, it can reasonably be argued that the United States should have used its actual scientific testing of the nuclear weapon (in the New Mexico desert) to scare the Japanese instead. The U.S. could have easily reported the scientific testing in the Japanese press. Furthermore, the U.S. should not have bombed Nagasaki after Hiroshima, seeing that the effects of the bomb in Hiroshima were horrific enough. The United States is a nation of people standing by God through their world-famous Declaration of Independence and Constitution.
It is quite obvious from news reports about the Hiroshima bombing alone that the attack called for the help of God. In actual fact, the attack was a miserable failure for the United States because it stopped all sense of normal life in Hiroshima in the twinkling of an eye. Quite similar to 9/11, the Hiroshima bombing was enough as a warning, even if we were to give the U.S. the benefit of the doubt by assuming that the scientific experiment could not have been enough of a warning for the Japanese. The U.S. should not have gone forward with the Nagasaki bombing after inflicting a disaster similar to 9/11, but bigger in magnitude than 9/11. It was an inhumane mistake. Fortunately, however, the United States is now wise enough to avoid such disasters in the present and the future. The world knows that the nation is capable of inflicting such a disaster, and other countries are developing similar military power in a race to rule the world. All the same, everybody now understands that it is atrocious to use atomic bombs on other human beings like unto ourselves. It is not only inhumane, but also stupid to use nuclear weapons when scientific experiments (including Hiroshima) have clearly shown the immensity of the damage that these weapons may inflict. It is, moreover, a terrible mistake to be thinking of developing such weapons. Even though they serve as good warning measures, or may be later used in an ice age, atomic bombs are atrocious to employ on people. Lastly, it is essential to realize that it is never necessary to be violent and horrible. Rather, the concepts of peace, love, and brotherhood – all emotional appeals – plus numberless varieties of logical appeals could keep us on the paths of peace and prosperity. In fact, the relationship between the U.S. and Japan as it exists today is evidence that the realization has hopefully occurred.

Of course, Prophet Muhammad, peace on him, never said people should be killed for abandoning Islam.
Rather, the Qur’an says, “There’s no compulsion in religion.” That’s the truth. It’s another fact that illiterate so-called Muslim men invented countless sayings in the name of Muhammad. And, what you do is to torture, oppress and kill innocent Muslims because you can’t find bin Laden and buddies, previously the buddies of Bush, etc. You imagine that all innocent Muslims are on the side of illiterate radicals. So you become as unjust as the bin Ladens. Hamas brainwash videos. (Note: Jews are “People of the Book” in the Holy Koran; Zionists do not follow Mosaic law; they also kill Orthodox Jews).
Elgiva cucularia Elgiva cucularia is a species of fly in the family Sciomyzidae. It is found in the Palearctic. Larvae of E. cucularia are predators of aquatic, pulmonate snails in the families Lymnaeidae, Physidae, and Planorbidae. References External links Images representing Elgiva cucularia at BOLD Category:Sciomyzidae Category:Insects described in 1767 Category:Muscomorph flies of Europe Category:Taxa named by Carl Linnaeus
The Rutledge Honorary History Club celebrated its 75th anniversary (1929-2004) at the history and political science department’s Annual Awards Banquet on May 6, 2004. The banquet speaker was Dr. Duane Bolin, Professor of History at Murray State University in Murray, Kentucky, whose talk was entitled “In Search of Adolph Rupp: Fans’ Delight, Critics’ Villain, and Historians’ Challenge.” Sixty people, including faculty, students, alumni, and friends of the university, attended the celebration, which was planned by the club’s president, Kari Barnhart, and its faculty advisor, Dr. Terry Lindley. The club’s origins date back to November, 1929, when Professor Lovick DeWitt Rutledge and his wife, Rosa Dyer Rutledge, met with a select group of students to discuss the possibility of a history organization. Before November was over, the club adopted a constitution and by-laws, and the motto: “We seek historical truth and shun historical error.” It also adopted the following purpose: “To become better acquainted with the field of history and to become fully conscious of the place that knowledge of history occupies in the lives of educated people.” Initially the club was called the Union University History Club, but it voted to change its name to the Rutledge History Club in 1942 to honor Mr. Rutledge, who had passed away two years earlier. In the 1970s and 1980s the club played an integral role in promoting and administering National History Day competitions and Annual History Contests for high school students. More recently, it raised more than $400 for the construction of the World War II Memorial in Washington, D. C. It currently makes an annual financial contribution toward the awards that the department offers to students in a history research paper contest known as the Dr. James Alex Baggett History Research Competition. Dr. Duane Bolin of Murray State University presents his talk on Adolph Rupp at the department’s awards banquet; Dr. Terry Lindley is seated to Dr. 
Bolin’s right.
Game Green piggy online In this game, you must aim your shots to destroy as many of the characters moving toward you as possible. Once you start the game, you are equipped with a rifle. Using its sight, you will need to hit the largest number of targets. In order to pass to the next level, you need to hit the greatest number of the green characters that will be located in front of you.
friendship ˈfrɛn(d)ʃɪp/ noun the emotions or conduct of friends; the state of being friends. a relationship between friends. Friends. It's a beautiful thing to have friends. Right? A very precious bond between humans. However, 'friends'... this word or this thing, at the same time, is complicated. It acts as a safeguard, but also a spear. A heart warmer, and also a heart breaker. A word that may

As Naeun was walking home, her eyes widened when she saw a ''room for rent'' banner in front of her house. What is her sister up to? What could possibly happen in a big house with 14 different people, each with a different reason for living in a sharehouse? Love? Friendship? Family?

" I will make you mine no matter what and I will do anything to destroy your relationship with him! " - Suho " Please Suho..... forget me and move on " - Chorong " No matter what I'll be by your side and I'll be the best husband

They meet again after graduating 7 years ago. In high school, she hates him. Not totally. But just that particular side of him. He hates her too. But not totally. Because she hates him at times, while other times she's nice. He doesn't know why. He just ends up teasing her whenever they meet. However... it's not at all so simple. Secrets are kept from each other. Truths are being concealed because of the fakes. Nam Woohyun. Park Chorong.

Myungsoo, Eunji, Woohyun and Naeun have been the best of friends since they were children. They always hang out together and eat together, as if they're inseparable. But one day, Eunji distanced herself from them, followed by Myungsoo, L

They weren't opposites. They weren't enemies. They were capable, but no one believed they could last. He, Nam Woohyun, is the type to be dragged down by expectations. She, Park Chorong, was the type to go beyond expectations. So what happens when life decides to take them on a rollercoaster ride? For the sake of their relationship, will they fight or accept their 'reality'? It's New Year's Eve.
You stare at the sky and even though everything looks the same, something in your heart tells you things are going to be different tonight. Listen to Tom Odell's "Constellations": https://www.youtube.com/watch?v=q1jjhp8vtoM

"Sometimes I wonder HOW I ended up with you... you were the worst person I've ever met... but at the same time, you were the best thing that has happened to me.... **COUGH** excluding your annoying desperate remarks **COUGH** but thinking back... I guess you DID make the first move..." - Chorong

Son Naeun is a queenka at her college, while Kim Myungsoo is the kingka. Things don't go well between them, or between their squads: Naeun with Apink and Myungsoo with Infinite. Arguments and fights bring them closer. Let's see how they can overcome the growing feelings between them.
The effect of misonidazole on some physiologic parameters in mice. The physiologic effects of misonidazole (Ro-07-0582) were studied in BALB/cKa mice injected i.p. at 0.5 to 1.5 mg/g b.wt. A 2-4 degree C reduction of body core temperature was observed in unanesthetized mice; the duration and degree of effect were dependent on dose. Normal core temperatures were restored when the serum level of misonidazole had fallen to 0.5 mM (100 micrograms/ml). Misonidazole (1 mg/g) produced a rapid postinjectional drop of heart rate (40%), respiration (45%), and body core temperature (4 degrees C); all gradually returned to preinjection values 6 to 8 hr later. In addition, misonidazole administration (1 mg/g) enhanced the overall effect on body temperature induced by hexobarbital anesthesia by a factor of approximately 3. These results are discussed in relation to the use of mouse model tumor systems to give an estimate of the magnitude of the cytotoxic effect of misonidazole expected in humans.
Introduction {#s1}
============

Tuberculosis (TB) is caused by the pathogenic species *Mycobacterium tuberculosis (Mtb)*; together with human immunodeficiency virus (HIV/AIDS) infection, TB is among the most prevalent and severe of the infectious diseases worldwide. In 2019, an estimated 10 million people developed active tuberculosis, in association with 1.6 million deaths ([@B1]). Infection with *Mtb* triggers an immune response; however, *Mtb* can survive and grow by circumventing host immune detection. One of the pathological characteristics of successful infection with *Mtb* is the formation of granulomas, which are organized cellular structures that include a variety of innate and adaptive immune cells surrounding the *Mtb*-infected phagocytes ([@B2]--[@B5]). During granuloma formation, intricate host-*Mtb* interactions occur at the infectious site, and this pathogen can escape various host immune responses, which ultimately prevents *Mtb* elimination by these systems. Once *Mtb* enters the host, its cell wall components and proteins are detected by Toll-like receptors (TLRs), primarily by TLR2 and TLR4. *Mtb* is engulfed by professional phagocytic cells such as macrophages, dendritic cells (DCs), or neutrophils, and becomes incorporated into the subcellular organelle formed by fusion of the phagosome and lysosome, the phagolysosome; however, *Mtb* is able to manipulate the endocytic pathway by suppressing fusion of the phagosome containing the bacteria with lysosomes. Infected macrophages synthesize and release both inflammatory and antimicrobial genes and molecules, including interleukin (IL)-1β, IL-6, IL-12, tumor necrosis factor (TNF), inducible nitric oxide synthase/nitric oxide synthase 2 (iNOS/NOS2), and chemokines, which activate both the innate and adaptive immune systems.
Activated immune cells secrete protective molecules into the extracellular space to promote recruitment of other immune cells to form a granuloma ([@B4], [@B6]). Interestingly, endogenous proteins expressed by *Mtb* serve to perturb the formation of the phagolysosome, permitting its survival and proliferation within macrophages. To prevent excessive lung damage during *Mtb* infection, *Mtb* also elicits the production of protective factors that promote its survival, including anti-inflammatory mediators such as IL-4, IL-10, IL-13, and transforming growth factor β (TGF-β) ([@B7]--[@B9]), and several human TB studies show that these factors are increased in active TB patients ([@B10], [@B11]). These immunosuppressive factors play key roles in limiting an effective immune defense against *Mtb* ([@B12], [@B13]). *Mtb* will persist and exacerbate pathophysiological manifestations within the granuloma; this will ultimately result in progression of disease and dissemination to other hosts ([@B5], [@B14]). As a major focus of this disease process, mycobacterial granulomas have been the subject of intense scrutiny, mainly focused on mechanisms of formation, function, maintenance, and evolution. Recently, there has been an increasing appreciation of the important relationship that exists between essential metabolism and immune cell function. Metabolic reprogramming in immune cells, a phenomenon known as immunometabolism, focuses on unique cellular functions that are essential for the immune response. During TB infection, host cells undergo profound metabolic change, which results in differential control of various cytokines and chemokines associated with inflammation, clearance, inhibition, and progression of *Mtb* infection ([@B15], [@B16]). Specifically, a shift in the use of pathways promoting glucose and lipid metabolism can be an important feature directing host cell function to promote mycobacterial survival within the granuloma ([@B17]).
At homeostasis, cells in a "resting" condition utilize oxidative phosphorylation (OXPHOS) to produce ATP from NADH and FADH2 by facilitating the transfer of protons and electrons. Cells typically switch from OXPHOS to glycolysis in order to generate ATP under oxygen-depleted or hypoxic conditions ([@B18]). Similarly, glycolysis is the main form of metabolism in immune cells that promote the inflammatory response in the immune system. This observation--that immune cells utilize glycolysis even in the presence of adequate concentrations of oxygen (i.e., aerobic glycolysis)--is known as the "Warburg effect." To date, the Warburg effect has been explored primarily with respect to cancer metabolism. Although aerobic glycolysis generates fewer ATP molecules per cycle than does OXPHOS, this pathway is capable of the rapid generation of ATP required by immune cells. Additionally, aerobic glycolysis provides a number of specific precursors, including nucleotides, amino acids, and lipids ([@B19]). Because metabolic reprogramming is essential for immune cell function, studies that explore this phenomenon also provide new insight into the relationship between host immune cells and infection with *Mtb*. Furthermore, predisposing factors for TB, including diabetes and HIV, are also related to immunometabolism in the setting of TB pathogenicity. Diabetes mellitus (DM) is a major risk factor for developing active TB ([@B20]--[@B22]). In DM, innate immune cells undergo activation, releasing cytokines, recruiting neutrophils, and upregulating T cell activation and antigen recognition ([@B23], [@B24]). The metabolism of DM is characterized by increased glucose production and impaired glucose uptake. Expression of glucose transporters and glycolytic enzymes is elevated in DM ([@B25]). In DM, high glucose levels increase IL-10 production and impair macrophage phagocytic ability, promoting a better milieu for the survival and proliferation of *Mtb* ([@B26], [@B27]).
Additionally, HIV is another pathogen associated with the pathogenicity of TB ([@B28]--[@B30]). In HIV-1-infected primary CD4^+^ T cells, glycolytic metabolism is induced, with a heightened pro-inflammatory response and increased production of virus ([@B31], [@B32]). Interestingly, glycolytic metabolism is regulated by HIV-1 infection in macrophages, alleviating the Warburg effect ([@B33]). These factors promote the activation of TB by reprogramming metabolism. A variety of antibiotics have been introduced to promote eradication of *Mtb* infection, including 6-9 month courses of isoniazid, rifampicin, ethambutol, and pyrazinamide. However, the emergence of multidrug-resistant TB (MDR-TB) and extensively drug-resistant TB (XDR-TB) has become a major challenge to designing effective treatments and to eradication of this disease ([@B34], [@B35]). Among the approaches to this challenge, host-directed therapy (HDT) has been introduced as a means to potentiate and to amplify the effectiveness of current treatments used for TB ([@B36]). A clear understanding of the molecular interactions between host cell metabolism and accommodations made to *Mtb* may provide new strategies to combat infection. Here we review the current understanding of the metabolic relationship between the host and the *Mtb* pathogen. We also suggest several new strategies that may enhance host metabolic pathways and thereby promote protective antimicrobial functions in the setting of TB infection.

Metabolic Reprogramming in TB {#s2}
=============================

Warburg Effect in Immune Cells
------------------------------

Immune cells provide critical protection and maintain homeostasis in the mammalian host. There are currently many studies suggesting that the functions of immune cells are largely reliant on specific aspects of host metabolism.
These studies, which have generated a field known as immunometabolism, have provided us with a new focus for understanding how and why immune cells exist or persist in a specific metabolic state in order to support or direct functional changes. Several recent reports suggest that different metabolic signatures have a direct impact on specific effector functions characteristic of the innate and adaptive immune systems ([@B37]). As such, among the primary functions of immune cells, there are those that generate an inflammatory response, actions typically undertaken by M1-polarized macrophages, DCs, neutrophils, and effector T cells, and those that promote an anti-inflammatory response, which include M2-polarized macrophages as well as regulatory and memory T cells. The basic metabolic profiles of these cells differ significantly from one another. Inflammatory immune cells generate energy in the form of ATP mainly via glycolytic metabolism; by contrast, immune cells that promote anti-inflammatory activities generate ATP via oxidative phosphorylation and fatty acid oxidation ([@B38]--[@B43]). These observations have been best characterized for polarized macrophages. The predominant phenotypes of macrophages are known as M1 and M2 ([@B44], [@B45]). M1 macrophages, activated by lipopolysaccharide (LPS) and IFN-γ, promote pro-inflammatory and antibacterial functions in the immune system, and they produce nitric oxide (NO) and reactive oxygen species (ROS), which are fundamental components of the pathways used to eradicate bacteria. The main metabolic pathway used by these cells is glycolysis, which results in rapid production of ATP via inhibition of the tricarboxylic acid (TCA) cycle and OXPHOS in mitochondria; this is critical because M1 macrophages require rapid generation of ATP to activate inflammation.
By contrast, M2 macrophages promote anti-inflammatory responses and tissue repair; these cells mainly utilize OXPHOS and fatty acid oxidation in order to generate ATP; this takes place via efficient pathways localized in the mitochondria ([@B46]--[@B51]). In T cells, the metabolic state is reprogrammed according to T cell subset. Naïve T cells mainly use OXPHOS for generating energy. Upon TCR stimulation, glycolytic metabolism is upregulated for differentiation into activated T cells. Th1, Th2, and Th17 effector cells mainly depend on aerobic glycolysis, while regulatory and memory T cells use fatty acid oxidation and OXPHOS for differentiation and function ([@B52], [@B53]). Mammalian target of rapamycin (mTOR) and AKT signaling is essential for regulating the metabolism of T cells and cytokine responses ([@B54]). Recently, cyclophilin D (CypD), a factor related to necrosis, has been shown to regulate metabolic state and function in T cells ([@B55]). Pro-inflammatory immune cells generate ATP in high concentrations via glycolysis even when functioning in aerobic conditions; the phenomenon of aerobic glycolysis is also known as the "Warburg effect" ([@B56]). Hypoxia and inflammation are inherently linked to one another; upon activation, immune cells undergo considerable metabolic reprogramming to sustain energy needs and thus switch to predominantly aerobic glycolysis. Hypoxia-inducible factor 1 (HIF-1), the main mediator of the Warburg effect, is expressed in response to hypoxia and controls expression of numerous glycolytic enzymes. HIF-1 has two subunits, α and β; regulation of HIF-1 is dependent on the α subunit. Post-translational regulation of HIF-1 is modulated via the expression and stability of HIF-1α ([@B56]--[@B58]). Members of the nuclear factor-κB (NF-κB) family of transcription factors comprise the signaling pathway that is most closely involved in Hif-1α/HIF-1A expression ([@B59], [@B60]).
Under conditions of physiologic oxygenation, prolyl hydroxylases (PHDs) hydroxylate HIF-1α and target it for proteasome-mediated degradation. Factor inhibiting HIF (FIH) is an asparaginyl hydroxylase that also determines the level of active HIF-1α. Overall, hypoxia-inducible genes encode proteins involved in a myriad of cellular pathways that mediate cell survival, apoptosis, erythropoiesis, angiogenesis, and glucose metabolism, and that regulate acid-base balance ([@B61]). HIF-1α is expressed in primary innate immune cells, including macrophages, DCs, neutrophils, and Th17 cells. Additional roles for HIF-1α in promoting macrophage differentiation and function have also been demonstrated. Most notably, HIF-1α-mediated metabolic reprogramming plays a significant role in modulating macrophage polarization toward the M1 or M2 phenotype ([@B62]).

Glycolysis Metabolism in TB
---------------------------

When the host is infected by bacteria, immune cells are activated; the characteristic immune responses occur concomitantly with a switch to glycolytic metabolism ([Figure 1](#F1){ref-type="fig"}). Several recent studies focused on transcriptome data from mouse and rabbit lung, as well as granulomas from the lungs of TB patients, suggest that the metabolic state of the TB-infected host includes modulation of glucose metabolism ([@B63]--[@B66]). The general metabolic characteristics of TB infection include enhanced expression of genes related to the Warburg effect, including HIF-1α, glycolytic enzymes, the pentose phosphate pathway, and H^+^-ATPase. Additionally, ^1^H-NMR-based metabolomics revealed increased accumulation of lactate due to increased levels of glycolysis in the lungs of *Mtb*-infected mice ([@B67]). Likewise, host immune cells responded to *Mtb* infection with increased expression of pro-inflammatory and antimicrobial-related genes associated with the Warburg effect.
These results highlight the importance of metabolic reprogramming due to glycolysis and its relationship to protection against *Mtb* infection. Furthermore, analysis of the transcriptomes of bone marrow-derived macrophages (BMDM) infected with one of two clinical strains of *Mtb* (the immunogenic strain CDC1551 or the hypervirulent strain HN878) revealed elevated expression of genes associated with the Warburg effect. Given that these two clinical strains are known for differential activation of immune responses during the course of BMDM infection, different metabolic responses were anticipated ([@B64]). Interestingly, BMDMs infected with each strain displayed upregulation of genes encoding enzymes associated with the Warburg effect together with HIF-1α-associated signaling, although specific differences were observed. Of note, at 6 h post-infection, the induction of the gene encoding 6-phosphofructo-2-kinase/fructose-2,6-bisphosphatase 3 (PFKFB3), a member of the phosphofructokinase (PFK)-2 family, was more prominent in CDC1551-infected BMDMs ([@B65]). *Pfkfb3* has the highest activity among the PFK-2 members, and fructose-2,6-bisphosphate (F-2,6-BP), the product of Pfkfb3-mediated phosphorylation, is an essential component of the regulation of glycolysis ([@B68]). CDC1551-infected BMDMs, in a state of elevated glycolysis, respond with a vigorous early pro-inflammatory response. By contrast, relatively limited activation of the Warburg effect together with high levels of glucose uptake were observed in HN878-infected BMDMs. Furthermore, HN878 infection of BMDMs may result in dysregulated host cell lipid metabolism. Specifically, one study comparing gene expression in response to *Mtb* H37Ra or H37Rv infection of human alveolar macrophages revealed strain-specific differences. Gene expression associated with inflammation, general metabolism, and lipid metabolism was downregulated in H37Rv-infected macrophages ([@B69]).
As suggested by the responses to infection with HN878, a virulent strain can have an impact on host metabolic gene expression by downregulating inflammatory responses, resulting in diminished inflammation and prolonged *Mtb* survival. Another study compared the metabolic states elicited by macrophage challenge with *Mtb*, with the vaccine strain *M. bovis* BCG, or with killed *Mtb*. Each challenge promoted a unique pattern of energy modulation, as determined by XF (extracellular flux) analysis. Total metabolism in response to challenge with live *Mtb*, including glucose utilization and OXPHOS, was lower than that observed in response to BCG or dead *Mtb* ([@B70]). CD8^+^ T cells showed similar results upon *Mtb* or BCG infection: RNA-seq revealed that glycolytic metabolism was upregulated by *Mtb* challenge in both the early and late phases. Surprisingly, *Mtb* triggered mitochondrial dysfunction, which downregulated OXPHOS metabolism while upregulating mtROS, whereas metabolism recovered upon BCG challenge ([@B71]). Thus, infection with live, virulent *Mtb* decelerated the shift to glycolytic and OXPHOS bioenergetics, and thereby limited the development of inflammatory effector functions.

![Metabolic reprogramming in *Mtb*-infected immune cells. *Mtb* infection of the host is accompanied by upregulation of glycolysis and lactate production. The increased, HIF-1α-induced Warburg effect enhances expression of glycolytic genes. In contrast, the TCA cycle and oxidative phosphorylation (OXPHOS) are downregulated. Dysregulation of the TCA cycle leads to accumulation of several intermediates, such as succinate and itaconate. Additionally, breakdown of OXPHOS increases NO and ROS levels. Blue, increased expression/level; Red, decreased expression/level.](fimmu-11-01790-g0001){#F1}

The switch to glycolytic metabolism results in the accumulation of several TCA intermediates that themselves function as metabolic signals linking metabolism and immunity ([Figure 2](#F2){ref-type="fig"}).
Succinate, a prominent TCA intermediate, drives IL-1β production, inhibits the production of anti-inflammatory cytokines, and enhances HIF-1α activity by inhibiting HIF-1α prolyl hydroxylases ([@B72]--[@B74]). The succinate-induced pro-inflammatory response is directly dependent on the activity of succinate dehydrogenase (SDH). Inhibition of SDH activity via hydrolysis of dimethyl malonate to produce malonate results in attenuation of LPS-induced IL-1β production, and likewise a boost in IL-10 production, in BMDMs generated from C57BL/6 mice ([@B75]). In *Mtb*-infected murine macrophages, *Sdh* expression is downregulated; this leads to the induction of HIF-1α, the Warburg effect, and characteristic pro-inflammatory responses ([@B76]). Itaconate, a metabolite derived from the TCA cycle intermediate cis-aconitate, also regulates SDH activity in C57BL/6 BMDMs ([@B77], [@B78]). Breakdown of the TCA cycle results in downregulation of mitochondrial isocitrate dehydrogenase 2 (*Idh2*) accompanying the formation of itaconate. Aconitate decarboxylase 1 (*ACOD1*), also known as immune-responsive gene 1 (*Irg1*), encodes the enzyme responsible for generation of itaconate. *ACOD1* is upregulated in *Mtb*-infected murine macrophages and lung tissue. Itaconate has antimicrobial functions via its capacity to inhibit isocitrate lyase, the essential enzyme in the glyoxylate shunt that is critical for bacterial growth. Itaconate inhibits SDH activity, which results in the accumulation of succinate. Additionally, itaconate modulates pro-inflammatory responses in macrophages; *Irg1*^−/−^ BMDMs from C57BL/6 mice maintain higher HIF-1α mRNA and protein levels, and produce more pro-inflammatory cytokines and antimicrobial factors, including IL-6, IL-12, IL-1β, and NO, in response to lipopolysaccharide (LPS)-mediated activation ([@B79]).
Thus, itaconate may be a critical link between the Warburg effect induced by *Mtb* infection and the generation of anti-inflammatory responses that prevent damage to host cells.

![Process of the immune response and metabolic reprogramming in *Mtb*-infected immune cells. After *Mtb* infection, inflammatory signaling is activated by TLR2 or TLR4. Metabolism switches to aerobic glycolysis, mediated by HIF-1α, which upregulates glycolytic enzymes. Increased glycolysis is related to upregulation of pro-inflammatory cytokines and antimicrobial effectors. PPARγ upregulates lipid synthesis genes for the formation of lipid droplets, which are exploited by *Mtb* for survival and growth. Blue, increased expression/level.](fimmu-11-01790-g0002){#F2}

Upregulated expression of HIF-1α, the enhanced Warburg effect, and the antimicrobial response to *Mtb* infection of host immune cells are all linked to the actions of the glycolytic regulatory protein pyruvate kinase M2 (PKM2). Expression of PKM2, one of the two Pkm/PKM gene products, is upregulated in response to macrophage activation. In the cytoplasm, PKM2 is maintained in an enzymatically inactive state via its phosphorylation; the PKM2 dimer is transferred into the nucleus, where it interacts with HIF-1α to activate target genes, including those encoding glycolytic enzymes and IL-1β. In LPS-activated macrophages, small molecules such as TEPP-46 modulate PKM2 activation by preventing PKM2 translocation into the nucleus; consequently, this results in a diminished Warburg effect and limited production of IL-1β. Inhibition of PKM2 translocation also promotes production of IL-10 and a decreased antimicrobial response in an *S. typhimurium* infection model ([@B80]). In transcriptome analysis studies, upregulation of Pkm2/PKM2 was detected in *Mtb*-infected murine macrophages and in mouse lung tissue ([@B65]).
These results suggest that, similar to itaconate, PKM2 promotes the HIF-1α-mediated Warburg effect and the associated antimicrobial response during *Mtb* infection. CypD, a mitochondrial matrix protein, is a regulator of metabolism in *Mtb* infection via upregulation of mtROS in T cells. CypD-deficient T cells showed higher OXPHOS than wild-type T cells and were more susceptible to *Mtb* ([@B55]). In summary, metabolism in *Mtb*-infected host cells undergoes a switch from OXPHOS to glycolysis, generating a Warburg effect. The HIF-1α-induced Warburg effect in the setting of TB infection plays an essential role in promoting upregulation of pro-inflammatory cytokine and antimicrobial effector gene expression, both factors underlying the acute immune response. However, host immune responses differ depending on the virulence or avirulence of the infecting *Mtb* strain. How and why immune responses are modulated by different strains of *Mtb* is not fully understood.

Arginine Metabolism in TB
-------------------------

Arginine is the key substrate for production of NO and other reactive nitrogen species, and also serves as a substrate for arginase. Arginine plays a distinct role in the host immune response. iNOS drives one pathway, which results in the generation of NO; the other pathway is the arginase-mediated production of ornithine ([@B16]). iNOS is one of three NO synthase enzymes and the major isoform involved in immune cell functions. iNOS is inducible in immune cells, and is a prominent antimicrobial effector molecule produced by activated macrophages ([@B81]). The balance of arginine metabolism between the two competing pathways constitutes an important regulatory mechanism that modulates the polarization states of M1 and M2 macrophages. In M1 macrophages, arginine is in demand for protein synthesis, for production of NO, and for its antimicrobial roles; by contrast, in M2 macrophages, arginine is used for production of polyamines and proline.
The iNOS pathway is in direct competition with the arginase pathway ([@B82], [@B83]). Two arginase isoforms exist in cells: cytosolic arginase ARG1 and mitochondrial arginase ARG2 are encoded by different genes and have different subcellular distributions ([@B84], [@B85]). ARG1 is mainly detected in murine myeloid cells, DCs, and granulocytes. ARG1 inhibits NO production by iNOS/NOS2, which is among the mechanisms used by *Mtb* for immune evasion. *Mtb*-infected *Arg1* conditional gene-deleted mice were characterized by a diminished bacterial burden; Arg1-deficient macrophages were more capable of killing *Mtb* compared to their wild-type counterparts ([@B86]). ARG1 and iNOS are distributed in distinct patterns in human TB-associated granulomas; expression of iNOS was highest in the central region, whereas ARG1 was more prominent at the periphery ([@B87]). The role of ARG1 in mediating immune cell function is directly dependent on the stage of *Mtb* infection. At initial stages of infection, the *Mtb* pathogen takes advantage of ARG1 activity, which limits macrophage immunity via competition with iNOS/NOS2. During the late stages of infection, ARG1 contributes to control of prolonged hyperinflammation; ARG1 also plays a role in regulating the progression of lung immunopathology in *Mtb*-infected, Nos2-deficient mice ([@B87]).

Lipid Metabolism in TB
----------------------

Once glycolytic metabolism has been activated, the genes encoding pro-inflammatory mediators are expressed, together with the synthesis of fatty acids and phospholipids. The TCA cycle and OXPHOS are inhibited, and several intermediates of the TCA cycle accumulate *in situ* ([@B88]). Similar to what has been observed for glucose metabolism, including the TCA cycle and OXPHOS, host lipid metabolism is also regulated in *Mtb* infection ([Figure 2](#F2){ref-type="fig"}).
There are master regulators that mediate lipid metabolism, including the peroxisome proliferator-activated receptors (PPARs), liver X receptor (LXR), sterol regulatory element binding proteins (SREBPs), and HIF ([@B89]--[@B93]). These factors work together to regulate processes including fatty acid uptake, lipid synthesis, the activities of lipolytic enzymes, and lipid droplet (LD) biogenesis ([@B94]). The activation of TLR signaling upregulates expression of several enzymes that promote synthesis of triglycerides and/or cholesterol esters, including fatty acid synthase (FASN), diacylglycerol O-acyltransferases (DGAT-1 and DGAT-2), and acyl-CoA:cholesterol O-acyltransferases (ACAT1 and ACAT2) ([@B95]--[@B97]). During lipid accumulation, increased expression of lipid uptake and transport-related genes is observed, and expression of genes involved in lipolysis is decreased. Perilipin-2 (Plin2) and Perilipin-3 (Plin3) are the main structural proteins of LDs that serve to promote lipid accumulation ([@B96], [@B98], [@B99]). These proteins are essential for the biogenesis and assembly of LDs ([@B100]). PPARs are members of the ligand-activated transcription factor family ([@B101]). PPARs can have a direct impact on LD formation via the regulation of Plin2 expression. PPARs also regulate proteins associated with *de novo* lipogenesis, including fatty acid synthase and the gene regulatory factors LXR and SREBPs ([@B94]). PPAR-γ is important for regulating lipid and glucose metabolism and other cellular processes, including inflammation ([@B102]). Host immune cells infected by *Mtb* exhibit increased PPAR-γ gene expression; this results in downregulation of NF-κB signaling and increased production of prostaglandin (PG) E2, and overall in suppression of pro-inflammatory cytokines and Th1 responses ([@B103], [@B104]). Increased PPAR-γ expression in *Mtb*-infected macrophages is also associated with LD formation ([@B105]).
Formation of LDs is critical for bacterial survival; the accumulated lipids in these infected cells provide nutrients and promote bacterial growth in the host. Additionally, infection with *M. bovis* BCG results in upregulated expression and activation of PPAR-γ and the induction of lipid-loaded macrophages. In BCG-infected TLR2-deficient mice, production of TNF-α undergoes significant downregulation ([@B104], [@B106]). Taken together, these findings suggest that PPAR-γ accelerates intracellular lipid accumulation by modulating the expression of genes that mediate lipid absorption as well as those that promote fatty acid synthesis in response to *Mtb* infection. PPAR-α is another isoform of the PPAR family. It is a transcription factor that modulates the expression of several genes involved in lipid oxidation and glucose metabolism ([@B107]). PPAR-α enhances fatty acid oxidation and ketogenesis while inhibiting fatty acid synthesis and glycolysis ([@B108]). As such, activation of PPAR-α may prevent lipid accumulation in *Mtb*-infected cells. PPAR-α activation also results in the upregulation of transcription factor EB (TFEB) and promotes host innate immunity and autophagy against *Mtb* infection. The induction of TFEB also promotes lipid catabolism, which inhibited intracellular growth of *Mtb* in bone marrow-derived macrophages ([@B109]).

Metabolic HDT in TB {#s3}
===================

In recent years, many researchers have demonstrated that dynamic changes in immunometabolism take place in response to infection with microbes; as such, studies focused on immunometabolism are important to provide a larger understanding of its role in pathogenesis in the host ([@B110]). Current clinical treatments have limitations with respect to the elimination of *Mtb* infection, including the need for long-term use, severe side effects, and the emergence of drug-resistant strains ([@B111]).
As noted above, *Mtb* infection can induce a Warburg effect in host immune cells, similar to that described in tumor tissue ([@B65]). *Mtb* exploits host metabolism in order to escape immune surveillance and modulates various responses to subvert their activities toward promoting its survival and longevity. We expect HDT to be a clinically feasible approach toward readjusting uncontrolled immune responses in patients with infectious disorders. We discuss HDT drugs currently in use or under development that target host metabolism. We will also suggest novel candidate HDT pathways and agents that might be effective toward eradicating *Mtb* ([Table 1](#T1){ref-type="table"}).

###### Host-directed therapies that regulate host metabolism in TB.

| Name | Target | Result | References |
|---|---|---|---|
| **HDT in glucose metabolism** | | | |
| 2-deoxyglucose | Hexokinase | Inhibition of glycolysis; suppression of IL-1β | ([@B73], [@B112]) |
| 3-bromopyruvate | Hexokinase | Inhibition of glycolysis | ([@B113]) |
| Ritonavir | Glucose transporter | Inhibition of glycolysis | ([@B114]) |
| Dichloroacetate | Pyruvate dehydrogenase kinase | Inhibition of glycolysis | ([@B115]) |
| FX11 | Lactate dehydrogenase | Inhibition of glycolysis; downregulation of cytokines and iNOS | ([@B116]) |
| TEPP46 | Pyruvate kinase M2 | Inhibition of HIF-1α; suppression of IL-1β | ([@B80]) |
| Rapamycin | mTOR | Inhibition of glycolysis; upregulation of antimicrobial effect | ([@B117], [@B118]) |
| Loperamide | mTOR | Inhibition of glycolysis; upregulation of antimicrobial effect | ([@B119]) |
| **HDT in lipid metabolism** | | | |
| Metformin | AMP kinase | Increased fatty acid oxidation; increased antibacterial activity; reduced expression of inflammation-related genes | ([@B120], [@B121]) |
| AICAR | AMP kinase | Increased antibacterial activity; induced mitochondrial biogenesis and energy metabolism; inhibition of lipid synthesis | ([@B122]) |
| C75 | Fatty acid synthase | Inhibition of fatty acid synthesis; reduced inflammation and oxidative stress; switch from M2 to M1; downregulation of the NLRP3 inflammasome | ([@B123]--[@B125]) |
| Cerulenin | Fatty acid synthase | Inhibition of fatty acid synthesis; downregulation of the NLRP3 inflammasome | ([@B125]) |
| GW9662 | PPARγ | Modulation of lipid metabolism, inflammation, and pathogenesis of bacteria | ([@B95]) |
| Sirtuins | PGC-1α | Inhibition of NF-κB signaling and pro-inflammatory responses; upregulation of fatty acid oxidation and anti-inflammation | ([@B76], [@B126]--[@B128]) |

HDT in Glucose Metabolism
-------------------------

In TB infection, metabolism switches to glycolysis in order to protect the host against early-phase *Mtb* responses. HIF-1-dependent glycolysis promotes various immune effector functions, including production and release of pro-inflammatory cytokines and NO. As noted earlier, virulent *Mtb* perturbs glycolytic metabolism and thereby inhibits antimicrobial functions. These results suggest that metabolic reprogramming to aerobic glycolysis is an essential component of the anti-TB response. On the other hand, persistent inflammation can result in hyperinflammation and ultimately damage host cells and tissues. Among the featured mechanisms of HDT in TB, there is a focus on inhibition of glycolysis as well as modulation of the mTOR and AMP-activated protein kinase (AMPK) pathways.
For example, 2-deoxyglucose (2-DG) and 3-bromopyruvate suppress the activity of hexokinase, the critical enzyme that catalyzes the first step of glycolysis ([@B113]). In LPS-activated macrophages, 2-DG suppresses the production of IL-1β and results in the accumulation of succinate ([@B73]). Additionally, LPS-induced acute lung injury is reduced by 2-DG-dependent inhibition of glycolysis ([@B112]). Among others under consideration are the HIV-protease inhibitor ritonavir, an antagonist of glucose transporters ([@B114]); dichloroacetate, an inhibitor of pyruvate dehydrogenase kinase ([@B115]); and FX11, a specific inhibitor of lactate dehydrogenase. In LPS-activated RAW 264.7 mouse macrophages, FX11-mediated inhibition of lactate dehydrogenase resulted in the downregulation of cytokine and iNOS production ([@B116]). Likewise, TEPP46 is a small molecule that inhibits the activity of pyruvate kinase M2; this inhibitor attenuates activation of PKM2 in LPS-induced macrophages *in vivo* and results in suppression of IL-1β production ([@B80]). Induction of autophagy can be a potential defense strategy used by cells to eradicate *Mtb* infection. The enzyme mTOR kinase negatively regulates autophagy; as such, mTOR kinase inhibitors may be potent candidates for HDT for the elimination of *Mtb* infection. mTOR inhibitors, including rapamycin and torin, serve to limit the increased levels of lactate detected in *Mtb*-infected macrophages ([@B54]). Rapamycin-mediated activation of autophagy results in acidification of mycobacterial phagosomes and thus decreased survival of BCG ([@B117]). Loperamide induces mTOR-independent autophagy and likewise controls intracellular *Mtb* burden in lung macrophages ([@B119]). However, the use of these inhibitors has several limitations. For example, rapamycin-induced autophagy resulted in enhanced intracellular bacterial replication in HIV/H37Rv co-infected cells ([@B118]).
Therefore, pharmacological induction of autophagy should be carefully evaluated among the candidate drugs to be used for HDT.

HDT in Lipid Metabolism
-----------------------

*Mtb* exploits host lipid or fatty acid metabolism to promote its own survival and growth. Foamy macrophages are recruited to granulomas, where they are included in the barrier that forms around *Mtb*-infected phagocytic cells, to which they provide support and nutrition. Toward this end, infection with *Mtb* induces the synthesis of LDs and fatty acids in host cells. Targeting lipid synthesis may therefore be a good strategy for initial HDT with the goal of eliminating *Mtb*. 5\' AMPK is a highly conserved master regulator that can restore the energy balance by shifting cellular metabolism from one that consumes ATP to a catabolic mechanism that generates ATP ([@B129]). AMPK and other metabolic energy sensors are critical in maintaining various functions of *Mtb*-infected host immune cells, including autophagy, fatty acid β-oxidation, and metabolic reprogramming; the AMPK pathway also plays multi-faceted roles in promoting host defense against viral and bacterial infection. As such, molecules that target AMPK are considered to be effective adjuvant agents to combat *Mtb* infection ([@B130], [@B131]). Metformin, a drug that is clinically approved for the treatment of type 2 diabetes, functions by activating the AMPK-mediated signaling pathway ([@B121]). Treatment with metformin can limit intracellular *Mtb* growth in macrophages via induction of mitochondrial ROS, and can thereby reduce activation of inflammation-related gene expression. Metformin also shows some synergy with conventional anti-TB drugs, including isoniazid or ethionamide, when evaluated in *Mtb*-infected mice. Metformin treatment also decreases the incidence of latent TB ([@B120]).
AICAR (5-aminoimidazole-4-carboxamide-1-β-D-ribofuranoside) is another agent that activates AMPK; AICAR activates autophagy pathways in macrophages and thus promotes antibacterial activity against *Mtb*. AICAR-mediated AMPK activation also results in the activation of the PPARGC1 (peroxisome proliferator-activated receptor gamma, coactivator 1) pathway; this latter pathway regulates mitochondrial biogenesis and energy metabolism in macrophages and in *Drosophila melanogaster* infected with *M. marinum* ([@B122]). Factors that suppress lipid synthesis can limit inflammation and balance the inflammatory state of the host. Among several candidate molecules, C75 and cerulenin inhibit fatty acid synthase. C75 effectively lowers free fatty acid accumulation in mice with sepsis and limits inflammation and oxidative stress ([@B123]). Additionally, C75-mediated inhibition of lipid-derived droplet formation results in a switch from M2 to M1 macrophage polarization, resulting in enhanced generation of both ROS and NO ([@B124]). Furthermore, inhibition of fatty acid synthase by C75 and cerulenin results in downregulated uncoupling protein 2 (UCP2)-mediated NLRP3 inflammasome activation ([@B125]). GW9662, an antagonist of PPARγ, acts as a key modulator of lipid metabolism, inflammation, and pathogenesis in BCG-infected macrophages; this result suggests that regulation of lipid metabolism may be a strong potential host target for novel TB therapy ([@B91]). Likewise, sirtuins (SIRTs) have been recognized as potential targets for anti-TB therapeutics. Sirtuins are enzymes with deacetylase activity that modulate cellular processes by inhibiting NF-κB signaling; this results in a downregulation of the pro-inflammatory response, and in upregulation of fatty acid oxidation and the anti-inflammatory response by targeting peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC-1α) ([@B126], [@B127]).
SIRT1 expression is diminished in *Mtb*-infected THP-1 macrophages and in whole mouse lung tissue. SIRT1 promotes inflammatory resolution by downregulating the expression of the RelA/p65 subunit of NF-κB ([@B128]). SIRT6 also suppresses pro-inflammatory and antimicrobial responses at the early stages of *Mtb* infection ([@B76]).

Conclusion {#s4}
==========

Immunometabolism is among the critical features that define the intimate relationship between the host and the *Mtb* pathogen; a clear understanding of these interactions will be essential for limiting the progression of TB. Metabolic reprogramming from OXPHOS to glycolysis in *Mtb* infection results in the upregulated expression of numerous pro-inflammatory cytokines and antimicrobial effector molecules. Further investigation will be needed in order to understand more fully the relationship between *Mtb* and host metabolism. How and when *Mtb* exploits host metabolism is not clearly understood at this time; clarification will be critical in order to identify the most appropriate candidates for HDT. Among those currently under consideration is *Mtb*-mediated modulation of glucose and/or lipid metabolism. Glucose metabolism might be targeted at the early stage, which would ultimately provide a boost to the Warburg effect and thus more efficient elimination of *Mtb*; by contrast, targeting glucose metabolism at a later stage may result in a much-needed alleviation of hyperinflammation. A better understanding of metabolic reprogramming in TB will provide further insights toward novel therapeutic strategies.

Author Contributions {#s5}
====================

J-SK, Y-RK, and C-SY designed, conceptualized, and wrote the manuscript. All authors contributed to the article and approved the submitted version.
Conflict of Interest {#s6}
====================

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

We would like to thank all members of the Infection Biology Lab for critical reading and discussion of the manuscript.

**Funding.** This work was supported by the NRF grant funded by the Korea government (MSIP) (2016R1D1A1A02937312 and 2019R1I1A2A01064237); a grant from the KHIDI, funded by the Ministry of Health & Welfare, Republic of Korea (HI16C1653).

*Mtb* : *Mycobacterium tuberculosis*

TB : Tuberculosis

HDT : Host-directed therapy

TLRs : Toll-like receptors

DC : Dendritic cell

IL : Interleukin

TNF : Tumor necrosis factor

iNOS/NOS2 : Inducible nitric oxide synthase/nitric oxide synthase 2

TGF-β : Transforming growth factor β

OXPHOS : Oxidative phosphorylation

DM : Diabetes mellitus

MDR-TB : Multidrug-resistant TB

XDR-TB : Extensively drug-resistant TB

NO : Nitric oxide

ROS : Reactive oxygen species

HIF-1 : Hypoxia-inducible factor 1

NF-κB : Nuclear factor-κB

CypD : Cyclophilin D

PHD : Prolyl hydroxylases

FIH : Factor inhibiting HIF

PFKFB3 : 6-phosphofructo-2-kinase/fructose-2,6-biphosphatase 3

F-2,6-BP : Fructose-2,6-bisphosphate

SDH : Succinate dehydrogenase

LPS : Lipopolysaccharide

ACOD1 : Aconitate decarboxylase 1

Irg1 : Immune-responsive gene 1

PKM2 : Pyruvate kinase M2

ARG : Arginase

PPARs : Peroxisome proliferator-activated receptors

LXR : Liver X receptor

SREBPs : Sterol regulatory element-binding proteins

LD : Lipid droplet

FASN : Fatty acid synthase

DGAT : Diacylglycerol O-acyltransferase

ACAT : Acyl-CoA:cholesterol O-acyltransferase

Plin : Perilipin

TFEB : Transcription factor EB

mTOR : Mammalian target of rapamycin

AMPK : AMP-activated protein kinase

2-DG : 2-deoxyglucose

PPARGC1 : Peroxisome proliferator-activated receptor gamma, coactivator 1

AICAR : 5-aminoimidazole-4-carboxamide-1-β-D-ribofuranoside

UCP2 : Mitochondrial uncoupling protein 2

SIRTs : Sirtuins

PGC-1α : Peroxisome proliferator-activated receptor gamma coactivator 1-alpha

[^1]: Edited by: Anca Dorhoi, Friedrich Loeffler Institute, Germany

[^2]: Reviewed by: Arshad Khan, McGovern Medical School at UTHealth, United States; Elsa Anes, University of Lisbon, Portugal

[^3]: This article was submitted to Microbial Immunology, a section of the journal Frontiers in Immunology
Story highlights

- The report comes after Sessions moved to expand the program in July
- The IG called on Tennessee to "remedy" the prohibited six-figure food expenses

Washington (CNN) Tennessee law enforcement misused funds from a program involving seized assets, spending more than $110,000 on catering, a government watchdog report released Thursday found. The report from the Inspector General for the Department of Justice came weeks after Attorney General Jeff Sessions moved to allow the asset forfeiture program, which former Attorney General Eric Holder had scaled back in 2015, to grow. Law enforcement routinely seizes property under the suspicion that it was obtained from, or intended for, illegal activity. Through the Department of Justice's equitable sharing program, the department's law enforcement partners can request and spend funds from the seizures for law enforcement purposes. The report noted law enforcement cannot use the funds for bayonets, weaponized aircraft, or food purchases, among other things. The inspector general report released on Thursday said it had "identified several areas of improvement" with the Tennessee Department of Safety and Homeland Security's use of the program. Namely, the report took issue with $112,614 in funds spent on food, with just over $110,000 spent on catering from March 2014 to March 2016, based on funds from the program.
Transcriptional analysis of differential carbohydrate utilization by Clostridium acetobutylicum. Transcriptional analysis was performed on Clostridium acetobutylicum with the goal of identifying sugar-specific mechanisms for the transcriptional regulation of transport and metabolism genes. DNA microarrays were used to determine transcript levels from total RNA isolated from cells grown on media containing eleven different carbohydrates, including two pentoses (xylose, arabinose), four hexoses (glucose, mannose, galactose, fructose), four disaccharides (sucrose, lactose, maltose, cellobiose) and one polysaccharide (starch). Sugar-specific induction of many transport and metabolism genes indicates that these processes are regulated at the transcriptional level and are subject to carbon catabolite repression. The results show that C. acetobutylicum utilizes symporters and ATP-binding cassette (ABC) transporters for the uptake of pentose sugars, while disaccharides and hexoses are primarily taken up by phosphotransferase system (PTS) transporters and a gluconate : H(+) (GntP) transporter. The transcription of some transporter genes was induced by specific sugars, while others were induced by a subset of the sugars tested. Sugar-specific transport roles are suggested, based on expression comparisons, for various transporters of the PTS, the ABC superfamily and members of the major facilitator superfamily (MFS), including the GntP symporter family and the glycoside-pentoside-hexuronide (GPH)-cation symporter family. Additionally, updates to the C. acetobutylicum genome annotation are proposed, including the identification of genes likely to encode proteins involved in the metabolism of arabinose and xylose via the pentose phosphate pathway.
FILE PHOTO - A man walks near a banner of ride-sharing app Uber during a news conference in Cairo, Egypt, December 4, 2018. REUTERS/Lena Masri CAIRO (Reuters) - Egypt’s top administrative court on Saturday lifted a ban on operations by ride-hailing companies Uber and Careem, which have faced fierce opposition from traditional taxi drivers, a judicial source and lawyer said. A lower administrative court withdrew the permits of U.S.-based Uber and its main rival, Dubai-based Careem, in March 2018 after 42 taxi drivers filed suit, arguing the apps were illegally using private cars as taxis and were registered as a call center and an internet company, respectively. In April last year, however, the Cairo Court of Urgent Matters said the ruling should be suspended and the two firms should be allowed to continue operating until a final decision was made by the Highest Administrative Court, which accepted the companies’ appeal on Saturday. Uber has faced repeated regulatory and legal setbacks around the world due to opposition from traditional taxi services. It has been forced to quit several countries, including Denmark and Hungary. The company has said Egypt is its largest market in the Middle East, with 157,000 drivers in 2017 and four million users since its launch there in 2014. Last week, Uber reached an agreement with the Egyptian Tax Authority to pay value-added tax (VAT), which Careem said it had been paying since March 2018.
Stone maintained a big lead throughout this race. Graves and Hersberger put on a good race for second before Graves dropped out on the back stretch on the last lap while holding that place.

Heat Race for the Fastest Half of the Cars – 10 Laps – 5 cars

Place  Driver            Automobile                           Time
1      Johnny Mais*      Essex                                6:09.6
2      Fred Lentz        Mercer chassis with a Hudson engine
3      Elmer J. Negy***  Haynes
DNF    Jake Strickler    Hudson
DNF    Leonard Kerbs     Ford

Kerbs dropped out on the second lap with a broken wheel. Strickler dropped out on the fourth lap with engine trouble.

Exhibition Run Against Time – 2 Laps

Driver           Automobile  Time
Elfrieda Mais**  Essex       1:11.6

Free-For-All – 30 Laps – 5 cars

Place  Driver                        Automobile                               Time     Purse
1      Johnny Mais*                  Essex                                    19:20.2  $900
2      Elmer J. Negy***              Haynes                                            500
3      Fred Lentz                    Mercer chassis with a Hudson engine
4      Leonard Kerbs                 Ford
5      James I. “Toots” Higgins****  Buick co-owned by Higgins & Tip Sealey
DNS    Jake Strickler                Hudson
DNS    _____ Stone                   Overland

Jake Strickler of Enid, Oklahoma qualified for this race but could not start due to engine trouble. Stone qualified for this race but could not start due to a broken hub. Mais was leading over Kerbs by ¼ lap when Kerbs had to pit late in the race to change a tire.
{ "pile_set_name": "Pile-CC" }
There are errors in the Funding section. The correct funding information is as follows: This study was supported by the National Cancer Institute of the National Institutes of Health under award number K08CA155035 and the Melanoma Research Alliance. The authors are also grateful to Timothy Dattels for his generous support. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
{ "pile_set_name": "PubMed Central" }
Q: Javascript call function

I have been testing some code lately, trying to understand JavaScript a little bit better. Then I came across the call() function, which I can't get to understand well. I have the following code:

function hi(){
    console.log("hi");
}

var bye = function(param, param2){
    console.log(param);
    console.log(param2);
    console.log("bye");
}

If I call bye.call(hi(), 1, 2), I get

hi
1
2
undefined

And if I call bye.call(1, 2), I get

2
undefined
bye
undefined

From this I understood that the call() function's first parameter has to be a function, followed by the parameters my bye function accepts. But where is the last undefined coming from?

A: The first parameter doesn't have to be a function. The first parameter is the object to which the "this" variable is set in the context of the function call.

var bye = function(param, param2){
    console.log(param);
    console.log(param2);
    console.log("bye");
    console.log(this.x);
}

t = {'x': 1};
bye.call(t, 1, 2);

And the console should show: 1, 2, "bye" and 1.

The undefined is the return value of your function.

In your first call:

bye.call(hi(), 1, 2)

You're calling hi() (so it prints 'hi'), the return value is not used, and 1 and 2 are the parameters to bye.

In your second call:

bye.call(1, 2)

1 is assigned to this, 2 is param, and param2 is undefined.
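For readers following along, the behaviour the answer describes can be reproduced with a short, self-contained sketch. The greet function and the names used here are illustrative, not from the original post:

```javascript
// A minimal sketch of Function.prototype.call: the first argument
// becomes `this` inside the function; the remaining arguments map
// to the function's declared parameters, in order.
function greet(punctuation, extra) {
  return this.name + punctuation + (extra === undefined ? "" : extra);
}

const person = { name: "Ada" };

// `this` is bound to `person`; "!" and " :)" fill the two parameters.
const a = greet.call(person, "!", " :)");

// Only one parameter supplied, so `extra` is undefined inside greet --
// the same reason param2 printed undefined in the question above.
const b = greet.call({ name: "Bob" }, "?");

console.log(a); // "Ada! :)"
console.log(b); // "Bob?"
```

The "last undefined" in the question is separate from all of this: it is just the browser console echoing the return value of a function that returns nothing.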
{ "pile_set_name": "StackExchange" }
testConstructorMessageCause(org.apache.commons.math.FunctionEvaluationExceptionTest) testConstructorMessage(org.apache.commons.math.FunctionEvaluationExceptionTest) testConstructor(org.apache.commons.math.FunctionEvaluationExceptionTest) testConstructorCause(org.apache.commons.math.MathConfigurationExceptionTest) testConstructorMessageCause(org.apache.commons.math.MathConfigurationExceptionTest) testConstructorMessage(org.apache.commons.math.MathConfigurationExceptionTest) testConstructor(org.apache.commons.math.MathConfigurationExceptionTest) testSerialization(org.apache.commons.math.MathExceptionTest) testConstructorCause(org.apache.commons.math.MathExceptionTest) testConstructorMessageCause(org.apache.commons.math.MathExceptionTest) testConstructorMessage(org.apache.commons.math.MathExceptionTest) testPrintStackTrace(org.apache.commons.math.MathExceptionTest) testConstructor(org.apache.commons.math.MathExceptionTest) testSetMaximalIterationCount(org.apache.commons.math.analysis.BisectionSolverTest) testSetRelativeAccuracy(org.apache.commons.math.analysis.BisectionSolverTest) testSetAbsoluteAccuracy(org.apache.commons.math.analysis.BisectionSolverTest) testSerialization(org.apache.commons.math.analysis.BisectionSolverTest) testSetFunctionValueAccuracy(org.apache.commons.math.analysis.BisectionSolverTest) testResetRelativeAccuracy(org.apache.commons.math.analysis.BisectionSolverTest) testResetAbsoluteAccuracy(org.apache.commons.math.analysis.BisectionSolverTest) testResetMaximalIterationCount(org.apache.commons.math.analysis.BisectionSolverTest) testResetFunctionValueAccuracy(org.apache.commons.math.analysis.BisectionSolverTest) testQuinticZero(org.apache.commons.math.analysis.BisectionSolverTest) testSinZero(org.apache.commons.math.analysis.BisectionSolverTest) testBadEndpoints(org.apache.commons.math.analysis.BrentSolverTest) testQuinticZero(org.apache.commons.math.analysis.BrentSolverTest) testSinZero(org.apache.commons.math.analysis.BrentSolverTest) 
testConstructorCause(org.apache.commons.math.analysis.ConvergenceExceptionTest) testConstructorMessageCause(org.apache.commons.math.analysis.ConvergenceExceptionTest) testConstructorMessage(org.apache.commons.math.analysis.ConvergenceExceptionTest) testConstructor(org.apache.commons.math.analysis.ConvergenceExceptionTest) testParameters(org.apache.commons.math.analysis.DividedDifferenceInterpolatorTest) testSinFunction(org.apache.commons.math.analysis.DividedDifferenceInterpolatorTest) testExpm1Function(org.apache.commons.math.analysis.DividedDifferenceInterpolatorTest) testLinearFunction(org.apache.commons.math.analysis.LaguerreSolverTest) testQuadraticFunction(org.apache.commons.math.analysis.LaguerreSolverTest) testParameters(org.apache.commons.math.analysis.LaguerreSolverTest) testQuinticFunction2(org.apache.commons.math.analysis.LaguerreSolverTest) testQuinticFunction(org.apache.commons.math.analysis.LaguerreSolverTest) testExpm1Function2(org.apache.commons.math.analysis.MullerSolverTest) testParameters(org.apache.commons.math.analysis.MullerSolverTest) testSinFunction(org.apache.commons.math.analysis.MullerSolverTest) testQuinticFunction2(org.apache.commons.math.analysis.MullerSolverTest) testExpm1Function(org.apache.commons.math.analysis.MullerSolverTest) testSinFunction2(org.apache.commons.math.analysis.MullerSolverTest) testQuinticFunction(org.apache.commons.math.analysis.MullerSolverTest) testParameters(org.apache.commons.math.analysis.NevilleInterpolatorTest) testSinFunction(org.apache.commons.math.analysis.NevilleInterpolatorTest) testExpm1Function(org.apache.commons.math.analysis.NevilleInterpolatorTest) testSerialization(org.apache.commons.math.analysis.NewtonSolverTest) testQuinticZero(org.apache.commons.math.analysis.NewtonSolverTest) testSinZero(org.apache.commons.math.analysis.NewtonSolverTest) testLinearFunction(org.apache.commons.math.analysis.PolynomialFunctionLagrangeFormTest) 
testQuadraticFunction(org.apache.commons.math.analysis.PolynomialFunctionLagrangeFormTest) testParameters(org.apache.commons.math.analysis.PolynomialFunctionLagrangeFormTest) testQuinticFunction(org.apache.commons.math.analysis.PolynomialFunctionLagrangeFormTest) testLinearFunction(org.apache.commons.math.analysis.PolynomialFunctionNewtonFormTest) testQuadraticFunction(org.apache.commons.math.analysis.PolynomialFunctionNewtonFormTest) testParameters(org.apache.commons.math.analysis.PolynomialFunctionNewtonFormTest) testQuinticFunction(org.apache.commons.math.analysis.PolynomialFunctionNewtonFormTest) testQuadratic(org.apache.commons.math.analysis.PolynomialFunctionTest) testConstants(org.apache.commons.math.analysis.PolynomialFunctionTest) testQuintic(org.apache.commons.math.analysis.PolynomialFunctionTest) testLinear(org.apache.commons.math.analysis.PolynomialFunctionTest) testfirstDerivativeComparision(org.apache.commons.math.analysis.PolynomialFunctionTest) testValues(org.apache.commons.math.analysis.PolynomialSplineFunctionTest) testConstructor(org.apache.commons.math.analysis.PolynomialSplineFunctionTest) testParameters(org.apache.commons.math.analysis.RiddersSolverTest) testSinFunction(org.apache.commons.math.analysis.RiddersSolverTest) testExpm1Function(org.apache.commons.math.analysis.RiddersSolverTest) testQuinticFunction(org.apache.commons.math.analysis.RiddersSolverTest) testParameters(org.apache.commons.math.analysis.RombergIntegratorTest) testSinFunction(org.apache.commons.math.analysis.RombergIntegratorTest) testQuinticFunction(org.apache.commons.math.analysis.RombergIntegratorTest) testParameters(org.apache.commons.math.analysis.SimpsonIntegratorTest) testSinFunction(org.apache.commons.math.analysis.SimpsonIntegratorTest) testQuinticFunction(org.apache.commons.math.analysis.SimpsonIntegratorTest) testInterpolateSin(org.apache.commons.math.analysis.SplineInterpolatorTest) 
testInterpolateLinearDegenerateTwoSegment(org.apache.commons.math.analysis.SplineInterpolatorTest) testIllegalArguments(org.apache.commons.math.analysis.SplineInterpolatorTest) testInterpolateLinear(org.apache.commons.math.analysis.SplineInterpolatorTest) testInterpolateLinearDegenerateThreeSegment(org.apache.commons.math.analysis.SplineInterpolatorTest) testParameters(org.apache.commons.math.analysis.TrapezoidIntegratorTest) testSinFunction(org.apache.commons.math.analysis.TrapezoidIntegratorTest) testQuinticFunction(org.apache.commons.math.analysis.TrapezoidIntegratorTest) testNewBrentSolverValid(org.apache.commons.math.analysis.UnivariateRealSolverFactoryImplTest) testNewBisectionSolverValid(org.apache.commons.math.analysis.UnivariateRealSolverFactoryImplTest) testNewBrentSolverNull(org.apache.commons.math.analysis.UnivariateRealSolverFactoryImplTest) testNewBisectionSolverNull(org.apache.commons.math.analysis.UnivariateRealSolverFactoryImplTest) testNewNewtonSolverNull(org.apache.commons.math.analysis.UnivariateRealSolverFactoryImplTest) testNewNewtonSolverValid(org.apache.commons.math.analysis.UnivariateRealSolverFactoryImplTest) testNewSecantSolverValid(org.apache.commons.math.analysis.UnivariateRealSolverFactoryImplTest) testNewSecantSolverNull(org.apache.commons.math.analysis.UnivariateRealSolverFactoryImplTest) testSolveNoRoot(org.apache.commons.math.analysis.UnivariateRealSolverUtilsTest) testSolveSin(org.apache.commons.math.analysis.UnivariateRealSolverUtilsTest) testSolveAccuracySin(org.apache.commons.math.analysis.UnivariateRealSolverUtilsTest) testSolveBadParameters(org.apache.commons.math.analysis.UnivariateRealSolverUtilsTest) testSolveNull(org.apache.commons.math.analysis.UnivariateRealSolverUtilsTest) testBracketSin(org.apache.commons.math.analysis.UnivariateRealSolverUtilsTest) testBracketCornerSolution(org.apache.commons.math.analysis.UnivariateRealSolverUtilsTest) 
testSolveAccuracyNull(org.apache.commons.math.analysis.UnivariateRealSolverUtilsTest) testBadParameters(org.apache.commons.math.analysis.UnivariateRealSolverUtilsTest) testParseNegativeImaginary(org.apache.commons.math.complex.ComplexFormatTest) testConstructorSingleFormat(org.apache.commons.math.complex.ComplexFormatTest) testParseSimpleWithDecimals(org.apache.commons.math.complex.ComplexFormatTest) testZeroImaginary(org.apache.commons.math.complex.ComplexFormatTest) testSetImaginaryFormatNull(org.apache.commons.math.complex.ComplexFormatTest) testDifferentImaginaryChar(org.apache.commons.math.complex.ComplexFormatTest) testFormatNumber(org.apache.commons.math.complex.ComplexFormatTest) testFormatObject(org.apache.commons.math.complex.ComplexFormatTest) testNan(org.apache.commons.math.complex.ComplexFormatTest) testSimpleWithDecimalsTrunc(org.apache.commons.math.complex.ComplexFormatTest) testSetImaginaryCharacterNull(org.apache.commons.math.complex.ComplexFormatTest) testStaticFormatComplex(org.apache.commons.math.complex.ComplexFormatTest) testGetRealFormat(org.apache.commons.math.complex.ComplexFormatTest) testParseNegativeBoth(org.apache.commons.math.complex.ComplexFormatTest) testParseNegativeReal(org.apache.commons.math.complex.ComplexFormatTest) testGetImaginaryFormat(org.apache.commons.math.complex.ComplexFormatTest) testParseSimpleWithDecimalsTrunc(org.apache.commons.math.complex.ComplexFormatTest) testNegativeInfinity(org.apache.commons.math.complex.ComplexFormatTest) testSetRealFormatNull(org.apache.commons.math.complex.ComplexFormatTest) testPaseNegativeInfinity(org.apache.commons.math.complex.ComplexFormatTest) testParseDifferentImaginaryChar(org.apache.commons.math.complex.ComplexFormatTest) testSetImaginaryCharacterEmpty(org.apache.commons.math.complex.ComplexFormatTest) testSimpleNoDecimals(org.apache.commons.math.complex.ComplexFormatTest) testZeroReal(org.apache.commons.math.complex.ComplexFormatTest) 
testNegativeBoth(org.apache.commons.math.complex.ComplexFormatTest) testNegativeReal(org.apache.commons.math.complex.ComplexFormatTest) testNegativeImaginary(org.apache.commons.math.complex.ComplexFormatTest) testParseSimpleNoDecimals(org.apache.commons.math.complex.ComplexFormatTest) testPositiveInfinity(org.apache.commons.math.complex.ComplexFormatTest) testParseZeroReal(org.apache.commons.math.complex.ComplexFormatTest) testParseNan(org.apache.commons.math.complex.ComplexFormatTest) testParseZeroImaginary(org.apache.commons.math.complex.ComplexFormatTest) testParsePositiveInfinity(org.apache.commons.math.complex.ComplexFormatTest) testSimpleWithDecimals(org.apache.commons.math.complex.ComplexFormatTest) testConjugateNaN(org.apache.commons.math.complex.ComplexTest) testEqualsClass(org.apache.commons.math.complex.ComplexTest) testAddInfinite(org.apache.commons.math.complex.ComplexTest) testAbs(org.apache.commons.math.complex.ComplexTest) testAdd(org.apache.commons.math.complex.ComplexTest) testSubtract(org.apache.commons.math.complex.ComplexTest) testDivideNaNInf(org.apache.commons.math.complex.ComplexTest) testDivideNaN(org.apache.commons.math.complex.ComplexTest) testEqualsRealDifference(org.apache.commons.math.complex.ComplexTest) testNegateNaN(org.apache.commons.math.complex.ComplexTest) testEqualsNull(org.apache.commons.math.complex.ComplexTest) testEqualsSame(org.apache.commons.math.complex.ComplexTest) testEqualsTrue(org.apache.commons.math.complex.ComplexTest) testMultiplyNaN(org.apache.commons.math.complex.ComplexTest) testConjugate(org.apache.commons.math.complex.ComplexTest) testMultiplyNaNInf(org.apache.commons.math.complex.ComplexTest) testEqualsImaginaryDifference(org.apache.commons.math.complex.ComplexTest) testConstructorNaN(org.apache.commons.math.complex.ComplexTest) testHashCode(org.apache.commons.math.complex.ComplexTest) testAbsNaN(org.apache.commons.math.complex.ComplexTest) testAddNaN(org.apache.commons.math.complex.ComplexTest) 
testDivide(org.apache.commons.math.complex.ComplexTest) testMultiply(org.apache.commons.math.complex.ComplexTest) testEqualsNaN(org.apache.commons.math.complex.ComplexTest) testDivideInfinite(org.apache.commons.math.complex.ComplexTest) testNegate(org.apache.commons.math.complex.ComplexTest) testConjugateInfiinite(org.apache.commons.math.complex.ComplexTest) testSubtractNaN(org.apache.commons.math.complex.ComplexTest) testAbsInfinite(org.apache.commons.math.complex.ComplexTest) testConstructor(org.apache.commons.math.complex.ComplexTest) testTanNull(org.apache.commons.math.complex.ComplexUtilsTest) testTanhInf(org.apache.commons.math.complex.ComplexUtilsTest) testTanhNaN(org.apache.commons.math.complex.ComplexUtilsTest) testSqrt1zNull(org.apache.commons.math.complex.ComplexUtilsTest) testlogNull(org.apache.commons.math.complex.ComplexUtilsTest) testExpNull(org.apache.commons.math.complex.ComplexUtilsTest) testAcosInf(org.apache.commons.math.complex.ComplexUtilsTest) testAcosNaN(org.apache.commons.math.complex.ComplexUtilsTest) testCos(org.apache.commons.math.complex.ComplexUtilsTest) testExp(org.apache.commons.math.complex.ComplexUtilsTest) testLog(org.apache.commons.math.complex.ComplexUtilsTest) testPow(org.apache.commons.math.complex.ComplexUtilsTest) testSin(org.apache.commons.math.complex.ComplexUtilsTest) testTan(org.apache.commons.math.complex.ComplexUtilsTest) testAcos(org.apache.commons.math.complex.ComplexUtilsTest) testAsin(org.apache.commons.math.complex.ComplexUtilsTest) testAtan(org.apache.commons.math.complex.ComplexUtilsTest) testCosh(org.apache.commons.math.complex.ComplexUtilsTest) testSinh(org.apache.commons.math.complex.ComplexUtilsTest) testTanh(org.apache.commons.math.complex.ComplexUtilsTest) testAsinInf(org.apache.commons.math.complex.ComplexUtilsTest) testAsinNaN(org.apache.commons.math.complex.ComplexUtilsTest) testAtanInf(org.apache.commons.math.complex.ComplexUtilsTest) testAtanNaN(org.apache.commons.math.complex.ComplexUtilsTest) 
testAcosNull(org.apache.commons.math.complex.ComplexUtilsTest) testTanhCritical(org.apache.commons.math.complex.ComplexUtilsTest) testPowZero(org.apache.commons.math.complex.ComplexUtilsTest) testSqrtImaginaryZero(org.apache.commons.math.complex.ComplexUtilsTest) testPolar2ComplexInf(org.apache.commons.math.complex.ComplexUtilsTest) testPolar2ComplexNaN(org.apache.commons.math.complex.ComplexUtilsTest) testSqrtRealPositive(org.apache.commons.math.complex.ComplexUtilsTest) testSqrt1zNaN(org.apache.commons.math.complex.ComplexUtilsTest) testSqrtRealNegative(org.apache.commons.math.complex.ComplexUtilsTest) testSqrtPolar(org.apache.commons.math.complex.ComplexUtilsTest) testSqrtNull(org.apache.commons.math.complex.ComplexUtilsTest) testLogZero(org.apache.commons.math.complex.ComplexUtilsTest) testAsinNull(org.apache.commons.math.complex.ComplexUtilsTest) testTanhNull(org.apache.commons.math.complex.ComplexUtilsTest) testTanCritical(org.apache.commons.math.complex.ComplexUtilsTest) testCoshNull(org.apache.commons.math.complex.ComplexUtilsTest) testPowNaNBase(org.apache.commons.math.complex.ComplexUtilsTest) testPolar2ComplexIllegalModulus(org.apache.commons.math.complex.ComplexUtilsTest) testCosNull(org.apache.commons.math.complex.ComplexUtilsTest) testCoshInf(org.apache.commons.math.complex.ComplexUtilsTest) testCoshNaN(org.apache.commons.math.complex.ComplexUtilsTest) testAtanNull(org.apache.commons.math.complex.ComplexUtilsTest) testPowNaNExponent(org.apache.commons.math.complex.ComplexUtilsTest) testSqrtRealZero(org.apache.commons.math.complex.ComplexUtilsTest) testSqrtImaginaryNegative(org.apache.commons.math.complex.ComplexUtilsTest) testsinhNull(org.apache.commons.math.complex.ComplexUtilsTest) testCosInf(org.apache.commons.math.complex.ComplexUtilsTest) testCosNaN(org.apache.commons.math.complex.ComplexUtilsTest) testExpInf(org.apache.commons.math.complex.ComplexUtilsTest) testExpNaN(org.apache.commons.math.complex.ComplexUtilsTest) 
testLogInf(org.apache.commons.math.complex.ComplexUtilsTest) testLogNaN(org.apache.commons.math.complex.ComplexUtilsTest) testPowInf(org.apache.commons.math.complex.ComplexUtilsTest) testSinNull(org.apache.commons.math.complex.ComplexUtilsTest) testSinhInf(org.apache.commons.math.complex.ComplexUtilsTest) testSinhNaN(org.apache.commons.math.complex.ComplexUtilsTest) testSinInf(org.apache.commons.math.complex.ComplexUtilsTest) testSinNaN(org.apache.commons.math.complex.ComplexUtilsTest) testSqrt1z(org.apache.commons.math.complex.ComplexUtilsTest) testTanInf(org.apache.commons.math.complex.ComplexUtilsTest) testTanNaN(org.apache.commons.math.complex.ComplexUtilsTest) testPolar2Complex(org.apache.commons.math.complex.ComplexUtilsTest) testpowNull(org.apache.commons.math.complex.ComplexUtilsTest) testSqrtInf(org.apache.commons.math.complex.ComplexUtilsTest) testSqrtNaN(org.apache.commons.math.complex.ComplexUtilsTest) testParseNegativeImaginary(org.apache.commons.math.complex.FrenchComplexFormatTest) testConstructorSingleFormat(org.apache.commons.math.complex.FrenchComplexFormatTest) testParseSimpleWithDecimals(org.apache.commons.math.complex.FrenchComplexFormatTest) testZeroImaginary(org.apache.commons.math.complex.FrenchComplexFormatTest) testSetImaginaryFormatNull(org.apache.commons.math.complex.FrenchComplexFormatTest) testDifferentImaginaryChar(org.apache.commons.math.complex.FrenchComplexFormatTest) testFormatNumber(org.apache.commons.math.complex.FrenchComplexFormatTest) testFormatObject(org.apache.commons.math.complex.FrenchComplexFormatTest) testNan(org.apache.commons.math.complex.FrenchComplexFormatTest) testSimpleWithDecimalsTrunc(org.apache.commons.math.complex.FrenchComplexFormatTest) testSetImaginaryCharacterNull(org.apache.commons.math.complex.FrenchComplexFormatTest) testStaticFormatComplex(org.apache.commons.math.complex.FrenchComplexFormatTest) testGetRealFormat(org.apache.commons.math.complex.FrenchComplexFormatTest) 
testParseNegativeBoth(org.apache.commons.math.complex.FrenchComplexFormatTest) testParseNegativeReal(org.apache.commons.math.complex.FrenchComplexFormatTest) testGetImaginaryFormat(org.apache.commons.math.complex.FrenchComplexFormatTest) testParseSimpleWithDecimalsTrunc(org.apache.commons.math.complex.FrenchComplexFormatTest) testNegativeInfinity(org.apache.commons.math.complex.FrenchComplexFormatTest) testSetRealFormatNull(org.apache.commons.math.complex.FrenchComplexFormatTest) testPaseNegativeInfinity(org.apache.commons.math.complex.FrenchComplexFormatTest) testParseDifferentImaginaryChar(org.apache.commons.math.complex.FrenchComplexFormatTest) testSetImaginaryCharacterEmpty(org.apache.commons.math.complex.FrenchComplexFormatTest) testSimpleNoDecimals(org.apache.commons.math.complex.FrenchComplexFormatTest) testZeroReal(org.apache.commons.math.complex.FrenchComplexFormatTest) testNegativeBoth(org.apache.commons.math.complex.FrenchComplexFormatTest) testNegativeReal(org.apache.commons.math.complex.FrenchComplexFormatTest) testNegativeImaginary(org.apache.commons.math.complex.FrenchComplexFormatTest) testParseSimpleNoDecimals(org.apache.commons.math.complex.FrenchComplexFormatTest) testPositiveInfinity(org.apache.commons.math.complex.FrenchComplexFormatTest) testParseZeroReal(org.apache.commons.math.complex.FrenchComplexFormatTest) testParseNan(org.apache.commons.math.complex.FrenchComplexFormatTest) testParseZeroImaginary(org.apache.commons.math.complex.FrenchComplexFormatTest) testParsePositiveInfinity(org.apache.commons.math.complex.FrenchComplexFormatTest) testSimpleWithDecimals(org.apache.commons.math.complex.FrenchComplexFormatTest) testDegenerate0(org.apache.commons.math.distribution.BinomialDistributionTest) testDegenerate1(org.apache.commons.math.distribution.BinomialDistributionTest) testDensities(org.apache.commons.math.distribution.BinomialDistributionTest) 
testInverseCumulativeProbabilities(org.apache.commons.math.distribution.BinomialDistributionTest) testCumulativeProbabilities(org.apache.commons.math.distribution.BinomialDistributionTest) testIllegalArguments(org.apache.commons.math.distribution.BinomialDistributionTest) testScale(org.apache.commons.math.distribution.CauchyDistributionTest) testMedian(org.apache.commons.math.distribution.CauchyDistributionTest) testInverseCumulativeProbabilityExtremes(org.apache.commons.math.distribution.CauchyDistributionTest) testSetScale(org.apache.commons.math.distribution.CauchyDistributionTest) testInverseCumulativeProbabilities(org.apache.commons.math.distribution.CauchyDistributionTest) testCumulativeProbabilities(org.apache.commons.math.distribution.CauchyDistributionTest) testIllegalArguments(org.apache.commons.math.distribution.CauchyDistributionTest) testConsistency(org.apache.commons.math.distribution.CauchyDistributionTest) testDfAccessors(org.apache.commons.math.distribution.ChiSquareDistributionTest) testSmallDf(org.apache.commons.math.distribution.ChiSquareDistributionTest) testInverseCumulativeProbabilities(org.apache.commons.math.distribution.ChiSquareDistributionTest) testCumulativeProbabilities(org.apache.commons.math.distribution.ChiSquareDistributionTest) testIllegalArguments(org.apache.commons.math.distribution.ChiSquareDistributionTest) testConsistency(org.apache.commons.math.distribution.ChiSquareDistributionTest) testWeibullDistributionZeroPositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testBinomialDistributionPositivePositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testBinomialDistributionPositiveNegative(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateTDistributionPositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testHypergeometricDistributionPositivePositiveZero(org.apache.commons.math.distribution.DistributionFactoryImplTest) 
testCreateTDistributionNegative(org.apache.commons.math.distribution.DistributionFactoryImplTest) testWeibullDistributionNegativePositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateExponentialDistributionZero(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCauchyDistributionNegative(org.apache.commons.math.distribution.DistributionFactoryImplTest) testHypergeometricDistributionNegativePositivePositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCauchyDistributionZero(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateGammaDistributionPositiveZero(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateFDistributionPositiveZero(org.apache.commons.math.distribution.DistributionFactoryImplTest) testBinomialDistributionPositiveZero(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateGammaDistributionNegativePositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testWeibullDistributionPositiveZero(org.apache.commons.math.distribution.DistributionFactoryImplTest) testHypergeometricDistributionZeroPositivePositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateFDistributionNegativePositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testWeibullDistributionPositiveNegative(org.apache.commons.math.distribution.DistributionFactoryImplTest) testHypergeometricDistributionPositiveNegativePositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateGammaDistributionPositivePositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateTDistributionZero(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateGammaDistributionPositiveNegative(org.apache.commons.math.distribution.DistributionFactoryImplTest) 
testBinomialDistributionNegativePositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testHypergeometricDistributionPositiveZeroPositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateChiSquareDistributionPositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateExponentialDistributionPositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateChiSquareDistributionNegative(org.apache.commons.math.distribution.DistributionFactoryImplTest) testHypergeometricDistributionSmallPopulationSize(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateFDistributionPositivePositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateExponentialDistributionNegative(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateChiSquareDistributionZero(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateFDistributionPositiveNegative(org.apache.commons.math.distribution.DistributionFactoryImplTest) testHypergeometricDistributionPositivePositiveNegative(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateGammaDistributionZeroPositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCreateFDistributionZeroPositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testBinomialDistributionZeroPositive(org.apache.commons.math.distribution.DistributionFactoryImplTest) testBinomialDistributionPositiveOne(org.apache.commons.math.distribution.DistributionFactoryImplTest) testBinomialDistributionPositiveTwo(org.apache.commons.math.distribution.DistributionFactoryImplTest) testCumulativeProbability2(org.apache.commons.math.distribution.ExponentialDistributionTest) testCumulativeProbabilityExtremes(org.apache.commons.math.distribution.ExponentialDistributionTest) 
testInverseCumulativeProbabilityExtremes(org.apache.commons.math.distribution.ExponentialDistributionTest) testMeanAccessors(org.apache.commons.math.distribution.ExponentialDistributionTest) testInverseCumulativeProbabilities(org.apache.commons.math.distribution.ExponentialDistributionTest) testCumulativeProbabilities(org.apache.commons.math.distribution.ExponentialDistributionTest) testIllegalArguments(org.apache.commons.math.distribution.ExponentialDistributionTest) testConsistency(org.apache.commons.math.distribution.ExponentialDistributionTest) testCumulativeProbabilityExtremes(org.apache.commons.math.distribution.FDistributionTest) testLargeDegreesOfFreedom(org.apache.commons.math.distribution.FDistributionTest) testDfAccessors(org.apache.commons.math.distribution.FDistributionTest) testInverseCumulativeProbabilityExtremes(org.apache.commons.math.distribution.FDistributionTest) testInverseCumulativeProbabilities(org.apache.commons.math.distribution.FDistributionTest) testCumulativeProbabilities(org.apache.commons.math.distribution.FDistributionTest) testIllegalArguments(org.apache.commons.math.distribution.FDistributionTest) testConsistency(org.apache.commons.math.distribution.FDistributionTest) testProbabilities(org.apache.commons.math.distribution.GammaDistributionTest) testParameterAccessors(org.apache.commons.math.distribution.GammaDistributionTest) testInverseCumulativeProbabilityExtremes(org.apache.commons.math.distribution.GammaDistributionTest) testValues(org.apache.commons.math.distribution.GammaDistributionTest) testInverseCumulativeProbabilities(org.apache.commons.math.distribution.GammaDistributionTest) testCumulativeProbabilities(org.apache.commons.math.distribution.GammaDistributionTest) testIllegalArguments(org.apache.commons.math.distribution.GammaDistributionTest) testConsistency(org.apache.commons.math.distribution.GammaDistributionTest) testLargeValues(org.apache.commons.math.distribution.HypergeometricDistributionTest) 
testDegenerateNoFailures(org.apache.commons.math.distribution.HypergeometricDistributionTest) testDegenerateNoSuccesses(org.apache.commons.math.distribution.HypergeometricDistributionTest) testDegenerateFullSample(org.apache.commons.math.distribution.HypergeometricDistributionTest) testPopulationSize(org.apache.commons.math.distribution.HypergeometricDistributionTest) testMoreLargeValues(org.apache.commons.math.distribution.HypergeometricDistributionTest) testDensities(org.apache.commons.math.distribution.HypergeometricDistributionTest) testInverseCumulativeProbabilities(org.apache.commons.math.distribution.HypergeometricDistributionTest) testCumulativeProbabilities(org.apache.commons.math.distribution.HypergeometricDistributionTest) testIllegalArguments(org.apache.commons.math.distribution.HypergeometricDistributionTest) testSetStandardDeviation(org.apache.commons.math.distribution.NormalDistributionTest) testGetStandardDeviation(org.apache.commons.math.distribution.NormalDistributionTest) testQuantiles(org.apache.commons.math.distribution.NormalDistributionTest) testGetMean(org.apache.commons.math.distribution.NormalDistributionTest) testSetMean(org.apache.commons.math.distribution.NormalDistributionTest) testInverseCumulativeProbabilityExtremes(org.apache.commons.math.distribution.NormalDistributionTest) testInverseCumulativeProbabilities(org.apache.commons.math.distribution.NormalDistributionTest) testCumulativeProbabilities(org.apache.commons.math.distribution.NormalDistributionTest) testIllegalArguments(org.apache.commons.math.distribution.NormalDistributionTest) testConsistency(org.apache.commons.math.distribution.NormalDistributionTest) testDegenerate0(org.apache.commons.math.distribution.PascalDistributionTest) testDegenerate1(org.apache.commons.math.distribution.PascalDistributionTest) testDensities(org.apache.commons.math.distribution.PascalDistributionTest) testInverseCumulativeProbabilities(org.apache.commons.math.distribution.PascalDistributionTest) 
testCumulativeProbabilities(org.apache.commons.math.distribution.PascalDistributionTest) testIllegalArguments(org.apache.commons.math.distribution.PascalDistributionTest) testDegenerateInverseCumulativeProbability(org.apache.commons.math.distribution.PoissonDistributionTest) testMean(org.apache.commons.math.distribution.PoissonDistributionTest) testLargeMeanInverseCumulativeProbability(org.apache.commons.math.distribution.PoissonDistributionTest) testNormalApproximateProbability(org.apache.commons.math.distribution.PoissonDistributionTest) testLargeMeanCumulativeProbability(org.apache.commons.math.distribution.PoissonDistributionTest) testDensities(org.apache.commons.math.distribution.PoissonDistributionTest) testInverseCumulativeProbabilities(org.apache.commons.math.distribution.PoissonDistributionTest) testCumulativeProbabilities(org.apache.commons.math.distribution.PoissonDistributionTest) testIllegalArguments(org.apache.commons.math.distribution.PoissonDistributionTest) testCumulativeProbabilityAgaintStackOverflow(org.apache.commons.math.distribution.TDistributionTest) testDfAccessors(org.apache.commons.math.distribution.TDistributionTest) testInverseCumulativeProbabilityExtremes(org.apache.commons.math.distribution.TDistributionTest) testSmallDf(org.apache.commons.math.distribution.TDistributionTest) testInverseCumulativeProbabilities(org.apache.commons.math.distribution.TDistributionTest) testCumulativeProbabilities(org.apache.commons.math.distribution.TDistributionTest) testIllegalArguments(org.apache.commons.math.distribution.TDistributionTest) testConsistency(org.apache.commons.math.distribution.TDistributionTest) testAlpha(org.apache.commons.math.distribution.WeibullDistributionTest) testBeta(org.apache.commons.math.distribution.WeibullDistributionTest) testSetBeta(org.apache.commons.math.distribution.WeibullDistributionTest) testInverseCumulativeProbabilityExtremes(org.apache.commons.math.distribution.WeibullDistributionTest) 
testSetAlpha(org.apache.commons.math.distribution.WeibullDistributionTest) testInverseCumulativeProbabilities(org.apache.commons.math.distribution.WeibullDistributionTest) testCumulativeProbabilities(org.apache.commons.math.distribution.WeibullDistributionTest) testIllegalArguments(org.apache.commons.math.distribution.WeibullDistributionTest) testConsistency(org.apache.commons.math.distribution.WeibullDistributionTest) testNumeratorFormat(org.apache.commons.math.fraction.FractionFormatTest) testFormatImproperNegative(org.apache.commons.math.fraction.FractionFormatTest) testFormatImproper(org.apache.commons.math.fraction.FractionFormatTest) testParseProper(org.apache.commons.math.fraction.FractionFormatTest) testParseProperNegative(org.apache.commons.math.fraction.FractionFormatTest) testParse(org.apache.commons.math.fraction.FractionFormatTest) testWholeFormat(org.apache.commons.math.fraction.FractionFormatTest) testFormatZero(org.apache.commons.math.fraction.FractionFormatTest) testFormatNegative(org.apache.commons.math.fraction.FractionFormatTest) testParseInvalidDenominator(org.apache.commons.math.fraction.FractionFormatTest) testDenominatorFormat(org.apache.commons.math.fraction.FractionFormatTest) testParseProperInvalidMinus(org.apache.commons.math.fraction.FractionFormatTest) testParseInteger(org.apache.commons.math.fraction.FractionFormatTest) testParseInvalid(org.apache.commons.math.fraction.FractionFormatTest) testFormat(org.apache.commons.math.fraction.FractionFormatTest) testParseNegative(org.apache.commons.math.fraction.FractionFormatTest) testFloatValue(org.apache.commons.math.fraction.FractionTest) testAbs(org.apache.commons.math.fraction.FractionTest) testAdd(org.apache.commons.math.fraction.FractionTest) testSubtract(org.apache.commons.math.fraction.FractionTest) testReciprocal(org.apache.commons.math.fraction.FractionTest) testGetReducedFraction(org.apache.commons.math.fraction.FractionTest) 
testConstructorDouble(org.apache.commons.math.fraction.FractionTest) testCompareTo(org.apache.commons.math.fraction.FractionTest) testLongValue(org.apache.commons.math.fraction.FractionTest) testIntValue(org.apache.commons.math.fraction.FractionTest) testDivide(org.apache.commons.math.fraction.FractionTest) testMultiply(org.apache.commons.math.fraction.FractionTest) testEqualsAndHashCode(org.apache.commons.math.fraction.FractionTest) testNegate(org.apache.commons.math.fraction.FractionTest) testDoubleValue(org.apache.commons.math.fraction.FractionTest) testConstructor(org.apache.commons.math.fraction.FractionTest) testOperate(org.apache.commons.math.linear.BigMatrixImplTest) testAddFail(org.apache.commons.math.linear.BigMatrixImplTest) testAdd(org.apache.commons.math.linear.BigMatrixImplTest) testScalarAdd(org.apache.commons.math.linear.BigMatrixImplTest) testSolve(org.apache.commons.math.linear.BigMatrixImplTest) testTrace(org.apache.commons.math.linear.BigMatrixImplTest) testNorm(org.apache.commons.math.linear.BigMatrixImplTest) testToString(org.apache.commons.math.linear.BigMatrixImplTest) testIsSingular(org.apache.commons.math.linear.BigMatrixImplTest) testConstructors(org.apache.commons.math.linear.BigMatrixImplTest) testPlusMinus(org.apache.commons.math.linear.BigMatrixImplTest) testDeterminant(org.apache.commons.math.linear.BigMatrixImplTest) testMultiply2(org.apache.commons.math.linear.BigMatrixImplTest) testDimensions(org.apache.commons.math.linear.BigMatrixImplTest) testSubMatrix(org.apache.commons.math.linear.BigMatrixImplTest) testPremultiplyVector(org.apache.commons.math.linear.BigMatrixImplTest) testCopyFunctions(org.apache.commons.math.linear.BigMatrixImplTest) testLUDecomposition(org.apache.commons.math.linear.BigMatrixImplTest) testGetVectors(org.apache.commons.math.linear.BigMatrixImplTest) testGetColumnMatrix(org.apache.commons.math.linear.BigMatrixImplTest) testMultiply(org.apache.commons.math.linear.BigMatrixImplTest) 
testEqualsAndHashCode(org.apache.commons.math.linear.BigMatrixImplTest) testInverse(org.apache.commons.math.linear.BigMatrixImplTest) testTranspose(org.apache.commons.math.linear.BigMatrixImplTest) testPremultiply(org.apache.commons.math.linear.BigMatrixImplTest) testGetRowMatrix(org.apache.commons.math.linear.BigMatrixImplTest) testSetSubMatrix(org.apache.commons.math.linear.BigMatrixImplTest) testConstructorMessage(org.apache.commons.math.linear.InvalidMatrixExceptionTest) testConstructor(org.apache.commons.math.linear.InvalidMatrixExceptionTest) testConstructorMessage(org.apache.commons.math.linear.MatrixIndexExceptionTest) testConstructor(org.apache.commons.math.linear.MatrixIndexExceptionTest) testCreateRealMatrix(org.apache.commons.math.linear.MatrixUtilsTest) testCreateRowRealMatrix(org.apache.commons.math.linear.MatrixUtilsTest) testCreateBigMatrix(org.apache.commons.math.linear.MatrixUtilsTest) testCreateColumnBigMatrix(org.apache.commons.math.linear.MatrixUtilsTest) testCreateBigIdentityMatrix(org.apache.commons.math.linear.MatrixUtilsTest) testCreateRowBigMatrix(org.apache.commons.math.linear.MatrixUtilsTest) testCreateIdentityMatrix(org.apache.commons.math.linear.MatrixUtilsTest) testCreateColumnRealMatrix(org.apache.commons.math.linear.MatrixUtilsTest) testAEqualQR(org.apache.commons.math.linear.QRDecompositionImplTest) testDimensions(org.apache.commons.math.linear.QRDecompositionImplTest) testRUpperTriangular(org.apache.commons.math.linear.QRDecompositionImplTest) testQOrthogonal(org.apache.commons.math.linear.QRDecompositionImplTest) testOperate(org.apache.commons.math.linear.RealMatrixImplTest) testExamples(org.apache.commons.math.linear.RealMatrixImplTest) testGetEntry(org.apache.commons.math.linear.RealMatrixImplTest) testAddFail(org.apache.commons.math.linear.RealMatrixImplTest) testAdd(org.apache.commons.math.linear.RealMatrixImplTest) testScalarAdd(org.apache.commons.math.linear.RealMatrixImplTest) 
testSolve(org.apache.commons.math.linear.RealMatrixImplTest) testTrace(org.apache.commons.math.linear.RealMatrixImplTest) testNorm(org.apache.commons.math.linear.RealMatrixImplTest) testToString(org.apache.commons.math.linear.RealMatrixImplTest) testIsSingular(org.apache.commons.math.linear.RealMatrixImplTest) testPlusMinus(org.apache.commons.math.linear.RealMatrixImplTest) testDeterminant(org.apache.commons.math.linear.RealMatrixImplTest) testMultiply2(org.apache.commons.math.linear.RealMatrixImplTest) testDimensions(org.apache.commons.math.linear.RealMatrixImplTest) testSubMatrix(org.apache.commons.math.linear.RealMatrixImplTest) testPremultiplyVector(org.apache.commons.math.linear.RealMatrixImplTest) testCopyFunctions(org.apache.commons.math.linear.RealMatrixImplTest) testLUDecomposition(org.apache.commons.math.linear.RealMatrixImplTest) testGetVectors(org.apache.commons.math.linear.RealMatrixImplTest) testGetColumnMatrix(org.apache.commons.math.linear.RealMatrixImplTest) testMultiply(org.apache.commons.math.linear.RealMatrixImplTest) testEqualsAndHashCode(org.apache.commons.math.linear.RealMatrixImplTest) testInverse(org.apache.commons.math.linear.RealMatrixImplTest) testTranspose(org.apache.commons.math.linear.RealMatrixImplTest) testPremultiply(org.apache.commons.math.linear.RealMatrixImplTest) testGetRowMatrix(org.apache.commons.math.linear.RealMatrixImplTest) testSetSubMatrix(org.apache.commons.math.linear.RealMatrixImplTest) testRegularizedBetaPositiveNanPositive(org.apache.commons.math.special.BetaTest) testRegularizedBetaPositivePositivePositive(org.apache.commons.math.special.BetaTest) testRegularizedBetaPositivePositiveNegative(org.apache.commons.math.special.BetaTest) testRegularizedBetaPositiveZeroPositive(org.apache.commons.math.special.BetaTest) testLogBetaPositivePositive(org.apache.commons.math.special.BetaTest) testLogBetaPositiveNegative(org.apache.commons.math.special.BetaTest) testLogBetaPositiveNan(org.apache.commons.math.special.BetaTest) 
testRegularizedBetaNegativePositivePositive(org.apache.commons.math.special.BetaTest) testRegularizedBetaPositivePositiveNan(org.apache.commons.math.special.BetaTest) testLogBetaZeroPositive(org.apache.commons.math.special.BetaTest) testRegularizedBetaPositivePositiveZero(org.apache.commons.math.special.BetaTest) testRegularizedBetaPositiveNegativePositive(org.apache.commons.math.special.BetaTest) testLogBetaNanPositive(org.apache.commons.math.special.BetaTest) testLogBetaNegativePositive(org.apache.commons.math.special.BetaTest) testLogBetaPositiveZero(org.apache.commons.math.special.BetaTest) testRegularizedBetaZeroPositivePositive(org.apache.commons.math.special.BetaTest) testRegularizedBetaNanPositivePositive(org.apache.commons.math.special.BetaTest) testErf1960(org.apache.commons.math.special.ErfTest) testErf2576(org.apache.commons.math.special.ErfTest) testErf2807(org.apache.commons.math.special.ErfTest) testErf3291(org.apache.commons.math.special.ErfTest) testErf0(org.apache.commons.math.special.ErfTest) testLogGammaPositive(org.apache.commons.math.special.GammaTest) testLogGammaNegative(org.apache.commons.math.special.GammaTest) testRegularizedGammaPositivePositive(org.apache.commons.math.special.GammaTest) testRegularizedGammaPositiveNegative(org.apache.commons.math.special.GammaTest) testLogGammaNan(org.apache.commons.math.special.GammaTest) testRegularizedGammaNanPositive(org.apache.commons.math.special.GammaTest) testRegularizedGammaZeroPositive(org.apache.commons.math.special.GammaTest) testLogGammaZero(org.apache.commons.math.special.GammaTest) testRegularizedGammaNegativePositive(org.apache.commons.math.special.GammaTest) testRegularizedGammaPositiveNan(org.apache.commons.math.special.GammaTest) testRegularizedGammaPositiveZero(org.apache.commons.math.special.GammaTest) testUnivariateImpl(org.apache.commons.math.stat.CertifiedDataTest) testStoredUnivariateImpl(org.apache.commons.math.stat.CertifiedDataTest) 
testEmptyTable(org.apache.commons.math.stat.FrequencyTest) testAdd(org.apache.commons.math.stat.FrequencyTest) testPcts(org.apache.commons.math.stat.FrequencyTest) testToString(org.apache.commons.math.stat.FrequencyTest) testIntegerValues(org.apache.commons.math.stat.FrequencyTest) testCounts(org.apache.commons.math.stat.FrequencyTest) testDifferenceStats(org.apache.commons.math.stat.StatUtilsTest) testPercentile(org.apache.commons.math.stat.StatUtilsTest) testArrayIndexConditions(org.apache.commons.math.stat.StatUtilsTest) testMax(org.apache.commons.math.stat.StatUtilsTest) testMin(org.apache.commons.math.stat.StatUtilsTest) testStats(org.apache.commons.math.stat.StatUtilsTest) testSumSq(org.apache.commons.math.stat.StatUtilsTest) testMean(org.apache.commons.math.stat.StatUtilsTest) testN0andN1Conditions(org.apache.commons.math.stat.StatUtilsTest) testProduct(org.apache.commons.math.stat.StatUtilsTest) testGeometricMean(org.apache.commons.math.stat.StatUtilsTest) testVariance(org.apache.commons.math.stat.StatUtilsTest) testSumLog(org.apache.commons.math.stat.StatUtilsTest) testCertifiedValues(org.apache.commons.math.stat.data.LewTest) testCertifiedValues(org.apache.commons.math.stat.data.LotteryTest) testGetSortedValues(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsImplTest) testProductAndGeometricMean(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsImplTest) testStats(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsImplTest) testN0andN1Conditions(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsImplTest) testPercentiles(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsImplTest) testSkewAndKurtosis(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsImplTest) testGetSortedValues(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsTest) testProductAndGeometricMean(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsTest) 
testSerialization(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsTest) testStats(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsTest) testToString(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsTest) testN0andN1Conditions(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsTest) testAddValue(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsTest) testWindowing(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsTest) testPercentiles(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsTest) testWindowSize(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsTest) testNewInstanceClassValid(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsTest) testNewInstanceClassNull(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsTest) testSkewAndKurtosis(org.apache.commons.math.stat.descriptive.DescriptiveStatisticsTest) testInteraction(org.apache.commons.math.stat.descriptive.InteractionTest) testProductAndGeometricMean(org.apache.commons.math.stat.descriptive.ListUnivariateImplTest) testSerialization(org.apache.commons.math.stat.descriptive.ListUnivariateImplTest) testStats(org.apache.commons.math.stat.descriptive.ListUnivariateImplTest) testN0andN1Conditions(org.apache.commons.math.stat.descriptive.ListUnivariateImplTest) testSkewAndKurtosis(org.apache.commons.math.stat.descriptive.ListUnivariateImplTest) testProductAndGeometricMean(org.apache.commons.math.stat.descriptive.MixedListUnivariateImplTest) testStats(org.apache.commons.math.stat.descriptive.MixedListUnivariateImplTest) testN0andN1Conditions(org.apache.commons.math.stat.descriptive.MixedListUnivariateImplTest) testSkewAndKurtosis(org.apache.commons.math.stat.descriptive.MixedListUnivariateImplTest) testSerialization(org.apache.commons.math.stat.descriptive.StatisticalSummaryValuesTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.StatisticalSummaryValuesTest) 
testProductAndGeometricMean(org.apache.commons.math.stat.descriptive.SummaryStatisticsImplTest) testGetSummary(org.apache.commons.math.stat.descriptive.SummaryStatisticsImplTest) testSerialization(org.apache.commons.math.stat.descriptive.SummaryStatisticsImplTest) testStats(org.apache.commons.math.stat.descriptive.SummaryStatisticsImplTest) testNaNContracts(org.apache.commons.math.stat.descriptive.SummaryStatisticsImplTest) testN0andN1Conditions(org.apache.commons.math.stat.descriptive.SummaryStatisticsImplTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.SummaryStatisticsImplTest) testMomentSmallSamples(org.apache.commons.math.stat.descriptive.moment.FirstMomentTest) testSerialization(org.apache.commons.math.stat.descriptive.moment.FirstMomentTest) testIncrementation(org.apache.commons.math.stat.descriptive.moment.FirstMomentTest) testConsistency(org.apache.commons.math.stat.descriptive.moment.FirstMomentTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.moment.FirstMomentTest) testEvaluation(org.apache.commons.math.stat.descriptive.moment.FirstMomentTest) testMomentSmallSamples(org.apache.commons.math.stat.descriptive.moment.FourthMomentTest) testSerialization(org.apache.commons.math.stat.descriptive.moment.FourthMomentTest) testIncrementation(org.apache.commons.math.stat.descriptive.moment.FourthMomentTest) testConsistency(org.apache.commons.math.stat.descriptive.moment.FourthMomentTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.moment.FourthMomentTest) testEvaluation(org.apache.commons.math.stat.descriptive.moment.FourthMomentTest) testSpecialValues(org.apache.commons.math.stat.descriptive.moment.GeometricMeanTest) testMomentSmallSamples(org.apache.commons.math.stat.descriptive.moment.GeometricMeanTest) testSerialization(org.apache.commons.math.stat.descriptive.moment.GeometricMeanTest) testIncrementation(org.apache.commons.math.stat.descriptive.moment.GeometricMeanTest) 
testConsistency(org.apache.commons.math.stat.descriptive.moment.GeometricMeanTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.moment.GeometricMeanTest) testEvaluation(org.apache.commons.math.stat.descriptive.moment.GeometricMeanTest) testNaN(org.apache.commons.math.stat.descriptive.moment.KurtosisTest) testMomentSmallSamples(org.apache.commons.math.stat.descriptive.moment.KurtosisTest) testSerialization(org.apache.commons.math.stat.descriptive.moment.KurtosisTest) testIncrementation(org.apache.commons.math.stat.descriptive.moment.KurtosisTest) testConsistency(org.apache.commons.math.stat.descriptive.moment.KurtosisTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.moment.KurtosisTest) testEvaluation(org.apache.commons.math.stat.descriptive.moment.KurtosisTest) testSmallSamples(org.apache.commons.math.stat.descriptive.moment.MeanTest) testMomentSmallSamples(org.apache.commons.math.stat.descriptive.moment.MeanTest) testSerialization(org.apache.commons.math.stat.descriptive.moment.MeanTest) testIncrementation(org.apache.commons.math.stat.descriptive.moment.MeanTest) testConsistency(org.apache.commons.math.stat.descriptive.moment.MeanTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.moment.MeanTest) testEvaluation(org.apache.commons.math.stat.descriptive.moment.MeanTest) testMomentSmallSamples(org.apache.commons.math.stat.descriptive.moment.SecondMomentTest) testSerialization(org.apache.commons.math.stat.descriptive.moment.SecondMomentTest) testIncrementation(org.apache.commons.math.stat.descriptive.moment.SecondMomentTest) testConsistency(org.apache.commons.math.stat.descriptive.moment.SecondMomentTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.moment.SecondMomentTest) testEvaluation(org.apache.commons.math.stat.descriptive.moment.SecondMomentTest) testNaN(org.apache.commons.math.stat.descriptive.moment.SkewnessTest) 
testMomentSmallSamples(org.apache.commons.math.stat.descriptive.moment.SkewnessTest) testSerialization(org.apache.commons.math.stat.descriptive.moment.SkewnessTest) testIncrementation(org.apache.commons.math.stat.descriptive.moment.SkewnessTest) testConsistency(org.apache.commons.math.stat.descriptive.moment.SkewnessTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.moment.SkewnessTest) testEvaluation(org.apache.commons.math.stat.descriptive.moment.SkewnessTest) testNaN(org.apache.commons.math.stat.descriptive.moment.StandardDeviationTest) testPopulation(org.apache.commons.math.stat.descriptive.moment.StandardDeviationTest) testMomentSmallSamples(org.apache.commons.math.stat.descriptive.moment.StandardDeviationTest) testSerialization(org.apache.commons.math.stat.descriptive.moment.StandardDeviationTest) testIncrementation(org.apache.commons.math.stat.descriptive.moment.StandardDeviationTest) testConsistency(org.apache.commons.math.stat.descriptive.moment.StandardDeviationTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.moment.StandardDeviationTest) testEvaluation(org.apache.commons.math.stat.descriptive.moment.StandardDeviationTest) testMomentSmallSamples(org.apache.commons.math.stat.descriptive.moment.ThirdMomentTest) testSerialization(org.apache.commons.math.stat.descriptive.moment.ThirdMomentTest) testIncrementation(org.apache.commons.math.stat.descriptive.moment.ThirdMomentTest) testConsistency(org.apache.commons.math.stat.descriptive.moment.ThirdMomentTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.moment.ThirdMomentTest) testEvaluation(org.apache.commons.math.stat.descriptive.moment.ThirdMomentTest) testNaN(org.apache.commons.math.stat.descriptive.moment.VarianceTest) testPopulation(org.apache.commons.math.stat.descriptive.moment.VarianceTest) testMomentSmallSamples(org.apache.commons.math.stat.descriptive.moment.VarianceTest) 
testSerialization(org.apache.commons.math.stat.descriptive.moment.VarianceTest) testIncrementation(org.apache.commons.math.stat.descriptive.moment.VarianceTest) testConsistency(org.apache.commons.math.stat.descriptive.moment.VarianceTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.moment.VarianceTest) testEvaluation(org.apache.commons.math.stat.descriptive.moment.VarianceTest) testNaNs(org.apache.commons.math.stat.descriptive.rank.MaxTest) testSpecialValues(org.apache.commons.math.stat.descriptive.rank.MaxTest) testMomentSmallSamples(org.apache.commons.math.stat.descriptive.rank.MaxTest) testSerialization(org.apache.commons.math.stat.descriptive.rank.MaxTest) testIncrementation(org.apache.commons.math.stat.descriptive.rank.MaxTest) testConsistency(org.apache.commons.math.stat.descriptive.rank.MaxTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.rank.MaxTest) testEvaluation(org.apache.commons.math.stat.descriptive.rank.MaxTest) testEvaluation(org.apache.commons.math.stat.descriptive.rank.MedianTest) testNaNs(org.apache.commons.math.stat.descriptive.rank.MinTest) testSpecialValues(org.apache.commons.math.stat.descriptive.rank.MinTest) testMomentSmallSamples(org.apache.commons.math.stat.descriptive.rank.MinTest) testSerialization(org.apache.commons.math.stat.descriptive.rank.MinTest) testIncrementation(org.apache.commons.math.stat.descriptive.rank.MinTest) testConsistency(org.apache.commons.math.stat.descriptive.rank.MinTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.rank.MinTest) testEvaluation(org.apache.commons.math.stat.descriptive.rank.MinTest) testPercentile(org.apache.commons.math.stat.descriptive.rank.PercentileTest) testNISTExample(org.apache.commons.math.stat.descriptive.rank.PercentileTest) testSingleton(org.apache.commons.math.stat.descriptive.rank.PercentileTest) testSetQuantile(org.apache.commons.math.stat.descriptive.rank.PercentileTest) 
test5(org.apache.commons.math.stat.descriptive.rank.PercentileTest) testHighPercentile(org.apache.commons.math.stat.descriptive.rank.PercentileTest) testSpecialValues(org.apache.commons.math.stat.descriptive.rank.PercentileTest) testNullEmpty(org.apache.commons.math.stat.descriptive.rank.PercentileTest) testEvaluation(org.apache.commons.math.stat.descriptive.rank.PercentileTest) testSpecialValues(org.apache.commons.math.stat.descriptive.summary.ProductTest) testMomentSmallSamples(org.apache.commons.math.stat.descriptive.summary.ProductTest) testSerialization(org.apache.commons.math.stat.descriptive.summary.ProductTest) testIncrementation(org.apache.commons.math.stat.descriptive.summary.ProductTest) testConsistency(org.apache.commons.math.stat.descriptive.summary.ProductTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.summary.ProductTest) testEvaluation(org.apache.commons.math.stat.descriptive.summary.ProductTest) testSpecialValues(org.apache.commons.math.stat.descriptive.summary.SumLogTest) testMomentSmallSamples(org.apache.commons.math.stat.descriptive.summary.SumLogTest) testSerialization(org.apache.commons.math.stat.descriptive.summary.SumLogTest) testIncrementation(org.apache.commons.math.stat.descriptive.summary.SumLogTest) testConsistency(org.apache.commons.math.stat.descriptive.summary.SumLogTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.summary.SumLogTest) testEvaluation(org.apache.commons.math.stat.descriptive.summary.SumLogTest) testSpecialValues(org.apache.commons.math.stat.descriptive.summary.SumSqTest) testMomentSmallSamples(org.apache.commons.math.stat.descriptive.summary.SumSqTest) testSerialization(org.apache.commons.math.stat.descriptive.summary.SumSqTest) testIncrementation(org.apache.commons.math.stat.descriptive.summary.SumSqTest) testConsistency(org.apache.commons.math.stat.descriptive.summary.SumSqTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.summary.SumSqTest) 
testEvaluation(org.apache.commons.math.stat.descriptive.summary.SumSqTest) testSpecialValues(org.apache.commons.math.stat.descriptive.summary.SumTest) testMomentSmallSamples(org.apache.commons.math.stat.descriptive.summary.SumTest) testSerialization(org.apache.commons.math.stat.descriptive.summary.SumTest) testIncrementation(org.apache.commons.math.stat.descriptive.summary.SumTest) testConsistency(org.apache.commons.math.stat.descriptive.summary.SumTest) testEqualsAndHashCode(org.apache.commons.math.stat.descriptive.summary.SumTest) testEvaluation(org.apache.commons.math.stat.descriptive.summary.SumTest) testChiSquareIndependence(org.apache.commons.math.stat.inference.ChiSquareFactoryTest) testChiSquareZeroCount(org.apache.commons.math.stat.inference.ChiSquareFactoryTest) testChiSquareLargeTestStatistic(org.apache.commons.math.stat.inference.ChiSquareFactoryTest) testChiSquare(org.apache.commons.math.stat.inference.ChiSquareFactoryTest) testChiSquareIndependence(org.apache.commons.math.stat.inference.ChiSquareTestTest) testChiSquareZeroCount(org.apache.commons.math.stat.inference.ChiSquareTestTest) testChiSquareLargeTestStatistic(org.apache.commons.math.stat.inference.ChiSquareTestTest) testChiSquare(org.apache.commons.math.stat.inference.ChiSquareTestTest) testOneSampleTTest(org.apache.commons.math.stat.inference.TTestFactoryTest) testSmallSamples(org.apache.commons.math.stat.inference.TTestFactoryTest) testTwoSampleTHeterscedastic(org.apache.commons.math.stat.inference.TTestFactoryTest) testOneSampleT(org.apache.commons.math.stat.inference.TTestFactoryTest) testPaired(org.apache.commons.math.stat.inference.TTestFactoryTest) testTwoSampleTHomoscedastic(org.apache.commons.math.stat.inference.TTestFactoryTest) testOneSampleTTest(org.apache.commons.math.stat.inference.TTestTest) testSmallSamples(org.apache.commons.math.stat.inference.TTestTest) testTwoSampleTHeterscedastic(org.apache.commons.math.stat.inference.TTestTest) 
testOneSampleT(org.apache.commons.math.stat.inference.TTestTest) testPaired(org.apache.commons.math.stat.inference.TTestTest) testTwoSampleTHomoscedastic(org.apache.commons.math.stat.inference.TTestTest) testOneSampleTTest(org.apache.commons.math.stat.inference.TestUtilsTest) testChiSquareIndependence(org.apache.commons.math.stat.inference.TestUtilsTest) testChiSquareZeroCount(org.apache.commons.math.stat.inference.TestUtilsTest) testSmallSamples(org.apache.commons.math.stat.inference.TestUtilsTest) testTwoSampleTHeterscedastic(org.apache.commons.math.stat.inference.TestUtilsTest) testChiSquareLargeTestStatistic(org.apache.commons.math.stat.inference.TestUtilsTest) testChiSquare(org.apache.commons.math.stat.inference.TestUtilsTest) testOneSampleT(org.apache.commons.math.stat.inference.TestUtilsTest) testPaired(org.apache.commons.math.stat.inference.TestUtilsTest) testTwoSampleTHomoscedastic(org.apache.commons.math.stat.inference.TestUtilsTest) testClear(org.apache.commons.math.stat.regression.SimpleRegressionTest) testCorr(org.apache.commons.math.stat.regression.SimpleRegressionTest) testNaNs(org.apache.commons.math.stat.regression.SimpleRegressionTest) testPerfect(org.apache.commons.math.stat.regression.SimpleRegressionTest) testSSENonNegative(org.apache.commons.math.stat.regression.SimpleRegressionTest) testPerfectNegative(org.apache.commons.math.stat.regression.SimpleRegressionTest) testNorris(org.apache.commons.math.stat.regression.SimpleRegressionTest) testInference(org.apache.commons.math.stat.regression.SimpleRegressionTest) testRandom(org.apache.commons.math.stat.regression.SimpleRegressionTest) testAdHocData(org.apache.commons.math.transform.FastCosineTransformerTest) testParameters(org.apache.commons.math.transform.FastCosineTransformerTest) testSinFunction(org.apache.commons.math.transform.FastCosineTransformerTest) testAdHocData(org.apache.commons.math.transform.FastFourierTransformerTest) 
testParameters(org.apache.commons.math.transform.FastFourierTransformerTest) testSinFunction(org.apache.commons.math.transform.FastFourierTransformerTest) testAdHocData(org.apache.commons.math.transform.FastSineTransformerTest) testParameters(org.apache.commons.math.transform.FastSineTransformerTest) testSinFunction(org.apache.commons.math.transform.FastSineTransformerTest) testGoldenRation(org.apache.commons.math.util.ContinuedFractionTest) testTransformDouble(org.apache.commons.math.util.DefaultTransformerTest) testTransformBigDecimal(org.apache.commons.math.util.DefaultTransformerTest) testTransformObject(org.apache.commons.math.util.DefaultTransformerTest) testTransformString(org.apache.commons.math.util.DefaultTransformerTest) testTransformInteger(org.apache.commons.math.util.DefaultTransformerTest) testTransformNull(org.apache.commons.math.util.DefaultTransformerTest) testBinomialCoefficient(org.apache.commons.math.util.MathUtilsTest) testFactorialFail(org.apache.commons.math.util.MathUtilsTest) testIndicatorDouble(org.apache.commons.math.util.MathUtilsTest) testRoundDouble(org.apache.commons.math.util.MathUtilsTest) testIndicatorInt(org.apache.commons.math.util.MathUtilsTest) testGcd(org.apache.commons.math.util.MathUtilsTest) testLcm(org.apache.commons.math.util.MathUtilsTest) testCosh(org.apache.commons.math.util.MathUtilsTest) testHash(org.apache.commons.math.util.MathUtilsTest) testSinh(org.apache.commons.math.util.MathUtilsTest) testRoundFloat(org.apache.commons.math.util.MathUtilsTest) testFactorial(org.apache.commons.math.util.MathUtilsTest) testSubAndCheck(org.apache.commons.math.util.MathUtilsTest) testBinomialCoefficientFail(org.apache.commons.math.util.MathUtilsTest) testCoshNaN(org.apache.commons.math.util.MathUtilsTest) testSubAndCheckErrorMessage(org.apache.commons.math.util.MathUtilsTest) test0Choose0(org.apache.commons.math.util.MathUtilsTest) testMulAndCheck(org.apache.commons.math.util.MathUtilsTest) 
testSignByte(org.apache.commons.math.util.MathUtilsTest) testSignLong(org.apache.commons.math.util.MathUtilsTest) testEquals(org.apache.commons.math.util.MathUtilsTest) testIndicatorFloat(org.apache.commons.math.util.MathUtilsTest) testIndicatorShort(org.apache.commons.math.util.MathUtilsTest) testIndicatorByte(org.apache.commons.math.util.MathUtilsTest) testIndicatorLong(org.apache.commons.math.util.MathUtilsTest) testSignDouble(org.apache.commons.math.util.MathUtilsTest) testAddAndCheck(org.apache.commons.math.util.MathUtilsTest) testSignInt(org.apache.commons.math.util.MathUtilsTest) testSinhNaN(org.apache.commons.math.util.MathUtilsTest) testSignFloat(org.apache.commons.math.util.MathUtilsTest) testSignShort(org.apache.commons.math.util.MathUtilsTest) testNextAfter(org.apache.commons.math.util.MathUtilsTest) testNextAfterSpecialCases(org.apache.commons.math.util.MathUtilsTest) testAdd1000(org.apache.commons.math.util.ResizableDoubleArrayTest) testConstructors(org.apache.commons.math.util.ResizableDoubleArrayTest) testAddElementRolling(org.apache.commons.math.util.ResizableDoubleArrayTest) testWithInitialCapacityAndExpansionFactor(org.apache.commons.math.util.ResizableDoubleArrayTest) testSetNumberOfElements(org.apache.commons.math.util.ResizableDoubleArrayTest) testWithInitialCapacity(org.apache.commons.math.util.ResizableDoubleArrayTest) testDiscard(org.apache.commons.math.util.ResizableDoubleArrayTest) testMutators(org.apache.commons.math.util.ResizableDoubleArrayTest) testSetElementArbitraryExpansion(org.apache.commons.math.util.ResizableDoubleArrayTest) testGetValues(org.apache.commons.math.util.ResizableDoubleArrayTest) testMinMax(org.apache.commons.math.util.ResizableDoubleArrayTest) testClear(org.apache.commons.math.util.TransformerMapTest) testContainsTransformer(org.apache.commons.math.util.TransformerMapTest) testTransformers(org.apache.commons.math.util.TransformerMapTest) testPutTransformer(org.apache.commons.math.util.TransformerMapTest) 
testContainsClass(org.apache.commons.math.util.TransformerMapTest) testClasses(org.apache.commons.math.util.TransformerMapTest) testRemoveTransformer(org.apache.commons.math.util.TransformerMapTest)
1. Field of the Invention

The present invention relates to a polymer/cholesteric liquid crystal dispersion which is utilized in display elements, image/information recording elements and spatial light modulators, to a method of producing the dispersion, and to a liquid crystal display element utilizing the dispersion.

2. Description of the Related Art

A cholesteric liquid crystal display element has, for example, the following characteristics: it has a memory ability which can retain a display without any power source, it provides a bright display because no polarizing plate is used, and it enables color display without using a color filter. Attention has therefore been focused on such display elements in recent years (see, for example, Japanese Patent Application Laid-Open (JP-A) No. 05-080303). A cholesteric liquid crystal, in particular, is made of rod-shaped molecules oriented spirally and reflects interference light having a wavelength which corresponds to the spiral pitch (called selective reflection). It therefore has the characteristic that bright color display is possible without using any color filter, by designing the spiral pitch to have a length corresponding to the wavelength of a red, green or blue color.

Cholesteric liquid crystal sealed into a cell constituted by a pair of substrates provided with electrodes is known to take any of three oriented states: planar (P) orientation, focal conic (F) orientation and homeotropic (H) orientation, as shown in FIG. 8A to FIG. 8C. In the figures, reference numeral 2 represents the cholesteric liquid crystal, 21 and 22 represent the pair of substrates, and 11 and 12 represent the electrodes. The P orientation is a state in which the spiral axis is oriented substantially perpendicular to the surface of the substrate, and this state provides selective reflection.
The F orientation is a state in which the spiral axis is oriented substantially parallel to the surface of the substrate, and light is transmitted in this state. The H orientation is an oriented state that appears when a sufficiently high voltage is applied between the pair of electrodes. In this state, the spiral is loosened, molecules are oriented perpendicular to the surface of the substrate, and light is transmitted. These three oriented states can be switched among each other by applying voltage between the electrodes. Accordingly, if a light absorber having a color such as a black color is disposed on the backside of the cell, it is possible to obtain a bright display colored with the selective reflection color during the P orientation and a dark display colored with the black color of the light absorber during the F or H orientation. Among the above orientation forms, both the P orientation and the F orientation can exist stably without using any power source. The utilization of this property makes it possible to attain a memory display in which a display is maintained without using any power source. On the other hand, a structure is known in which a polymer/cholesteric liquid crystal dispersion 4 obtained by dispersing a cholesteric liquid crystal 2 as particles in a polymer 1 is sandwiched between a pair of substrates 21 and 22 having electrodes 11 and 12, as shown in FIG. 9, instead of sealing the cholesteric liquid crystal directly between a pair of substrates having electrodes. In this case as well, the above display principle may be similarly utilized. The polymer/cholesteric liquid crystal dispersion is more resistant than ordinary liquid crystal cells to stresses applied from the outside. Therefore, the dispersion is not only resistant to the breakdown of a stored image but can also be apparently handled as a solid. 
As a result, there are advantages in that the polymer/cholesteric liquid crystal dispersion can be handled in, for example, a production process more easily than a liquid cholesteric liquid crystal and can be laminated on other functional films such as an optical conductor. As shown in FIG. 10, however, the reflection spectrum of the polymer/cholesteric liquid crystal dispersion is largely different from that of a liquid crystal cell, and the polymer/cholesteric liquid crystal dispersion has the following problems: (1) the spectrum of the polymer/cholesteric liquid crystal dispersion at the time of light reflection has significantly larger short-wavelength components than those of a liquid crystal cell, whereby only a color display having low color purity can be obtained and (2) the spectrum at the time of a dark display has large short-wavelength components, whereby only a display having a low contrast is obtained. There is also a problem in that (3) although a liquid crystal cell has a relatively stable reflectance at the time of a dark display (dark reflectance) over time, the polymer/cholesteric liquid crystal dispersion has a strong tendency toward an increase in this reflectance, which is accompanied by a display being made lighter in color over time. The above problems are characteristics found to be common to several methods of producing the polymer/cholesteric liquid crystal dispersion, such as a cholesteric liquid crystal microcapsule using a gelatin and gum arabic as its wall material, a cholesteric liquid crystal microcapsule using a polyurethane resin as its wall material, and a polymer/cholesteric liquid crystal dispersion obtained by dispersing cholesteric liquid crystals in an aqueous solution of a polyvinyl alcohol resin, followed by drying. 
Conventionally, the problems (1) and (2) have been attributed to the superposition of boundary light scattering caused by a difference in refractive index between the polymer and the liquid crystal, and have been considered to be unavoidable in polymer/cholesteric liquid crystal dispersions containing numerous cholesteric liquid crystal droplets in the direction of the film thickness. For this reason, as a measure for solving this problem, a method is disclosed in JP-A No. 6-160817, for example, in which a polymer/cholesteric liquid crystal dispersion is formed so as to contain only one liquid crystal droplet in the direction of the film thickness to decrease the influence of light scattering. Although contrast is certainly improved according to this method disclosed in JP-A No. 6-160817, the method has a problem in that the area percentage of the cholesteric liquid crystal is reduced, whereby reflectance is reduced. The problem (3) cannot be explained by interfacial light scattering, and neither the cause of nor a preventive measure for the problem has been known.
{ "pile_set_name": "USPTO Backgrounds" }
In carrying several fishing rod and reel assemblies at the same time, it frequently occurs that the fishing lines of the various rods may become entangled. Further, there is frequently no convenient place to store small items of equipment associated with the fishing rods such as fishing tackle. The prior art is already aware of U.S. Pat. No. 3,674,190 to Wright which carries plural fishing rod and reel assemblies along a backbone member. However, the device of Wright is unduly complicated for the purposes of the present invention and does not provide a place for storing fishing tackle.
{ "pile_set_name": "USPTO Backgrounds" }
Mossy fiber sprouting after recurrent seizures during early development in rats. In some children, epilepsy is a catastrophic condition, leading to significant intellectual and behavioral impairment, but little is known about the consequences of recurrent seizures during development. In the present study, we evaluated the effects of 15 daily pentylenetetrazol-induced convulsions in immature rats beginning at postnatal day (P) 1, 10, or 60. In addition, we subjected another group of P10 rats to twice daily seizures for 15 days. Both supragranular and terminal sprouting in the CA3 hippocampal subfield was assessed in Timm-stained sections by using a rating scale and density measurements. Prominent sprouting was seen in the CA3 stratum pyramidale layer in all rats having 15 daily seizures, regardless of the age when seizures began. Based on Timm staining in control P10, P20, and P30 rats, the terminal sprouting in CA3 appears to be new growth of axons and synapses as opposed to a failure of normal regression of synapses. In addition to CA3 terminal sprouting, rats having twice daily seizures had sprouting noted in the dentate supragranular layer, predominately in the inferior blade of the dentate, and had a decreased seizure threshold when compared with controls. Cell counting of dentate granule cells, CA3, CA1, and hilar neurons, with unbiased stereological methods demonstrated no differences from controls in rats with daily seizures beginning at P1 or P10, whereas adult rats with daily seizures had a significant decrease in CA1 neurons. Rats that received twice daily seizures on P10-P25 had an increase in dentate granule cells. This study demonstrates that, like the mature brain, immature animals have neuronal reorganization after recurrent seizures, with mossy fiber sprouting in both the CA3 subfield and supragranular region. In the immature brain, repetitive seizures also result in granule cell neurogenesis without loss of principal neurons. 
Although the relationship between these morphological changes after seizures during development and subsequent cognitive impairment is not yet clear, our findings indicate that during development recurrent seizures can result in significant alterations in cell number and axonal growth.
{ "pile_set_name": "PubMed Abstracts" }
Dariusz Koszykowski Dariusz Koszykowski (born January 22, 1972) is a Polish sprint canoer who competed in the early to mid-1990s. He won a bronze medal in the C-2 1000 m event at the 1994 ICF Canoe Sprint World Championships in Mexico City. Koszykowski also competed in two Summer Olympics, earning his best finish of fourth in the C-2 1000 m semifinal round at Barcelona in 1992. He did not advance to the final round in either Olympics. References Sports-reference.com profile Category:1972 births Category:Canoeists at the 1992 Summer Olympics Category:Canoeists at the 1996 Summer Olympics Category:Living people Category:Olympic canoeists of Poland Category:Polish male canoeists Category:People from Gryfino County Category:ICF Canoe Sprint World Championships medalists in Canadian Category:Sportspeople from West Pomeranian Voivodeship
{ "pile_set_name": "Wikipedia (en)" }
Linda & Richard Eyre: A big reason for not giving up on marriage All marriages go through their ups and downs, and we have become convinced that staying married and strengthening a marriage over time are often a matter of realizing just how high the stakes are and being committed enough that we fight our way through the tough times with never even the thought of giving up or throwing in the towel. Staying married and strengthening a marriage over time are often a matter of realizing just how high the stakes are and being committed enough that we fight our way through the tough times. We are continuing our focus on the topic of marriage this week because of large marriage seminars we have keynoted last week and this week and next week in Phoenix, Ogden and Minneapolis. All marriages go through their ups and downs, and we have become convinced that staying married and strengthening a marriage over time are often a matter of realizing just how high the stakes are and being committed enough that we fight our way through the tough times with never even the thought of giving up or throwing in the towel. And for those of us in The Church of Jesus Christ of Latter-day Saints, particularly those with temple marriages, the stakes are almost unimaginably high. Let us try a metaphor on you that may be helpful in grasping how much is at stake and in giving us all the long-range perspective that may get us through the rough patches that every marriage experiences. Imagine that you have started a business. Imagine that you put everything you had into the new company, all your money and all your borrowing power, and committed yourself to make that business work. There were some good years and some bad years, but now you have had the company for some time, and in some ways you are getting a little tired of it. And there are problems. You get audited by the IRS and have to pay some back taxes that you can’t afford. 
One of your employees has been embezzling from you and it is putting a strain on everything. And sometimes you feel that you don’t even really like the product you are producing and find the day-to-day process of running the business somewhat tiring and even boring. You keep reminding yourself how much you have invested in the company — both in money and in time, and so you keep at it, slogging away and doing your best. But it just seems like things aren’t getting any easier, and more and more often you have the feeling that you should just get out — sell the company and start over. Maybe you could build a better business the next time around, one you would enjoy more and that produced a better product. But then something happens. You have a vision one day of what the company could be worth if you held on to it and kept it going. This little epiphany overwhelms you and you fully and deeply believe that if you persevere and give it your all, the company will one day be worth not $1 million, not $100 million, not $1 billion, but $100 billion. Now, armed not only with the motivation of all that you have put into the business, but also the motivation of the unimaginable amount that it will one day be worth, you deepen your commitment and give all you have to making it succeed. The business, of course, is metaphorical for our marriages and for all we put into them, and for the problems and doubts and challenges we feel, and for the tendencies we sometimes have to feel like giving up or starting over with someone else, and for the inestimably huge worth that an eternal, celestial marriage will someday have. May we all draw from both of those motivations and make our marriage commitments absolutely firm and maximize and prioritize our constant efforts to strengthen and improve this most important union of eternity. Richard and Linda Eyre are New York Times No. 1 best-selling authors who lecture throughout the world on family-related topics. 
Visit them at www.EyresFreeBooks.com or www.valuesparenting.com. Their latest Deseret e-book is "On the Homefront."
{ "pile_set_name": "Pile-CC" }
More British women than men think a wife's role is to 'look after her husband' Women in Britain think a wife's main role is to 'look after her husband', a major new study has found. The research, by YouGov, surveyed 42,000 people in 24 countries on their attitudes to gender equality. It is the first time the market research company has collated public opinion data on the subject. One of the statements posed was that 'a wife's first role is to look after her husband'. In European countries, between seven and 18 per cent of people agreed, while in Indonesia and Malaysia more than two-thirds of the online respondents concurred. 1950s housewife Betty Draper in Mad Men. Credit: AP Britain was the only country where more women (17 per cent) than men (16 per cent) agreed with the statement. It was also one of only two countries where more women believed the statement 'a woman's place is in the home' (10 per cent, compared to eight per cent of men). The other was the US. Seventy-one per cent of Brits thought 'men should spend more time doing housework' compared to 78 per cent of respondents in Middle East and north African nations (a group of 18 nations known as MENA and including Morocco, Algeria, Egypt, Saudi Arabia, Iraq, Syria and Yemen). Overall, the survey found a startling disparity in how the world views gender equality. In Denmark, 85 per cent agreed that men and women are of equal intelligence. In MENA nations that number was just 48 per cent. Credit: YouGov Nordic countries had the most gender-positive attitudes, with Sweden, Denmark and Finland outperforming the other nations questioned. Britain ranked seventh, although YouGov pointed out that "there are two attitudes in which Britain falls behind". These are the statements: 'in the world as a whole women are an oppressed group' and 'creating more opportunity for women should be one of the world's top concerns'.
On the latter, Britain scored higher than only Morocco, Jordan, Thailand and Algeria with just 56 per cent of respondents agreeing. Added YouGov: "While Britain has a fundamentally progressive outlook to traditional gender roles around the home and in society, the issue is not thought of as such a high priority as it is in other developed countries, either for at home or abroad". Credit: YouGov Ninety-two per cent of British women thought the sexes should get equal pay, compared to 86 per cent of men. Thirty per cent of British women thought it would 'cause problems if a woman earns more money than her husband', compared to 18 per cent of British men. The statement "it is unattractive for women to express opinions in public" was disagreed with by 90 per cent of British women and 86 per cent of men – but almost a third (29 per cent) of women in France concurred with the statement. This figure was roughly one in 10 throughout the rest of Europe. In China, Indonesia and Thailand, more respondents agreed that 'men and women are equal' (84, 80 and 89 per cent respectively). In Britain, just 73 per cent agreed, although this can probably be explained by the differing standards of 'equality' in each country. Fifty-eight per cent of those in China thought the pop singer Beyonce was a positive role model. Credit: YouGov Worldwide, women earn 33 per cent less than men; 700 million women are victims of physical or sexual violence every year; and men own and manage around 70 per cent of all business. Of the 774 million illiterate adults, two-thirds are women. In Britain, the overall pay gap is around 19 per cent and two women are killed each week by a partner or former partner.
{ "pile_set_name": "Pile-CC" }
Elias Pettersson News Pettersson (knee) has been activated from IR and will return to the lineup on Sunday. Pettersson has missed the last five games with a knee issue but will return to the lineup vs. the Red Wings on Sunday afternoon. Pettersson has had a sensational rookie season, scoring 22 goals with 20 assists (42 points) in 38 games. Pettersson (knee) will not play Wednesday vs. the Oilers. Pettersson took part in the Canucks morning skate and is close to returning to the lineup but will miss his fourth straight game tonight. Pettersson said he's "feeling better every day" and hopes to play on Friday. Pettersson (lower-body) will not play on Saturday. Pettersson left Thursday's game in Montreal with a lower-body injury and will not play Saturday in Toronto. With Pettersson out, Brandon Sutter will skate with Nikolay Goldobin and Brock Boeser. Pettersson is day-to-day. Pettersson left Thursday's game with a lower-body injury and did not return. Pettersson's leg bent awkwardly in a collision with Canadiens' rookie Jesperi Kotkaniemi. The Canucks did not have much of an update postgame but said that they expect him to remain with them during their road-trip. Expect a more detailed update after Pettersson is re-evaluated. Pettersson (concussion) is expected to return to the Canucks lineup on Saturday. Pettersson missed the last six games with a concussion but will return tonight vs. the Penguins. Pettersson, who had eight points (5G / 3A) in his first five NHL games, will skate on a line with Brock Boeser and Nikolay Goldobin. Pettersson (concussion) is still not ready to return to the Canucks lineup. Pettersson (concussion) will not play on Wednesday. Pettersson continues to skate and appears close to returning to the Canucks lineup, but will not be available on Wednesday. With Pettersson still sidelined, Adam Gaudette will continue to centre the second line.
Pettersson (concussion) returned to the ice for Tuesday’s practice; will travel on the Canucks two-game road-trip. Pettersson has missed the last four games with a concussion that he suffered in Florida 10 days ago. Canucks head coach Travis Green said that Pettersson could play on the trip, which starts Wednesday in Vegas and wraps up the following night in Arizona. Pettersson had a sensational start to the season, scoring five goals with three assists (eight points) in his first five games. Pettersson is expected to miss at least 7-10 days with a concussion. Pettersson was forced to leave Saturday’s game in Florida after a controversial collision with Panthers’ defenseman Mike Matheson. Pettersson will remain with the team on their current road-trip—which goes through Pittsburgh on Tuesday and Winnipeg on Thursday—but will not see any game action for at least one week. Pettersson could be out longer, depending on how long he experiences concussion symptoms.
{ "pile_set_name": "Pile-CC" }
1. Field of the Invention The present invention relates generally to the field of corn breeding. In particular, the invention relates to inbred corn seed and plants of the variety designated I181664, and derivatives and tissue cultures thereof. 2. Description of Related Art The goal of field crop breeding is to combine various desirable traits in a single variety/hybrid. Such desirable traits include greater yield, better stalks, better roots, resistance to insecticides, herbicides, pests, and disease, tolerance to heat and drought, reduced time to crop maturity, better agronomic quality, higher nutritional value, and uniformity in germination times, stand establishment, growth rate, maturity, and fruit size. Breeding techniques take advantage of a plant's method of pollination. There are two general methods of pollination: a plant self-pollinates if pollen from one flower is transferred to the same or another flower of the same plant. A plant cross-pollinates if pollen comes to it from a flower on a different plant. Corn plants (Zea mays L.) can be bred by both self-pollination and cross-pollination. Both types of pollination involve the corn plant's flowers. Corn has separate male and female flowers on the same plant, located on the tassel and the ear, respectively. Natural pollination occurs in corn when wind blows pollen from the tassels to the silks that protrude from the tops of the ear shoot. Plants that have been self-pollinated and selected for type over many generations become homozygous at almost all gene loci and produce a uniform population of true breeding progeny, a homozygous plant. A cross between two such homozygous plants produces a uniform population of hybrid plants that are heterozygous for many gene loci. Conversely, a cross of two plants each heterozygous at a number of loci produces a population of hybrid plants that differ genetically and are not uniform. The resulting non-uniformity makes performance unpredictable.
The development of uniform corn plant hybrids requires the development of homozygous inbred plants, the crossing of these inbred plants, and the evaluation of the crosses. Pedigree breeding and recurrent selection are examples of breeding methods used to develop inbred plants from breeding populations. Those breeding methods combine the genetic backgrounds from two or more inbred plants or various other broad-based sources into breeding pools from which new inbred plants are developed by selfing and selection of desired phenotypes. The new inbreds are crossed with other inbred plants and the hybrids from these crosses are evaluated to determine which of those have commercial potential. The pedigree breeding method involves crossing two genotypes. Each genotype can have one or more desirable characteristics lacking in the other; or, each genotype can complement the other. If the two original parental genotypes do not provide all of the desired characteristics, other genotypes can be included in the breeding population. Superior plants that are the products of these crosses are selfed and selected in successive generations. Each succeeding generation becomes more homogeneous as a result of self-pollination and selection. Typically, this method of breeding involves five or more generations of selfing and selection: S1→S2; S2→S3; S3→S4; S4→S5, etc. After at least five generations, the inbred plant is considered genetically pure. Backcrossing can also be used to improve an inbred plant. Backcrossing transfers a specific desirable trait from one inbred or non-inbred source to an inbred that lacks that trait. This can be accomplished, for example, by first crossing a superior inbred (A) (recurrent parent) to a donor inbred (non-recurrent parent), which carries the appropriate locus or loci for the trait in question.
The progeny of this cross are then mated back to the superior recurrent parent (A) followed by selection in the resultant progeny for the desired trait to be transferred from the non-recurrent parent. After five or more backcross generations with selection for the desired trait, the progeny are heterozygous for loci controlling the characteristic being transferred, but are like the superior parent for most or almost all other loci. The last backcross generation would be selfed to give pure breeding progeny for the trait being transferred. A single cross hybrid corn variety is the cross of two inbred plants, each of which has a genotype which complements the genotype of the other. The hybrid progeny of the first generation is designated F1. Typically, F1 hybrids are more vigorous than their inbred parents. This hybrid vigor, or heterosis, is manifested in many polygenic traits, including markedly improved yields, better stalks, better roots, better uniformity and better insect and disease resistance. In the development of hybrids only the F1 hybrid plants are typically sought. An F1 single cross hybrid is produced when two inbred plants are crossed. A double cross hybrid is produced from four inbred plants crossed in pairs (A×B and C×D) and then the two F1 hybrids are crossed again (A×B)×(C×D). The development of a hybrid corn variety involves three steps: (1) the selection of plants from various germplasm pools; (2) the selfing of the selected plants for several generations to produce a series of inbred plants, which, although different from each other, each breed true and are highly uniform; and (3) crossing the selected inbred plants with unrelated inbred plants to produce the hybrid progeny (F1). During the inbreeding process in corn, the vigor of the plants decreases. Vigor is restored when two unrelated inbred plants are crossed to produce the hybrid progeny (F1).
An important consequence of the homozygosity and homogeneity of the inbred plants is that the hybrid between any two inbreds is always the same. Once the inbreds that give a superior hybrid have been identified, hybrid seed can be reproduced indefinitely as long as the homogeneity of the inbred parents is maintained. Conversely, much of the hybrid vigor exhibited by F1 hybrids is lost in the next generation (F2). Consequently, seed from hybrid varieties is not used for planting stock. It is not generally beneficial for farmers to save seed of F1 hybrids. Rather, farmers purchase F1 hybrid seed for planting every year. North American farmers plant tens of millions of acres of corn at the present time and there are extensive national and international commercial corn breeding programs. A continuing goal of these corn breeding programs is to develop corn hybrids that are based on stable inbred plants and have one or more desirable characteristics. To accomplish this goal, the corn breeder must select and develop superior inbred parental plants. In one aspect, the present invention provides a corn plant of the variety designated I181664. Also provided are corn plants having all the physiological and morphological characteristics of the inbred corn variety I181664. The inbred corn plant of the invention may further comprise, or have, a cytoplasmic or nuclear factor that is capable of conferring male sterility or otherwise preventing self-pollination, such as by self-incompatibility. Parts of the corn plant of the present invention are also provided, for example, pollen obtained from an inbred plant and an ovule of the inbred plant. The invention also concerns seed of the corn plant I181664. A sample of this seed has been deposited under ATCC Accession No. PTA-3226. The inbred corn seed of the invention may be provided as an essentially homogeneous population of inbred corn seed of the corn plant designated I181664. 
Essentially homogeneous populations of inbred seed are those that consist essentially of the particular inbred seed, and are generally free from substantial numbers of other seed, so that the inbred seed forms between about 90% and about 100% of the total seed, and preferably, between about 95% and about 100% of the total seed. Most preferably, an essentially homogeneous population of inbred corn seed will contain between about 98.5%, 99%, 99.5% and about 99.9% of inbred seed, as measured by seed grow outs. Therefore, in the practice of the present invention, inbred seed generally forms at least about 97% of the total seed. However, even if a population of inbred corn seed was found, for some reason, to contain about 50%, or even about 20% or 15% of inbred seed, this would still be distinguished from the small fraction (generally less than 2% and preferably less than 1%) of inbred seed that may be found within a population of hybrid seed, e.g., within a commercial bag of hybrid seed. In such a bag of hybrid seed offered for sale, Federal regulations require that the hybrid seed be at least about 95% of the total seed, or be labeled as a mixture. In the most preferred practice of the invention, the female inbred seed that may be found within a bag of hybrid seed will be about 1% of the total seed, or less, and the male inbred seed that may be found within a bag of hybrid seed will be negligible, i.e., will be on the order of about a maximum of 1 per 100,000, and usually less than this value. The population of inbred corn seed of the invention can further be particularly defined as being essentially free from hybrid seed. The inbred seed population may be separately grown to provide an essentially homogeneous population of inbred corn plants designated I181664. In another aspect of the invention, single locus converted plants of variety I181664 are provided. The single transferred locus may preferably be a dominant or recessive allele. 
Preferably, the single transferred locus will confer such traits as male sterility, yield stability, waxy starch, yield enhancement, industrial usage, herbicide resistance, insect resistance, resistance to bacterial, fungal, nematode or viral disease, male fertility, and enhanced nutritional quality. The single locus may be a naturally occurring maize gene introduced into the genome of the variety by backcrossing, a natural or induced mutation, or a transgene introduced through genetic transformation techniques. When introduced through transformation, a single locus may comprise one or more transgenes integrated at a single chromosomal location. In yet another aspect of the invention, an inbred corn plant of the variety designated I181664 is provided, wherein a cytoplasmically-inherited trait has been introduced into said inbred plant. Such cytoplasmically-inherited traits are passed to progeny through the female parent in a particular cross. An exemplary cytoplasmically-inherited trait is the male sterility trait. Cytoplasmic-male sterility (CMS) is a pollen abortion phenomenon determined by the interaction between the genes in the cytoplasm and the nucleus. Alteration in the mitochondrial genome and the lack of restorer genes in the nucleus will lead to pollen abortion. With either a normal cytoplasm or the presence of restorer gene(s) in the nucleus, the plant will produce pollen normally. A CMS plant can be pollinated by a maintainer version of the same variety, which has a normal cytoplasm but lacks the restorer gene(s) in the nucleus, and continue to be male sterile in the next generation. The male fertility of a CMS plant can be restored by a restorer version of the same variety, which must have the restorer gene(s) in the nucleus. With the restorer gene(s) in the nucleus, the offspring of the male-sterile plant can produce normal pollen grains and propagate. 
A cytoplasmically inherited trait may be a naturally occurring maize trait or a trait introduced through genetic transformation techniques. In another aspect of the invention, a tissue culture of regenerable cells of a plant of variety I181664 is provided. The tissue culture will preferably be capable of regenerating plants capable of expressing all of the physiological and morphological characteristics of the variety, and of regenerating plants having substantially the same genotype as other plants of the variety. Examples of some of the physiological and morphological characteristics of the variety I181664 include characteristics related to yield, maturity, and kernel quality, each of which is specifically disclosed herein. The regenerable cells in such tissue cultures will preferably be derived from embryos, meristematic cells, immature tassels, microspores, pollen, leaves, anthers, roots, root tips, silk, flowers, kernels, ears, cobs, husks, or stalks, or from callus or protoplasts derived from those tissues. Still further, the present invention provides corn plants regenerated from the tissue cultures of the invention, the plants having all the physiological and morphological characteristics of variety I181664. In yet another aspect of the invention, processes are provided for producing corn seeds or plants, which processes generally comprise crossing a first parent corn plant with a second parent corn plant, wherein at least one of the first or second parent corn plants is a plant of the variety designated I181664. These processes may be further exemplified as processes for preparing hybrid corn seed or plants, wherein a first inbred corn plant is crossed with a second corn plant of a different, distinct variety to provide a hybrid that has, as one of its parents, the inbred corn plant variety I181664. In these processes, crossing will result in the production of seed. The seed production occurs regardless of whether the seed is collected or not. 
In a preferred embodiment of the invention, the first step in "crossing" comprises planting, preferably in pollinating proximity, seeds of a first and second parent corn plant, and preferably, seeds of a first inbred corn plant and a second, distinct inbred corn plant. Where the plants are not in pollinating proximity, pollination can nevertheless be accomplished by transferring a pollen or tassel bag from one plant to the other as described below. A second step comprises cultivating or growing the seeds of said first and second parent corn plants into plants that bear flowers. Corn bears both male flowers (tassels) and female flowers (silks) in separate anatomical structures on the same plant. A third step comprises preventing self-pollination of the plants, i.e., preventing the silks of a plant from being fertilized by any plant of the same variety, including the same plant. This is preferably done by emasculating the male flowers of the first or second parent corn plant (i.e., treating or manipulating the flowers so as to prevent pollen production, in order to produce an emasculated parent corn plant). Self-incompatibility systems are also used in some hybrid crops for the same purpose. Self-incompatible plants still shed viable pollen and can pollinate plants of other varieties but are incapable of pollinating themselves or other plants of the same variety. A fourth step comprises allowing cross-pollination to occur between the first and second parent corn plants. When the plants are not in pollinating proximity, this is done by placing a bag, usually paper or glassine, over the tassels of the first plant and another bag over the silks of the incipient ear on the second plant. The bags are left in place for at least 24 hours. 
Since pollen is viable for less than 24 hours, this assures that the silks are not pollinated from other pollen sources, that any stray pollen on the tassels of the first plant is dead, and that the only pollen transferred comes from the first plant. The pollen bag over the tassel of the first plant is then shaken vigorously to enhance release of pollen from the tassels, and the shoot bag is removed from the silks of the incipient ear on the second plant. Finally, the pollen bag is removed from the tassel of the first plant and is placed over the silks of the incipient ear of the second plant, shaken again and left in place. Yet another step comprises harvesting the seeds from at least one of the parent corn plants. The harvested seed can be grown to produce a corn plant or hybrid corn plant. The present invention also provides corn seed and plants produced by a process that comprises crossing a first parent corn plant with a second parent corn plant, wherein at least one of the first or second parent corn plants is a plant of the variety designated I181664. In one embodiment of the invention, corn seed and plants produced by the process are first generation (F1) hybrid corn seed and plants produced by crossing an inbred in accordance with the invention with another, distinct inbred. The present invention further contemplates seed of an F1 hybrid corn plant. Therefore, certain exemplary embodiments of the invention provide an F1 hybrid corn plant and seed thereof. An example of such a hybrid which can be produced with the variety designated I181664 is the hybrid designated 9901269. In still yet another aspect of the invention, the genetic complement of the corn plant variety designated I181664 is provided. The phrase "genetic complement" is used to refer to the aggregate of nucleotide sequences, the expression of which sequences defines the phenotype of, in the present case, a corn plant, or a cell or tissue of that plant. 
A genetic complement thus represents the genetic make up of an inbred cell, tissue or plant, and a hybrid genetic complement represents the genetic make up of a hybrid cell, tissue or plant. The invention thus provides corn plant cells that have a genetic complement in accordance with the inbred corn plant cells disclosed herein, and plants, seeds and diploid plants containing such cells. Plant genetic complements may be assessed by genetic marker profiles, and by the expression of phenotypic traits that are characteristic of the expression of the genetic complement, e.g., isozyme typing profiles. Thus, such corn plant cells may be defined as having an SSR profile in accordance with the profile shown in Table 6, or a genetic isozyme typing profile in accordance with the profile shown in Table 7, or having both an SSR profile and an isozyme typing profile in accordance with the profiles shown in Table 6 and Table 7. It is understood that variety I181664 could also be identified by other types of genetic markers such as, for example, Simple Sequence Length Polymorphisms (SSLPs) (Williams et al., 1990), Randomly Amplified Polymorphic DNAs (RAPDs), DNA Amplification Fingerprinting (DAF), Sequence Characterized Amplified Regions (SCARs), Arbitrary Primed Polymerase Chain Reaction (AP-PCR), Amplified Fragment Length Polymorphisms (AFLPs) (EP 534 858, specifically incorporated herein by reference in its entirety), and Single Nucleotide Polymorphisms (SNPs) (Wang et al., 1998). In still yet another aspect, the present invention provides hybrid genetic complements, as represented by corn plant cells, tissues, plants, and seeds, formed by the combination of a haploid genetic complement of an inbred corn plant of the invention with a haploid genetic complement of a second corn plant, preferably, another, distinct inbred corn plant. 
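As an aside on how such marker profiles are used in practice, identifying a variety by its SSR profile amounts to comparing allele calls marker by marker against the deposited reference. The sketch below only illustrates the idea; the marker names and allele sizes are invented for illustration and are not taken from Table 6:

```python
# Hypothetical SSR profiles: marker name -> allele size in base pairs.
# These marker names and values are invented for illustration; the real
# profile for variety I181664 is the one disclosed in Table 6.
reference = {"phi029": 154, "umc1545": 221, "bnlg161": 98, "phi127": 112}

def profile_match(reference, candidate):
    """Fraction of shared markers whose allele call matches the reference."""
    shared = [m for m in reference if m in candidate]
    if not shared:
        return 0.0
    hits = sum(1 for m in shared if reference[m] == candidate[m])
    return hits / len(shared)

same = dict(reference)               # identical genotype
off = dict(reference, bnlg161=104)   # one marker differs

print(profile_match(reference, same))  # 1.0
print(profile_match(reference, off))   # 0.75
```

In practice a match threshold somewhat below 100% may be tolerated to allow for scoring error; the exact threshold is a policy choice, not something the sketch decides.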
In another aspect, the present invention provides a corn plant regenerated from a tissue culture that comprises a hybrid genetic complement of this invention. In still yet another aspect, the present invention provides a method of producing an inbred corn plant derived from the corn variety I181664, the method comprising the steps of: (a) preparing a progeny plant derived from corn variety I181664, wherein said preparing comprises crossing a plant of the corn variety I181664 with a second corn plant, and wherein a sample of the seed of corn variety I181664 has been deposited under ATCC Accession No. PTA-3226; (b) crossing the progeny plant with itself or a second plant to produce a seed of a progeny plant of a subsequent generation; (c) growing a progeny plant of a subsequent generation from said seed of a progeny plant of a subsequent generation and crossing the progeny plant of a subsequent generation with itself or a second plant; and (d) repeating steps (b) and (c) for an additional 3-10 generations to produce an inbred corn plant derived from the corn variety I181664. In the method, it may be desirable to select particular plants resulting from step (c) for continued crossing according to steps (b) and (c). By selecting plants having one or more desirable traits, an inbred corn plant derived from the corn variety I181664 is obtained which possesses some of the desirable traits of corn variety I181664 as well as potentially other selected traits.
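For intuition on the backcrossing recited above: on average, each backcross to the recurrent parent halves the remaining donor-genome fraction, so after an F1 cross and n backcrosses the expected recurrent-parent fraction is 1 - (1/2)^(n+1). A minimal sketch of that expectation (it ignores selection and linkage drag, which shift the real numbers):

```python
def recurrent_fraction(n_backcrosses):
    """Expected fraction of the recurrent parent's genome after an F1
    cross followed by n backcrosses: 1 - (1/2)**(n + 1)."""
    return 1 - 0.5 ** (n_backcrosses + 1)

for n in range(6):
    print(n, recurrent_fraction(n))
# After 5 backcrosses the progeny average ~98.4% recurrent parent,
# which is why backcross programs of several generations recover a
# near-isogenic version of the original variety.
```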
Book Discussion on Thoughts Without Cigarettes: A Memoir Oscar Hijuelos talked about his book, Thoughts Without Cigarettes: A Memoir, in conversation with Carolyn Curiel. Oscar Hijuelos is the author of eight novels and was the first Latino to win the Pulitzer Prize for fiction, for The Mambo Kings Play Songs of Love. He also has received the Rome Prize and grants from the National Endowment for the Arts and the Guggenheim Foundation. Mr. Hijuelos responded to questions from members of the audience. This was an event in the University Center’s Lake Room at the 2011 Chicago Tribune Printers Row Lit Fest.
Type 93 surface-to-air missile The Type 93 is a surface-to-air missile used by the Japan Ground Self-Defense Force. It is the vehicle-borne version of the Type 91 missile. It is known in JSDF ranks as the Closed Arrow. Description It was first deployed in 1993 to replace the L-90 35mm Anti-Aircraft Twin Cannons then in JGSDF service. It is typically deployed on a modified Kōkidōsha launcher (a military version of the Toyota Mega Cruiser) with a total of eight missiles ready to fire. Operation The Type 93 is a vast improvement over the L-90, as its infrared homing allows it to track and shoot down enemy aircraft. See also Type 91 surface-to-air missile References External links Official JGSDF Page Category:Weapons and ammunition introduced in 1993 Category:Surface-to-air missiles of Japan
Barbezières Barbezières is a commune in the Charente department in southwestern France. Population See also Communes of the Charente department References INSEE Category:Communes of Charente
Q: Alternative to Publishing on WAS in RAD I am working with WAS on RAD 7.5, and publishing and making changes is very slow and frustrating. Is there a faster alternative, such as developing in Eclipse with another server and eventually running it on the WAS/RAD system? I heard somewhere that we can use something like a dump of MySQL for this, but I have no idea how. A: You can try the WAS development profile or the Liberty profile. If you don't have enough money :) you can use Tomcat and an embedded EJB container as an alternative; it would be faster, but you will need to take care while packaging for Tomcat versus for WebSphere.
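One caveat: the Liberty profile mentioned in the answer postdates RAD 7.5, so treat it as a sketch of the lighter-weight route rather than something RAD 7.5 ships with. A minimal Liberty server.xml might look roughly like this (the feature and port values here are illustrative, not a recommendation):

```xml
<!-- Minimal WebSphere Liberty server configuration (illustrative values). -->
<server description="lightweight dev server">
    <featureManager>
        <!-- Enable only what the app needs; smaller feature sets start faster. -->
        <feature>servlet-3.0</feature>
    </featureManager>
    <httpEndpoint id="defaultHttpEndpoint"
                  host="localhost"
                  httpPort="9080"
                  httpsPort="9443"/>
</server>
```

Applications copied into the server's dropins directory are detected and redeployed automatically, which is the main speed win over full-profile WAS publishing.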
1. Field of the Invention The present invention relates to a cathode ray tube and, in particular, a cathode ray tube for applying a predetermined voltage to a corresponding electrode via a resistor unit which is disposed in the neck of the cathode ray tube. 2. Description of the Related Art Generally, a color CRT is known as a CRT which is supplied with a high voltage. The color CRT usually comprises an envelope 3 comprising a panel 1, a funnel 2 and a neck 6, as shown in FIG. 1. A phosphor screen (target) 5 is formed on the inner surface of the panel 1, and a shadow mask 4 is provided opposite to the phosphor screen (target) 5, which is composed of a three-color phosphor layer for emitting R (red), B (blue) and G (green) light. At a time of use, a deflection yoke 20 is mounted near a boundary between the funnel 2 and the neck 6. An electron gun assembly 7 is located in the neck 6 to emit three electron beams 9. The electron gun assembly 7 is composed of a plurality of electrodes, such as a cathode serving as an electron beam generating section, an electrode for controlling the generation of the electron beams 9 emitted from the cathode, and an electrode for focusing the electron beams toward the phosphor screen at accelerated speed. It is necessary to supply a high anode voltage of about 25 to 30 KV and a medium voltage of about 5 to 8 KV (focusing voltage) to the corresponding electrodes. A voltage which is to be applied to the associated electrode in the electron gun assembly 7 is applied there via a corresponding stem pin 17 which extends through a stem section 6a of the neck 6 in airtight fashion, noting that the anode voltage is applied via an anode terminal 8 and an inner conductive film 16 which is formed on the inner surface of the funnel 2. Supplying a medium voltage, such as a focusing voltage, via the stem section 6a poses an "arcing or flashover" problem at a supply section such as the socket which is connected to the stem pin 17. 
This causes a complex structure. A way for obtaining a requisite medium voltage through the division of the anode voltage, which is made by a resistor unit located within the CRT, is disclosed in Japanese Utility Model Disclosure (KOKAI) Nos. 48-21561 and 55-38484 and U.S. Pat. Nos. 3,932,786 and 4,413,298. However, there is no adequate space for the resistor unit to be arranged within the CRT. For this reason, the resistor unit is located in a small space in the neck 6 such that it is situated near the electron gun assembly 7. FIG. 2 shows one form of an electron gun assembly having a resistor unit arranged in it. In the arrangement shown in FIG. 2, reference numeral 7 denotes an electron gun assembly; 10a, 10b, 10c (10b, 10c hidden from view in FIG. 2), heaters; 11a, 11b, 11c (11b, 11c hidden from view in FIG. 2), cathodes; G1, G2, G3, G4 and G5, first, second, third, fourth and fifth grids, respectively; 12, a shield cup; 13a, 13b, a pair of insulating support rods; 15, a spacer; 16, an inner conductive film; and 17, a stem pin. In the electron gun assembly 7, a resistor unit 14 is located at the back surface of the insulating support rod 13a. The resistor unit 14 is formed as shown in FIG. 3. In the arrangement shown in FIG. 3, 18 denotes an insulating board; 19, a high resistance section; T1 . . . T4, voltage pickup terminals; and CN, a connector. If the resistor unit 14 is arranged in a narrow space in the neck 6 such that it is located near the electron gun assembly 7, a relatively complex potential distribution is created in the space in the neck of the CRT, which is caused by a potential on each electrode in the electron gun assembly 7 and on the inner conductive film 16. For this reason, a problem occurs as set out below. 
That is, since the surface of the neck 6 and those of the insulating support rods 13a, 13b and resistor unit 14 are formed with an insulating material, electrons leaking from an "electrode side" opening of the electron gun assembly 7 as well as electrons emitted from the electrode in the presence of a strong electric field are accelerated from a low to a high potential zone. Upon the collision of electrons on the insulating material as set forth above, many secondary electrons are generated, moving toward the high potential section while increasing in number. As a result, a greater discharge occurs, sometimes destroying a drive circuit for the CRT, the resistor unit 14, insulating support rods 13a, 13b and so on. Even in the case where no greater discharge takes place, a tiny steady discharge may occur between the aforementioned material and the electrode. At that time, bluish white light is observed as a discharge, causing a variation in the potential on the insulating material as set forth above and in a potential distribution around the insulating material. This variation exerts an adverse effect upon an electron lens, thus degrading an electron beam spot configuration on the phosphor screen 5 and hence reducing image quality. As a solution to the problem as set out above, Japanese Patent Disclosure (KOKAI) 57-119437 discloses the technique of using a metal ring for surrounding such an insulating support rod against a low or a medium potential electrode. Even in the arrangement shown in FIG. 2, a metal ring SR is placed at that location of the third grid G3 as near to an electrode pickup terminal T3 as possible to surround the insulating support rods 13a, 13b and resistor unit 14 with it. The metal ring SR is heated to form an evaporated matter on the inner wall of the neck 6. In FIG. 2, reference numeral 101 denotes a metal evaporation film, that is the evaporated matter. 
In the arrangement using such a technique, an electric field still stays strong in the area of the resistor unit 14 which is situated near an electrode pickup terminal T2. A tiny discharge is developed between an involved location near to the electrode pickup terminal T2 and the metal deposition film 101 on the inner wall of the neck, and between that location and the insulating support rods 13a, 13b, causing a variation in the division voltage on the resistor unit 14. Because of the variation of the division voltage, the electron lens fails to exhibit its intended performance. It is, therefore, not possible to prevent deterioration of the electron beam spot pattern on the phosphor screen 5 and of the image quality. In the case where a given voltage is applied to a corresponding electrode on the electron gun assembly 7 through a given division resistance on the resistor unit 14, which is located near the electron gun assembly 7 in the narrow space of the neck 6, even if such a metal ring SR is used so as to prevent the occurrence of a discharge in the neck 6, the benefit is reduced when the resistor unit's voltage pickup terminal is higher in voltage than the metal ring SR, so that complete prevention of a discharge in the neck 6 of the color CRT, and hence normal operation of the color CRT, cannot be achieved.
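For a rough sense of the magnitudes involved, the resistor unit acts as a high-value voltage divider: the anode potential is dropped across a resistance chain and intermediate taps feed the medium-voltage (focus) electrodes. The numbers below are hypothetical, chosen only so that an anode voltage in the stated 25 to 30 KV range yields a focus voltage in the stated 5 to 8 KV range while keeping the bleeder current tiny:

```python
# Illustrative divider: all component values are hypothetical.
anode_voltage = 25_000.0   # volts, at the anode-side end of the chain
r_upper = 760e6            # ohms, anode end -> focus tap
r_lower = 240e6            # ohms, focus tap -> low-voltage end

r_total = r_upper + r_lower
focus_voltage = anode_voltage * r_lower / r_total   # standard divider ratio
bleeder_current = anode_voltage / r_total           # current through the chain

print(f"focus tap: {focus_voltage:.0f} V")                 # 6000 V
print(f"bleeder current: {bleeder_current * 1e6:.0f} uA")  # 25 uA
```

The gigaohm-scale resistances are what make the arrangement sensitive: a discharge current of even a few microamperes at a tap is comparable to the bleeder current itself, which is consistent with the division-voltage variation described above.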
The inherited bone marrow failure syndromes (IBMFS) are a heterogeneous group of disorders characterized by marrow failure, congenital anomalies, and predisposition to myelodysplastic syndromes (MDS). Studies of IBMFS to date have largely focused on pediatric patients, but these are increasingly recognized in adults presenting with cytopenia(s). There are no paradigms defining the optimal care of adult patients with IBMFS. A significant subset of patients fail to fall within the known categories of IBMFS. The diagnosis and medical care of IBMFS patients are limited by our lack of knowledge regarding genetic causes. We will pursue complementary bidirectional studies moving between the pediatric/adult clinics and the laboratory to investigate the clinical features, genetic etiology, and pathophysiology of IBMFS. We will also exploit recent technological advances as a platform to develop novel diagnostic tests for these syndromes. The identification of molecular pathways contributing to marrow failure should provide insights into global molecular pathways regulating hematopoiesis as well as inform our understanding of acquired marrow failure and MDS in the general population. PUBLIC HEALTH RELEVANCE: Understanding the genetic pathways contributing to marrow failure will allow the development of new diagnostic tests and rationally designed medical therapies. Elucidating the molecular mechanisms underlying inherited marrow failure will provide insights into marrow failure arising in the general population.
November 19, 2017
Canada Tour 2018
We are happy to announce that we will embark on a cross-Canada tour in May 2018 with concerts all across the country. The tour will be supported by the Goethe-Institut. See our Calendar for details.

June 1, 2014
Welcome to the New Subtone Website
Check out the new multimedia section with audio, video and photos. In the store section you will be able to buy CDs, scores and posters.

March 15, 2014
New German Jazz Competition Mannheim
Today, Subtone will compete as one of 3 finalists at the New German Jazz Competition in Mannheim, Germany.

February 7, 2014
New CD "Roswitha's Revenge" Out Now
Roswitha's Revenge, Subtone's 4th album with 9 brand new compositions, has been released today on Laika Records. It is available for purchase in the store section.

Subtone stands in the forefront of today’s young innovative Jazz ensembles. Founded in 2005 in Berlin, its members now live on both sides of the Atlantic, in St. John’s, NL, Berlin and Cologne. Through innumerable concerts at home and abroad, as well as appearances at festivals like the Bohemia Jazz Fest in the Czech Republic and Jazz Baltica in Salzau, Germany, Subtone quickly established themselves as one of the most sought-after jazz ensembles in Europe. After three acclaimed albums, in early 2014 Subtone released their 4th album “Roswitha’s Revenge” on Laika Records. Subtone is the winner of the Jury and Audience Award of the International Jazz Competition “Tremplin Jazz d’Avignon” in France. The individual members are all accomplished performers. Collectively, they have won prestigious awards like the US National Trumpet Competition (Magnus Schriefl), the Bass Competition of the International Bass Convention (Matthias Pichler) and the Stingray Rising Star Award (Florian Hoefner) and have collaborated with artists and groups like the Kurt Rosenwinkel Trio, Seamus Blake, Joe Lovano, Dave Liebman, Randy Brecker and the European Jazz Orchestra. 
With their own imaginative compositions, the five artists offer an inexhaustible selection, from the voluminous big-band sound to a bare-essential chamber music approach. In addition to the apparent mastery over their instruments, it is above all the implicitness with which they communicate that is so striking. What makes Subtone stand out from other ensembles is the impressive artistic connection among the five musicians. The listener perceives this intense communication and fundamental understanding among all members of the band in the very first notes. Malte Dürrschnabel Malte Dürrschnabel studied saxophone at the University of the Arts Berlin and at the Royal Conservatories in The Hague and Brussels. He's now a freelance musician and music teacher. His achievements include a six-month appointment as guest professor at the Pontifica Universidad Javeriana in Bogotá, Colombia and the award as best instrumentalist at the international jazz competition "Tremplin Jazz d’Avignon" in France. Malte is a sought-after lead alto player and woodwind specialist and regularly works with ensembles like the WDR Big Band Cologne, the HR Big Band Frankfurt and the Glenn Miller Orchestra. In concerts around the world on 5 different continents he shared the stage with artist like Al Porcino, Lalo Schifrin, Benny Golson, Ack van Rooyen and Theo Bleckmann. He also appears on various commercial recordings and broadcast productions. Magnus Schriefl Magnus Schriefl belongs to the outstanding trumpet players of the young generation. His merits as a performer include an invitation as featured soloist to the Jazz Baltica Jazz Festival in Salzau, Germany, appearances with the European Jazz Orchestra, the Andromeda Mega Express Orchestra and concerts with artists like Randy Brecker, Dave Liebman and Seamus Blake. He is the winner of the 2011 National Trumpet Competition in the USA. Magnus started playing the trumpet when he was five years old. 
At age 15 he joined the German National Youth Jazz Orchestra. He studied at the Conservatorium van Amsterdam, at the Conservatoire National Supérieure de Musique et de Danse de Paris, at the Jazz-Institut Berlin and at the Manhattan School of Music in New York City where he completed his Master’s degree in 2012. Florian Hoefner Florian Hoefner has been able to establish himself as a unique voice in modern jazz, both as a pianist and composer. Hailed as a “cerebral and harmonically daring pianist […] reaching toward new sonic territory” by Downbeat and a “composer-bandleader of insightful resolve” by the New York Times, Mr. Hoefner has made his mark as an inventive creator of exciting small group jazz. With his working quartet, the Florian Hoefner Group, he has released 3 albums on Origin Records that have received rave reviews around the globe. Described as a “starkly picturesque album [...] that makes profound statements from quiet moments” by Downbeat and as a “total experience” by the New York Jazz Record, his latest release, “Luminosity,” featuring tenor saxophonist Seamus Blake, has been his most ambitious project so far. With the group, he maintains a busy touring schedule that has included over 100 dates in Europe and North America to date. After their performance at the Montréal Jazz Festival 2015, Florian was awarded the Stingray Rising Star Award. Florian’s work as a sideman led to numerous additional CD releases including a collaboration with guitarist Kurt Rosenwinkel on Fresh Sounds Records. He has shared the stage with the likes of Joe Lovano, Seamus Blake, Rich Perry, John Riley and Mark Nuccio. A two-time winner of the ASCAP Young Jazz Composer Award, Florian has also contributed compositions and arrangements to many commercial albums by artists like Till Brönner, Jasmin Tabatabai and Peter Fessler. 
His big band compositions have been performed by the New York Jazz Orchestra, the Lucerne Jazz Orchestra, the German Youth Jazz Orchestra and the DanJam Orchestra. Florian initially began his musical studies in Bavaria, where he played the trumpet and accordion, in addition to piano. After obtaining his first degree in jazz piano from the University of Arts in Berlin, he was granted a Fulbright Scholarship to complete a Master of Music degree from the Manhattan School of Music in New York City where he studied with Jason Moran, Dave Liebman, and Garry Dial. As a teacher, Florian has been active both as a private instructor and clinician, working for institutions like the Manhattan School of Music in New York City, the University of Toronto, Memorial University St. John’s, the University of Arts in Berlin, Germany, the Anton Bruckner University in Linz, Austria, and the Pontificia Universidad Javeriana in Bogotá, Colombia. Since July 2014 Florian has been a resident of St. John’s, Newfoundland, Canada, from where he continues to embark on new musical adventures. Matthias Pichler Matthias Pichler is one of the finest young jazz bassists in Europe. In 2010, he won the Jazz Competition of the International Bass Convention in Berlin. Since 2005 he has been a regular member of the Wolfgang Muthspiel Trio. Other collaborations include Harry Sokal, Ingrid Jensen, Dick Oatts, Jochen Rückert, Marc Copland, Kirk Lightsey and Johannes Enders. Concerts with these artists brought him to over 20 countries all over the world. Hailing from Innsbruck, Austria, he studied jazz bass at the Anton Bruckner University in Linz and was awarded Austria’s prestigious Hans-Koller Award in 2004 and 2006. In 2010 he relocated to Berlin where he is now working on a new duo project with his twin brother Andreas on drums. Peter Gall Peter Gall studied drums with John Riley, John Hollenbeck and composition with Phil Markowitz, Dave Liebman and Jim McNeely. 
He holds a diploma from the University of the Arts / Jazz Institute Berlin and spent two years in New York City based on a DAAD-Scholarship, where he graduated with a Master of Music from Manhattan School of Music. His performance experience includes artists like the Kurt Rosenwinkel Trio, Dave Liebman, Joe Lovano, Nils Landgren, Seamus Blake and Randy Brecker. Having a passion also for styles other than jazz, Peter was a former member of NY-pop-singer Gabriel Rios’ group and is currently touring with German singer and TV star Jasmin Tabatabai’s (ECHO-Winner) group as well as collaborations with indie/electro-artist Enik and the German rising Indie-Rock band Balloon Pilot. Peter appears on various recordings on labels including ACT, Freshsound Records, Enja and Skip Records and performed on prestigious festivals and stages like Montreux Jazz Festival, Jazz Baltica, Bohemian Jazzfest, Munich Jazzfest and Carnegie Hall. As a touring artist he visited many countries in the world including Austria, Czech Republic, Colombia, Ecuador, Estonia, Ethiopia, France, Italy, the Netherlands, Poland, Russia, Slovenia, Serbia, Switzerland, Turkey, USA.

...it becomes obvious quickly that it’s only about the music, about the greater picture, spirit, emotions, colors, influences that have to be processed. Not a single piece follows the head-solo-head pattern which jazz musicians use as the easy way out much too often. Too much ambition is wisely being anticipated when it threatens to impede the course of events. A solo takes place when it makes sense, when the dense textures of the compositions dramaturgically call for a release.
Ssirus W. Pakzad, Neue Musikzeitung

This band is really one of the most innovative new bands to emerge on the jazz scene over the past couple of years, with a fresh sound, great playing by all and a very positive attitude. They have a sound of their own. These guys know how to kick it and I like it! 
Respect!
Nils Landgren

The complexity of their solely original music by far exceeds what we are used to as improvising over a theme in jazz. In fact, it does not seem like single soloists want to distinguish themselves but that all five commission their enormous technical and musical skills to a multi-layered sound painting.
Münchner Merkur, Munich, Germany

It is the sound that from the first second on puts Subtone in the forefront of today’s band concepts and keeps me at it. Five marvellous soloists leave their egos outside the studio door to contribute to something higher.
Till Brönner
Let c(l) = -l**2 + l - 1. Determine s*j(h) - 28*c(h). -4*h**2 Let n(a) = 7*a - 7. Let o(s) = s. Let r(g) = -n(g) + 3*o(g). Let d(w) = 6*w - 4*w - 3 - 3*w + 4. Calculate 5*d(x) - r(x). -x - 2 Suppose 5 = 3*i - 1. Suppose 0*v + 12 = i*v. Let o(r) = 7*r**2 + 6*r + 5. Let b(n) = -6*n**2 - 5*n - 4. What is v*b(j) + 5*o(j)? -j**2 + 1 Let j(u) = -u**3 - 96*u + 4. Let p(b) = 3*b**3 + 191*b - 9. Calculate -5*j(s) - 2*p(s). -s**3 + 98*s - 2 Let z(g) = -22*g**3 + 2*g**2 - 6*g - 4. Let t(p) = p**3 + p + 1. Calculate -5*t(y) - z(y). 17*y**3 - 2*y**2 + y - 1 Let k(f) = 0*f**2 + 5*f**2 + 6 - f**2. Let s(t) = 7*t**2 + 11. Suppose 290 = -3*g + h + 113, -4*h = -5*g - 302. Let r be 9/(-75)*-5 + g/5. Give r*k(v) + 6*s(v). -2*v**2 Let t(w) = -11*w**2 + 7. Let u(n) = 11*n**2 - 18*n - 4. Let p(h) = -12*h**2 + 21*h + 4. Let l(b) = -6*p(b) - 7*u(b). Give 7*l(m) - 4*t(m). 9*m**2 Let z(m) = 2*m**2 + 1. Let l(h) = h**2. Let u(i) = -i**3 + 39*i**2 - 39*i + 37. Let j be u(38). Give j*z(w) - 2*l(w). -4*w**2 - 1 Let k(g) be the first derivative of 0*g**3 + 21 + 55/4*g**4 + 25*g + 0*g**2. Let q(o) = 9*o**3 + 4. Give 4*k(b) - 25*q(b). -5*b**3 Let a(x) = -15*x - 267. Let b(r) = 3*r + 53. Give 5*a(j) + 24*b(j). -3*j - 63 Let y(n) = -472*n**2 - 1 + 474*n**2 + 2*n**3 + 0 - 1. Let r(w) = -6*w**3 - 7*w**2 + 7. Calculate 2*r(v) + 7*y(v). 2*v**3 Let c(j) = -6*j**2 + 7*j**3 + 260 - 266 + 3*j**3. Let u(r) = -3*r**3 + 2*r**2 + 2. Let h = 8 - 6. What is h*c(p) + 7*u(p)? -p**3 + 2*p**2 + 2 Let s(y) = -5*y**2 - 8*y - 109. Let q(r) = 2*r**2 + 3*r + 37. Determine 11*q(c) + 4*s(c). 2*c**2 + c - 29 Let p(h) = -5 + 4*h + h**2 - 3*h + 4. Let v(d) = 2*d**3 - 6*d**2 - 4*d + 6. Give -6*p(b) - v(b). -2*b**3 - 2*b Let r(j) = j**2 + 6*j + 3. Let b(l) = -6*l**2 - 31*l - 16. Let z be (-1)/2 + -3*56/(-48). Suppose -z*u + 50 = 17. Determine u*r(q) + 2*b(q). -q**2 + 4*q + 1 Let x(y) = 7*y**2 + 3*y - 4. Let v(m) = 4*m**2 + 2*m - 2. Let j = 26 + -21. Suppose 3*z - 15 = 0, -5*z = -j*g - 0 - 10. Calculate g*x(p) - 5*v(p). 
p**2 - p - 2 Let f(l) = -1104998*l. Let v(g) = 1897*g. Give -5*f(p) - 2913*v(p). -971*p Let q be 10/4 + (-3)/6. Let b(a) = 69*a**2 + 4*a - 72*a**q - 4 + 4. Let o(x) = -2*x - 1. Let g be o(-3). Let t(l) = -l**2 + 2*l. Determine g*t(u) - 2*b(u). u**2 + 2*u Let v(p) = -5*p**2 + 4*p + 3. Let t(i) = 5*i**2 - 5*i - 3. Determine 2*t(c) + 3*v(c). -5*c**2 + 2*c + 3 Let o be (-32 - -30)*4/8. Let j(a) = a**2 - a + 1. Let v(w) = -4*w**2 + 2*w - 3. Calculate o*v(n) - 5*j(n). -n**2 + 3*n - 2 Let y(i) = -84*i**2 - 2*i - 1. Let g(p) = -p**2. What is g(q) + y(q)? -85*q**2 - 2*q - 1 Let u(g) = -g - 1. Let i(s) = 16*s + 10. Give i(d) + 6*u(d). 10*d + 4 Let g(m) = -m**3 + 2*m**2 + 4. Let q(r) = -r**3 + 3*r**2 + 3. Determine 6*g(x) - 5*q(x). -x**3 - 3*x**2 + 9 Let j(x) = 6*x**2 - 7. Let y = 98 - 96. Let f(s) = 4*s**y - 8 + 4*s**2 + 2*s**2 - 3*s**2. Give -4*f(l) + 5*j(l). 2*l**2 - 3 Let d be 1*(-3)/6 - 162/(-36). Let r(w) = -4*w**3 + 19*w**2 - 7*w + 7. Let t(i) = -i**3 + 6*i**2 - 2*i + 2. Calculate d*r(z) - 14*t(z). -2*z**3 - 8*z**2 Let c(g) = -6*g + 12. Let u(a) = 5*a - 14. What is 6*c(h) + 5*u(h)? -11*h + 2 Let n(x) = -89*x**2 - 6. Let v(l) = 30*l**2 + 2. Give -3*n(i) - 8*v(i). 27*i**2 + 2 Let k(c) = 5*c**2 - 41*c + 6. Let x be k(8). Let u(t) = t**2 + t - 2. Let n(i) = -i + 3. Determine x*n(a) - 3*u(a). -3*a**2 - a Let d(m) be the second derivative of -m**3/3 + 5*m**2/2 - 2*m + 1. Let q(b) = -6*b + 14. What is 11*d(y) - 4*q(y)? 2*y - 1 Let n(o) = -8*o. Let z(a) = -2*a**3 + 23*a**2 - 12*a + 17. Let f be z(11). Let y(d) = -3*d. What is f*n(k) - 17*y(k)? 3*k Let g(r) = -r**3 - 5*r**2 - 18*r - 7. Let h(y) = y**3 + 4*y**2 + 12*y + 5. What is 5*g(t) + 7*h(t)? 2*t**3 + 3*t**2 - 6*t Let a(j) = -215*j - 9. Let o(w) = 209*w + 11. Give -7*a(c) - 6*o(c). 251*c - 3 Let u(v) = -v**2 + 2. Let n = -34 + 39. Suppose 2*c - 26 = 4*b + 2, -10 = n*b. Let k(q) = -3*q**2 + 5. Determine c*u(w) - 4*k(w). 2*w**2 Let s(a) = -2*a - 2. Let n(p) = -24*p - 18. Give 4*n(y) - 40*s(y). 
-16*y + 8 Let w(g) = -2*g**2 - g - 2. Let p(v) = -6*v**2 - 3*v - 7. Suppose -44*h = -15*h - 58. Calculate h*p(i) - 7*w(i). 2*i**2 + i Let p(l) = l + 5. Let s(m) = 3*m**2 + m + 23. Determine 3*p(u) - s(u). -3*u**2 + 2*u - 8 Let v(x) = 443*x - 906*x - 5 + 3 + 462*x - 6*x**3. Let u(a) = -11*a**3 + a + 3*a - 6*a - 3. Determine -4*u(h) + 7*v(h). 2*h**3 + h - 2 Let n(l) = -792*l + 440. Let h(i) = -88*i + 48. Determine -55*h(c) + 6*n(c). 88*c Let g(y) be the first derivative of -y**4/4 + y**3/3 - y**2/2 + y + 1. Let s(w) = -6*w**3 + 4*w**2 - 4*w + 4. What is -12*g(q) + 3*s(q)? -6*q**3 Let u(q) = 327*q**2 - 5*q + 10. Let l(m) = 490*m**2 - 7*m + 14. What is 5*l(z) - 7*u(z)? 161*z**2 Let c(m) be the third derivative of -m**4/12 + m**3/2 + 72*m**2. Let t(x) = 2*x - 4. Determine -4*c(l) - 3*t(l). 2*l Let u(a) = 12*a - 6*a - 1 + 0 - 5*a. Let s(d) = 6*d - 3. Determine -s(y) + 5*u(y). -y - 2 Let x be (-1 - 1)/(4/(-22)). Let q(r) be the first derivative of r**4/2 + 7*r**3/3 - 3*r + 3128. Let v(b) = 4*b**3 + 13*b**2 - 5. Give x*q(s) - 6*v(s). -2*s**3 - s**2 - 3 Let o(l) = 23*l**2 + 40*l + 7. Let w(y) = -15*y**2 - 27*y - 5. Calculate 5*o(k) + 7*w(k). 10*k**2 + 11*k Let h(x) = -2*x**3 - 4*x**2 + x. Let y(d) = d**3 + d**2. Suppose 2 = 21*m - 19*m. What is m*h(f) + 3*y(f)? f**3 - f**2 + f Let x(r) = 5*r**2. Let j(z) = -3*z + z**2 + 5*z - 2*z. Let m(t) = 12*t**2 - t. Suppose 6 - 9 = -3*u. Let n be m(u). Give n*j(w) - 2*x(w). w**2 Let j(d) = -d. Let v(m) = 35*m - 9. Give -6*j(f) + v(f). 41*f - 9 Let r = 205 - 206. Let l(o) = -o**3. Let g(s) = s**3 + s**2 + 4*s. Determine r*g(a) + l(a). -2*a**3 - a**2 - 4*a Let n(p) = -p + 1. Let v be -1 - (3 + -3)*(2 + -3). Let x(m) = -3*m + 6. What is v*x(t) + 2*n(t)? t - 4 Let t(h) = 19*h**2 + 8*h + 21. Let p(z) = -13*z**2 - 5*z - 14. Calculate -8*p(g) - 5*t(g). 9*g**2 + 7 Let k be -2 - 32/(-12)*30/4. Let z(v) = v**3 - v**2. Let i(u) = -4*u**3 + 4*u**2. Let o be (-184)/(-6) - 8/12. Suppose 10 = -5*x + o. Give k*z(a) + x*i(a). 
2*a**3 - 2*a**2 Let y(h) = -31*h - 11. Let q(g) = 152*g + 54. Determine -3*q(b) - 14*y(b). -22*b - 8 Let n(o) = o**3 - 5. Suppose 0*w - 4*w + 16 = 0. Suppose -w*j + 4 = 2*z - 3*z, 0 = -3*j - 5*z - 20. Let f(q) = 74 + q**3 + j*q**3 - 80. What is 5*f(s) - 6*n(s)? -s**3 Let l = 20 - 7. Let x(w) = -7*w**2 - 6*w + 9. Let n(q) = -15*q**2 - 13*q + 19. Let g(c) = l*x(c) - 6*n(c). Let b(j) = -2*j**2 + 8. Give 3*b(f) - 8*g(f). 2*f**2 Let b(z) = 11*z**3 - 4*z**2 - 5*z - 7. Let d = 531 - 526. Let l(f) = 17*f**3 - 6*f**2 - 8*f - 11. Give d*l(n) - 8*b(n). -3*n**3 + 2*n**2 + 1 Let g(j) = -j**3 - 16*j**2 + 19*j + 40. Let y be g(-17). Let o(f) = -13*f + 6. Let t(b) = -26*b + 13. Determine y*t(v) - 13*o(v). 13*v Let d(w) = 2*w**2 + w - 2. Let f(b) = 6*b**2 + 3*b - 5. Suppose k + 3*m = 0, -k + 6*k + 2*m = 13. Calculate k*f(s) - 8*d(s). 2*s**2 + s + 1 Suppose g = 3*g + 110. Let c = g + 97. Let h = -59 + c. Let y(d) = -8*d**2 - 3*d + 17. Let o(s) = -3*s**2 - s + 6. Determine h*o(k) + 6*y(k). 3*k**2 - k Let q(k) = -3*k**2 + 4*k - 1. Let j(y) = -5*y**2 + 7*y - 2. Let f(o) = -4*j(o) + 7*q(o). Let n(w) = 3*w**2 - 2*w - 4. Give 2*f(r) + n(r). r**2 - 2*r - 2 Let j(v) = -7*v**2 + 12*v + 14. Let d(t) = t**2 - 3. What is 5*d(q) + j(q)? -2*q**2 + 12*q - 1 Let a(f) = 15*f**3 + 5*f**2 - 7*f - 2. Let p(u) = 23*u**3 + 8*u**2 - 11*u - 3. Let y be (0 - 10)/(-10 - (-13 - -5)). Determine y*p(v) - 8*a(v). -5*v**3 + v + 1 Suppose 8*d + 14 + 10 = 0. Let a(m) = -240*m - 155. Let z(t) = 34*t + 22. Let s(y) = d*a(y) - 20*z(y). Let p(b) = -5*b - 3. Give -25*p(u) - 3*s(u). 5*u Let z(t) = -13*t**2 + 8*t + 125. Let o(c) = 28*c**2 - 17*c - 250. What is -6*o(p) - 13*z(p)? p**2 - 2*p - 125 Let p(h) = 26*h**3 - 16*h**2 + 11*h - 1. Let b(l) = 5*l**3 - 3*l**2 + 2*l. Suppose 165*y - 44 = 169*y. Give y*b(j) + 2*p(j). -3*j**3 + j**2 - 2 Let s(o) = 67*o + 15. Let k(f) = 202*f + 41. Give -4*k(n) + 11*s(n). -71*n + 1 Let h(n) = 5*n - 3. Let t(d) be the second derivative of 2*d**3/3 - d**2 - 2*d - 57. 
Determine 3*h(g) - 4*t(g). -g - 1 Let i(m) = -2*m**3 - m**2 + m + 1. Let d = 1 - -4. Let k(l) = 748 - 749 - l**3 - 4*l**2 + d*l**2. Give -i(c) - k(c). 3*c**3 - c Let z(w) = -4*w**2 - 2*w + 3. Let j(h) = 1. Give -6*j(g) + 2*z(g). -8*g**2 - 4*g Let u(a) = -1. Suppose f + 5*c = -15, -9*f + 5*
Similar distribution of simple sequence repeats in diverse completed Human Immunodeficiency Virus Type 1 genomes. The survey of simple sequence repeats (SSRs) has been extensively made in eukaryotes and prokaryotes. However, it is still rare in viruses. Thus, we undertook a survey of SSRs in Human Immunodeficiency Virus Type 1 (HIV-1), which is an excellent system for studying the evolution and roles of SSRs in viruses. The distribution of SSRs was examined in 81 completed HIV-1 genome sequences, which come from 34 different countries or districts over 6 continents. In these surveyed sequences, although relative abundance and relative density exhibit very high similarity, some sequences show different preferences for the most common SSRs and the longest SSRs. Our results suggest that the proportion of various repeat types might be related to genome stability.
Services Design Whether you need an entire re-brand, a new exhibition stand, a new brochure designed or just an invitation to your annual client drinks party – our creative team can help. Our clients come from all business sectors and range from private individuals or sole traders to large corporations. However small or large the project, we can bring it to life. Events Take the stress out of events and let us help. Our event management team makes sure that i’s are dotted and t’s are crossed, so you can just relax and enjoy your event – whether that’s a private party or a large corporate occasion. We treat every one as if it’s our own. Marketing Not sure how best to reach your audience? Need an extra pair of hands to handle a marketing campaign? Know you need to do some ‘marketing’ but not sure where to start? Our marketing professionals can handle your campaigns and deliver results on time and on budget.
Russell Crowe got Hugh Jackman the Wolverine part in ‘X-Men’, after turning it down himself. In an interview with Australian radio station Triple M 104.9FM., Jackman said: "Bryan Singer asked Rusty [we assume Crowe] to do Wolverine, and he said, 'Nah mate I've just done 'Gladiator', it's not for me but you should look at this guy...' " Luckily he was talking about Jackman, whose portrayal of the clawed mutant was a big hit with movie fans. He’s since appeared as the character a further three times, with a second X-Men spin-off flick ‘The Wolverine’ due out July 2013. Hugh is apparently great friends with Russell and says he has learned a lot from his mate. He said: "He's a great guy in every way, I have so much time for him and to watch him work, everyone says he is a great actor, when you actually get to work with someone like that... He's that good when it matters, in that moment, in that close up tough moment, you just sit back and watch you know someone's got that confidence and is going to deliver. I learnt a lot from him."
Q: json not outputting I'm running this code in php while ($row = mysql_fetch_array($result)) { $arr = array("joke" => $row['joke'], "date" => $row['date'], "rating" => $row['rating']); echo json_encode($arr); } but there's no output. I am running php 5.3.6 A: nvm I figured it out. the way to do this is to use sql2json
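One detail worth noting about the snippet above, independent of the accepted fix: calling json_encode once per loop iteration emits one JSON object per row, and the concatenation of several top-level objects is not valid JSON. The encode-once idea can be sketched in Python (the row data here is invented for illustration; the keys mirror the columns in the question):

```python
import json

# Hypothetical rows standing in for the result of the database query.
rows = [
    {"joke": "Why did the chicken cross the road?", "date": "2012-01-01", "rating": 5},
    {"joke": "Knock knock.", "date": "2012-01-02", "rating": 3},
]

# Encode the whole list once: the result is a single valid JSON array.
# Encoding each row separately and concatenating the pieces would not be.
payload = json.dumps(rows)
print(payload)
```

The same shape applies in PHP: append each row to an array inside the loop, then call json_encode on the array once after the loop.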
Q: Why isn't inline JavaScript code being executed in ASP.NET MVC? This is my complete View: @{ ViewBag.Title = "Home"; } <div style="width:100%; height:100%" id="map"></div> <script defer="defer" type="text/javascript"> var map = new OpenLayers.Map('map'); var wms = new OpenLayers.Layer.WMS("OpenLayers WMS", "http://vmap0.tiles.osgeo.org/wms/vmap0", { layers: 'basic' }); map.addLayer(wms); map.zoomToMaxExtent(); </script> But when I run it I can't see the map. I made an HTML page in Notepad that looks like this: <html> <head> <title>OpenLayers Example</title> <script src="http://openlayers.org/api/OpenLayers.js"></script> </head> <body> <div style="width:100%; height:100%" id="map"></div> <script defer="defer" type="text/javascript"> var map = new OpenLayers.Map('map'); var wms = new OpenLayers.Layer.WMS( "OpenLayers WMS", "http://vmap0.tiles.osgeo.org/wms/vmap0", {layers: 'basic'} ); map.addLayer(wms); map.zoomToMaxExtent(); </script> </body> </html> And it works. Why isn't the code being executed in ASP.NET? I installed OpenLayers from NuGet, and if I select OpenLayers and press F12 ('Go To Definition') it opens up OpenLayers.js, so it seems to have been downloaded correctly. 
EDIT: The complete generated code: <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>FIKA - Home</title> <script src="http://openlayers.org/api/OpenLayers.js"></script> <link href="/Content/css?v=bxomq82-FU9mU3eDX6m-kca-a2PFEz0RK2Z7mS-QmnY1" rel="stylesheet"/> <script src="/bundles/modernizr?v=wBEWDufH_8Md-Pbioxomt90vm6tJN2Pyy9u9zHtWsPo1"></script> </head> <body> <div class="container body-content"> <div style="width:100%; height:100%" id="map"></div> <script src="http://openlayers.org/api/OpenLayers.js"></script> <script defer="defer" type="text/javascript"> var map = new OpenLayers.Map('map'); var wms = new OpenLayers.Layer.WMS("OpenLayers WMS", "http://vmap0.tiles.osgeo.org/wms/vmap0", { layers: 'basic' }); map.addLayer(wms); map.zoomToMaxExtent(); </script> <hr /> </div> <script src="/bundles/jquery?v=FVs3ACwOLIVInrAl5sdzR2jrCDmVOWFbZMY6g6Q0ulE1"></script> <script src="/bundles/bootstrap?v=2Fz3B0iizV2NnnamQFrx-NbYJNTFeBJ2GM05SilbtQU1"></script> <!-- Visual Studio Browser Link --> <script type="application/json" id="__browserLink_initializationData"> {"appName":"Chrome","requestId":"0ea737fab0f240fab62a7978c5db4fa7"} </script> <script type="text/javascript" src="http://localhost:60314/4457514eae394a96a55c4c6c386b7942/browserLink" async="async"></script> <!-- End Browser Link --> </body> </html> A: Your issue occurs because in the complete generated code you have the HTML5 doctype where in your working demo you do not have a doctype. That difference occurs for the browser to render differently the height:100%; property. You need to set the height in pixels before executing your code. 
function setMapHeight() { var w = window, d = document, e = d.documentElement, g = d.getElementsByTagName('body')[0]; document.getElementById('map').style.height = (w.innerHeight || e.clientHeight || g.clientHeight) + 'px'; } setMapHeight(); window.onresize = setMapHeight; // Add this code to fix height if window resizes var map = new OpenLayers.Map('map'); var wms = new OpenLayers.Layer.WMS("OpenLayers WMS", "http://vmap0.tiles.osgeo.org/wms/vmap0", { layers: 'basic' }); map.addLayer(wms); map.zoomToMaxExtent();
Novel avian leukosis virus-related endogenous proviruses from layer chickens: characterization and development of locus-specific assays. During the course of evolution, vertebrate genomes have been invaded and colonized by retroviruses. In humans, for example, endogenous retroviruses (long terminal repeat elements) occupy roughly twice as much sequence space as essential genes. There are numerous reports in the literature implicating endogenous proviruses in the modulation of host physiology. The fact that many of these host-virus interactions take place in a proviral locus-specific manner speaks to the need for rapid assays for element profiling. This report deals with the identification of novel elements belonging to a family of endogenous retroviruses, designated ALVE, that reside in the genome of the chicken and that are closely related to exogenous avian leukosis viruses. The study of ALVE elements in the chicken genome serves as a model system for understanding the interplay between endogenous viruses and their vertebrate hosts in general, including humans. In this report, we present locus-specific, diagnostic PCR-based assays for 2 novel ALVE elements. In addition, we characterize the proviral structures and examine the genomic environments of both novel elements along with a previously described element known as ALVE-NSAC-3.
Q: Disable native Soap class in PHP5 and use nuSoap? I've spent the last week developing code to connect to a Web Service using the nuSoap library. I just deployed the code to production, but immediately started getting error's that I hadn't seen before. I traced the problem back to a line of code that is trying to instantiate a new soapclient object. It turns out that both libraries have a class named 'soapclient' and the one that's being created in production is from the native Soap library, not the nuSoap library that I'm including. How can I disable the native Soap functionality and stick strictly to nuSoap? A: With the release of PHP5 there is a soapclient class included in the php_soap extension. NuSOAP has renamed its class to nusoap_client. If your copy of NuSOAP is current you should be able to use that. This doesn't disable the php_soap extension, but should allow you to use the NuSOAP class without further conflict.
/***************************************************************************/ /* */ /* ftlcdfil.h */ /* */ /* FreeType API for color filtering of subpixel bitmap glyphs */ /* (specification). */ /* */ /* Copyright 2006-2018 by */ /* David Turner, Robert Wilhelm, and Werner Lemberg. */ /* */ /* This file is part of the FreeType project, and may only be used, */ /* modified, and distributed under the terms of the FreeType project */ /* license, LICENSE.TXT. By continuing to use, modify, or distribute */ /* this file you indicate that you have read the license and */ /* understand and accept it fully. */ /* */ /***************************************************************************/ #ifndef FTLCDFIL_H_ #define FTLCDFIL_H_ #include <ft2build.h> #include FT_FREETYPE_H #include FT_PARAMETER_TAGS_H #ifdef FREETYPE_H #error "freetype.h of FreeType 1 has been loaded!" #error "Please fix the directory search order for header files" #error "so that freetype.h of FreeType 2 is found first." #endif FT_BEGIN_HEADER /*************************************************************************** * * @section: * lcd_filtering * * @title: * LCD Filtering * * @abstract: * Reduce color fringes of subpixel-rendered bitmaps. * * @description: * Should you #define FT_CONFIG_OPTION_SUBPIXEL_RENDERING in your * `ftoption.h', which enables patented ClearType-style rendering, * the LCD-optimized glyph bitmaps should be filtered to reduce color * fringes inherent to this technology. The default FreeType LCD * rendering uses different technology, and API described below, * although available, does nothing. * * ClearType-style LCD rendering exploits the color-striped structure of * LCD pixels, increasing the available resolution in the direction of * the stripe (usually horizontal RGB) by a factor of~3. Since these * subpixels are color pixels, using them unfiltered creates severe * color fringes. 
Use the @FT_Library_SetLcdFilter API to specify a * low-pass filter, which is then applied to subpixel-rendered bitmaps * generated through @FT_Render_Glyph. The filter sacrifices some of * the higher resolution to reduce color fringes, making the glyph image * slightly blurrier. Positional improvements will remain. * * A filter should have two properties: * * 1) It should be normalized, meaning the sum of the 5~components * should be 256 (0x100). It is possible to go above or under this * target sum, however: going under means tossing out contrast, going * over means invoking clamping and thereby non-linearities that * increase contrast somewhat at the expense of greater distortion * and color-fringing. Contrast is better enhanced through stem * darkening. * * 2) It should be color-balanced, meaning a filter `{~a, b, c, b, a~}' * where a~+ b~=~c. It distributes the computed coverage for one * subpixel to all subpixels equally, sacrificing some won resolution * but drastically reducing color-fringing. Positioning improvements * remain! Note that color-fringing can only really be minimized * when using a color-balanced filter and alpha-blending the glyph * onto a surface in linear space; see @FT_Render_Glyph. * * Regarding the form, a filter can be a `boxy' filter or a `beveled' * filter. Boxy filters are sharper but are less forgiving of non-ideal * gamma curves of a screen (viewing angles!), beveled filters are * fuzzier but more tolerant. * * Examples: * * - [0x10 0x40 0x70 0x40 0x10] is beveled and neither balanced nor * normalized. * * - [0x1A 0x33 0x4D 0x33 0x1A] is beveled and balanced but not * normalized. * * - [0x19 0x33 0x66 0x4c 0x19] is beveled and normalized but not * balanced. * * - [0x00 0x4c 0x66 0x4c 0x00] is boxily beveled and normalized but not * balanced. * * - [0x00 0x55 0x56 0x55 0x00] is boxy, normalized, and almost * balanced. * * - [0x08 0x4D 0x56 0x4D 0x08] is beveled, normalized and, almost * balanced. 
* * The filter affects glyph bitmaps rendered through @FT_Render_Glyph, * @FT_Load_Glyph, and @FT_Load_Char. It does _not_ affect the output * of @FT_Outline_Render and @FT_Outline_Get_Bitmap. * * If this feature is activated, the dimensions of LCD glyph bitmaps are * either wider or taller than the dimensions of the corresponding * outline with regard to the pixel grid. For example, for * @FT_RENDER_MODE_LCD, the filter adds 3~subpixels to the left, and * 3~subpixels to the right. The bitmap offset values are adjusted * accordingly, so clients shouldn't need to modify their layout and * glyph positioning code when enabling the filter. * * It is important to understand that linear alpha blending and gamma * correction is critical for correctly rendering glyphs onto surfaces * without artifacts and even more critical when subpixel rendering is * involved. * * Each of the 3~alpha values (subpixels) is independently used to blend * one color channel. That is, red alpha blends the red channel of the * text color with the red channel of the background pixel. The * distribution of density values by the color-balanced filter assumes * alpha blending is done in linear space; only then color artifacts * cancel out. */ /**************************************************************************** * * @enum: * FT_LcdFilter * * @description: * A list of values to identify various types of LCD filters. * * @values: * FT_LCD_FILTER_NONE :: * Do not perform filtering. When used with subpixel rendering, this * results in sometimes severe color fringes. * * FT_LCD_FILTER_DEFAULT :: * The default filter reduces color fringes considerably, at the cost * of a slight blurriness in the output. * * It is a beveled, normalized, and color-balanced five-tap filter * that is more forgiving to screens with non-ideal gamma curves and * viewing angles. 
Note that while color-fringing is reduced, it can * only be minimized by using linear alpha blending and gamma * correction to render glyphs onto surfaces. The default filter * weights are [0x08 0x4D 0x56 0x4D 0x08]. * * FT_LCD_FILTER_LIGHT :: * The light filter is a variant that is sharper at the cost of * slightly more color fringes than the default one. * * It is a boxy, normalized, and color-balanced three-tap filter that * is less forgiving to screens with non-ideal gamma curves and * viewing angles. This filter works best when the rendering system * uses linear alpha blending and gamma correction to render glyphs * onto surfaces. The light filter weights are * [0x00 0x55 0x56 0x55 0x00]. * * FT_LCD_FILTER_LEGACY :: * This filter corresponds to the original libXft color filter. It * provides high contrast output but can exhibit really bad color * fringes if glyphs are not extremely well hinted to the pixel grid. * In other words, it only works well if the TrueType bytecode * interpreter is enabled *and* high-quality hinted fonts are used. * * This filter is only provided for comparison purposes, and might be * disabled or stay unsupported in the future. * * FT_LCD_FILTER_LEGACY1 :: * For historical reasons, the FontConfig library returns a different * enumeration value for legacy LCD filtering. To make code work that * (incorrectly) forwards FontConfig's enumeration value to * @FT_Library_SetLcdFilter without proper mapping, it is thus easiest * to have another enumeration value, which is completely equal to * `FT_LCD_FILTER_LEGACY'. 
* * @since: * 2.3.0 (`FT_LCD_FILTER_LEGACY1' since 2.6.2) */ typedef enum FT_LcdFilter_ { FT_LCD_FILTER_NONE = 0, FT_LCD_FILTER_DEFAULT = 1, FT_LCD_FILTER_LIGHT = 2, FT_LCD_FILTER_LEGACY1 = 3, FT_LCD_FILTER_LEGACY = 16, FT_LCD_FILTER_MAX /* do not remove */ } FT_LcdFilter; /************************************************************************** * * @func: * FT_Library_SetLcdFilter * * @description: * This function is used to apply color filtering to LCD decimated * bitmaps, like the ones used when calling @FT_Render_Glyph with * @FT_RENDER_MODE_LCD or @FT_RENDER_MODE_LCD_V. * * @input: * library :: * A handle to the target library instance. * * filter :: * The filter type. * * You can use @FT_LCD_FILTER_NONE here to disable this feature, or * @FT_LCD_FILTER_DEFAULT to use a default filter that should work * well on most LCD screens. * * @return: * FreeType error code. 0~means success. * * @note: * This feature is always disabled by default. Clients must make an * explicit call to this function with a `filter' value other than * @FT_LCD_FILTER_NONE in order to enable it. * * Due to *PATENTS* covering subpixel rendering, this function doesn't * do anything except returning `FT_Err_Unimplemented_Feature' if the * configuration macro FT_CONFIG_OPTION_SUBPIXEL_RENDERING is not * defined in your build of the library, which should correspond to all * default builds of FreeType. * * @since: * 2.3.0 */ FT_EXPORT( FT_Error ) FT_Library_SetLcdFilter( FT_Library library, FT_LcdFilter filter ); /************************************************************************** * * @func: * FT_Library_SetLcdFilterWeights * * @description: * This function can be used to enable LCD filter with custom weights, * instead of using presets in @FT_Library_SetLcdFilter. * * @input: * library :: * A handle to the target library instance. * * weights :: * A pointer to an array; the function copies the first five bytes and * uses them to specify the filter weights. 
* * @return: * FreeType error code. 0~means success. * * @note: * Due to *PATENTS* covering subpixel rendering, this function doesn't * do anything except returning `FT_Err_Unimplemented_Feature' if the * configuration macro FT_CONFIG_OPTION_SUBPIXEL_RENDERING is not * defined in your build of the library, which should correspond to all * default builds of FreeType. * * LCD filter weights can also be set per face using @FT_Face_Properties * with @FT_PARAM_TAG_LCD_FILTER_WEIGHTS. * * @since: * 2.4.0 */ FT_EXPORT( FT_Error ) FT_Library_SetLcdFilterWeights( FT_Library library, unsigned char *weights ); /* * @type: * FT_LcdFiveTapFilter * * @description: * A typedef for passing the five LCD filter weights to * @FT_Face_Properties within an @FT_Parameter structure. * * @since: * 2.8 * */ #define FT_LCD_FILTER_FIVE_TAPS 5 typedef FT_Byte FT_LcdFiveTapFilter[FT_LCD_FILTER_FIVE_TAPS]; /* */ FT_END_HEADER #endif /* FTLCDFIL_H_ */ /* END */
ALTURAS, Calif. (AP) — Firefighters in Northern California were battling several lightning-sparked wildfires Friday, including a blaze in rural Modoc County that forced about 120 people to evacuate. The fire near the community of Day had burned through nearly 18 square miles of brush and heavy timber and was threatening 150 structures Friday, two days after it began. It was only 10 percent contained. Officials were expected to send additional fire crews to supplement the 700 or so firefighters battling the blaze, which is burning in steep terrain and could be aggravated by winds and lightning from more expected thunderstorms, said state fire spokesman Dennis Mathisen. California is in its third year of drought, which has heightened the fire danger. The California Governor’s Office of Emergency Services announced Friday that it has asked the state’s National Guard to activate specially trained helicopter units to help fire agencies. “The forward deployment of these will help incident commanders and the personnel they are directing save lives, homes and personal property as well as valuable watershed by providing critical resources within a moment’s notice,” California emergency services Director Mark Ghilarducci said in a statement. The fire in Modoc was among more than 40 that have broken out as a result of lightning strikes since Wednesday. Most were in remote areas and were not threatening homes, and many of them were quickly contained. In Washington state, a new wildfire burning intensely amid high winds forced the evacuation of about 200 homes Friday. Officials ordered the evacuations and closed part of state Highway 20 because of the surging fire that started in north-central Washington halfway between Twisp and Winthrop. The Sun Mountain Lodge, a luxury resort near Winthrop, was also evacuated as a precaution. Janet Pearce, a spokeswoman for the state Department of Natural Resources, said some structures had burned. 
It wasn’t known if they were homes or other buildings. Back in California, fire crews also were battling a blaze in Sierra National Forest about 60 miles northeast of Fresno that threatened dozens of homes and forced a handful of evacuations Friday afternoon. Evacuations were ordered for some 50 houses in Arnold Meadows, but many are vacation homes and only about a dozen people had to evacuate when deputies went door-to-door, The Madera County Sheriff’s Department said. The blaze was burning close to the Mammoth Pool Reservoir, a popular recreation spot that supplies drinking water, and crews were trying to finish containment lines on the section near the reservoir, said fire spokesman Matthew Chambers. The blaze had burned through nearly 13 square miles and was 15 percent contained. In Yosemite National Park, residents from about 50 homes returned Friday afternoon. They were the last remaining evacuees from a fire that had burned through 7 square miles and destroyed a home and a duplex. It was 78 percent contained.
The Massachusetts Institute of Technology proposes that support for the High Field NMR Resource (Grant no. RR00995-13), located at the Francis Bitter National Magnet Laboratory (FBNML), be continued for five years and that its scope be enlarged to support ongoing activities in in-vivo spectroscopy and imaging of animal models and human subjects at the M.I.T. MRI Facility. We thus propose to create and operate a new entity called the Comprehensive NMR Center for Biomedical Research. This center will comprise two divisions: (1) the existing High Field NMR Resource, now in its 13th year of operation, and (2) the In-vivo NMR/MRI Facility, a research laboratory developed in parallel during the past six years at M.I.T. almost exclusively with private funds. The major new technologies which we propose to introduce are: (1) a capability to perform high-resolution 3D-NMR; (2) an enhanced capability for NMR microscopic imaging and spectroscopy; (3) the use of M.I.T.'s unique actively shielded pulsed gradient coils for NMR microscopy and in-vivo spectroscopy and imaging; (4) a high-performance spectrometer for dynamic nuclear polarization studies of crystalline proteins; (5) a high-resolution multinuclear 600 MHz spectrometer; (6) a whole-body human in-vivo spectroscopy/imaging system utilizing a 1.5T/120 cm magnet of our own design and construction; (7) enhanced computer facilities for acquisition, data processing and display of 2D- and 3D-NMR data sets and for in-vivo spectroscopy and imaging. Our objective during the next five years is to establish an integrated, comprehensive capability which will allow study by NMR of all the diverse biological systems -- macromolecules, subcellular organelles, microorganisms, cells, mammalian tissues, intact organs, animals and humans. Experimental capabilities will exist for NMR microscopy, NMR spectroscopy (solutions, solids, in-vivo tissues) and NMR imaging. 
The comprehensive nature of the Center will allow research projects initiated at the High Field NMR Division to migrate naturally to the In-Vivo MRS/MRI Division. Thus, a given project begun at the cellular or small-animal level can easily be broadened in scope to include investigations of larger animals and humans. Such advanced NMR capabilities, which permit long-term, in-depth, health-related investigations and to which researchers have ready, equitable access, do not exist at a single site in the greater Boston/Cambridge area.
Search by sound

Search by sound is the retrieval of information based on audio input. There are a handful of applications, specifically for mobile devices, that utilize search by sound. Shazam (service), Soundhound (previously Midomi), Axwave, ACRCloud and others have seen considerable success by using a simple algorithm to match an acoustic fingerprint to a song in a library. These applications take a sample clip of a song, or a user-generated melody, and check a music library/music database to see where the clip matches the song. From there, song information is queried and displayed to the user. This kind of application is mainly used for finding a song that the user does not already know. Searching by sound is not limited to identifying songs; it is also used for identifying melodies, tunes or advertisements, for sound library management, and for video files.

Acoustic Fingerprinting

The way these apps search by sound is by generating an acoustic fingerprint: a digital summary of the sound. A microphone is used to pick up an audio sample, which is then broken down into a simple numeric signature, a code unique to each track. Using this same method of fingerprinting sounds, when Shazam picks up a sound clip, it generates a signature for that clip. From there it is simple pattern matching against an extensive audio music database. The practice of using acoustic fingerprints is not limited to music, however, but extends to other areas of the entertainment business as well. Shazam can also identify television shows with the same technique of acoustic fingerprinting. Of course, this method of breaking down a sound sample into a unique signature is useless unless there is an extensive database of music with keys to match against the samples. Shazam has over 11 million songs in its database. Other services such as Midomi and Soundhound allow users to add to that library of music in order to expand the chances of matching a sound sample with its corresponding song. 
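To make the fingerprint-and-lookup idea above concrete, here is a toy sketch in Python. It is not Shazam's actual algorithm: the "audio" here is just a list of dominant-frequency values per time slice, and the fingerprint is a set of hashed n-grams of those values, matched against a small in-memory library.

```python
from collections import Counter

def fingerprint(freqs, n=3):
    """Hash overlapping n-grams of dominant frequencies into a set of keys."""
    return {hash(tuple(freqs[i:i + n])) for i in range(len(freqs) - n + 1)}

# Toy "library": track name -> dominant frequency per time slice (made up).
library = {
    "track_a": [440, 440, 494, 523, 494, 440],
    "track_b": [262, 294, 330, 349, 392, 440],
}
index = {name: fingerprint(freqs) for name, freqs in library.items()}

def identify(sample):
    """Return the track whose fingerprint shares the most keys with the sample."""
    sample_fp = fingerprint(sample)
    scores = Counter({name: len(sample_fp & fp) for name, fp in index.items()})
    best, score = scores.most_common(1)[0]
    return best if score > 0 else None

print(identify([494, 523, 494]))  # a clip from the middle of track_a; prints track_a
```

Real systems hash time-frequency peak pairs from a spectrogram rather than raw frequency runs, which makes the lookup robust to noise, but the match-by-shared-keys structure is the same.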
Query by Humming

Midomi and Soundhound both utilize query by humming. This is a branch of acoustic fingerprinting, but is still a music retrieval system. After receiving a user-generated hummed melody, which is the input query, the system returns a ranked list of the songs that are closest to the query.

See also

AmpliFIND
Automatic content recognition
List of online music databases
Music information retrieval
Sound recognition
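The ranked-list behaviour described under Query by Humming can be sketched with a classic textbook approach: reduce each melody to its up/down/same pitch contour, then rank candidate songs by edit distance between contours. This is only an illustration (the song data is invented, and commercial systems use far more sophisticated matching):

```python
def contour(pitches):
    """Reduce a pitch sequence to Up/Down/Same steps, tolerating off-key humming."""
    return ["U" if b > a else "D" if b < a else "S" for a, b in zip(pitches, pitches[1:])]

def edit_distance(s, t):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (a != b)))
        prev = cur
    return prev[-1]

# Toy library: song name -> MIDI-style pitch sequence (made up).
songs = {
    "twinkle": [60, 60, 67, 67, 69, 69, 67],
    "scale":   [60, 62, 64, 65, 67, 69, 71],
}

def rank(hummed):
    """Return song names ordered from closest to farthest contour match."""
    q = contour(hummed)
    return sorted(songs, key=lambda name: edit_distance(q, contour(songs[name])))

print(rank([59, 59, 68, 68, 70, 70, 66]))  # "twinkle" hummed off-key ranks first
```

The contour step is what makes humming queries workable: a user who hums flat or sharp still produces the same up/down shape, so ranking survives pitch errors.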
New artwork believed to be from the elusive Banksy appeared overnight on a wall near where climate change activists spent the last two weeks protesting in London. The street art – which was first noticed by people Thursday night – purportedly shows a child clutching a sign of the Extinction Rebellion emblem while planting a small flower with the words “From this Moment Despair Ends and Tactics Begin.” It comes on the day that Extinction Rebellion wrapped up their protests, which snarled traffic in the British capital for nearly two weeks. A Banksy collector and expert told The Guardian he believes the mural on Marble Arch is an authentic piece by the Bristol street artist. John Brandler, who owns a dozen pieces by Banksy, said he believes it’s an original because of its execution and theme. “I’m convinced about the one in London for two reasons: it’s a topic that he would support, and it’s a continuation of the Port Talbot piece that appeared in December 2018,” he said. “The name in the corner is not important, the signature is the work. And this is a Banksy. It’s a wonderful statement and beautiful piece.” A spokesperson for Westminster council told The Guardian that they are investigating the authenticity of the art piece. Banksy has not confirmed the authenticity. Extinction Rebellion – self-described “rebels” – have recently made headlines for snarling traffic and public transit in London through a series of blockades. They ended their 10-day outlandish demonstrations on Thursday, but not before several members glued themselves to the London Stock Exchange building. 
Calvin Benson, 48, a supporter of the climate change activist group, told Sky News that the artwork “represents the will of the people that were here and the people of the nation.” TOPLESS CLIMATE CHANGE PROTESTERS ARRESTED AFTER INTERRUPTING BREXIT DEBATE IN BRITISH PARLIAMENT Banksy, who has never disclosed his full identity, began his career spray-painting buildings in Bristol, England, and has since become one of the world’s best-known artists. His mischievous and often satirical images include two policemen kissing, armed riot police with yellow smiley faces and a chimpanzee with a sign bearing the words, “Laugh now, but one day I’ll be in charge.” His artwork titled “Girl With Balloon” – one of his best-known works – sold at auction for $1.4 million last October just before it self-destructed. Videos from inside Sotheby’s in London show the painting partially running through a shredder embedded in the frame and emerging from the bottom as strips. CLICK HERE TO GET THE FOX NEWS APP A video on Banksy’s website after the incident appeared to imply that the painting’s partial shredding was supposed to have been complete. “In rehearsals, it worked every time…” the video notes at the end.
{ "pile_set_name": "OpenWebText2" }
<?php $lang['L_NOFTPPOSSIBLE']="Es stehen keine FTP-Funktionen zur Verfügung!"; $lang['L_INFO_LOCATION']="Du befindest dich auf "; $lang['L_INFO_DATABASES']="Folgende Datenbank(en) befinden sich auf dem MySql-Server:"; $lang['L_INFO_NODB']="Datenbank existiert nicht"; $lang['L_INFO_DBDETAIL']="Detail-Information von Datenbank "; $lang['L_INFO_DBEMPTY']="Die Datenbank ist leer!"; $lang['L_INFO_RECORDS']="Datensätze"; $lang['L_INFO_SIZE']="Größe"; $lang['L_INFO_LASTUPDATE']="letztes Update"; $lang['L_INFO_SUM']="insgesamt"; $lang['L_INFO_OPTIMIZED']="optimiert"; $lang['L_OPTIMIZE_DATABASES']="Tabellen optimieren"; $lang['L_CHECK_TABLES']="Tabellen überprüfen"; $lang['L_CLEAR_DATABASE']="Datenbank leeren"; $lang['L_DELETE_DATABASE']="Datenbank löschen"; $lang['L_INFO_CLEARED']="wurde geleert"; $lang['L_INFO_DELETED']="wurde gelöscht"; $lang['L_INFO_EMPTYDB1']="Soll die Datenbank"; $lang['L_INFO_EMPTYDB2']=" wirklich geleert werden? (Achtung! Alle Daten gehen unwiderruflich verloren)"; $lang['L_INFO_KILLDB']=" wirklich gelöscht werden? (Achtung! Alle Daten gehen unwiderruflich verloren)"; $lang['L_PROCESSKILL1']="Es wird versucht, Prozess "; $lang['L_PROCESSKILL2']="zu beenden."; $lang['L_PROCESSKILL3']="Es wird seit "; $lang['L_PROCESSKILL4']=" Sekunde(n) versucht, Prozess "; $lang['L_HTACC_CREATE']="Verzeichnisschutz erstellen"; $lang['L_ENCRYPTION_TYPE']="Verschlüsselungsart"; $lang['L_HTACC_CRYPT']="Crypt maximal 8 Zeichen (Linux und Unix-Systeme)"; $lang['L_HTACC_MD5']="MD5 (Linux und Unix-Systeme)"; $lang['L_HTACC_NO_ENCRYPTION']="unverschlüsselt (Windows)"; $lang['L_HTACCESS8']="Es besteht bereits ein Verzeichnisschutz. 
Wenn Du einen neuen erstellst, wird der alte überschrieben!"; $lang['L_HTACC_NO_USERNAME']="Du musst einen Namen eingeben!"; $lang['L_PASSWORDS_UNEQUAL']="Die Passwörter sind nicht identisch oder leer!"; $lang['L_HTACC_CONFIRM_DELETE']="Soll der Verzeichnisschutz jetzt erstellt werden?"; $lang['L_HTACC_CREATED']="Der Verzeichnisschutz wurde erstellt."; $lang['L_HTACC_CONTENT']="Inhalt der Datei"; $lang['L_HTACC_CREATE_ERROR']="Es ist ein Fehler bei der Erstellung des Verzeichnisschutzes aufgetreten!<br>Bitte erzeuge die Dateien manuell mit folgendem Inhalt"; $lang['L_HTACC_PROPOSED']="Dringend empfohlen"; $lang['L_HTACC_EDIT']=".htaccess editieren"; $lang['L_HTACCESS18']=".htaccess erstellen in "; $lang['L_HTACCESS19']="Neu laden "; $lang['L_HTACCESS20']="Skript ausführen"; $lang['L_HTACCESS21']="Handler zufügen"; $lang['L_HTACCESS22']="Ausführbar machen"; $lang['L_HTACCESS23']="Verzeichnis-Listing"; $lang['L_HTACCESS24']="Error-Dokument"; $lang['L_HTACCESS25']="Rewrite aktivieren"; $lang['L_HTACCESS26']="Deny / Allow"; $lang['L_HTACCESS27']="Redirect"; $lang['L_HTACCESS28']="Error-Log"; $lang['L_HTACCESS29']="weitere Beispiele und Dokumentation"; $lang['L_HTACCESS30']="Provider"; $lang['L_HTACCESS31']="allgemein"; $lang['L_HTACCESS32']="Achtung! Die .htaccess hat eine direkte Auswirkung auf den Browser.<br>Bei falscher Anwendung sind die Seiten nicht mehr erreichbar."; $lang['L_PHPBUG']="Bug in zlib! Keine Kompression möglich!"; $lang['L_DISABLEDFUNCTIONS']="Abgeschaltete Funktionen"; $lang['L_NOGZPOSSIBLE']="Da zlib nicht installiert ist, stehen keine GZip-Funktionen zur Verfügung!"; $lang['L_DELETE_HTACCESS']="Verzeichnisschutz entfernen (.htaccess löschen)"; $lang['L_WRONG_RIGHTS']="Die Datei oder das Verzeichnis '%s' ist für mich nicht beschreibbar.<br> Entweder hat sie/es den falschen Besitzer (Owner) oder die falschen Rechte (Chmod).<br> Bitte setze die richtigen Attribute mit Deinem FTP-Programm. 
<br> Die Datei oder das Verzeichnis benötigt die Rechte %s.<br>"; $lang['L_CANT_CREATE_DIR']="Ich konnte das Verzeichnis '%s' nicht erstellen. Bitte erstelle es mit Deinem FTP-Programm."; $lang['L_TABLE_TYPE']="Typ"; $lang['L_CHECK']="prüfen"; $lang['L_HTACC_SHA1']="SHA1 (alle Systeme)"; $lang['L_OS']="Betriebssystem"; $lang['L_MSD_VERSION']="MySQLDumper-Version"; $lang['L_MYSQL_VERSION']="MySQL-Version"; $lang['L_PHP_VERSION']="PHP-Version"; $lang['L_MAX_EXECUTION_TIME']="Maximale Ausführungszeit"; $lang['L_PHP_EXTENSIONS']="PHP-Erweiterungen"; $lang['L_MEMORY']="Speicher"; $lang['L_FILE_MISSING']="konnte Datei nicht finden"; ?>
{ "pile_set_name": "Github" }
[Colitis in the non-functioning rectosigmoid after establishment of an end-sigmoid colostomy]. In 11 patients with a sigmoid end colostomy (and one additional patient with an end ileostomy), we examined the endoscopic and microscopic aspects of both the dysfunctioned bowel and the colon proximal to the colostomy. The latter showed signs of inflammation in none of the cases, while in 7 patients a remarkable or even severe colitis (mimicking ulcerative colitis) could be demonstrated endoscopically and/or histologically, irrespective of the duration of the dysfunction (one month up to 11 years). In the four patients in whom intestinal continuity was restored, macroscopic and microscopic findings of the rectal mucosa became normal as early as two weeks after reoperation. We conclude that the "dysfunctioned bowel colitis" is somehow related to the mucosa's contact with the fecal stream.
{ "pile_set_name": "PubMed Abstracts" }
I mentioned during my Mac Minute in Julie’s podcast interview with Jack de Golia that I recently put together a temporary sound booth that can be easily moved from location to location. (We sold our house, and are doing a series of house-sitting gigs while we look for a new home.) The setup is pretty simple. As a frame I’m using a set of “light panels” that were designed for holding fabric scrims for photography. These are nothing more than shock-corded PVC pipes arranged in 4×8-foot panels. You can do the same thing with PVC pipe and corner connectors from your local hardware store. If you don’t glue all of the pipe sections into the connectors, you can still disassemble the frames for easy storage. Across the top of the structure I’ve got two 3×3-foot frames, again from the light panel collection. Note that with two of the light panels I got supporting legs, which you see at the bottom of the image. If you use stiffer PVC pipe, maybe 3/4-inch size, you probably won’t need the extra bracing support. For additional rigidity I used velcro wraps and pipe clamps to hold the three vertical frames together. I also have velcro wraps holding the two 3×3-foot frames across the top of the structure. The sound absorption is handled by two layers of moving blankets. I’ve got three heavy quilted moving blankets (green/blue in the photos) and four lighter blankets from Harbor Freight. The blankets are held onto the frame using the large clamps shown in the image above. I hung the heavier moving blankets inside the frame, and the lighter (black) blankets outside the frame. This gives a little airspace between the two blanket layers to further absorb and “deaden” the sound. The booth looks a little funky, but the “boomy” sound from the room I’ve used it in so far is nicely reduced. I didn’t rig any light arrangement, so I’ve been using the booth with one of the top layers hung slightly over the front.
But I don’t have a blanket extending all the way to the floor as a “door.” The computer equipment is on the table next to the booth. I have to start a file recording and then walk into the booth, but that hasn’t been much of a hassle (except when I recorded a five-minute audiobook audition and realized I’d forgotten to push “Record”). With the booth completed, I used a heavy-duty photographic tripod to mount a Harlan Hogan Porta-Pro Plus box for my “stand-up mic” (a shotgun), and brought the “sit-down” mic arm in through a gap in the blankets. A book stand holds copy or the iPad I’ve been using for scripts, and a music stool completes the setup. If you’ve been wanting a booth but aren’t ready to build something permanent (and your spouse would like to have you and your equipment out of the walk-in closet), consider building your own temporary booth. Since I had the frames on hand, I haven’t priced doing something like this at the hardware store. But I’m going to guess that you could put something similar together for around $100, including the pipe, connectors, and moving blankets.
{ "pile_set_name": "OpenWebText2" }
1. Field of the Invention The present invention relates generally to removable safety rail systems installed around rooftops to prevent workers from falling to the ground below. More particularly, the invention relates to a modular stanchion holder for such a removable guard rail system that can be mounted on either a parapet or an overhanging ledge rooftop periphery, as necessary. 2. Description of the Related Art Construction sites are generally known to be very dangerous places. For this reason there are numerous federal and state laws that address the various health and safety issues associated with construction work and work conducted at construction sites, including rooftops and elevated areas. For instance, in the United States, Occupational Health and Safety Administration (OSHA) standards require contractors to install protective railings about a rooftop worksite according to specific guidelines. Moreover, state regulations and insurance companies mandate similar requirements. Because of this, several guardrail systems have been developed to comply with the many safety codes in existence. Among these is a safety rail system invented by the present applicant and disclosed in U.S. Pat. No. 6,053,281. As a part of providing such a safety rail system, it is necessary to temporarily install vertical stanchions or support posts at spaced intervals around the perimeter of the work area to support the horizontal rails of the safety rail system. A variety of stanchion holders are known from safety rail systems of the prior art. For example, in the safety rail system disclosed in U.S. Pat. No. 3,863,900, each stanchion is provided with a horizontal foot that serves as a fixed jaw portion cooperating with a positionable and adjustable jaw portion to form a clamp that adjusts to clamp along a vertical clamping direction, whereby the stanchion can be clamped to an overhanging ledge. U.S. Pat. No. 
3,995,833 teaches a safety rail system wherein each stanchion comprises a pair of telescopically adjustable tube segments, and each segment includes a jaw portion fixed thereto for clamping in a vertical direction to an overhanging ledge. A device for mounting a stanchion to a horizontal I-beam is described in U.S. Pat. Nos. 4,037,824 and 5,029,670, and includes a vertical stanchion-receiving sleeve fixed to a horizontal member having a fixed jaw portion and a movable jaw portion cooperating with the fixed jaw portion to clamp in a horizontal direction to a top leg of the I-beam. A safety rail system marketed by Protective Roofing Products Ltd. of Stoney Creek, Ontario, Canada, under the designation PR-100 provides a stanchion that is connectable at right angles to a mounting bracket that clamps in a horizontal direction, whereby the stanchion can be mounted to a parapet. Finally, it is known to use cement anchors or other fasteners to secure a stanchion holder to a structure. The clamping-style systems of the prior art lack versatility in that they are designed to mount only to an overhanging ledge or only to a parapet. Systems requiring anchors are time-consuming and often require special tools to install. Therefore, it is an object of the present invention to provide a stanchion holder that can be mounted on either a parapet or an overhanging ledge of a rooftop in a fast and simple manner to enable efficient installation of a safety rail system about the rooftop perimeter. It is another object of the present invention to provide a versatile stanchion holder that can be constructed from readily available component parts.
In view of these and other objects, a stanchion holder formed in accordance with a preferred embodiment of the present invention generally comprises a clamp having a first stanchion sleeve fixed thereto, and a right-angle stanchion sleeve adapter having a second stanchion sleeve and a male portion sized for removable receipt within the first stanchion sleeve. The first stanchion sleeve extends in a direction substantially orthogonal to a clamping direction of the clamp, such that a stanchion can be inserted vertically into the first stanchion sleeve when a horizontal clamping direction is required, as with clamping to a parapet, and the adapter is omitted. When the male portion of the stanchion sleeve adapter is received by the first stanchion sleeve, the second stanchion sleeve provided on the adapter extends in a direction substantially parallel to the direction of clamping, whereby a stanchion can be inserted vertically into the second stanchion sleeve and the clamp can be secured to an overhanging ledge by applying clamping force in a vertical direction. The clamp itself has a C-shaped frame including a spine portion, a first leg portion fixed with respect to the spine portion, and a second leg portion opposite the first leg portion and adjustable along the spine to change its distance from the first leg portion. The first sleeve portion is fixed to and extends along the first leg portion of the clamp frame. The male portion of the adapter and the stanchion are preferably held in place within a corresponding sleeve passage by transverse pins, however other means of releasably retaining these members are contemplated.
{ "pile_set_name": "USPTO Backgrounds" }
Knowledge, attitude, and practice about malaria: Socio-demographic implications for malaria control in rural Ghana. Despite continuing international attention to malaria prevention, the disease remains a global public health problem. We investigated socio-demographic factors influencing knowledge, attitudes, and practices about malaria in rural Ghana. Our survey covered 354 households. The mean knowledge score was higher among individuals whose households had been visited by volunteers educating them about malaria; families with 4-6 members; and males. Households with at least one child under five years of age also had significantly higher knowledge scores. Households with at least one pregnant woman evinced a positive attitude towards malaria prevention. National malaria control strategies have achieved positive results in the fight against malaria. Nonetheless, multipronged community-based health strategies that integrate malaria programs and population growth control initiatives may make it possible to reach the sustainable development goal of eliminating malaria by 2030.
{ "pile_set_name": "PubMed Abstracts" }
1. Technical Field The present invention relates to a pipeline-type analog-to-digital converter. 2. Related Art Conventionally, there has been known a pipeline-type analog-to-digital (A/D) converter in which A/D converting stages (stages) of a small bit number are cascade-connected and the digital values obtained in each stage are combined so as to obtain a final digital value. For example, refer to JP-A-2005-252326. In each stage, the input analog signal is quantized by a sub A/D converter to be converted into a digital signal, and the digital signal is then digital-to-analog converted by a sub D/A converter. The input analog signal and the analog signal produced by the sub D/A converter are subjected to a subtraction process. The resulting signal is amplified by an operational amplifier and output to the subsequent stage. In a pipeline-type A/D converter, good linearity is required, i.e., the relation between the analog input and the digital output should be a straight line. The linearity of the input and output signals, however, has not been thoroughly examined in the known patents. Thus, there has been a need to improve the input-output linearity.
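The per-stage operation described above (quantize with a sub-ADC, reconstruct with a sub-DAC, subtract, amplify, pass the residue on) can be sketched behaviorally. This is a generic, ideal 1-bit-per-stage model for illustration, not the circuit of the patent; practical designs typically use 1.5-bit stages with redundancy and digital error correction.

```python
def pipeline_adc(vin, n_stages):
    """Ideal pipeline A/D conversion of vin in [0, 1), one bit per stage.

    Each stage quantizes its input (sub-ADC), reconstructs that decision
    as an analog level (sub-DAC), subtracts it from the input, and
    amplifies the residue by the interstage gain of 2 for the next stage.
    """
    bits = []
    residue = vin
    for _ in range(n_stages):
        b = 1 if residue >= 0.5 else 0       # sub-ADC decision
        residue = 2.0 * (residue - 0.5 * b)  # subtract sub-DAC output, gain of 2
        bits.append(b)
    # Combine the per-stage digital values into the final output code.
    return sum(b << (n_stages - 1 - i) for i, b in enumerate(bits))
```

With ideal components the transfer curve is perfectly linear; gain error or comparator offset in any stage bends that line, which is exactly the input-output linearity concern the passage raises.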
{ "pile_set_name": "USPTO Backgrounds" }
The formation of cognitive maps of adjacent environments: evidence from the head direction cell system. In 2 experiments the authors tested whether the head direction (HD) cell system underlies a sense of direction maintained across environments. In Experiment 1, HD neurons failed to maintain their firing directions across T mazes in adjacent environments but rather reoriented to the T maze within each environment. Such reorientation suggests that familiar landmarks override an internal directional sense, so in Experiment 2 the authors recorded HD neurons as rats walked between novel and familiar "rooms" of a 4-chamber apparatus. In novel rooms, HD neurons maintained the firing direction of the preceding environment. However, in familiar rooms, HD neuron firing directions shifted to agree with the landmarks therein. With repeated experience, a homogeneous representation of all rooms developed in a subset of the rats.
{ "pile_set_name": "PubMed Abstracts" }
September 18/07 12:07 pm - BC Seniors Games
Posted by Editoress on 09/18/07
British Columbia Seniors Games, September 14-15, Nanaimo BC
Courtesy Peter McCaffery

ITT
A 16 km time trial was held on a safe, sheltered, undulating out-and-back course that climbed about 100 m to the turn. Some fine performances were produced by riders in all age groups. Total entry was 56 men and 20 women. Thanks to all the marshals and other volunteers who did a fantastic job, to Leigh Blaney for the medical service, Dave Kenny for all those signs, and to Corey Pikett, who had his first taste of timekeeping. No times or placings are currently available.

Road Race
This race was held on a hilly 20 km circuit in Cedar, starting at 9:00 am at Cedar Community Hall.

Women
80+ (1 lap): 1. Olwyn Ringheim 2:32:45
70-74 (2 laps): 1. Dora Ellis 2:21:17; 2. Jean Nelson; 3. Mary Ellen Pakka; DNS Helen Bourchier; DNS Kay Ogilvie
65-69 (2 laps): 1. Sandra Olafson 1:54:53; 2. Donna Nicholas; 3. Audrey Sellars
60-64 (2 laps): 1. Diana Rogers 1:49:16; 2. Jean MacDonald; 3. Hana Garrick; 4. Verena Balke; 5. Terry Chalmers; 6. Linda Scott; 7. Ann Kantakis; 8. Jan Schmidt
55-59 (2 laps): 1. Sandy Hartley 1:49:16; 2. Jannie Koomen; 3. Barbara Davies

Men
55-59 (3 laps): 1. Duane Martindale 1:47:30; 2. Chris Hahlen; 3. Michael Faulkner; 4. Jacob Koomen; 5. George McLaughlin; 6. Robert Hagar; DNS Michael Grant; DNF Douglas Morley; DNS Raymond Morrison; DNS Brooke Parker
60-64 (3 laps): 1. John Sullivan 1:47:55; 2. Des Snider; 3. David Mercer; 4. David Lloyd; 5. Derek Steel; 6. Bob MacLean; 7. Bill Davis; 8. Dick Allin; 9. Douglas Richardson; 10. Jim Henderson; DNF Harry Balke; DNF Roy Kregosky; DNS Harry Balke; DNS Malcolm Farrow; DNS Michael Fibiger-Crossman; DNS Bill Hutchinson; DNS Brendan Kennelly; DNS Bob Lindsay
65-69 (2 laps): 1. David Emery 1:22:30; 2. David Steen; 3. Dave Ellis; 4. Matt de Nys; 5. Leo Le Couteur; 6. Dick Patterson; DNS Gerry Goodleff
70-74 (2 laps): 1. Ian Mahon 1:34:27; 2. Franco Crema; 3. Terry Stone; 4. Norman Kendall; 5. John C Smith; 6. Rino De Biasio; 7. Cesario Cifilillini; 8. Frank Ludtke; 9. Robert Dumalanede; 10. Gus McCarthy; 11. Jim Lister; 12. Torry Kier; DNS Paul Hendricks; ?. Barton Mann
75-79 (2 laps): 1. Robert Thom 1:39:55; 2. Peter Blockker; DNS Barry Ogilvie
80+ (2 laps): 1. Bob Allen 1:49:16; 2. Tom Humeniuk; 3. Eldo Neufeld; DNS Doug Bentley; DNS Harold Bridge

Hill Climb
This event was contested on a 2.15 km hill that averaged a 7-8 percent grade, with one 10% and one 13% section in the middle. As you can see from the times, some of the competitors did rides worthy of riders half or even a third of their age. The top men in the younger categories averaged close to 18 km/h!
{ "pile_set_name": "Pile-CC" }
Q: How high (height-wise) should the oil be for frying chicken? I thought the point of fried chicken was to have enough oil to deep-fry it, but I've seen a lot of recipes that say to fry the chicken for x minutes, then flip it over and fry for y minutes. Does this mean that for recipes that involve flipping we don't want the oil too high (height-wise), or does flipping make a difference even when the chicken is completely covered in oil? A: Deep frying and shallow frying both work. At home, when using oil in a wok (the safest way because of the sloping sides), I flip whether the oil is deep or shallow. This is just to ensure even browning. For shallow frying, I would use an amount of oil that is at least half the thickness of the chicken.
{ "pile_set_name": "StackExchange" }
Smoke-free Father’s Day
Father’s Day is the day we set aside each year to celebrate and thank dad for his many contributions. However, one thing that should not be celebrated, this year or any time, is dad’s addiction to tobacco. A father’s influence on his children is great, and as a dad he should want to ensure that the decisions he makes are the right ones, so that they do not negatively impact his children. When dad smokes, he is not only putting his life at risk, but is signaling to his family that it is okay for them to follow in his footsteps. The Wisconsin Tobacco Prevention and Poverty Network (WTPPN) is working to get the message out this Father’s Day that smoking rates among people in diverse populations are decreasing at a slower rate than in the overall population. Therefore it is imperative that we as a community become aware of the tobacco industry’s savvy marketing tactics, which target specific areas of Milwaukee in an effort to keep dad and his family addicted. The love a father feels from his children on Father’s Day should be enough to make him want to quit smoking, and as people who love their fathers unconditionally, we should all encourage him to give up this deadly habit, and support him if he is unable to quit on his own.
{ "pile_set_name": "Pile-CC" }
Web hosting options and trade-offs

Shopify and BigCommerce are two of the fastest growing and most well-known hosted eCommerce platforms. Ecommerce options exist on a spectrum of convenience and control. Both Shopify and BigCommerce sit right in the middle of the spectrum because they take all the technical parts of an online store — hosting, speed, security, inventory, shopping cart and payment processing — and bundle them into a single monthly price. But like a self-hosted eCommerce website, Shopify and BigCommerce also run as part of your website on your own domain, where you have full control of product, pricing and customer experience. So unlike running a store on Etsy, eBay or Amazon — you control the build, design, and content of your store. Even though this part of the spectrum has plenty of tradeoffs — services like Shopify and BigCommerce are an excellent option for many store owners. So the question becomes — Shopify vs. BigCommerce? They are both excellent companies with an excellent product. Aside — I built a Buzzfeed-style quiz for eCommerce platforms that grades these factors against your goals. You can check out the quiz here. Also, a quick disclosure — I receive referral fees from companies mentioned on this website. All data and opinions are based on my experience as a paying customer or consultant to a paying customer. That said, their plan structure is just different enough to make a direct comparison a bit difficult. Both have apps and themes — but these tend to be either free or expensive. However, once you start factoring in apps, themes, credit card rates, and mid-tier features such as HTTPS and cart recovery — then Shopify is a better overall value for most stores. Either way — price is not the deciding factor for Shopify vs. BigCommerce. Customer support is one of the most underestimated benefits of using a hosted eCommerce platform.
Both BigCommerce and Shopify have customer support built into their monthly price. You get access to all sorts of channels on both — everything from phone to chat to forums to email tickets. Shopify and BigCommerce both serve businesses that range from very small retailers selling niche products to multi-million dollar brands. Both have enterprise plans (I wrote about Shopify Plus here) and they both have customer support teams trained to help absolute beginners. Shopify's platform is built to serve all retail businesses, both on and offline. Shopify runs their own payment processing service and even has their own Point-of-Sale (POS) system so that small offline retailers can sell offline and online from within the same system. The idea is that your website is only one of many sales channels. You can definitely run your website as your only sales channel in Shopify — but the options to sell elsewhere are already built-in. BigCommerce has plenty of integrations with eBay, Facebook, etc. — but they are still treated as an extension of the website. Both BigCommerce and Shopify are excellent platforms for beginners to enterprises. Both BigCommerce and Shopify have excellent onboarding processes and user-friendly management areas. The main difference is how each backend is structured. BigCommerce has a single Dashboard where you manage everything — your products, inventory, website pages, settings, billing, etc. Additionally, Shopify has their own lingo. If you have never run a website before and only have a small to mid-size product collection, then BigCommerce will likely make more sense than Shopify. Both Shopify and BigCommerce have almost all the tools (marketing, SEO, inventory, orders, etc.) an online store would need to be successful. They differ, though, in how they each approach adding new features. Shopify has essential features that all store owners will need built in.
But for features that not all store owners need — they focus on making sure store owners can add feature extensions to their store as needed. They have a large and active App Store that not only has well-known extensions (ie, MailChimp) but also plenty of indie apps for every situation (ie, apps for international tax and shipping features). BigCommerce has an App Store for extensions as well. However, BigCommerce has a bigger focus on building lots of features directly into their software so that there is no need to add an extension. For example, take selling on eBay or importing your eBay listings to your store. Both Shopify and BigCommerce can make these features happen. BigCommerce builds the feature into their backend. Shopify does not have it built in. Another example is bulk redirects. BigCommerce has bulk upload built in, while Shopify users have to install an app to take care of it. But if you do need something beyond the basics, you are more likely to get it in some form or fashion in Shopify than BigCommerce.
They are likely a better fit for most online store owners. If you are undecided — then take my eCommerce Platform Quiz here. It will take your preferences and tell you web hosting options and trade-offs is the best choice for your online store. I try to help people who run their own websites Skip to the conclusion here. Price Ahh — price. Customer Support Customer support is one of the most underestimated benefits of using a hosted eCommerce platform. They have videos and screenshots for even small changes on the Dashboard whereas Shopify will have text instructions. BigCommerce comes across as more beginner-friendly. Shopify has more thorough and instructive content on running your overall business. They invest a lot of time and resources in case studies, long-form guides, tutorials, and helping your business succeed beyond just implementing a new feature. Shopify web hosting options and trade-offs has a more well-developed network of 3rd party developers and marketers who specialize in Shopify. Customer Focus Shopify and BigCommerce both serve businesses that range from very small retailers selling niche products to multi-million dollar brands. Approach to Features Both Shopify and BigCommerce have almost all the tools marketing, SEO, inventory, order, etc an online store would need to be successful. While the end result web hosting options and trade-offs the same, they do take a slightly different approach. Ecommerce options exist on a spectrum of convenience and control. Both Shopify and Volusion are right in the middle of the spectrum because they bundle all the technical parts of an online store — hosting, speed, security, inventory, shopping cart and payment processing — and bundle it into a single monthly price. But like a self-hosted eCommerce websiteShopify and Volusion also bundle as part of your website on your domain where you have full control of products, pricing, and customer experience. 
So unlike running a store on Etsy, eBay or Amazon — you control the build, design, and content of your store. Even though this part of the spectrum has plenty of tradeoffs — services like Shopify and Volusion are an excellent option for many store owners. So the question becomes — Shopify vs. They are both excellent companies with an excellent product. Aside — I built a Buzzfeed style quiz for eCommerce platforms that grades the factors with your goals. You can check out the quiz here. Also, a quick disclosure — I receive referral fees from companies mentioned on this website. All data and opinions are based on my experience as a paying customer or consultant to a paying customer. That said, their plan structure is just different enough to make a direct comparison a bit difficult. First is your monthly store fee. Both Volusion and Shopify are generally the same. Volusion is slightly cheaper, but they web hosting options and trade-offs do not include every single feature on lower tiers that Shopify does. Second is your store transaction fee. Otherwise, their transaction fees are the same. The third is your credit card processing fees. If you use a 3rd party processor like Authorize. If you use Shopify, you can use Shopify payments for 2. If you plan on using a 3rd party processor ie, for price or for sticking with your current providerthen Volusion will be about the same — or even slightly cheaper than Shopify every month. Either way web hosting options and trade-offs price is not the deciding factor for Shopify vs. Customer support is one of the most underestimated benefits of using a hosted eCommerce web hosting options and trade-offs. Both Volusion and Shopify have customer support built into their monthly price. You get access to all sorts of channels on both — everything from phone web hosting options and trade-offs chat to forums web hosting options and trade-offs email tickets. 
In other words, Shopify has a bit more of a learning curve to learn their system, but once you learn it — you can do more with it. Shopify and Volusion both serve businesses that range from very small retailers selling niche products to multi-million dollar brands. Both have enterprise plans (I wrote about Shopify Plus here) and they both have customer support teams trained to help absolute beginners. Shopify's platform is built to serve all retail businesses both on and offline — but with a focus on startups or online-first businesses that want to expand offline. Shopify runs their own payment processing service and even has their own Point-of-Sale (POS) system so that small offline retailers can sell offline and online from within the same system. The idea is that your website is only one of many sales channels. You can definitely run your website as your only sales channel in Shopify — but the options to sell elsewhere are already built in.

Volusion's backend and terminology are all focused on the store owner who has an existing retail business and needs to bring it online. They have a robust inventory system with a focus on the operations of an eCommerce store rather than the marketing of an eCommerce store. They have straightforward functionality to bring on team members to manage listings and inventory. Both Volusion and Shopify are excellent platforms for startups to enterprise. Both Volusion and Shopify have excellent onboarding processes and user-friendly management areas. The main difference is how each backend is structured. Volusion has a single Dashboard where you manage everything — your products, inventory, website pages, settings, billing, etc. Additionally, Shopify has their own lingo. If you have never run a website before and only have a small to mid-size product collection, then Volusion will likely make more sense than Shopify.
Both Shopify and Volusion have almost all the tools (marketing, SEO, inventory, order management, etc.) an online store would need to be successful. They differ though in how they each approach adding new features. Shopify has essential features that all store owners will need built in. But for features that not all store owners need — they focus on making sure store owners can add feature extensions to their store as needed. They have a large and active App Store that not only has well-known extensions (i.e., MailChimp) but also plenty of indie apps for every situation (i.e., apps for international tax and shipping features). Volusion has an App Store for extensions as well. However, Volusion has a bigger focus on building lots of features directly into their software so that there is no need to add an extension. For example, take selling on Amazon or importing your Amazon listings to your store. Both Shopify and Volusion can make these features happen. Volusion builds the feature into their backend. Shopify does not have it built in. But if you need something less common, you are more likely to get it in some form or fashion in Shopify than Volusion. Volusion, though, is decidedly lacking. Again — a CMS is not in itself a huge deal. Overall, if you have fairly core eCommerce needs and simply want everything to be there and to work — then Volusion will likely work better. Aside — this is why I recommend doing a 2-week free trial with both Volusion and Shopify just to click around and see for yourself.

You select a base theme and then edit it to look as you like. You can do it via drag and drop or via a hybrid approach to editing. Volusion has a Theme Store that is rapidly growing. However, it still lacks the diversity of Shopify. Their price points for premium themes are usually higher as well. Volusion — who is a better fit for who?
Get a free trial with Volusion here. Get a free trial with Shopify here. I personally like the versatility and options of Shopify. They are likely a better fit for most online store owners. If you are undecided — then take my Ecommerce Platform Quiz here. It will take your preferences and tell you which platform is the best choice for your online store. I try to help people who run their own websites. Shopify vs. Volusion Ecommerce Platform Comparison. Skip to the conclusion here.

Price

Ahh — price. The main tradeoff comes from fees — and there are 3 different types of fees to consider.

Customer Support

Customer support is one of the most underestimated benefits of using a hosted eCommerce platform. All customer support is customized since both run on proprietary platforms. Volusion has videos and screenshots for even small changes on the Dashboard, whereas Shopify will have text instructions. Volusion comes across as more beginner-friendly due to onboarding and heavy consultant walk-throughs. Shopify has more thorough and instructive content on running your overall business. They invest a lot of time and resources in case studies, long-form guides, tutorials, and helping your business succeed beyond just implementing a new feature. Shopify also has a more well-developed network of 3rd party developers and marketers who specialize in Shopify. And often, those other platforms will actually provide support for the Shopify integrations.

Customer Focus

Shopify and Volusion both serve businesses that range from very small retailers selling niche products to multi-million dollar brands.

Approach to Features

Both Shopify and Volusion have almost all the tools (marketing, SEO, inventory, order management, etc.) an online store would need to be successful. While the end result is the same, they do take a slightly different approach. Why not explore more? Featured Ecommerce Resources: Shopify vs.
No Infinite Web Hosting Here: Yet once they graduate, their accounts are deleted. In the past, I have listed my favorite options, but I realized there must be more. So I asked for suggestions on Twitter. My requirements are as follows. If you know of others, please send me a note!

This great service reimagines the late Geocities for a modern age. Instead, try Soundcloud and Vimeo, respectively. Yes, GitHub will host your static site, of any size and using a custom domain, for free! Having to learn git might be considered a downside. It would probably be good for you in the long run, but undoubtedly there is a learning curve. Fortunately, you can avoid the command line completely by using a free GUI client. At the free level, all repos are publicly viewable, so your entire website would be open to the public by default — no secrets here.

While traditional hosting companies charge a flat monthly rate and then cap your total usage, Amazon S3 charges you only for what you use. You just need a credit card. Fortunately, Paul Katsen wrote a great step-by-step guide to setting up a static site on S3. True penny-pinchers can even take steps to optimize their sites even further.

A project out of the University of Mary Washington, this service is built specifically and only for students, faculty, and institutions. Presumably, SFTP is supported (yay). They are so great. Even better, this plan includes 5 GB of file space, and you can designate individual folders within that space to be served as standalone, static websites. So, really, a FastMail account is awesome email and simple web hosting rolled into one. Plus, you can use SFTP to access all your files in the usual fashion. Instead, you just ZIP up your site folder and drag-and-drop it onto their web interface. Dropbox is free, up to 2 GB, and any folder you make public is served as a static website.
Cactus looks like a beautiful way to develop your site on a Mac, and it has built-in integration with S3 for publishing. Divshot is free for a single, static-files-only website (including use of a custom domain), but requires use of a command-line interface to configure your site and deploy files. Pagoda Box is all git-powered, techie, and scalable. Pagoda Box hosts this website, alignedleft. Heroku is conceptually similar but even techie-er than Pagoda Box. But still, you can get the basic service free. Thanks to the many people who chimed in on Twitter with ideas, including John C.

So each May, one or two seniors on the verge of graduation will ask: I want to keep my website. Where can I host my website cheaply? My requirements are as follows:

- Free or cheap, to fit within the budget of a student or recent grad. We can do cheaper than that.
- Must support custom domains. Design students will use this for online portfolios, so they need to use myname.
- Static file hosting is okay. I am looking for bare-bones here.
- Low-bandwidth and low-storage are okay. These are for personal sites to be viewed by friends and potential employers.
- SFTP access is ideal. The ideal solution would also allow SFTP connections, to maintain a familiar workflow.

Free: Neocities — this great service reimagines the late Geocities for a modern age.

Cheap: Reclaim Hosting — a project out of the University of Mary Washington, this service is built specifically and only for students, faculty, and institutions.
There is a great deal of speculation surrounding the 'Nóos case' and the possible sentencing of Iñaki Urdangarin. The former Duke consort of Palma could choose the prison in which to serve his sentence based on its proximity to his family or on the possibility of more favorable treatment. Which prison will he go to? Will he have privileges? Under what grade will he enter? Who makes the final decision on the prison? The questions are many, although the truth is that few of the variables are in the hands of the Infanta's husband. As for the penitentiary, experts are weighing quite a few possibilities. If he does not turn himself in voluntarily, the logical outcome would be for him to enter a prison in the province where his family life takes place and which desocializes him the least. On the other hand, if he turned himself in before the deadline to enter prison, he could choose which prison to serve his sentence in. "Normally, if you enter voluntarily, you can choose which prison to go to," says Joan Josep Queralt, professor of Criminal Law in Barcelona. The options are the Guadalajara prison, which has few inmates, many of them police officers; the Álava prison, for being close to his family; or one where his lawyers can secure more favorable treatment for him. But can he have privileges? "Urdangarin is not even a member of the Royal Family, so he cannot have any kind of privilege; he will be a prisoner like everyone else," says Queralt, who adds that "not even the Royal Family would have privileges." His lawyers may, however, manage to place him in a 'respect module.' Such a module operates completely independently from the rest of the prison. It has its own gallery and commissary, along with a library, dining hall and courtyard. The inmates in that section have no contact of any kind with the others.

This system was introduced in 2001 at the Mansilla de las Mulas prison in León, with the aim of achieving a climate of coexistence comparable, in terms of norms, values, habits and forms of interaction, to that of any normalized social group. Today, all Spanish prisons have this type of regime.

Third grade

"This is a security module for prominent people. For example, Commissioner Villarejo, being a police officer, is not held with the other prisoners, but this is not required by law. Whether Urdangarin needs this or not is decided by the prison director, depending on whether anything endangers the prisoner's safety," explains the professor, who adds that "if he does enter prison, he will in any case be the most scrutinized prisoner in the world." As for the grade under which he would enter prison, Queralt is clear: "In first grade, like everyone else, and if the sentence is under five years he will probably move quickly to third grade with conditional release, thanks to the reform the PP passed in 2003."
General counsel

A general counsel, chief counsel, or chief legal officer (CLO) is the chief lawyer of a legal department, usually in a company or a governmental department. In a company, the person holding the position typically reports directly to the CEO, and their duties involve overseeing and identifying the legal issues in all departments and their interrelation, including engineering, design, marketing, sales, distribution, credit, finance, human resources and production, as well as corporate governance and business policy. In most cases this naturally requires reporting directly to the owner or CEO who oversees the very business with which the CLO is expected to be familiar and on which the CLO advises at the most confidential level. This requires the CLO/general counsel to work closely with each of the other officers, and their departments, to stay appropriately informed and to advise. Historically, general counsel often handled administrative tasks while outside lawyers in private practice handled more complex legal work. Since the 1980s, however, the general counsel position has become increasingly prominent in multinational companies, often directly advising the board of directors in place of outside lawyers. General counsel are now often among the most highly paid executives of major American corporations, and prominent American government lawyers and law firm partners are often hired for general counsel roles at prominent companies. Similar trends are also being seen in the United Kingdom and other countries. General counsel often have broad roles encompassing crisis management, compliance reporting management and public policy advocacy. Many companies also hire in-house counsel to handle specialized tasks such as tax work, mergers and acquisitions, labor law and intellectual property, sometimes building in-house practice groups that rival the practices of major law firms.
Organizations

Global

The Association of Corporate Counsel

The Association of Corporate Counsel ("ACC") has 35,000 general counsel, chief legal officer and other in-house counsel members located in 90 countries. ACC was founded as the American Corporate Counsel Association in 1982 and now includes more than 55 chapters, including in Argentina, Canada (four chapters), Europe, Israel, the Middle East and Singapore. Members have access to networking opportunities and education events through their regional chapter affiliations as well as global connections across practice area, job title and industry. ACC provides members with resources to deliver services and advice to their companies, promotes the value of in-house legal services and advocates on behalf of general counsel. For its general counsel and chief legal officer members, ACC hosts roundtables where members discuss current practice trends and issues.

United States

The General Counsel Forum

The Forum is an association of 700 general counsel and senior managing counsel. The non-profit organization was founded in the fall of 1998 as the Dallas-Fort Worth General Counsel's Management Practices Forum ("DFWGCMPF"). The association is a partnership between in-house members and outside counsel, known as underwriters. Members are general counsel and managing counsel of corporations, non-profit organizations and government agencies. The mission of the Forum is to improve the professional lives of general counsel and managing counsel through meaningful opportunities for peer-to-peer interaction and knowledge exchange, mentoring through professional development in legal best practices, ethics, governance, and compliance. In November 2000, the DFWGCMPF changed its name to The Texas General Counsel Forum, also known as The Forum, and in the following year the Houston Chapter was formed, and then the Austin-San Antonio Chapter was founded.
In July 2005, the Forum hired a Chief Executive Officer with the mandate to improve the efficiency and effectiveness of the organization, expand membership, and launch the organization nationally. In November 2009, the Board of Directors approved expanding the Forum nationally, and dropped the reference to Texas, becoming simply The General Counsel Forum. In the fall of 2012, the General Counsel Forum founded the Chicago Chapter.

Silicon Valley Association of General Counsel

The Silicon Valley Association of General Counsel (SVAGC) is a business league of chief legal officers from over 100 leading companies in the technology and life science sectors. Member companies include publicly traded corporations, private ventures and multinational subsidiaries located throughout the San Francisco Bay Area with operations in software, electronics, power technology, biotechnology, medical devices, health informatics, analytics, materials science, cleantech, fintech, telecommunications, network infrastructure, e-commerce and Internet services, artificial intelligence and machine learning. The SVAGC hosts a series of monthly luncheons featuring expert presentations and off-the-record discussion about topics of professional interest. It also assists members who wish to survey their peers or pose questions on particular issues, and cooperates in special projects such as the All Hands Meeting, an annual multi-track conference at the Santa Clara Convention Center, attended by general and staff counsel from hundreds of member and nonmember legal departments in the technology and life science sectors. The SVAGC is a successor to the Peninsula Association of General Counsel (PAGC), formed in the early 1980s. In 2003, the SVAGC was formally organized as a California mutual benefit nonprofit corporation with the assistance of Ivy Associates, a consultancy to the Silicon Valley legal community that provides organizational support for the SVAGC and produces the All Hands Meeting.
Only SVAGC members, speakers and invited guests may attend monthly luncheons, which are funded by modest membership dues. Individuals may join the SVAGC when they are the chief legal officer of a company with operations in Northern California that is publicly traded or which meets alternative criteria for private companies. Some SVAGC members serve as general counsel for companies headquartered outside Northern California, and attend meetings when business travel brings them into the SF Bay Area.

United Kingdom

In the United Kingdom a group of general counsel, called the GC100, was officially launched on 9 March 2005 and brings together the senior legal officers of more than eighty-five FTSE 100 companies. The GC100 group was created in response to the increasing volume and complexity of domestic and international law and regulation which impacts on UK listed companies. The group was formed with the support of Practical Law Company which acts as its secretariat. The main objectives of the GC100 are to:

- Provide a forum for practical and business focused input on key areas of legislative and policy reform common to UK listed companies.
- Enable members to share best practice in relation to law, risk management, compliance and other areas of common interest.

Membership of the GC100 is by invitation only. At the AGM on 16 January 2007 members voted in favour of extending membership to company secretaries as well as general counsel in the FTSE 100. The formal name of the GC100 is now "The Association of General Counsel and Company Secretaries of the FTSE100", although it will continue to be known as the GC100. Mark Harding, the first chair of the GC100, has stated that the GC100 is not a campaigning body, although they work closely with the FD100 (a similar grouping of blue chip finance directors).
See also

Corporation counsel

External links

In-House Legal Podcast - Interviews with leading GCs
The General Counsel Forum
Silicon Valley Association of General Counsel
All Hands Meeting
[**The srank Conjecture on Schur’s $Q$-Functions**]{} William Y. C. Chen$^{1}$, Donna Q. J. Dou$^2$,\ Robert L. Tang$^3$ and Arthur L. B. Yang$^{4}$\ Center for Combinatorics, LPMC-TJKLC\ Nankai University, Tianjin 300071, P. R. China\ $^{1}$[chen@nankai.edu.cn]{}, $^{2}$[qjdou@cfc.nankai.edu.cn]{}, $^{3}$[tangling@cfc.nankai.edu.cn]{}, $^{4}$[yang@nankai.edu.cn]{} **Abstract.** We show that the shifted rank, or srank, of any partition $\lambda$ with distinct parts equals the lowest degree of the terms appearing in the expansion of Schur’s $Q_{\lambda}$ function in terms of power sum symmetric functions. This gives an affirmative answer to a conjecture of Clifford. As pointed out by Clifford, the notion of the srank can be naturally extended to a skew partition $\lambda/\mu$ as the minimum number of bars among the corresponding skew bar tableaux. While the srank conjecture is not valid for skew partitions, we give an algorithm to compute the srank. **MSC2000 Subject Classification:** 05E05, 20C25 Introduction ============ The main objective of this paper is to answer two open problems raised by Clifford [@cliff2005] on sranks of partitions with distinct parts, skew partitions and Schur’s $Q$-functions. For any partition $\lambda$ with distinct parts, we give a proof of Clifford’s srank conjecture that the lowest degree of the terms in the power sum expansion of Schur’s $Q$-function $Q_{\lambda}$ is equal to the number of bars in a minimal bar tableau of shape $\lambda$. Clifford [@cliff2003; @cliff2005] also proposed an open problem of determining the minimum number of bars among bar tableaux of a skew shape $\lambda/\mu$. As noted by Clifford [@cliff2003], this minimum number can be naturally regarded as the shifted rank, or srank, of $\lambda/\mu$, denoted $\mathrm{srank}(\lambda/\mu)$. Given a skew bar tableau, we present an algorithm that produces another skew bar tableau without increasing the number of bars.
This algorithm eventually leads to a bar tableau with the minimum number of bars. Schur’s $Q$-functions arise in the study of the projective representations of symmetric groups [@schur1911], see also, Hoffman and Humphreys [@hofhum1992], Humphreys [@humphr1986], J$\rm{\acute{o}}$zefiak [@jozef1989], Morris [@morri1962; @morri1979] and Nazarov [@nazar1988]. Shifted tableaux are closely related to Schur’s $Q$-functions analogous to the role of ordinary tableaux to the Schur functions. Sagan [@sagan1987] and Worley [@worley1984] have independently developed a combinatorial theory of shifted tableaux, which includes shifted versions of the Robinson-Schensted-Knuth correspondence, Knuth’s equivalence relations, Schützenberger’s jeu de taquin, etc. The connections between this combinatorial theory of shifted tableaux and the theory of projective representations of the symmetric groups are further explored by Stembridge [@stemb1989]. Clifford [@cliff2005] studied the srank of shifted diagrams for partitions with distinct parts. Recall that the rank of an ordinary partition is defined as the number of boxes on the main diagonal of the corresponding Young diagram. Nazarov and Tarasov [@naztar2002] found an important generalization of the rank of an ordinary partition to a skew partition in their study of tensor products of Yangian modules. A general theory of border strip decompositions and border strip tableaux of skew partitions is developed by Stanley [@stanl2002], and it has been shown that the rank of a skew partition is the least number of strips to construct a minimal border strip decomposition of the skew diagram. Motivated by Stanley’s theorem, Clifford [@cliff2005] generalized the rank of a partition to the rank of a shifted partition, called srank, in terms of the minimal bar tableaux. On the other hand, Clifford has noticed that the srank is closely related to Schur’s $Q$-function, as suggested by the work of Stanley [@stanl2002] on the rank of a partition. 
Stanley introduced a degree operator by taking the degree of the power sum symmetric function $p_{\mu}$ as the number of nonzero parts of the indexing partition $\mu$. Furthermore, Clifford and Stanley [@clista2004] defined the bottom Schur functions to be the sum of the lowest degree terms in the expansion of the Schur functions in terms of the power sums. In [@cliff2005] Clifford studied the lowest degree terms in the expansion of Schur’s $Q$-functions in terms of power sum symmetric functions and conjectured that the lowest degree of the Schur’s $Q$-function $Q_{\lambda}$ is equal to the srank of $\lambda$. Our first result is a proof of this conjecture. However, in general, the lowest degree of the terms, which appear in the expansion of the skew Schur’s $Q$-function $Q_{\lambda/\mu}$ in terms of the power sums, is not equal to the srank of the shifted skew diagram of $\lambda/\mu$. This is different from the case for ordinary skew partitions and skew Schur functions. Instead, we will take an algorithmic approach to the computation of the srank of a skew partition. It would be interesting to find an algebraic interpretation in terms of Schur’s $Q$-functions. Shifted diagrams and bar tableaux {#sect2} ================================= Throughout this paper we will adopt the notation and terminology on partitions and symmetric functions in [@macdon1995]. A *partition* $\lambda$ is a weakly decreasing sequence of positive integers $\lambda_1\geq \lambda_2\geq \ldots\geq \lambda_k$, denoted $\lambda=(\lambda_1, \lambda_2, \ldots, \lambda_k)$, and $k$ is called the *length* of $\lambda$, denoted $\ell(\lambda)$. For convenience we may add sufficient 0’s at the end of $\lambda$ if necessary. If $\sum_{i=1}^k\lambda_i=n$, we say that $\lambda$ is a partition of the integer $n$, denoted $\lambda\vdash n$. 
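As a small computational illustration (ours, not part of the paper), the partitions of $n$ with distinct parts can be enumerated by choosing the largest part first and recursing on the remainder with strictly smaller parts:

```python
def strict_partitions(n, max_part=None):
    """Yield all partitions of n into distinct parts, largest part first."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    # choose the largest part k, then partition n - k into distinct parts < k
    for k in range(min(n, max_part), 0, -1):
        for rest in strict_partitions(n - k, k - 1):
            yield (k,) + rest

# The six strict partitions of 8:
print(sorted(strict_partitions(8), reverse=True))
# [(8,), (7, 1), (6, 2), (5, 3), (5, 2, 1), (4, 3, 1)]
```

For $n=8$ this produces six partitions, which is the expected count of strict partitions of $8$.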
For each partition $\lambda$ there exists a geometric representation, known as the Young diagram, which is an array of squares in the plane justified from the top and left corner with $\ell(\lambda)$ rows and $\lambda_i$ squares in the $i$-th row. A partition is said to be *odd* (resp. even) if it has an odd (resp. even) number of even parts. Let $\mathcal{P}^o(n)$ denote the set of all partitions of $n$ with only odd parts. We will call a partition *strict* if all its parts are distinct. Let $\mathcal {D}(n)$ denote the set of all strict partitions of $n$. For each partition $\lambda\in \mathcal {D}(n)$, let $S(\lambda)$ be the shifted diagram of $\lambda$, which is obtained from the Young diagram by shifting the $i$-th row $(i-1)$ squares to the right for each $i>1$. For instance, Figure \[shifted diagram\] illustrates the shifted diagram of shape $(8,7,5,3,1)$. (180,120) (10,100)[(1,0)[160]{}]{} (10,80)[(1,0)[160]{}]{} (30,60)[(1,0)[140]{}]{}(50,40)[(1,0)[100]{}]{} (70,20)[(1,0)[60]{}]{}(90,0)[(1,0)[20]{}]{} (10,80)[(0,1)[20]{}]{} (30,60)[(0,1)[40]{}]{}(50,40)[(0,1)[60]{}]{} (70,20)[(0,1)[80]{}]{}(90,0)[(0,1)[100]{}]{} (110,0)[(0,1)[100]{}]{}(130,20)[(0,1)[80]{}]{} (150,40)[(0,1)[60]{}]{}(170,60)[(0,1)[40]{}]{} Given two partitions $\lambda$ and $\mu$, if for each $i$ we have $\lambda_i\geq \mu_i$, then the skew partition $\lambda/\mu$ is defined to be the diagram obtained from the diagram of $\lambda$ by removing the diagram of $\mu$ at the top-left corner. Similarly, the skew shifted diagram $S(\lambda/\mu)$ is defined as the set-theoretic difference of $S(\lambda)$ and $S(\mu)$. Now we recall the definitions of bars and bar tableaux as given in Hoffman and Humphreys [@hofhum1992]. Let $\lambda\in \mathcal {D}(n)$ be a partition with length $\ell(\lambda)=k$. 
Fixing an odd positive integer $r$, three subsets $I_{+}, I_{0}, I_{-}$ of integers between $1$ and $k$ are defined as follows: $$\begin{aligned} I_{+}& = &\{i: \lambda_{j+1}<\lambda_i-r<\lambda_j\: \mbox{for some } j\leq k,\: \mbox {taking}\:\lambda_{k+1}=0\},\\[5pt] I_{0} & = & \{i: \lambda_i=r\},\\[5pt] I_{-} & = & \{i: r-\lambda_{i}=\lambda_j \:\mbox{for some} \:j\: \mbox{with} \:i<j\leq k\}.\end{aligned}$$ Let $I(\lambda,r)=I_{+}\cup I_{0}\cup I_{-}$. For each $i\in I(\lambda,r)$, we define a new strict partition $\lambda(i,r)$ of $\mathcal {D}(n-r)$ in the following way: - If $i\in I_{+}$, then $\lambda_i>r$, and let $\lambda(i,r)$ be the partition obtained from $\lambda$ by removing $\lambda_i$ and inserting $\lambda_i-r$ between $\lambda_j$ and $\lambda_{j+1}$. - If $i\in I_{0}$, let $\lambda(i,r)$ be the partition obtained from $\lambda$ by removing $\lambda_i$. - If $i\in I_{-}$, then let $\lambda(i,r)$ be the partition obtained from $\lambda$ by removing both $\lambda_i$ and $\lambda_j$. Meanwhile, for each $i\in I(\lambda,r)$, the associated $r$-bar is given as follows: - If $i\in I_{+}$, the $r$-bar consists of the rightmost $r$ squares in the $i$-th row of $S(\lambda)$, and we say that the $r$-bar is of Type $1$. - If $i\in I_{0}$, the $r$-bar consists of all the squares of the $i$-th row of $S(\lambda)$, and we say that the $r$-bar is of Type $2$. - If $i\in I_{-}$, the $r$-bar consists of all the squares of the $i$-th and $j$-th rows, and we say that the $r$-bar is of Type $3$. For example, as shown in Figure \[bar tableau\], the squares filled with $6$ are a $7$-bar of Type $1$, the squares filled with $4$ are a $3$-bar of Type $2$, and the squares filled with $3$ are a $7$-bar of Type $3$. 
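To make the three cases concrete, the following Python sketch (our own illustration, not part of the paper) computes $I_{+}$, $I_{0}$, $I_{-}$ and the reduced partition $\lambda(i,r)$ directly from the definitions, using $1$-based indices, the convention $\lambda_{k+1}=0$, and the example $\lambda=(8,7,5,3,1)$ from Figure \[shifted diagram\]:

```python
def bar_indices(lam, r):
    """Index sets (I_plus, I_zero, I_minus) for a strict partition lam
    (a strictly decreasing tuple) and an odd integer r, 1-based."""
    k = len(lam)
    I_plus, I_zero, I_minus = [], [], []
    for i in range(1, k + 1):
        li = lam[i - 1]
        if li == r:
            I_zero.append(i)                 # Type 2: a whole row of length r
        elif li > r and (li - r) not in lam:
            # then lam_{j+1} < lam_i - r < lam_j holds for exactly one j
            I_plus.append(i)                 # Type 1
        elif li < r and (r - li) in lam[i:]:
            I_minus.append(i)                # Type 3: r - lam_i = lam_j, j > i
    return I_plus, I_zero, I_minus

def reduced_partition(lam, r, i):
    """The strict partition lam(i, r) left after removing the r-bar at row i."""
    li = lam[i - 1]
    rest = [p for j, p in enumerate(lam, start=1) if j != i]
    if li > r:                               # Type 1: reinsert lam_i - r
        rest.append(li - r)
    elif li < r:                             # Type 3: also remove lam_j = r - lam_i
        rest.remove(r - li)
    return tuple(sorted(rest, reverse=True))

lam = (8, 7, 5, 3, 1)
print(bar_indices(lam, 3))            # ([2, 3], [4], [])
print(bar_indices(lam, 9))            # ([], [], [1])
print(reduced_partition(lam, 9, 1))   # (7, 5, 3)
```

For $r=9$ the only bar is of Type $3$, occupying rows $1$ and $5$ since $9-8=1=\lambda_5$, and removing it leaves $(7,5,3)$, in agreement with the definition of $\lambda(i,r)$ for $i\in I_{-}$.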
(180,103) (10,100)[(1,0)[180]{}]{} (10,80)[(1,0)[180]{}]{} (30,60)[(1,0)[140]{}]{}(50,40)[(1,0)[120]{}]{} (70,20)[(1,0)[60]{}]{}(90,0)[(1,0)[20]{}]{} (10,80)[(0,1)[20]{}]{} (30,60)[(0,1)[40]{}]{}(50,40)[(0,1)[60]{}]{} (70,20)[(0,1)[80]{}]{}(90,0)[(0,1)[100]{}]{} (110,0)[(0,1)[100]{}]{}(130,20)[(0,1)[80]{}]{} (150,40)[(0,1)[60]{}]{}(170,40)[(0,1)[60]{}]{} (190,80)[(0,1)[20]{}]{} (18,87)[$1$]{}(38,87)[$1$]{}(58,87)[$6$]{} (78,87)[$6$]{}(98,87)[$6$]{}(118,87)[$6$]{} (138,87)[$6$]{}(158,87)[$6$]{}(178,87)[$6$]{} (38,67)[$1$]{}(58,67)[$2$]{}(78,67)[$2$]{} (98,67)[$2$]{}(118,67)[$5$]{}(138,67)[$5$]{} (158,67)[$5$]{} (58,47)[$3$]{}(78,47)[$3$]{}(98,47)[$3$]{} (118,47)[$3$]{}(138,47)[$3$]{}(158,47)[$3$]{} (78,27)[$4$]{}(98,27)[$4$]{}(118,27)[$4$]{} (98,7)[$3$]{} A *bar tableau* of shape $\lambda$ is an array of positive integers of shape $S(\lambda)$ subject to the following conditions: - It is weakly increasing in every row; - The number of parts equal to $i$ is odd for each positive integer $i$; - Each positive integer $i$ can appear in at most two rows, and if $i$ appears in two rows, then these two rows must begin with $i$; - The composition obtained by removing all squares filled with integers larger than some $i$ has distinct parts. We say that a bar tableau $T$ is of type $\rho=(\rho_1,\rho_2,\ldots)$ if the total number of $i$’s appearing in $T$ is $\rho_i$. For example, the bar tableau in Figure \[weight\] is of type $(3,1,1,1)$. For a bar tableau $T$ of shape $\lambda$, we define its weight $wt(T)$ recursively by the following procedure. If $T$ is empty, let $wt(T)=1$. Let $\varepsilon(\lambda)$ denote the parity of the partition $\lambda$, i.e., $\varepsilon(\lambda)=0$ if $\lambda$ has an even number of even parts; otherwise, $\varepsilon(\lambda)=1$. Suppose that the largest numbers in $T$ form an $r$-bar, which is associated with an index $i\in I(\lambda, r)$. Let $j$ be the integer that occurs in the definitions of $I_{+}$ and $I_{-}$.
Let $T'$ be the bar tableau of shape $\lambda(i, r)$ obtained from $T$ by removing this $r$-bar. Now, let $$wt(T)=n_i\, wt(T'),$$ where $$n_i=\left\{\begin{array}{cc} (-1)^{j-i}2^{1-\varepsilon(\lambda)},& \mbox{if}\ i\in I_{+},\\[6pt] (-1)^{\ell(\lambda)-i},& \mbox{if}\ i\in I_{0},\\[6pt] (-1)^{j-i+\lambda_i}2^{1-\varepsilon(\lambda)},& \mbox{if}\ i\in I_{-}. \end{array} \right.$$ For example, the weight of the bar tableau $T$ in Figure \[weight\] equals $$wt(T)=(-1)^{1-1}2^{1-0}\cdot(-1)^{1-1}2^{1-1}\cdot(-1)^{2-2} \cdot(-1)^{1-1}=2.$$ (180,40) (40,40)[(1,0)[100]{}]{}(40,20)[(1,0)[100]{}]{} (60,0)[(1,0)[20]{}]{} (40,20)[(0,1)[20]{}]{}(60,0)[(0,1)[40]{}]{} (80,0)[(0,1)[40]{}]{}(100,20)[(0,1)[20]{}]{} (120,20)[(0,1)[20]{}]{}(140,20)[(0,1)[20]{}]{} (48,26)[$1$]{}(68,26)[$1$]{} (88,26)[$1$]{}(108,26)[$3$]{}(128,26)[$4$]{} (68,6)[$2$]{} The following lemma will be used in Section 3 to determine whether certain terms will vanish in the power sum expansion of Schur’s $Q$-functions indexed by partitions with two distinct parts. \[vanishbar\] Let $\lambda=(\lambda_1,\lambda_2)$ be a strict partition with the two parts $\lambda_1$ and $\lambda_2$ having the same parity. Given a partition $\sigma=(\sigma_1,\sigma_2)\in \mathcal{P}^o(|\lambda|)$, if $\sigma_2<\lambda_2$, then among all bar tableaux of shape $\lambda$ there exist only two bar tableaux of type $\sigma$, say $T_1$ and $T_2$, and furthermore, we have $wt(T_1)+wt(T_2)=0$. Suppose that both $\lambda_1$ and $\lambda_2$ are even. The case when $\lambda_1$ and $\lambda_2$ are odd numbers can be proved similarly. Note that $\sigma_2<\lambda_{2}<\lambda_{1}$. By putting $2$’s in the last $\sigma_2$ squares of the second row and then filling the remaining squares in the diagram with $1$’s, we obtain one tableau $T_1$. By putting $2$’s in the last $\sigma_2$ squares of the first row and then filling the remaining squares with $1$’s, we obtain another tableau $T_2$.
Clearly, both $T_1$ and $T_2$ are bar tableaux of shape $\lambda$ and type $\sigma$, and they are the only two such bar tableaux. We notice that $$wt(T_1)=(-1)^{2-2}2^{1-0}\cdot (-1)^{2-1+\lambda_1} 2^{1-1}=-2.$$ For the weight of $T_2$, there are two cases to consider. If $\lambda_1-\sigma_2>\lambda_2$, then $$wt(T_2)=(-1)^{1-1}2^{1-0}\cdot (-1)^{2-1+\lambda_1-\sigma_2}2^{1-1}=2.$$ If $\lambda_1-\sigma_2<\lambda_2$, then $$wt(T_2)=(-1)^{2-1}2^{1-0}\cdot (-1)^{2-1+\lambda_2}2^{1-1}=2.$$ Thus we have $wt(T_2)=2$ in either case, so the relation $wt(T_1)+wt(T_2)=0$ holds.

For example, taking $\lambda=(8,6)$ and $\sigma=(11,3)$, the two bar tableaux $T_1$ and $T_2$ in the above lemma are depicted in Figure \[2-bar tableaux1\].

[Figure \[2-bar tableaux1\]: $T_1$ has first row $1\,1\,1\,1\,1\,1\,1\,1$ and second row $1\,1\,1\,2\,2\,2$; $T_2$ has first row $1\,1\,1\,1\,1\,2\,2\,2$ and second row $1\,1\,1\,1\,1\,1$.]

Clifford gave a natural generalization of bar tableaux to skew shapes [@cliff2005]. Formally, a *skew bar tableau* of shape $\lambda/\mu$ is an assignment of nonnegative integers to the squares of $S(\lambda)$ such that, in addition to the above four conditions (1)-(4), we further impose the condition that

- the partition obtained by removing all squares filled with positive integers and reordering the remaining rows is $\mu$.

For example, taking the skew partition $(8,6,5,4,1)/(8,2,1)$, Figure \[skew bar tableau\] shows a skew bar tableau of this shape.
[Figure \[skew bar tableau\]: a skew bar tableau of shape $(8,6,5,4,1)/(8,2,1)$, whose rows read $0\,0\,0\,0\,0\,0\,0\,0$; $1\,1\,1\,3\,3\,3$; $0\,0\,2\,2\,2$; $1\,1\,1\,1$; $0$, together with the sequence of diagrams obtained from it by successively removing the bars formed by the largest entries.]
A bar tableau of shape $\lambda$ is said to be *minimal* if there does not exist a bar tableau of shape $\lambda$ with fewer bars. Motivated by Stanley’s results in [@stanl2002], Clifford defined the srank of a shifted partition $S(\lambda)$, denoted ${\rm srank}(\lambda)$, as the number of bars in a minimal bar tableau of shape $\lambda$ [@cliff2005]. Clifford also gave the following formula for ${\rm srank}(\lambda)$.

\[min bar\] Given a strict partition $\lambda$, let $o$ be the number of odd parts of $\lambda$, and let $e$ be the number of even parts. Then ${\rm srank}(\lambda)=\max(o,e+(\ell(\lambda) \ \mathrm{mod}\ 2))$.

Next we consider the number of bars in a minimal skew bar tableau of shape $\lambda/\mu$. Note that the squares filled with $0$’s in the skew bar tableau give rise to a shifted diagram of shape $\mu$ by reordering the rows. Let $o_r$ (resp. $e_r$) be the number of nonempty rows of odd (resp. even) length with blank squares, and let $o_s$ (resp. $e_s$) be the number of rows of $\lambda$ with some squares filled with $0$’s and an odd (resp. even) number of blank squares.
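Clifford’s formula in Theorem \[min bar\] is straightforward to evaluate; the following short sketch (an illustrative helper written for this discussion, not part of the original text) computes it for a strict partition given as a list of parts:

```python
def srank(parts):
    """Evaluate Clifford's formula srank(lambda) = max(o, e + (l mod 2)),
    where o, e count the odd and even parts and l is the length."""
    o = sum(1 for p in parts if p % 2 == 1)
    e = len(parts) - o
    return max(o, e + (len(parts) % 2))

# small checks:
assert srank([5]) == 1       # one odd part
assert srank([6]) == 2       # one even part
assert srank([4, 3]) == 1    # two parts of different parity
assert srank([8, 6]) == 2    # two even parts
assert srank([5, 3]) == 2    # two odd parts
```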
It is obvious that the number of bars in a minimal skew bar tableau is greater than or equal to $$o_s+2e_s+\max(o_r,e_r+((e_r+o_r)\ \mathrm{mod}\ 2)).$$ In fact the above quantity has been considered by Clifford [@cliff2003]. Observe that this quantity depends on the positions of the $0$’s. It should be remarked that a legal bar tableau of shape $\lambda/\mu$ may not exist once the positions of $0$’s are fixed. One open problem proposed by Clifford [@cliff2003] is to find a characterization of ${\rm srank}(\lambda/\mu)$. In Section 5 we will give an algorithm to compute the srank of a skew shape.

Clifford’s conjecture {#sect3}
=====================

In this section, we aim to show that the lowest degree of the power sum expansion of Schur’s $Q$-function $Q_{\lambda}$ equals ${\rm srank}(\lambda)$. Let us recall the relevant terminology on Schur’s $Q$-functions. Let $x=(x_1,x_2,\ldots)$ be an infinite sequence of independent indeterminates. We define the symmetric functions $q_k=q_k(x)$ in $x_1,x_2,\ldots$ for all integers $k$ by the following expansion of the formal power series in $t$: $$\prod_{i\geq 1}\frac{1+x_it}{1-x_it}=\sum_{k}q_{k}(x)t^k.$$ In particular, $q_k=0$ for $k<0$ and $q_0=1$. It immediately follows that $$\label{eq-def} \sum_{i+j=n}(-1)^iq_iq_j=0,$$ for all $n\geq 1$. Let $Q_{(a)}=q_a$ and $$Q_{(a,b)}=q_aq_b+2\sum_{m=1}^b(-1)^m q_{a+m}q_{b-m}.$$ From \eqref{eq-def} we see that $Q_{(a,b)}=-Q_{(b,a)}$, and thus $Q_{(a,a)}=0$ for any $a,b$. In general, for any strict partition $\lambda$, the symmetric function $Q_{\lambda}$ is defined by the recurrence relations: $$\begin{aligned} Q_{(\lambda_1,\ldots,\lambda_{2k+1})}&=& \sum_{m=1}^{2k+1} (-1)^{m+1} q_{\lambda_m}Q_{(\lambda_1,\ldots,\hat{\lambda}_m,\ldots,\lambda_{2k+1})},\\[5pt] Q_{(\lambda_1,\ldots,\lambda_{2k})}&=& \sum_{m=2}^{2k} (-1)^{m} Q_{(\lambda_1,\lambda_m)}Q_{(\lambda_2,\ldots,\hat{\lambda}_m,\ldots,\lambda_{2k})},\end{aligned}$$ where $\hat{}$ stands for a missing entry.
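The identity $\sum_{i+j=n}(-1)^i q_i q_j=0$ can be checked numerically in finitely many variables. The sketch below (illustrative only; the truncation bound and variable values are arbitrary choices of ours) uses the single-variable expansion $(1+xt)/(1-xt)=1+2xt+2x^2t^2+\cdots$ and multiplies the truncated series:

```python
from fractions import Fraction

def q_series(xs, N):
    """Coefficients q_0, ..., q_N of prod_i (1+x_i t)/(1-x_i t) as a series in t."""
    q = [Fraction(0)] * (N + 1)
    q[0] = Fraction(1)
    for x in xs:
        # (1+xt)/(1-xt) = 1 + 2xt + 2x^2 t^2 + ...
        f = [Fraction(1)] + [2 * x**k for k in range(1, N + 1)]
        conv = [Fraction(0)] * (N + 1)
        for a in range(N + 1):
            for b in range(N + 1 - a):
                conv[a + b] += q[a] * f[b]
        q = conv
    return q

q = q_series([Fraction(1, 2), Fraction(1, 3), Fraction(2, 5)], 8)
# sum_{i+j=n} (-1)^i q_i q_j = 0 for all n >= 1
for n in range(1, 9):
    assert sum((-1)**i * q[i] * q[n - i] for i in range(n + 1)) == 0
```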
It is known that $Q_{\lambda}$ can also be defined as the specialization at $t=-1$ of the Hall-Littlewood functions associated with $\lambda$ [@macdon1995]. Originally, these $Q_{\lambda}$ symmetric functions were introduced in order to express irreducible projective characters of the symmetric groups [@schur1911]. Note that the irreducible projective representations of $S_n$ are in one-to-one correspondence with partitions of $n$ with distinct parts, see [@jozef1989; @stemb1988; @stemb1989]. For any $\lambda\in \mathcal{D}(n)$, let $\langle\lambda\rangle$ denote the character of the irreducible projective or spin representation indexed by $\lambda$. Morris [@morri1965] found a combinatorial rule for calculating these characters, which is the projective analogue of the Murnaghan-Nakayama rule. In terms of bar tableaux, Morris’s theorem reads as follows:

\[mnrule\] Let $\lambda\in \mathcal{D}(n)$ and $\pi\in \mathcal{P}^o(n)$. Then $$\label{mnruleeq} \langle\lambda\rangle(\pi)=\sum_{T}wt(T)$$ where the sum ranges over all bar tableaux of shape $\lambda$ and type $\pi$.

The above theorem for projective characters implies the following formula, which will be used later in the proof of Lemma \[len2\].

\[2odd\] Let $\lambda$ be a strict partition of length $2$. Suppose that the two parts $\lambda_1,\lambda_2$ are both odd. Then we have $$\langle\lambda\rangle(\lambda)=-1.$$

Let $T$ be the bar tableau obtained by filling the last $\lambda_2$ squares in the first row of $S(\lambda)$ with $2$’s and the remaining squares with $1$’s, and let $T'$ be the bar tableau obtained by filling the first row of $S(\lambda)$ with $1$’s and the second row with $2$’s. Clearly, $T$ and $T'$ are of the same type $\lambda$. Let us first consider the weight of $T$.
If $\lambda_1-\lambda_2<\lambda_2$, then $$wt(T)=(-1)^{2-1} 2^{1-0}\cdot (-1)^{2-1+\lambda_2}2^{1-1}=-2.$$ If $\lambda_1-\lambda_2>\lambda_2$, then $$wt(T)=(-1)^{1-1} 2^{1-0}\cdot (-1)^{2-1+\lambda_1-\lambda_2}2^{1-1}=-2.$$ In both cases, the weight of $T'$ equals $$wt(T')=(-1)^{2-2}\cdot (-1)^{1-1}=1.$$ Since there are only two bar tableaux, $T$ and $T'$, of type $\lambda$, the corollary immediately follows from Theorem \[mnrule\].

Let $p_k(x)$ denote the $k$-th power sum symmetric function, i.e., $p_k(x)=\sum_{i\geq 1}x_i^k$. For any partition $\lambda=(\lambda_1,\lambda_2,\cdots)$, let $p_{\lambda}=p_{\lambda_1}p_{\lambda_2}\cdots$. The fundamental connection between $Q_{\lambda}$ symmetric functions and the projective representations of the symmetric group is as follows.

\[conn\] Let $\lambda\in \mathcal{D}(n)$. Then we have $$Q_{\lambda}=\sum_{\pi\in \mathcal{P}^o(n)} 2^{[\ell(\lambda)+\ell(\pi)+\varepsilon(\lambda)]/2} \langle\lambda\rangle(\pi)\frac{p_{\pi}}{z_{\pi}},$$ where $$z_{\pi}=1^{m_1}m_1!\cdot 2^{m_2}m_2!\cdot \cdots, \quad \mbox{if $\pi=\langle 1^{m_1}2^{m_2}\cdots \rangle$.}$$

Stanley [@stanl2002] introduced a degree operator on symmetric functions by defining $\deg(p_i)=1$, and so $\deg(p_{\nu})=\ell(\nu)$. Clifford [@cliff2005] applied this operator to Schur’s $Q$-functions and obtained the following lower bound from Theorem \[conn\].

\[atleast\] The terms of the lowest degree in $Q_{\lambda}$ have degree at least ${\rm srank}(\lambda)$.

The following conjecture was proposed by Clifford:

The terms of the lowest degree in $Q_{\lambda}$ have degree ${\rm srank}(\lambda)$.

Our proof of the above conjecture depends on the Pfaffian formula for Schur’s $Q$-functions.
Given a skew-symmetric matrix $A=(a_{i,j})$ of even size $2n\times 2n$, the *Pfaffian* of $A$, denoted [Pf]{}(A), is defined by $${\rm Pf}(A)=\sum_{\pi}(-1)^{{\rm cr}(\pi)} a_{i_1j_1}\cdots a_{i_nj_n},$$ where the sum ranges over all set partitions $\pi$ of $\{1,2,\cdots, 2n\}$ into two element blocks $i_k<j_k$ and $cr(\pi)$ is the number of crossings of $\pi$, i.e., the number of pairs $h<k$ for which $i_h<i_k<j_h<j_k$. \[pfexp\] Given a strict partition $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_{2n})$ satisfying $\lambda_1>\ldots>\lambda_{2n}\geq 0$, let $M_{\lambda}=(Q_{(\lambda_i,\lambda_j)})$. Then we have $$Q_{\lambda}={\rm Pf}(M_{\lambda}).$$ We first prove that Clifford’s conjecture holds for strict partitions of length less than three. The proof for the general case relies on this special case. \[len2\] Let $\lambda$ be a strict partition of length $\ell(\lambda)<3$. Then the terms of the lowest degree in $Q_{\lambda}$ have degree ${\rm srank}(\lambda)$. In view of Theorem \[mnrule\] and Theorem \[conn\], if there exists a unique bar tableau of shape $\lambda$ and type $\pi$, then the coefficient of $p_{\pi}$ is nonzero in the expansion of $Q_{\lambda}$. There are five cases to consider. - $\ell(\lambda)=1$ and $\lambda_1$ is odd. Clearly, we have ${\rm srank}(\lambda)=1$. Note that there exists a unique bar tableau $T$ of shape $\lambda$ and of type $\lambda$ with all squares of $S(\lambda)$ filled with $1$’s. Therefore, the coefficient of $p_{\lambda}$ in the power sum expansion of $Q_{\lambda}$ is nonzero and the lowest degree of $Q_{\lambda}$ is $1$. - $\ell(\lambda)=1$ and $\lambda_1$ is even. We see that ${\rm srank}(\lambda)=2$. Since the bars are all of odd size, there does not exist any bar tableau of shape $\lambda$ and of type $\lambda$. But there is a unique bar tableau $T$ of shape $\lambda$ and of type $(\lambda_1-1,1)$, which is obtained by filling the rightmost square of $S(\lambda)$ with $2$ and the remaining squares with $1$’s. 
So the coefficient of $p_{(\lambda_1-1,1)}$ in the power sum expansion of $Q_{\lambda}$ is nonzero and the terms of the lowest degree in $Q_{\lambda}$ have degree $2$. - $\ell(\lambda)=2$ and the two parts $\lambda_1,\lambda_2$ have different parity. In this case, we have ${\rm srank}(\lambda)=1$. Note that there exists a unique bar tableau $T$ of shape $\lambda$ and of type $(\lambda_1+\lambda_2)$, which is obtained by filling all the squares of $S(\lambda)$ with $1$’s. Thus, the coefficient of $p_{\lambda_1+\lambda_2}$ in the power sum expansion of $Q_{\lambda}$ is nonzero and the terms of lowest degree in $Q_{\lambda}$ have degree $1$. - $\ell(\lambda)=2$ and the two parts $\lambda_1,\lambda_2$ are both even. It is easy to see that ${\rm srank}(\lambda)=2$. Since there exists a unique bar tableau $T$ of shape $\lambda$ and of type $(\lambda_1-1,\lambda_2+1)$, which is obtained by filling the rightmost $\lambda_2+1$ squares in the first row of $S(\lambda)$ with $2$’s and the remaining squares with $1$’s, the coefficient of $p_{(\lambda_1-1,\lambda_2+1)}$ in the power sum expansion of $Q_{\lambda}$ is nonzero; hence the lowest degree of $Q_{\lambda}$ is equal to $2$. - $\ell(\lambda)=2$ and the two parts $\lambda_1,\lambda_2$ are both odd. In this case, we have ${\rm srank}(\lambda)=2$. By Corollary \[2odd\], the coefficient of $p_{\lambda}$ in the power sum expansion of $Q_{\lambda}$ is nonzero, and therefore the terms of the lowest degree in $Q_{\lambda}$ have degree $2$. This completes the proof. Given a strict partition $\lambda$, we consider the Pfaffian expansion of $Q_{\lambda}$ as shown in Theorem \[pfexp\]. To prove Clifford’s conjecture, we need to determine which terms may appear in the expansion of $Q_{\lambda}$ in terms of power sum symmetric functions. 
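Before turning to that expansion, note that the Pfaffian defined above can be computed by brute force on small matrices, directly from the matching definition. The following sketch (helper names are ours, for illustration only) does exactly this:

```python
def matchings(elems):
    """Enumerate perfect matchings of elems into pairs (i, j) with i < j."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for k in range(len(rest)):
        for m in matchings(rest[:k] + rest[k + 1:]):
            yield [(first, rest[k])] + m

def crossings(pairs):
    """Number of pairs (a,b), (c,d) with a < c < b < d."""
    return sum(1 for (a, b) in pairs for (c, d) in pairs if a < c < b < d)

def pfaffian(A):
    """Pf(A) = sum over perfect matchings of (-1)^{cr} * prod a_{i,j}."""
    total = 0
    for m in matchings(list(range(len(A)))):
        term = 1
        for i, j in m:
            term *= A[i][j]
        total += (-1) ** crossings(m) * term
    return total

# for a 4x4 matrix: Pf(A) = a_{12}a_{34} - a_{13}a_{24} + a_{14}a_{23}
A = [[0, 1, 2, 3], [-1, 0, 5, 7], [-2, -5, 0, 11], [-3, -7, -11, 0]]
assert pfaffian(A) == 1 * 11 - 2 * 7 + 3 * 5
```

For a $4\times 4$ skew-symmetric matrix this recovers the familiar expansion ${\rm Pf}(A)=a_{12}a_{34}-a_{13}a_{24}+a_{14}a_{23}$.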
Suppose that the Pfaffian expansion of $Q_{\lambda}$ is as follows: $$\label{q-expand} {\rm Pf}(M_{\lambda})=\sum_{\pi}(-1)^{{\rm cr}(\pi)} Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})},$$ where the sum ranges over all set partitions $\pi$ of $\{1,2,\cdots, 2m\}$ into two-element blocks $\{(\pi_1,\pi_2),\ldots,(\pi_{2m-1},\pi_{2m})\}$ with $\pi_1<\pi_3<\cdots<\pi_{2m-1}$ and $\pi_{2k-1}<\pi_{2k}$ for any $k$. For the above expansion of $Q_{\lambda}$, the following two lemmas will be used to choose certain lowest degree terms in the power sum expansion of $Q_{(\lambda_i,\lambda_j)}$ in the matrix $M_\lambda$.

\[lemma1\] Suppose that $\lambda$ has both odd parts and even parts. Let $\lambda_{i_1}$ (resp. $\lambda_{j_1}$) be the largest odd (resp. even) part of $\lambda$. If the power sum symmetric function $p_{\lambda_{i_1}+\lambda_{j_1}}$ appears in the terms of lowest degree arising from the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$ as in the expansion \eqref{q-expand}, then we have $(\pi_1,\pi_2)=(i_1,j_1)$.

Without loss of generality, we may assume that $\lambda_{i_1}> \lambda_{j_1}$. By Lemma \[len2\], the term $p_{\lambda_{i_1}+\lambda_{j_1}}$ appears in $Q_{(\lambda_{i_1}, \lambda_{j_1})}$ with nonzero coefficient. Since $\lambda_{i_1}, \lambda_{j_1}$ are the largest odd and even parts, $p_{\lambda_{i_1}+\lambda_{j_1}}$ does not appear as a factor of any term of the lowest degree in the expansion of $Q_{(\lambda_{i_k}, \lambda_{j_k})}$, where $\lambda_{i_k}$ and $\lambda_{j_k}$ have different parity. Meanwhile, if $\lambda_{i_k}$ and $\lambda_{j_k}$ have the same parity, then we consider the bar tableaux of shape $(\lambda_{i_k}, \lambda_{j_k})$ and of type $(\lambda_{i_1}+\lambda_{j_1}, \lambda_{i_k}+ \lambda_{j_k}-\lambda_{i_1}-\lambda_{j_1})$. Observe that $\lambda_{i_k}+ \lambda_{j_k}-\lambda_{i_1}-\lambda_{j_1}<\lambda_{j_k}$.
Since the lowest degree of $Q_{(\lambda_{i_k}, \lambda_{j_k})}$ is $2$, from Lemma \[vanishbar\] it follows that $p_{\lambda_{i_1}+\lambda_{j_1}}$ cannot be a factor of any term of lowest degree in the power sum expansion of $Q_{(\lambda_{i_k}, \lambda_{j_k})}$. This completes the proof.

\[lemma2\] Suppose that $\lambda$ only has even parts. Let $\lambda_1, \lambda_2$ be the two largest parts of $\lambda$ (allowing $\lambda_2=0$). If the product $p_{\lambda_1-1}p_{\lambda_2+1}$ appears in the terms of the lowest degree given by the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$ as in \eqref{q-expand}, then we have $(\pi_1,\pi_2)=(1,2)$.

From Case (4) of the proof of Lemma \[len2\] it follows that $p_{\lambda_1-1}p_{\lambda_2+1}$ appears as a term of the lowest degree in the power sum expansion of $Q_{(\lambda_1,\lambda_2)}$. We next consider the power sum expansion of any other $Q_{(\lambda_i,\lambda_j)}$. First, we consider the case when $\lambda_i+\lambda_j>\lambda_2+1$ and $\lambda_i \leq\lambda_2$. Since $\lambda_i+\lambda_j-(\lambda_2+1)<\lambda_j$, by Lemma \[vanishbar\], the term $p_{\lambda_2+1}$ is not a factor of any term of the lowest degree in the power sum expansion of $Q_{(\lambda_i,\lambda_j)}$. Now we are left with the case when $\lambda_i+\lambda_j>\lambda_1-1$ and $\lambda_i\leq \lambda_1-2$. Since $\lambda_i+\lambda_j-(\lambda_1-1)<\lambda_j$, by Lemma \[vanishbar\] the term $p_{\lambda_1-1}$ does not appear as a factor in the terms of the lowest degree of $Q_{(\lambda_i,\lambda_j)}$. So we have shown that if either $p_{\lambda_2+1}$ or $p_{\lambda_1-1}$ appears as a factor of some lowest degree term for $Q_{(\lambda_i,\lambda_j)}$, then $\lambda_i=\lambda_1$. Moreover, if both $p_{\lambda_1-1}$ and $p_{\lambda_2+1}$ are factors of the lowest degree terms in the power sum expansion of $Q_{(\lambda_1,\lambda_j)}$, then we have $\lambda_j=\lambda_2$. The proof is complete.
We now present the main result of this paper.

For any $\lambda\in\mathcal{D}(n)$, the terms of the lowest degree in $Q_\lambda$ have degree ${\rm srank}(\lambda)$.

We write the strict partition $\lambda$ in the form $(\lambda_1,\lambda_2,\ldots,\lambda_{2m})$, where $\lambda_1>\ldots>\lambda_{2m}\geq 0$. Suppose that the partition $\lambda$ has $o$ odd parts and $e$ even parts (including $0$ as a part). For the sake of presentation, let $(\lambda_{i_1},\lambda_{i_2},\ldots,\lambda_{i_o})$ denote the sequence of odd parts in decreasing order, and let $(\lambda_{j_1},\lambda_{j_2},\ldots,\lambda_{j_e})$ denote the sequence of even parts in decreasing order. We first consider the case $o\geq e$. In this case, it will be shown that ${\rm srank}(\lambda)=o$. By Theorem \[min bar\], if $\lambda_{2m}>0$, i.e., $\ell(\lambda)=2m$, then we have $${\rm srank}(\lambda)=\max(o,e+0)=o.$$ If $\lambda_{2m}=0$, i.e., $\ell(\lambda)=2m-1$, then we still have $${\rm srank}(\lambda)=\max(o,(e-1)+1)=o.$$ Let $$A=p_{\lambda_{i_1}+\lambda_{j_1}}\cdots p_{\lambda_{i_e}+\lambda_{j_e}}p_{\lambda_{i_{e+1}}}p_{\lambda_{i_{e+2}}}\cdots p_{\lambda_{i_o}}.$$ We claim that $A$ appears as a term of the lowest degree in the power sum expansion of $Q_{\lambda}$. For this purpose, we need to determine those matchings $\pi$ of $\{1,2,\ldots,2m\}$ in \eqref{q-expand} for which the power sum expansion of the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$ contains $A$ as a term of the lowest degree. By Lemma \[lemma1\], if $p_{\lambda_{i_1}+\lambda_{j_1}}$ appears as a factor in the lowest degree terms of the power sum expansion of $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then we have $\{\pi_1,\pi_2\}=\{i_1,j_1\}$.
Iterating this argument, we see that if $p_{\lambda_{i_1}+\lambda_{j_1}}\cdots p_{\lambda_{i_e}+\lambda_{j_e}}$ appears as a factor in the lowest degree terms of $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then we have $$\{\pi_1,\pi_2\}=\{i_1,j_1\},\ldots,\{\pi_{2e-1},\pi_{2e}\}=\{i_e,j_e\}.$$ It remains to determine the ordered pairs $$\{(\pi_{2e+1},\pi_{2e+2}),\ldots,(\pi_{2m-1},\pi_{2m})\}.$$ By the same argument as in Case (5) of the proof of Lemma \[len2\], for any $e+1\leq k<l\leq o$, the term $p_{\lambda_{i_{k}}}p_{\lambda_{i_{l}}}$ appears as a term of the lowest degree in the power sum expansion of $Q_{(\lambda_{i_k},\lambda_{i_l})}$. Moreover, if the power sum symmetric function $p_{\lambda_{i_{e+1}}}p_{\lambda_{i_{e+2}}}\cdots p_{\lambda_{i_o}}$ appears as a term of the lowest degree in the power sum expansion of the product $Q_{(\lambda_{\pi_{2e+1}},\lambda_{\pi_{2e+2}})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then the set of pairs $\{(\pi_{2e+1},\pi_{2e+2}),\ldots,(\pi_{2m-1},\pi_{2m})\}$ can be any matching of $\{1,2,\ldots,2m\}\setminus\{i_1,j_1,\ldots,i_e,j_e\}$. To summarize, there are $(2(m-e)-1)!!$ matchings $\pi$ such that $A$ appears as a term of the lowest degree in the power sum expansion of the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$. Combining Corollary \[2odd\] and Theorem \[conn\], we find that the coefficient of $p_{\lambda_{i_k}}p_{\lambda_{i_l}}$ $(e+1\leq k<l\leq o)$ in the power sum expansion of $Q_{(\lambda_{i_k}, \lambda_{i_l})}$ is $-\frac{4}{\lambda_{i_k}\lambda_{i_l}}$. It follows that the coefficient of $A$ in the expansion of the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}}, \lambda_{\pi_{2m}})}$ is independent of the choice of $\pi$. Since $(2(m-e)-1)!!$ is an odd number, the term $A$ will not vanish in the expansion of $Q_{\lambda}$.
Note that the degree of $A$ is $e+(o-e)=o$, which is equal to ${\rm srank}(\lambda)$, as desired. Similarly, we consider the case $e>o$. In this case, we aim to show that ${\rm srank}(\lambda)=e$. By Theorem \[min bar\], if $\lambda_{2m}>0$, i.e., $\ell(\lambda)=2m$, then we have $${\rm srank}(\lambda)=\max(o,e+0)=e.$$ If $\lambda_{2m}=0$, i.e., $\ell(\lambda)=2m-1$, then we still have $${\rm srank}(\lambda)=\max(o,(e-1)+1)=e.$$ Let $$B=p_{\lambda_{i_1}+\lambda_{j_1}}\cdots p_{\lambda_{i_o}+\lambda_{j_o}}p_{\lambda_{j_{o+1}}-1}p_{\lambda_{j_{o+2}}+1}\cdots p_{\lambda_{j_{e-1}}-1}p_{\lambda_{j_e}+1}.$$ We proceed to prove that $B$ appears as a term of the lowest degree in the power sum expansion of $Q_{\lambda}$. Applying Lemma \[lemma1\] repeatedly, we deduce that if $p_{\lambda_{i_1}+\lambda_{j_1}}\cdots p_{\lambda_{i_o}+\lambda_{j_o}}$ appears as a factor in the lowest degree terms of the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then $$\label{match1} \{\pi_1,\pi_2\}=\{i_1,j_1\},\ldots,\{\pi_{2o-1},\pi_{2o}\}=\{i_o,j_o\}.$$ On the other hand, iteration of Lemma \[lemma2\] reveals that if the power sum symmetric function $p_{\lambda_{j_{o+1}}-1}p_{\lambda_{j_{o+2}}+1}\cdots p_{\lambda_{j_{e-1}}-1}p_{\lambda_{j_e}+1}$ appears as a term of the lowest degree in the power sum expansion of $Q_{(\lambda_{\pi_{2o+1}},\lambda_{\pi_{2o+2}})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then $$\label{match2} \{\pi_{2o+1},\pi_{2o+2}\}=\{j_{o+1},j_{o+2}\},\ldots,\{\pi_{2m-1},\pi_{2m}\}=\{j_{e-1},j_e\}.$$ Therefore, if $B$ appears as a term of the lowest degree in the power sum expansion of $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then the matching $\pi$ is uniquely determined by \eqref{match1} and \eqref{match2}. Note that the degree of $B$ is $e$, which coincides with ${\rm srank}(\lambda)$.
Since there is always a term of degree ${\rm srank}(\lambda)$ in the power sum expansion of $Q_\lambda$, the theorem follows.

Skew Schur’s $Q$-functions
==========================

In this section, we show that ${\rm srank}(\lambda/\mu)$ is a lower bound for the lowest degree of the terms in the power sum expansion of the skew Schur’s $Q$-function $Q_{\lambda/\mu}$. Note that Clifford’s conjecture does not hold for skew shapes. We first recall a definition of the skew Schur’s $Q$-function in terms of strip tableaux. The concept of strip tableaux was introduced by Stembridge [@stemb1988] to describe the Morris rule for the evaluation of irreducible spin characters. Given a skew partition $\lambda/\mu$, the *$j$-th diagonal* of the skew shifted diagram $S(\lambda/\mu)$ is defined as the set of squares $(1,j), (2, j+1), (3, j+2), \ldots$ in $S(\lambda/\mu)$. A skew diagram $S(\lambda/\mu)$ is called a *strip* if it is rookwise connected and each diagonal contains at most one square. The *height* $h$ of a strip is defined to be the number of rows it occupies. A *double strip* is a skew diagram formed by the union of two strips which both start on the diagonal consisting of squares $(j,j)$. The *depth* of a double strip is defined to be $\alpha+\beta$ if it has $\alpha$ diagonals of length two and its diagonals of length one occupy $\beta$ rows. A *strip tableau* of shape $\lambda/\mu$ and type $\pi=(\pi_1,\ldots,\pi_k)$ is defined to be a sequence of shifted diagrams $$S(\mu)=S(\lambda^0)\subseteq S(\lambda^1)\subseteq \cdots \subseteq S(\lambda^k)=S(\lambda)$$ with $|\lambda^i/\lambda^{i-1}|=\pi_i$ ($1\leq i\leq k$) such that each skew shifted diagram $S(\lambda^i/\lambda^{i-1})$ is either a strip or a double strip. The skew Schur’s $Q$-function can be defined as the weight generating function of strip tableaux in the following way.
For a strip of height $h$ we assign the weight $(-1)^{h-1}$, and for a double strip of depth $d$ we assign the weight $2(-1)^{d-1}$. The weight of a strip tableau $T$, denoted $wt(T)$, is the product of the weights of the strips and double strips of which $T$ is composed. Then the skew Schur’s $Q$-function $Q_{\lambda/\mu}$ is given by $$Q_{\lambda/\mu}=\sum_{\pi\in \mathcal{P}^o(|\lambda/\mu|)}\sum_{T} 2^{\ell(\pi)}wt(T)\frac{p_{\pi}}{z_{\pi}},$$ where $T$ ranges over all strip tableaux of shape $\lambda/\mu$ and type $\pi$, see [@stemb1988 Theorem 5.1]. Józefiak and Pragacz [@jozpra1991] obtained the following Pfaffian formula for the skew Schur’s $Q$-function.

\[skewpf\] Let $\lambda, \mu$ be strict partitions with $m=\ell(\lambda)$, $n=\ell(\mu)$, $\mu\subset \lambda$, and let $M(\lambda,\mu)$ denote the skew-symmetric matrix $$\begin{pmatrix} A & B\\ -B^t & 0 \end{pmatrix},$$ where $A=(Q_{(\lambda_i,\lambda_j)})$ and $B=(Q_{(\lambda_i-\mu_{n+1-j})})$. Then

- if $m+n$ is even, we have $Q_{\lambda/\mu}={\rm Pf}(M(\lambda,\mu))$;

- if $m+n$ is odd, we have $Q_{\lambda/\mu}={\rm Pf}(M(\lambda,\mu^\prime))$, where $\mu^\prime=(\mu_1,\cdots,\mu_n, 0)$.

A combinatorial proof of the above theorem was given by Stembridge [@stemb1990] in terms of lattice paths, and later, Hamel [@hamel1996] gave an interesting generalization by using border strip decompositions of the shifted diagram. Given a skew partition $\lambda/\mu$, Clifford [@cliff2003] constructed a bijection between skew bar tableaux of shape $\lambda/\mu$ and skew strip tableaux of the same shape, which preserves the type of the tableau. Using this bijection, it is straightforward to derive the following result.

The terms of the lowest degree in $Q_{\lambda/\mu}$ have degree at least ${\rm srank}(\lambda/\mu)$.

Unlike the case of non-skew shapes, in general the terms of lowest degree in $Q_{\lambda/\mu}$ do not have degree ${\rm srank}(\lambda/\mu)$.
For example, take the skew partition $(4,3)/3$. It is easy to see that ${\rm srank}((4,3)/3)=2$. However, using Theorem \[skewpf\] and Stembridge’s SF Package for Maple [@stem2], we obtain that $$Q_{(4,3)/3}={\rm Pf} \begin{pmatrix}0 & Q_{(4,3)} & Q_{(4)} & Q_{(1)}\\[5pt] Q_{(3,4)} & 0 & Q_{(3)} & Q_{(0)}\\[5pt] -Q_{(4)} & -Q_{(3)}& 0 & 0\\[5pt] -Q_{(1)} & -Q_{(0)}& 0 & 0 \end{pmatrix}=2p_1^4.$$ This shows that the lowest degree of $Q_{(4,3)/3}$ equals $4$, which is strictly greater than ${\rm srank}((4,3)/3)$.

The srank of skew partitions {#sect4}
============================

In this section, we present an algorithm to determine the srank of a skew partition $\lambda/\mu$. In fact, the algorithm produces an explicit configuration of $0$’s. To obtain the srank of a skew partition, we need to minimize the number of bars by adjusting the positions of the $0$’s. Given a configuration $\mathcal{C}$ of $0$’s in the shifted diagram $S(\lambda)$, let $$\kappa(\mathcal{C})=o_s+2e_s+\max(o_r,e_r+((e_r+o_r)\ \mathrm{mod}\ 2)),$$ where $o_r$ (resp. $e_r$) counts the number of nonempty rows in which there are an odd (resp. even) number of squares and no square is filled with $0$, and $o_s$ (resp. $e_s$) records the number of rows in which at least one square is filled with $0$ but there are an odd (resp. nonzero even) number of blank squares. If there exists at least one skew bar tableau of shape $\lambda/\mu$ under some configuration $\mathcal{C}$, we say that $\mathcal{C}$ is *admissible*.
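To illustrate the definition of $\kappa(\mathcal{C})$, suppose we encode a configuration simply as a list of pairs (row length, number of $0$’s placed in that row); this encoding and the helper below are ours, introduced only for illustration:

```python
def kappa(rows):
    """kappa(C) = o_s + 2 e_s + max(o_r, e_r + ((e_r + o_r) mod 2)).

    rows: list of (row_length, num_zeros) pairs describing a configuration
    of 0's in the shifted diagram S(lambda)."""
    o_r = e_r = o_s = e_s = 0
    for length, zeros in rows:
        blanks = length - zeros
        if zeros == 0 and length > 0:      # row without 0's
            if length % 2:
                o_r += 1
            else:
                e_r += 1
        elif zeros > 0 and blanks > 0:     # row with 0's and blank squares
            if blanks % 2:
                o_s += 1
            else:
                e_s += 1
        # rows completely filled with 0's contribute to none of the counts
    return o_s + 2 * e_s + max(o_r, e_r + ((e_r + o_r) % 2))

# the skew partition (4,3)/3 of the previous example: either way of
# placing the three 0's yields kappa = 2 = srank((4,3)/3)
assert kappa([(4, 0), (3, 3)]) == 2
assert kappa([(4, 3), (3, 0)]) == 2
```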
For a fixed configuration $\mathcal{C}$, each row is one of the following eight possible types:

- an even row bounded by an even number of $0$’s, denoted $(e,e)$,

- an odd row bounded by an even number of $0$’s, denoted $(e,o)$,

- an odd row bounded by an odd number of $0$’s, denoted $(o,e)$,

- an even row bounded by an odd number of $0$’s, denoted $(o,o)$,

- an even row without $0$’s, denoted $(\emptyset,e)$,

- an odd row without $0$’s, denoted $(\emptyset,o)$,

- an even row filled with $0$’s, denoted $(e, \emptyset)$,

- an odd row filled with $0$’s, denoted $(o, \emptyset)$.

Given two rows with respective types $s$ and $s'$ for some configuration $\mathcal{C}$, if we can obtain a new configuration $\mathcal{C}'$ by exchanging the locations of the $0$’s in these two rows such that their new types are $t$ and $t'$ respectively, then we denote this by $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{s \atop s'}}\right] \rightarrow \left[\tiny{{{t} \atop {t'}}}\right]\right)$. Let $o_r,e_r,o_s,e_s$ be defined as above corresponding to the configuration $\mathcal{C}$, and let $o_r',e_r',o_s',e_s'$ be those of $\mathcal{C}'$. In the following we will show how the quantity $\kappa(\mathcal{C})$ changes when the locations of the $0$’s in $\mathcal{C}$ are exchanged.

\[varyzero1-1\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{s \atop s'}}\right] \rightarrow \left[\tiny{{s \atop s'}}\right]\right)$ or $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{s \atop s'}}\right] \rightarrow \left[\tiny{{{s'} \atop s}}\right]\right)$, i.e., the types of the two involved rows are preserved or exchanged, where $s,s'$ are any two possible types, then $\kappa({\mathcal{C}'})= \kappa({\mathcal{C}})$.

\[varyzero1-6\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(e,e)} \atop\rule{0pt}{10pt} {(\emptyset,o)}}}\right] \rightarrow \left[\tiny{{{(\emptyset,e)} \atop\rule{0pt}{10pt} {(e,o)}}}\right]\right)$, then $\kappa({\mathcal{C}'})\leq \kappa({\mathcal{C}})$.
In this case we have $$o_s^\prime=o_s+1, \quad e_s^\prime=e_s-1, \quad o_r^\prime=o_r-1,\quad e_r^\prime=e_r+1.$$ Note that $o_r+e_r=\ell(\lambda)-\ell(\mu)$. Now there are two cases to consider. **Case I.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 0\ (\mathrm{mod}\ 2)$. - If $o_r\leq e_r$, then $o_r^\prime\leq e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C})&=o_s+2e_s+e_r,\\ \kappa(\mathcal{C}')&=o_s+1+2(e_s-1)+e_r^\prime=o_s+2e_s+e_r=\kappa(\mathcal{C}).\end{aligned}$$ - If $o_r\geq e_r+2$, then $o_r^\prime=o_r-1\geq e_r+1=e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C})&=o_s+2e_s+o_r,\\ \kappa(\mathcal{C}')&=o_s+2e_s-1+o_r^\prime=o_s+2e_s+o_r-2<\kappa(\mathcal{C}).\end{aligned}$$ **Case II.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 1\ (\mathrm{mod}\ 2)$. - If $o_r\leq e_r+1$, then $o_r^\prime<e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C})&=o_s+2e_s+e_r+1,\\ \kappa(\mathcal{C}')&=o_s+2e_s-1+e_r^\prime+1=o_s+2e_s+e_r+1=\kappa(\mathcal{C}).\end{aligned}$$ - If $o_r\geq e_r+3$, then $o_r^\prime=o_r-1\geq e_r+2>e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C})&=o_s+2e_s+o_r,\\ \kappa(\mathcal{C}')&=o_s+2e_s-1+o_r^\prime=o_s+2e_s+o_r-2<\kappa(\mathcal{C}).\end{aligned}$$ Therefore, the inequality $\kappa({\mathcal{C}'})\leq \kappa({\mathcal{C}})$ holds under the assumption. \[varyzero2-6\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(o,e)} \atop\rule{0pt}{10pt} {(\emptyset,e)}}}\right] \rightarrow \left[\tiny{{{(\emptyset,o)} \atop\rule{0pt}{10pt} {(o,o)}}}\right]\right)$, then $\kappa({\mathcal{C}'})\leq \kappa({\mathcal{C}})$. In this case we have $$o_s^\prime=o_s+1, \quad e_s^\prime=e_s-1, \quad o_r^\prime=o_r+1,\quad e_r^\prime=e_r-1.$$ Now there are two possibilities. **Case I.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 0\ (\mathrm{mod}\ 2)$. 
- If $o_r\leq e_r-2$, then $o_r^\prime\leq e_r^\prime$ and $$\begin{aligned}
  \kappa(\mathcal{C})&=o_s+2e_s+e_r,\\
  \kappa(\mathcal{C}')&=o_s+1+2(e_s-1)+e_r^\prime=o_s+2e_s+e_r-2<\kappa(\mathcal{C}).\end{aligned}$$

- If $o_r\geq e_r$, then $o_r^\prime=o_r+1> e_r-1=e_r^\prime$ and $$\begin{aligned}
  \kappa(\mathcal{C})&=o_s+2e_s+o_r,\\
  \kappa(\mathcal{C}')&=o_s+2e_s-1+o_r^\prime=o_s+2e_s+o_r=\kappa(\mathcal{C}).\end{aligned}$$

**Case II.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 1\ (\mathrm{mod}\ 2)$. Since $o_r-e_r$ is odd, there are three sub-cases.

- If $o_r\leq e_r-3$, then $o_r^\prime<e_r^\prime$ and $$\begin{aligned}
  \kappa(\mathcal{C})&=o_s+2e_s+e_r+1,\\
  \kappa(\mathcal{C}')&=o_s+2e_s-1+e_r^\prime+1=o_s+2e_s+e_r-1<\kappa(\mathcal{C}).\end{aligned}$$

- If $o_r=e_r-1$, then $o_r^\prime=e_r=e_r^\prime+1$ and $$\begin{aligned}
  \kappa(\mathcal{C})&=o_s+2e_s+e_r+1,\\
  \kappa(\mathcal{C}')&=o_s+2e_s-1+o_r^\prime=o_s+2e_s+e_r-1<\kappa(\mathcal{C}).\end{aligned}$$

- If $o_r\geq e_r+1$, then $o_r^\prime=o_r+1\geq e_r+2>e_r^\prime+1$ and $$\begin{aligned}
  \kappa(\mathcal{C})&=o_s+2e_s+o_r,\\
  \kappa(\mathcal{C}')&=o_s+2e_s-1+o_r^\prime=o_s+2e_s+o_r=\kappa(\mathcal{C}).\end{aligned}$$

In both cases we have $\kappa({\mathcal{C}'})\leq \kappa({\mathcal{C}})$, as required.

\[varyzero1-4\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(e,e)} \atop\rule{0pt}{10pt} {(o,e)}}}\right] \rightarrow \left[\tiny{{{(o,o)} \atop\rule{0pt}{10pt} {(e,o)}}}\right]\right)$, then $\kappa({\mathcal{C}'})< \kappa({\mathcal{C}})$.

In this case, we have $$o_s^\prime=o_s+2, \quad e_s^\prime=e_s-2, \quad o_r^\prime=o_r,\quad e_r^\prime=e_r.$$ Therefore, $$\kappa(\mathcal{C}')=o_s'+2e_s'+\max(o_r',e_r'+((e_r'+o_r')\ \mathrm{mod}\ 2))=\kappa(\mathcal{C})-2.$$ The desired inequality immediately follows.

\[varyzero3-5\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(e,o)} \atop\rule{0pt}{10pt} {(\emptyset,e)}}}\right] \rightarrow \left[\tiny{{{(\emptyset,o)} \atop\rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right)$, then $\kappa({\mathcal{C}'})\leq \kappa({\mathcal{C}})$.
Under this transformation we have $$o_s^\prime=o_s-1, \quad e_s^\prime=e_s, \quad o_r^\prime=o_r+1,\quad e_r^\prime=e_r-1.$$ Since $o_r+e_r=\ell(\lambda)-\ell(\mu)$ is invariant, there are two cases. **Case I.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 0\ (\mathrm{mod}\ 2)$. - If $o_r\geq e_r$, then $o_r^\prime\geq e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+o_r^\prime =o_s-1+2e_s+o_r+1 =\kappa(\mathcal{C}).\end{aligned}$$ - If $o_r\leq e_r-2$, then $o_r^\prime=o_r+1\leq e_r-1=e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s-1+2e_s+e_r^\prime=o_s+2e_s+e_r-2<\kappa(\mathcal{C}).\end{aligned}$$ **Case II.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 1\ (\mathrm{mod}\ 2)$. - If $o_r\geq e_r+1$, then $o_r^\prime=o_r+1\geq e_r+2>e_r^\prime+1$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+o_r^\prime=o_s+2e_s+o_r=\kappa(\mathcal{C}).\end{aligned}$$ - If $o_r\leq e_r-1$, then $o_r^\prime=o_r+1\leq e_r=e_r^\prime+1$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+e_r^\prime+1=o_s+2e_s+e_r-1<\kappa(\mathcal{C}).\end{aligned}$$ Hence the proof is complete. \[varyzero5-8\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(o,o)} \atop\rule{0pt}{10pt} {(\emptyset,o)}}}\right] \rightarrow \left[\tiny{{{(\emptyset,e)} \atop\rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right)$, then $\kappa({\mathcal{C}'})\leq \kappa({\mathcal{C}})$. In this case we have $$o_s^\prime=o_s-1, \quad e_s^\prime=e_s, \quad o_r^\prime=o_r-1,\quad e_r^\prime=e_r+1.$$ There are two possibilities: **Case I.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 0\ (\mathrm{mod}\ 2)$. 
- If $o_r\geq e_r+2$, then $o_r^\prime=o_r-1\geq e_r+1=e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+o_r^\prime =o_s-1+2e_s+o_r-1<\kappa(\mathcal{C}).\end{aligned}$$ - If $o_r\leq e_r$, then $o_r^\prime=o_r-1\leq e_r-1<e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s-1+2e_s+e_r^\prime=o_s+2e_s+e_r=\kappa(\mathcal{C}).\end{aligned}$$ **Case II.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 1\ (\mathrm{mod}\ 2)$. - If $o_r\geq e_r+3$, then $o_r^\prime=o_r-1\geq e_r+2=e_r^\prime+1$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+o_r^\prime=o_s+2e_s+o_r-2< \kappa(\mathcal{C}).\end{aligned}$$ - If $o_r\leq e_r+1$, then $o_r^\prime=o_r-1\leq e_r<e_r^\prime+1$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+e_r^\prime+1=o_s+2e_s+e_r+1=\kappa(\mathcal{C}).\end{aligned}$$ Therefore, in both cases we have $\kappa({\mathcal{C}'})\leq \kappa({\mathcal{C}})$. \[varyzero2-3\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(e,o)} \atop\rule{0pt}{10pt} {(o,o)}}}\right] \rightarrow \left[\tiny{{{(o,e)} \atop\rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right)$ or $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(o,o)} \atop\rule{0pt}{10pt} {(e,o)}}}\right] \rightarrow \left[\tiny{{{(e,e)} \atop\rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right)$, then $\kappa({\mathcal{C}'})= \kappa({\mathcal{C}})$. In each case we have $$o_s^\prime=o_s-2, \quad e_s^\prime=e_s+1, \quad o_r^\prime=o_r,\quad e_r^\prime=e_r.$$ Therefore $$\kappa(\mathcal{C}')=o_s'+2e_s'+\max(o_r',e_r'+((e_r'+o_r')\ \mathrm{mod}\ 2))=\kappa(\mathcal{C}),$$ as desired. 
\[varyzero1-7\] If $\mathcal{C}'$ is one of the following possible cases: $$\begin{array}{ccc} \mathcal{C}\left( \left[\tiny{{{(e,e)} \atop \rule{0pt}{10pt} {(e,e)}}}\right] \rightarrow \left[\tiny{{{(e,e)} \atop \rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right), & \mathcal{C}\left( \left[\tiny{{{(e,e)} \atop \rule{0pt}{10pt} {(o,o)}}}\right] \rightarrow \left[\tiny{{{(o,o)} \atop\rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right), & \mathcal{C}\left( \left[\tiny{{{(e,o)} \atop \rule{0pt}{10pt} {(e,e)}}}\right] \rightarrow \left[\tiny{{{(e,o)} \atop \rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right),\\[10pt] \mathcal{C}\left( \left[\tiny{{{(e,e)} \atop\rule{0pt}{10pt} {(\emptyset,e)}}}\right] \rightarrow \left[\tiny{{{(\emptyset,e)} \atop\rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right), & \mathcal{C}\left( \left[\tiny{{{(o,o)} \atop \rule{0pt}{10pt} {(o,e)}}}\right] \rightarrow \left[\tiny{{{(o,o)} \atop \rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right), & \mathcal{C}\left( \left[\tiny{{ {(o,e)} \atop\rule{0pt}{10pt} {(e,o)}}}\right] \rightarrow \left[\tiny{{{(e,o)} \atop\rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right),\\[10pt] \mathcal{C}\left( \left[\tiny{{{(o,e)} \atop\rule{0pt}{10pt} {(o,e)}}}\right] \rightarrow \left[\tiny{{{(o,e)} \atop\rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right), & \mathcal{C}\left( \left[\tiny{{{(o,e)} \atop\rule{0pt}{10pt} {(\emptyset,o)}}}\right] \rightarrow \left[\tiny{{{(\emptyset,o)} \atop \rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right),& \end{array}$$ then $\kappa({\mathcal{C}'})< \kappa({\mathcal{C}})$. In each case we have $$o_s^\prime=o_s, \quad e_s^\prime=e_s-1, \quad o_r^\prime=o_r,\quad e_r^\prime=e_r.$$ Therefore $$\kappa(\mathcal{C}')=o_s'+2e_s'+\max(o_r',e_r'+((e_r'+o_r')\ \mathrm{mod}\ 2))<\kappa(\mathcal{C}),$$ as required. Note that Lemmas \[varyzero1-1\]-\[varyzero1-7\] cover all possible transformations of exchanging the locations of $0$’s in two involved rows. 
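Since $\kappa$ depends only on the quadruple $(o_s,e_s,o_r,e_r)$, the effect of each exchange can also be checked mechanically. The following Python sketch (ours, not part of the original argument; the parameters are ranged over small values only) verifies the exact drop of $2$ in Lemma \[varyzero1-4\] and the inequality in Lemma \[varyzero1-6\]:

```python
def kappa(o_s, e_s, o_r, e_r):
    """kappa(C) = o_s + 2*e_s + max(o_r, e_r + ((e_r + o_r) mod 2))."""
    return o_s + 2 * e_s + max(o_r, e_r + (o_r + e_r) % 2)

# Lemma [varyzero1-4]: (o_s, e_s, o_r, e_r) -> (o_s + 2, e_s - 2, o_r, e_r)
# lowers kappa by exactly 2.
for o_s in range(6):
    for e_s in range(2, 6):
        for o_r in range(6):
            for e_r in range(6):
                assert kappa(o_s + 2, e_s - 2, o_r, e_r) == kappa(o_s, e_s, o_r, e_r) - 2

# Lemma [varyzero1-6]: (o_s, e_s, o_r, e_r) -> (o_s + 1, e_s - 1, o_r - 1, e_r + 1)
# never increases kappa.
for o_s in range(6):
    for e_s in range(1, 6):
        for o_r in range(1, 6):
            for e_r in range(6):
                assert kappa(o_s + 1, e_s - 1, o_r - 1, e_r + 1) <= kappa(o_s, e_s, o_r, e_r)
```

The same loop pattern verifies the remaining lemmas by substituting the corresponding changes of the four parameters.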
Lemmas \[varyzero1-6\]-\[varyzero1-4\] imply that, to minimize the number of bars, we should place the $0$’s in the skew shifted diagram so that there are as many rows as possible in which the first several squares are filled with $0$’s, followed by an odd number of blank squares. Meanwhile, from Lemmas \[varyzero3-5\]-\[varyzero1-7\] we know that the number of rows fully filled with $0$’s should be as large as possible. Based on these observations, we have the following algorithm to determine the locations of the $0$’s for a given skew partition $\lambda/\mu$, where both $\lambda$ and $\mu$ are strict partitions. Using this algorithm we will obtain a shifted diagram with some squares filled with $0$’s such that the corresponding quantity $\kappa(\mathcal{C})$ is minimized. This property allows us to determine the srank of $\lambda/\mu$.

[**The Algorithm for Determining the Locations of $0$’s:**]{}

- Let $\mathcal{C}_1=S(\lambda)$ be the initial configuration of $\lambda/\mu$ with blank squares. Set $i=1$ and $J=\{1,\ldots,\ell(\lambda)\}$.

- For $i\leq \ell(\mu)$, iterate the following procedure:

  - If $\mu_i=\lambda_j$ for some $j\in J$, then we fill the $j$-th row of $\mathcal{C}_i$ with $0$.

  - If $\mu_i\neq \lambda_j$ for every $j\in J$, then there are two possibilities.

    - $\lambda_j-\mu_i$ is odd for some $j\in J$ with $\lambda_j>\mu_i$. Then we take the largest such $j$ and fill the leftmost $\mu_i$ squares with $0$ in the $j$-th row of $\mathcal{C}_i$.

    - $\lambda_j-\mu_i$ is even for every $j\in J$ with $\lambda_j>\mu_i$. Then we take the largest such $j$ and fill the leftmost $\mu_i$ squares with $0$ in the $j$-th row of $\mathcal{C}_i$.

  Denote the new configuration by $\mathcal{C}_{i+1}$. Set $J=J\backslash \{j\}$.

- Set $\mathcal{C}^{*}=\mathcal{C}_{i}$, and we get the desired configuration.

It should be emphasized that although the above algorithm does not necessarily generate a bar tableau, it is sufficient for the computation of the srank of a skew partition.
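For illustration, the algorithm can be sketched in Python as follows (the function names, the encoding of a configuration by the number of $0$’s per row, and the example partitions are our own, not part of the paper):

```python
def place_zeros(lam, mu):
    """Greedy placement of 0's for the skew shape lam/mu.

    lam, mu are strict partitions given as decreasing lists; the parts of
    mu are processed in order, as in the algorithm above.  Returns a list
    zeros with zeros[j] = number of 0's placed in row j (0-indexed).
    """
    J = set(range(len(lam)))
    zeros = [0] * len(lam)
    for m in mu:
        exact = [j for j in J if lam[j] == m]
        if exact:
            j = exact[0]                        # fill the whole row with 0's
        else:
            cand = [j for j in J if lam[j] > m]
            odd = [j for j in cand if (lam[j] - m) % 2 == 1]
            # prefer a row leaving an odd number of blanks; take the largest index
            j = max(odd) if odd else max(cand)
        zeros[j] = m
        J.discard(j)
    return zeros


def kappa_of(lam, zeros):
    """kappa(C) = o_s + 2*e_s + max(o_r, e_r + ((e_r + o_r) mod 2))."""
    o_s = e_s = o_r = e_r = 0
    for size, z in zip(lam, zeros):
        blanks = size - z
        if blanks == 0:
            continue                            # row fully filled with 0's
        if z == 0:                              # row without 0's
            o_r, e_r = (o_r + 1, e_r) if blanks % 2 else (o_r, e_r + 1)
        else:                                   # row with 0's and blank squares
            o_s, e_s = (o_s + 1, e_s) if blanks % 2 else (o_s, e_s + 1)
    return o_s + 2 * e_s + max(o_r, e_r + (o_r + e_r) % 2)


zeros = place_zeros([5, 4, 3, 1], [3, 2])   # -> [2, 0, 3, 0]
# Row 3 (size 3) is filled entirely; the two 0's land in row 1 (5 - 2 = 3 is odd).
```

For this hypothetical shape, `kappa_of([5, 4, 3, 1], zeros)` evaluates to $2$, the minimized value of $\kappa(\mathcal{C}^*)$.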
Using the arguments in the proofs of Lemmas \[varyzero1-1\]-\[varyzero1-7\], we can derive the following crucial property of the configuration $\mathcal{C}^*$. The proof is omitted since it is tedious and straightforward. \[prop-min\] For any configuration ${\mathcal{C}}$ of $0$’s in the skew shifted diagram of $\lambda/\mu$, we have $\kappa({\mathcal{C}^*})\leq \kappa({\mathcal{C}})$. \[number of skew\] Given a skew partition $\lambda/\mu$, let $\mathcal{C}^*$ be the configuration of $0$’s obtained by applying the algorithm described above. Then $$\label{srank} {\rm srank}(\lambda/\mu)=\kappa({\mathcal{C}^*}).$$ Suppose that for the configuration ${\mathcal{C}^*}$ there are $o_r^*$ rows of odd size with blank squares, and there are $o_s^*$ rows with at least one square filled with $0$ and an odd number of squares filled with positive integers. Likewise we let $e_r^*$ and $e_s^*$ denote the number of remaining rows. Therefore, $$\kappa(\mathcal{C}^*)=o_s^*+2e_s^*+\max(o_r^*,e_r^*+((e_r^*+o_r^*)\ \mathrm{mod}\ 2)).$$ Since for each configuration $\mathcal{C}$ the number of bars in a minimal bar tableau is greater than or equal to $\kappa({\mathcal{C}})$, by Proposition \[prop-min\], it suffices to confirm the existence of a skew bar tableau, say $T$, with $\kappa({\mathcal{C}^*})$ bars. Note that it is possible that the configuration ${\mathcal{C}^*}$ is not admissible. The key idea of our proof is to move $0$’s in the diagram such that the resulting configuration ${\mathcal{C}'}$ is admissible and $\kappa({\mathcal{C}'})=\kappa({\mathcal{C}^*})$. To achieve this goal, we will use the numbers $\{1,2,\ldots,\kappa({\mathcal{C}^*})\}$ to fill up the blank squares of $\mathcal{C}^*$ guided by the rule that the bars of Type $2$ or Type $3$ will occur before bars of Type $1$. Let us consider the rows without $0$’s, and there are two possibilities: (A) $o_r^*\geq e_r^*$, (B) $o_r^*<e_r^*$. 
In Case (A) we choose a row of even size and a row of odd size, and fill up these two rows with $\kappa({\mathcal{C}^*})$ to generate a bar of Type $3$. Then we continue to choose a row of even size and a row of odd size, and fill up these two rows with $\kappa({\mathcal{C}^*})-1$. Repeat this procedure until all even rows are filled up. Finally, we fill the remaining rows of odd size with $\kappa({\mathcal{C}^*})-e_r^*, \kappa({\mathcal{C}^*})-e_r^*-1, \ldots, \kappa({\mathcal{C}^*})-o_r^*+1$ to generate bars of Type $2$. In Case (B) we choose the row with the $i$-th smallest even size and the row with the $i$-th smallest odd size and fill their squares with the number $\kappa({\mathcal{C}^*})-i+1$ for $i=1,\ldots,o_r^*$. In this way, we obtain $o_r^*$ bars of Type $3$. Now consider the remaining rows of even size without $0$’s. There are two subcases.

- The remaining diagram, obtained by removing the previous $o_r^*$ bars of Type $3$, does not contain any row with only one square. Under this assumption, it is possible to fill the squares of a row of even size with the number $\kappa({\mathcal{C}^*})-o_r^*$ except the leftmost square. This operation will result in a bar of Type $1$. After removing this bar from the diagram, we may combine this leftmost square of the current row with another row of even size, if it exists, to generate a bar of Type $3$. Repeating this procedure until there are no more rows of even size, we obtain a sequence of bars of Type $1$ and Type $3$. Evidently, there is a bar of Type $2$ with only one square. To summarize, we have $\max(o_r^*,e_r^*+((e_r^*+o_r^*)\ \mathrm{mod}\ 2))$ bars.

- The remaining diagram contains a row composed of the unique square filled with $0$. In this case, we will move this $0$ into the leftmost square of a row of even size, see Figure \[case2-2\]. Denote this new configuration by $\mathcal{C}^{\prime}$, and from Lemma \[varyzero5-8\] we see that $\kappa({\mathcal{C}^*})=\kappa({\mathcal{C}^{\prime}})$.
If we start with ${\mathcal{C}'}$ instead of ${\mathcal{C}^*}$, by a similar construction, we get $\max(o_r',e_r'+((e_r'+o_r')\ \mathrm{mod}\ 2))$ bars, occupying the rows without $0$’s in the diagram.

*Figure \[case2-2\]: the $0$ occupying a one-square row is moved to the leftmost square of a row of even size.*

Without loss of generality, we may assume that for the configuration ${\mathcal{C}^*}$ the rows without $0$’s in the diagram have been occupied by the bars with the first $\max(o_r^*,e_r^*+((e_r^*+o_r^*)\ \mathrm{mod}\ 2))$ positive integers in decreasing order, namely, $(\kappa({\mathcal{C}^*}), \ldots, 2, 1, 0)$. By removing these bars and reordering the remaining rows, we may get a shifted diagram with which we can continue the above procedure to construct a bar tableau. At this point, it is necessary to show that it is possible to use $o_s^*+2e_s^*$ bars to fill this diagram. In doing so, we process the rows from bottom to top. If the bottom row has an odd number of blank squares, then we simply assign the symbol $o_s^*+2e_s^*$ to these squares to produce a bar of Type $1$. If the bottom row is completely filled with $0$’s, then we continue to deal with the row above the bottom row. Otherwise, we fill the rightmost square of the bottom row with $o_s^*+2e_s^*$ and the remaining squares with $o_s^*+2e_s^*-1$. Suppose that we have filled $i$ rows from the bottom and all the involved bars have been removed from the diagram.
Then we consider the $(i+1)$-th row from the bottom. Let $t$ denote the largest number not greater than $o_s^*+2e_s^*$ which has not been used before. If all squares in the $(i+1)$-th row are filled with $0$’s, then we continue to deal with the $(i+2)$-th row. If the number of blank squares in the $(i+1)$-th row is odd, then we fill these squares with $t$. If the number of blank squares in the $(i+1)$-th row is even, then we are left with two cases:

- The rows of the diagram obtained by removing the rightmost square of the $(i+1)$-th row have distinct lengths. In this case, we fill the rightmost square with $t$ and the remaining blank squares of the $(i+1)$-th row with $t-1$.

- The removal of the rightmost square of the $(i+1)$-th row does not result in a bar tableau. Suppose that the $(i+1)$-th row has $m$ squares in total. It can only happen that the row underneath the $(i+1)$-th row has $m-1$ squares and all these squares are filled with $0$’s. By interchanging the locations of the $0$’s in these two rows, we get a new configuration $\mathcal{C}^{\prime}$, see Figure \[case2’\]. From Lemma \[varyzero2-3\] we deduce that $\kappa({\mathcal{C}^*})=\kappa({\mathcal{C}^{\prime}})$. So we can transform ${\mathcal{C}^*}$ to ${\mathcal{C}'}$ and continue to fill up the $(i+1)$-th row.

*Figure \[case2’\]: the locations of the $0$’s in the $(i+1)$-th row and in the row of $m-1$ squares beneath it are interchanged.*

Finally, we arrive at a shifted diagram whose rows are all filled up. Clearly, for those rows containing at least one $0$ there are $o_s^*+2e_s^*$ bars generated in the construction, and for those rows containing no $0$’s there are $\max(o_r^*,e_r^*+((e_r^*+o_r^*)\ \mathrm{mod}\ 2))$ bars generated.
It has been shown that, during the procedure of filling the diagram with nonnegative numbers, whenever the configuration ${\mathcal{C}^*}$ is transformed into another configuration ${\mathcal{C}^{\prime}}$, the quantity $\kappa({\mathcal{C}^\prime})$ remains equal to $\kappa({\mathcal{C}^*})$. Hence the above procedure leads to a skew bar tableau of shape $\lambda/\mu$ with $\kappa({\mathcal{C}^*})$ bars. This completes the proof.

[**Acknowledgments.**]{} This work was supported by the 973 Project, the PCSIRT Project of the Ministry of Education, the Ministry of Science and Technology, and the National Science Foundation of China.
The independence day of Machines from Metalfoes.

4th Baya CS, Domestic Tournament, 07/03/2016, 88 Players
1st Place: Qliphort
2nd Place: D/D
3rd Place: Speedroid Phantom Abyss
4th Place: D/D

G-Project Card Fight Kokuraminami Shop, Local Tournament, 07/03/2016
1st Place: ABC

Card Kingdom Tokushima Local Tournament, 06/25/2016, 8 Players
1st Place: Symphonic ABC

1st Orange CS, Domestic Tournament, 07/02/2016, 102 Players (34 Teams)
1st Place, Player A: D/D
1st Place, Player B: Blue-Eyes
1st Place, Player C: ABC
2nd Place, Player A: Speedroid Phantom Abyss
2nd Place, Player B: Blue-Eyes
2nd Place, Player C: ABC
3rd Place, Player A: Metalphose
3rd Place, Player B: Blue-Eyes
3rd Place, Player C: ABC
4th Place, Player A: Symphonic ABC
4th Place, Player B: D/D
4th Place, Player C: Monarch

14th Duelist King Hobby Stage CS, Domestic Tournament, 07/03/2016, 72 Players
1st Place: ABC
2nd Place: Quickdraw Junk Doppel
3rd Place: Blue-Eyes
4th Place: D/D
5th Place: ABC
6th Place: Lightsworn Shiranui
7th Place: Monarch
8th Place: D/D

133rd Alann Cup, Domestic Tournament, 07/02/2016, 37 Players
1st Place: Blue-Eyes
2nd Place: ABC
3rd Place: ABC
4th Place: Metalphose

1st JGP Iwate, Domestic Tournament, 07/02/2016, 89 Players
1st Place: D/D
2nd Place: ABC
4th Place: Blue-Eyes
5th Place: D/D
6th Place: Blue-Eyes
7th Place: D/D
8th Place: Speedroid Phantom Abyss

3rd Rikuzen CS, Domestic Tournament, 06/26/2016, 66 Players
1st Place: Metalphose
2nd Place: D/D
3rd Place: D/D
4th Place: Speedroid Metalphose
5th Place: Deskbot
6th Place: ABC
7th Place: D/D
8th Place: D/D
9th Place: D/D
10th Place: ABC
11th Place: Metalphose
12th Place: ABC
13th Place: ABC
14th Place: Speedroid Metalphose
15th Place: D/D
16th Place: Metalphose

2nd Hakata with Hatti CS, Domestic Tournament, 06/25/2016, 96 Players (32 Teams)
1st Place, Player A: Blue-Eyes
1st Place, Player B: Metalphose
1st Place, Player C: Speedroid Phantom Abyss
2nd Place, Player A: ABC
2nd Place, Player B: Metalphose
2nd Place, Player C: Deskbot
3rd Place, Player A: ABC
3rd Place, Player B: DARK Alive
3rd Place, Player C: ABC

8th University Circle Duel Tournament, Domestic Tournament, 06/25/2016
1st Place, Player A: Blue-Eyes
1st Place, Player B: D/D
2nd Place, Player A: ???
2nd Place, Player B: Metalphose

Battle City Super League, Local Tournament, 06/26/2016, 44 Players (China)
1st Place: ABC
2nd Place: Blue-Eyes
3rd Place: Metalphose
4th Place: Artifact HERO
5th Place: Blue-Eyes
6th Place: Blue-Eyes
7th Place: Blue-Eyes
8th Place: Speedroid Phantom Abyss

Card Strike, Local Tournament, 06/25/2016, 15 Players
1st Place: ABC

Gameshop VITA Kurume Shop, Local Tournament, 06/25/2016
1st Place: ABC

Saisai Shop, Local Tournament, 06/26/2016, 9 Players
1st Place: ABC
A typical form of apparatus for testing non-rotationally symmetrical hollow bodies for defects comprises a feed means for continuously conveying the hollow bodies into a test region of the apparatus, a means for conveying the hollow bodies through the test region, and a discharge means for conveying the hollow bodies out of the test region. Such an apparatus is used when glass bodies of the above-indicated kind are to be checked and inspected, in an automated procedure after their production, for surface defects such as cracks or irregular wall thicknesses, and when glass bodies found to be defective are to be separated out.
Introduction
============

Diabetes mellitus is a common endocrine metabolic disease estimated to affect 629 million individuals in 2045 according to the International Diabetes Federation. Type 2 diabetes mellitus (T2DM) is the most prevalent form of diabetes; it accounts for 90% of diagnosed cases and is associated with insulin resistance and chronic hyperglycemia ([@B13]). Many clinical studies have reported a broad spectrum of lower urinary tract symptoms in patients with T2DM ([@B16]), which accounts for 90--95% of all diabetes cases ([@B17]). Diabetic bladder dysfunction (DBD) is a major lower urinary tract complication of diabetes and was first described by [@B26]. This complication is traditionally described as a triad of increased capacity, decreased sensation, and poor emptying ([@B11]) and affects over 50% of diabetic patients ([@B12]; [@B28]). DBD development is divided into two phases: the compensated phase, which occurs early and is characterized by an overactive bladder (OAB); and the decompensated phase, which occurs late and is characterized by an atonic bladder ([@B36]; [@B24]). The pathogenesis of DBD is multifactorial and is accompanied by structural and functional impairments of the bladder ([@B37]). Structural remodeling of the bladder in DBD, such as increases in bladder capacity, total bladder wall thickness, and smooth muscle content, was observed in streptozotocin (STZ)-induced diabetic mice. Such remodeling may be a physical adaptation to accommodate the increased urine volume ([@B22]). The two major functions of the bladder are urine storage and urine voiding, and the uncoordinated contractions of the diabetic OAB greatly impair the urine storage ability of this organ ([@B10]). Bladder contraction is mainly mediated by purinergic and cholinergic pathways ([@B23]). In particular, ATP is the purinergic messenger released from varicosities or bulbous nerve endings of neurons, and the contractile responses mediated by ATP play a key role in DBD ([@B44]).
Solute carrier family 17 member 9 (SLC17A9) is a member of the solute-carrier protein family that plays an indispensable role in the vesicular storage of ATP ([@B31]; [@B20]). The translocation of neurotransmitter-filled vesicles to the varicose terminal is the first step in the release of vesicular neurotransmitters; the vesicles then merge with the membrane of the varicose terminal and release their contents precisely and rapidly into the synaptic cleft ([@B33]). In addition, the motive force for transporting vesicles to the varicose membrane within cells is mainly provided by myosin motors, particularly myosin Va ([@B2]). Several studies have found that purinergic inhibitory neurotransmission is impaired in myosin Va-deficient mice ([@B6]). This finding suggests that myosin Va plays an important role in purinergic neurotransmission ([@B3]).

Diabetic bladder dysfunction, particularly OAB, is not life threatening to humans. However, it seriously affects the quality of life of patients ([@B24]). The treatment methods for DBD change as the disease progresses. Anticholinergic drugs, such as tolterodine and solifenacin, are the main treatment options for DBD patients with OAB. However, many side effects, including dry mouth, dry eyes, and memory loss, occur after treatment with anticholinergic drugs, leaving patients with a poor quality of life. In the late phase, surgical intervention is the only therapeutic option for patients who do not benefit from pharmacological and behavioral treatments ([@B45]). However, pharmacological and surgical interventions have been largely ineffective in the clinic ([@B12]; [@B40]). Therefore, new effective treatments for DBD are urgently needed. In the treatment of diabetic OAB, traditional Chinese medicine and natural plant components have recently attracted increasing attention owing to their safety, few side effects, and excellent activity ([@B41]).
Suo Quan Wan (SQW) is a traditional Chinese herbal formula that was first recorded in *Fu Ren Liang Fang* in the Southern Song Dynasty (between 1127 and 1279 CE). This medicine is a mixture of three Chinese medicinal materials at a 1:1:1 ratio: *roots of Lindera aggregata (Sims) Kosterm. (Lauraceae), roots of Alpinia oxyphylla Miq. (Zingiberaceae), and rhizomes of Dioscorea oppositifolia L. (Dioscoreaceae)* ([@B14]). SQW has been used for hundreds of years to treat lower urinary tract symptoms, such as nocturia, urgency, and childhood bedwetting ([@B4]). We recently reported that SQW had therapeutic effects on OAB in bladder outlet obstruction rat models by modulating TRPV1 expression ([@B18]). In China, SQW is often used in the clinical treatment of diabetic OAB. However, its mechanism remains unclear, and its therapeutic effect has not been investigated in animal studies. Therefore, we designed experiments to explore the effects and therapeutic mechanisms of SQW in a diabetic OAB mouse model.

Materials and Methods {#s1}
=====================

Reagents and Materials
----------------------

Suo Quan Wan was purchased from Hunan Hansen Pharmaceutical Co., Ltd. (China), and quality control was performed by the company according to the Chinese Pharmacopoeia, employing high-performance liquid chromatography (HPLC) on SQW samples ([@B9]). The three Chinese herbs were ground, mixed evenly at a 1:1:1 ratio, and blended with appropriate volumes of distilled water to prepare the SQW compound. The doses were adopted according to the Experimental Methodology of Pharmacology and converted from clinical usage by the Bios method ([@B39]). The high dose (SQW H) was 2.208 g/kg, the medium dose (SQW M) was 1.104 g/kg, and the low dose (SQW L) was 0.552 g/kg. The tolterodine dose for the positive group was 0.82 mg/kg. Streptozotocin was purchased from TOKU-E Co., Ltd. (Japan). The high-fat diet (HFD, 45% fat) and control diet were purchased from Guangdong Medical Laboratory Animal Center (China). A Roche dynamic blood glucose (Bg) meter was purchased from Hoffmann-La Roche Inc. (Switzerland), and carbachol was obtained from Shandong Bausch & Lomb Freda Pharmaceutical Co., Ltd. (China).
Tolterodine was purchased from Chengdu Dikang Pharmaceutical Co., Ltd. (China). α,β-Methylene ATP was purchased from Tocris Bio-Techne Ltd. (United Kingdom). The FastQuant RT Kit (with gDNase) and Talent qPCR PreMix (SYBR Green) were purchased from TIANGEN Biotech (Beijing) Co., Ltd. (China). TRIzol reagent was purchased from Thermo Fisher Scientific (United States). RIPA lysis buffer and protease inhibitor cocktail (100×) were obtained from CoWin Biosciences (China). All other reagents used were of analytical grade.

Preparation and HPLC Conditions of SQW
--------------------------------------

SQW samples (0.3 g) were extracted with 25 mL of a methanol-hydrochloric acid solution by the heating reflux method, and the solution was then cooled. Finally, the solution was filtered through 0.45 μm nylon membranes before injection. According to the Chinese Pharmacopoeia (2015), the content of norisoboldine should be more than 0.4 mg/0.3 g, and the content of allantoin should be more than 0.48 mg/0.3 g. The HPLC conditions and gradient elution programs are shown in [Tables 1](#T1){ref-type="table"}, [2](#T2){ref-type="table"}.

###### Chromatographic condition for norisoboldine.

| Parameter | Setting |
| --- | --- |
| Column | C18 (25°C) |
| Solvent A | Acetonitrile |
| Solvent B | 0.5% formic acid and 0.1% triethylamine |
| Flow rate | 1.0 mL/min |
| Wavelength | 280 nm |

| Time (min) | A (%) | B (%) |
| --- | --- | --- |
| 0 | 10 | 90 |
| 13 | 22 | 78 |
| 30 | 22 | 78 |

###### Chromatographic condition for allantoin.
  Column       C18 (25°C)
  Solvent A    Methanol
  Solvent B    H~2~O
  Flow rate    1.0 mL/min
  Wavelength   191 nm

  Time (min)   A (%)   B (%)
  ------------ ------- -------
  0            8       92
  10           10      90
  20           10      90

Animal Model and Treatment
--------------------------

All experimental protocols and animal procedures complied with the ethical principle guidelines of the National Research Council. A total of 100 male C57BL/6J mice (18--22 g) were purchased from Beijing Vital River Laboratory Animal Technology Co., Ltd. and housed in the Experimental Animal Center of Guangzhou University of Chinese Medicine (No. S2017051, Guangzhou, China) at room temperature under a 12 h/12 h light--dark cycle, with free access to food and water. The animals were fed a normal diet for 3 days and then divided into two groups, namely, diabetic (*n* = 85) and control (*n* = 15) groups. The mice in the diabetic group were fed with HFD, whereas those in the control group received normal diet. After 4 weeks of feeding, the mice in the diabetic group were injected with STZ (100 mg/kg, dissolved in 0.05 M citrate buffer, pH 4.3--4.5) four times. The mice in the control group were treated with an equal volume of vehicle (0.05 M citric acid, pH 4.3--4.5). Fasting Bg (FBG) was measured using an ACCU-CHEK Advantage Bg monitoring system (Roche, Indianapolis, IN, United States) through the tail vein 72 h after the last injection. The mice with FBG levels above 11.1 mmol/L were considered diabetic and selected for subsequent experiments. The mice in the diabetic group were divided into five groups: model (*n* = 13), positive (tolterodine, 0.82 mg/kg, *n* = 13), high-dose (SQW H, 2.208 g/kg, *n* = 13), medium-dose (SQW M, 1.104 g/kg, *n* = 13), and low-dose (SQW L, 0.552 g/kg, *n* = 13) groups. After 3 weeks of feeding, the six mouse groups received oral gavage of distilled water (control and model groups), tolterodine, SQW H, SQW M, or SQW L, respectively, for 3 weeks.
During the experiment, the mice in the control group were given normal diet, whereas those in the other groups were continually fed with HFD.

FBG Test and Oral Glucose Tolerance Test (OGTT)
-----------------------------------------------

FBG testing and OGTT were conducted after the 3-week SQW treatment. All animals were fasted overnight, and the Bg concentration was measured using a glucometer (ACCU-CHEK Active) on a drop of tail blood. All the mice were then given glucose (2 mg/g body weight) by gavage, and tail blood samples were obtained at 0, 15, 30, 60, 90, and 120 min to measure the Bg concentration. The area under the curve of the Bg time course from 0 to 120 min (AUC~0-2~ ~h~) was calculated according to the following formula: AUC~0-2~ ~h~ = \[(Bg~0~ + Bg~15~) × 0.5\] ÷ 4 + \[(Bg~15~ + Bg~30~) × 0.5\] ÷ 4 + \[(Bg~30~ + Bg~60~) × 0.5\] ÷ 2 + \[(Bg~60~ + Bg~120~) × 0.5\].

Measurement of Water Intake, Urine Output, and Frequency
--------------------------------------------------------

The mice were placed individually in metabolic cages for 24 h with food and water *ad libitum*. Water intake was measured based on the water consumption over 24 h. Urine output and micturition frequency were analyzed through the VSOP test. Urine output was measured by evaluating the volume of urine in the collector after the mice were placed in the cages for 5 h. Micturition frequency was measured by visualizing and analyzing the collecting papers, which were placed under the metabolic cage for 5 h, under ultraviolet light to identify the area of urine spots ([@B38]). The urine spots were divided into two levels by size, namely, bigger volume (\>50 μL) and smaller volume (\<50 μL) voids, to measure the micturition frequency.

Urodynamic Test
---------------

The urodynamic test was performed using a micro-injection pump and urodynamic measuring device (Laborite Delphis 94-R01-BT, Canada).
All mice were anesthetized by the intraperitoneal injection of 25% urethane (2.0 mg/kg). A ventral midline incision was made to expose the bladder, and a 25-gauge needle was inserted into the bladder dome and fixed with silk suture. The needle was connected through a three-way adapter, which was connected with the urodynamic device at one end and a micro-injection pump at the other. After the bladder was emptied, 0.9% saline solution was infused into the bladder through the micro-injection pump at a rate of 3 mL/h. Pumping was stopped immediately when urine was observed at the external urethra. The bladder pressure trace was automatically recorded with a computer. Urodynamic test parameters included the frequency of NVC (spontaneous bladder contractions higher than 4 cm H~2~O that did not result in urination, counted before the first urination), MBC (the volume of saline pumped before the first urination), maximum voiding pressure (MP, the maximum peak pressure reached during micturition), RV (manually drained and measured with a 1 mL syringe), MV (calculated as MBC - RV), VE (calculated as \[(MBC - RV)/MBC\] × 100%), and BC \[calculated as (MBC/MP) × 100%\]. The mice were euthanized at the end of the experiment through cervical dislocation ([@B19]).

Histological Test
-----------------

After the mice were euthanized, the bladders were excised and fixed in 4% paraformaldehyde solution for approximately 24 h at room temperature. After fixation, the bladders were conventionally dehydrated and embedded in paraffin. The tissues were cut into 6 μm sections and stained with hematoxylin and eosin (HE) and Masson's trichrome. Color segmentation of the Masson's trichrome images was used to identify the whole cross-sectional area and the tissue areas stained "pink" (urothelium), "blue" (collagen), and "red" (smooth muscle). The HE images (100×) were used to determine the BWT, whereas the Masson's trichrome-stained images (100×) were used to measure the smooth-muscle-to-collagen ratio.
The stained bladder sections were examined under a light microscope, and representative images were photographed with a digital camera mounted on the microscope. All the images were analyzed using image analysis software (Image-Pro 6.0).

Assessment of Detrusor Smooth Muscle Contractility *in vitro*
-------------------------------------------------------------

The mice were euthanized, and the bladders were excised at the bladder neck. Full-thickness longitudinal DSM strips (0.7--1 mm × 5 mm) were obtained and mounted in a 5 mL organ bath filled with Krebs-Henseleit solution (NaCl, 118 mM; KCl, 4.75 mM; MgSO~4~, 1.18 mM; NaHCO~3~, 24.8 mM; KH~2~PO~4~, 1.18 mM; CaCl~2~, 2.5 mM; and C~6~H~12~O~6~⋅H~2~O, 10 mM; pH = 7.4) bubbled with a mixture of 5% carbon dioxide and 95% oxygen at 37°C. One side of each strip was attached to a hook with silk suture, and the other side was connected to the force signal transducer ([@B38]). A passive tension of 0.5 g was loaded, and the tissues were equilibrated for 60 min before the experiments. The force signals of the DSM strips were recorded with a PowerLab recorder. The purinergic agonist α,β-methylene ATP (100 μM) was added to the organ bath twice (30 min between assays) to measure the difference in the contractile responses. The contraction of bladder tissue in response to electrical field stimulation (EFS; 1, 2, 4, 8, 16, 32, and 64 Hz; 40 V; 0.5 ms pulse duration for 10 s) was also measured. Furthermore, cumulative dose--response curves to carbachol (10^-8^--10^-5^ M) and the contractile response to KCl (120 mM) were obtained in the DSM strips. At the end of the experiments, the weight and length of each detrusor strip were recorded.

Real-Time RT-PCR
----------------

The total RNA from the whole bladder samples was extracted using TRIzol Reagent (Invitrogen, United States). The absorbance of the samples at 260 and 280 nm was used to estimate the RNA quality.
The A260/A280 ratio was used to check the purity, and the A260 values confirmed the concentration of RNA (Shimadzu BioSpec-nano, Japan). The total RNA was reverse transcribed into cDNA using a PrimeScript RT Reagent Kit with gDNA eraser (TIANGEN, China). Real-time PCR analysis was performed using SYBR Green (TIANGEN, China) according to the manufacturer's instructions. Synthetic oligonucleotide primers were designed to amplify the cDNA of the genes encoding myosin Va, SLC17A9, and β-actin. [Table 3](#T3){ref-type="table"} shows the primer pairs. The reaction program was as follows: 95°C for 3 min, followed by 39 cycles at 95°C for 5 s and 55°C for 10 s. Results were recorded and analyzed using the complementary software, and the gene expression levels were calculated by the 2^-ΔΔCt^ method. The target gene expression levels were individually normalized to the β-actin expression.

###### Primer sequences of myosin Va, SLC17A9, and β-actin.

  Gene              Primers (5′--3′)
  ----------------- ------------------------
  *myosin Va - F*   AGCTCAACTCCTTCCACTC
  *myosin Va - R*   ACACACCCCTTTATCCTTCC
  *SLC17A9 - F*     GCTTCCTCAAGGCTATGATCTT
  *SLC17A9 - R*     AGGTCCTGAATGTTGACTGAAA
  *β-actin - F*     CTACCTCATGAAGATCCTGACC
  *β-actin - R*     CACAGCTTCTCTTTGATGTCAC

Western Blot Analysis
---------------------

The bladder tissues were homogenized using tissue grinders (Shanghai Jingxin, Shanghai, China) at 65 Hz for 2 min to extract the total protein. A BCA protein assay kit (Beyotime Biotechnology, China) was used to measure the protein concentration. Equal amounts of protein (20 μg) were subjected to 10% or 8% SDS-PAGE at 80 V for 30 min or 120 V for 60 min, respectively, to separate proteins of different molecular weights, and then transferred to PVDF membranes using a transblotting apparatus (Bio-Rad Laboratories, Hercules, CA, United States) for 55 or 110 min, respectively, at 300 mA.
The PVDF membranes were blocked with 5% (w/v) non-fat milk buffer at room temperature for 2 h and incubated with a primary antibody in TBST \[Myosin Va (1:1000, Santa Cruz), SLC17A9 (1:1000, MBL), or β-actin (1:1000, 4A Biotech)\] overnight at 4°C. The immunolabeled membranes were washed three times with TBST for 15 min each time and then incubated with a secondary antibody (1:5000, 4A Biotech) at room temperature for 2 h. After the unbound secondary antibodies were washed away, the target protein bands were visualized using a chemiluminescent reagent (Millipore, United States). Data were processed using ImageJ, and the immunoblot protein expression levels of myosin Va and SLC17A9 were normalized to β-actin. The antibodies used in the present study are listed in [Supplementary Table 1](#SM1){ref-type="supplementary-material"}.

Statistical Analysis
--------------------

All data were expressed as mean ± SD. Statistical analyses were performed using SPSS 19.0 (SPSS, United States). The amplitude of contractile responses to stimuli was recorded as tension (newtons) and normalized by the weight (g) of the detrusor strips ([@B5]). Western blot analysis data were processed using ImageJ. Histological test images were analyzed using Image-Pro Plus 6.0, and one-way ANOVA was used for data analysis. *P* \< 0.05 was considered statistically significant.

Results
=======

HPLC Analysis of SQW
--------------------

For quality assessment of SQW, HPLC analysis was conducted. The detection wavelength was set at 280 nm for norisoboldine and at 191 nm for allantoin. The retention times of norisoboldine and allantoin were approximately 17.960 and 11.632 min, respectively ([Figures 1A](#F1){ref-type="fig"}--[D](#F1){ref-type="fig"}). According to the chromatogram results, the contents of norisoboldine and allantoin in the SQW sample were 0.72 mg/0.3 g and 0.73 mg/0.3 g, respectively, indicating that the SQW samples met the requirements.
![HPLC chromatogram of standards and samples; **(A)** norisoboldine standard; **(B)** norisoboldine sample; **(C)** allantoin standard; **(D)** allantoin sample.](fphar-10-00552-g001){#F1}

General Characteristics of the Diabetic Model
---------------------------------------------

Compared with the mice in the control group, the T2DM mice exhibited diabetes characteristics, including significantly reduced weight (*P* \< 0.01) and increased water intake (*P* \< 0.01), urine volume (*P* \< 0.01), and Bg levels \[high FBG (*P* \< 0.01), OGTT (*P* \< 0.01), and AUC~0-2h~ (*P* \< 0.01)\]. No considerable differences in these parameters were observed between the SQW and model groups ([Figures 2A](#F2){ref-type="fig"}--[F](#F2){ref-type="fig"}).

![Effects of SQW on the general characteristics of the diabetic model after the 3-week treatment (*n* = 8). **(A)** Weight, **(B)** water intake, **(C)** 5 h urine volume, **(D)** FBG, **(E)** OGTT, and **(F)** area under the curve AUC~0-2~ ~h~ (calculated according to the following formula: AUC~0-2h~ = \[(Bg0 + Bg15) × 0.5\] ÷ 4 + \[(Bg15 + Bg30) × 0.5\] ÷ 4 + \[(Bg30 + Bg60) × 0.5\] ÷ 2 + \[(Bg60 + Bg120) × 0.5\]). Data represent the means ± SD (model vs. control group, ^∗∗^*P* \< 0.01 or ^∗^*P* \< 0.05).](fphar-10-00552-g002){#F2}

VSOP and Urodynamic Tests
-------------------------

The representative urodynamic response curves of each group are presented in [Figure 3](#F3){ref-type="fig"}. The urinary voiding patterns showed that the frequencies of bigger volume voids (\>50 μL) and smaller volume voids (\<50 μL) were higher in diabetic mice than in the controls ([Figure 4A](#F4){ref-type="fig"}). Treatment with SQW M markedly decreased both frequencies (*P* \< 0.05), whereas treatments with SQW H and SQW L reduced the frequencies of smaller volume and bigger volume voids, respectively.
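The AUC~0-2~ ~h~ formula quoted in the Figure 2 caption above is the trapezoidal rule applied to the 0, 15, 30, 60, and 120 min samples (the 90 min point does not appear in the stated formula). As a reading aid, a minimal sketch with hypothetical Bg values (not the study's data):

```python
# Trapezoidal AUC over the OGTT time course, equivalent to the paper's formula.
# Bg values are hypothetical, for illustration only (mmol/L).
times_h = [0.0, 0.25, 0.5, 1.0, 2.0]   # 0, 15, 30, 60, 120 min in hours
bg = [12.0, 20.0, 24.0, 18.0, 14.0]

def auc_0_2h(times, values):
    """Sum of trapezoids between consecutive sampling points."""
    return sum((values[i] + values[i + 1]) * (times[i + 1] - times[i]) / 2
               for i in range(len(times) - 1))

print(auc_0_2h(times_h, bg))  # → 36.0
```

Each term of the published formula is one trapezoid: for example, \[(Bg0 + Bg15) × 0.5\] ÷ 4 is the mean of the two readings multiplied by the 0.25 h interval.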
In addition, the urodynamic test revealed that compared with the controls, the diabetic mice had significantly increased NVC, MBC, RV, and BC (*P* \< 0.01) but markedly decreased VE (*P* \< 0.01), showing typical DBD in the early compensated phase ([Figures 4B](#F4){ref-type="fig"}--[F](#F4){ref-type="fig"}). SQW M treatment remarkably decreased the NVC, MBC, RV, and BC (*P* \< 0.01 or *P* \< 0.05) but significantly increased the VE of the mice (*P* \< 0.01). Furthermore, treatments with SQW H and SQW L remarkably decreased the NVC of the mice (*P* \< 0.05). No significant differences in MP were found among the control, SQW-treated, and model mice ([Figure 4G](#F4){ref-type="fig"}).

![Representative urodynamic test recordings from the six groups of mice. Red arrows indicate the micturition peaks, and black arrows represent the NVC frequency.](fphar-10-00552-g003){#F3}

![VSOP and urodynamic test results in all groups (*n* = 8). **(A)** Frequencies of bigger volume (\>50 μL) and smaller volume voids (\<50 μL); **(B)** frequency of NVC before the first micturition; **(C)** MBC; **(D)** RV; **(E)** VE; **(F)** BC; and **(G)** MP. Data represent the means ± SD (model vs. control group, ^∗∗^*P* \< 0.01; treatment vs. model group ^\#^*P* \< 0.05 or ^\#\#^*P* \< 0.01).](fphar-10-00552-g004){#F4}

Morphometric Analysis
---------------------

The bladder weight (absolute and relative to body weight) was increased in the diabetic mice compared with that in the controls (*P* \< 0.01) and was decreased by the SQW M and SQW L treatments (*P* \< 0.05) ([Figures 5A,B](#F5){ref-type="fig"}). The results of the morphometric analysis were consistent with the bladder weight. The BWT was significantly increased in diabetic mice (*P* \< 0.01), but SQW treatment effectively inhibited this alteration ([Figure 5C](#F5){ref-type="fig"}).
No substantial differences in the smooth-muscle-to-collagen ratio were observed among the control, SQW-treated, and model mice ([Figure 5D](#F5){ref-type="fig"}).

![Bladder weight and digital images (100×) of HE and Masson's trichrome staining from the six groups of mice (*n* = 8). **(A)** Bladder weight; **(B)** bladder-weight-to-body-weight ratio; **(C)** BWT measured from HE images; **(D)** smooth-muscle-to-collagen ratio determined from the Masson's trichrome images. Data represent the means ± SD (model vs. control group, ^∗∗^*P* \< 0.01; treatment vs. model group ^\#^*P* \< 0.05 and ^\#\#^*P* \< 0.01).](fphar-10-00552-g005){#F5}

Contractility Studies *in vitro*
--------------------------------

We found that the DSM strips of diabetic mice exhibited significantly higher amplitudes of spontaneous activity than those of the controls (*P* \< 0.01). SQW H and SQW M treatments markedly decreased this alteration (*P* \< 0.01) ([Figures 6A](#F6){ref-type="fig"}--[C](#F6){ref-type="fig"}). α,β-methylene ATP (100 μM), a P2X receptor agonist, caused stronger contractions in the DSM strips of diabetic mice than in those of the controls (*P* \< 0.01). α,β-methylene ATP (100 μM) was added twice to activate the bladder strips. Compared with the controls, the responses of the diabetic mice were evidently increased in the first reaction but markedly decreased in the second (*P* \< 0.05). SQW H treatment markedly reverted all these alterations (*P* \< 0.01 or *P* \< 0.05), and SQW M treatment decreased the ATP-induced contractions ([Figures 6D,E](#F6){ref-type="fig"}). In addition, the contractions caused by KCl (120 mM) and EFS (1--64 Hz) were higher in the diabetic mice than in the controls (*P* \< 0.01 or *P* \< 0.05). The cumulative concentration--response curve of carbachol (10^-8^--10^-5^ M) was also higher in the diabetic mice than in the controls (*P* \< 0.01 or *P* \< 0.05).
The contractions of the DSM strips were markedly decreased by the SQW treatment (*P* \< 0.01 or *P* \< 0.05) ([Figures 6F](#F6){ref-type="fig"}--[I](#F6){ref-type="fig"}). No significant differences in pEC50 were found among the control, SQW-treated, and model mice. The Emax of the diabetic mice was significantly higher than that of the controls (*P* \< 0.01), and the positive and SQW M treatments markedly decreased it (*P* \< 0.01) ([Table 4](#T4){ref-type="table"}).

![DSM strips of diabetic mice exhibited high amplitudes of spontaneous activity and increased bladder contractions to stimuli, and SQW treatment inhibited the changes (*n* = 5). **(A)** Representative spontaneous contractions of the bladder detrusor strips from the mice in the six groups. Quantification of the **(B)** amplitude and **(C)** frequency of spontaneous contraction; **(D)** DSM strip contractions induced by α,β-methylene ATP (100 μM); **(E)** ATP ratio (calculated as \[before-cons -- after-cons\]/pro-cons), α,β-methylene ATP (100 μM)-induced contractions (α,β-methylene ATP added twice); **(F)** DSM strip contractions induced by KCl (120 mmol/L); **(G)** DSM strip contractions induced by EFS (1--64 Hz); and **(H)** carbachol (10^-8^--10^-5^ M); **(I)** the cumulative concentration--response curves of carbachol. Data represent the means ± SD (model vs. control group, ^∗^*P* \< 0.05 or ^∗∗^*P* \< 0.01; treatment vs. model group, ^\#^*P* \< 0.05 or ^\#\#^*P* \< 0.01).](fphar-10-00552-g006){#F6}

###### The pEC50 and Emax of carbachol (means ± SD, *n* = 5).

  Group      Dose         pEC50         Emax
  ---------- ------------ ------------- --------------------
  Control    --           5.68 ± 0.38   15.07 ± 4.51
  Model      --           5.93 ± 0.20   27.58 ± 7.76^∗∗^
  Positive   0.82 mg/kg   6.23 ± 0.21   18.14 ± 8.11^\#\#^
  SQW-H      2.208 g/kg   5.97 ± 0.65   22.95 ± 7.78
  SQW-M      1.104 g/kg   5.94 ± 0.32   16.86 ± 4.68^\#\#^
  SQW-L      0.552 g/kg   6.15 ± 0.24   24.59 ± 6.60

Model vs. control group, ^∗∗^*P* \< 0.01; treatment vs. model group, ^\#\#^*P* \< 0.01.
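The pEC50 and Emax values in Table 4 summarize the cumulative carbachol concentration--response data. The paper does not state the fitting procedure, so as an illustration only, a crude estimate can be obtained by taking the plateau response as Emax and interpolating the half-maximal point on a log-concentration scale (all data below are hypothetical):

```python
# Hedged sketch: rough Emax and pEC50 estimation from a cumulative
# concentration-response series. The estimation approach and the data are
# illustrative assumptions, not the authors' procedure.
import math

conc = [1e-8, 3e-8, 1e-7, 3e-7, 1e-6, 3e-6, 1e-5]   # carbachol, mol/L
resp = [0.5, 1.2, 3.0, 7.5, 12.0, 14.2, 15.0]        # hypothetical tension, N/g

emax = max(resp)          # plateau response
half = emax / 2.0

# Linear interpolation on log10(concentration) at the half-maximal response.
logs = [math.log10(c) for c in conc]
for i in range(len(resp) - 1):
    if resp[i] <= half <= resp[i + 1]:
        frac = (half - resp[i]) / (resp[i + 1] - resp[i])
        log_ec50 = logs[i] + frac * (logs[i + 1] - logs[i])
        break

pec50 = -log_ec50         # pEC50 = -log10(EC50)
print(f"Emax = {emax:.1f} N/g, pEC50 = {pec50:.2f}")
```

In practice a nonlinear fit of a sigmoidal (Hill) model would be used to obtain pEC50, Emax, and the slope simultaneously; the interpolation above only conveys what the two parameters mean.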
Real-Time RT-PCR Analysis
-------------------------

According to the results of the RT-PCR analysis, the mRNA expression levels of myosin Va and SLC17A9 were significantly increased in the diabetic mice compared with those in the control mice (*P* \< 0.01 or *P* \< 0.05). The 3-week SQW M treatment markedly decreased the mRNA expression levels of myosin Va and SLC17A9 (*P* \< 0.01 or *P* \< 0.05), whereas the SQW H and SQW L treatments reduced the myosin Va mRNA expression level only (*P* \< 0.01 or *P* \< 0.05) ([Figures 7A,B](#F7){ref-type="fig"}).

![Effects of SQW treatment on the mRNA expression levels of myosin Va and SLC17A9 in the bladder tissues (*n* = 8). Quantification of the mRNA expression levels of **(A)** myosin Va and **(B)** SLC17A9 normalized to β-actin by the 2^-ΔΔCt^ method. Data represent the means ± SD (model vs. control group ^∗^*P* \< 0.05 or ^∗∗^*P* \< 0.01; treatment vs. model group ^\#^*P* \< 0.05 and ^\#\#^*P* \< 0.01).](fphar-10-00552-g007){#F7}

Western Blot Analysis
---------------------

The protein expression levels of myosin Va, SLC17A9, and β-actin were evaluated through Western blotting. The results showed that the protein expression levels of myosin Va and SLC17A9 were significantly increased in the bladder tissues of diabetic mice compared with those in the controls (*P* \< 0.01). After the 3-week treatment period, SQW M treatment markedly decreased the protein expression levels of myosin Va and SLC17A9, whereas SQW H and SQW L treatments significantly reduced the protein expression of myosin Va only (*P* \< 0.01) ([Figures 8A,B](#F8){ref-type="fig"}).

![Representative immunoblots of the protein expression levels of myosin Va, SLC17A9, and β-actin and effects of SQW treatment on these expression levels in the bladder tissues (*n* = 8). Quantification of the protein expression of **(A)** myosin Va and **(B)** SLC17A9 normalized to β-actin. Data represent the means ± SD (model vs. control group, ^∗∗^*P* \< 0.01; treatment vs.
model group, ^\#\#^*P* \< 0.01).](fphar-10-00552-g008){#F8}

Discussion
==========

Diabetic bladder dysfunction is a major lower urinary tract complication of diabetes, but its molecular mechanism remains unknown. Several studies have reported that rat models with STZ-induced T2DM usually exhibit major clinical urodynamic alterations ([@B11]; [@B22]; [@B38]). We have previously reported that the TCM SQW had therapeutic effects on several lower urinary tract diseases, including operation-induced OAB and outlet obstruction ([@B18]). Two components (norisoboldine and allantoin) were adopted as quality standards for SQW, and the HPLC results showed that the active ingredient contents met the standard. SQW is an oral traditional Chinese formula for the treatment of lower urinary tract diseases, with a clinical oral dose of 5.4 g/day. In our study, the high dose was double the clinical dose, the middle dose was equal to it, and the low dose was half. The equivalent doses for mouse administration were calculated according to the conversion equation ([@B39]). In the present study, we explored the effects and potential mechanism of SQW on diabetic OAB by using an STZ-induced T2DM mouse model. The results showed that the model mice were characterized by a range of T2DM symptoms, including abnormal fat and carbohydrate metabolism and high Bg levels, indicating the successful establishment of the T2DM model. Our VSOP and urodynamic tests showed significant alterations in the micturition of diabetic mice, characterized by increased frequency of voids (smaller micturitions were especially apparent), NVCs, MBC, BC, and RV, and markedly reduced VE after 6 weeks of hyperglycemia. These results indicated that the diabetic mice had entered the stage of OAB, which is consistent with previous studies ([@B22]; [@B38]). SQW treatment effectively improved the bladder function of the T2DM mice but did not change the high Bg status.
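The urodynamic indices discussed above (MV, VE, and BC) follow directly from the definitions in the Methods; a minimal sketch with hypothetical readings:

```python
# Derived urodynamic parameters as defined in the Methods section.
# The MBC, RV, and MP values are hypothetical, for illustration only.
mbc = 0.40   # maximum bladder capacity, mL
rv = 0.10    # residual volume, mL
mp = 20.0    # maximum voiding pressure, cm H2O

mv = mbc - rv                   # micturition volume, mL
ve = (mbc - rv) / mbc * 100     # voiding efficiency, %
bc = mbc / mp * 100             # bladder compliance, (MBC/MP) x 100% as stated

print(mv, ve, bc)
```

With these sample readings MV is 0.3 mL, VE is 75%, and BC is 2.0, which mirrors the pattern reported for diabetic mice: a larger MBC and RV inflate BC while VE falls.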
In healthy bladders, the contractions caused by ATP are limited ([@B22]). However, in the early phase of DBD, the purinergic-induced contractions can reach up to 150% of those in healthy individuals ([@B1]). The transfer of ATP-filled vesicles to the varicose membranes and the effects of DBD on this transfer mechanism have been poorly explored. Recent studies have provided key information about the contributions of SLC17A9 and myosin Va to the storage and exocytosis of ATP in secretory vesicles ([@B31]; [@B6]). SLC17A9 is a vesicular nucleotide transporter that plays an essential role in the specific transport of ATP into purinergic vesicles ([@B30]). This transporter has been discovered in various invertebrates and vertebrates, indicating that the molecular mechanisms of purinergic transmission are common across animals ([@B31]). Accurate localization of the neurotransmitter-filled vesicles at varicose membranes is indispensable for exocytosis ([@B29]). The local transport of organelles, including purinergic vesicles, requires energy provided by molecular motors such as myosin Va. Myosin Va is a subtype of the unconventional myosin V and is primarily expressed in the central and peripheral nervous systems and in melanocytes ([@B35]). The structure of myosin Va enables the continuous forward transport of intracellular cargoes ([@B32]), and its movement changes with cargo binding ([@B15]). Recent studies reported that myosin Va plays a key role in purinergic vesicular transport and is closely associated with ATP-containing vesicles through direct binding to SLC17A9 ([@B6]). These findings provided the direction for exploring the effects of SQW and the alterations in purinergic vesicular transport in DBD mice. The results of the *in vitro* contractility study on the bladder were in accordance with the "temporal theory" of DBD.
In the early phase, the DSM of diabetic mice exhibits markedly high amplitudes of spontaneous activity and increased responsiveness to stimuli, such as α,β-methylene ATP, KCl, EFS, and carbachol ([@B11]). Our *in vitro* results showed that the DSM of diabetic mice exhibited markedly higher amplitudes of spontaneous activity compared with those of the controls, and the waves were disordered. The frequency of spontaneous activity remained stable, which is consistent with a previous report ([@B38]). Compared with those of the controls, the contractile responses of the DSM strips of diabetic mice to α,β-methylene ATP were substantially increased during the first exposure but markedly decreased during the second. These results suggested that the DSM strips of diabetic mice exhibited increased responsiveness to exogenous purinergic agonists and that the bladder remained highly responsive after the first stimulation. In agreement with the above findings, the contractions of the DSM strips in response to KCl, EFS, and carbachol were high in the diabetic mice. After the 3-week SQW treatment, the mice in the SQW treatment groups displayed varying degrees of reduction in their contractile responses to all stimuli, especially to α,β-methylene ATP, as compared with the models. This result was consistent with those of the VSOP and urodynamic tests, further confirming the therapeutic effects of SQW on the OAB of diabetic mice. The hyper-responsiveness of diabetic DSM to α,β-methylene ATP, KCl, EFS, and carbachol reflects changes at the neurotransmitter level and/or beyond the neurotransmitters, related to alterations in the upstream vesicular nucleotide transporters. Therefore, we evaluated the protein and mRNA expression levels of myosin Va and SLC17A9.
In the bladders of the model mice, we found high protein and mRNA expression levels of myosin Va and SLC17A9, indicating that the increased expression of these genes is a potential pathogenic mechanism of diabetic OAB. Moreover, we found that the protein and mRNA expression levels of myosin Va and SLC17A9 were significantly decreased after SQW treatment, suggesting that the downregulation of myosin Va and SLC17A9 contributes to the therapeutic effect of SQW in DBD. In this study, SQW was demonstrated to be an effective treatment for DBD, and its possible mechanisms were explored. However, the active ingredients of SQW remain unclear owing to the complexity of its composition; identifying the major effective ingredients of SQW, including whether a single herb has effects in DBD, is a goal of our future work. SQW is one of the most commonly used traditional Chinese formulas for various urinary system diseases and has been used in China for thousands of years ([@B18]), for conditions such as urinary incontinence, nocturnal enuresis, and OAB symptom syndrome ([@B19]; [@B21]). Because diabetic OAB presents similar symptoms, such as nocturia and urgency, SQW has also been used clinically to treat DBD, alone or combined with other drugs ([@B25]; [@B8]). Previous studies have already implicated some mechanisms, such as β receptors, P2X, and TRPV1 ([@B19]; [@B42]), but the picture remains complicated, and we intend to conduct further research to identify the responsible herbal components. Studies have found that radix linderae extracts have effects on OAB and the diabetic bladder ([@B34]; [@B43]). The main components of radix linderae are norisoboldine and ursolide ([@B7]; [@B27]). We intend to conduct further research on efficacy and mechanism using the components of radix linderae.
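The mRNA comparisons above were quantified by the 2^-ΔΔCt^ method named in the Methods; a minimal sketch with hypothetical Ct values (not the study's data):

```python
# Relative mRNA expression by the 2^-ΔΔCt (Livak) method.
# Ct values below are hypothetical, for illustration only.
def fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """2^-ΔΔCt: ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(sample) - ΔCt(control)."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_sample - d_ct_ctrl)

# e.g. a target gene vs. beta-actin, diabetic bladder vs. control:
print(fold_change(24.0, 18.0, 26.0, 18.0))  # → 4.0, i.e. 4-fold upregulation
```

A lower Ct means more template, so a ΔΔCt of -2 here corresponds to a 4-fold increase in the model relative to control, the direction reported for myosin Va and SLC17A9.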
Conclusion
==========

In summary, our study revealed that HFD- and STZ-induced T2DM model mice showed OAB symptoms after 6 weeks of hyperglycemia. In addition, the traditional Chinese formula SQW exhibited therapeutic effects on the OAB of the model mice. SQW directly targeted the bladder rather than improving the Bg levels. The mechanism was related to the inhibition of purinergic neurotransmission in the bladder of diabetic mice through downregulation of the expression levels of myosin Va and SLC17A9.

Ethics Statement
================

This study was carried out in accordance with the ethical principle guidelines of the National Research Council. The experimental protocol was approved by the Committee on Ethics of Guangzhou University of Chinese Medicine.

Author Contributions
====================

H-yC conceived and designed the study. PH directed the experiments. JW, X-fY, W-jL, RW, L-yT, W-kR, L-jF, F-jC, and D-wL constructed the animal model. JW, X-fY, and Y-fX analyzed the data and drafted the manuscript. D-wL made a great contribution to the revision of the article. All authors read and approved the final manuscript.

Conflict of Interest Statement
==============================

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

**Funding.** This study was supported by the National Natural Science Foundation of China (Grant No. 81673676), entitled "Effect and Mechanism of SQW on Neurotransmission Abnormality in the Treatment Recovery of Diabetic Cystopathy," and the Science and Technology Bureau (Grant No. 2019622101002). The authors thank the School of Fundamental Medical Science, Guangzhou University of Chinese Medicine for technical support and Hunan Hansen Pharmaceutical Co., Ltd. for providing quality control materials.
Supplementary Material
======================

The Supplementary Material for this article can be found online at: <https://www.frontiersin.org/articles/10.3389/fphar.2019.00552/full#supplementary-material>

###### Click here for additional data file.

ANOVA, analysis of variance; BC, bladder compliance; Bg, blood glucose; BWT, bladder wall thickness; DBD, diabetic bladder dysfunction; DSM, detrusor smooth muscle; EFS, electrical-field stimulation; FBG, fasting blood glucose; HFD, high-fat diet; MBC, maximum bladder capacity; MV, micturition volume; NVC, non-voiding contraction; OAB, overactive bladder; OGTT, oral glucose tolerance test; RV, residual volume; SD, standard deviation; SQW, Suo Quan Wan; SQW H, high-dose SQW; SQW L, low-dose SQW; SQW M, medium-dose SQW; STZ, streptozotocin; T2DM, type 2 diabetes mellitus; VSOP, voided stain on paper; VE, voiding efficiency.

[^1]: Edited by: Adolfo Andrade-Cetto, National Autonomous University of Mexico, Mexico

[^2]: Reviewed by: Geng Wenye, Fudan University, China; Fabiola Zakia Mónica, Campinas State University, Brazil

[^3]: ^†^These authors have contributed equally to this work

[^4]: This article was submitted to Ethnopharmacology, a section of the journal Frontiers in Pharmacology