Black Incomes Surpass Whites in Queens The Cambria Heights neighborhood in Queens, a county that belies the “stereotype of blacks living in dangerous, concentrated, poor, slum, urban neighborhoods,” one policy expert says. Credit Suzanne DeChillo/The New York Times Across the country, the income gap between blacks and whites remains wide, and nowhere more so than in Manhattan. But just a river away, a very different story is unfolding. In Queens, the median income among black households, nearing $52,000 a year, surpassed that of whites in 2005, an analysis of new census data shows. No other county in the country with a population over 65,000 can make that claim. The gains among blacks in Queens, the city’s quintessential middle-class borough, were driven largely by the growth of two-parent families and the successes of immigrants from the West Indies. Many live in tidy homes in verdant enclaves like Cambria Heights, Rosedale and Laurelton, just west of the Cross Island Parkway and the border with Nassau County. David Veron, a 45-year-old lawyer, is one of them. He estimates that the house in St. Albans that he bought with his wife, Nitchel, three years ago for about $320,000 has nearly doubled in value since they renovated it. Two-family homes priced at $600,000 and more seem to be sprouting on every vacant lot, he says. “Southeast Queens, especially, had a heavy influx of West Indian folks in the late 80’s and early 90’s,” said Mr. Veron, who, like his 31-year-old wife, was born on the island of Jamaica. “Those individuals came here to pursue an opportunity, and part of that opportunity was an education,” he said. “A large percentage are college graduates. We’re now maturing and reaching the peak of our earning capacity.” “It really is the best illustration that the stereotype of blacks living in dangerous, concentrated, poor, slum, urban neighborhoods is misleading and doesn’t predominate,” he said. Andrew A.
Beveridge, a Queens College demographer who analyzed results of the Census Bureau’s 2005 American Community Survey, released in August, for The New York Times, said of the trend: “It started in the early 1990’s, and now it’s consolidated. They’re married-couple families living the American dream in southeast Queens.” In 1994, an analysis for The Times found that in some categories, the median income of black households in Queens was slightly higher than that of whites — a milestone in itself. By 2000, whites had pulled slightly ahead. But blacks have since rebounded. Photo Kenneth C. Holder, elected to a judgeship last year, says he could move from Queens, “but why?” Credit Suzanne DeChillo/The New York Times The only other places where black household income is higher than among whites are much smaller than Queens, like Mount Vernon in Westchester; Pembroke Pines, Fla.; Brockton, Mass.; and Rialto, Calif. Most of the others also have relatively few blacks or are poor. But Queens is unique not only because it is home to about two million people, but also because both blacks and whites there make more than the national median income, about $46,000. Even as blacks have surged ahead of whites in Queens, over all they have fallen behind in Manhattan. With the middle class there shrinking, those remaining are largely either the wealthy, who are predominantly white, or the poor, who are mostly black and Hispanic, the new census data shows. Median income among blacks in Manhattan was $28,116, compared with $86,494 among whites, the widest gap of any large county in the country. In contrast, the middle-class black neighborhoods of Queens evoke the “zones of emergence” that nurtured economically rising European immigrants a century ago, experts say. “It’s how the Irish, the Italians, the Jews got out of the slums,” Professor Nathan said.
Despite the economic progress among blacks in Queens, income gaps still endure within the borough’s black community, where immigrants, mostly from the Caribbean, are generally doing better than American-born blacks. “Racism and the lack of opportunity created a big gap and kind of put us at a deeper disadvantage,” said Steven Dennison, an American-born black resident of Springfield Gardens. Mr. Dennison, a 49-year-old electrical contractor, has four children. One is getting her doctoral degree; another will graduate from college this school year. “It starts with the school system,” Mr. Dennison said. Mr. Veron, the lawyer from Jamaica, said: “It’s just that the people who left the Caribbean to come here are self-starters. It only stands to reason they would be more aggressive in pursuing their goals. And that creates a separation.” Photo Elvira and Lloyd Hicks at home in Cambria Heights, where they moved from Harlem in 1959. Credit Suzanne DeChillo/The New York Times Housing patterns do, too. While blacks make more than whites — even whites in the borough’s wealthiest neighborhoods, including Douglaston — blacks account for fewer than 1 in 20 residents in some of those communities. And among blacks themselves, there are disparities, depending on where they live. According to the latest analysis, black households in Queens reported a median income of $51,836, compared with $50,960 for non-Hispanic whites (and $52,998 for Asians and $43,927 among Hispanics). Among married couples in Queens, the gap was even greater: $78,070 among blacks, higher than any other racial or ethnic group, and $74,503 among whites. Hector Ricketts, 50, lives with his wife, Opal, a legal secretary, and their three children in Rosedale. A Jamaican immigrant, he has a master’s degree in health care administration, but after he was laid off more than a decade ago he realized that he wanted to be an entrepreneur. He established a commuter van service.
“When immigrants come here, they’re not accustomed to social programs,” he said, “and when they see opportunities they had no access to — tuition or academic or practical training — they are God-sent, and they use those programs to build themselves and move forward.” Immigrants helped propel the gains among blacks. The median income of foreign-born black households was $61,151, compared with $45,864 for American-born blacks. The disparity was even more pronounced among black married couples. The median for married black immigrants was $84,338, nearly as much as for native-born white couples. For married American-born blacks, it was $70,324. One reason for the shifting income pattern is that some wealthier whites have moved away. “As non-Hispanic whites have gotten richer, they have left Queens for the Long Island suburbs, leaving behind just middle-class whites,” said Professor Edward N. Wolff, an economist at New York University. “Since home ownership is easier for whites than blacks in the suburbs — mortgages are easier to get for whites — the middle-class whites left in Queens have been relatively poor. Middle-class black families have had a harder time buying homes in the Long Island suburbs, so that blacks that remain in Queens are relatively affluent.” The white median also appeared to have been depressed slightly by the disproportionate number of elderly whites on fixed incomes. But even among the elderly, blacks fared better. Black households headed by a person older than 65 reported a median income of $35,977, compared with $28,232 for white households. Lloyd Hicks, 77, who moved to Cambria Heights from Harlem in 1959, used to run a freight-forwarding business near Kennedy Airport. His wife, Elvira, 71, was a teacher. Both were born in New York City, but have roots in Trinidad. He has a bachelor’s degree in business. She has a master’s in education.
“Education was always something the families from the islands thought the children should have,” Mr. Hicks said. In addition to the larger share of whites who are elderly, said Andrew Hacker, a Queens College political scientist, “black Queens families usually need two earners to get to parity with working whites.” Kenneth C. Holder, 46, a former prosecutor who was elected to a Civil Court judgeship last year, was born in London to Jamaican and Guyanese parents and grew up in Laurelton. His wife, Sharon, who is Guyanese, is a secretary at a Manhattan law firm. They own a home in Rosedale, where they live with their three sons. “Queens has a lot of good places to live; I could move, but why?” Mr. Holder said. “There are quite a number of two-parent households and a lot of ancillary services available for youth, put up by organized block associations and churches, like any middle-class area.” In smaller categories, the numbers become less precise. Still, for households headed by a man, median income was $61,151 for blacks and $54,537 for whites. Among households headed by a woman, the black and white medians were the same: $50,960. Of the more than 800,000 households in Queens, according to the Census Bureau’s 2005 American Community Survey, about 39 percent are white, 23 percent are Hispanic, 18 percent are Asian, and 17 percent are black — suggesting multiple hues rather than monotone black and white. “It is wrong to say that America is ‘fast becoming two nations’ the way the Kerner Commission did,” said Professor Nathan, who was the research director for the National Advisory Commission on Civil Disorders in 1968 and disagreed with its conclusion. “It might be, though, that it was more true then than it is now.” A version of this article appears in print on page A29 of the New York edition with the headline: In Queens, Blacks Are the Have-Nots No More.
Saturday, 3 January 2009 The Start Of December Current mood: scared This is my first blog of 2009, and yet it covers the first few days of the last month of last year...if that makes sense. There's so much happening in the present day that I want to be able to get off my chest, and since I don't even have an offline diary this year, I'm not even going to have a record of what's been going on when I get to that stage. So I'd better catch up before I forget, if you see what I mean. Oh, I give up trying to explain! Let's just continue with the boring story. December 1st 2008 Another rather boring day. Feeling even sicker than the day before, with a sore throat, swollen glands, headache, earache plus general bad cold symptoms, and with our trip to Luxembourg coming closer by the day, Mum announced that I wasn't to leave the house. Especially considering I didn't even have a proper coat, and the weather had turned pretty nasty. So I spent the day trying to clear the lounge up a bit in preparation for the Christmas decorations, but I kept going hot and cold and having to sit down. I catalogued some ponies, and finished recording my MLP music cassette tapes. The new G3 MLP theme tune is growing on me a bit now. For those who have not heard it, they basically modernised the traditional MLP theme tune and then added a line about each of the Core Seven ponies. Lyrics are as follows: My Little Pony, My Little Pony, Every day is a dream come true. My Little Pony, My Little Pony, How I love to play with you. No way of knowing where we'll be going, Our adventures never end! My Little Pony, My Little Pony, I'm so glad you're my friend! We'll plan a party with Pinkie Pie, A bunch of balloons lift her up to the sky. Scootaloo will show us games to play, And Toola Roola will be painting away. Rainbow Dash always dresses in style, Sweetie Belle's magic brings a great big smile. I hope we'll hear a story from Cheerilee, And a beautiful Star Song melody...
My Little Pony, My Little Pony, I'm so glad you're my friend! I cannot figure out who the singer is for the life of me, and it's really bothering me now. Can anybody who has heard the song help me out here? I'm presuming it's the voice of one of the Core Seven ponies, as she kind of sounds familiar. Maybe Tabitha St. Germain? Although at the same time it just doesn't sound quite right... A kind US collector on the Arena helped me to get a copy of the Core Seven DVD, "Meet The Ponies", after I posted a wanted ad on there, because I was so desperate to see the credits. Then I discovered there were no credits! I made a recording from the DVD, so if anybody would be kind enough to listen and give me their thoughts on who that singer is, I would be most grateful! The recording can be found here. I know it isn't of the best quality, but half my recording equipment has gone wrong in the last month (more of that later) so it's the best I can currently do. December 2nd 2008 Again, I couldn't leave the house, so I just did more clearing up, washed and catalogued some ponies... Didn't I tell you this was going to be a boring blog?! I decided that since I'm VERY short of money right now, and the house is such a tip, I'd try to sell some stuff on the Arena. If I was stuck indoors anyway, I had plenty of time to deal with e-mails, pack stuff up for weighing etc. And sure enough, almost immediately I got a message from RainbowWindy. I've bought from RainbowWindy a couple of times in the past, and she's always been a great seller to me (keeps in good communication, ships so fast that the stuff seems to arrive before you even knew it was posted, despite coming from Canada, and all that stuff), so I was determined to do my best for her in return. So I get out the stuff she was interested in - Megan's UK outfit, Baby Sunset's bottle, and a Scrub-a-Dub-Tub. Immediately, I meet opposition from my parents. "You're not selling that bath, are you?!" Mum asked me. "Yes, I was going to.
I've been offered $5 for it. Why?" I reply. "I thought your other one only had three feet." Gosh, she's got a good memory considering this isn't even her collection! Sure enough, my other Scrub-a-Dub-Tub is missing one of the Seahorse feet. However, I'm always seeing those coming up for sale in accessory lots, so I can easily get another, and my other bath has the shower attachment...not to mention the fact that I remember getting that bath with a few of my old ponies, so I wanted to keep that one for sentimental reasons. I told Mum all of this and she seemed to calm down. She shrugged, and told me she thought I was being stupid, but that was all. Then David came in the room. Mum's first words to him: "Do you know what she's selling now?" "What?" David said, one eye on the TV. Pretty soon, they were both yelling at me, and telling me I shouldn't be selling the bath. "I spent good money on that. You shouldn't sell things that people have given to you!" I pointed out to him that I had bought the bath for myself from a car boot sale in Canvey Island. It cost the grand total of 20p, and was bought back in 2002, when all of my pocket money came from Grandma. Maybe I shouldn't have been selling something that I bought with her money, but I'm sure she'd have understood, and been glad I was making a bit of money on what was, after all, a plastic bath tub I already have in my collection. David told me that he wasn't going to help me to sell it by getting it weighed at the post office when he went to post the Christmas cards the next morning. So I was left apologising and making excuses to RainbowWindy for several days until he calmed down enough to weigh the thing for me. How I wish we were in the other house by now, and I could walk to a post office myself. December 3rd 2008 I still wasn't allowed to leave the house, for fear I'd make myself too sick to go to Luxembourg, despite the fact I felt much better by this point.
The phone went wrong, so we didn't even have any internet access all day either...I almost went mad, being totally isolated, and unable to talk to anybody, even online. I tried to keep myself busy, clearing up (without much success) and photographing and cataloguing my entire adult G2 MLP collection. I was starting to re-assemble a list of my ponies on Excel by this point, but every time I so much as look at it, I feel like I'm rubbing salt in the wounds, realising just how much information got lost on the other file. December 4th 2008 I was finally allowed out of the house again! Boringly, just to Northolt post office to look at the envelope they were holding for us because the sender hadn't paid the postage. We couldn't figure out who it was from, and it didn't look very interesting, so we didn't bother to pay for it and just left it there. Then to a building society in Ealing Broadway to talk about what to do with some of Mum's money and all of my money. It was the first time I'd really been allowed to know how much money I had or where it all was, and to have a conscious say in where it went. I didn't want it locked up for a long time, in case I have to use it to get an education once we move...I really can't see my parents being able to afford college fees and whatnot. But to get it into a high interest fixed rate account where it wouldn't be locked up for years we needed my National Insurance number. I've spoken to a few people overseas about this situation, and found most of you have never heard of National Insurance Cards, so here's the Wikipedia link for anyone who's curious. http://en.wikipedia.org/wiki/National_Insurance Anyways, the only way to get my number was to have the card, which we didn't have with us. So the woman said we could ring her up once we got home and give her the number over the phone. Basically, we were worried that if we waited even until the next day all of the interest rates would drop...
I was stunned to find out how few savings I actually have anyway. Most of the money that I do have came from two sources ~ when I was about nine years old, I decided to save most of the pocket money Grandma gave me to "Save the rides". You know those coin operated children's rides I go around photographing? I really thought I could save up enough money to open a museum for them someday! Well, I've always been ambitious! Anyways, all that money is still there. Plus a little tiny bit Mum decided to put away for me over the past few years. She calls it my "wedding money", so presumably it's my "sit around forever money", as I have NO INTENTION of EVER getting married if I can possibly help it, as you all know. After all, what's the point of marrying some horrid man in this country if my only goal in life is to emigrate anyway? If I had a load of kids and tied myself down in this dump, how could I ever even hope to get to Canada? However, I'm not really sure of my choices... When we got back to Grottsville, David went looking for my National Insurance card. He takes all the paperwork to do with Mum and me and puts it in a box upstairs - usually. If he's not too busy staring at the TV or reading Mills & Boon books, that is. Anyways, my National Insurance card ISN'T THERE. You can only EVER get ONE replacement card in your LIFETIME, so I really don't want to have to apply for it yet. But without that card (and number) I won't be able to get a job or anything. So God knows where I'm going to end up now. I'm making myself feel sick just thinking about this whole situation again. Anyways, sorry to end on such a downbeat note, but it's late and I want to get up relatively early in the morning. About Me Hi, my name's Desirée Skylark. I’m a 24-year-old daydreamer, stranded in the UK but hoping someday to move to Vancouver. I would love to be a professional actress, specialising in voice overs in animation and commercials, and in musical theatre...
did I mention that I like daydreaming? I am the proud owner of a large herd of My Little Ponies – some people might laugh, but they help me to de-stress and take me back to my happier childhood days! I’m also a coin operated ride enthusiast – I have over 1600 photos of them from when I was small! Almost ALL of these rides have now been scrapped so I’m trying to put together a website about them and the people who made them. It’s quite fascinating to discover their history! The last few years of my life have been an utter nightmare (read my older blog entries for the full details), and I have been left with no education or chance to achieve my full potential as far as finding a job goes. I do hope now that we have finally moved house that I will be able to start getting my life back on track, but it’s going to be hard. Join me on my blogging journey – can I turn this into a real life Cinderella story with a happy ending?
About This School Beachwood Elementary School is located in Lakewood, WA and is one of 20 elementary schools in Clover Park School District. It is a public school that serves 458 students in grades PK-5. See Beachwood Elementary School's test results to learn more about school performance. In 2011, Beachwood Elementary School had 22 students for every full-time equivalent teacher. The Washington average is 19 students per full-time equivalent teacher. Grade 3 Reading Performance Beachwood Elementary School Reviews Excellent school. This is our family's 5th elementary school. Of the 5 schools we have been in, this is the best. The teachers work hard to make sure my son and daughter get what they need, and push my son to learn more. The teachers, counselor, principal, and I all met before school to help my daughter, who was having a hard time in reading. She goes to the before-school learning help and is in LAP for extra help. I went to Mr. Zarling a few times with problems and he took care of them right away. The school just keeps getting better and better, and that is because of the teachers and Mr. Zarling. My daughter is doing things in 3rd grade that my son did in 4th grade in Tennessee. Unfortunately, we only have 1 more year at Beachwood. We will miss Beachwood and Mr. Zarling since he is leaving after this year. Beachwood Elementary School Photos Test Scores About the MSP What is it? The Measurements of Student Progress (MSP) are annual tests used to measure a student's mastery of the state's grade-level academic standards contained in the Essential Academic Learning Requirements (EALRs). Which Grades and Subjects? Students in grades 3 through 8 are assessed in reading and math, in grades 4 and 7 in writing, and in grades 5 and 8 in science. How is it Scored? A student's performance on the reading, math and science MSP is reported using scale scores.
Scale scores are three-digit numbers that are used to place the student into one of four levels: Advanced (Level 4), Proficient (Level 3), Basic (Level 2) and Below Basic (Level 1). The goal is for all students to meet or exceed standards (at or above Level 3). About the WASL What is it? The Washington Assessment of Student Learning (WASL) is a series of annual tests used to measure a student's mastery of the state's grade-level academic standards. Which Grades and Subjects? Students in grades 3 through 8 and 10 are assessed in reading and math, in grades 4, 7, and 10 in writing, and in grades 5, 8 and 10 in science. How is it Scored? Students score at one of four levels: level 4 (exceeds standard), level 3 (meets standard), level 2 (below standard) and level 1 (well below standard). The goal is for all students to meet or exceed standards (at or above level 3). TestRating: 5 out of 10 The Education.com TestRating is a number (1-10) calculated by Education.com that provides an overview of a school's test performance for a given year, by comparing the school's state standardized test results to those of other schools in the same state. For Washington, the TestRating is calculated using a school's 2011 High School Proficiency Exam Results and Measurements of Student Progress Results for all subjects tested.
Highlights for Dark Brown Hair Gagan Dhillon Oct 28, 2018 Often, women are confused about which highlights would look the best on dark brown hair. Well, here's a small secret -- choosing highlights comes down to the kind of look you want to create, your particular skin tone, eye color, and the haircut that you sport. Nothing looks more appealing on a woman than a nice luscious mane. Today, undoubtedly, men too are not far behind in experimenting and trying new hairstyles and highlights. Before deciding to choose highlights for your dark brown hair, you have some questions to answer. Why are you getting highlights done? Is it to cover your grays or just to revamp your look? Depending on your requirement, take a call. And remember to consult a professional hairdresser. Getting the Best Color Highlights ~ Highlighting the hair requires many skills, and many other elements play an important role while choosing a hair color -- from skin tone and hair type to natural pigments. ~ Knowing if you will look better in cool or warm tones is also crucial. A simple way to figure this out is looking at the color of your inner wrist veins in daylight, while standing by an open window. ~ Using the right shampoo for colored hair is also of extreme importance, for the hair color to last longer, limit fading, and avoid damage. ~ Ask a professional hairdresser to choose the hair dye color and shampoo that will look best on your hair, keeping your complexion and hair type in mind. Light Highlights If you want light highlights, then consider shades like chestnut brown. It works well on people with warm skin tones, whereas light ash brown works well on people with cool skin tones. These will help you add depth to your existing hair color. To get the best out of these colors, add both highlights and lowlights. If you are opting for light colors like caramel, then don't be afraid to try chunky highlights. This is the right shade to cover grays easily.
Subtle Highlights If you want a very subtle highlight which just gives a shy hint of color, then it is best to opt for red tones. Shades of dark red and auburn highlights look really stunning on dark brown hair. Auburn highlights are ideal if you want to create depth. Copper red highlights are another great choice for highlighting dark hair. Adding highlights around your face will also make your complexion seem fairer and clearer. Loud Highlights If you are sure of what you want, then don't shy away from blonde highlights in brown hair. Blonde highlights, when done correctly, can instantaneously give you a glamorous look. Keeping your skin tone in mind, you can play with anything from strawberry blonde to ash blonde highlights. You can have them nicely hidden in -- also known as the Peekaboo highlight -- so that they just peek out with slight movements of the head, or you can add some chunky highlights. When it comes to hair coloring ideas with blonde shades, the sky is the limit! You can also try some honey blonde highlights on dark brown hair. It is best to get your blonde highlights from a professional, so take a picture of the look you want along with you, so that they know exactly what you want. So you see, the secret to getting great, natural-looking highlights is understanding which shades will go well with your hair color, and which shades will flatter or enhance your look. Don't be afraid to experiment. With the right shade, your rich dark brown hair will look chic and glamorous.
Q: The characterization of the center of a finite non-nilpotent group which is contained in a maximal Sylow subgroup Let $G$ be a finite non-nilpotent group and $G \cong (C_{p^n} \times C_p) \ltimes C_q$, where $P \cong C_{p^n} \times C_p$ is an abelian, non-cyclic and non-normal Sylow $p$-subgroup of $G$, which is a maximal subgroup of $G$. Also we have $Z(G) \leq P$ and $G' \cong C_q$. I want to know why $Z(G) \cong C_p \times C_{p^{n-t}}$, where $t \mid q-1$. Note that $C_q$ is the cyclic group of order $q$, and that $p, q$ are prime numbers. A: I assume $p$ and $q$ are primes. Also, your statement does not appear entirely correct to me; see the last paragraph below. The automorphism group of the cyclic group of prime order $C_{q}$ is cyclic, of order $q-1$. Now $$ G / C_{G}(C_{q}) $$ is isomorphic to a non-trivial subgroup of the automorphism group of $C_{q}$. Moreover $C_{G}(C_{q}) \ge C_{q}$, and clearly $$ C_{G}(C_{q}) = Z(G) \times C_{q}. $$ It follows that $P/Z(G)$ is cyclic, of order $p^{s}$ dividing $q-1$. Now what are the subgroups $H$ of $P$ with this property? If $a$ is a generator of $C_{p^{n}}$, and $b$ is a generator of $C_{p}$, clearly $a^{p^{s}} \in H$. Now distinguish two cases, according to whether $b \in H$ or $b \notin H$.
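To spell out the two cases, here is a sketch in my own notation, under the assumptions $H = Z(G)$, $P = \langle a \rangle \times \langle b \rangle$ with $|a| = p^{n}$, $|b| = p$, and $|P/H| = p^{s}$ with $p^{s} \mid q-1$:

```latex
% Case b in H: P/H is generated by the image of a, so a^{p^s} lies in H,
% and comparing orders (|H| = p^{n+1-s}) gives
\textbf{Case } b \in H:\qquad
H \;=\; \langle a^{p^{s}} \rangle \times \langle b \rangle
  \;\cong\; C_{p^{n-s}} \times C_{p},
% which is the claimed shape with t = s; note this forces p^t | q-1,
% not merely t | q-1.

% Case b not in H: here H meets <b> trivially, so
\textbf{Case } b \notin H:\qquad
H \;\cong\; H\langle b\rangle / \langle b\rangle
  \;\le\; P/\langle b\rangle \;\cong\; C_{p^{n}},
% so H is cyclic, which the claimed (non-cyclic) form of Z(G) excludes.
```

So if $Z(G)$ really has the stated shape, only the first case can occur, with $t = s$, and the correct divisibility condition reads $p^{t} \mid q-1$.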
A former contestant on The Apprentice who alleges Donald Trump groped and forcibly kissed her at a hotel in 2007 filed a defamation lawsuit Tuesday, alleging the president-elect publicly disparaged her when he called her a liar. Trump called Summer Zervos's allegations that he sexually assaulted her in Beverly Hills "lies," and vowed to sue her and a dozen other women who came forward with allegations of misconduct during his campaign. Since he has been unwilling to retract his statements calling her a liar, or to acknowledge that she was telling the truth, Zervos said she had no choice but to file a lawsuit "in order to vindicate [her] reputation." Her attorney, Gloria Allred, called the decision to sue brave. "She knows that she will be attacked by Donald Trump," Allred said at a news conference in Los Angeles. "We've waited two months. Time is up."
Easy Organizing: Master Bathroom Learn how to clear the clutter with a few simple tricks and ideas to organize your bathroom. Over Easy If your shower doesn't have built-in shelves, a hanging organizer means shampoo, soap, etc., aren't precariously perched on the edge of the tub or windowsill. Bonus: no more mildew buildup at the bottles' bases to scrub away.
DEFINITION: "Cloud Computing" is a general term for anything that involves delivering hosted services over the Internet. The name "Cloud Computing" was inspired by the cloud symbol that's often used to represent the Internet in flow charts and diagrams. In the software-as-a-service (SaaS) model, customers pay subscription or usage fees to "lease" their applications from the vendor. SaaS applications do not require the installation of any desktop software, nor do they require any hardware investment by the customer. Cloud computing for today's companies is equivalent to the electricity grid for companies a century ago. Once companies no longer had to produce their own power, they were able to focus more of their resources on running the business. The same goes for cloud computing: now everything they need can be bought and used via "the Cloud." The Wall Street Journal, in "Spending Soars on Internet's Plumbing": "Behind the recovery in business spending is a surge in purchases of the computers that form the backbone of the Internet, as companies scramble to meet growing demand for video and other Web-based services."
A Well Home, Inc. Before you make that big real estate transaction, you should have a home inspection for peace of mind about your new home. Please call for our home inspection rates or to schedule an appointment with our home inspector. Qualifications: NC State Lic. #2274, North Carolina Home Inspectors Association, City of Asheville Third Party Inspector About: A Well Home, Inc. provides prompt and professional home inspections for buyers and sellers and commercial real estate. Best customer service in Western North Carolina, and we will be there after the inspection! Thank you, Grant, for a complete and comprehensive report. I am grateful for our discussion regarding issues and how to repair them. I have a whole new appreciation for inspection reports. Next time, I will read them more carefully with closer attention to detail and photos. It is clear to me now more than ever that the inspection report protects the buyer from potential problems that can be very costly down the road. The inspection was money well spent. - B.R., July 2013 Thank you for your prompt assistance on the inspection related to our re-finance closing. Your professionalism was much appreciated! - D.M., December 2011 He was great. Very professional and very detailed. - J.D., August 2010 You were a great help, thank you. Sandy - S.D., June 2010 Grant, thanks so very much for all your help. We found a great home that has few issues, mainly things to keep up with and manage....like all homes. We appreciate your attention to detail. - M.C., May 2010 Grant provided a most pleasant experience, as he was both extremely professional and very personable. He did an outstanding job and was very thorough in his work. I would absolutely recommend him to anyone in need of home inspection services. I will, when the time comes, consider him for repair/remodel work as well. He is a class act, and I am fortunate to have found him! - J.C., March 2010 We appreciate the thorough inspection performed by Grant Morrill.
Very happy to recommend his services. - D.H., November 2009
Intravenous dexamethasone followed by oral prednisolone versus oral prednisolone in the treatment of childhood Henoch-Schönlein purpura. The aim of this study was to evaluate the effectiveness of intravenous corticosteroid therapy when Henoch-Schönlein purpura (HSP) patients are unable to tolerate oral medications due to abdominal pain. We retrospectively analyzed 111 children with a diagnosis of HSP (mean age 6.9 ± 2.3 years, male:female = 54:57) from the years 2000 to 2007. They were divided into two groups: 49 patients received only oral prednisolone (PL group) and 62 patients received oral prednisolone after intravenous dexamethasone (Dexa + PL group). Palpable purpura was seen in all 111 patients (100%), abdominal pain in 55 (50%), and arthralgia in 65 (59%). The Dexa + PL group had a significantly longer duration of fasting than the PL group (0.7 ± 1.2 vs. 0.02 ± 0.1 days, P < 0.01) due to more severe and more frequent abdominal pain (68 vs. 27%, P < 0.01). Intravenous dexamethasone resulted in rapid resolution of abdominal pain or arthralgia in all patients without major complications. However, the development of nephritis (21% in the PL group versus 32% in the Dexa + PL group, P = 0.098), the relapse rate (4 vs. 11%, P = 0.167), and persistent nephritis at last follow-up (12 vs. 16%, P = 0.563) did not differ between the two groups, despite more severe symptoms in the Dexa + PL group. Intravenous dexamethasone followed by oral prednisolone may be a useful and effective therapeutic strategy in children with HSP who cannot tolerate oral medications due to severe abdominal pain.
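As a rough illustration of how a between-group comparison like the nephritis rates above is tested, here is a minimal sketch of a Pearson chi-square test on a 2x2 table, using only Python's standard library. Note the assumptions: the raw counts (roughly 10/49 vs. 20/62) are back-calculated here from the reported percentages and group sizes, and the abstract does not state which test produced P = 0.098 (it may have used a continuity correction or an exact test), so this sketch will not reproduce that value exactly.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (chi2, p) with 1 degree of freedom."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        chi2 += (obs - expected) ** 2 / expected
    # For df = 1, the upper-tail p-value reduces to erfc(sqrt(chi2 / 2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Nephritis: ~21% of 49 (PL) vs. ~32% of 62 (Dexa + PL),
# i.e. approximately 10/49 vs. 20/62 as counts.
chi2, p = chi2_2x2(10, 39, 20, 42)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")
```

With these inferred counts the uncorrected test gives roughly chi² ≈ 1.95 and p ≈ 0.16: above 0.05, which is consistent with the abstract's conclusion that the difference in nephritis rates was not significant.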
Q: Are both /etc/my.cnf and /etc/mysql/my.cnf needed? I recently set up a new web server with Ubuntu 12.04. I noticed that there are two my.cnf files, and MySQL successfully reads both: /etc/my.cnf and /etc/mysql/my.cnf. Do I need to keep both of these files? If possible, I would like to simply merge the lines from one my.cnf into the other, thereby having only one my.cnf to manage. A: You should only need one of those, but for legacy reasons MySQL will LOOK in those places (and a few others, some of them really strange, because that's how it is). Feel free to consolidate.
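As a sanity check before consolidating (assuming `mysqld` is on your PATH), MySQL itself will report which option files it searches and in what order: `mysqld --verbose --help | grep -A1 "Default options"` typically lists `/etc/my.cnf`, then `/etc/mysql/my.cnf`, then `~/.my.cnf` on Ubuntu. Since files read later override earlier ones for duplicated options, merging everything into `/etc/mysql/my.cnf` (the file Ubuntu's package manages) is the safer direction. A consolidated file might look like this sketch; the option values below are illustrative placeholders, not recommendations:

```ini
# /etc/mysql/my.cnf : a single consolidated option file (illustrative values)
[client]
port   = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld]
user         = mysql
port         = 3306
socket       = /var/run/mysqld/mysqld.sock
bind-address = 127.0.0.1

# Ubuntu's packaged my.cnf also pulls in per-package snippets;
# keep this line if anything still lives under conf.d/
!includedir /etc/mysql/conf.d/
```

After merging, restart MySQL and confirm it still picks up your settings before deleting the now-empty /etc/my.cnf.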
North Miami Grow House Nayip Laboy Negron, 36, was taken into custody Wednesday and charged with marijuana trafficking, possession of marijuana with intent to distribute and other drug charges, according to North Miami police Maj. Neal Cuevas. (North Miami Police, Courtesy)
I'm a 4.0 level player with a one-handed backhand, and I play an aggressive type of all-court game. I just lost a match against a guy who hits moon-balls whenever possible. He tosses most of his baseline groundstrokes as high as possible (almost high enough for overheads, but not quite), and most of them land close to the baseline. He did this on 90% of his groundstrokes, and he enjoyed seeing me suffer from it. He used this type of shot as a weapon, and he has a lot of practice. I don't think he can be categorized as a "pusher". I tried to deal with the moon balls by hitting them on the rise, but it is a very difficult shot for me, resulting in: a. reduced accuracy in my shot placements; b. more errors (mis-hits or wide/long shots); c. when I attack the net, his moon balls become perfect deep lobs. As a result, his moon balls got to me. I hit them back once, twice, three times, but then I made an error sooner or later. Could anybody give me some advice on how I should play this guy the next time we meet? It will be very soon, in a couple of days.

That totally blows. I've been there myself. My suggestion would be to slice as much as you can, forehand and backhand. The lower you keep the ball on his end, the harder it will be for him to get under it for a lob/moonball. Another thing you could do is moonball him right back. Play his game; it will suck, but you might frustrate him just as much as he frustrates you. If you get a lot of these types of players at your level, then you should play up, against 4.5's. You'll likely get more competition and fewer moonballers.

How often would you get to play him? You could just try hitting them on the rise, taking them in the air, making errors and losing until eventually you get good enough at those shots that it becomes increasingly difficult for him to beat you.

You have to move the ball around the court and use a variety of spin, pace and angles. You've got to have good footwork and fitness, because those guys don't usually beat themselves.
There is a WTA player who plays like this. I was amazed. Frankly, I enjoyed watching her play and was impressed with how well she stuck to her game plan. The only pro player I've ever seen play that style. Julie Cohen.

Hit shorter balls or even drop shots if he has a bad net game. It's hard to moonball when you're at the net. The short balls will force him to hit with less height. What I mean by short, I mean SHORT. If he's good at running, you'll need to aim around the court more.

Thanks for the tip, but I am not sure whether I should combat his well-practised strength with my absolute weakness (because I have never practised moon-balling). But trying low slices makes sense.

No, no. You never want to play the same game as the moonballer. That is exactly what he wants. He can stay out there and play 10 hours if he has to, and that is what he wants. You have to be good enough to take the ball out of the air and hit swing volleys. A one-handed swing volley is a lot easier than you think if your technique is correct. 4.0 guys don't even try it because idiot pros tell them it is a low-percentage shot. It is an easy shot, even on the backhand side. Also hit volleys off of the moonball, but location is critical. Do not hit the volley deep. Hit angles, low and short.

That totally blows. I've been there myself. My suggestion would be to slice as much as you can... You'll likely get more competition and fewer moonballers.

Yep, perfect advice. One cannot beat a solid, steady 4.0 player who hits good shots near the baseline that bounce high - so let's move to a higher level.
What 'more competition' does the OP need at a higher level if he can't beat a 4.0? :roll:

If he moonballs often, then you should be able to anticipate and volley the ball out of the air, or even get under the ball to hit an overhead if the ball is high enough. That will rob him of his time.

What I'm thinking is: what if this happens at the pro level? How will moon-balls be punished? Moon-balls must not be an effective way to play at the pro level, because not many pros do this. I guess there must be something I can do to punish such slow and high balls? But how?

The male college varsity players I play with just eat up moonballs by taking them early. This is because they have great timing, can generate their own power, and can send the ball back with speed and spin. Women have more problems and will use moonballs more often, but only when they are very high and deep. Anything inside the service line gets murdered with a swing volley.

Try working on your overhead. You will have lots of time to run around your backhand if necessary. I know you said it is not as high as an overhead, but you can stoop down just a little bit to attack. Remember, you have the whole court to play with, versus trying to get it into the service box. You probably need to do some drills first, before you try it in a match, because once you miss a couple, you become tentative and might abandon the strategy altogether.

I recently watched a pair of 6.0 females in a final, and one started popping up moonballs when she started losing badly. The other responded by hitting the first one back rather hard, thus encouraging a second moonball, and immediately ran into the middle of no man's land to either overhead it or volley it into a corner. I think the fact that she was getting aggressive on the moonballs scared the other girl, and she stopped moonballing.

Less work is to just carry a pistol in your bag and shoot him. With all the problems in the world today, we certainly don't need moonballers.
I suppose this goes for junk ballers and pushers as well.

Long Face, do you play with an Eastern FH grip? What about your BH? I think which grips you play with will make a difference in the kind of advice given. I regularly face moonballers, and one player constantly hits particularly high, deep, and spinny moonballs. Eastern FH & 1HBH are particularly vulnerable to moonballs (I used to hit with these strokes myself). Now I usually do well against them by: hitting better moonballs; hitting aggressive groundstrokes against them (but they must be consistent, placed well, and angled) until they hit too shallow a moonball - and here you must be able to take the opportunity to finish off the point from mid court (I had trouble hitting aggressively off moonballs until I switched to a SW grip); rallying until I find the opportunity to pressure them with an aggressive groundstroke that I think will cause a weaker reply (a lower moonball) and following up with volleys. If you can't do one or two of those things well enough to beat the moonballer, then they probably just play better and/or you lack the ability to deal with them. They can be pretty tough. A trick that might work for an Eastern grip player is to slice, chop, hack, or dink the moonballs a bit short to mid court, keeping the ball low or with little pace. Hopefully, your opponent is not that good at finishing off midcourt balls. Most of these guys can't hit moonballs from mid court (they go out), so they will come in and either hit a flatter, faster shot or slice. This would give you a ball that lands in your strike zone/height and give you a chance to counter-attack while they are stuck in mid court or coming to the net.

I recently watched a pair of 6.0 females in a final and one started popping up moonballs when she started losing badly.
The other responded by hitting the first one back rather hard, thus encouraging a second moonball, and immediately ran into the middle of no man's land to either overhead it or volley it into a corner. I think the fact that she was getting aggressive on the moonballs scared the other girl and she stopped moonballing.

Exactly. When the varsity girls I've seen smell a short moonball (usually with the opponent backing up), they move in for a short angled or crosscourt swing volley at the service line, especially if she has a 2-handed backhand.

When I see play like this, I bend my knees a little more than on a normal overhead and hit an overhead... or I forehand-slice it on the rise, making it skip low and causing them to dig it out and give me an easy overhead/volley if I get to the net in time.

I don't moonball (am not a patient or proficient lobber), but a friend does this shot to me often from his baseline on a ball that bounces to him over shoulder height. It has wicked side spin which wrenches my racquet when I try to move in for a volley, and skids really low if I let it bounce... it stays so low I have trouble getting under it for my topspin FH or BH. So I end up blocking it back on a short hop, and he has usually followed his shot in and has an open-court volley put-away. The shot you describe is a weapon if you can hit it well... it's big trouble for me for sure. I respectfully hate that shot and can see how it could punish a moonballer or pusher.

I'm a 4.0 level player with a one-handed backhand, and I play an aggressive type of all-court game. I just lost a match against a guy who hits moon-balls whenever possible.
He tosses most of his baseline groundstrokes as high as possible (almost high enough for overheads, but not quite), and most of them land close to the baseline. He did this on 90% of his groundstrokes, and he enjoyed seeing me suffer from it. I tried to deal with the moon balls by hitting them on the rise, but it is a very difficult shot for me, resulting in: a. reduced accuracy in my shot placements; b. more errors (mis-hits or wide/long shots); c. when I attack the net, his moon balls become perfect deep lobs. As a result, his moon balls got to me. I hit them back once, twice, three times, but then I made an error sooner or later. Could anybody give me some advice on how I should play this guy the next time we meet? It will be very soon, in a couple of days.

If you are not in good shape, you will lose to a moonballer. It doesn't matter how textbook your shots look or how awesome your topspin is. To beat a moonballer you have to be in good shape. Being able to hit 30 shots and stay out there 5 hours is part of the game. It is just as important as hitting 70 mph topspin shots and hitting great kick serves. Moonballers and pushers will teach you that.

A 4.0 player should have a good overhead. Move up in the court and hit overheads.

If the strokes are almost lobs, this is the key. No matter how deep a lob or semi-lob is, if you approach and are anticipating this shot, you should/could put these balls away fairly easily... either with an overhead or an aggressive volley. If this doesn't work (I don't see why it wouldn't), draw him to the net. Pusher-types typically (not always) are not comfortable at the net/no man's land. Since the guy is an annoying pusher, don't go for passing-shot winners. Hit it hard straight at him. :twisted:

I'm a 4.0 level player with a one-handed backhand, and I play an aggressive type of all-court game. I just lost a match against a guy who hits moon-balls whenever possible.
He tosses most of his baseline groundstrokes as high as possible (almost high enough for overheads, but not quite), and most of them land close to the baseline. He did this on 90% of his groundstrokes, and he enjoyed seeing me suffer from it. I tried to deal with the moon balls by hitting them on the rise, but it is a very difficult shot for me, resulting in: a. reduced accuracy in my shot placements; b. more errors (mis-hits or wide/long shots); c. when I attack the net, his moon balls become perfect deep lobs. As a result, his moon balls got to me. I hit them back once, twice, three times, but then I made an error sooner or later. Could anybody give me some advice on how I should play this guy the next time we meet? It will be very soon, in a couple of days. Thanks again.

Unless you have a reliable weapon that you can consistently punish moonballs with (uncommon among rec-level players), the best response to a moonball is a moonball. It's a safe shot that keeps you in the point until you have a high-percentage opportunity to attack with the weapons you have. Further, you will find that many moonballers, who are accustomed to hitting mb's off of aggressive shots, don't handle mb's that well themselves.

Can he keep his deep moonballs *in the court* if you bring him closer to the net? If he can't, then either bring him to the net by hitting short on purpose, or take his moonballs in the air with soft angled volleys; they'll force him to moonball from spots in the court he's not familiar with (like beyond the doubles alley but inside the baseline).

THANK YOU, guys, for so many constructive suggestions. I'm playing him again tonight, and I will report back to you how it turns out. Last time I lost 6-2, 6-3. Here I will summarize several tips that I think I will employ in tonight's match: 1. If a moonball is too difficult for me to strike back aggressively, I will play it back safe. The worst thing that can happen is another moonball. 2.
If a moonball is high enough for an overhead smash, I should step into the court (instead of backing up) and hit an overhead. Try not to smash down. Hit it like a serve so it will go over the net, and use placement. 3. If a moonball is not high enough for an overhead, and if it is deep, try to take it early on the rise. Play it safe, too. Footwork! 4. If a moonball is short, wait for it, put it into a corner with heat, and guard the net. Be ready for a lob. 5. Serve and volley from time to time. Serve wide and drop the ball to the other side of the court. Be ready for a lob. 6. Try some soft and short balls, and see how he handles mid-court and volleys. Get him to the net, and pass/lob him. This should be his most uncomfortable zone.

I don't moonball (am not a patient or proficient lobber), but a friend does this shot to me often from his baseline on a ball that bounces to him over shoulder height... I respectfully hate that shot and can see how it could punish a moonballer or pusher.

Yep... as soon as I see moonballing, I just smile, because I have my game plan in place. It works even better on clay, because the ball digs in and comes up at about ankle height.

I do use Eastern grips, both forehand and backhand. This is why it is so hard to strike the moonballs, even with my forehand. If I can't take it on the rise, it becomes almost impossible to hit it back, unless I make it a moonball. My opponent actually smiles when he sees me returning with moonballs.
I guess this is exactly the type of game he enjoys playing, and he will be able to dominate me with better and deeper moonballs.

I don't buy into the claim that to beat a moonballer you must moonball. I think the key thing you need to beat them is patience. It's a good learning process. I myself used to lose to this one guy who had great strokes but insisted on moonballing instead. After losing two matches to him through frustration, I adapted and then began beating him. Thing is, it still wasn't much fun, so we don't hit anymore, but I do thank him for improving my game. The only issue (and it's quite a big issue) with the patience approach is that it must be allied to consistency, and I'm not sure how much of that 4.0 players have.

Unless you have a reliable weapon that you can consistently punish moonballs with (uncommon among rec-level players), the best response to a moonball is a moonball. It's a safe shot that keeps you in the point until you have a high-percentage opportunity to attack with the weapons you have. Further, you will find that many moonballers, who are accustomed to hitting mb's off of aggressive shots, don't handle mb's that well themselves.

You win again. I'm amazed how often I agree with your assessment of things in this section. As an aside, I often moonball / push, etc., when playing an unfamiliar opponent. It's amazing how many people don't know what to do, get frustrated, and just hand me the victory.

I've played against many moonball / pusher opponents as well. I just stay steady. Often, that's enough (they are expecting their opponent to get impatient and frustrated). But against the better guys, you just have to be patient until you get a ball you can be aggressive on. Sometimes you win those points, and sometimes you make an error (or get lobbed or passed) anyway.
Even against better opponents who can be both patient and more aggressive against the moonball strategy (meaning they do both), mixing in moonballs and push shots is a great way to deny them rhythm and often "steal" a few cheap points.

The only issue (and it's quite a big issue) with the patience approach is that it must be allied to consistency, and I'm not sure how much of that 4.0 players have.

I have good consistency as long as I don't try to paint the sidelines. However, my moonballing opponent has good movement and stamina, and I do get a little frustrated facing one moonball after another. This is when consistency goes out of the window and I try to hit impossible shots (such as wide shots and drop shots that are either too short or too long). Error after error, and the set is gone.

Moonballed to death? So the player was more consistent, with better topspin, and the balls jump off your backhand side? People always assume moonballs are a bad way of playing tennis. Rafael Nadal is a known moonballer, not because he has ugly technique, but because he can hit an amazing amount of spin that explodes off the ground. The secret to beating moonballers is patience. You should be willing to grind things out and drop the one-shot-winner mentality. Moonballers only know one thing, and that is to win. They are mentally tough and amazingly physically fit. The claim that pushers don't exist at higher levels is also not valid. People just say that all the time, but in reality the more consistent player always wins. I was an aggressive player before, and I ended up losing to players "with lower ranks" when, as a matter of fact, they were smarter players. They fed off my errors and played high-percentage tennis. Don't be tempted to go for winner shots; go for percentage. Slice it down and don't topspin your backhand.

Here's the thing... unless you can devote a considerable amount of time to tennis, there is always a chance you will lose to a good lobber/pusher.
I have seen it happen in open tournaments to good players or ex-college players who don't play 6 hrs per day anymore. That's the truth. The reason you don't see it at the pro level is because they have perfect timing from practicing every day for hours. They will simply take it on the rise or out of the air. That is a skill that, even if you develop it, will quickly go away if you don't have the time to maintain that level by hitting every day, a luxury most of us don't have. One thing is that as an amateur you can choose not to play with him anymore, and no harm done. Life is too short to deal with that. If you guys are part of a league and you have to play him, one piece of advice I would give is: don't underestimate him because of his choice of strokes, BUT take it easy, almost as a practice match, and clear your afternoon schedule, since you will be on the court for a while. It is after you relax that you will find a way to beat him. Often this notion that he is beating you with inferior strokes puts too much pressure on you to go for winners and punish him for his "poor strokes", while the truth is that at this level you are not equipped to do that consistently.

I have good consistency as long as I don't try to paint the sidelines. However, my moonballing opponent has good movement and stamina, and I do get a little frustrated facing one moonball after another. This is when consistency goes out of the window and I try to hit impossible shots (such as wide shots and drop shots that are either too short or too long). Error after error, and the set is gone.

You lack what is called "shot tolerance." When a rally continues too long, your anxiety rises and you get impatient, because you don't have confidence in your ability to just keep the ball in play. So you end your anxiety by ending the point, for better or for worse. Sometimes just knowing about this phenomenon is enough to cure the problem. You won't turn into a pumpkin just because you have to hit your 5th or 6th shot in a rally.
Be patient, moonball with him, and wait for a short ball to attack. Having said that, if you really want to be able to be a human ball machine, and it is an invaluable skill to have if you want to win tennis matches, there's nothing better than doing crosscourt drills with a like-minded drilling partner. When you can hit 10, 15, 20 crosscourt groundies in a row (each), deep to your opponent's corner in practice drills, hitting 5-6 in a match is easy, and you will usually draw a UE or a weak reply to attack well within that shot count. When your crosscourt groundies are really grooved in, it's a piece of cake to set up for a moonball and drive it with good pace and spin crosscourt to the opponent's corner, and to continue doing that until he hits a weak reply you can attack, makes a UE, or is so far out of position that a standard crosscourt groundie becomes an outright winner. You don't have to kill the ball, just execute good-quality crosscourt groundies.

Rafael Nadal is a known moonballer, not because he has ugly technique, but because he can hit an amazing amount of spin that explodes off the ground. The secret to beating moonballers is patience.

While I agree with the patience part, as I wrote earlier, describing Nadal as a "known moonballer" is way off base in my opinion. He hits with some loopy topspin, no doubt, but when I think of moonballs I'm thinking of things that clear the net by 20 feet, a la a cramping Chang in the FO final.

Well, if the mooner was a real 4.5, any 4.0 would lose. But drawing the moonballer IN with dropshots and short angles, and then lobbing him, takes some of his legs away and avoids his best shot, the high baseline moonball. You know he's only a 4.5; he cannot hit every shot well and every shot consistently.

Haha, there's this old man who plays on the courts where I play. He's a moonballer, and like Long Face, my overheads weren't good enough to derail his game.
Apparently, he had been playing like this for years, and I had not been hitting overheads for that long, but I knew I was stronger, fitter, faster, and just as consistent as he was. So, whenever he moonballed, I would return his moonball right back. It was like shooting at airplanes, but I would end up winning most of these exchanges. Nowadays, whenever he plays me, he doesn't moonball anymore, and if he doesn't, I don't play that way either. I guess this was applying what Kaptain Karl was saying about Bill Tilden's advice on playing against your opponent's strength.

I do use Eastern grips, both forehand and backhand. This is why it is so hard to strike the moonballs, even with my forehand. If I can't take it on the rise, it becomes almost impossible to hit it back, unless I make it a moonball. My opponent actually smiles when he sees me returning with moonballs. I guess this is exactly the type of game he enjoys playing, and he will be able to dominate me with better and deeper moonballs.

Yeah, the big problem I found with Eastern grips was that it was difficult to drive aggressive topspin shots much above waist height. It's also difficult to hit with pace unless you are moving forward, and moonballers tend to push you back. Hitting it on the rise requires that you already have the timing to do this regularly, and if you wait for the moonball to fall low enough to land in your strike zone, you will be so far back that you lose angles, and even a hard shot will take a long time to get to your opponent. I actually hit better moonballs and topspin lobs than most of my moonballing opponents, and even they find it frustrating to deal with their own medicine, but I guess you shouldn't try this against them if you are not good at it.

I have good consistency as long as I don't try to paint the sidelines. However, my moonballing opponent has good movement and stamina, and I do get a little frustrated facing one moonball after another.
This is when consistency goes out of the window and I try to hit impossible shots (such as wide shots and drop shots that are either too short or too long). Error after error, and the set is gone.

That is exactly what they want you to do. Play it patiently and don't attack the shots that you know you can't; send them back with enough on them to prevent yourself from getting attacked, but don't give in to the urge to overhit and make UEs. With Eastern grips, I found that at shoulder height I couldn't drive a topspin shot, but I could still hit a good moonball, topspin lob, or slice/chop effectively. Find the shots he dislikes or returns weakly.

I'm a 4.0 level player with a one-handed backhand, and I play an aggressive type of all-court game. I just lost a match against a guy who hits moon-balls whenever possible. He tosses most of his baseline groundstrokes as high as possible (almost high enough for overheads, but not quite), and most of them land close to the baseline. He did this on 90% of his groundstrokes, and he enjoyed seeing me suffer from it.

LOL. I moonballed a guy to death a few weeks ago. You sure it wasn't me that you played? My calves were cramping badly, to such an extent that I felt as if my calf muscles were going to snap every time I pushed off from my legs, particularly on serves, hitting anything on the run, etc. So in order to reduce my movement, instead of setting up to rip the ball, I just lofted the ball up and by chance noticed that he really struggled with anything high and deep. I honestly couldn't believe it, because I had spent almost the entire match cranking the ball and running myself into the ground with 'conventional' tennis. Over the next few games, I tried refining it by adding more topspin, so that in addition to landing deep, almost on the baseline, the ball would push him even further back beyond the baseline.
He just couldn't handle it, and his game completely self-destructed from that point. He was trying to hit balls above his head, standing 7 ft behind the baseline. If it wasn't for the fact that there was a crowd of people watching, I would have started laughing with every super-destructive moonball. :mrgreen:

Here's what, in my view, you need to consider to have a chance of countering moonballs:

1. You need to be able to neutralize the rally. In other words, you need to be able to return the ball in a way that doesn't leave you exposed. If you return a short ball, you're just inviting him to finish the point or play a drop shot, and you're never retrieving that if you're starting from 7 ft behind the baseline. If you can't neutralize the moonball, you've lost control of the rally. You need to return deep, so that he can't attack off your return or have time to launch another moonball. Ideally, you want to return to his weaker side (see below).

2. You need to be physically fit. You need to anticipate the trajectory of the moonball, move quickly, and position yourself so that you can hit the most effective (deep) return. Usually this is going to be from the FH side, so don't be afraid to run around your 1BH if that's what it takes to hit a deep return. Unless you have a very well-developed 1HB, it's more difficult to return a moonball bouncing high and deep to your BH side. If you can't repeatedly run around your BH, repeatedly run deep, and repeatedly run forward to get yourself in the best position to hit a neutralizing return, you're already at a disadvantage.

3. You need to be patient enough to wait for an opening, and mentally tough enough not to let this faze you. Return to his weakness, which may be his BH side, high and deep. This is a real battle of wills, and of game intelligence.

4. Don't give him time. The moonballer can only really hit an effective moonball if he has time to do so.
It's hard to hit a moonball if the moonballer is under pressure, on the run, pulled out of position, scrambling for the ball, etc. So given that, it follows that you need to put the moonballer under pressure so that he doesn't have the opportunity to repeatedly hit moonballs. Don't let the moonballs get started if you can prevent this (see below).

5. You need to press your game onto your opponent and play to his weaknesses (whatever they are). You need to press, press, press, keeping up the pressure of your normal game without overhitting or making UEs. How you do that is down to you, your individual abilities, and how your opponent plays. It might be hitting with pace and spin to his backhand, for example, particularly if he has a 1HB. It's hard for him to hit a controlled moonball if the ball's coming at him big, with pace and spin to his BH side. You can't give him time. It might mean slicing and keeping the ball low, because it's harder to tee up a controlled moonball if the ball is slicing low through the court. It might be moving him around and coming to the net to finish with a volley or OH. Put him under pressure and the moonballs he hits will diminish in quality, or you'll force him into a UE. The worst thing you can do is hit a mid-court ball with medium or little pace and no spin. What's going to be most effective depends on your opponent's particular strengths and weaknesses. If you can't press your game onto your opponent enough of the time, when you have the chance to do so, then you just need to work on your own game, because he's found a weakness that causes your game to self-destruct. If that's the case, then kudos to him (he presumably wants to win), and it's something that you need to work on neutralizing.

6. I don't agree with the comments earlier in this thread about volleying or slicing moonballs. The way I was hitting moonballs in my match, they were landing practically on the baseline.
You're not going to volley or slice that from there. Sure, you can try a swinging volley or a serve-type OH, but it's not the percentage play and you're at risk of a lot of UEs. At the end of the day, you just need to have the game, patience, intelligence, and tenacity to deal with this stuff, but if you take the above into account, you should be in a good position to deal with him, and it should push you to develop your technique and your physical and mental game further, which is all good in my book. Going to try and find you a video in a minute of a couple of ITF players effectively dealing with deep/high balls, because I saw a video on here a month back that illustrates how good players deal with this.

I played the moonballer again on Friday evening, and this time I won, 6-4, 6-3. I believe the following two changes in my game helped me win this match:

1. I was much more patient this time. When I faced the high moonballs, I was less scared and less frustrated. If a moonball was deep, I just hit it back, on the rise, in a very safe manner, to the general left or right side (depending on his position), and waited for the next one. No errors, I told myself. With a more relaxed mind-set, the moonballs became less tiring, and I could last much longer in long and boring rallies. If the moonball was short, I made sure I punished it with authority. I would drive it deep into one corner, and then keep him running, or go to the net. I had some success at the net, and was able to hit some good overheads, since I was expecting the lobs. I got passed quite a few times, but those didn't change the final result.

2. I found out that he was not so good at net, so I drew him in as much as possible. During several of those "safe shots," the ball landed short on his side. I saw that those short balls actually made him uncomfortable.
I started using soft forehands or backhand slices to produce pathetic short balls (which usually mean suicide), then passed or lobbed him and won many points doing so. But it was still a tough, long match, and I still lost a number of points by mis-hitting some deep topspin moonballs that landed near the baseline. The first set lasted more than an hour due to the many long rallies. I broke him at 4-4, and was able to serve it out. At the end of the first set, I could see that he was very tired. Me too. The second set was somehow easier because his moonballs often landed shorter.

In this match, I didn't try the following two advanced techniques, because I have never practised these shots, and I didn't feel that they were safe, high-percentage plays for my game plan: 1. Volleying the moonball in the air; 2. Hitting an overhead if the moonball is high enough. Maybe I can try a few of these when I play him next time. Now, with a little more confidence, I could probably start trying new things. The goal of the last match was to win.

Derailed. I like to end points quickly, whether by S/V, hard-hitting groundies, or drawing the opponent IN and then attempting a pass. I love playing those kinds of points, because to me, it's real tennis. Real losing tennis, maybe. But I get to hit, and I get to face shots hit like the pros'. Or closer than facing a pusher's shots. The "pro" part is pure delusion, of course.
Eddie Murphy could use a win. It's been six long years since his last role that he can be proud of, as James "Thunder" Early in Dreamgirls, which is a controversial opinion only to the three members of N.S.T.T.M.D.I.T.S.F.A.T.H.A.S., or the Norbit, Shrek the Third, Meet Dave, Imagine That, Shrek Forever After, and Tower Heist Appreciation Society. So, like any middle-aged star in need of a quick buck, Murphy is teaming up with The Shield creator Shawn Ryan to relive his greatest, wise-crackingest success: Beverly Hills Cop.

Vulture has learned exclusively that the project is not only a reality, but that Shield creator Shawn Ryan is leading the charge to resurrect Axel Foley. Multiple industry insiders tell us Ryan, Eddie Murphy, and Sony Pictures Television have partnered on a small screen adaptation of the massively successful film franchise, and that broadcast networks began hearing their pitch on the spin-off this afternoon. In addition to his main role as an exec producer, Murphy has agreed to an on-camera role in the show. (Via)

And they shall call it Banana in the Tailpipe. In an interview with Rolling Stone last year, Murphy said, "What I'm trying to do now is produce a TV show starring Axel Foley's son, and Axel is the chief of police now in Detroit…I'd do the pilot, show up here and there." Even if the premise doesn't impress, it still sounds about a million Klumps better than Beverly Hills Cop 4, directed by Brett Ratner, which was Murphy's original, now-stalled plan. There are no further details on the TV project, but I hope it takes place shortly after the events of Beverly Hills Cop 3, when that guy…did that thing…and that one lady…also did that thing. I have not seen Beverly Hills Cop 3. Still, Judge Reinhold or GTFO.

You haven't seen Beverly Hills Cop 3? Don't. It is by FAR the worst of the series, to the point that it shouldn't even be considered a trilogy. It didn't just jump the shark; it beat, raped, killed, then super raped it.
Unreleased Changes
------------------

1.27.0 (2020-09-23)
------------------

* Feature - Improvements to DeleteTerminology API.

1.26.0 (2020-09-15)
------------------

* Feature - Code Generated Changes, see `./build_tools` or `aws-sdk-core`'s CHANGELOG.md for details.

1.25.0 (2020-08-25)
------------------

* Feature - Code Generated Changes, see `./build_tools` or `aws-sdk-core`'s CHANGELOG.md for details.

1.24.0 (2020-06-23)
------------------

* Feature - Code Generated Changes, see `./build_tools` or `aws-sdk-core`'s CHANGELOG.md for details.

1.23.1 (2020-06-11)
------------------

* Issue - Republish previous version with correct dependency on `aws-sdk-core`.

1.23.0 (2020-06-10)
------------------

* Issue - This version has been yanked. (#2327).
* Feature - Code Generated Changes, see `./build_tools` or `aws-sdk-core`'s CHANGELOG.md for details.

1.22.0 (2020-05-28)
------------------

* Feature - Code Generated Changes, see `./build_tools` or `aws-sdk-core`'s CHANGELOG.md for details.

1.21.0 (2020-05-07)
------------------

* Feature - Code Generated Changes, see `./build_tools` or `aws-sdk-core`'s CHANGELOG.md for details.

1.20.0 (2020-03-09)
------------------

* Feature - Code Generated Changes, see `./build_tools` or `aws-sdk-core`'s CHANGELOG.md for details.

1.19.0 (2020-01-08)
------------------

* Feature - This release adds a new family of APIs for asynchronous batch translation service that provides option to translate large collection of text or HTML documents stored in Amazon S3 folder. This service accepts a batch of up to 5 GB in size per API call with each document not exceeding 1 MB size and the number of documents not exceeding 1 million per batch. See documentation for more information.

1.18.0 (2019-10-23)
------------------

* Feature - Code Generated Changes, see `./build_tools` or `aws-sdk-core`'s CHANGELOG.md for details.

1.17.0 (2019-07-25)
------------------

* Feature - Code Generated Changes, see `./build_tools` or `aws-sdk-core`'s CHANGELOG.md for details.

1.16.0 (2019-07-01)
------------------

* Feature - Code Generated Changes, see `./build_tools` or `aws-sdk-core`'s CHANGELOG.md for details.

1.15.0 (2019-06-17)
------------------

* Feature - Code Generated Changes, see `./build_tools` or `aws-sdk-core`'s CHANGELOG.md for details.

1.14.0 (2019-05-21)
------------------

* Feature - API update.

1.13.0 (2019-05-15)
------------------

* Feature - API update.

1.12.0 (2019-05-14)
------------------

* Feature - API update.

1.11.0 (2019-03-21)
------------------

* Feature - API update.

1.10.0 (2019-03-18)
------------------

* Feature - API update.

1.9.0 (2019-03-14)
------------------

* Feature - API update.

1.8.0 (2018-11-28)
------------------

* Feature - API update.

1.7.0 (2018-11-20)
------------------

* Feature - API update.

1.6.0 (2018-10-24)
------------------

* Feature - API update.

1.5.0 (2018-10-23)
------------------

* Feature - API update.

1.4.0 (2018-09-06)
------------------

* Feature - Adds code paths and plugins for future SDK instrumentation and telemetry.

1.3.0 (2018-09-05)
------------------

* Feature - API update.

1.2.0 (2018-06-26)
------------------

* Feature - API update.

1.1.0 (2018-04-03)
------------------

* Feature - API update.

1.0.0 (2017-11-29)
------------------

* Feature - Initial release of `aws-sdk-translate`.
Look, I don't want any drama—I don't want any drama—but I'm also not totally on board with The Hotwives of Orlando, a new Bravo-spoofing comedy that debuted on Hulu at midnight Tuesday. (Hulu Plus subscribers can watch all seven 22-minute episodes right now; Hulu Plus-less plebes can only see the first two today, while the rest will be released on a week-to-week basis.) The basic gist: A large ensemble of ladies you already love (or should love), including Casey Wilson, Kristen Schaal, and The Office's Angela Kinsey, don an assortment of skintight dresses and tacky wigs to play the titular Hotwives, archetypes familiar to anyone who's caught an episode (or 500) of Bravo's indomitable Real Housewives franchise. Wilson is Tawny St. John (chyron: "Trophy Wife"), a Gretchen Rossi-esque bimbo who's both having a hot affair with her trainer (played by… Joey McIntyre?!) and caring for her deathly ill husband (played by the delightful Stephen Tobolowsky; the joke is he isn't actually dying). Schaal is Amanda Simmons, a Kim Richards-ian former child star (read: she appeared in prune juice commercials) whose drug addiction and alcoholism are played for uncomfortable laughs. Kinsey is Amanda's sister Crystal, a devout Christian modeled on Orange County's Alexis Bellino. The cast is rounded out by three more Hotwives, all of whom also have clear Housewives inspirations: Series co-creator Danielle Schneider takes the role of Teresa Giudice avatar Shauna Maducci, a bundle of Jersey stereotypes who comes complete with a debilitating shopping addiction and a husband who hates her (him: "You are such a dumb idiot!" Her: "I love when you say things like that"). Andrea Savage tackles Veronica Von Vandervon, who's a British sexpot of a certain age obsessed with her dog, just like Lisa Vanderpump.
Finally, there’s Tymberlee Hill, whose Phe Phe Reed is sort of like every Atlanta Housewife rolled into one irresistible package; she’s a tireless multitasker who counts law, cake designing, Zumba, and taxidermy among her many vocations. And her husband’s a professional mascot. And she’s determined to make “I gotta be Phe Phe” into a catchphrase. Get the picture? The issue here, in as much as there’s an issue here, isn’t that the jokes aren’t funny—it’s that they’re kinda lazy, especially coming from a group this capable. By this point, the real Housewives are so utterly bonkers (Scary Island happened four years ago, guys) that picking on things like their dumb charity projects (Tawny’s raising money to give high heels to needy Orlando dogs) and their general hypocrisy (everyone says they don’t want any drama, even though they tooootaaally want drama) just seems like making jam out of low-hanging fruit. Plus, Housewives parodies have been done before, and done well; 30 Rock’s “Queen of Jordan” remains the gold standard, mostly because of the way Angie Jordan says the word “ham,” but Kevin Hart’s Real Husbands of Hollywood has deservedly earned a following of its own. If Hotwives wants to set itself apart, it’s going to have to do more than give Veronica lines like this one: “Do you get it? I made an orgasm joke. ‘Cause dogs come, and men do the other kind of coming. It’s kind of a play on words because I’m so naughty. Do you need me to explain that again?” That said, if you like Bravo originals already, you’ll definitely get a kick out of the Hotwives—and the show is spot-on about some of the franchise’s more specific fixtures, like That Woman Who’s Friends With the Housewives But Isn’t Actually a Housewife Herself and the overarching creepiness of Andy Cohen (here, he’s called Matty Green and he’s played by Paul Scheer; he appears at the end of each episode to plug his after-show, The Hotwives Cooldown, and season 1’s finale is a “reunion special” he’ll host). 
A few bits also sail past easy punch lines to reach a place that’s deeper and weirder, like what happens when party planner Antoine (Jeff Hiller) appraises Tawny’s ostentatious living room: “This candle is terrible. This book is wonderful. I like this table, but not a lot.” If Hotwives embraces this aspect, getting weirder and less predictable as season 1 continues, it might reach the same level as Burning Love (which, like Hotwives, is produced by Paramount Digital Entertainment), or Childrens Hospital, or Kroll Show—genre parodies that eventually transcended their premises to become something new and exciting. And hey, even if it doesn’t, the series is amusing enough to qualify as a good summer diversion—if not exactly must-see TV.
[The use of the Epidemiology Cancer registry Baden-Württemberg in the follow-up of the EPIC cohort]. The European Prospective Investigation into Cancer and Nutrition (EPIC) is a prospective multicentre study that has been implemented to further the understanding of the association between diet and chronic diseases, with emphasis on cancer. In Heidelberg, from June 1994 until October 1998, about 25,500 subjects were recruited (women aged 35 to 65 years, men aged 40 to 65 years). Apart from extensive questions about food intake, the participants were also asked to provide detailed information about their smoking habits, physical activity, subjective well-being, medical history, and use of medications. As well as completing the questionnaire and a personal interview, the participants also gave blood samples, and anthropometric measures and blood pressure were taken in a standardized manner. The analyses of the EPIC study depend on achieving a comprehensive record of all new cancer cases and all deaths, together with the corresponding cause of death, within the study population. To date, all self-reported incident cancer cases are verified by comparing them with pathology reports and hospital records. They are then coded according to the World Health Organisation's (WHO) International Classification of Diseases for Oncology (ICD-O-2). Since no cancer registration existed in the study region at the beginning of the investigation, the participants are followed up by interval questioning ('active follow-up'). In order to increasingly integrate the data of the Cancer Registry Baden-Württemberg (EKR-BW), attempts were made to explore record linkage systems. For this purpose, record linkages between EPIC-Heidelberg and the EKR-BW were performed in the years 2000, 2002, and 2003. Procedures were evaluated for performing an anonymous linkage of the EPIC data with the data of the EKR-BW.
After a pilot project on the feasibility of the linkage, in which the program was evaluated on the EPIC data, record linkages have been performed regularly. Different coding systems were applied. Simultaneously, the EPIC data on cancer cases among the Heidelberg study participants are passed on to the Cancer Registry, thus contributing to improving the completeness of the registry. So far the active follow-up cannot be replaced by passive follow-up through record linkage with the cancer registry, but in the long term this may become possible. Since the technical requirements are met, attempts should be made to improve the completeness of the epidemiological Cancer Registry Baden-Württemberg.
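As a rough illustration of the kind of anonymous record linkage described above, two data holders can compare hashed pseudonyms instead of plain identifiers. The field choices, normalization, and hashing scheme below are illustrative assumptions only, not the actual EPIC-Heidelberg/EKR-BW procedure (real linkage protocols use carefully specified normalization, keyed hashes, and tolerance for spelling variants):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Locale;
import java.util.Map;
import java.util.Set;

public class AnonymousLinkage {

    // Hypothetical pseudonym: normalize the identifier, then hash it.
    static String pseudonym(String name, String birthDate) {
        try {
            // Normalization here is a toy choice: lower-case and strip whitespace.
            String key = name.toLowerCase(Locale.ROOT).replaceAll("\\s+", "")
                    + "|" + birthDate;
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] hash = md.digest(key.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : hash) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // Each side shares only pseudonyms, never plain identifiers.
        Map<String, String> cohort =
                Map.of(pseudonym("Anna Muster", "1950-04-12"), "EPIC-001");
        Set<String> registry =
                Set.of(pseudonym("Anna  MUSTER", "1950-04-12")); // same person, messier spelling

        for (Map.Entry<String, String> e : cohort.entrySet()) {
            if (registry.contains(e.getKey())) {
                System.out.println("Linked cohort record " + e.getValue());
            }
        }
    }
}
```

Because both sides apply the same normalization before hashing, the two spellings of the same person collapse to one pseudonym and the records link without either side revealing a name.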
Oh boy, is this insane. So some fancy-schmancy Harvard lady wrote this insane article calling for a ban on homeschooling because kids are really the property of the government and the parents are just getting in the way of statist indoctrination. I'm only embellishing a tiny bit. Well, Ted Cruz isn't down with that at all: At the same time that Harvard—w/ a $41bn endowment—indefensibly gets $9mm in taxpayer-funded coronavirus "relief," they publish a cover story attacking home-schooling & people of faith. Elitist condescension looking down on the rest of America doesn't wear well, even in crimson. https://t.co/EeggyKerTU — Ted Cruz (@tedcruz) April 19, 2020 It does seem like the relief bill was yet another example of incompetent government doing what it does worst. Harvard is just one example of an institution that really did not need more millions of the taxpayers' money, but this insanity really forces the issue: Harvard Magazine: "The Risks of Homeschooling" The elites are terrified that families are figuring out they can educate their own children at home. pic.twitter.com/bXLEc0IMLR — Corey A. DeAngelis (@DeAngelisCorey) April 18, 2020 The thread is worth reading because of the extreme position that Elizabeth Bartholet takes. Harvard's Elizabeth Bartholet "recommends a presumptive ban" on homeschooling. They are coming after your right to educate your own children at home. pic.twitter.com/mxjfyTQarC — Corey A. DeAngelis (@DeAngelisCorey) April 18, 2020 The Harvard professor says "homeschooling violates children's right to a meaningful education and their right to be protected from potential child abuse" Yeah, because abuse never ever happens in government schools. And all government schools provide meaningful education.🤦‍♂️ pic.twitter.com/NaQnlpMh75 — Corey A. DeAngelis (@DeAngelisCorey) April 18, 2020 Elizabeth says the burden of proof should be on parents to get permission to homeschool from the government. She has it backwards.
Our children don't belong to the government. pic.twitter.com/hp9C78vcWq — Corey A. DeAngelis (@DeAngelisCorey) April 18, 2020 Do we really want the people who bungle nearly every job they’re given to have complete and total control of our children too? I don’t think so and God bless people like Ted Cruz who are in positions of power and can sound the alarm against this kind of crap. I mean just look at this: Woah. I just noticed the bizarre cover image used for the Harvard Magazine article. It shows a sad homeschool child imprisoned in a house while the other kids are outside playing. Notice the house is made of books, one of them being the Bible 😱👻 pic.twitter.com/IZfaVuIA0G — Corey A. DeAngelis (@DeAngelisCorey) April 18, 2020 They can’t even spell “Arithmetic” correctly, and think the Bible is to blame for kids doing what exactly? Not playing outside enough? The image makes zero sense, but actually I’d prefer nonsense to the ideological totalitarianism that Bartholet advocates in the article itself. Meanwhile every idiot on the fringe of the right-wing side of things is made to be representative of the entire movement by the media…
Q: A problem converting byte[] to String[] or String in Java.

I send a request to a server, and it responds with a String[] or a String in the form of a byte[]. My task is to convert this byte array into something readable. So far I only get gibberish. I have tried new String(bytes, charset), new String(bytes), and Arrays.toString(bytes), but without result. As far as I know, the data comes from the server in the 1251 encoding.

A: To convert a byte array to a string, you need to know which encoding the bytes were written in, and then call the String constructor with that encoding:

Charset ch = Charset.forName("windows-1251");
byte[] data = {(byte)0xCF, (byte)0xF0, (byte)0xE8, (byte)0xE2, (byte)0xE5, (byte)0xF2};
String str = new String(data, ch);
System.out.println(str);

Output: Привет
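To see why the charset argument matters, here is a minimal, self-contained sketch. The byte values are the same windows-1251 sample used in the answer above, and decoding them with a deliberately wrong charset (ISO-8859-1) reproduces the kind of gibberish the questioner describes:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        // "Привет" encoded in windows-1251 (one byte per Cyrillic letter).
        byte[] data = {(byte) 0xCF, (byte) 0xF0, (byte) 0xE8,
                       (byte) 0xE2, (byte) 0xE5, (byte) 0xF2};

        // Correct: decode with the encoding the server actually used.
        String ok = new String(data, Charset.forName("windows-1251"));
        System.out.println(ok); // Привет

        // Wrong: the same bytes decoded as ISO-8859-1 become mojibake,
        // because each byte is mapped to a Latin-1 code point instead.
        String bad = new String(data, StandardCharsets.ISO_8859_1);
        System.out.println(bad); // Ïðèâåò
    }
}
```

The bytes themselves carry no encoding label, so the decoding charset must come from out-of-band knowledge (here, the server's documented use of windows-1251).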
Details

This update has been rated as having important security impact by the Red Hat Security Response Team. MySQL is a multi-user, multi-threaded SQL database server. MySQL is a client/server implementation consisting of a server daemon (mysqld) and many different client programs and libraries.

A flaw was found in the way the MySQL mysql_real_escape() function escaped strings when operating in a multibyte character encoding. An attacker could provide an application a carefully crafted string containing invalidly-encoded characters which may be improperly escaped, leading to the injection of malicious SQL commands. (CVE-2006-2753)

An information disclosure flaw was found in the way the MySQL server processed malformed usernames. An attacker could view a small portion of server memory by supplying an anonymous login username which was not null terminated. (CVE-2006-1516)

An information disclosure flaw was found in the way the MySQL server executed the COM_TABLE_DUMP command. An authenticated malicious user could send a specially crafted packet to the MySQL server which returned random unallocated memory. (CVE-2006-1517)

A log file obfuscation flaw was found in the way the mysql_real_query() function creates log file entries. An attacker with the ability to call the mysql_real_query() function against a mysql server can obfuscate the entry the server will write to the log file. However, an attacker needed to have complete control over a server in order to attempt this attack. (CVE-2006-0903)

This update also fixes numerous non-security-related flaws, such as intermittent authentication failures. All users of mysql are advised to upgrade to these updated packages containing MySQL version 4.1.20, which is not vulnerable to these issues.

Solution

Before applying this update, make sure all previously released errata relevant to your system have been applied. This update is available via Red Hat Network.
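The multibyte escaping flaw (CVE-2006-2753) belongs to a well-known class of bugs, which can be sketched with a hypothetical byte-at-a-time escaper. None of this is MySQL's actual code, and GBK here merely stands in for whatever multibyte connection encoding is in use:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.Charset;

public class MultibyteEscapeDemo {

    // Hypothetical naive escaper: works byte-at-a-time and backslash-escapes
    // every quote, with no knowledge of the connection character set.
    static byte[] naiveEscape(byte[] in) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte b : in) {
            if (b == '\'') out.write('\\');
            out.write(b);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // Attacker input: a stray multibyte lead byte followed by a quote.
        byte[] attack = {(byte) 0xBF, (byte) '\''};
        byte[] escaped = naiveEscape(attack); // bytes: 0xBF 0x5C 0x27

        // Decoded as GBK, 0xBF 0x5C forms ONE character -- the inserted
        // backslash is swallowed, and the quote survives unescaped,
        // free to terminate the SQL string literal.
        String decoded = new String(escaped, Charset.forName("GBK"));
        System.out.println(decoded.length());          // 2
        System.out.println(decoded.charAt(1) == '\''); // true
    }
}
```

The server-side fix is an escaping routine that is aware of the connection character set; on the application side, parameterized queries avoid string escaping altogether.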
To use Red Hat Network, launch the Red Hat Update Agent with the following command: up2date This will start an interactive process that will result in the appropriate RPMs being upgraded on your system.
IN THE TENTH COURT OF APPEALS

No. 10-05-00362-CV

Linda Beaumont, Roni Beaumont Bellhouse, and Beaumont Ranch, L.L.C.,
Appellants

v.

Tanya Basham,
Appellee

From the 413th District Court
Johnson County, Texas
Trial Court No. C200200061

Opinion

Tanya Basham filed suit against Linda Beaumont, Roni Beaumont Bellhouse, and Beaumont Ranch, L.L.C., alleging causes of action for defamation, invasion of privacy, theft, wrongful termination, sexual harassment, and intentional infliction of emotional distress.  A jury found in Basham's favor on the defamation, invasion of privacy, and theft claims and awarded her $260,000 in actual damages, $30,000 in additional/exemplary damages, and $40,000 in attorney's fees.  Appellants contend in ten issues that:

(1) the court abused its discretion by denying the Appellants' special exceptions;

(2) the court erred by submitting a single question in the charge for multiple allegations of defamatory statements;

(3) there is no evidence or factually insufficient evidence to support the award of damages for past and future mental anguish;

(4) there is no evidence to support the award of damages for loss of reputation;

(5) there is no evidence to support the amount of damages awarded under the Texas Theft Liability Act;

(6) the court erred by awarding additional damages under the Theft Liability Act or, alternatively, by awarding additional damages in excess of $1,000;

(7) there is no evidence that Beaumont committed an invasion of privacy and the court erroneously permitted a double recovery for theft and invasion of privacy because Basham suffered the same injury from the invasion of privacy as from the theft;

(8) the court erred by committing several errors which resulted in "cumulative harm" because the court allowed "the case to be tried generally on the issue of whether or not Linda Beaumont was a bad person";

(9) there is no evidence or factually insufficient evidence to support the attorney's fee award; and

(10) the court erred by the manner in which it computed prejudgment interest.

We will affirm in part and reverse and render in part.

Background

Basham was the bookkeeper for the Beaumont Ranch.  Because of financial difficulties at the Ranch, Linda Beaumont, a co-owner of the Ranch, told Basham to give vendors false information about when they would be paid.  After several months in an unpleasant work environment, Basham gave two weeks' notice.  About two weeks after Basham's departure, her nephew (and employee of the Ranch) Bryan Williams broke into her house at Beaumont's direction, looking for items belonging to the Ranch which Beaumont claimed Basham had stolen.  Williams apparently broke into Basham's home on four separate occasions, being accompanied on the last occasion by Roni Beaumont Bellhouse, Beaumont's daughter.

Beaumont filed a report with the Sheriff's Department alleging that Basham had stolen one million dollars from the Ranch.  An investigation was conducted, but no charges were ever brought against Basham.

Several witnesses testified about various statements which were made around the community or to others by telephone regarding alleged embezzlement by Basham and Basham's alleged involvement in a sexual relationship with a minor.

The court granted Appellants' motion for directed verdict as to Basham's claims for wrongful termination and intentional infliction of emotional distress.  The court also granted Basham's oral motion to dismiss her sexual harassment claim against the Ranch.

The jury found that Beaumont had slandered Basham but failed to find that Bellhouse had.
The jury found that both Beaumont and Bellhouse had committed theft and invasion of privacy and had acted with malice.  The jury awarded damages on the slander claim of $50,000 for loss of reputation, $100,000 for past mental anguish, and $25,000 for future mental anguish.  The jury awarded damages on the theft claim of $25,000 against Beaumont and $10,000 against Bellhouse.  The jury awarded damages on the invasion of privacy claim of $25,000 against Beaumont and $25,000 against Bellhouse.  The jury awarded additional/exemplary damages against Beaumont of $10,000 for slander, $10,000 for theft, and $10,000 for invasion of privacy.  The jury failed to find that Bellhouse should pay additional/exemplary damages.  The jury awarded attorney’s fees of $40,000 on Basham’s theft claim. Invasion of Privacy           Appellants contend in their seventh issue that (1) there is no evidence that Beaumont committed an invasion of privacy and (2) the court erroneously permitted a double recovery for theft and invasion of privacy because Basham suffered the same injury from the invasion of privacy as from the theft.           When we conduct a no-evidence review, we must determine “whether the evidence at trial would enable reasonable and fair-minded people to reach the verdict under review.”  City of Keller v. Wilson, 168 S.W.3d 802, 827 (Tex. 2005).  We “must credit favorable evidence if reasonable jurors could, and disregard contrary evidence unless reasonable jurors could not.”  Id.           The elements of a claim for invasion of privacy are (1) the defendant intentionally intruded on the plaintiff’s solitude, seclusion, or private affairs; and (2) the intrusion would be highly offensive to a reasonable person.  See Valenzuela v. Aquino, 853 S.W.2d 512, 513 (Tex. 1993) (citing Restatement (Second) of Torts § 652B (1977)); Russell v. Am. Real Estate Corp., 89 S.W.3d 204, 212 (Tex. App.—Corpus Christi 2002, no pet.); see also Clayton v. Wisener, 190 S.W.3d 685, 696 (Tex. 
App.—Tyler 2005, pet. denied).

The jury found that Beaumont committed an invasion of privacy and also engaged in a conspiracy to commit an invasion of privacy.  Appellants do not challenge this conspiracy finding or the finding that Bellhouse committed an invasion of privacy.

"Once a conspiracy is proven, each co-conspirator 'is responsible for all acts done by any of the conspirators in furtherance of the unlawful combination.'"  Carroll v. Timmers Chevrolet, Inc., 592 S.W.2d 922, 926 (Tex. 1979) (quoting State v. Standard Oil Co., 130 Tex. 313, 107 S.W.2d 550, 559 (1937)); accord Operation Rescue-Nat'l v. Planned Parenthood of Houston & Se. Tex., 975 S.W.2d 546, 561 (Tex. 1998); Goldstein v. Mortenson, 113 S.W.3d 769, 779 (Tex. App.—Austin 2003, no pet.).  Thus, if Bellhouse is liable for an invasion of privacy, then Beaumont is liable as well.  See id.

Appellants contend that they are not liable for invasion of privacy because Basham suffered the same injury from the invasion of privacy as from the theft.  We disagree.

Under the "one satisfaction rule," Texas law prohibits a "double recovery."  See Crown Life Ins. Co. v. Casteel, 22 S.W.3d 378, 390 (Tex. 2000); Betts v. Reed, 165 S.W.3d 862, 873 (Tex. App.—Texarkana 2005, no pet.); Baribeau v. Gustafson, 107 S.W.3d 52, 60 (Tex. App.—San Antonio 2003, pet. denied).  "This rule applies when multiple defendants commit the same act as well as when defendants commit technically different acts that result in a single injury."  Crown Life Ins., 22 S.W.3d at 390.  Here, Appellants contend the latter applies.

Nevertheless, if a plaintiff pleads alternate theories of liability, a judgment awarding damages on each alternate theory may be upheld if the theories depend on separate and distinct injuries and if separate and distinct damages findings are made as to each theory.  Baribeau, 107 S.W.3d at 60 (citing Birchfield v. Texarkana Mem'l Hosp., 747 S.W.2d 361, 367 (Tex.
1987)); Household Credit Servs., Inc. v. Driscol, 989 S.W.2d 72, 80 (Tex. App.—El Paso 1998, pet. denied); Borden, Inc. v. Guerra, 860 S.W.2d 515, 528 (Tex. App.—Corpus Christi 1993, writ dism’d by agr.).

Here, Basham alleged that an invasion of privacy was committed each time Williams and/or Bellhouse unlawfully entered her home to search for property belonging to the Ranch. The injuries Basham suffered from these invasions of her privacy arose from these intrusions upon her “solitude, seclusion, or private affairs.” See Valenzuela, 853 S.W.2d at 513; Clayton, 190 S.W.3d at 696; Russell, 89 S.W.3d at 212; Restatement (Second) of Torts § 652B.

Conversely, the injuries Basham suffered because of the thefts of property from her home arose from the “unlawful appropriations” of her property. See Tex. Civ. Prac. & Rem. Code Ann. § 134.002(2) (Vernon 2005).

The only element of damages addressed in the court’s charge with respect to Basham’s invasion of privacy and theft claims is mental anguish. The court expressly instructed the jury in the charge to not include in the award of damages for invasion of privacy “an amount for mental anguish, if any, arising from theft.” The court provided a similar instruction with the damages question for Basham’s theft claim.

Therefore, because Basham suffered separate and distinct injuries from the invasions of privacy and from the thefts and because the jury made separate and distinct damages findings as to each theory, we hold that there was no double recovery. See Baribeau, 107 S.W.3d at 61; Borden, 860 S.W.2d at 529.

Accordingly, we overrule Appellants’ seventh issue.

Mental Anguish

Appellants contend in their third issue that there is no evidence or factually insufficient evidence to support the jury’s award of $210,000 in damages for past and future mental anguish.

. . .

[A]n award of mental anguish damages will survive a legal sufficiency challenge when the plaintiffs have introduced direct evidence of the nature, duration, and severity of their mental anguish, thus establishing a substantial disruption in the plaintiffs’ daily routine. Such evidence, whether in the form of the claimants’ own testimony, that of third parties, or that of experts, is more likely to provide the factfinder with adequate details to assess mental anguish claims. Although we stop short of requiring this type of evidence in all cases in which mental anguish damages are sought, the absence of this type of evidence, particularly when it can be readily supplied or procured by the plaintiff, justifies close judicial scrutiny of other evidence offered on this element of damages.

When claimants fail to present direct evidence of the nature, duration, or severity of their anguish, we apply traditional “no evidence” standards to determine whether the record reveals any evidence of “a high degree of mental pain and distress” that is “more than mere worry, anxiety, vexation, embarrassment, or anger” to support any award of damages.

Parkway Co. v. Woodruff, 901 S.W.2d 434, 444 (Tex. 1995); accord Bentley v. Bunton, 94 S.W.3d 561, 606 (Tex. 2002).

Not only must there be evidence of the existence of compensable mental anguish, there must also be some evidence to justify the amount awarded. We disagree with the court of appeals that “[t]ranslating mental anguish into dollars is necessarily an arbitrary process for which the jury is given no guidelines.” While the impossibility of any exact evaluation of mental anguish requires that juries be given a measure of discretion in finding damages, that discretion is limited. Juries cannot simply pick a number and put it in the blank. They must find an amount that, in the standard language of the jury charge, “would fairly and reasonably compensate” for the loss.

Compensation can only be for mental anguish that causes “substantial disruption in . . . daily routine” or “a high degree of mental pain and distress.” There must be evidence that the amount found is fair and reasonable compensation, just as there must be evidence to support any other jury finding. Reasonable compensation is no easier to determine than reasonable behavior—often it may be harder—but the law requires factfinders to determine both. And the law requires appellate courts to conduct a meaningful evidentiary review of those determinations.

Bentley, 94 S.W.3d at 606 (quoting Saenz v. Fid. & Guar. Ins. Underwriters, 925 S.W.2d 607, 614 (Tex. 1996)) (citations omitted).

Here, the jury awarded Basham $100,000 in damages for past mental anguish on the defamation claim and $25,000 in damages for future mental anguish on this claim. The jury awarded Basham $25,000 in mental anguish damages against Beaumont and $10,000 against Bellhouse on the theft claim. Finally, the jury awarded $25,000 in mental anguish damages against Beaumont and $25,000 against Bellhouse on the invasion of privacy claim.

Appellants concede that Basham is entitled to at least “nominal damages” for the defamatory statements but argue that there is no evidence or factually insufficient evidence “justifying more than a nominal award.” They also contend that the award lacks evidentiary support because Basham failed to differentiate in her testimony among the symptoms of mental anguish caused by the defamation, those caused by the theft, and those caused by the invasion of privacy.

Basham testified that, after she was fired, she went to her son’s basketball game and sat by a friend, but her friend “just got up and walked away.” She noticed that “[e]verybody’s neck was breaking to look at me.” This caused her to feel “very humiliated and embarrassed.” Because of this, she left the bleachers and waited in the car for the game to be over.

She experienced similar situations at other basketball games and at her son’s baseball games. There was “a lot of whispering.” She was “totally devastated” when Williams told her that Beaumont had told him to spread the rumors about her in town.

Because of the embarrassment and humiliation, Basham stopped going to town as much as possible. She stopped going to her children’s school functions, and she shopped for groceries in another town. She started having anxiety attacks on those occasions when she did go to town, experiencing shortness of breath and an accelerated heart rate. She had “many” sleepless nights and continued to experience sleeping problems up to the date of trial. She thinks about it “[e]very minute of my life.” Basham testified that her “heart drops” whenever she hears a knock on her door because she is “thinking what else is Ms. Beaumont going to do to me.”

Basham ultimately moved to another town because she “couldn’t take living there anymore.” This caused separation from her sons because they chose to stay with their father and remain in their hometown. Her oldest son moved back in with her about three years later. Her youngest son moved back in for about six months but ultimately returned to his father’s home. She is still “struggling” with the consequences of her decision to move.

Basham is no longer able to trust people. She feels “very uncomfortable” in social settings. She has “basically become isolated” and does not participate in social events held at her new employer’s location. Her dating life has been significantly affected. She is afraid to get close to people because she is afraid they will hurt her.[1] She sought counseling for a period of time but could not afford to continue it.
The context of Basham’s testimony that her “heart drops” whenever she hears a knock on the door indicates that Basham was referring here to the occasion when sheriff’s deputies came to search her home in response to Beaumont’s false report of embezzlement.

With regard to the break-ins, Basham testified that her dating life had been adversely affected because it is difficult to start a new relationship “with as much emotions as I’m having to deal with.” Basham testified, “This experience just consumes me. It’s like this is just all that is on my mind all the time.”

Basham testified that her feelings and anxieties have not improved in the four years since the Appellants committed the wrongful acts against her.

As with the plaintiff in Bentley, “[t]he record leaves no doubt that [Basham] suffered mental anguish as a result of [the defamatory] statements.” See Bentley, 94 S.W.3d at 606. Other courts have likewise held that testimony similar to that given by Basham constitutes some evidence and/or factually sufficient evidence to support an award of damages for past mental anguish. See, e.g., Royal Maccabees Life Ins. Co. v. James, 146 S.W.3d 340, 350-51 (Tex. App.—Dallas 2004, pet. denied); Ramirez v. Fifth Club, Inc., 144 S.W.3d 574, 591 (Tex. App.—Austin 2004), rev’d in part on other grounds, 49 Tex. Sup. Ct. J. 863, 2006 Tex. LEXIS 638 (Tex. June 30, 2006); Cram Roofing Co. v. Parker, 131 S.W.3d 84, 92-93 (Tex. App.—San Antonio 2003, no pet.).

As the Supreme Court has held, the record must also contain evidence that the amount of damages awarded for mental anguish is “fair and reasonable.” See Bentley, 94 S.W.3d at 606 (quoting Saenz, 925 S.W.2d at 614). Here, the jury awarded $100,000 for past mental anguish on Basham’s slander claim. Texas courts have concluded that comparable awards were “fair and reasonable” based on testimony like Basham’s. See, e.g., Houston Livestock Show & Rodeo, Inc. v. Hamrick, 125 S.W.3d 555, 580-81 (Tex. App.—Austin 2003, no pet.); Ysleta Indep. Sch. Dist. v. Monarrez, 170 S.W.3d 122, 128-29 (Tex. App.—El Paso 2002), rev’d on other grounds, 177 S.W.3d 915 (Tex. 2005) (per curiam). We likewise hold that the record contains some evidence and factually sufficient evidence to support the jury’s determination that $100,000 is “fair and reasonable compensation” for the past mental anguish Basham suffered because of the defamatory statements.

Basham testified that her embarrassment, anxiety and other symptoms continued during the four years leading up to the time of trial and that there has been no improvement. This constitutes some evidence and factually sufficient evidence to support an award of damages for future mental anguish. See Fifth Club, Inc. v. Ramirez, 49 Tex. Sup. Ct. J. 863, 869, 2006 Tex. LEXIS 638, at *23-25 (Tex. June 30, 2006).

The jury awarded $25,000 for future mental anguish on Basham’s defamation claim. We hold that the record contains some evidence and factually sufficient evidence to support the jury’s determination that $25,000 is “fair and reasonable compensation” for the future mental anguish Basham will suffer because of the defamatory statements. See Haggar Clothing Co. v. Hernandez, 164 S.W.3d 407, 423 (Tex. App.—Corpus Christi 2003), rev’d on other grounds, 164 S.W.3d 386 (Tex. 2005) (per curiam); see also Fifth Club, 49 Tex. Sup. Ct. J. at 869, 2006 Tex. LEXIS 638, at *23-25.

With regard to Basham’s claims for theft and invasion of privacy, she specifically testified that the break-ins have directly affected her dating life and that she is “consumed” by these intrusions and they are constantly on her mind.
Although Basham did not testify that she was afraid to live in her home after the break-ins or that the loss of her family photos shocked or devastated her, the jury could consider her testimony about the emotional impact of the break-ins together with her testimony about the emotional impact of the defamatory statements and conclude that the break-ins caused additional injury.

Accordingly, we hold that the record contains some evidence and factually sufficient evidence to support the jury’s award of mental anguish damages for Basham’s theft and invasion of privacy claims. See Royal Maccabees Life Ins., 146 S.W.3d at 350-51; Ramirez, 144 S.W.3d at 591; Cram Roofing Co., 131 S.W.3d at 92-93.

The jury awarded Basham $35,000 in mental anguish damages for theft and $50,000 for invasion of privacy. We likewise hold that the record contains some evidence and factually sufficient evidence to support the jury’s determination that these amounts are “fair and reasonable compensation” for the mental anguish Basham suffered because of the theft and invasion of privacy. See Houston Livestock Show & Rodeo, 125 S.W.3d at 580-81; Monarrez, 170 S.W.3d at 128-29.

Thus, we overrule Appellants’ third issue.

Texas Theft Liability Act

Appellants contend in their fifth issue that there is no evidence to support the amount of damages awarded under the Texas Theft Liability Act. More specifically, they contend in this issue that Basham offered no evidence of the value of the property stolen, cannot recover mental anguish damages under the Act, and is entitled to no more than nominal damages. Appellants contend in their sixth issue that the court erred by awarding additional damages under the Theft Liability Act or, alternatively, by awarding additional damages in excess of $1,000.

A person who commits theft is civilly liable under the Act “for the damages resulting from the theft.” Tex. Civ. Prac. & Rem. Code Ann.
§ 134.003(a) (Vernon 2005). A “person who has sustained damages resulting from theft may recover . . . the amount of actual damages found by the trier of fact and, in addition to actual damages, damages awarded by the trier of fact in a sum not to exceed $1,000.” Id. § 134.005(a)(1) (Vernon 2005).

The Act provides for the recovery of “actual damages.” Id. § 134.005(a)(1). Because the Act does not further define “actual damages,” we hold that “actual damages” under the Act are those recoverable at common law. Cf. Arthur Andersen & Co. v. Perry Equip. Corp., 945 S.W.2d 812, 816 (Tex. 1997) (“actual damages” recoverable under DTPA “are those damages recoverable under common law”); Matheus v. Sasser, 164 S.W.3d 453, 458 (Tex. App.—Fort Worth 2005, no pet.) (same); Houston Livestock Show & Rodeo, 125 S.W.3d at 582 (same).[2]

As previously indicated, however, the only element of damages for theft authorized by the charge is mental anguish. Although the parties spend much of their argument disputing the type and quantum of evidence necessary to establish the value of stolen property for purposes of a claim under the Act, this was not an element of damages included in the charge.

When we measure the sufficiency of the evidence, we do so under the law as submitted in the charge if the complaining party did not object to the charge. See Osterberg v. Peca, 12 S.W.3d 31, 55 (Tex. 2000); Ancira Enters., Inc. v. Fischer, 178 S.W.3d 82, 93 (Tex. App.—Austin 2005, no pet.); O’Connor v. Miller, 127 S.W.3d 249, 254 (Tex. App.—Waco 2003, pet. denied). Neither party objected at trial that the damages question submitted on Basham’s theft claim omitted an element of damages for the value of the property stolen.[3] Thus, the jury’s award of “actual damages” on Basham’s theft claim can be upheld only if damages for mental anguish can be recovered under the Act and if there is some evidence to support an award for mental anguish damages.
We have already determined that there is some evidence and factually sufficient evidence to support the jury’s award of mental anguish damages for Basham’s theft claim. Therefore, we need decide only whether such damages can be recovered under the Act.

Mental anguish damages are not recoverable as a matter of law for the negligent destruction of property. City of Tyler v. Likes, 962 S.W.2d 489, 497 (Tex. 1997); Petco Animal Supplies, Inc. v. Schuster, 144 S.W.3d 554, 562 (Tex. App.—Austin 2004, no pet.); Seminole Pipeline Co. v. Broad Leaf Partners, Inc., 979 S.W.2d 730, 754 (Tex. App.—Houston [14th Dist.] 1998, no pet.). Rather, “[m]ental anguish damages are recoverable for some common law torts that generally involve intentional or malicious conduct such as libel.” Likes, 962 S.W.2d at 495. Thus, the Supreme Court has upheld an award of mental anguish damages under section 4.402 of the UCC for wrongful dishonor where the jury found that the bank acted with malice. See Farmers & Merchants State Bank of Krum v. Ferguson, 617 S.W.2d 918, 921 (Tex. 1981); see also Luna v. N. Star Dodge Sales, Inc., 667 S.W.2d 115, 117 (Tex. 1984) (mental anguish damages under DTPA upheld where jury found defendant acted knowingly).

Consistent with these decisions, the Fourteenth Court of Appeals has concluded:

where a claim of mental anguish is based solely upon property damage resulting from gross negligence, recovery is contingent upon evidence of some ill-will, animus, or design to harm the plaintiff personally. We believe this rationale is more consistent with the general principle that emotional distress is not usually recoverable as an element of property damages unless an improper motive is involved.

Seminole Pipeline, 979 S.W.2d at 757.

Here, the jury found that Beaumont and Bellhouse acted with malice when they committed (or conspired to commit) the theft. Appellants do not challenge this malice finding.

Therefore, because the jury found that Appellants acted with malice, we hold that Basham could recover mental anguish damages under the Theft Liability Act. See Likes, 962 S.W.2d at 495; Luna, 667 S.W.2d at 117; Farmers & Merchants State Bank, 617 S.W.2d at 921; Petco Animal Supplies, 144 S.W.3d at 562; Seminole Pipeline, 979 S.W.2d at 757. Accordingly, we overrule Appellants’ fifth issue.

Having determined that the evidence supports the jury’s award of mental anguish damages under the Act, we now address Appellants’ sixth issue, in which Appellants contend that Basham can recover only $1,000 as additional damages under the express terms of the Act.[4]

Section 134.005(a)(1) provides that “a person who has sustained damages resulting from theft may recover” actual damages and additional damages “in a sum not to exceed $1,000.” Tex. Civ. Prac. & Rem. Code Ann. § 134.005(a)(1).

We agree with Appellants that, under the plain language of section 134.005(a)(1), a prevailing plaintiff may recover no more than $1,000 in additional damages under the Act. Therefore, we sustain Appellants’ sixth issue in part.

Loss of Reputation

Appellants contend in their fourth issue that there is no evidence to support the jury’s award of damages for loss of reputation.

“Our law presumes that statements that are defamatory per se injure the victim’s reputation and entitle him to recover general damages, including damages for loss of reputation.” Bentley, 94 S.W.3d at 604.

Here, Appellants do not contest that the statements at issue were defamatory per se. Basham testified that the defamatory statements jeopardized the reputation she had established during her fifteen-year career in the banking industry. We hold that Basham’s testimony about the reactions she observed and the “whispering” she heard in the community after these statements were made supports the jury’s award of $50,000 in damages for loss of reputation. See id. at 604-07 (observing that “the evidence support[ed]” an award of $150,000 in damages for loss of reputation).

Accordingly, we overrule Appellants’ fourth issue.

Attorney’s Fees

Appellants contend in their ninth issue that there is no evidence or factually insufficient evidence to support the attorney’s fee award.

Section 134.005(b) of the Act provides that a prevailing plaintiff may recover “reasonable and necessary attorney’s fees.” Tex. Civ. Prac. & Rem. Code Ann. § 134.005(b) (Vernon 2005).

Here, Basham’s counsel testified that Basham and his firm have a contingent fee contract whereby counsel would receive one-third of any recovery.[5] Counsel testified that he charges $200 per hour and that his billing records indicated that approximately $190,000 in attorney’s fees had been incurred at the time of trial. He stated that these fees were reasonable and necessary and consistent with the customary rate in the Cleburne area. Counsel concluded by estimating that about two-fifths of these attorney’s fees were attributable to Basham’s theft and sexual harassment claims. Appellants made no objection to counsel’s testimony on this issue.

Appellants’ counsel testified to his attorney’s fees as well. Appellants’ counsel testified that he believed $240 per hour to be a reasonable attorney’s fee for this type of case. Appellants’ counsel testified that approximately $113,000 in attorney’s fees had been incurred on Appellants’ behalf at the time of trial.

Appellants now contend that the testimony of Basham’s counsel is not competent evidence to support the attorney’s fee award because the testimony is wholly conclusory. Although an objection must be made to challenge the reliability of an expert’s testimony, no trial objection is required “[w]hen the testimony is challenged as conclusory or speculative and therefore non-probative on its face.” Coastal Transp. Co. v. Crown Cent.
Petroleum Corp., 136 S.W.3d 227, 233 (Tex. 2004). Expert testimony is considered “conclusory or speculative” when it has no factual substantiation in the record. See United Servs. Auto. Ass’n v. Croft, 175 S.W.3d 457, 463-64 (Tex. App.—Dallas 2005, no pet.); Gabriel v. Lovewell, 164 S.W.3d 835, 846 (Tex. App.—Texarkana 2005, no pet.).

Here, Basham’s counsel’s testimony is supported by his billing records and his stated familiarity with attorney’s fees charged in the area.[6] Thus, we reject Appellants’ contention that counsel’s testimony is conclusory on its face. See Hachar v. Hachar, 153 S.W.3d 138, 143 (Tex. App.—San Antonio 2004, no pet.); Marquez v. Providence Mem’l Hosp., 57 S.W.3d 585, 596 (Tex. App.—El Paso 2001, pet. denied).

Counsel testified that $200 was a reasonable hourly rate for the area and that $190,000 was a reasonable and necessary attorney’s fee for his representation. Counsel offered his billing records to support his testimony. Counsel also segregated his attorney’s fees to the point that approximately two-fifths were attributable to Basham’s sexual harassment and theft claims, the two claims then pending for which the jury could award attorney’s fees.

The court subsequently granted Basham’s oral motion to dismiss her sexual harassment claim. Thus, the jury awarded attorney’s fees of approximately one-fifth the amount counsel testified to.

Because the jury awarded approximately one-half of the attorney’s fees requested and because Appellants’ counsel testified that an hourly rate twenty percent higher than Basham’s counsel charged was reasonable, we hold that there is some evidence and factually sufficient evidence to support the attorney’s fee award of $40,000. Cf. Cantu v. Moore, 90 S.W.3d 821, 825 (Tex. App.—San Antonio 2002, pet.
denied) (upholding award of trial attorney’s fees where jury “made its own calculation” regarding amount of attorney’s fees incurred during trial and added those to attorney’s fees incurred before trial).

Accordingly, we overrule Appellants’ ninth issue.

Special Exceptions

Appellants contend in their first issue that the court abused its discretion by denying their special exceptions to Basham’s pleadings on her claim of defamation. Specifically, they contend that Basham should have pleaded the alleged defamatory statements with more particularity.

Appellants filed special exceptions in response to Basham’s second amended petition. The court overruled Appellants’ special exceptions. Basham then filed a third amended petition which made essentially the same allegations which were the object of Appellants’ special exceptions. However, Appellants did not renew or reurge their special exceptions.

Basham contends that, because they did not, they have not preserved this issue for appellate review. Our research discloses at least two cases which support Basham’s position. See Alpert v. Crain, Caton & James, P.C., 178 S.W.3d 398, 404 n.3 (Tex. App.—Houston [1st Dist.] 2005, pet. denied); State ex rel. White v. Bradley, 956 S.W.2d 725, 744-45 (Tex. App.—Fort Worth 1997), rev’d on other grounds, 990 S.W.2d 245 (Tex. 1999). The approach taken in these cases is similar to the well-established rule that any error in the admission of evidence is deemed harmless if the same or similar evidence is subsequently admitted without objection. See Volkswagen of Am., Inc. v. Ramirez, 159 S.W.3d 897, 907 (Tex. 2004).

Thus, we hold that any error in the court’s denial of Appellants’ special exceptions was rendered harmless by Appellants’ failure to reurge their special exceptions in response to Basham’s third amended petition. See Alpert, 178 S.W.3d at 404 n.3; Bradley, 956 S.W.2d at 744-45.

Accordingly, we overrule Appellants’ first issue.

Jury Charge

Appellants contend in their second issue that the court abused its discretion by submitting a single question in the charge for multiple slander allegations.

Rule of Civil Procedure 277 provides in pertinent part, “In all jury cases the court shall, whenever feasible, submit the cause upon broad-form questions.” Tex. R. Civ. P. 277. Appellants contend in essence that broad-form submission is infeasible when a plaintiff alleges two or more defamatory statements. They cite Crown Life Insurance Co. and Harris County v. Smith for the proposition that the charge as submitted was erroneous because it cannot be determined whether the jury based its verdict on an invalid theory of liability. Smith, 96 S.W.3d 230, 233-34 (Tex. 2002); Crown Life Ins., 22 S.W.3d at 388-89. We disagree.

The cited cases are distinguishable from this case because in the cited cases, the appellants actually identified the “invalid theory of liability.” In Crown Life Insurance, the invalid theories were four of the five DTPA claims submitted in a single question which were invalid because the plaintiff did not have standing as a consumer to bring them. See 22 S.W.3d at 388. In Smith, the invalid theories were damages claims for loss of earning capacity as to one plaintiff and physical impairment as to the other, which claims were invalid because there was no evidence to support them. See 96 S.W.3d at 232.

In both cases, after the charge errors were identified, the issue became whether these errors could be found harmless because both charges contained valid theories of liability which could conceivably support the jury’s respective verdicts.

The Supreme Court concluded that, when a single question in the charge combines valid and invalid theories of liability, the error in submitting the invalid theory “is harmful when it cannot be determined whether the improperly submitted theories formed the sole basis for the jury’s finding.” Crown Life Ins., 22 S.W.3d at 389; accord Smith, 96 S.W.3d at 233-34.

Here, the defamation question essentially identifies three different defamatory statements which Basham alleges to have been made. Appellants do not identify any of these statements as being an invalid basis for recovery either because it is not defamatory as a matter of law or because there is no evidence in the record to show that they made the statement. Rather, Appellants argue that the combination of these statements in a single question presents an issue of “potential error.” However, the Supreme Court clearly stated in Smith that the harmless error rule announced in Crown Life Insurance and Smith applies only to “actual errors” and not to “imagined or potential ones.” See 96 S.W.3d at 235.

Accordingly, we overrule Appellants’ second issue.

Prejudgment Interest

Appellants contend in their tenth issue that the court erred by the manner in which it calculated prejudgment interest because, as argued in other issues, the amount of compensatory damages awarded is excessive. However, we have overruled each of Appellants’ issues challenging the amount of compensatory damages awarded. Accordingly, we overrule Appellants’ tenth issue.

Cumulative Harm

Appellants contend in their eighth issue that the court committed several errors which resulted in “cumulative harm” because the court allowed “the case to be tried generally on the issue of whether or not Linda Beaumont was a bad person.”

Appellants argue that the court erred by: (1) denying their special exceptions; (2) denying their summary judgment motion(s); and (3) admitting irrelevant evidence of “bad acts supposedly committed by Beaumont.” We have already determined that Appellants have not preserved their challenge to the denial of their special exceptions. They acknowledge that the erroneous denial of a summary judgment motion “is generally not itself subject to review for reversible error.” Accordingly, we construe their eighth issue as a contention that the repeated admission of evidence of bad acts caused cumulative harm.

Basham responds that evidence of other bad acts was relevant and admissible under Rule of Evidence 404(b) to show malice and to show that Beaumont engaged in a pattern of discriminatory conduct in the workplace.
Appellants complain that the court abused its discretion by admitting the following evidence:

(1) that company policy strictly prohibits the consumption or use of alcoholic beverages or illegal substances while on company property, that an employee of the Ranch was not disciplined for smoking marihuana on the premises, and that Beaumont instructed the staff “not to bother” her with these allegations;

(2) that company policy prohibits sexual harassment, that several employees had reported being subjected to some form of sexual harassment (which reports Beaumont largely denied having been made), and that Beaumont had not taken disciplinary action in most cases;

(3) that Beaumont had instructed Basham and another employee to “sleep with the chef” so he would “keep his hands off the girls in the kitchen”;

(4) that Beaumont had instructed employees to spread rumors about the local mayor because Bellhouse had been given a ticket for violating a municipal ordinance;

(5) that Beaumont had instructed Williams to lie at a Texas Workforce Commission hearing on a sexual harassment claim so the complainant would not prevail and that in Beaumont’s opinion the complainant “didn’t deserve a dime”;

(6) that “many” of an employee’s paychecks from the Ranch were returned due to insufficient funds;

(7) that Beaumont, in one employee’s opinion, is “evil personified” and “one of the cruelest, most vile people I’ve ever met”;

(8) that Beaumont would encourage employees to perjure themselves (as she did with Williams) to retaliate against other employees who registered complaints;

(9) that the Ranch had failed to pay a contractor’s bill in full for some projects;

(10) that Beaumont had instructed this same contractor to look in a terminated employee’s house on the Ranch property (and kick in a locked bedroom door) to find a notebook with information about a recent event at the Ranch; and

(11) that this terminated employee was owed several thousand dollars by the Ranch for company expenses he had charged to his own credit card.

As Appellants summarize this testimony, it allowed Basham “to try Beaumont generally as a bad person who allegedly slandered the mayor, didn’t put a stop to crude behavior from her employees, and failed to discipline drug abusers on her property.”

Evidence of extraneous conduct is admissible under Rule 404(b) to show malice in a defamation suit. See Porous Media Corp. v. Pall Corp., 173 F.3d 1109, 1117-18 (8th Cir. 1999). An employer’s pattern of behavior can be probative of whether the employer engaged in unlawful employment practices. See Haggar Clothing Co. v. Hernandez, 164 S.W.3d 386, 389 (Tex. 2005) (per curiam); Passons v. Univ. of Tex., 969 S.W.2d 560, 564 (Tex. App.—Austin 1998, no pet.); Durbin v. Dal-Briar Corp., 871 S.W.2d 263, 268-69 (Tex. App.—El Paso 1994, writ denied).

Therefore, we cannot say that the court abused its discretion by admitting the evidence complained of. Accordingly, we overrule Appellants’ eighth issue.

Conclusion

We reverse that portion of the judgment awarding additional damages of $10,000 under the Theft Liability Act and render judgment that Basham recover $1,000 in additional damages under the Act. The remainder of the judgment is affirmed.

FELIPE REYNA
Justice

Before Chief Justice Gray, Justice Vance, and Justice Reyna
Affirmed in part, Reversed and Rendered in part
Opinion delivered and filed August 30, 2006
[CV06]

[1] Basham explained that the pain and anxiety she has experienced as a result of the wrongful acts committed against her is different than that she experienced after her divorce because, after the divorce, she “started going out” again.

[2] The DTPA currently provides that a prevailing plaintiff may recover “economic damages.” Tex. Bus.
& Com. Code Ann. § 17.50(b)(1) (Vernon Supp. 2006). Before 1995, the statute provided for the recovery of “actual damages.” See Act of May 29, 1989, 71st Leg., R.S., ch. 380, § 2, 1989 Tex. Gen. Laws 1490, 1491 (amended 1995). Although the statutory language has changed, Texas courts continue to treat the term “economic damages” as synonymous with “actual damages.” See, e.g., Dal-Chrome Co. v. Brenntag Sw., Inc., 183 S.W.3d 133, 143-44 (Tex. App.—Dallas 2006, no pet.); Matheus v. Sasser, 164 S.W.3d 453, 458-59 (Tex. App.—Fort Worth 2005, no pet.); Garza v. Chavarria, 155 S.W.3d 252, 257 n.2 (Tex. App.—El Paso 2004, no pet.).

[3] Instead, Appellants objected that Basham should not be allowed to recover mental anguish damages for her theft claim.

[4] Because we have determined that the jury’s malice finding and the evidence on mental anguish provide a sufficient basis to uphold the award of mental anguish damages, we need not address Appellants’ initial contention in their sixth issue, namely, that Basham cannot recover additional damages under the Act because she failed to prove up actual damages.

[5] According to the terms of the contract, which was admitted in evidence, counsel will receive 40% of any recovery because an appeal has been pursued.

[6] Basham’s counsel testified that his $200 hourly rate was reasonable and “probably lower than customary based on my asking a number of attorneys in this Cleburne area.” Conversely, Appellants’ counsel stated that his hourly rate was $240 “which I believe to be a reasonable fee . . . for this type of case in Johnson County.” Appellants’ counsel, whose office is in Denton County, provided no basis in his testimony to conclude that he was familiar with the customary rate charged by attorneys in Johnson County.
Invited onto the Late Night Show, a hit American program, on 6 December, the politically engaged filmmaker Michael Moore suggested that Donald Trump might ultimately never make it to the White House. "He is not yet president of the United States," Michael Moore declared, noting that six weeks still remained before 20 January, the date on which Donald Trump was due to officially become the new American president. During the interview, the American filmmaker remarked, "Nobody predicted what happened, and yet it happened," before wondering whether, "in the next six weeks, something else could happen, something crazy, something we are not expecting." The director of Fahrenheit 9/11 and Bowling for Columbine even announced a new surprise to come: "[Donald Trump] could decide that he wants to leave [the White House] before he even moves in." Michael Moore's latest statement comes as Hillary Clinton, the defeated candidate in the American presidential election, now leads president-elect Donald Trump by more than 2.7 million votes, according to the latest tally by the Cook Political Report. The Republican candidate nevertheless won 290 electoral votes against 232 for the Democrat, who has conceded defeat. A majority of the 538 electoral votes at stake, 270, was needed to reach the Oval Office. Although opposed to Donald Trump, the documentary filmmaker and activist Michael Moore correctly predicted last August the surprise victory of the Republican candidate, where most experts got it wrong. Read also: Michael Moore: Donald Trump is a "grenade" with the pin pulled, aimed at the system
[Mesoporous silica nanoparticles for two-photon fluorescence]. Mesoporous silica nanoparticles have unique properties, such as a large specific surface area and a narrow pore-size distribution. One prospective use is the creation of new tools for early diagnosis. For these potential biological applications, the safety of these nanoparticles must first be established.
Q: How to avoid a Conan SSL user authentication error with Jenkins Artifactory plugin?

My company is new to Conan, Artifactory, and Jenkins, but we set up some test pipeline scripts a few months ago and utilized the Jenkins Artifactory plugin to publish some Conan packages to our Artifactory server. These scripts are now failing with an SSL certification failure. We are using the following packages:

Jenkins v2.121
Jenkins Artifactory Plugin v2.16.1
Artifactory Pro v5.10.3
Conan v1.3.3

Our "package and publish" stage in our pipeline scripts looks similar to this when it comes to Artifactory configuration:

stage('Package and Publish') {
    def artifactory_name = "MyCompanyArtifactory"
    def artifactory_repo = "conan-local"
    def server = Artifactory.server artifactory_name
    def client = Artifactory.newConanClient()
    def serverName = client.remote.add server: server, repo: artifactory_repo
    client.run(command: "export-pkg . ci-user/stable -s os=Linux -s arch=x86_64 -s build_type=Debug")
    client.run(command: "export-pkg . ci-user/stable -s os=Linux -s arch=x86_64 -s build_type=Release")
    String myCmd = "upload MyLib/* --all -r ${serverName} --confirm"
    def bInfo = client.run(command: myCmd)
    //server.publishBuildInfo bInfo
}

This code was working at one time, but I believe it stopped working when our IT department switched Artifactory over to HTTPS access.
Now, Jenkins errors out when attempting to set the Conan user for our repo:

[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Package and Publish)
[Pipeline] getArtifactoryServer
[Pipeline] initConanClient
[shared-mylib] $ sh -c 'conan config set log.trace_file=\"/home/builduser/jenkins/workspace/shared-mylib@tmp/conan.tmp261537390058591873/conan_log.log\" '
[Pipeline] conanAddRemote
[shared-mylib] $ sh -c "conan remote add b519966f-f612-4094-b3ea-453a017cf793 https://artifactory.mycompany.com/artifactory/api/conan/conan-local "
WARN: Remotes registry file missing, creating default one in /home/builduser/jenkins/workspace/shared-rtplib@tmp/conan.tmp261537390058591873/.conan/registry.txt
[Pipeline] conanAddUser
Adding conan user 'ci-user', server 'b519966f-f612-4094-b3ea-453a017cf793'
[shared-mylib] $ sh -c ********
ERROR: HTTPSConnectionPool(host='artifactory.mycompany.com', port=443): Max retries exceeded with url: /artifactory/api/conan/conan-local/v1/users/authenticate (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),))

This behavior is not limited to Jenkins access; it is also happening when regular users attempt to access the Artifactory Conan repo, but we can get around it by adding the remote repo with Verify_SSL as False (at the end of the following command):

conan remote add myco-conan-local https://artifactory.mycompany.com/artifactory/api/conan/conan-local False

I believe the Conan documentation indicates we have two options:

Disable the SSL verification via a conan remote command (above)
Append the server crt file to the cacert.pem file in the conan home directory.

Unfortunately I haven't been able to figure out how to accomplish either solution when it comes to the Jenkins pipeline script. So my questions: Is there a way to disable SSL verification with the client.remote.add command (or something similar) in the Jenkins pipeline script?
Is there a way to include the necessary server certificate via the Jenkins pipeline script (so that it gets added to the workspace-specific conan home directory automatically)? Option #1 is probably preferred for a simpler short-term solution, but I'd like to understand how Option #2 is accomplished as well. Thanks for reading.

A: The command:

$ conan remote add <remote-name> <remote-url> False -f

forces the overwrite of the existing <remote-name>, setting verifyHttps=False. Although the plugin DSL does not expose an interface to that argument, it does allow executing arbitrary commands, so you could do something like:

node {
    def server = Artifactory.server "artifactory"
    def client = Artifactory.newConanClient()
    def serverName = client.remote.add server: server, repo: "conan-local"
    stage("Setremotehttp") {
        String command = "remote add ${serverName} http://localhost:8081/artifactory/api/conan/conan-local False -f"
        client.run(command: command)
    }
    stage("Search") {
        String command = "search zlib -r=${serverName}"
        client.run(command: command)
    }
}

The URL of the remote is needed, which is a bit of duplication, but I have tested this and it works, so it can be used as a workaround.
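For option #2, the idea is that Conan 1.x reads its trusted CA bundle from cacert.pem under the Conan home directory, so a shell step run before authentication can append the server's certificate to that bundle. Below is a minimal sketch with stand-in paths and stand-in certificate contents: the `mktemp -d` directory stands in for whatever temporary Conan home the plugin created (visible in the log above), and `server.crt` is assumed to have been exported from your server by other means.

```shell
# Sketch: append a self-managed server certificate to Conan's CA bundle.
# Paths and certificate bodies here are illustrative stand-ins only.
CONAN_USER_HOME="$(mktemp -d)"            # stand-in for the plugin's temp Conan home
mkdir -p "$CONAN_USER_HOME/.conan"

# Stand-in contents; the real bundle ships with Conan, the real cert comes from IT.
printf -- '-----BEGIN CERTIFICATE-----\n(default bundle)\n-----END CERTIFICATE-----\n' \
    > "$CONAN_USER_HOME/.conan/cacert.pem"
printf -- '-----BEGIN CERTIFICATE-----\n(your server cert)\n-----END CERTIFICATE-----\n' \
    > server.crt

# The actual workaround: append the server cert to the bundle Conan will read.
cat server.crt >> "$CONAN_USER_HOME/.conan/cacert.pem"
grep -c 'BEGIN CERTIFICATE' "$CONAN_USER_HOME/.conan/cacert.pem"   # prints 2
```

In a pipeline this could run in an `sh` step between `initConanClient` and the first command that authenticates, pointed at the Conan home directory the plugin actually uses; whether that directory is stable across steps is something to verify in your own logs.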
Modern sewage treatment technologies include physical, chemical and biological treatment methods. The physical and chemical methods generally serve as pretreatment, while the biological method serves as the main treatment process. Currently, the biological treatment method most frequently employed in sewage treatment worldwide is the activated sludge method, a biological sewage treatment process with activated sludge as its main agent. In the activated sludge method, air is continuously introduced into wastewater; after a certain period, sludge-like flocs form through the reproduction of aerobic microorganisms. These flocs are inhabited by microbiota, with zoogloea as the main component, which have a strong capability of adsorbing and oxidizing organics. The biological coagulation, adsorption and oxidation of the activated sludge are used to decompose and eliminate organic pollutants in the sewage; the sludge is then separated from the water, most of it flows back to an aeration tank, and the surplus is drained out of the activated sludge system. A sewage treatment plant employing the activated sludge process generally adopts the following flow, taking the A2O process as an example: the sewage first passes through a grid and then, after being lifted by a water pump, enters a grit chamber, a primary sedimentation tank, an anaerobic tank, an anoxic tank, an aerobic tank and a secondary sedimentation tank; finally, the treated water, having reached the emission standard, is drained out. In a sewage treatment plant, the various treatment units are generally arranged horizontally.
To enable the sewage to pass smoothly through the various treatment units from front to back, the free water surface of each later treatment unit must be lower than that of the previous unit, and the height difference between these two water surfaces must cover the head loss of the sewage passing through the connecting pipeline. This conventional sewage treatment process with a horizontal layout has the following drawbacks:

(1) a large ground footprint: since all sewage treatment structures are laid flat on the ground, the sewage treatment plant occupies a large area;
(2) a large head loss: since pipelines are needed to connect the various sewage treatment structures, a certain head loss arises between each of them;
(3) a low oxygenation efficiency: since the aeration tank has a limited depth, the water pressure is low, resulting in low oxygenation efficiency and a large power loss;
(4) more dead volume in the structures: to ensure that no sewage overflows, a certain freeboard is left at the upper end of each structure, resulting in more unusable structural volume;
(5) a large heat loss in winter: since all sewage treatment structures are laid flat on the ground, they form a very large exposed water surface, resulting in a large heat loss under cold winter conditions.
Q: Can I use REG_EXPAND_SZ for the locations of shell folders instead of REG_SZ

I'm working on re-arranging a number of the shell folders in Windows 7 to utilize Dropbox to keep a set of machines in sync. I'd like to create a .reg file which I can use to update the locations of these folders rather than manually changing them from the UI, but I don't want to rely on the path to the home folder being the same each time. So my question is: is it possible to replace the REG_SZ values in HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders with REG_EXPAND_SZ values specifying an offset from %HOME% instead of an exact path?

A: Gah, ignore the rest of this answer. The registry key you have there is useless. It won't change anything with your shell folders. You see, the reason that key exists, and that shell folder locations were ever stored in the registry, is that they initially were stored there. But since there was a documented API to get at them, the registry location was an implementation detail. Explorer might still update those values for you as a convenience, since a great many applications incorrectly rely on that key, but you should never use it anyway. Back to the topic: since this is just a static list reflecting (or not) what Explorer stores elsewhere, changes there won't affect the system in any way. Explorer simply doesn't care about it. Imagine you wrote down where you are on a slip of paper every time you went somewhere. Would you magically pop up in another location when someone else wrote on that paper? As for REG_SZ to REG_EXPAND_SZ: try it, but don't assume that it will magically work. The registry itself doesn't care about REG_SZ versus REG_EXPAND_SZ; that distinction is handled entirely by the application reading the data. And since this value is a REG_SZ, I'm guessing that you can't just replace it with REG_EXPAND_SZ and have it work.
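The last point, that %VAR%-style expansion is performed by whichever program reads the value rather than by the registry itself, can be illustrated outside the registry entirely. In this sketch a plain file stands in for a registry value; `%HOME%` and the Windows-style path are illustrative, and the `sed` call plays the role of a reader that chooses to expand (as a Windows program would by calling something like ExpandEnvironmentStrings on a REG_EXPAND_SZ value):

```shell
# A plain file stands in for a registry value: the store keeps the bytes
# verbatim, and %VAR%-style expansion only happens if the reader does it.
printf '%s\n' '%HOME%\Dropbox\Documents' > value.txt

stored="$(cat value.txt)"     # what the "registry" holds: still literal
expanded="$(printf '%s' "$stored" | sed 's|%HOME%|C:\\Users\\alice|')"

printf '%s\n' "$stored"       # prints %HOME%\Dropbox\Documents
printf '%s\n' "$expanded"     # prints C:\Users\alice\Dropbox\Documents
```

Nothing about the store changed between the two reads; only the reader's decision to expand did, which is why changing a value's type from REG_SZ to REG_EXPAND_SZ helps only if the consuming application actually honors the type.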
Reblogged: Kagan's Darkest Fears Realized

George Will discusses the much-needed recent reversal by the Supreme Court of a 1977 decision that had allowed public sector unions to garnish partial dues from non-union public sector employees. Among other things, Will notes the following revealing passage from Elena Kagan's dissenting opinion:

There is no sugarcoating today's opinion. The majority overthrows a decision entrenched in this nation's law -- and in its economic life -- for over 40 years. As a result, it prevents the American people, acting through their state and local officials, from making important choices about workplace governance. And it does so by weaponizing the First Amendment, in a way that unleashes judges, now and in the future, to intervene in economic and regulatory policy.

Will's commentary on this is amusing and informative, to say the least, and he is quite correct in his conclusion:

There is no sugarcoating today's reality. Public-sector unions are conveyor belts that move a portion of government employees' salaries -- some of the amount paid in union dues -- into political campaigns, almost always Democrats', to elect the people with whom the unions "negotiate" for taxpayers' money. Progressives who are theatrically distraught about there being "too much money in politics" are now theatrically distraught that the court has ended coercing contributions that have flowed to progressive candidates.

I can fault Will only for not going far enough -- not questioning the propriety of taxation or of the government performing some of the functions that so many government employees enable it to perform. But this ruling might make it a bit easier for the rare candidate (so far) who does to be heard, and to be elected. The Supreme Court may have blown it on internet sales taxes, but at least it got this more important decision right.
[Xsb-development] Seg fault caused by xsb_query_string_string

The query answering primitives in the C<-->XSB interface seem to break when the given goals and/or results are large-ish. For instance, suppose that the following straightforward definition of append (on user-defined lists, not on native Prolog lists) has been loaded:

append(nil,X,X).
append(cons(X,L1),L2,cons(X,R)) :- append(L1,L2,R).

and let goal be the string "append(x,y,Result)" where x and y are ground cons-nil lists of integer numerals from 1 to 100K and from 100K to 200K respectively. I.e., x1 and x2 are cons(100000,cons(99999,... cons(1,nil) ...)) and cons(200000,... cons(100000,nil)...) respectively. So, goal has ~200K nodes and Result should have ~200K nodes as well. Then the following query code chokes (causes a segmentation fault):

XSB_StrDefine(return_string);
xsb_query_string_string(CTXTc goal,&return_string,"|");

That shouldn't happen, both b/c there's plenty of memory on the machine, and - more importantly - b/c the specification of xsb_query_string_string claims that it will return an appropriate error code (via XSB_ERROR) if something catastrophic happens. The exact same code works fine when the terms are smaller. The "fixed string" interface, using xsb_query_string_string_b, does not fare better:

rc = xsb_query_string_string_b(CTXTc goal,return_string,ret_size,&ans_len,"|");

This seg-faults when the goal and the result are large-ish, *even* if return_string has more than enough space to hold the result and ret_size has the right value. (E.g., try this with goal = append(x1,x2,Result) where x1 has 100K integer elements and x2 has 200K integer elements.) I'm using the single-threaded engine. Keep in mind that by today's standards terms with 200K-500K nodes are fairly small. E.g., SAT/SMT solvers routinely have to deal with terms/formulas with millions of nodes, and one might want to use XSB to manipulate such objects (I would!).
Any suggestions on how to get around this (or any fixes) will be appreciated. Thanks, K. > > Hi Terry, Thanks for the reply. >You're absolutely right that sharing tables can >(and most probably would) lead to problems with >concurrently executing threads. >>In principle, it shouldn't. In local evaluation, there is a >>mechanism to share concurrently evaluated shared tables which >>I think is described in the manual, and is described in gory >>detail in Marques and Swift, 2008 ICLP. It shouldn't, as long as there are no insertions while concurrent queries are being answered. But if there are such insertions (as is almost always the case in question-answering systems), then these will not occur with abolished tables but rather with whatever tables happen to be in effect at insertion time, which will in turn depend on the state of the various active queries at the time. This will most likely invalidate the insertions and lead to wrong results. It seems to me that what one really needs is the ability to spawn a query at time t with a *fresh* (empty) set of tables and with whatever data happens to be in the database at t. I don't know if, with private predicates/tables, there is a way to forcibly copy the contents of one thread's predicates/tables into another. If so, then this could be readily implemented in XSB as things stand. But if not, then I don't see how XSB's tabling can be used to implement a system that does concurrent updates/queries. >>However if you are using MT XSB, I strongly recommend first >>testing out the programming idioms you use from the command-line >>shell, then putting things into the C-XSB interface. XSB's >>MT C-Prolog interface is quite ambitious, but as a result it >>may have some undiagnosed bugs. Just as importantly, XSB-only >>code will be far, far easier to debug. 
I'm sure that's the case, but I would think that the point of having a C<-->XSB interface is to facilitate things that, for some reason or other, cannot be easily done exclusively in XSB. If I could program the whole thing in XSB by itself, I could, but I can't. Moreover, it may well be that some bugs are peculiar to the C<-->XSB interface and are not reproducible when the code is expressed purely in XSB. What happens to them? >So I tried declaring >tables to be private, *but* it seems you cannot have >shared dynamic predicates with private tables. The >predicates themselves (the facts in the database) have >to be "dynamic as shared", because otherwise a new thread >spawned to answer a query would not have any usable facts >to work with. But again, the combination "table p/2 as private" >with "dynamic p/2 as shared" does not seem to be allowed by XSB >(perhaps someone can set me straight if I'm missing something). >>I haven't looked at your code, but in principle you can do this. >>See the attached file, though I'm sure that your program is much >>more complex than the attached file. I didn't see an attached file - could you please resend? I'd be very interested to see how this could be done. >At any rate, we're still left with the more narrow question >of why in the world the query threads in this particular >example are not exiting when they are explicitly killed. >>XSB threads should exit when they are killed, although there are cases ?>>where this doesn't happen. For instance if the thread is waiting on I/O, >>or waiting on a mutex, etc. the waiting thread needs to be signaled, to >>wake up, quit doing whatever it is >>doing, and exit. XSB has a lot of OS interfaces, and I haven't yet >>handled this in every single case. In addition, when a thread is killed >>it must clean up after itself, and free what ever mutexes, db_cursors, >>etc. that it has (but may not be currently working >> on). 
While XSB can and does do this for some resources, it doesn't >>always have full knowledge of all of the resources a thread has taken. >>So getting thread cancellation to work in any system is awkward, and >>requires a lot from both the system and the user. >>In pretty much any MT system, its better to have the thread itself exit, >>and save thread cancellation for special cases. Well, these are all special cases unfortunately - these threads *never* exit by themselves because for some reason which I don't quite understand, they go into a read-eval loop *after* they are finished evaluating the goal that they were intended to answer. As far as I can see there is no mechanism in the C<-->XSB interface to create a thread to do a specific job and then quit. The only way to do that would be to execute embedded XSB code using thread_create, but then keeping track of the thread id and getting the thread to communicate properly becomes very difficult because, again, we are not in XSB proper but in the embedded C/XSB world. So xsb_ccall_thread_create seems to be the only viable option, but again, these threads don't quit by themselves, so killing them is the only option (and an absolute must, because it seems that when multiple threads are active, tables cannot be abolished, resources cannot be reclaimed, etc.). >So if you can show me how your program works from the command-line >interface, I'll help debug at that level. Once we are sure the program is >doing what you want at that level, we can see how to add the C-XSB >interface. Like I said earlier, this seems to assume that the bug will be reproducible in pure XSB, and I'm not at all sure about that. The C code sample is very small so I hope someone will be able to take a look at it and reach a (tentative, at least) verdict. In the meanwhile I'll try to take the client/server I've written in C and re-express it in pure XSB using the samples given in the /examples/sockets directory of the distribution. 
Is there anyone in particular in this list to whom I should direct questions pertaining to that client/server code? Many thanks, Konstantine --- On Sat, 8/18/12, David Warren <warren@...> wrote: > From: David Warren <warren@...> > Subject: RE: Bug in multi-threaded XSB? > To: "K. A." <k_a_7245@...>, "Xsb-development@..." <Xsb-development@...> > Cc: "Terrance Swift" <tswift@...> > Date: Saturday, August 18, 2012, 12:17 PM > I strongly suspect it is because you > are sharing the tables and that other thread is not exiting > (or at least XSB doesn't know it has exited.) > I would suggest that you don't use shared tables but private > tables. If they are not causing this problem now, they > certainly will if you run a multithreaded query service > where you have a number of queries and some can update the > underlying data. > -David > > -----Original Message----- > From: K. A. [mailto:k_a_7245@...] > > Sent: Friday, August 17, 2012 5:24 PM > To: Xsb-development@... > Cc: David Warren; Terrance Swift > Subject: Bug in multi-threaded XSB? > > Could someone please explain to me why the following C code > fails to find any answers to the last query in the main > function - the parentOf(X,Y,L) query? > > Note that every insertion does an abolish_all_tables first, > and that right before each insertion or query in main, there > should be (as far as I can see) only one XSB thread active - > the main one. > > I assume this is a bug in multi-threaded XSB unless someone > can provide an alternative explanation. 
> > If you put the C code below in a file test.c, you can make > the program as follows: > > gcc -c -I/home/.../XSB/emu > -I/home/.../XSB/config/x86_64-unknown-linux-gnu-mt -O3 > -fno-strict-aliasing -Wall -pipe -D_GNU_SOURCE test.c > > gcc -o test.out -lm -ldl -Wl -export-dynamic -lpthread > /home/.../XSB/config/x86_64-unknown-linux-gnu-mt/saved.o/xsb.o > test.o > > #include <stdio.h> > #include <unistd.h> > #include <stdlib.h> > #include <string.h> > #include <sys/types.h> > #include <pthread.h> > #include <ctype.h> > #include "cinterf.h" > #include "varstring_xsb.h" > #include "context.h" > > void doInsert(char* insertion_command,th_context* th){ int > res = xsb_command_string(th, "abolish_all_tables."); res = > xsb_command_string(th, insertion_command);} > > void doCommand(char* cmd,th_context* th){ int res = > xsb_command_string(th, cmd);} > > struct QueryArgs { > char* query; > th_context* th;}; > > void* doQuery(void* args) { > struct QueryArgs * pt = (struct QueryArgs *) args; > char* query = (*pt).query; > th_context* th = (*pt).th; > XSB_StrDefine(retstr); > th_context* new_query_thread; > xsb_ccall_thread_create(th,&new_query_thread); > int rc = > xsb_query_string_string(new_query_thread,query,&retstr,"|"); > int answer_count = 0; > while ((rc == XSB_SUCCESS) && (++answer_count)) > { > printf("\nAnswer: %s\n",retstr.string); > rc = xsb_next_string(new_query_thread, > &retstr,"|"); > } > printf("\n%d answers for this query: > %s\n",answer_count,query); > xsb_kill_thread(new_query_thread); > return NULL;} > > void answerQuery(char* query,th_context* th){ > void* exit_status; > pthread_t new_thread; > struct QueryArgs args = {.query = query, .th = th}; > pthread_create(&new_thread,NULL,doQuery,(void*) > &args); > pthread_join(new_thread,&exit_status); > } > > int main() { > char init_string[MAXPATHLEN]; > char* xsbHome = getenv("XSB_HOME"); > strcpy(init_string, xsbHome); > xsb_init_string(init_string); > th_context *main_th = xsb_get_main_thread(); > > 
doCommand("consult('prelude.P').",xsb_get_main_thread()); > doCommand("load_dyn('main.P').",xsb_get_main_thread()); > > doInsert("assert(auntOf(woman9,foo,[])).",xsb_get_main_thread()); > answerQuery("familyOf(X,Y,L).",xsb_get_main_thread()); > > doInsert("assertAll([parentOf(man2,man3,[1/555]),parentOf(man1,man2,[7/982,1/34])]).",xsb_get_main_thread()); > answerQuery("parentOf(X,Y,L).",xsb_get_main_thread()); > xsb_close(xsb_get_main_thread()); > return 0;} > > The contents of the other 2 files are as follows: > > /************* Contents of prelude.P : ***************/ > assertAll([]). > assertAll([H|T]) :- asserta(H),assertAll(T). > > /************* Contents of main.P : ***************/ > :- import append/3 from basics. > :- table motherOf/3 as shared. > :- table fatherOf/3 as shared. > :- table parentOf/3 as shared. > :- table ancestorOf/3 as shared. > :- table auntOf/3 as shared. > :- table familyOf/3 as shared. > :- dynamic motherOf/3 as shared. > :- dynamic fatherOf/3 as shared. > :- dynamic parentOf/3 as shared. > :- dynamic ancestorOf/3 as shared. > :- dynamic auntOf/3 as shared. > :- dynamic familyOf/3 as shared. > > familyOf(X,Y,L) :- ancestorOf(X,Y,L). > ancestorOf(X,Y,L) :- parentOf(X,Y,L). > parentOf(X,Y,L) :- motherOf(X,Y,L). > parentOf(X,Y,L) :- fatherOf(X,Y,L). > familyOf(X,Y,L) :- auntOf(X,Y,L). > familyOf(X,Y,L) :- familyOf(Y,X,L). > ancestorOf(X,Y,L) :- ancestorOf(X,Z,L1), ancestorOf(Z,Y,L2), > append(L1,L2,L). > >
Hi Terry, Thanks for the suggestion. One question: Can a private thread table be explicitly abolished on that thread while other threads are running? If so, then your suggestion could work with the unkillable threads created by xsb_ccall_thread_create - I can maintain a queue of 20 or so such threads and reuse them to answer queries, since I can't kill them. 
(This would obviously be unworkable in general, but in this case it's unlikely that I'll have more than 20 concurrent queries at any given time, so it might be viable.) But this presupposes that I can abolish their private tables while other threads are running. If not, then I'd have to continually spawn new threads to answer new queries, and if these threads never quit then I'll be out of memory before too long. Thanks, Konstantine --- On Mon, 8/20/12, Terrance Swift <tswift@...> wrote: > From: Terrance Swift <tswift@...> > Subject: RE: Bug in multi-threaded XSB? > To: "K. A." <k_a_7245@...>, "Xsb-development@..." <Xsb-development@...>, "David Warren" <warren@...> > Date: Monday, August 20, 2012, 9:34 AM > Here is the simple attached file. > > A couple of people have written large applications with > MT-XSB but they probably did not use all the features you > are trying to use, and MT-XSB is certainly less stable than > single-threaded XSB. I wish I had sufficient time to > help every user, but looking very briefly at your code, its > impossible to understand. > > 1) Why you can't remove the C/XSB interface, at least for > debugging > 2) Why you can't use thread exiting, at least for debugging > (actually its not an issue from the prolog level, threads > exit once they have been joined.) > 3) How many of the problems in XSB and how many are > multi-programming errors of yours. No offense, > everybody makes MT-programming errors. > > I hope you'll take seriously the suggestions I gave you to > start breaking down the program to help isolate the > bugs. I'll try to fix the bugs I can once they become > clear to me. > > Terry > ________________________________________ > From: K. A. [k_a_7245@...] > Sent: Sunday, August 19, 2012 7:06 PM > To: Xsb-development@...; > David Warren; Terrance Swift > Subject: RE: Bug in multi-threaded XSB? > > Hi Terry, > > Thanks for the reply. 
> > >You're absolutely right that sharing tables can > >(and most probably would) lead to problems with > >concurrently executing threads. > > >>In principle, it shouldn't. In local > evaluation, there is a > >>mechanism to share concurrently evaluated shared > tables which > >>I think is described in the manual, and is described > in gory > >>detail in Marques and Swift, 2008 ICLP. > > It shouldn't, as long as there are no insertions while > concurrent > queries are being answered. But if there are such > insertions > (as is almost always the case in question-answering > systems), > then these will not occur with abolished tables but rather > with whatever tables happen to be in effect at insertion > time, > which will in turn depend on the state of the various > active > queries at the time. This will most likely invalidate the > insertions > and lead to wrong results. It seems to me that what one > really > needs is the ability to spawn a query at time t with a > *fresh* (empty) > set of tables and with whatever data happens to be in the > database at t. I don't know if, with private > predicates/tables, > there is a way to forcibly copy the contents of one > thread's > predicates/tables into another. If so, then this could be > readily implemented in XSB as things stand. But if not, > then > I don't see how XSB's tabling can be used to implement a > system that does concurrent updates/queries. > > >>However if you are using MT XSB, I strongly > recommend first > >>testing out the programming idioms you use from the > command-line > >>shell, then putting things into the C-XSB > interface. XSB's > >>MT C-Prolog interface is quite ambitious, but as a > result it > >>may have some undiagnosed bugs. Just as > importantly, XSB-only > >>code will be far, far easier to debug. 
> I'm sure that's the case, but I would think that the point of having a C<-->XSB interface is to facilitate things that, for some reason or other, cannot be easily done exclusively in XSB. If I could program the whole thing in XSB by itself, I would, but I can't. Moreover, it may well be that some bugs are peculiar to the C<-->XSB interface and are not reproducible when the code is expressed purely in XSB. What happens to them?
>
> > So I tried declaring tables to be private, *but* it seems you cannot have shared dynamic predicates with private tables. The predicates themselves (the facts in the database) have to be "dynamic as shared", because otherwise a new thread spawned to answer a query would not have any usable facts to work with. But again, the combination "table p/2 as private" with "dynamic p/2 as shared" does not seem to be allowed by XSB (perhaps someone can set me straight if I'm missing something).
>
> >> I haven't looked at your code, but in principle you can do this. See the attached file, though I'm sure that your program is much more complex than the attached file.
>
> I didn't see an attached file - could you please resend? I'd be very interested to see how this could be done.
>
> > At any rate, we're still left with the more narrow question of why in the world the query threads in this particular example are not exiting when they are explicitly killed.
>
> >> XSB threads should exit when they are killed, although there are cases where this doesn't happen. For instance if the thread is waiting on I/O, or waiting on a mutex, etc., the waiting thread needs to be signaled to wake up, quit doing whatever it is doing, and exit. XSB has a lot of OS interfaces, and I haven't yet handled this in every single case. In addition, when a thread is killed it must clean up after itself, and free whatever mutexes, db_cursors, etc. that it has (but may not be currently working on). While XSB can and does do this for some resources, it doesn't always have full knowledge of all of the resources a thread has taken. So getting thread cancellation to work in any system is awkward, and requires a lot from both the system and the user. In pretty much any MT system, it's better to have the thread itself exit, and save thread cancellation for special cases.
>
> Well, these are all special cases unfortunately - these threads *never* exit by themselves because, for some reason which I don't quite understand, they go into a read-eval loop *after* they are finished evaluating the goal that they were intended to answer. As far as I can see there is no mechanism in the C<-->XSB interface to create a thread to do a specific job and then quit. The only way to do that would be to execute embedded XSB code using thread_create, but then keeping track of the thread id and getting the thread to communicate properly becomes very difficult because, again, we are not in XSB proper but in the embedded C/XSB world. So xsb_ccall_thread_create seems to be the only viable option, but again, these threads don't quit by themselves, so killing them is the only option (and an absolute must, because it seems that when multiple threads are active, tables cannot be abolished, resources cannot be reclaimed, etc.).
>
> >> So if you can show me how your program works from the command-line interface, I'll help debug at that level. Once we are sure the program is doing what you want at that level, we can see how to add the C-XSB interface.
>
> Like I said earlier, this seems to assume that the bug will be reproducible in pure XSB, and I'm not at all sure about that. The C code sample is very small so I hope someone will be able to take a look at it and reach a (tentative, at least) verdict.
> In the meanwhile I'll try to take the client/server I've written in C and re-express it in pure XSB using the samples given in the /examples/sockets directory of the distribution. Is there anyone in particular on this list to whom I should direct questions pertaining to that client/server code?
>
> Many thanks,
>
> Konstantine
>
> --- On Sat, 8/18/12, David Warren <warren@...> wrote:
>
> > > From: David Warren <warren@...>
> > > Subject: RE: Bug in multi-threaded XSB?
> > > To: "K. A." <k_a_7245@...>, "Xsb-development@..." <Xsb-development@...>
> > > Cc: "Terrance Swift" <tswift@...>
> > > Date: Saturday, August 18, 2012, 12:17 PM
> > >
> > > I strongly suspect it is because you are sharing the tables and that the other thread is not exiting (or at least XSB doesn't know it has exited). I would suggest that you don't use shared tables but private tables. If they are not causing this problem now, they certainly will if you run a multithreaded query service where you have a number of queries and some can update the underlying data.
> > > -David
> > >
> > > -----Original Message-----
> > > From: K. A. [mailto:k_a_7245@...]
> > > Sent: Friday, August 17, 2012 5:24 PM
> > > To: Xsb-development@...
> > > Cc: David Warren; Terrance Swift
> > > Subject: Bug in multi-threaded XSB?
> > >
> > > Could someone please explain to me why the following C code fails to find any answers to the last query in the main function - the parentOf(X,Y,L) query?
> > >
> > > Note that every insertion does an abolish_all_tables first, and that right before each insertion or query in main, there should be (as far as I can see) only one XSB thread active - the main one.
> > >
> > > I assume this is a bug in multi-threaded XSB unless someone can provide an alternative explanation.
> > > If you put the C code below in a file test.c, you can make the program as follows:
> > >
> > > gcc -c -I/home/.../XSB/emu -I/home/.../XSB/config/x86_64-unknown-linux-gnu-mt -O3 -fno-strict-aliasing -Wall -pipe -D_GNU_SOURCE test.c
> > >
> > > gcc -o test.out -lm -ldl -Wl -export-dynamic -lpthread /home/.../XSB/config/x86_64-unknown-linux-gnu-mt/saved.o/xsb.o test.o
> > >
> > > #include <stdio.h>
> > > #include <unistd.h>
> > > #include <stdlib.h>
> > > #include <string.h>
> > > #include <sys/types.h>
> > > #include <pthread.h>
> > > #include <ctype.h>
> > > #include "cinterf.h"
> > > #include "varstring_xsb.h"
> > > #include "context.h"
> > >
> > > void doInsert(char* insertion_command, th_context* th) {
> > >     int res = xsb_command_string(th, "abolish_all_tables.");
> > >     res = xsb_command_string(th, insertion_command);
> > > }
> > >
> > > void doCommand(char* cmd, th_context* th) {
> > >     int res = xsb_command_string(th, cmd);
> > > }
> > >
> > > struct QueryArgs {
> > >     char* query;
> > >     th_context* th;
> > > };
> > >
> > > void* doQuery(void* args) {
> > >     struct QueryArgs* pt = (struct QueryArgs*) args;
> > >     char* query = (*pt).query;
> > >     th_context* th = (*pt).th;
> > >     XSB_StrDefine(retstr);
> > >     th_context* new_query_thread;
> > >     xsb_ccall_thread_create(th, &new_query_thread);
> > >     int rc = xsb_query_string_string(new_query_thread, query, &retstr, "|");
> > >     int answer_count = 0;
> > >     while ((rc == XSB_SUCCESS) && (++answer_count)) {
> > >         printf("\nAnswer: %s\n", retstr.string);
> > >         rc = xsb_next_string(new_query_thread, &retstr, "|");
> > >     }
> > >     printf("\n%d answers for this query: %s\n", answer_count, query);
> > >     xsb_kill_thread(new_query_thread);
> > >     return NULL;
> > > }
> > >
> > > void answerQuery(char* query, th_context* th) {
> > >     void* exit_status;
> > >     pthread_t new_thread;
> > >     struct QueryArgs args = {.query = query, .th = th};
> > >     pthread_create(&new_thread, NULL, doQuery, (void*) &args);
> > >     pthread_join(new_thread, &exit_status);
> > > }
> > >
> > > int main() {
> > >     char init_string[MAXPATHLEN];
> > >     char* xsbHome = getenv("XSB_HOME");
> > >     strcpy(init_string, xsbHome);
> > >     xsb_init_string(init_string);
> > >     th_context *main_th = xsb_get_main_thread();
> > >
> > >     doCommand("consult('prelude.P').", xsb_get_main_thread());
> > >     doCommand("load_dyn('main.P').", xsb_get_main_thread());
> > >
> > >     doInsert("assert(auntOf(woman9,foo,[])).", xsb_get_main_thread());
> > >     answerQuery("familyOf(X,Y,L).", xsb_get_main_thread());
> > >
> > >     doInsert("assertAll([parentOf(man2,man3,[1/555]),parentOf(man1,man2,[7/982,1/34])]).", xsb_get_main_thread());
> > >     answerQuery("parentOf(X,Y,L).", xsb_get_main_thread());
> > >     xsb_close(xsb_get_main_thread());
> > >     return 0;
> > > }
> > >
> > > The contents of the other 2 files are as follows:
> > >
> > > /************* Contents of prelude.P : ***************/
> > > assertAll([]).
> > > assertAll([H|T]) :- asserta(H), assertAll(T).
> > >
> > > /************* Contents of main.P : ***************/
> > > :- import append/3 from basics.
> > > :- table motherOf/3 as shared.
> > > :- table fatherOf/3 as shared.
> > > :- table parentOf/3 as shared.
> > > :- table ancestorOf/3 as shared.
> > > :- table auntOf/3 as shared.
> > > :- table familyOf/3 as shared.
> > > :- dynamic motherOf/3 as shared.
> > > :- dynamic fatherOf/3 as shared.
> > > :- dynamic parentOf/3 as shared.
> > > :- dynamic ancestorOf/3 as shared.
> > > :- dynamic auntOf/3 as shared.
> > > :- dynamic familyOf/3 as shared.
> > >
> > > familyOf(X,Y,L) :- ancestorOf(X,Y,L).
> > > ancestorOf(X,Y,L) :- parentOf(X,Y,L).
> > > parentOf(X,Y,L) :- motherOf(X,Y,L).
> > > parentOf(X,Y,L) :- fatherOf(X,Y,L).
> > > familyOf(X,Y,L) :- auntOf(X,Y,L).
> > > familyOf(X,Y,L) :- familyOf(Y,X,L).
> > > ancestorOf(X,Y,L) :- ancestorOf(X,Z,L1), ancestorOf(Z,Y,L2), append(L1,L2,L).
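The code above spawns a fresh XSB thread per query and then kills it with xsb_kill_thread. The alternative recommended later in the thread — let the worker evaluate one job and exit on its own, then join it — can be sketched in plain pthreads. This is a minimal sketch with no XSB calls: `query_job`, `run_query`, and `answer_query_joined` are hypothetical stand-ins for real query evaluation, not part of the original program.

```c
#include <pthread.h>

/* Hypothetical stand-in for one query job; in the real program this
   would carry the query string and an XSB thread context. */
typedef struct {
    int n_inputs;   /* pretend input size              */
    int n_answers;  /* filled in by the worker thread  */
} query_job;

/* The worker evaluates its one job and then simply returns: the
   thread exits on its own, so no cancellation is ever needed. */
static void *run_query(void *arg) {
    query_job *job = (query_job *)arg;
    job->n_answers = job->n_inputs;  /* stand-in "evaluation" */
    return arg;
}

/* Spawn one worker per job and wait for it to exit by itself.
   Returns the answer count, or -1 on thread-creation/join failure. */
int answer_query_joined(query_job *job) {
    pthread_t tid;
    void *status;
    if (pthread_create(&tid, NULL, run_query, job) != 0)
        return -1;
    if (pthread_join(tid, &status) != 0)
        return -1;
    return ((query_job *)status)->n_answers;
}
```

The point of the sketch is only the lifecycle: create, let the worker return, join. With this shape, pthread_cancel (and, at the XSB level, xsb_kill_thread) is never needed on the happy path.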
I just did a fresh install of XSB on a different Linux box, using the exact same sources I had used before (which I had downloaded from the XSB site in July: XSB version 3.3.6), and with the exact same version of gcc as before (which is, btw, identical to yours: gcc 4.1.2 20080704 - Red Hat 4.1.2-52), and I got the exact same error. See below for the screen shot.

I then did another fresh install with the XSB sources that were posted on the XSB site in August (XSB version 3.3.7), and with *those* sources the problem doesn't appear.

Conclusion #1: This is a problem that went away in the August sources, as of version 3.3.7.

Conclusion #2: Given that gcc incompatibility is ruled out, there is definitely a "floundering" bug in XSB 3.3.6 (this is easily verified: I can post the tarballed distribution of XSB 3.3.6 that I got in July online, along with the example, so that anyone could try it out on any Linux machine with the right version of gcc; there is no way they wouldn't get the floundering error), which, to my mind anyway, would still call for explanation.

Thanks,

K.

[root@... bin]# gcc --version
gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52)
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

[root@... bin]# ./xsb-mt
[xsb_configuration loaded]
[sysinitrc loaded]

XSB Version 3.3.6 (Pignoletto) of January 2, 2012
[x86_64-unknown-linux-gnu 64 bits; mode: optimal; engine: multi-threading; scheduling: local]
[Patch date: 2012/01/09 03:50:32]

| ?- consult('prelude.P').
[prelude loaded, cpu time used: 0.0010 seconds]

yes
| ?- assert(parentOf(a,b,[])).

yes
| ?- thread_create(findall(X/Y/L,familyOfP(X,Y,L),R), Id), thread_join(Id,E).
++Error[XSB/Runtime/P]: [Miscellaneous] [th 1] Floundering goal in tnot/1 familyOfP(_v329030656

X = _h167
Y = _h181
L = _h195
R = _h257
Id = 1
E = exception(error(misc_error,[th 1] Floundering goal in tnot/1 familyOfP(_v329030656,[[Forward Continuation...,... standard:call/1,... standard:call/1,... standard:call/1],Backward Continuation...]))

--- On Mon, 8/27/12, Terrance Swift <tswift@...> wrote:

From: Terrance Swift <tswift@...>
Subject: RE: Bug in multi-threaded XSB?
To: "K. A." <k_a_7245@...>, "David Warren" <warren@...>
Cc: "Xsb-development@..." <Xsb-development@...>
Date: Monday, August 27, 2012, 8:57 AM

Ok, this is weird. I can't reproduce on Linux, either (version info below). I'm wondering whether it would help to download a new version of XSB (either 3.3.7 or directly from CVS) and see if that helps anything. AFAIK, I'm doing what you're doing, so the problem must either be in the linux/C compiler you're using (mine is gcc version 4.1.2 20080704 (Red Hat 4.1.2-52)) or in your version of XSB. So I guess the next

XSB Version 3.3.6 (Pignoletto) of January 2, 2012
[x86_64-unknown-linux-gnu 64 bits; mode: optimal; engine: multi-threading; scheduling: local]
[Patch date: 2012/03/09 18:27:01]

| ?- consult('prelude.P').
[Compiling ./prelude]
[prelude compiled, cpu time used: 0.1170 seconds]
[prelude loaded]

yes
| ?- assert(parentOf(a,b,[])).

yes
| ?- thread_create(findall(X/Y/L,familyOfP(X,Y,L),R),Id), thread_join(Id,ExitCode).

X = _h52
Y = _h66
L = _h80
R = _h142
Id = 1
ExitCode = true

yes
| ?- shell('uname -a').
Linux fermi 2.6.18-308.4.1.el5 #1 SMP Tue Apr 17 17:08:00 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

yes

From: K. A. [k_a_7245@...]
Sent: Sunday, August 26, 2012 3:57 PM
To: David Warren; Terrance Swift
Cc: Xsb-development@...
Subject: RE: Bug in multi-threaded XSB?

Hi Terry,

I'm not sure what code you have in the files that you cite (no files were actually attached to your email), but the interaction with xsb shown below is a verbatim screen shot.
This time I've actually attached prelude.P as a separate file that you can save and then 'consult' directly.

K.

--------------------------------------------------------------------------------
[root@... new]$ ./xsb
[xsb_configuration loaded]
[sysinitrc loaded]

XSB Version 3.3.6 (Pignoletto) of January 2, 2012
[x86_64-unknown-linux-gnu 64 bits; mode: optimal; engine: multi-threading; scheduling: local]
[Patch date: 2012/01/09 03:50:32]

| ?- consult('prelude.P').
[Compiling ./prelude]
[prelude compiled, cpu time used: 0.1060 seconds]
[prelude loaded, cpu time used: 0.0010 seconds]

yes
| ?- assert(parentOf(a,b,[])).

yes
| ?- thread_create(findall(X/Y/L,familyOfP(X,Y,L),R),Id), thread_join(Id,ExitCode).
++Error[XSB/Runtime/P]: [Miscellaneous] [th 1] Floundering goal in tnot/1 familyOfP(_v212918432

X = _h51
Y = _h65
L = _h79
R = _h141
Id = 1
ExitCode = exception(error(misc_error,[th 1] Floundering goal in tnot/1 familyOfP(_v212918432,[[Forward Continuation...,... standard:call/1,... standard:call/1,... standard:call/1],Backward Continuation...]))
--------------------------------------------------------------------------------

--- On Sun, 8/26/12, Terrance Swift <tswift@...> wrote:

From: Terrance Swift <tswift@...>
Subject: RE: Bug in multi-threaded XSB?
To: "K. A." <k_a_7245@...>, "David Warren" <warren@...>
Cc: "Xsb-development@..." <Xsb-development@...>
Date: Sunday, August 26, 2012, 11:17 AM

I tried a couple of ways, but I haven't yet been able to reproduce the error -- at least not yet. The attached file ka.P has your code in it, plus various predicates of the form mytest<...> that call things in different ways. The file ka1.P consults ka.P and then does the same thing. I must be missing something that you are doing?

Terry

PS: in the code you sent a parenthesis was missing, as shown below. This was the only change to your code that I'm aware of making...

thread_create((findall(X/Y/L,familyOfP(X,Y,L),R),Id), >>>)<<<< thread_join(Id,ExitCode)

From: K. A. [k_a_7245@...]
Sent: Tuesday, August 21, 2012 4:56 PM
To: David Warren
Cc: Xsb-development@...; Terrance Swift
Subject: Re: Bug in multi-threaded XSB?

Hi David, Terry,

The manual lists abolish_all_private_tables/0 (in Chapter 6) as a "standard predicate", described on p. 100 as predicates "which are always available to the Prolog interpreter and do not need to be imported or loaded explicitly as do other Prolog predicates." Terry has volunteered to fix that in the manual (thanks!).

I've run into what appears to be a threading bug though, which I hope that Terry can shed some light on. Per Terry's suggestion, I've declared privately tabled versions of all my shared dynamic predicates, by appending the capital letter 'P' at the end of each predicate's name (e.g., if I have a shared dynamic predicate 'before' then I introduce a privately tabled predicate 'beforeP'). Then I define the privately tabled versions to be supersets of the shared ones (again, as Terry suggested), e.g.:

beforeP(X,Y,Z) :- before(X,Y,Z).

However, I'm running into another thread issue, which is (again) a discrepancy between doing something on the main thread and doing the exact same thing on a spawned thread. This time I was able to reproduce the problem in pure XSB - no C involved. Consider the following interaction with XSB:

consult('prelude.P').
assert(parentOf(a,b,[])).
findall(X/Y/L,familyOfP(X,Y,L),R), write(R).

(See below for the contents of prelude.P.) This works fine (because it's done on the main thread): it correctly reasons out that a is familyOfP b, b is familyOfP a, etc. Now I *should* get the same result when I evaluate this query on a spawned thread. But I don't. When I do:

consult('prelude.P').
assert(parentOf(a,b,[])).
thread_create((findall(X/Y/L,familyOfP(X,Y,L),R),Id), thread_join(Id,ExitCode).

I get the following odd error:

| ?- thread_create((findall(X/Y/L,familyOfP(X,Y,L),R),write(R)),Id), thread_join(Id,ExitCode).
++Error[XSB/Runtime/P]: [Miscellaneous] [th 1] Floundering goal in tnot/1 familyOfP(_v185356496 X = _h171 Y = _h185 L = _h199 R = _h261 Id = 1 ExitCode = exception(error(misc_error,[th 1] Floundering goal in tnot/1 familyOfP(_v185356496,[[Forward Continuation...,... standard:call/1,... standard:call_c/1,... standard:call/1,... standard:call/1],Backward Continuation...])) This is especially perplexing given that there is no negation anywhere, tabled or otherwise. How/where does floundering enter the picture? I've actually run into a number of such "floundering" errors recently while trying to do seemingly innocuous things on spawned threads. (I've forwarded some such errors before to David, though we were both stumped by the nature of the error message.) Again, I've only seen such errors on spawned threads, never on the main XSB thread. Clarification/feedback would be appreciated. Thanks, Konstantine The contents of prelude.P are pasted below. Most of this stuff is irrelevant to the problem, but I've spent the last 3 hours throwing away the majority of the stuff from prelude.P. What's left is, I think, pretty manageable and readable: :- import append/3 from basics. appendAll([],[]). appendAll([H|T],L) :- appendAll(T,L1), append(H,L1,L). append3(L1,L2,L3,L) :- append(L1,L2,R), append(L3,R,L). append4(L1,L2,L3,L4,L) :- append3(L1,L2,L3,R), append(L4,R,L). append5(L1,L2,L3,L4,L5,L) :- append4(L1,L2,L3,L4,R), append(L5,R,L). /***************************************************************************** Private tabled predicate declarations ******************************************************************************/ :- table afterP/3 as private. :- table beforeP/3 as private. :- table dateAfterP/3 as private. :- table dateBeforeP/3 as private. :- table intervalAfterP/3 as private. :- table intervalBeforeP/3 as private. :- table differentFromP/3 as private. :- table knowsOfP/3 as private. :- table relatedPersonOfP/3 as private. :- table familyOfP/3 as private. 
:- table auntOfP/3 as private. :- table cousinOfP/3 as private. :- table nephewOfP/3 as private. :- table nieceOfP/3 as private. :- table uncleOfP/3 as private. :- table ancestorOfP/3 as private. :- table grandParentOfP/3 as private. :- table grandMotherOfP/3 as private. :- table grandFatherOfP/3 as private. :- table parentOfP/3 as private. :- table fatherOfP/3 as private. :- table motherOfP/3 as private. :- table descendantOfP/3 as private. :- table childOfP/3 as private. :- table daughterOfP/3 as private. :- table sonOfP/3 as private. :- table grandChildOfP/3 as private. :- table grandsonOfP/3 as private. :- table granddaughterOfP/3 as private. :- table inLawOfP/3 as private. :- table brotherInLawOfP/3 as private. :- table daughterInLawOfP/3 as private. :- table fatherInLawOfP/3 as private. :- table motherInLawOfP/3 as private. :- table sisterInLawOfP/3 as private. :- table sonInLawOfP/3 as private. :- table siblingOfP/3 as private. :- table brotherOfP/3 as private. :- table sisterOfP/3 as private. :- table spouseOfP/3 as private. :- table husbandOfP/3 as private. :- table wifeOfP/3 as private. :- table unknownPersonRelationP/3 as private. :- table endsInDateP/3 as private. :- table startsInDateP/3 as private. :- table expirationDateP/3 as private. :- table hasLocationP/3 as private. :- table ownedByP/3 as private. :- table involvedP/3 as private. :- table occursInP/3 as private. :- table occursDuringP/3 as private. :- table operationAreaP/3 as private. :- table ownerOfP/3 as private. :- table relatedEventP/3 as private. :- table teacherP/3 as private. :- table relationObjectP/3 as private. :- table studentP/3 as private. :- table ageP/3 as private. :- table dateDayP/3 as private. :- table dateMonthP/3 as private. :- table dateYearP/3 as private. :- table descriptionP/3 as private. :- table dateP/1 as private. :- table dateIntervalP/1 as private. :- table eventP/1 as private. :- table locationP/1 as private. :- table cityP/1 as private. 
:- table countryP/1 as private. :- table regionP/1 as private. :- table stateP/1 as private. /***************************************************************************** Dynamic predicate declarations ******************************************************************************/ :- dynamic after/3 as shared. :- dynamic before/3 as shared. :- dynamic dateAfter/3 as shared. :- dynamic dateBefore/3 as shared. :- dynamic intervalAfter/3 as shared. :- dynamic intervalBefore/3 as shared. :- dynamic differentFrom/3 as shared. :- dynamic knowsOf/3 as shared. :- dynamic relatedPersonOf/3 as shared. :- dynamic familyOf/3 as shared. :- dynamic auntOf/3 as shared. :- dynamic cousinOf/3 as shared. :- dynamic nephewOf/3 as shared. :- dynamic nieceOf/3 as shared. :- dynamic uncleOf/3 as shared. :- dynamic ancestorOf/3 as shared. :- dynamic grandParentOf/3 as shared. :- dynamic grandMotherOf/3 as shared. :- dynamic grandFatherOf/3 as shared. :- dynamic parentOf/3 as shared. :- dynamic fatherOf/3 as shared. :- dynamic motherOf/3 as shared. :- dynamic descendantOf/3 as shared. :- dynamic childOf/3 as shared. :- dynamic daughterOf/3 as shared. :- dynamic sonOf/3 as shared. :- dynamic grandChildOf/3 as shared. :- dynamic grandsonOf/3 as shared. :- dynamic granddaughterOf/3 as shared. :- dynamic inLawOf/3 as shared. :- dynamic brotherInLawOf/3 as shared. :- dynamic daughterInLawOf/3 as shared. :- dynamic fatherInLawOf/3 as shared. :- dynamic motherInLawOf/3 as shared. :- dynamic sisterInLawOf/3 as shared. :- dynamic sonInLawOf/3 as shared. :- dynamic siblingOf/3 as shared. :- dynamic brotherOf/3 as shared. :- dynamic sisterOf/3 as shared. :- dynamic spouseOf/3 as shared. :- dynamic husbandOf/3 as shared. :- dynamic wifeOf/3 as shared. :- dynamic unknownPersonRelation/3 as shared. :- dynamic endsInDate/3 as shared. :- dynamic startsInDate/3 as shared. :- dynamic expirationDate/3 as shared. :- dynamic hasLocation/3 as shared. :- dynamic ownedBy/3 as shared. 
:- dynamic involved/3 as shared. :- dynamic occursIn/3 as shared. :- dynamic occursDuring/3 as shared. :- dynamic operationArea/3 as shared. :- dynamic ownerOf/3 as shared. :- dynamic relatedEvent/3 as shared. :- dynamic teacher/3 as shared. :- dynamic relationObject/3 as shared. :- dynamic student/3 as shared. :- dynamic age/3 as shared. :- dynamic dateDay/3 as shared. :- dynamic dateMonth/3 as shared. :- dynamic dateYear/3 as shared. :- dynamic description/3 as shared. :- dynamic date/1 as shared. :- dynamic dateInterval/1 as shared. :- dynamic event/1 as shared. :- dynamic location/1 as shared. :- dynamic city/1 as shared. :- dynamic country/1 as shared. :- dynamic region/1 as shared. :- dynamic state/1 as shared. /****************************************************************/ /***************************************************************************** Private predicate definitions ******************************************************************************/ afterP(X,Y,L) :- after(X,Y,L). beforeP(X,Y,L) :- before(X,Y,L). dateAfterP(X,Y,L) :- dateAfter(X,Y,L). dateBeforeP(X,Y,L) :- dateBefore(X,Y,L). intervalAfterP(X,Y,L) :- intervalAfter(X,Y,L). intervalBeforeP(X,Y,L) :- intervalBefore(X,Y,L). differentFromP(X,Y,L) :- differentFrom(X,Y,L). knowsOfP(X,Y,L) :- knowsOf(X,Y,L). relatedPersonOfP(X,Y,L) :- relatedPersonOf(X,Y,L). familyOfP(X,Y,L) :- familyOf(X,Y,L). auntOfP(X,Y,L) :- auntOf(X,Y,L). cousinOfP(X,Y,L) :- cousinOf(X,Y,L). nephewOfP(X,Y,L) :- nephewOf(X,Y,L). nieceOfP(X,Y,L) :- nieceOf(X,Y,L). uncleOfP(X,Y,L) :- uncleOf(X,Y,L). ancestorOfP(X,Y,L) :- ancestorOf(X,Y,L). grandParentOfP(X,Y,L) :- grandParentOf(X,Y,L). grandMotherOfP(X,Y,L) :- grandMotherOf(X,Y,L). grandFatherOfP(X,Y,L) :- grandFatherOf(X,Y,L). parentOfP(X,Y,L) :- parentOf(X,Y,L). fatherOfP(X,Y,L) :- fatherOf(X,Y,L). motherOfP(X,Y,L) :- motherOf(X,Y,L). descendantOfP(X,Y,L) :- descendantOf(X,Y,L). childOfP(X,Y,L) :- childOf(X,Y,L). daughterOfP(X,Y,L) :- daughterOf(X,Y,L). 
sonOfP(X,Y,L) :- sonOf(X,Y,L). grandChildOfP(X,Y,L) :- grandChildOf(X,Y,L). grandsonOfP(X,Y,L) :- grandsonOf(X,Y,L). granddaughterOfP(X,Y,L) :- granddaughterOf(X,Y,L). inLawOfP(X,Y,L) :- inLawOf(X,Y,L). brotherInLawOfP(X,Y,L) :- brotherInLawOf(X,Y,L). daughterInLawOfP(X,Y,L) :- daughterInLawOf(X,Y,L). fatherInLawOfP(X,Y,L) :- fatherInLawOf(X,Y,L). motherInLawOfP(X,Y,L) :- motherInLawOf(X,Y,L). sisterInLawOfP(X,Y,L) :- sisterInLawOf(X,Y,L). sonInLawOfP(X,Y,L) :- sonInLawOf(X,Y,L). siblingOfP(X,Y,L) :- siblingOf(X,Y,L). brotherOfP(X,Y,L) :- brotherOf(X,Y,L). sisterOfP(X,Y,L) :- sisterOf(X,Y,L). spouseOfP(X,Y,L) :- spouseOf(X,Y,L). husbandOfP(X,Y,L) :- husbandOf(X,Y,L). wifeOfP(X,Y,L) :- wifeOf(X,Y,L). unknownPersonRelationP(X,Y,L) :- unknownPersonRelation(X,Y,L). endsInDateP(X,Y,L) :- endsInDate(X,Y,L). startsInDateP(X,Y,L) :- startsInDate(X,Y,L). expirationDateP(X,Y,L) :- expirationDate(X,Y,L). hasLocationP(X,Y,L) :- hasLocation(X,Y,L). ownedByP(X,Y,L) :- ownedBy(X,Y,L). involvedP(X,Y,L) :- involved(X,Y,L). occursInP(X,Y,L) :- occursIn(X,Y,L). occursDuringP(X,Y,L) :- occursDuring(X,Y,L). operationAreaP(X,Y,L) :- operationArea(X,Y,L). ownerOfP(X,Y,L) :- ownerOf(X,Y,L). relatedEventP(X,Y,L) :- relatedEvent(X,Y,L). teacherP(X,Y,L) :- teacher(X,Y,L). relationObjectP(X,Y,L) :- relationObject(X,Y,L). studentP(X,Y,L) :- student(X,Y,L). ageP(X,Y,L) :- age(X,Y,L). dateDayP(X,Y,L) :- dateDay(X,Y,L). dateMonthP(X,Y,L) :- dateMonth(X,Y,L). dateYearP(X,Y,L) :- dateYear(X,Y,L). descriptionP(X,Y,L) :- description(X,Y,L). dateP(X) :- date(X). dateIntervalP(X) :- dateInterval(X). eventP(X) :- event(X). locationP(X) :- location(X). cityP(X) :- city(X). countryP(X) :- country(X). regionP(X) :- region(X). stateP(X) :- state(X). schoolP(X) :- school(X). personP(X) :- person(X). relationP(X) :- relation(X). directRelationP(X) :- directRelation(X). acquaintanceP(X) :- acquaintance(X). cohabitationP(X) :- cohabitation(X). colleagueshipP(X) :- colleagueship(X). enmityP(X) :- enmity(X). 
friendshipP(X) :- friendship(X). neighborhoodP(X) :- neighborhood(X). teachingP(X) :- teaching(X). indirectRelationP(X) :- indirectRelation(X). /***************************************************************************** Subproperty rules ******************************************************************************/ relationObjectP(X,Y,L) :- studentP(X,Y,L). relationSubjectP(X,Y,L) :- teacherP(X,Y,L). differentFromP(X,Y,L) :- knowsOfP(X,Y,L). knowsOfP(X,Y,L) :- relatedPersonOfP(X,Y,L). relatedPersonOfP(X,Y,L) :- unknownPersonRelationP(X,Y,L). relatedPersonOfP(X,Y,L) :- familyOfP(X,Y,L). familyOfP(X,Y,L) :- spouseOfP(X,Y,L). spouseOfP(X,Y,L) :- wifeOfP(X,Y,L). spouseOfP(X,Y,L) :- husbandOfP(X,Y,L). familyOfP(X,Y,L) :- siblingOfP(X,Y,L). siblingOfP(X,Y,L) :- sisterOfP(X,Y,L). siblingOfP(X,Y,L) :- brotherOfP(X,Y,L). familyOfP(X,Y,L) :- inLawOfP(X,Y,L). inLawOfP(X,Y,L) :- sonInLawOfP(X,Y,L). inLawOfP(X,Y,L) :- sisterInLawOfP(X,Y,L). inLawOfP(X,Y,L) :- motherInLawOfP(X,Y,L). inLawOfP(X,Y,L) :- fatherInLawOfP(X,Y,L). inLawOfP(X,Y,L) :- daughterInLawOfP(X,Y,L). inLawOfP(X,Y,L) :- brotherInLawOfP(X,Y,L). familyOfP(X,Y,L) :- descendantOfP(X,Y,L). descendantOfP(X,Y,L) :- grandChildOfP(X,Y,L). grandChildOfP(X,Y,L) :- granddaughterOfP(X,Y,L). grandChildOfP(X,Y,L) :- grandsonOfP(X,Y,L). descendantOfP(X,Y,L) :- childOfP(X,Y,L). childOfP(X,Y,L) :- sonOfP(X,Y,L). childOfP(X,Y,L) :- daughterOfP(X,Y,L). familyOfP(X,Y,L) :- ancestorOfP(X,Y,L). ancestorOfP(X,Y,L) :- parentOfP(X,Y,L). parentOfP(X,Y,L) :- motherOfP(X,Y,L). parentOfP(X,Y,L) :- fatherOfP(X,Y,L). ancestorOfP(X,Y,L) :- grandParentOfP(X,Y,L). grandParentOfP(X,Y,L) :- grandFatherOfP(X,Y,L). grandParentOfP(X,Y,L) :- grandMotherOfP(X,Y,L). familyOfP(X,Y,L) :- uncleOfP(X,Y,L). familyOfP(X,Y,L) :- nieceOfP(X,Y,L). familyOfP(X,Y,L) :- nephewOfP(X,Y,L). familyOfP(X,Y,L) :- cousinOfP(X,Y,L). familyOfP(X,Y,L) :- auntOfP(X,Y,L). 
/***************************************************************************** Symmetry rules ******************************************************************************/ relatedPersonOfP(X,Y,L) :- relatedPersonOfP(Y,X,L). intervalOverlapP(X,Y,L) :- intervalOverlapP(Y,X,L). familyOfP(X,Y,L) :- familyOfP(Y,X,L). cousinOfP(X,Y,L) :- cousinOfP(Y,X,L). siblingOfP(X,Y,L) :- siblingOfP(Y,X,L). spouseOfP(X,Y,L) :- spouseOfP(Y,X,L). /***************************************************************************** Transitivity rules ******************************************************************************/ beforeP(X,Y,L) :- beforeP(X,Z,L1), beforeP(Z,Y,L2), append(L1,L2,L). afterP(X,Y,L) :- afterP(X,Z,L1), afterP(Z,Y,L2), append(L1,L2,L). dateAfterP(X,Y,L) :- dateAfterP(X,Z,L1), dateAfterP(Z,Y,L2), append(L1,L2,L). dateBeforeP(X,Y,L) :- dateBeforeP(X,Z,L1), dateBeforeP(Z,Y,L2), append(L1,L2,L). intervalAfterP(X,Y,L) :- intervalAfterP(X,Z,L1), intervalAfterP(Z,Y,L2), append(L1,L2,L). intervalBeforeP(X,Y,L) :- intervalBeforeP(X,Z,L1), intervalBeforeP(Z,Y,L2), append(L1,L2,L). relatedPersonOfP(X,Y,L) :- relatedPersonOfP(X,Z,L1), relatedPersonOfP(Z,Y,L2), append(L1,L2,L). familyOfP(X,Y,L) :- familyOfP(X,Z,L1), familyOfP(Z,Y,L2), append(L1,L2,L). ancestorOfP(X,Y,L) :- ancestorOfP(X,Z,L1), ancestorOfP(Z,Y,L2), append(L1,L2,L). cousinOfP(X,Y,L) :- cousinOfP(X,Z,L1), cousinOfP(Z,Y,L2), append(L1,L2,L). descendantOfP(X,Y,L) :- descendantOfP(X,Z,L1), descendantOfP(Z,Y,L2), append(L1,L2,L). siblingOfP(X,Y,L) :- siblingOfP(X,Z,L1), siblingOfP(Z,Y,L2), append(L1,L2,L). hasLocationP(X,Y,L) :- hasLocationP(X,Z,L1), hasLocationP(Z,Y,L2), append(L1,L2,L). /***************************************************************************** Inverse rules ******************************************************************************/ beforeP(X,Y,L) :- afterP(Y,X,L). afterP(X,Y,L) :- beforeP(Y,X,L). dateBeforeP(X,Y,L) :- dateAfterP(Y,X,L). dateAfterP(X,Y,L) :- dateBeforeP(Y,X,L). 
intervalBeforeP(X,Y,L) :- intervalAfterP(Y,X,L). intervalAfterP(X,Y,L) :- intervalBeforeP(Y,X,L). ancestorOfP(X,Y,L) :- descendantOfP(Y,X,L). descendantOfP(X,Y,L) :- ancestorOfP(Y,X,L). childOfP(X,Y,L) :- parentOfP(Y,X,L). parentOfP(X,Y,L) :- childOfP(Y,X,L). grandChildOfP(X,Y,L) :- grandParentOfP(Y,X,L). grandParentOfP(X,Y,L) :- grandChildOfP(Y,X,L). ownedByP(X,Y,L) :- ownerOfP(Y,X,L). ownerOfP(X,Y,L) :- ownedByP(Y,X,L). /***************************************************************************** Property-Chain rules ******************************************************************************/ uncleOfP(X1,X3,L) :- brotherOfP(X1,X2,L1), parentOfP(X2,X3,L2), append(L1,L2,L). uncleOfP(X1,X4,L) :- husbandOfP(X1,X2,L1), sisterOfP(X2,X3,L2), parentOfP(X3,X4,L3), append3(L1,L2,L3,L). parentOfP(X1,X3,L) :- parentOfP(X1,X2,L1), siblingOfP(X2,X3,L2), append(L1,L2,L). nieceOfP(X1,X3,L) :- daughterOfP(X1,X2,L1), siblingOfP(X2,X3,L2), append(L1,L2,L). nephewOfP(X1,X3,L) :- sonOfP(X1,X2,L1), siblingOfP(X2,X3,L2), append(L1,L2,L). sonInLawOfP(X1,X3,L) :- husbandOfP(X1,X2,L1), childOfP(X2,X3,L2), append(L1,L2,L). sisterInLawOfP(X1,X3,L) :- sisterOfP(X1,X2,L1), spouseOfP(X2,X3,L2), append(L1,L2,L). sisterInLawOfP(X1,X3,L) :- wifeOfP(X1,X2,L1), siblingOfP(X2,X3,L2), append(L1,L2,L). motherInLawOfP(X1,X3,L) :- motherOfP(X1,X2,L1), spouseOfP(X2,X3,L2), append(L1,L2,L). fatherInLawOfP(X1,X3,L) :- fatherOfP(X1,X2,L1), spouseOfP(X2,X3,L2), append(L1,L2,L). daughterInLawOfP(X1,X3,L) :- wifeOfP(X1,X2,L1), childOfP(X2,X3,L2), append(L1,L2,L). brotherInLawOfP(X1,X3,L) :- brotherOfP(X1,X2,L1), spouseOfP(X2,X3,L2), append(L1,L2,L). brotherInLawOfP(X1,X3,L) :- husbandOfP(X1,X2,L1), siblingOfP(X2,X3,L2), append(L1,L2,L). grandMotherOfP(X1,X3,L) :- motherOfP(X1,X2,L1), parentOfP(X2,X3,L2), append(L1,L2,L). grandFatherOfP(X1,X3,L) :- fatherOfP(X1,X2,L1), parentOfP(X2,X3,L2), append(L1,L2,L). grandsonOfP(X1,X3,L) :- sonOfP(X1,X2,L1), childOfP(X2,X3,L2), append(L1,L2,L). 
granddaughterOfP(X1,X3,L) :- daughterOfP(X1,X2,L1), childOfP(X2,X3,L2), append(L1,L2,L).

On Aug 21, 2012, at 12:34 PM, David Warren <warren@...> wrote:

abolish_all_private_tables/0 is defined in thread.P, so I guess you need to import it from there. The manual should say what module a predicate is defined in, and if it doesn't say it's a standard predicate, then it must be imported.

-David

-----Original Message-----
From: K. A. [mailto:k_a_7245@...]
Sent: Tuesday, August 21, 2012 10:47 AM
To: Xsb-development@...; David Warren; Terrance Swift
Subject: RE: Bug in multi-threaded XSB?

That's good to hear. After checking with the manual I thought that I would want to call abolish_all_private_tables/0 on a thread poised to answer a query. (This is described on p. 237 of vol. I.) However, when I try that I get:

"Error: [XSB/Runtime/P]: [Existence (No procedure usermod: abolish_all_private_tables/0 exists)] []"

The command-line version doesn't seem to recognize this predicate either. Do I need to import it first from somewhere? If that doesn't work, what other alternatives are there for destroying private tables?

Thanks,
K.

--- On Tue, 8/21/12, Terrance Swift <tswift@...> wrote:

From: Terrance Swift <tswift@...>
Subject: RE: Bug in multi-threaded XSB?
To: "K. A." <k_a_7245@...>, "Xsb-development@..." <Xsb-development@...>, "David Warren" <warren@...>
Date: Tuesday, August 21, 2012, 8:02 AM

Absolutely -- private tables are handled just as tables in the single-threaded engine, so you can abolish them whenever it makes sense in the single-threaded engine. Heap gc for threads works just as in the single-threaded engine, and private dynamic code also works as in the st-engine.

Terry

________________________________________
From: K. A. [k_a_7245@...]
Sent: Monday, August 20, 2012 10:08 PM
To: Xsb-development@...; David Warren; Terrance Swift
Subject: RE: Bug in multi-threaded XSB?

Hi Terry,

Thanks for the suggestion.
One question: Can a private thread table be explicitly abolished on that thread while other threads are running? If so, then your suggestion could work with the unkillable threads created by xsb_ccall_thread_create - I can maintain a queue of 20 or so such threads and reuse them to answer queries, since I can't kill them. (This would obviously be unworkable in general, but in this case it's unlikely that I'll have more than 20 concurrent queries at any given time, so it might be viable.) But this presupposes that I can abolish their private tables while other threads are running. If not, then I'd have to continually spawn new threads to answer new queries, and if these threads never quit then I'll be out of memory before too long.

Thanks,
Konstantine

--- On Mon, 8/20/12, Terrance Swift <tswift@...> wrote:

From: Terrance Swift <tswift@...>
Subject: RE: Bug in multi-threaded XSB?
To: "K. A." <k_a_7245@...>, "Xsb-development@..." <Xsb-development@...>, "David Warren" <warren@...>
Date: Monday, August 20, 2012, 9:34 AM

Here is the simple attached file. A couple of people have written large applications with MT-XSB, but they probably did not use all the features you are trying to use, and MT-XSB is certainly less stable than single-threaded XSB. I wish I had sufficient time to help every user, but looking very briefly at your code, it's impossible to understand:

1) why you can't remove the C/XSB interface, at least for debugging;
2) why you can't use thread exiting, at least for debugging (actually it's not an issue from the Prolog level; threads exit once they have been joined); and
3) how many of the problems are in XSB and how many are multi-programming errors of yours. No offense, everybody makes MT-programming errors.

I hope you'll take seriously the suggestions I gave you to start breaking down the program to help isolate the bugs. I'll try to fix the bugs I can once they become clear to me.

Terry

________________________________________
From: K. A. [k_a_7245@...]
Sent: Sunday, August 19, 2012 7:06 PM
To: Xsb-development@...; David Warren; Terrance Swift
Subject: RE: Bug in multi-threaded XSB?

Hi Terry,

Thanks for the reply.

You're absolutely right that sharing tables can (and most probably would) lead to problems with concurrently executing threads.

In principle, it shouldn't. In local evaluation, there is a mechanism to share concurrently evaluated shared tables which I think is described in the manual, and is described in gory detail in Marques and Swift, 2008 ICLP.

It shouldn't, as long as there are no insertions while concurrent queries are being answered. But if there are such insertions (as is almost always the case in question-answering systems), then these will not occur with abolished tables but rather with whatever tables happen to be in effect at insertion time, which will in turn depend on the state of the various active queries at the time. This will most likely invalidate the insertions and lead to wrong results. It seems to me that what one really needs is the ability to spawn a query at time t with a *fresh* (empty) set of tables and with whatever data happens to be in the database at t. I don't know if, with private predicates/tables, there is a way to forcibly copy the contents of one thread's predicates/tables into another. If so, then this could be readily implemented in XSB as things stand. But if not, then I don't see how XSB's tabling can be used to implement a system that does concurrent updates/queries.

However if you are using MT XSB, I strongly recommend first testing out the programming idioms you use from the command-line shell, then putting things into the C-XSB interface. XSB's MT C-Prolog interface is quite ambitious, but as a result it may have some undiagnosed bugs. Just as importantly, XSB-only code will be far, far easier to debug.
I'm sure that's the case, but I would think that the point of having a C<-->XSB interface is to facilitate things that, for some reason or other, cannot be easily done exclusively in XSB. If I could program the whole thing in XSB by itself, I would, but I can't. Moreover, it may well be that some bugs are peculiar to the C<-->XSB interface and are not reproducible when the code is expressed purely in XSB. What happens to them?

So I tried declaring tables to be private, *but* it seems you cannot have shared dynamic predicates with private tables. The predicates themselves (the facts in the database) have to be "dynamic as shared", because otherwise a new thread spawned to answer a query would not have any usable facts to work with. But again, the combination "table p/2 as private" with "dynamic p/2 as shared" does not seem to be allowed by XSB (perhaps someone can set me straight if I'm missing something).

I haven't looked at your code, but in principle you can do this. See the attached file, though I'm sure that your program is much more complex than the attached file.

I didn't see an attached file - could you please resend? I'd be very interested to see how this could be done. At any rate, we're still left with the more narrow question of why in the world the query threads in this particular example are not exiting when they are explicitly killed.

XSB threads should exit when they are killed, although there are cases where this doesn't happen. For instance, if the thread is waiting on I/O, or waiting on a mutex, etc., the waiting thread needs to be signaled to wake up, quit doing whatever it is doing, and exit. XSB has a lot of OS interfaces, and I haven't yet handled this in every single case. In addition, when a thread is killed it must clean up after itself, and free whatever mutexes, db_cursors, etc. that it has (but may not be currently working on).
While XSB can and does do this for some resources, it doesn't always have full knowledge of all of the resources a thread has taken. So getting thread cancellation to work in any system is awkward, and requires a lot from both the system and the user. In pretty much any MT system, it's better to have the thread itself exit, and save thread cancellation for special cases.

Well, these are all special cases, unfortunately - these threads *never* exit by themselves because, for some reason which I don't quite understand, they go into a read-eval loop *after* they are finished evaluating the goal that they were intended to answer. As far as I can see there is no mechanism in the C<-->XSB interface to create a thread to do a specific job and then quit. The only way to do that would be to execute embedded XSB code using thread_create, but then keeping track of the thread id and getting the thread to communicate properly becomes very difficult because, again, we are not in XSB proper but in the embedded C/XSB world. So xsb_ccall_thread_create seems to be the only viable option, but again, these threads don't quit by themselves, so killing them is the only option (and an absolute must, because it seems that when multiple threads are active, tables cannot be abolished, resources cannot be reclaimed, etc.).

So if you can show me how your program works from the command-line interface, I'll help debug at that level. Once we are sure the program is doing what you want at that level, we can see how to add the C-XSB interface.

Like I said earlier, this seems to assume that the bug will be reproducible in pure XSB, and I'm not at all sure about that. The C code sample is very small, so I hope someone will be able to take a look at it and reach a (tentative, at least) verdict. In the meanwhile I'll try to take the client/server I've written in C and re-express it in pure XSB using the samples given in the /examples/sockets directory of the distribution.
Is there anyone in particular in this list to whom I should direct questions pertaining to that client/server code?

Many thanks,
Konstantine

--- On Sat, 8/18/12, David Warren <warren@...> wrote:

From: David Warren <warren@...>
Subject: RE: Bug in multi-threaded XSB?
To: "K. A." <k_a_7245@...>, "Xsb-development@..." <Xsb-development@...>
Cc: "Terrance Swift" <tswift@...>
Date: Saturday, August 18, 2012, 12:17 PM

I strongly suspect it is because you are sharing the tables and that other thread is not exiting (or at least XSB doesn't know it has exited). I would suggest that you don't use shared tables but private tables. If they are not causing this problem now, they certainly will if you run a multithreaded query service where you have a number of queries and some can update the underlying data.

-David

-----Original Message-----
From: K. A. [mailto:k_a_7245@...]
Sent: Friday, August 17, 2012 5:24 PM
To: Xsb-development@...
Cc: David Warren; Terrance Swift
Subject: Bug in multi-threaded XSB?

Could someone please explain to me why the following C code fails to find any answers to the last query in the main function - the parentOf(X,Y,L) query? Note that every insertion does an abolish_all_tables first, and that right before each insertion or query in main, there should be (as far as I can see) only one XSB thread active - the main one. I assume this is a bug in multi-threaded XSB unless someone can provide an alternative explanation.
If you put the C code below in a file test.c, you can make the program as follows:

gcc -c -I/home/.../XSB/emu -I/home/.../XSB/config/x86_64-unknown-linux-gnu-mt -O3 -fno-strict-aliasing -Wall -pipe -D_GNU_SOURCE test.c
gcc -o test.out -lm -ldl -Wl -export-dynamic -lpthread /home/.../XSB/config/x86_64-unknown-linux-gnu-mt/saved.o/xsb.o test.o

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <pthread.h>
#include <ctype.h>
#include "cinterf.h"
#include "varstring_xsb.h"
#include "context.h"

void doInsert(char* insertion_command, th_context* th) {
    int res = xsb_command_string(th, "abolish_all_tables.");
    res = xsb_command_string(th, insertion_command);
}

void doCommand(char* cmd, th_context* th) {
    int res = xsb_command_string(th, cmd);
}

struct QueryArgs {
    char* query;
    th_context* th;
};

void* doQuery(void* args) {
    struct QueryArgs* pt = (struct QueryArgs*) args;
    char* query = (*pt).query;
    th_context* th = (*pt).th;
    XSB_StrDefine(retstr);
    th_context* new_query_thread;
    xsb_ccall_thread_create(th, &new_query_thread);
    int rc = xsb_query_string_string(new_query_thread, query, &retstr, "|");
    int answer_count = 0;
    while ((rc == XSB_SUCCESS) && (++answer_count)) {
        printf("\nAnswer: %s\n", retstr.string);
        rc = xsb_next_string(new_query_thread, &retstr, "|");
    }
    printf("\n%d answers for this query: %s\n", answer_count, query);
    xsb_kill_thread(new_query_thread);
    return NULL;
}

void answerQuery(char* query, th_context* th) {
    void* exit_status;
    pthread_t new_thread;
    struct QueryArgs args = {.query = query, .th = th};
    pthread_create(&new_thread, NULL, doQuery, (void*) &args);
    pthread_join(new_thread, &exit_status);
}

int main() {
    char init_string[MAXPATHLEN];
    char* xsbHome = getenv("XSB_HOME");
    strcpy(init_string, xsbHome);
    xsb_init_string(init_string);
    th_context* main_th = xsb_get_main_thread();
    doCommand("consult('prelude.P').", xsb_get_main_thread());
    doCommand("load_dyn('main.P').", xsb_get_main_thread());
    doInsert("assert(auntOf(woman9,foo,[])).", xsb_get_main_thread());
    answerQuery("familyOf(X,Y,L).", xsb_get_main_thread());
    doInsert("assertAll([parentOf(man2,man3,[1/555]),parentOf(man1,man2,[7/982,1/34])]).", xsb_get_main_thread());
    answerQuery("parentOf(X,Y,L).", xsb_get_main_thread());
    xsb_close(xsb_get_main_thread());
    return 0;
}

The contents of the other 2 files are as follows:

/************* Contents of prelude.P : ***************/

assertAll([]).
assertAll([H|T]) :- asserta(H), assertAll(T).

/************* Contents of main.P : ***************/

:- import append/3 from basics.

:- table motherOf/3 as shared.
:- table fatherOf/3 as shared.
:- table parentOf/3 as shared.
:- table ancestorOf/3 as shared.
:- table auntOf/3 as shared.
:- table familyOf/3 as shared.

:- dynamic motherOf/3 as shared.
:- dynamic fatherOf/3 as shared.
:- dynamic parentOf/3 as shared.
:- dynamic ancestorOf/3 as shared.
:- dynamic auntOf/3 as shared.
:- dynamic familyOf/3 as shared.

familyOf(X,Y,L) :- ancestorOf(X,Y,L).
ancestorOf(X,Y,L) :- parentOf(X,Y,L).
parentOf(X,Y,L) :- motherOf(X,Y,L).
parentOf(X,Y,L) :- fatherOf(X,Y,L).
familyOf(X,Y,L) :- auntOf(X,Y,L).
familyOf(X,Y,L) :- familyOf(Y,X,L).
ancestorOf(X,Y,L) :- ancestorOf(X,Z,L1), ancestorOf(Z,Y,L2), append(L1,L2,L).
This is a scandal that must be immediately and proactively addressed by the Democratic Party and the United States Senate. What I know is this — when it comes to winning elections and defeating Donald Trump, these same Democrats who refuse to hire black folk for senior management will want Beyoncé, Jay Z, Oprah and Chance the Rapper to stand on stage with them. They will appear on the radio with Charlamagne, Steve Harvey and Tom Joyner to drum up black support. They will visit black colleges and hold pep rallies. They will film promo videos featuring black victims of police brutality and racial violence and trot out Mothers of the Movement all across the country. But how much do Senate Democrats truly value black folk if they don't have a single black chief of staff? How much do Senate Democrats honestly and earnestly value African-Americans if less than 1% of their senior staff members are black?
the drunkard’s walk basically, the drunkard’s walk is a history of the mathematical study of randomness, including physics, probability, normal distribution, and other concepts. but, really, it’s a look at the role that randomness plays in our lives, and how most things are quantifiably less random than they may seem. there were dozens of times, while reading, that i thought, that makes complete sense, but i can’t imagine that i’m going to remember it. this was often because the proof of the theory made sense at an objective level when explained, but was counter-intuitive to real life and regular ol’ human thinking. a great example of this is the author’s extended explanation of the marilyn vos savant “let’s make a deal” problem. marilyn vos savant writes a column in parade magazine where she answers questions from readers, using her “world’s record highest iq”. she famously responded to a question, years ago, that posed this problem: if a contestant on “let’s make a deal” (the 70s game show) were given three doors to choose from, and told that a new car was behind one of them, and lousy prizes behind the other two; then, after choosing a door, and having monty hall reveal one of the remaining doors as a loser prize and given the opportunity to shift choice on the remaining two, should the contestant make the change? her response was that, statistically — yes, the odds are better if the contestant changes her answer. people freaked at her response, including lots of professional mathematicians, who (wrongly) argued that, with two remaining choices, the chances are still 50/50 that the car is behind the door of the contestant’s original choosing. the proof of this fallacy is all based on probability computations. the contestant’s original choice had a 33% chance of being correct — or 1 in 3. but monty hall removed one of those three (knowing which doors had the good and loser prizes). 
so, sticking with the original choice still leaves the original probability of 1 in 3. but changing choices raises the probability to 2 in 3 — better odds. the author acknowledges that while this kind of proof is true, and mathematically observable, it's contrary to how our brains are wired to consider options. that said, it was this kind of story – the book has hundreds of them — and the author's wittiness, that kept me reading through the brain strain.

oh, btw, the title refers to the term scientists use to describe the path of atoms and sub-atomic particles — seemingly random as they carom off each other in a willy-nilly path. ultimately, this path is not actually random, but is merely beyond our ability to compute, based on the absurd quantity of possibilities arising from interactions with other moving particles.

2 thoughts on “the drunkard’s walk”

I remember when a friend told me about this problem, but he focused on the odds of *not* winning the car. The odds of choosing a goat are 2/3, so it’s more likely that our door has a goat behind it. Monty reveals another door with a goat behind it, so we assume the odds have changed to 1/2 and we should stay with our choice. However, our odds of choosing a goat aren’t affected by the revealing of a goat behind a different door; it’s still more likely that we chose a goat to begin with, so the smart move is to change doors.

@Paul Loeffler – The last episode of season one of Numb3rs shows Charlie explaining this problem to his class.

I’m Mark Oestreicher

I'm a partner in The Youth Cartel, providing services and resources for individual youth workers and organizations. I’ve been married to Jeannie for 30 years, and have two great kids: Riley (22) and Max (18). Here's The Youth Cartel's website.

twitter: @markosbeard
instagram: @whyismarko
Rising from the floor in older adults.

Objectives: The primary goal was to determine the ability of older adults to rise from the floor. A secondary goal was to explore how rise ability might differ based on initial body positions and with or without the use of an assistive device.

Design: Cross-sectional analysis of young, healthy older, and congregate housing older adults.

Setting: University-based laboratory and congregate housing facility.

Participants: Young adult controls (12 men and 12 women, mean age 23 years), healthy older adults (12 men and 12 women, mean age 73 years), and congregate housing older adults (32 women and 6 men, mean age 80 years). The healthy older adult women (n = 12, mean age 75 years) and a subset of the congregate housing women (n = 27, mean age 81 years) were identified for further analyses.

Intervention: Videotaping and timing of rising from the floor from controlled initial body positions (supine, on side, prone, all fours, and sitting) and with or without the use of a furniture support.

Measurements: Whether subjects were successful in rising, and if they were, the time taken to rise. Subjects also rated their perceived difficulty of the task as compared to the reference task, rising from a supine position.

Results: Older adults have more difficulty rising from the floor than younger adults. The healthy old took twice as long as the young to rise, whereas the congregate old took two to three times as long as the healthy old to rise. Although all young and healthy old rose from every position, a subset of the congregate housing residents was unable to rise from any position: 24% when attempting to rise without a support and 13% when attempting to rise with a support. Congregate old were most likely to be successful when rising from a side-lying position while using the furniture for support. The more able congregate old, as well as the young and healthy old, rose more quickly and admitted to the least difficulty when rising from the all fours position.
Conclusion: The inability to rise from the floor is relatively common in congregate housing older adults. Given the differences between groups in time to complete the rise, determining the differences in rise strategies and the underlying biomechanical requirements of rising from different positions with or without a support would appear to be useful. These data may serve as the foundation for future interventions to improve the ability to rise from the floor.
#include <algorithm>
#include <iostream>
#include <vector>

#include "quick_sort.h"

int main() {
    std::vector<int> vector = {1, 2, 3, 6, 4, 5};
    // Sort a copy with std::sort as the reference result.
    std::vector<int> temp = vector;
    std::sort(temp.begin(), temp.end());
    quick_sort(vector);
    // Prints "true" if quick_sort agrees with std::sort.
    std::cout << std::boolalpha << (vector == temp) << std::endl;
}
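The header quick_sort.h itself is not shown; the harness above only checks its result against std::sort. For reference, a minimal in-place quicksort matching the signature the harness assumes might look like this sketch (Lomuto partition, last element as pivot; the real header may well differ):

```cpp
#include <utility>
#include <vector>

// Sketch of a quick_sort with the signature the test harness assumes.
// Lomuto partition: elements smaller than the pivot are swapped to the
// front, the pivot is placed between the two halves, then each half is
// sorted recursively.
static void quick_sort_range(std::vector<int>& v, int lo, int hi) {
    if (lo >= hi) return;
    int pivot = v[hi];
    int i = lo;
    for (int j = lo; j < hi; ++j)
        if (v[j] < pivot) std::swap(v[i++], v[j]);
    std::swap(v[i], v[hi]);
    quick_sort_range(v, lo, i - 1);
    quick_sort_range(v, i + 1, hi);
}

void quick_sort(std::vector<int>& v) {
    if (!v.empty())
        quick_sort_range(v, 0, static_cast<int>(v.size()) - 1);
}
```

With this definition the harness prints true, since both routines produce the same ascending order.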
Q: How to speed up drawing of scaled image? Audio playback chokes during window resize

I am writing an audio player for OSX. One view is a custom view that displays a waveform. The waveform is stored as an instance variable of type NSImage with an NSBitmapImageRep. The view also displays a progress indicator (a thick red line). Therefore, it is updated/redrawn every 30 milliseconds.

Since it takes a rather long time to recalculate the image, I do that in a background thread after every window resize and update the displayed image once the new image is ready. In the meantime, the original image is scaled to fit the view like this:

// The drawing rectangle is slightly smaller than the view, defined by
// the two margins.
NSRect drawingRect;
drawingRect.origin = NSMakePoint(sideEdgeMarginWidth, topEdgeMarginHeight);
drawingRect.size = NSMakeSize([self bounds].size.width - 2*sideEdgeMarginWidth,
                              [self bounds].size.height - 2*topEdgeMarginHeight);
[waveform drawInRect:drawingRect
            fromRect:NSZeroRect
           operation:NSCompositeSourceOver
            fraction:1];

The view makes up the biggest part of the window. During live resize, audio starts choking. Selecting the "big" graphic card on my Macbook Pro makes it less bad, but not by much. CPU utilization is somewhere around 20-40% during live resizes. Instruments suggests that rescaling/redrawing of the image is the problem. Once I stop resizing the window, CPU utilization goes down and audio stops glitching.

I already tried to disable image interpolation to speed up the drawing like this:

[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationNone];

That helps, but audio still chokes during live resizes. Do you have an idea how to improve this? The main thing is to prevent the audio from choking.

A: The graphics redraw should not affect audio playback. I have an audio app that does live redraw when resizing the window, as well as a background thread to render the waveform, and it has no problem playing back audio.
The audio thread that runs the ioProc which reads the audio data is a real-time thread, and has a higher priority than most other threads. If your graphics thread holds a lock that the audio thread needs, or calls something that blocks it (including memory allocation or freeing), that could cause the audio to stutter. Multi-threading issues are complex, with issues such as thread-safety of data structures, locks, priority inversion, blocking, and many other things to deal with.
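One common way to keep the time-critical thread away from locks, as the answer suggests, is to double-buffer the rendered waveform and publish it with an atomic index. The sketch below is generic C++, not the poster's Objective-C code, and the type and member names are made up. It assumes a single writer (the render thread) and that the reader has finished with the old buffer before the next publish; a production version would need a stronger reclamation scheme.

```cpp
#include <atomic>
#include <utility>
#include <vector>

// Sketch: lock-free handoff of a freshly rendered waveform.
// The render thread fills the inactive buffer, then flips `active`;
// the time-critical thread only performs an atomic load, so it never
// blocks on a mutex held by the renderer.
// Assumption: one writer, and publishes are spaced far enough apart
// that a reader is done with the old buffer before it is overwritten.
struct WaveformBuffer {
    std::vector<float> buf[2];
    std::atomic<int> active{0};

    void publish(std::vector<float> rendered) {      // render thread
        int inactive = 1 - active.load(std::memory_order_relaxed);
        buf[inactive] = std::move(rendered);         // heavy work off the hot path
        active.store(inactive, std::memory_order_release);
    }

    const std::vector<float>& current() const {      // drawing/audio thread
        return buf[active.load(std::memory_order_acquire)];
    }
};
```

The release store pairs with the acquire load, so once the reader sees the new index, the buffer contents written before the flip are visible too.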
We hypothesize that microRNAs (miRNAs) play important roles in the human brain during healthy aging and in Alzheimer's disease (AD). MiRNAs are small RNAs that regulate gene expression. Most miRNAs act through hybridization with "target" mRNAs. We and others have observed an altered pattern of miRNA expression in AD brain tissue. In various models where miRNAs are reduced, neurodegeneration ensues quickly. The mechanisms of miRNA-mediated neuroprotection and neurodegeneration are poorly understood, partly because most of the mRNA targets of miRNAs are still unknown. Unfortunately, there has been no suitable technique to indicate experimentally which mRNAs are miRNA targets. We developed a high-throughput assay to identify miRNA targets in human brain tissue. Here we propose to use this novel assay to better understand the neurochemistry of AD. Our research program has the following Specific Aims:

1. Obtain brain tissue from the University of Kentucky Alzheimer's Disease Center Brain Bank that is thoroughly characterized, including clinical evaluations near death, short post-mortem intervals, and state-of-the-art neuropathology. Clinical cohorts will include brain tissue of non-demented controls, mild cognitive impairment controls, AD, and non-AD dementia controls.

2. Optimize a novel biochemical assay for accurate, specific, and direct miRNA target identification. This assay, the Co-immunoprecipitation MicroRNA Assay Procedure (CoMAP), has been optimized in cell culture and preliminarily in brain tissue. CoMAP will be performed on the brain tissue described in Specific Aim #1.

3. Analyses of CoMAP data will incorporate data from mRNA microarray, miRNA microarray, clinical data, and pathological data referent to the same specimens. These analyses will focus on discovering miRNA targets relevant to AD treatment and diagnosis, and on elucidating the mRNA targets that subserve the neuroprotective functions of miRNA.
The raw and analyzed datasets, and the CoMAP assay itself, will be shared freely with other investigators. These Specific Aims are intended to answer the following fundamental questions:

- What mRNAs are miRNA targets in healthy brain aging?
- What mRNAs are miRNA targets in Alzheimer's disease and other altered brain states?
- How do changes in mRNA targets correlate with changes in expression of miRNAs?

PUBLIC HEALTH RELEVANCE: The objective of this project is to use a novel technique to better understand the causes of Alzheimer's disease and why some people remain Alzheimer's disease-free during aging. We will characterize in human brains a newly-discovered, high-impact level of gene regulation, called microRNAs, using a method called the Co-immunoprecipitation MicroRNA Assay Procedure (CoMAP). We think that we can produce information that directly or indirectly contributes to diagnostics and therapeutics for Alzheimer's disease patients.
E. P. Sanders

Ed Parish Sanders (born 18 April 1937) is an American New Testament scholar and a principal proponent of the "New Perspective on Paul". He is a major scholar in the scholarship on the historical Jesus and contributed to the view that Jesus was part of a renewal movement within Judaism. He was Arts and Sciences Professor of Religion at Duke University, North Carolina, from 1990 until his retirement in 2005. Sanders is a Fellow of the British Academy.

In 1966 he received a Doctor of Theology degree from Union Theological Seminary in New York City. In 1990 he received a Doctor of Letters degree from the University of Oxford and a Doctor of Theology degree from the University of Helsinki. He has authored, co-authored or edited 13 books and numerous articles. He has received a number of prizes, including the 1990 University of Louisville and Louisville Presbyterian Theological Seminary Grawemeyer Award for the best book on religion published in the 1980s, for Jesus and Judaism.

Biography

Sanders was born on April 18, 1937, in Grand Prairie, Texas. He attended Texas Wesleyan College (1955–1959) and Perkins School of Theology at Southern Methodist University (1959–1962). He spent a year (1962–1963) studying at Göttingen, the University of Oxford, and in Jerusalem. Between September 1963 and May 1966 Sanders studied at Union Theological Seminary, New York City, for his Doctor of Theology degree. His thesis was entitled The Tendencies of the Synoptic Tradition (published in 1969), which used form criticism to examine whether the Gospel tradition changed in consistent ways. The thesis was supervised by W. D. Davies. He taught at McMaster University (Hamilton, Ontario) from 1966 to 1984. In 1968 he won a fellowship from the Canada Council and spent a year in Israel, studying Rabbinic Judaism.
In 1984, he became Dean Ireland's Professor of the Exegesis of Holy Scripture at the University of Oxford and a Fellow of Queen's College, positions he kept until his move to Duke University in 1990. He has also held visiting professorships and lectureships at Trinity College, Dublin, and the University of Cambridge.

Sanders identifies himself as a "liberal, modern, secularized Protestant" in his book Jesus and Judaism; fellow scholar John P. Meier calls him a postliberal Protestant. In any case, he is cognizant of Albert Schweitzer's indictment of liberal theology's attempt to make Jesus in its own image, and seeks to keep his religious convictions out of his scholarship.

Thought and writings

Sanders is known for his New Testament scholarship. His field of special interest is Judaism and Christianity in the Greco-Roman world. He is one of the leading scholars in contemporary historical Jesus research, the so-called "third quest," which places Jesus firmly in the context of Judaism. In contemporary scholarship, Jesus is seen as the founder of a "renewal movement within Judaism," to use Sanders' phrase. He promotes the predominant view that Jesus was an apocalyptic prophet.

Sanders' first major book was Paul and Palestinian Judaism, which was published in 1977. He had written the book by 1975, but had difficulty in having it published. Sanders argued that the traditional Christian interpretation that Paul was condemning Rabbinic legalism was a misunderstanding of both Judaism and Paul's thought, especially since it assumed a level of individualism in these doctrines that was not present, and disregarded notions of group benefit or collective privilege. Rather, Sanders argued, the key difference between pre-Christian Judaism and Pauline teaching was to be found in ideas of how a person becomes one of the People of God.
Sanders termed the Jewish belief "covenantal nomism": one was a member of the people by virtue of God's covenant with Abraham, and one stayed in it by keeping the Law. Sanders claimed that Paul's belief was one of participationist eschatology: the only way to become one of the People of God was through faith in Christ ("dying with Christ") and the Old Covenant was no longer sufficient. But, once inside, appropriate behavior was required of the Christian, behavior based on the Jewish Scriptures, but not embracing all aspects of it. Both patterns required the grace of God for election (admission), and the behavior of the individual, supported by God's grace. The dividing line, therefore, was Paul's insistence on faith in Christ as the only way to election. However, Sanders stressed that Paul also "loved good deeds" and that when his words are taken in context, it emerges that Paul advocates good works in addition to faith in Christ. Sanders' next major book was Jesus and Judaism, published in 1985. In this work he argued that Jesus began as a follower of John the Baptist and was a prophet of the restoration of Israel. Sanders saw Jesus as creating an eschatological Jewish movement through his appointment of the Apostles and through his preaching and actions. After his execution (the trigger for which was Jesus overthrowing the tables in the temple court of Herod's Temple, thereby antagonizing the political authorities) his followers continued the movement, expecting his return to restore Israel. One consequence of this return would involve Gentiles worshiping the god of Israel. Sanders could find no substantial points of opposition between Jesus and the Pharisees, and he viewed Jesus as abiding by Jewish law and the disciples as continuing to keep it (cf. e.g., Acts 3:1; 21:23–26, for their worship in the Temple). 
Sanders also argues that Jesus' sayings did not entirely determine Early Christian behavior and attitudes, as is shown by Paul's discussion of divorce (1 Cor. 7:10–16) where the latter quotes Jesus' sayings and then gives his own independent ruling. In one interview, Sanders stated that Paul felt that "he was the model to his churches." Judaism: Practice and Belief was published in 1992 and tested Sanders' thesis in the light of concrete Jewish practices. Sanders argued that there was a "Common Judaism", that is, beliefs and practices common to all Jews, regardless of which religious party they belonged to. After the reign of Salome Alexandra, the Pharisees were a small but very respected party which had a varying amount of influence within Judaism. The main source of power, however, was with the rulers and especially the aristocratic priesthood (Sadducees). Sanders argues that the evidence indicates that the Pharisees did not dictate policy to any of these groups or individuals. In general, Sanders stressed the importance of historical context for a proper understanding of first century religion. He attempted to approach Judaism on its own terms, not in the context of the Protestant–Catholic debates of the sixteenth century, in order to redefine views on Judaism, Paul, and Christianity as a whole. As Sanders said, he reads Paul in his context, which is "Palestine in the first century and especially first century Judaism." In this spirit, one of Sanders' articles is titled "Jesus in Historical Context". In a 2000 encyclopedia entry on Jesus, whom Sanders calls an 'eschatological prophet', he avoids the word 'angel', although mention is made of the two men 'in dazzling clothes' at the empty tomb. Sanders has argued that more comparative studies are needed, with wider examinations conducted between New Testament texts and the other available historical sources of the period.
Speaking at a conference organized in his honor, he described the attractiveness of these types of comparative studies: "They are not all that easy, but they are an awful lot of fun."
Wyatt Gaines Deschutes County residents could vote on a ballot measure in November to prohibit the county from enforcing gun laws. The proposed measure, called the Second Amendment Preservation Ordinance, would broadly re-interpret the U.S. and Oregon Constitutions' right to bear arms and add "ancillary firearms rights." The Deschutes County sheriff would decide whether any local, state or federal gun law is unconstitutional, under the measure's broader interpretations of those rights. And if the sheriff thinks a law is unconstitutional, the measure would forbid the county from enforcing it. Individual violations could result in a fine of up to $2,000. The measure defines ancillary firearms rights as "rights that are closely related to the right to keep and bear arms protected by the Second Amendment; including the right to manufacture, transfer, buy and sell firearms, firearm accessories and ammunition." Redmond resident Jerrad Robison is the chief petitioner on the initiative. He's joined by three other Redmond residents, including B.J. Soper, who's involved with the Pacific Patriots Network and Oregon Oath Keepers. "Our Second Amendment has been attacked over and over," Robison said. "It's time for us to take the offensive and stop all of this." He pointed to a statewide ballot initiative, IP-43, as an example of such attacks. IP-43 would forbid sales and transfers of assault rifles and high-capacity magazines in Oregon. Supporters could begin collecting signatures soon to place it on the November ballot. "IP-43 is one of the most unconstitutional things I've ever seen," Robison said. He worries that it, like other restrictions on gun rights, could be forced onto rural counties by voters in Portland, Eugene and Salem. He said the Second Amendment Preservation Ordinance would counter it and other gun laws he believes are unconstitutional. Supporters hope to have it adopted by county lawmakers or voters in every Oregon county. 
"We're doing this at a grassroots level, county by county. We're letting the citizens of the counties decide, rather than the state," he said. Penny Okamoto, executive director of Ceasefire Oregon, the group behind IP-43, believes that Robison has it backward. She argues that it's the Second Amendment Preservation Ordinance that's unconstitutional because local law cannot pre-empt state and federal laws. "These types of laws might make someone feel good about something, but they aren't constitutional and are setting up counties for a lawsuit that I truly hope never happens because it would probably be someone who was shot because the law wasn't enforced suing the county," she said. Others who might have standing to sue could include a county official fired for enforcing a gun law that the sheriff has declared unconstitutional under the ordinance. Okamoto also suggested such laws could encourage criminal activity. "When law enforcement officials say they are not going to enforce laws like Oregon's universal background check law, it invites criminals to come to the county to buy guns because law enforcement won't do anything about it," she said. Local law enforcement Deschutes County Sheriff Shane Nelson recently publicly announced opposition to IP-43 and support for the Second Amendment. While he agrees with the ideals behind the Second Amendment Preservation Ordinance, he says he does not support it as a ballot initiative. "I respect and understand the reasoning to want to establish this type of ordinance and I support the spirit of the ordinance, but I believe that the U.S. Constitution carries much more weight and respect than a county ordinance. I believe the Second Amendment is sufficient and this would not be an effective or good use of county ordinances," he said via an email from a department spokesman. He also said that the better way to deal with unconstitutional laws is to work through the courts, advocacy organizations and the legislature to change them.
Nelson added that he does not agree with a local ordinance giving him additional job responsibilities—such as deciding if gun laws are constitutional—that go beyond what state law sets out as his job description. Under state law, District Attorney John Hummel was responsible for writing the ballot title and summary of the measure, which you can read on the Source website. "Oregon law required me to succinctly and accurately describe the content of the proposed ballot measure and to explain the result of its passage. I worked hard to craft a summary that will well inform voters and I'm proud of the result," he said. "Now that my legal work is done, I hope voters join me in rejecting this proposal that is clearly unconstitutional and, if enacted, would make our community less safe." Already the law elsewhere The Deschutes County version of the Second Amendment Preservation Ordinance is somewhat vague on the specifics, but a version already adopted in other Oregon counties was clearer. In 2015, Coos County voted 61 percent to 39 percent to adopt a similar measure. The same year, the Wheeler County Court unanimously adopted one without even going to voters. Those counties' versions explain that laws regulating semi-automatic weapons, assault-style firearms, magazine size, ammunition types and modifications such as bump stocks are all unenforceable in the counties. They also remove background check requirements for private and internet sales. Coos County Sheriff Craig Zanni said that since the Second Amendment Preservation Ordinance passed there, it has not affected his job and he has not been asked to determine whether any local, state or federal gun law is constitutional. He supports its spirit, though. "I think it makes a great statement about putting more restrictions on law-abiding citizens instead of dealing with people who are violating the laws we already have," Zanni said. 
Supporters of the Deschutes County Second Amendment Preservation Ordinance must collect 4,144 verified signatures by Aug. 6 to place the measure on the Nov. 6 ballot.
DC Dispatch Arizona Sen. Jeff Flake’s criticism of Trump wins him national prominence His decision to sit out the GOP convention reflects his principles and the purpling of Arizona. When asked whether he would attend the Republican Party convention in Cleveland, Arizona Sen. Jeff Flake told a reporter he would be mowing his lawn instead. Flake wasn’t alone in skipping his party’s biggest bash: His fellow Arizona Republican, Sen. John McCain, opted to campaign in northern Arizona. Alaska Sen. Lisa Murkowski, who also faces reelection, chose to spend the week traveling by small plane and boat to communities along the Yukon River. It’s not unheard of for senators to miss their party’s conventions, especially if they’re running for reelection and need the time to connect with constituents. But while the others have endorsed nominee Donald Trump, Flake has been an outspoken critic. A first-term senator who clearly has his eyes on the future, Flake has been propelled onto the national stage by his principled disapproval of Trump’s positions on race and immigration. It has also won him accolades back home in a state that surely faces a major political shift during what Flake, who is 53, clearly hopes will be a long political career. The senator declined to be interviewed by High Country News. But last week, in his second of two interviews on NPR this month, Flake recounted his heated exchange with Trump during a private meeting with Republican senators in the Capitol in early June. “Donald Trump pointed at me and said, ‘You've been very critical of me,’” Flake told NPR. “And I said, ‘Yes I have.’” Flake rebuked Trump for saying that federal Judge Gonzalo Curiel was biased because of his Mexican heritage and for calling Mexicans rapists. “We can't hope to win the White House using language like that,” Flake said. Flake warned that Trump could lose Arizona, long a red state. In fact, early polls show Trump and Hillary Clinton in a tight race.
Flake, a conservative Mormon who comes from a political family in Arizona, knows from personal experience that his state is no longer an easy win for Republicans. He was a veteran House representative when he ran for Senate in 2012, but his race grew closer as the election neared. He won by just 49 percent to 46 percent. Polling showed Latino voters vastly favored his Democratic opponent Richard Carmona. Although all statewide elected officials currently are Republicans, the inexorable shifts in demographics underway in Arizona mean that the state will likely follow the pattern of Colorado and Nevada, which are solidly purple. “The Latino population is the fastest-growing population in the state and in the not-too-distant future will eclipse the white population. The Republican Party needs to embrace and understand diversity to survive into the future,” says Fred Solop, a professor of politics at Northern Arizona University. “The demographics are changing; we will become a purple state.” So perhaps it’s not surprising that Flake has been willing to take issue so publicly with his party’s nominee on race and immigration. He was one part of the bipartisan so-called “gang of eight,” which passed immigration reform in the Senate in 2013. The legislation went nowhere in the GOP-dominated House. He also sided with President Obama on normalizing relations with Cuba. Flake supports Trump’s vice presidential pick, Indiana Gov. Mike Pence, and hopes to be able to endorse Trump in the future, but not until he stops insulting people from diverse backgrounds. “He cannot continue to call judges born in Indiana a Mexican in a pejorative way and expect to win independents and be in the White House,” Flake told KFYI Radio earlier this week. Flake’s positions on Trump seem to stem from the senator’s deeply held values, not just political expediency. 
In December, after Trump advocated banning Muslims from entering the United States, Flake visited a mosque and denounced the anti-Muslim rhetoric of the campaign as “not in keeping with the values and ideals that have made this country the shining city on the hill that it is.” Even Democrats in Arizona believe Flake’s critiques of Trump reflect the senator’s ethics and their own. For instance, Deb Gericke Mcleod from Chandler, Arizona, posted on Facebook that she nearly always opposes Flake’s politics, but “he deserves plenty of kudos for standing up to that vile, disgusting man and his syncopaths (sic). Finally there is a Republican who isn't afraid to say, ‘ENOUGH’ and then stand up for what is correct.” Many Arizonans shared Flake’s outrage when Trump suggested that McCain was not a war hero; McCain never said much about the insults, but Flake did. “Arizonans feel insulted and astounded that McCain just takes it and smiles,” says Bill Scheel, a 34-year veteran of Arizona politics and a partner at Javelina, a campaign consultancy and public relations firm that largely works on Democratic issues. “Flake’s principled stand is very much in contrast to the disappointment people have in McCain for not standing up.” But of course McCain, at 79, is nearing the end of his career. So Flake has a lot more incentive to make sure Trump’s rhetoric doesn’t hasten the purpling of Arizona.
The late sixties in Germany represent a new era of freedom and revolutions in sex, music and culture. Yet following a confrontation with his stepfather, the rebellious fourteen-year-old Wolfgang is sent to Freistatt, a foster home for difficult children. Once there, Wolfgang resists the brutal working conditions and education methods of the wardens. But how long can he withstand a system of violence and oppression that seems so at odds with the rest of society? Marc Brummund's feature film debut holds a lens to the stark realities of the foster system, and its effect on the nascent adulthood of Germany's forgotten boys. Sanctuary premiered at Baden-Baden, Beijing and Göteborg, and won the Golden Lola, Germany's equivalent of the Oscar, for Best Screenplay. Trailer Press "Approximately 800,000 children and adolescents were admitted to homes in West Germany between 1949 and 1975, according to the Home Care Fund of the Ministry of Family Affairs. As opposed to ecclesiastical institutions in which clergy looked after the children, the educational concept of these homes was based on drill, discipline and submission, even by physical violence... Sanctuary's images settle deeper in the mind and the gut of the audience than would a documentary, communicating inner and outer resistance possibilities... Not only is the film's topic socially important and too little discussed; Sanctuary is worth seeing because these feelings are important, too"
A practical guide to evaluating cardiovascular, renal, and pulmonary function in mice. The development and widespread use of genetically altered mice to study the role of various proteins in biological control systems have led to a renewed interest in methodologies and approaches for evaluating physiological phenotypes. As a result, cross-disciplinary approaches have become essential for fully realizing the potential of these new and powerful animal models. The combination of classical physiological approaches and modern innovative technology has given rise to an impressive arsenal for evaluating the functional results of genetic manipulation in the mouse. This review attempts to summarize some of the techniques currently being used for measuring cardiovascular, renal, and pulmonary variables in the intact mouse, with specific attention to practical considerations useful for their successful implementation.
1. Introduction {#sec1-sensors-18-03289} =============== Hyperspectral images contain both spatial and spectral characteristics. In recent years, they have been widely used in agriculture and forestry research, marine monitoring, natural disaster monitoring, and military reconnaissance \[[@B1-sensors-18-03289]\]. However, as remote sensing technology develops and the resolution of hyperspectral data rises, the amount of data has increased dramatically, which puts tremendous pressure on the transmission and storage of hyperspectral images \[[@B2-sensors-18-03289],[@B3-sensors-18-03289]\]. One way to address this problem is through hardware, such as increasing storage capacity, but that approach inevitably raises costs significantly. A more feasible approach is effective data compression, which tackles the problem at the data source by representing all of the information with a small amount of data. The compressed sensing theory was proposed by Donoho et al. in 2006 \[[@B4-sensors-18-03289]\]. The theory states that if a signal is sparse, either in itself or in some transform domain, it can be sampled with far fewer measurements than the Nyquist sampling criterion requires, and reconstructed accurately from these samples \[[@B5-sensors-18-03289]\]. Berger \[[@B6-sensors-18-03289]\] pointed out that the high correlation of the signal itself helps improve the compression ratio and the reconstruction quality of compressed sensing. Unlike ordinary 2D images, hyperspectral images contain high interspectral and interspatial correlation. How to make full use of these characteristics of hyperspectral images to improve reconstruction performance is an active research area in hyperspectral compressive sensing. Huang et al.
proposed a block compressive sensing (BCS) of hyperspectral images based on prediction error \[[@B7-sensors-18-03289]\]. Lin et al. proposed a hyperspectral image compression algorithm based on adaptive band grouping \[[@B8-sensors-18-03289]\]. Zhang proposed a structured sparsity-based hyperspectral blind compressive sensing (SSHBCS) method to sparsify hyperspectral images \[[@B9-sensors-18-03289]\]. Spatial autocorrelation coefficients were involved in the strategy of spatial adaptive partitioning to determine the size of the block \[[@B10-sensors-18-03289]\]. Gao pointed out that the k-means clustering algorithm was suitable for spectral adaptive grouping \[[@B11-sensors-18-03289]\]. The Gaussian measurement matrix \[[@B12-sensors-18-03289]\], the discrete cosine transform (DCT) sparse dictionary \[[@B13-sensors-18-03289],[@B14-sensors-18-03289]\], and the stagewise orthogonal matching pursuit (StOMP) algorithm \[[@B15-sensors-18-03289],[@B16-sensors-18-03289]\] have been used in hyperspectral compressive sensing. Xu et al. proposed an adaptive grouping distributed compressive sensing reconstruction (AGDCS) of plant hyperspectral data \[[@B17-sensors-18-03289]\]. A sparse and low-rank near-isometric linear embedding (SLRNILE) method based on the Johnson-Lindenstrauss lemma was proposed for dimensionality reduction and for extracting proper features for hyperspectral imagery (HSI) classification \[[@B18-sensors-18-03289]\]. A robust kernel archetypoid analysis (RKADA) method was proposed to extract pure endmembers from HSI \[[@B19-sensors-18-03289]\], in which each pixel is assumed to be a sparse linear mixture of all endmembers and each endmember corresponds to a real pixel in the image scene. A fast and robust principal component analysis on Laplacian graph (FRPCALG) method was proposed to select bands of hyperspectral imagery \[[@B20-sensors-18-03289]\].
In our previous research \[[@B21-sensors-18-03289]\], we developed SSCS technology for plant hyperspectral data in the spectral domain. Huang et al. introduced BCS for hyperspectral images in the spatial domain \[[@B7-sensors-18-03289]\]. Hyperspectral images have strong spectral and spatial correlations. Compressive sensing that exploits both the spectral and spatial correlations can further improve the sparse representation of hyperspectral images, which in turn improves the accuracy of reconstruction. Therefore, the strategies of interspatial blocking and interspectral grouping still need further study. In order to further improve the compression and reconstruction performance of hyperspectral compressive sensing, an adaptive interspatial blocking strategy, an adaptive interspectral grouping strategy and linear interspectral prediction technology are integrated to construct the new prediction-based spatial-spectral adaptive hyperspectral compressive sensing (PSSAHCS) algorithm, which can not only compress and reconstruct hyperspectral images effectively, but also has strong denoising performance. In this paper, the row correlation and column correlation of hyperspectral images are studied according to the spatial autocorrelation coefficients \[[@B22-sensors-18-03289]\], and used to determine the optimal block size. In addition, after analyzing the interspectral correlation of adjacent bands \[[@B22-sensors-18-03289],[@B23-sensors-18-03289],[@B24-sensors-18-03289],[@B25-sensors-18-03289]\], a k-means clustering algorithm \[[@B26-sensors-18-03289],[@B27-sensors-18-03289],[@B28-sensors-18-03289]\] is introduced to group the hyperspectral images in the spectral domain, so that all highly correlated bands are divided into the same group.
At the same time, it can be seen from the spectral correlation curve that the correlation of some adjacent bands decreases significantly, and that the spectral curves are very jittery near these bands. Gao \[[@B11-sensors-18-03289]\] pointed out that this phenomenon is caused by the significant absorption of electromagnetic waves in these bands by the atmosphere, which means that the images in these bands contain a lot of noise. Therefore, this paper introduces the idea of intragroup prediction to improve the reconstruction quality of these noisy bands. A reference image is chosen in the group, and then the rest of the images in the group are predicted using the reference image. The residual image is calculated by subtracting the intragroup prediction image from the original band image, and then the residual image is encoded and compressed. The residual image is then reconstructed using a reconstruction algorithm. Finally, the reconstructed band can be obtained from the reconstructed residual image and the corresponding predicted image \[[@B29-sensors-18-03289]\]. 2. Methods {#sec2-sensors-18-03289} ========== 2.1. PSSAHCS Algorithm {#sec2dot1-sensors-18-03289} ---------------------- [Figure 1](#sensors-18-03289-f001){ref-type="fig"} is the flowchart of the PSSAHCS algorithm and experiments. Firstly, the spatial correlation of hyperspectral images is analyzed and the appropriate ranges of row correlation coefficients and column correlation coefficients are obtained to determine the spatial block size. Secondly, the spectral correlation of the adjacent bands is calculated in the spectral domain and the grouping of hyperspectral images is adaptively decided using the k-means clustering algorithm. Thirdly, the local means and local standard deviations (LMLSD) criterion is used to choose the optimal band with the lowest noise as the key band in a group, and the non-key bands are linearly predicted from the key bands.
Fourthly, the Gaussian measurement matrix is used to compress the key bands, and DCT is used as the sparse dictionary, combined with the Gaussian measurement matrix, to construct the sensing matrix. Finally, the reconstruction results are evaluated from the spatial domain and the spectral domain, respectively. The spatial evaluation is performed from the three perspectives of subjective evaluation, the peak signal-to-noise ratio, and the spatial autocorrelation coefficient. The spectral evaluation is performed at two levels: spectral curve comparison and correlation between spectra. 2.2. Adaptive Spatial Blocking {#sec2dot2-sensors-18-03289} ------------------------------ Similar to ordinary two-dimensional images, hyperspectral images show a certain spatial correlation. The spatial correlation of hyperspectral images is caused by the similarities between the local structures of the objects, adjacent pixels or similar pixels in the same band. The spatial correlation is generally expressed by the spatial correlation coefficient, η(Δ*x*, Δ*y*, *z*), as follows:$$\mathsf{\eta}\left( {\Delta x,\Delta y,z} \right) = \iint f_{x,y,z} \times f_{x + \Delta x,y + \Delta y,z}dxdy$$ where $f_{x,y,z}$ represents the gray value of the pixel in the *x*-th row and the *y*-th column of the *z*-th band, and Δ*x* and Δ*y* represent the row and column offsets between the target pixel and the current pixel, respectively. Because the above equation is not convenient for calculation, it is discretized and normalized to the following equation:$$\mathsf{\eta}\left( {\Delta x,\Delta y,z} \right) = \frac{\sum_{x = 1}^{a}\sum_{y = 1}^{b}{({f_{x,y,z} - \overline{f_{z}}})} \times {({f_{x + \Delta x,y + \Delta y,z} - \overline{f_{z}}})}}{\sum_{x = 1}^{a}\sum_{y = 1}^{b}{({f_{x,y,z} - \overline{f_{z}}})}^{2}}$$ where *a* and *b* represent the number of rows and columns of the image, respectively; $\overline{f_{z}}$ denotes the average gray level of the *z*-th band image of the hyperspectral image.
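As a concrete illustration, the discretized autocorrelation of Equation (2) can be sketched in a few lines of NumPy. The border handling (restricting the numerator to the overlap region of the shifted band) is our assumption; the text does not spell it out.

```python
import numpy as np

def spatial_corr(band, dx, dy):
    """Discretized, normalized spatial autocorrelation of one band at lag
    (dx, dy), following Equation (2). Summing over the overlap region of
    the shifted band is a practical assumption for the image borders."""
    f = band.astype(float)
    mean = f.mean()
    a, b = f.shape
    base = f[:a - dx, :b - dy] - mean    # pixels f(x, y)
    shifted = f[dx:, dy:] - mean         # pixels f(x + dx, y + dy)
    return (base * shifted).sum() / ((f - mean) ** 2).sum()

# a smooth gradient image is highly self-correlated at small lags
band = np.add.outer(np.arange(64.0), np.arange(64.0))
print(spatial_corr(band, 0, 0))   # exactly 1.0 at zero lag
print(spatial_corr(band, 1, 1))   # close to 1 for a smooth band
```

The block size of Equation (3) would then be chosen as the smallest lag at which this coefficient stays within the target 0.9–0.95 range.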
$$M = \min\left( {\Delta x,\Delta y,z} \right)\ s.t.\ 0.9 \leq \mathsf{\eta}\left( {\Delta x,\Delta y,z} \right) \leq 0.95$$ where M is the block size. 2.3. Adaptive Spectral Grouping {#sec2dot3-sensors-18-03289} ------------------------------- ### 2.3.1. Adaptive Spectral Grouping Using k-Means Clustering Algorithm {#sec2dot3dot1-sensors-18-03289} Given the distribution of spectral correlation, highly correlated bands should be divided into the same group to make full use of the interspectral redundancy. The k-means clustering algorithm is used to group Camellia sinensis hyperspectral images. The basic idea of the k-means clustering algorithm is as follows: in the initial stage, k centroids are given as the initial cluster centers; the distance between each sample and the k centroids is then calculated, and each sample is assigned to its nearest centroid; each cluster then recalculates its mean value as the new centroid; these steps are repeated until the centroids no longer change. In the k-means clustering algorithm, the Euclidean distance is generally used to measure the distance between the samples and the centroids. For tea hyperspectral images, the distance between each sample and the centroid can be calculated as follows:$$D\left( {z_{i},z_{c}} \right) = \sqrt{\sum_{x = 1}^{a}\sum_{y = 1}^{b}\left( {f_{x,y,z_{i}} - f_{x,y,z_{c}}} \right)^{2}}$$ where $z_{i}$ denotes the *i*-th band, and $z_{c}$ denotes the band serving as the cluster centroid. ### 2.3.2. LMLSD {#sec2dot3dot2-sensors-18-03289} After the bands are grouped by spectral clustering, it is necessary to select the image with the least noise from each group as the key image. In this paper, LMLSD \[[@B18-sensors-18-03289]\] is used to find the minimum-noise image in the group.
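The grouping step of Section 2.3.1 can be sketched as below, using the band-to-band Euclidean distance of Equation (4). The random initialization, iteration cap and stopping rule are our assumptions, not the paper's exact settings.

```python
import numpy as np

def kmeans_band_grouping(cube, k, iters=50, seed=0):
    """Group the bands of an (a, b, c) hyperspectral cube into k clusters,
    measuring band-to-band distance with the Euclidean metric of Equation (4)."""
    a, b, c = cube.shape
    X = cube.reshape(a * b, c).T.astype(float)        # one row per band
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(c, size=k, replace=False)]
    for _ in range(iters):
        # distance of every band to every centroid (Equation (4))
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centroids; keep the old one if a cluster goes empty
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels

# two synthetic "spectral regions": dark bands 0-4, bright bands 5-9
cube = np.concatenate([np.zeros((8, 8, 5)), np.ones((8, 8, 5))], axis=2)
labels = kmeans_band_grouping(cube, 2)
print(labels)  # bands 0-4 share one label, bands 5-9 the other
```

On real data the contiguous, highly correlated bands seen in the spectral correlation curve naturally fall into the same cluster.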
The calculation equations of LMLSD are as follows:$$M_{num}\left( z \right) = \frac{1}{a \times b}\sum_{i = 1}^{a}\sum_{j = 1}^{b}f_{num}\left( {i,j,z} \right)$$ $$D_{num}\left( z \right) = \sqrt{\frac{1}{\left( {a \times b - 1} \right)}\sum_{i = 1}^{a}\sum_{j = 1}^{b}\left( {f_{num}\left( {i,j,z} \right) - M_{num}\left( z \right)} \right)^{2}}$$ $$R\left( z \right) = 20lg\frac{M_{mean}\left( z \right)}{D_{mean}\left( z \right)}$$ In Equation (5), *a*, *b* are the row and column counts of a sub-block image, respectively, *z* is the *z*-th band in the group, *f~num~* is the *num*-th sub-block, and *M~num~* is the mean gray value of the *num*-th sub-block. In Equation (6), *D~num~* is the standard deviation of the *num*-th sub-block. After obtaining the maximum and minimum values of *D~num~*, the range between them is divided into intervals, the number of sub-blocks falling into each interval is counted, and the mean gray value *M~mean~* and standard deviation *D~mean~* are calculated over all the sub-blocks in the interval containing the most sub-blocks. In Equation (7), *R* is the PSNR value of LMLSD. ### 2.3.3. Spectral Grouping Based on Linear Prediction {#sec2dot3dot3-sensors-18-03289} After grouping by the k-means clustering algorithm, there is a high interspectral correlation within each group, so a linear predictor can be used to predict the images in the group. The linear prediction model is shown below:$$f_{g}\left( {x,y} \right) = m \times f_{R}\left( {x,y} \right) + n$$ where $f_{R}\left( {x,y} \right)$ is the gray value of the pixel in the *x*-th row and the *y*-th column of the reference image in the group; $f_{g}\left( {x,y} \right)$ is the gray value of the pixel in the *x*-th row and *y*-th column of the image to be predicted in the group; and *m* and *n* are prediction coefficients.
Assuming that the size of the image is $a \times b$, the prediction error of each image to be predicted can be $$\mathsf{\varepsilon} = \sqrt{\frac{1}{a \times b}\sum_{x = 1}^{a}\sum_{y = 1}^{b}\left( {f_{g}\left( {x,y} \right) - m \times f_{R}\left( {x,y} \right) - n} \right)^{2}}$$ For Equation (9), in order to minimize *ε*, we need to satisfy Equations (10) and (11):$$\frac{\partial\varepsilon^{2}}{\partial m} = 0$$ $$\frac{\partial\varepsilon^{2}}{\partial n} = 0$$ According to Equations (9)--(11), the solutions for *m* and *n* can be obtained respectively, as shown in Equations (12) and (13):$$m = \frac{R\left( {f_{R}\left( {x,y} \right),f_{g}\left( {x,y} \right)} \right) - u\left( {f_{R}\left( {x,y} \right)} \right) \times u\left( {f_{g}\left( {x,y} \right)} \right)}{R\left( {f_{R}\left( {x,y} \right),f_{R}\left( {x,y} \right)} \right) - u\left( {f_{R}\left( {x,y} \right)} \right)^{2}}$$ $$n = u\left( {f_{g}\left( {x,y} \right)} \right) - \frac{R\left( {f_{R}\left( {x,y} \right),f_{g}\left( {x,y} \right)} \right) - u\left( {f_{R}\left( {x,y} \right)} \right) \times u\left( {f_{g}\left( {x,y} \right)} \right)}{R\left( {f_{R}\left( {x,y} \right),f_{R}\left( {x,y} \right)} \right) - u\left( {f_{R}\left( {x,y} \right)} \right)^{2}} \times u\left( {f_{R}\left( {x,y} \right)} \right)$$ where $$R\left( {f_{R}\left( {x,y} \right),f_{g}\left( {x,y} \right)} \right) = \frac{1}{a \times b}\sum_{x = 1}^{a}\sum_{y = 1}^{b}f_{R}\left( {x,y} \right) \times f_{g}\left( {x,y} \right)$$ $$u\left( {f_{R}\left( {x,y} \right)} \right) = \frac{1}{a \times b}\sum_{x = 1}^{a}\sum_{y = 1}^{b}f_{R}\left( {x,y} \right)$$ $$u\left( {f_{g}\left( {x,y} \right)} \right) = \frac{1}{a \times b}\sum_{x = 1}^{a}\sum_{y = 1}^{b}f_{g}\left( {x,y} \right)$$ After the prediction is completed, the prediction residual for a certain pixel is obtained by subtracting the predicted value from the actual gray value of the pixel. 
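Sections 2.3.2 and 2.3.3 can be sketched together: a simplified LMLSD criterion scores each band per Equations (5)–(7) (the exact interval binning is not fully specified in the text, so the histogram used here is our assumption), and the closed-form coefficients of Equations (12) and (13) then predict a non-key band from the key band.

```python
import numpy as np

def lmlsd_score(band, block=4, bins=10):
    """Simplified LMLSD criterion (Equations (5)-(7)): split the band into
    block x block sub-blocks, bin the sub-block standard deviations, and
    score the band by 20*log10(M_mean / D_mean) over the most populated
    interval. Higher scores indicate less noise."""
    a, b = band.shape
    sub = band[:a - a % block, :b - b % block].astype(float)
    sub = sub.reshape(sub.shape[0] // block, block, -1, block).swapaxes(1, 2)
    sub = sub.reshape(-1, block, block)
    means = sub.mean(axis=(1, 2))
    stds = sub.std(axis=(1, 2), ddof=1)
    hist, edges = np.histogram(stds, bins=bins)
    i = hist.argmax()                      # most populated noise interval
    sel = (stds >= edges[i]) & (stds <= edges[i + 1])
    return 20.0 * np.log10(means[sel].mean() / stds[sel].mean())

def prediction_coefficients(ref, target):
    """Closed-form least-squares solution of Equations (12)-(13) for the
    linear model target ~= m * ref + n."""
    fr, fg = ref.astype(float).ravel(), target.astype(float).ravel()
    R = (fr * fg).mean()                   # cross moment R(f_R, f_g)
    Rrr = (fr * fr).mean()                 # second moment R(f_R, f_R)
    ur, ug = fr.mean(), fg.mean()
    m = (R - ur * ug) / (Rrr - ur ** 2)
    n = ug - m * ur
    return m, n

rng = np.random.default_rng(0)
key = 100.0 + 0.1 * rng.standard_normal((8, 8))      # low-noise band
noisy = 100.0 + 20.0 * rng.standard_normal((8, 8))   # high-noise band
print(lmlsd_score(key) > lmlsd_score(noisy))         # LMLSD prefers the clean band

target = 2.0 * key + 3.0          # exactly linear in the key band
m, n = prediction_coefficients(key, target)
residual = target - (m * key + n) # prediction residual is ~zero here
```

In the full algorithm it is this residual image, not the band itself, that is measured and reconstructed.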
The residual image of the prediction is compressed, and the reconstructed residual is then added to the corresponding predicted image to obtain the reconstructed band. 2.4. Stagewise Orthogonal Matching Pursuit Algorithm {#sec2dot4-sensors-18-03289} ---------------------------------------------------- The stagewise orthogonal matching pursuit (StOMP) algorithm was proposed by Donoho et al. in 2012 \[[@B30-sensors-18-03289]\]. The algorithm is an improvement of the orthogonal matching pursuit (OMP) algorithm \[[@B31-sensors-18-03289]\]. Compared with the OMP algorithm, it selects multiple atoms per iteration. Therefore, the number of iterations is lower than that of the OMP algorithm, which greatly improves the reconstruction efficiency while ensuring the reconstruction accuracy. 2.5. The Evaluation Measures {#sec2dot5-sensors-18-03289} ---------------------------- ### 2.5.1. PSNR {#sec2dot5dot1-sensors-18-03289} The peak signal-to-noise ratio (PSNR) is chosen to evaluate the reconstruction performance in the spatial domain; the mean square error (MSE) and PSNR are defined by $${MSE} = \frac{1}{a \times b \times c}\sum_{x = 1}^{a}\sum_{y = 1}^{b}\sum_{z = 1}^{c}\left| {f_{ori}\left( {x,y,z} \right) - f_{rec}\left( {x,y,z} \right)} \right|^{2}$$ where *a*, *b* and *c* are the row, column and band counts of the hyperspectral images, respectively, $f_{ori}$ is the original image, and $f_{rec}$ is the reconstructed image. $${PSNR} = 10 \times log_{10}\frac{\left( {2^{n} - 1} \right)^{2}}{MSE}$$ where *n* is the bit depth of the image. ### 2.5.2. Interspectral Correlation {#sec2dot5dot2-sensors-18-03289} The interspectral correlation of hyperspectral images arises because a given object reflects across different wavebands, so there is a high correlation between pixels at the same spatial position in adjacent bands.
The interspectral correlation in hyperspectral images is usually expressed by the spectral correlation coefficient $\mathsf{\zeta}\left( {z_{1},z_{2}} \right)$. The calculation of the spectral correlation coefficient is shown in Equation (19):$$\mathsf{\zeta}\left( {z_{1},z_{2}} \right) = \frac{\sum_{x = 1}^{a}\sum_{y = 1}^{b}{\left( {f_{x,y,z_{1}} - {\overline{f}}_{z_{1}}} \right)} \times {\left( {f_{x,y,z_{2}} - {\overline{f}}_{z_{2}}} \right)}}{\sqrt{\sum_{x = 1}^{a}\sum_{y = 1}^{b}{\left( {f_{x,y,z_{1}} - {\overline{f}}_{z_{1}}} \right)}^{2} \times \sum_{x = 1}^{a}\sum_{y = 1}^{b}{\left( {f_{x,y,z_{2}} - {\overline{f}}_{z_{2}}} \right)}^{2}}}$$ where $z_{1}$ and $z_{2}$ represent different bands of the hyperspectral images, and ${\overline{f}}_{z}$ is the mean gray value of band *z*. 3. Experimental Results and Discussion {#sec3-sensors-18-03289} ====================================== 3.1. Data Description {#sec3dot1-sensors-18-03289} --------------------- A visible and near-infrared hyperspectral imaging system covering the spectral wavelengths of 380--1030 nm was used in this study. The system includes an imaging spectrograph, a charge coupled device (CCD) camera (C8484-05, Hamamatsu City, Japan), a lens, two light sources provided by two 150 W quartz tungsten halogen lamps and V10E software (Isuzu Optics Corp, Hsinchu County, Taiwan) for operating the hyperspectral image system. The area CCD array detector of the camera has 672 × 512 pixels and the spectral resolution is 2.8 nm. The data used in the experiment are hyperspectral images of 12 samples of *Camellia sinensis* (tea). A single pixel is stored as a 12-bit unsigned integer and the resolution of the processed image is 128 × 256. 3.2. Performance Evaluation in the Spatial Domain {#sec3dot2-sensors-18-03289} ------------------------------------------------- ### 3.2.1. Subjective Performance Comparison {#sec3dot2dot1-sensors-18-03289} SSCS, block hyperspectral compressive sensing (BHCS), AGDCS and PSSAHCS are used to give the comparative experimental results.
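As a concrete reading of Equation (19), the following Python sketch computes ζ(z₁, z₂) for an a × b × c cube stored as a NumPy array with the band index last (the function name is ours):

```python
import numpy as np

def spectral_correlation(cube, z1, z2):
    """Spectral correlation coefficient zeta(z1, z2) of Equation (19):
    the Pearson correlation between bands z1 and z2 over all (x, y)."""
    b1 = cube[:, :, z1] - cube[:, :, z1].mean()  # f_{x,y,z1} - mean of band z1
    b2 = cube[:, :, z2] - cube[:, :, z2].mean()
    return (b1 * b2).sum() / np.sqrt((b1 ** 2).sum() * (b2 ** 2).sum())
```

Two bands related by an affine transform give ζ = ±1; uncorrelated bands give ζ near 0.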
[Figure 2](#sensors-18-03289-f002){ref-type="fig"} shows the original images of the 440 nm, 620 nm and 980 nm bands. [Figure 3](#sensors-18-03289-f003){ref-type="fig"}, [Figure 4](#sensors-18-03289-f004){ref-type="fig"}, [Figure 5](#sensors-18-03289-f005){ref-type="fig"}, [Figure 6](#sensors-18-03289-f006){ref-type="fig"}, [Figure 7](#sensors-18-03289-f007){ref-type="fig"}, [Figure 8](#sensors-18-03289-f008){ref-type="fig"}, [Figure 9](#sensors-18-03289-f009){ref-type="fig"}, [Figure 10](#sensors-18-03289-f010){ref-type="fig"}, [Figure 11](#sensors-18-03289-f011){ref-type="fig"}, [Figure 12](#sensors-18-03289-f012){ref-type="fig"}, [Figure 13](#sensors-18-03289-f013){ref-type="fig"} and [Figure 14](#sensors-18-03289-f014){ref-type="fig"} show the reconstructed hyperspectral images of the 440 nm, 620 nm and 980 nm bands at different bit rates for different algorithms. It can be seen from [Figure 3](#sensors-18-03289-f003){ref-type="fig"}, [Figure 4](#sensors-18-03289-f004){ref-type="fig"}, [Figure 5](#sensors-18-03289-f005){ref-type="fig"}, [Figure 6](#sensors-18-03289-f006){ref-type="fig"}, [Figure 7](#sensors-18-03289-f007){ref-type="fig"}, [Figure 8](#sensors-18-03289-f008){ref-type="fig"}, [Figure 9](#sensors-18-03289-f009){ref-type="fig"}, [Figure 10](#sensors-18-03289-f010){ref-type="fig"}, [Figure 11](#sensors-18-03289-f011){ref-type="fig"}, [Figure 12](#sensors-18-03289-f012){ref-type="fig"}, [Figure 13](#sensors-18-03289-f013){ref-type="fig"} and [Figure 14](#sensors-18-03289-f014){ref-type="fig"} that the subjective quality of the reconstructed hyperspectral images improves for all algorithms as the bit rate rises, especially in details such as edges, veins and leaf stems. At the same time, it can be seen that for the 620 nm image, which contains no significant noise, BHCS and PSSAHCS achieve a good reconstruction effect at different bit rates.
For the reconstructed 440 nm and 980 nm images, which contain significant noise, there is an "edge effect" for SSCS, BHCS and AGDCS at low bit rates, while PSSAHCS can denoise effectively. Therefore, PSSAHCS can not only retain the details of the original image, but also remove the noise effectively at different bit rates. ### 3.2.2. The Peak Signal-to-Noise Ratio (PSNR) Performance Comparison {#sec3dot2dot2-sensors-18-03289} [Figure 15](#sensors-18-03289-f015){ref-type="fig"} shows the PSNR of the reconstructed hyperspectral images of the 440 nm, 620 nm and 980 nm bands at different bit rates for different algorithms. As can be seen from [Figure 15](#sensors-18-03289-f015){ref-type="fig"}, the fidelity of the reconstructed images of all algorithms improves as the bit rate increases. The reconstructed PSNRs of PSSAHCS for most bands are significantly higher than those of SSCS and BHCS at different bit rates. [Table 1](#sensors-18-03289-t001){ref-type="table"} shows the average PSNR for the different algorithms at different bit rates. The average PSNR of PSSAHCS is significantly higher than those of SSCS, BHCS and AGDCS at the same bit rates: PSSAHCS achieves an average PSNR about 2 dB higher. ### 3.2.3. Comparison of Spatial Correlation {#sec3dot2dot3-sensors-18-03289} Spatial correlation is one of the characteristics of hyperspectral images. [Figure 16](#sensors-18-03289-f016){ref-type="fig"}, [Figure 17](#sensors-18-03289-f017){ref-type="fig"} and [Figure 18](#sensors-18-03289-f018){ref-type="fig"} show the row and column correlation curves of the reconstructed hyperspectral images of 440 nm, 620 nm and 980 nm of different algorithms at different bit rates.
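The row- and column-correlation curves in these figures can be read as Pearson correlations between pixel rows (or columns) separated by a given interval; a minimal sketch under that assumption (the function names are ours):

```python
import numpy as np

def row_correlation(img, interval):
    """Correlation between pixels `interval` rows apart: every pixel of
    rows [0, H - interval) paired with the pixel `interval` rows below it."""
    a = img[:-interval, :].ravel().astype(float)
    b = img[interval:, :].ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

def column_correlation(img, interval):
    """Same measure along columns, via the transposed image."""
    return row_correlation(img.T, interval)
```

Sweeping `interval` from 1 upward traces a curve like those in Figures 16–18; for natural images the correlation decays as the interval grows.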
It can be seen that the row and column correlation curves of the reconstructed tea hyperspectral images and the original images show the same trend; that is, as the interval increases, the correlation drops. In addition, the row and column correlations of the different reconstruction algorithms at 440 nm and 980 nm are higher than those of the original image. This is because the StOMP reconstruction algorithm has a certain denoising ability, and the correlation is clearly improved after denoising. Moreover, the row and column correlations of PSSAHCS are slightly higher than those of SSCS, BHCS and AGDCS for different bands. 3.3. Comparison in the Spectral Domain {#sec3dot3-sensors-18-03289} -------------------------------------- ### 3.3.1. Comparison of Spectral Curve {#sec3dot3dot1-sensors-18-03289} A spectral curve is an important way to describe and distinguish different features in hyperspectral images. [Figure 19](#sensors-18-03289-f019){ref-type="fig"} shows the reconstructed spectral curves of different algorithms at different compression rates. It can be seen that the reconstructed spectral curves of the different algorithms come progressively closer to the original spectral curves as the compression rate increases. PSSAHCS puts similar bands into the same group, and then uses the prediction algorithm to perform linear prediction to improve the degree of linearity within the group. Therefore, the reconstructed spectral curves of PSSAHCS are clearly smoother than those of SSCS, BHCS and AGDCS at different bit rates. At the same time, the linear prediction algorithm helps to remove noise, which is useful for hyperspectral imagery. ### 3.3.2. Spectral Correlation Comparison {#sec3dot3dot2-sensors-18-03289} Interspectral correlations of hyperspectral images are actually much higher than their spatial correlations.
[Figure 20](#sensors-18-03289-f020){ref-type="fig"} shows the spectral correlation curves of the reconstructed tea hyperspectral images of different algorithms at different compression rates. It shows that the ends of the spectral correlation curves of the original tea hyperspectral images decrease significantly. Additionally, the interspectral correlations of the reconstructed tea hyperspectral images of PSSAHCS are better than those of SSCS, BHCS and AGDCS, especially for the bands with wavelengths larger than 700 nm. 4. Conclusions {#sec4-sensors-18-03289} ============== Spatial adaptive blocking, which is based on the row and column correlations of hyperspectral images, can utilize the spatial correlation effectively. Spectral adaptive grouping divides the bands with high spectral correlation into the same group, so that it can make full use of the interspectral correlation. Moreover, the prediction-based strategy uses a linear model to denoise the hyperspectral images significantly. Therefore, the proposed PSSAHCS algorithm shows strong potential for hyperspectral image compression. Conceptualization and Methodology, P.X.; Software and Validation, B.C.; Supervision, J.Z. and L.Z.; Funding Acquisition, L.X. and J.Z. This project was funded by the State Scholarship Fund of China Scholarship Council, the Joint Funds of the National Natural Science Foundation of China under Grant No. U1609218, the National Key Foundation for Exploring Scientific Instrument of China under Grant No. 61427808, the National Natural Science Foundation of China under Grant Nos. 41671415 and 61205200, and the Zhejiang Public Welfare Technology Application Research Project of China under Grant No. 2016C32087. The authors declare no conflict of interest. ![The flowchart of the prediction-based spatial-spectral adaptive hyperspectral compressive sensing (PSSAHCS) algorithm and experiments.](sensors-18-03289-g001){#sensors-18-03289-f001} ![Original images of different wavelengths.
(**a**) 440 nm; (**b**) 660 nm; (**c**) 980 nm.](sensors-18-03289-g002){#sensors-18-03289-f002} ![Reconstructed images of the 440 nm band of single spectral compression sensing (SSCS). (**a**) 0.10 bytes per pixel (bpp); (**b**) 0.15 bpp; (**c**) 0.20 bpp; (**d**) 0.25 bpp.](sensors-18-03289-g003){#sensors-18-03289-f003} ![Reconstructed images of the 660 nm band of SSCS. (**a**) 0.10 bpp; (**b**) 0.15 bpp; (**c**) 0.20 bpp; (**d**) 0.25 bpp.](sensors-18-03289-g004){#sensors-18-03289-f004} ![Reconstructed images of the 980 nm band of SSCS. (**a**) 0.10 bpp; (**b**) 0.15 bpp; (**c**) 0.20 bpp; (**d**) 0.25 bpp.](sensors-18-03289-g005){#sensors-18-03289-f005} ![Reconstructed images of the 440 nm band of BHCS. (**a**) 0.10 bpp; (**b**) 0.15 bpp; (**c**) 0.20 bpp; (**d**) 0.25 bpp.](sensors-18-03289-g006){#sensors-18-03289-f006} ![Reconstructed images of the 660 nm band of BHCS. (**a**) 0.10 bpp; (**b**) 0.15 bpp; (**c**) 0.20 bpp; (**d**) 0.25 bpp.](sensors-18-03289-g007){#sensors-18-03289-f007} ![Reconstructed images of the 980 nm band of BHCS. (**a**) 0.10 bpp; (**b**) 0.15 bpp; (**c**) 0.20 bpp; (**d**) 0.25 bpp.](sensors-18-03289-g008){#sensors-18-03289-f008} ![Reconstructed images of the 440 nm band of AGDCS. (**a**) 0.10 bpp; (**b**) 0.15 bpp; (**c**) 0.20 bpp; (**d**) 0.25 bpp.](sensors-18-03289-g009){#sensors-18-03289-f009} ![Reconstructed images of the 660 nm band of AGDCS. (**a**) 0.10 bpp; (**b**) 0.15 bpp; (**c**) 0.20 bpp; (**d**) 0.25 bpp.](sensors-18-03289-g010){#sensors-18-03289-f010} ![Reconstructed images of the 980 nm band of AGDCS. (**a**) 0.10 bpp; (**b**) 0.15 bpp; (**c**) 0.20 bpp; (**d**) 0.25 bpp.](sensors-18-03289-g011){#sensors-18-03289-f011} ![Reconstructed images of the 440 nm band of PSSAHCS. (**a**) 0.10 bpp; (**b**) 0.15 bpp; (**c**) 0.20 bpp; (**d**) 0.25 bpp.](sensors-18-03289-g012){#sensors-18-03289-f012} ![Reconstructed images of the 660 nm band of PSSAHCS. 
(**a**) 0.10 bpp; (**b**) 0.15 bpp; (**c**) 0.20 bpp; (**d**) 0.25 bpp.](sensors-18-03289-g013){#sensors-18-03289-f013} ![Reconstructed images of the 980 nm band of PSSAHCS. (**a**) 0.10 bpp; (**b**) 0.15 bpp; (**c**) 0.20 bpp; (**d**) 0.25 bpp.](sensors-18-03289-g014){#sensors-18-03289-f014} ![PSNR of reconstructed images at different bit rates for different algorithms. (**a**) SSCS; (**b**) BHCS; (**c**) AGDCS; (**d**) PSSAHCS.](sensors-18-03289-g015){#sensors-18-03289-f015} ###### Row and column correlations for the 440 nm band of different algorithms. (**a**) Row correlation of SSCS; (**b**) column correlation of SSCS; (**c**) row correlation of BHCS; (**d**) column correlation of BHCS; (**e**) row correlation of AGDCS; (**f**) column correlation of AGDCS; (**g**) row correlation of PSSAHCS; (**h**) column correlation of PSSAHCS. ![](sensors-18-03289-g016a) ![](sensors-18-03289-g016b) ###### Row and column correlations for the 620 nm band of different algorithms. (**a**) Row correlation of SSCS; (**b**) column correlation of SSCS; (**c**) row correlation of BHCS; (**d**) column correlation of BHCS; (**e**) row correlation of AGDCS; (**f**) column correlation of AGDCS; (**g**) row correlation of PSSAHCS; (**h**) column correlation of PSSAHCS. ![](sensors-18-03289-g017a) ![](sensors-18-03289-g017b) ###### Row and column correlations for the 980 nm band of different algorithms. (**a**) Row correlation of SSCS; (**b**) column correlation of SSCS; (**c**) row correlation of BHCS; (**d**) column correlation of BHCS; (**e**) row correlation of AGDCS; (**f**) column correlation of AGDCS; (**g**) row correlation of PSSAHCS; (**h**) column correlation of PSSAHCS. ![](sensors-18-03289-g018a) ![](sensors-18-03289-g018b) ![Spectral curve comparison of different algorithms at different bit rates. 
(**a**) SSCS; (**b**) BHCS; (**c**) AGDCS; (**d**) PSSAHCS.](sensors-18-03289-g019){#sensors-18-03289-f019} ![Interspectral correlation of reconstructed tea hyperspectral images for different algorithms at different compression rates. (**a**) SSCS; (**b**) BHCS; (**c**) AGDCS; (**d**) PSSAHCS.](sensors-18-03289-g020){#sensors-18-03289-f020} sensors-18-03289-t001_Table 1 ###### The average PSNR (dB) of reconstructed tea hyperspectral images at different bit rates.

| Different Algorithms | 0.10 bpp | 0.15 bpp | 0.20 bpp | 0.25 bpp |
|----------------------|----------|----------|----------|----------|
| SSCS                 | 31.0994  | 32.4488  | 33.3739  | 33.5721  |
| BHCS                 | 32.2594  | 32.8965  | 33.4452  | 33.6834  |
| AGDCS                | 32.5154  | 32.8186  | 33.0399  | 33.3976  |
| PSSAHCS              | 34.6838  | 34.9093  | 35.0225  | 35.0945  |
Requirements of external Ca and Na for the electrical and mechanical responses to noradrenaline in the smooth muscle of guinea-pig vas deferens. Noradrenaline (NA) evoked a contraction which consisted of two components: an initial one associated with depolarization only, and a second one associated with spike discharges. Both contractions were abolished in a Ca-free solution, leaving only a small depolarization. In a Na-free solution, the initial contraction was increased, in keeping with the short-term depolarization, while the second contraction was abolished. It is suggested that NA initially causes Ca influx, which secondarily increases Na permeability.
AP PHOTOS: Brazilian cowboys take on bulls in unique rodeo SERRITA, Brazil (AP) — The men appear on horseback at daybreak, emerging from the thorny thicket of shrubs and small trees that mark the semiarid landscape of the northeastern Brazilian state of Pernambuco. The 200 or so cowboys known in Portuguese as "vaqueiros" are clad head-to-toe in traditional garb called "gibao." The protective leather clothing consists of elaborately stitched chaps, jackets, hats and hand coverings decorated with bits of orange, red and blue that they have donned for the annual festival known as "Pega de Boi," or "Catch the Bull." Most of the "vaqueiros" make the special clothing themselves. Those who cannot, buy their outfits from local artisans for about $200. The cowboys sit on small leather saddles, their boots slipped inside the stirrups, and occasionally take a sip of cachaca, a distilled spirit made from sugarcane juice, out of a bull's horn. Women never participate in the event, and at this particular competition there aren't even any female spectators. Hundreds of bulls supplied by local ranchers have been herded into a corral where they wait to be let loose into the shrub-dotted terrain. "May the fastest win," an announcer shouts over a loudspeaker, and the annual competition begins. The gate swings open and the first bull rushes out into the shrub land, known as the "caatinga," and a team of two cowboys gives chase. The two grasp the bull's tail, knock the animal down, and grab a leather necklace that was earlier placed over its head. They then deliver the necklace to the judge as fast as they can. The exercise is repeated with other bulls and teams, each timed to determine which "vaqueiros" are the fastest. After the winners are declared, the bulls are allowed to roam freely until the following day when they are rounded up and returned to their owners. Joao de Cazuza, 56, recalls how his father and grandfather introduced him to the event when he was a young boy.
They would take more than four days herding bulls through the bush just to get to the festival. He says that he was just 15 when he donned his first "gibao," and "from that moment on I knew I was a 'vaqueiro.'" Deda Carvalho, who is 54, works inside the corral preparing the bulls to be set loose into the "caatinga." Later, he sits on a corral fence made of tree branches with his 19-year-old son, Thiago, who he says is now a "real cowboy" as well. Thiago Carvalho says this is all he has ever wanted. "I grew up listening to the cowboy stories of my father and uncles," says the younger Carvalho. "I am proud to be a 'vaqueiro.'"
// -*- Mode: vala; indent-tabs-mode: nil; tab-width: 4 -*-
//
// Copyright (C) 2011-2012 Giulio Collura
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
//

using Gtk;

namespace Panther.Widgets {

    public class SearchView : Gtk.ScrolledWindow {

        const int MAX_RESULTS = 20;
        const int MAX_RESULTS_BEFORE_LIMIT = 10;

        public signal void start_search (Synapse.SearchMatch search_match, Synapse.Match? target);

        public bool in_context_view { get; private set; default = false; }

        private Gee.HashMap<Backend.App, SearchItem> items;
        private SearchItem selected_app = null;

        private Gtk.Box main_box;
        private Gtk.Box context_box;
        private Gtk.Fixed context_fixed;
        private int context_selected_y;
        private int n_results = 0;

        private int _selected = 0;
        public int selected {
            get { return _selected; }
            set {
                _selected = value;
                var max_index = (int) n_results - 1;

                // cycle
                if (_selected < 0)
                    _selected = max_index;
                else if (_selected > max_index)
                    _selected = 0;

                select_nth (main_box, _selected);

                if (in_context_view)
                    toggle_context (false);
            }
        }

        private int _context_selected = 0;
        public int context_selected {
            get { return _context_selected; }
            set {
                _context_selected = value;
                var max_index = (int) context_box.get_children ().length () - 1;

                // cycle
                if (_context_selected < 0)
                    _context_selected = max_index;
                else if (_context_selected > max_index)
                    _context_selected = 0;

                select_nth (context_box, _context_selected);
            }
        }

        public signal void app_launched ();

        private PantherView view;

        public SearchView (PantherView parent) {
            view = parent;
            hscrollbar_policy = Gtk.PolicyType.NEVER;
            items = new Gee.HashMap<Backend.App, SearchItem> ();

            main_box = new Gtk.Box (Gtk.Orientation.VERTICAL, 0);
            main_box.margin_start = 12;
            context_box = new Gtk.Box (Gtk.Orientation.VERTICAL, 0);
            context_fixed = new Gtk.Fixed ();
            context_fixed.margin_start = 12;
            context_fixed.put (context_box, 0, 0);

            var box = new Gtk.Box (Gtk.Orientation.HORIZONTAL, 0);
            box.pack_start (main_box, true);
            box.pack_start (context_fixed, false);

            add_with_viewport (box);

            parent.search_entry.key_press_event.connect ((e) => {
                if (parent.search_entry.text == "")
                    _selected = 0;
                return false;
            });
        }

        public void set_results (Gee.List<Synapse.Match> matches, string search_term) {
            // we have a hashmap of the categories with their matches and keep
            // their order in a separate list, as the keys list of the map does
            // not always keep the same order in which the keys were added
            var categories = new HashTable<int,Gee.LinkedList<Synapse.Match>> (null, null);
            var categories_order = new Gee.LinkedList<int> ();

            foreach (var match in matches) {
                Gee.LinkedList<Synapse.Match> list = null;

                // we're cheating here to give remote results a separate category. We assign 8
                // as the id for internet results, which currently is the lowest undefined MatchType
                int type = match.match_type;
                if (type == Synapse.MatchType.GENERIC_URI) {
                    var uri = (match as Synapse.UriMatch).uri;
                    if (uri.has_prefix ("http://") || uri.has_prefix ("ftp://") || uri.has_prefix ("https://"))
                        type = 8;
                }
                if (match is Synapse.DesktopFilePlugin.ActionMatch)
                    type = 10;

                if ((list = categories.get (type)) == null) {
                    list = new Gee.LinkedList<Synapse.Match> ();
                    categories.set (type, list);
                    categories_order.add (type);
                }
                list.add (match);
            }

            n_results = 0;
            clear ();

            // if we're showing more than about 10 results and we have more than
            // two categories, we limit the results per category to the most
            // relevant ones.
            var limit = MAX_RESULTS;
            if (matches.size + 3 > MAX_RESULTS_BEFORE_LIMIT && categories_order.size > 2)
                limit = 5;

            foreach (var type in categories_order) {
                string label = "";
                switch (type) {
                    case Synapse.MatchType.UNKNOWN:
                        label = _("Other");
                        break;
                    case Synapse.MatchType.TEXT:
                        label = _("Text");
                        break;
                    case Synapse.MatchType.APPLICATION:
                        label = _("Applications");
                        break;
                    case Synapse.MatchType.GENERIC_URI:
                        label = _("Files");
                        break;
                    case Synapse.MatchType.ACTION:
                        label = _("Actions");
                        break;
                    case Synapse.MatchType.SEARCH:
                        label = _("Search");
                        break;
                    case Synapse.MatchType.CONTACT:
                        label = _("Contacts");
                        break;
                    case 8:
                        label = _("Internet");
                        break;
                    case 10:
                        label = _("Application Actions");
                        break;
                }

                var header = new Gtk.Label (label);
                ((Gtk.Misc) header).xalign = 0.0f;
                header.margin_start = 8;
                header.margin_bottom = 4;
                header.use_markup = true;
                header.get_style_context ().add_class ("h4");
                header.show ();
                main_box.pack_start (header, false);

                var list = categories.get (type);
                var old_selected = selected;

                for (var i = 0; i < limit && i < list.size; i++) {
                    var match = list.get (i);
                    if (type == 10) {
                        show_action (new Backend.App.from_synapse_match (match));
                        n_results++;
                        continue;
                    }

                    // expand the actions we get for UNKNOWN
                    if (match.match_type == Synapse.MatchType.UNKNOWN) {
                        var actions = Backend.SynapseSearch.find_actions_for_match (match);
                        foreach (var action in actions) {
                            show_app (new Backend.App.from_synapse_match (action, match), search_term);
                            n_results++;
                        }
                    } else {
                        show_app (new Backend.App.from_synapse_match (match), search_term);
                        n_results++;
                    }
                }

                selected = old_selected;
            }
        }

        private void show_app (Backend.App app, string search_term) {
            var search_item = new SearchItem (app, search_term);
            app.start_search.connect ((search, target) => start_search (search, target));
            search_item.button_release_event.connect (() => {
                if (!search_item.dragging) {
                    app.launch ();
                    app_launched ();
                }
                return true;
            });
            main_box.pack_start (search_item, false, false);
            search_item.show_all ();
            items[app] = search_item;
        }

        private void show_action (Backend.App app) {
            var search_item = new SearchItem (app, "", true, app.match.title);
            app.start_search.connect ((search, target) => start_search (search, target));
            search_item.button_release_event.connect (() => {
                if (!search_item.dragging) {
                    ((Synapse.DesktopFilePlugin.ActionMatch) app.match).execute (null);
                    app_launched ();
                }
                return true;
            });
            main_box.pack_start (search_item, false, false);
            search_item.show_all ();
            items[app] = search_item;
        }

        public void toggle_context (bool show) {
            var prev_y = vadjustment.value;

            if (show && in_context_view == false) {
                if (selected_app.app.match.match_type == Synapse.MatchType.ACTION)
                    return;

                in_context_view = true;

                foreach (var child in context_box.get_children ())
                    context_box.remove (child);

                var actions = Backend.SynapseSearch.find_actions_for_match (selected_app.app.match);
                foreach (var action in actions) {
                    var app = new Backend.App.from_synapse_match (action, selected_app.app.match);
                    app.start_search.connect ((search, target) => start_search (search, target));
                    context_box.pack_start (new SearchItem (app));
                }
                context_box.show_all ();

                Gtk.Allocation alloc;
                selected_app.get_allocation (out alloc);
                context_fixed.move (context_box, 0, alloc.y);
                context_selected_y = alloc.y;
                context_selected = 0;
            } else {
                in_context_view = false;
                // trigger update of selection
                selected = selected;
            }

            vadjustment.value = prev_y;
        }

        public void clear () {
            if (in_context_view)
                toggle_context (false);

            foreach (var child in main_box.get_children ())
                child.destroy ();
        }

        public void down () {
            if (in_context_view)
                context_selected++;
            else
                selected++;
        }

        public void up () {
            if (in_context_view)
                context_selected--;
            else
                selected--;
        }

        private void select_nth (Gtk.Box box, int index) {
            if (selected_app != null)
                // enable to make main item stay blue
                // && !(box == context_box && selected_app.get_parent () == main_box))
                selected_app.unset_state_flags (Gtk.StateFlags.PRELIGHT);

            if (box == main_box)
                selected_app = get_nth_main_item (index) as SearchItem;
            else
                selected_app = box.get_children ().nth_data (index) as SearchItem;

            selected_app.set_state_flags (Gtk.StateFlags.PRELIGHT, true);

            Gtk.Allocation alloc;
            selected_app.get_allocation (out alloc);
            vadjustment.value = double.max (alloc.y - vadjustment.page_size / 2, 0);
        }

        private Gtk.Widget? get_nth_main_item (int n) {
            var i = 0;
            foreach (var child in main_box.get_children ()) {
                if (i == n && child is SearchItem)
                    return child;
                if (child is SearchItem)
                    i++;
            }

            return null;
        }

        /**
         * Launch selected app
         *
         * @return indicates whether panther should now be hidden
         */
        public bool launch_selected () {
            if (selected_app.action) {
                ((Synapse.DesktopFilePlugin.ActionMatch) selected_app.app.match).execute (null);
                return true;
            }

            return selected_app.launch_app ();
        }
    }
}
Chicago Mayor Rahm Emanuel. CNN CNN's Dana Bash pushed Chicago Mayor Rahm Emanuel on Sunday to answer whether he wanted Hillary Clinton to run for president in 2020, a query he repeatedly attempted to dodge. In an interview on "State of the Union," the CNN host repeatedly pressed the former adviser to President Bill Clinton on whether another Clinton bid would benefit the Democratic party, citing her recent forceful condemnation of President Donald Trump's administration. "If Hillary Clinton is up for another presidential run, would that be a good thing for your party?" Bash asked. "Well look, you're asking something that we're not even through the mid-term election," Emanuel said. "She hasn't even declared." "I know, but I asked the question," Bash replied. "Do you think she should?" He added: "I love you. It's not a good question." Emanuel emphasized that there was plenty of time until the next election in which it would become obvious whether she would be a good candidate. "I happen to love Hillary, and I think she's full of energy," Emanuel said. "I happen to think there's a lot of time between now and the presidential election. She has to decide whether that's in her heart." "Hillary has a lot to offer. The core question is not whether I think she would be a good candidate. It's whether she wants to run. Because at the end of the day, the public is pretty smart. And if it's only going through the motions, they'll pick that up." The longtime Democrat and chief of staff under President Barack Obama has a long, somewhat mixed history with the Clintons. As first lady, Hillary Clinton sought to sideline Emanuel. In the 2016 Democratic primary, she garnered his support early on, but distanced herself from him when Sen. Bernie Sanders attempted to make the Illinois primary a referendum on Emanuel's criminal justice record and handling of a controversial shooting of a 17-year-old by the police.
Though Clinton has ruled out a third presidential bid, that hasn't stopped pundits and some right-leaning publications from stoking the idea that she may run again. Watch the clip via CNN:
The Cytotoxicity and Genotoxicity of Particulate and Soluble Cobalt in Human Urothelial Cells. Cobalt use is increasing, particularly due to its use as one of the primary metals in cobalt-chromium-molybdenum (CoCrMo) metal-on-metal prosthetics. CoCrMo is a high-strength, wear-resistant alloy with reduced risk for prosthetic loosening and device fracture. More than 500,000 people receive hip implants each year in the USA, which puts them at potential risk for exposure to metal ions and particles released by the prosthetic implants. Data show cobalt ions released from prosthetics reach the bloodstream and accumulate in the bladder. Although patients with failed hip implants show increased urinary and blood cobalt levels, no studies have considered the effects of cobalt on human urothelial cells. Accordingly, we investigated the cytotoxic and genotoxic effects of particulate and soluble cobalt in urothelial cells. Exposure to both particulate and soluble cobalt resulted in a concentration-dependent increase in cytotoxicity, genotoxicity, and intracellular cobalt ions. Based on intracellular cobalt ion levels, we found that, when compared to particulate cobalt, soluble cobalt was more cytotoxic, but induced similar levels of genotoxicity. Interestingly, at similar intracellular cobalt ion concentrations, soluble cobalt induced cell cycle arrest, indicated by a lack of metaphases, that was not observed after particulate cobalt treatment. These data indicate that cobalt compounds are cytotoxic and genotoxic to human urothelial cells and that solubility may play a key role in cobalt-induced toxicity.
Q: How to keep a running balance of payments vs. billings in Excel I have a row of cells with 12 Monthly columns, G4 (January) through R4 (December), in which I will input the amount paid towards a monthly billing. [Note that an early comment discusses multiple years in multiple rows. However, that is no longer the case; it is just a single year, in row 4.] When the bill has an amount due, it is a standard amount, the value of which is stored in cell F4. If a month does not have an amount due, I leave the associated G4:R4 cell blank. If an amount is due, I enter the amount paid (zero if nothing), in the associated cell. I would like for cell A4 to show a running total of the outstanding balance (amounts due minus amounts paid). Example: Monthly bill is 100.00 (F4). I paid 75 dollars in January (G4), so the running total I still owe is 25.00 (A4). I paid 50 dollars in February towards February's bill (H4), so now I owe 75 (A4). Note that there will never be a case where there is no billed amount but I pay toward a prior balance; if there is a prior balance, there will always be a billing for the standard amount. So, a blank cell in G4:R4 means that nothing was billed; a value in G4:R4 means the standard amount was billed and the entered amount was paid. A: In A4 simply put: =$F4*COUNT($G4:$R4)-SUM($G4:$R4) EDIT: after your comment removed some "$" signs so that the formula can be easily copied over the 15 needed rows.
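If it helps to see the formula's logic outside Excel, here is the same accounting as a small Python sketch (names and values are illustrative): a blank month means no bill, and every non-blank month adds one standard bill and subtracts the amount paid.

```python
STANDARD_BILL = 100.0  # the value stored in F4

def running_balance(payments):
    """A4 = F4 * COUNT(G4:R4) - SUM(G4:R4): each non-blank month
    (None = blank cell) bills the standard amount; the entered
    payment is subtracted."""
    billed = [p for p in payments if p is not None]
    return STANDARD_BILL * len(billed) - sum(billed)

# January: paid 75 toward a 100 bill -> owe 25
assert running_balance([75.0]) == 25.0
# February: paid 50 toward another 100 bill -> owe 75 in total
assert running_balance([75.0, 50.0]) == 75.0
# A blank (unbilled) month changes nothing
assert running_balance([75.0, None, 50.0]) == 75.0
```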
Thursday, April 9, 2015 The end... or a new beginning? I thought long and hard about writing this post or not, wondering if this was perhaps too personal and if it had anything to do with travelling. I also considered if it would contribute anything to this blog or not. In the end I did write it, as you can see, for the simple reason that for us this has unfortunately become a part of travelling and has a big impact on both the trip and the world record attempt. We realise that this sort of thing is not often written about in a travel blog. Most tend to stick to the positives and portray everything that happens through rose tinted glasses. But travelling is not just wheels turning round and kilometres stacking up. It’s also not a fantasy world where everything goes smoothly and not a cross word is said. Not even when you are living your dream... Quite the contrary apparently. According to studies, done by people who should know, most relationships break up during or shortly after the holidays. Ours did too… and as we've always written openly and honestly about what we found, we feel we should do the same now. Emotionally, the last couple of months have been a nightmare for both Mike and me. Jeanette, my partner for 34 years, suddenly dropped the bombshell that she no longer wanted to travel with us… Assuming she was a bit travel weary, which would be understandable after two years on the road, we thought she needed a break. As we had a stop planned to sort out the visas, carnets etc for the next and final part of this trip and as more maintenance was needed on the XT660, I was optimistic that all would come good in time. But I guess I was wrong. While Mike and I were organising the next part of the trip, she was organising to leave us altogether and start a new life for herself…! Breaking up is never easy, but living in a tent while breaking up and in the middle of an around the world trip brings it to a whole new level. 
To this day Mike and I still don't understand why it happened, which doesn't make it any easier. I know that one in three relationships seems to break up, but as we had been together for 34 years I thought we had passed that stage. We apparently hadn't, and now we are just another statistic.

Visas, carnets, shipping, insurance etc etc...

I won't go into detail about what happened during the break-up, simply because I feel that, despite all that has happened, it wouldn't be fair to Jeanette, as she can't defend herself. It wouldn't contribute anything to this blog either, as all I want to show here is that this can happen, the impact it had on us and the effect it has on the trip. In theory travelling as a family is great, as it adds another dimension to the whole thing. The last two years have, without a shadow of a doubt, been the best of my life. Being able to share beautiful nature and experience moments of pure joy together means so much more, especially when you see the people you love enjoying it just as much. It brings it all to a much more intense level. It's an indescribable feeling. But, as I've found out over the last couple of months, when it does go wrong the 'fall' is much more substantial too. Bombshell number two was delivered a couple of weeks later, when the, apparently also customary, squabbles about dividing the finances started. As the most expensive part of the trip was about to begin, this couldn't have come at a worse time. We did the maths on visa costs, the bonds we had to put up for carnets and the shipping quotes, and realised we couldn't continue... In a matter of weeks not only did my relationship end, but Mike's world record attempt also came to a halt... All this started around what is normally the happiest time of the year. While others were unwrapping their Christmas gifts, we were unwrapping this... Mike and I had completely lost the will to continue at that stage. This was always intended as a family trip, after all.
Now that the family aspect was gone, what was the point? Mike found his mum walking out on him incredibly hard to take. He is a good guy who has never done anything wrong to his mum. Quite the opposite: he's helpful, social, cheerful and deserves better. I found losing my partner of 34 years for no apparent reason difficult to understand too. At the same time we found ourselves in a situation where we had to find a way through a maze of paperwork, and had daily dealings with frustrating people at embassies who have no imagination and don't know the meaning of the word 'human', all while struggling with emotional baggage we didn't understand and found very difficult to bear. To top it all off, we were looking for a way to continue while neither of us really wanted to anymore, and we knew we didn't have the finances to continue either. It felt like we were sliding deeper and deeper into a big hole. There were quite a few times when we looked at each other and just wanted to chuck it in, switch off the blog and start doing something else... without having the faintest idea what. The dream had become a nightmare. I'm sure there are people out there who claim to be happily single, but at the moment it doesn't feel like that yet. Expensive wifi in Europe meant our blog had fallen behind quite a bit, so we had a lot of catching up to do too. But I found writing difficult under the circumstances. How can you write about something positive when you feel negative? Many a post was written four times before I published it. It took quite a few weeks before we could finally look past the 'why' and wanted to look forward again. To be honest, as I'm writing this we are both still struggling emotionally. I had learned in the past that when things don't go as planned and nothing seems to work, it's important to keep your eyes on the end goal, the reason why we wanted to do it. But in this case we didn't even know what that was anymore.
We were lucky that we could stay with my parents because, as you can read in the first post of this blog, we had sold our house to do this trip. Luckily there was a lot of long-overdue maintenance to be done at their place, which both gave us a way to give something back for their unbelievable support through all this and at the same time helped us get our minds off the problems we had. They had been looking forward to spending Christmas with us, the first Christmas together in years, but instead found themselves in the middle of all this. Slowly it started to dawn on us that we simply had to continue... we had come too far and worked too hard on this to let it all slip away. If we didn't continue now, it would haunt us for the rest of our lives. So we dug into the incomprehensible paperwork again. We contacted embassies again, looked into carnets again, looked at all the routes again and tried to find a way, despite minimal finances, to continue. All the time our emotions were all over the place. There were days when I had to pick Mike up from a deep emotional hole, while on others he did the same for me. There were also days when we openly asked ourselves if we really wanted to do this. Gradually we climbed out of that deep hole, though, and saw some daylight again. The bond between Mike and myself had always been very good, but through all this it has become even stronger. Having lost a lot of time, we had to get our skates on and get things moving quickly if we weren't to miss our window through China. Some people plan this sort of trip a year in advance; we had just a couple of weeks. The bikes needed lots of maintenance too, especially the XT, and there was a pile of paperwork to be done. It was all hands on deck. Through all this the atmosphere changed. The determination to finish what we had started grew by the day.
And even though we have had to cut our finances back to the bone and make a lot of concessions as to where we can and cannot go, we absolutely want to see this through. The headline of this post reads 'The end... or a new beginning?' because for a long time it seemed it was the end of the trip and the end of the dream. What happened over the last months hasn't been easy. At the same time, it has made us stronger. It really feels like a new beginning, and it inspired Mike to make the video embedded in this post, which he called 'The next chapter'. Earth-Roamers is now a father and son team, one Bonneville and one XT660R. Even though we don't know if we can make it all the way, as we don't know if our finances will stretch that far, we're back on track. The Dream is back!
The confirmation of Brett Kavanaugh to the Supreme Court, an appointment made by a president who admitted on tape to sexual assault, is one of our country's greatest moral failings, but one that none of us should find surprising. Sexual violence is about power, and what we have seen in our culture is a function and extension of entrenched power dynamics. Kavanaugh's confirmation yet again reflects a political system that is built upon and reinforced by a set of toxic masculine principles. For perhaps the first time, a majority of Americans is coming to realize that, as Audre Lorde said, "the master's tools will never dismantle the master's house," and the time has come to raise a new house (and Senate, for that matter). Thus, it is time to raise a new generation of leaders. The elected officials who helped yet another man accused of sexual misconduct achieve a lifetime appointment to the Supreme Court should be held accountable and lose their jobs, but this fight has a broader place in history beyond simple election cycles. Real progress must include respecting the humanity of survivors of sexual violence, whose leadership continuously drives our culture forward in spite of their continual marginalization and denigration. The predominantly white and male institutions of power must shift and evolve beyond present standards to address sexual violence. Our culture has to embrace the complexities of change, and to accept that neither change nor accountability can occur without hurt feelings. This type and depth of change cannot, and should not, be easy. The impact of sexual violence should be placed at the center of our public debate, because resistance to this broken piece of our culture should never have been partisan, and the work to change it is only just beginning, despite the decades of activist effort behind us.
We need to tell a new story about what change looks like, and it looks like this: never stop fighting for the humanity of survivors, for their right to be seen, valued and believed. That is what it means to fight for a country where we all get to thrive, and for one that shifts the burden of shame away from those who have survived sexualized violence and toward those who are complicit in the continuation of that violence. The current national reckoning over the Kavanaugh nomination has been a revolt against the shame that survivors are burdened to carry and the trauma that erodes our communities. Taking sexual violence seriously is the moral fight of our generation, and what has unfolded over the last few weeks is the messy, complicated and necessary work required to meet this moment. The Kavanaugh confirmation fight broke the norm that sexual trauma and the long-term emotional impacts of sexual violence should not be seen or spoken of. Every story shared, every testimonial, every angry voice raised, each and every moment where we as a country bore witness to our collective humanity and to the glaringly absurd double standards to which men and women are held was a rejection of shame and a spark toward a new era of leadership. There are models for building this new world. Take Anita Hill, who gave an interview to Rolling Stone in 2013 in which she talked about the lessons we could take from her experience of detailing being sexually harassed by then-Supreme Court nominee Clarence Thomas, only to have his nomination sail through. "We have to make decisions, like, yeah, that does disqualify you from being a Supreme Court justice if we find that you have been sexually abusive," she said. "Recognizing it, acknowledging it, bringing women into the process as full participants who can testify honestly about their experiences, and then being willing to make hard decisions."
Or, on Friday, the latest Nobel Peace Prize was awarded to Nadia Murad and Denis Mukwege, two leaders who spearheaded campaigns against sexual violence as a weapon of war. Murad, a 25-year-old Iraqi Yazidi who was tortured and raped by Islamic State militants, brought her story to the global stage in a 2016 speech to the United Nations General Assembly in which she said to world leaders, "This world was not only created for you and your families. We also want life, and it's our right to live it." Mukwege, a Congolese physician who leads a hospital treating tens of thousands of survivors of brutal sexual violence in the eastern Democratic Republic of the Congo, has continued his work to help women heal in the face of death threats and assassination attempts. "Our society has to say stop and draw a line in the sand: Some acts are such that society as a whole must oppose them," he said of his mission. Murad and Mukwege are models of survivorship as leadership, where bearing witness and giving voice to hard truths catalyzes necessary change. Mukwege's Nobel acceptance speech reflects this: "To the survivors from all over the world, I would like to tell you that through this prize, the world is listening to you and refusing to remain indifferent. The world refuses to sit idly in the face of your suffering." It's never been that we don't know how to change a world that treats women's bodies as commodities for men to amass; it's that we've often deemed it too inconvenient to even try. Women who are tired of their autonomy being inconvenient may have been defeated by Kavanaugh's confirmation, but we are prepared for more fights. We are beleaguered, as a nation, by the pain and trauma this nomination process has dragged into the public square; that is not dissimilar to the exhaustion that every survivor has felt under the weight of a culture that maligns, marginalizes and erases their pain.
We are wearied by the systems of power that have tacitly supported and enabled the expansion of that pain. The cost of sexual misconduct should not be paid by survivors anymore. This ugly chapter will be seared into the moral conscience of this country; this confirmation was a moral test we failed. And while we figure out how and why, we also need to think about tomorrow and ask what the leaders we are raising today will need from us for the outcome to be different next time.
/* Simple arduino/serial wrapper for CLI & node-webkit non-GUI code to do stuff with arduino */
var firmata = require('firmata');
var serialport = require('serialport');
var child_process = require('child_process');

// Mac serialport doesn't list ports correctly:
// https://github.com/voodootikigod/node-serialport/issues/83
exports.list = function (callback) {
  callback = callback || function (err, ports) {};
  if (process.platform !== 'darwin') {
    serialport.list(function (err, ports) {
      if (err) return callback(err);
      var out = [];
      ports.forEach(function (port) {
        out.push(port.comName);
      });
      callback(null, out);
    });
    return;
  }
  // On OS X, shell out to `ls` and parse the device list instead.
  child_process.exec('ls /dev/tty.*', function (err, stdout, stderr) {
    if (err) return callback(err);
    if (stderr !== '') return callback(new Error(stderr));
    // Drop the trailing empty string left by the final newline.
    return callback(null, stdout.split('\n').slice(0, -1));
  });
};

exports.board = function (com, callback) {
  callback = callback || function (err, board) {};
  var board = new firmata.Board(com, function (err) {
    callback(err, board);
  });
};
Vehicles employing an inverted pendulum in posture control (hereafter simply termed "inverted pendulum vehicles") have attracted attention and are currently being put into use. For example, Patent Document 1 discloses a technique of driving two coaxially disposed drive wheels, using the movement of the driver's center of gravity to control the posture of the vehicle. In addition, vehicles which move by controlling the posture of a single related-art circular drive wheel or a single spherical drive wheel, and various other types of inverted pendulum vehicles, have been proposed.

[Patent Document 1] Japanese Patent Application Publication No. JP-A-2004-276727
[Patent Document 2] Japanese Patent Application Publication No. JP-A-2004-129435

In this manner, a vehicle maintains a stationary state or a running state while performing posture control based on, for example, the amount of body-weight movement of a driver, an operation amount from a remote controller or operating device, or pre-set running command data. Posture control during running is performed by controlling the output torque of the drive wheels so that the vehicle's inclination coincides with a target angle of inclination. For example, when an external force causes the vehicle to incline in a forward direction by more than the target angle of inclination, the vehicle posture (inclination angle) is brought back to the target inclination angle by increasing the output torque of the drive wheels and increasing their rotation speed.
Q: What are the benefits of explicit type casts in C++?

What are the benefits of explicit type casts in C++?

A: They're more specific than the fully general C-style casts. You don't give up as much type safety, and the compiler can still double-check some aspects of them for you. They're easy to grep for if you want to try to clean up your code. The syntax intentionally mimics a templated function call. As a result, you can "extend" the language by defining your own casts (e.g., Boost's lexical_cast).
INTRODUCTION {#sec1-1}
============

Traditionally, periodontal disease therapy has been directed at altering the periodontal environment to one that is less conducive to the retention of bacterial plaque in the vicinity of gingival tissues. With the increasing awareness of the bacterial etiology of periodontal diseases,\[[@ref1][@ref2]\] and in particular the hypothesis that specific bacteria are involved,\[[@ref3]\] a more direct approach using antibacterial agents has become an integral part of the therapeutic armamentarium. Recently, a new sustained local drug delivery system of chlorhexidine (CHX) with xanthan gel, Chlosite (1.5% CHX in 0.5 ml of xanthan gel), has been introduced. It was therefore deemed important to evaluate the efficacy of this drug clinically and microbiologically in the treatment of chronic periodontitis in smoker and non-smoker patients. The objectives of the study were to determine the effect of Chlosite as a monotherapy, of Chlosite compared with scaling and root planing (SRP), and of Chlosite combined with SRP (combination therapy), and to determine the efficacy of Chlosite against periodontopathogens.

MATERIALS AND METHODS {#sec1-2}
=====================

A total of 141 sites from six patients (67 sites from three non-smoker patients and 74 sites from three smoker patients) with periodontal pockets measuring 5 to 7 mm in different quadrants of the mouth were selected. The patients, aged 20 to 50 years, were recruited from the Outpatient Department of Periodontics, College of Dental Sciences, Davangere, Karnataka, India. The Ethical Committee of the College of Dental Sciences, Davangere, Karnataka, India, approved the study, and written informed consent was obtained from all patients. The study included patients diagnosed with chronic generalized periodontitis. Selected patients had periodontal pockets measuring 5-7 mm in depth in different quadrants of the mouth, and signs of bone loss were observed on clinical and radiographic examination \[[Figure 1](#F1){ref-type="fig"}\].
Pregnant women, nursing mothers, teeth with furcation involvement, and patients with known hypersensitivity to CHX were excluded from the study. The selected patients had not received any periodontal therapy for the past 6 months and were free of any systemic disease.\[[@ref4]\] The selected sites were randomly divided into 3 groups \[[Table 1](#T1){ref-type="table"}\].

![Site selected for study 26 (Mesial)](JISP-15-221-g001){#F1}

###### Number of sites involved in treatment modalities of different groups

![](JISP-15-221-g002)

Prior to SRP, each selected site was assessed for the following clinical parameters: Plaque index (PI) (Silness and Loe),\[[@ref5]\] Gingival index (GI) (Loe and Silness),\[[@ref6]\] Bleeding index (BI) (Ainamo and Bay),\[[@ref7]\] relative attachment level (RAL) using a UNC-15 periodontal probe, and subgingival microbiological plaque samples. The clinical parameters were assessed on day '0' and on the 30^th^ and 90^th^ days. RAL was assessed only on day '0' and the 90^th^ day, and microbiological samples were collected on day '0' and the 30^th^ day only.

Subgingival microbiological plaque samples {#sec2-1}
------------------------------------------

Subgingival microbial plaque samples were taken from the periodontal pocket at baseline and on the 30^th^ day with the help of fine endodontic paper points. After removing supragingival plaque, two fine endodontic paper points were inserted to the depth of each periodontal pocket for 10 seconds and then transferred to 1 ml of thioglycollate broth (transport medium) and sealed tightly to avoid contamination \[Figures [2](#F2){ref-type="fig"} and [3](#F3){ref-type="fig"}\]. Samples were processed within 2 days of collection.
Once received in the laboratory, each sample was mixed thoroughly and 5 μl was inoculated with a sterile loop onto each of the following media: enriched blood agar (*Porphyromonas gingivalis*), Brewer's anaerobic agar (*Fusobacterium nucleatum*), and Bacteroides bile esculin agar (*Tannerella forsythia*).\[[@ref8]\]

![Subgingival sampling with absorbent paper points](JISP-15-221-g003){#F2}

![Transfer of plaque sample to thioglycollate broth](JISP-15-221-g004){#F3}

Procedural steps for Chlosite administration {#sec2-2}
--------------------------------------------

Chlosite was provided as a single-dose syringe with 0.5 ml of xanthan gel, which contains a mixture of chlorhexidine digluconate and chlorhexidine dihydrochloride in a ratio of 1:2. The gel was administered in experimental site A (SRP plus Chlosite) and experimental site B (Chlosite only) on day '0'. The periodontal pocket was washed with distilled water and then dried with paper points before subgingival administration of Chlosite. Subgingival administration was accomplished by inserting the single-dose syringe to the base of the periodontal pocket first and then working its way up to the gingival margin \[Figures [4](#F4){ref-type="fig"} and [5](#F5){ref-type="fig"}\]. Chlosite undergoes a progressive process of imbibition and is physically cleared from the application site within 10 to 30 days, making a follow-up visit for removal of the material unnecessary. After the treatment, patients were instructed to avoid eating hard, crunchy or sticky foods for one week, to postpone brushing for a 12-hour period, and to avoid touching the treated areas. They were also asked to postpone the use of interproximal cleaning devices for 10 days.
![Occlusal stent with grooves](JISP-15-221-g005){#F4}

![Administration of Chlosite](JISP-15-221-g006){#F5}

RESULTS {#sec1-3}
=======

On comparison of non-smokers vs smokers \[Graphs [1](#F6){ref-type="fig"}--[4](#F9){ref-type="fig"}\], the mean difference in plaque score between day '0' and the 90^th^ day was statistically significant for SRP and CHL, and statistically not significant for SRP + CHL. The mean difference in GI between day '0' and the 90^th^ day was statistically significant for SRP and SRP + CHL, and statistically not significant for CHL. The mean difference in bleeding score between day '0' and the 90^th^ day was statistically significant for SRP, and statistically not significant for SRP + CHL and CHL. The mean difference in RAL between day '0' and the 90^th^ day was statistically not significant for SRP, SRP + CHL, and CHL.

![Plaque index (Comparison of smokers and non-smokers)](JISP-15-221-g007){#F6}

![Bleeding index (Comparison of smokers and non-smokers)](JISP-15-221-g008){#F7}

![Gingival index (Comparison of smokers and non-smokers)](JISP-15-221-g009){#F8}

![Relative attachment level (Comparison of smokers and non-smokers)](JISP-15-221-g010){#F9}

The prevalence of various microorganisms at different intervals of the study period in smokers and non-smokers is shown in \[Graphs [5](#F10){ref-type="fig"}--[7](#F12){ref-type="fig"}, Tables [2](#T2){ref-type="table"} and [3](#T3){ref-type="table"}, and Figures [6](#F13){ref-type="fig"}--[8](#F15){ref-type="fig"}\].
![*Fusobacterium nucleatum* (Comparison of smokers and non-smokers)](JISP-15-221-g011){#F10}

![*Porphyromonas gingivalis* (Comparison of smokers and non-smokers)](JISP-15-221-g012){#F11}

![*Tannerella forsythia* (Comparison of smokers and non-smokers)](JISP-15-221-g013){#F12}

###### Microbial profile of non-smokers (CFU)

![](JISP-15-221-g014)

###### Microbial profile of smokers (CFU)

![](JISP-15-221-g015)

![Blood agar plate showing haemolytic colonies of *Porphyromonas gingivalis*](JISP-15-221-g016){#F13}

![Blood agar plate showing colonies of *Fusobacterium nucleatum*](JISP-15-221-g017){#F14}

![B.B.E. agar showing colonies of *Tannerella forsythia*](JISP-15-221-g018){#F15}

On comparison of non-smokers with smokers, *Fusobacterium nucleatum* showed a 94% reduction in non-smokers and a 100% reduction in smokers, a difference that was statistically not significant. *Porphyromonas gingivalis* showed an 81.2% reduction in non-smokers and a 100% reduction in smokers, which was also statistically not significant. *Tannerella forsythia* showed a 100% reduction in both groups, which was statistically not significant. The results in smokers could not be compared with previous work owing to the paucity of literature on the effect of CHX in smokers.

DISCUSSION {#sec1-4}
==========

CHX is a widely used broad-spectrum antimicrobial agent that inhibits bacterial growth and is thus an adjunctive means of controlling oral hygiene in patients with periodontal disease. Attempts to prolong the subgingival action of CHX by incorporating the antiseptic in a gel have not resulted in improved treatment outcomes.\[[@ref9]\] However, with the use of Chlosite, an effective subgingival concentration of CHX can be maintained for several days.
The physical properties of xanthan render it an optimal substrate for the formation of a stable gel that is easily extruded from a syringe needle; xanthan therefore appears to be the most biocompatible vehicle for clinical application.\[[@ref10]\] In non-smokers, the mean reduction in PI from day '0' to the 90^th^ day was 83.3% and 84.6% for SRP and SRP + CHL, respectively. However, the difference in PI between the two groups from day '0' to the 90^th^ day was statistically not significant. These findings are similar to those of Azmak *et al*.\[[@ref11]\] who studied the effect of subgingival controlled-release delivery of a 2.5 mg CHX chip on clinical parameters of chronic periodontitis patients, in SRP + CHX and SRP-alone groups. The mean reduction in plaque score from baseline to the 90^th^ day for CHL was 58.3%, which was statistically highly significant. In smokers, the mean reduction in plaque score from day '0' to the 90^th^ day was 46.2%, 66.7%, and 14.3% in SRP, SRP + CHL, and CHL, respectively, which was statistically highly significant for SRP and SRP + CHL and significant for CHL. On comparison of non-smokers vs smokers, the mean difference in plaque score between day '0' and the 90^th^ day was statistically significant for SRP and CHL, and statistically not significant for SRP + CHL. In non-smokers, the mean reduction in BI from day '0' to the 90^th^ day was 100% for both SRP and SRP + CHL. However, the difference in BI between the two groups from day '0' to the 90^th^ day was statistically significant. These findings are similar to those of Pennuti *et al*.\[[@ref12]\] who studied the efficacy of 0.5% CHX gel in the control of gingivitis in mentally handicapped patients over a period of 8 weeks. In smokers, the mean reduction in bleeding score from day '0' to the 90^th^ day was 100% for SRP, SRP + CHL, and CHL, and it was statistically highly significant in all three treatment modalities.
On comparison of non-smokers vs smokers, the mean difference in bleeding score between day '0' and the 90^th^ day was statistically significant for SRP and statistically not significant for SRP + CHL and CHL. In non-smokers, the mean reduction in gingival score from day '0' to the 90^th^ day was 88.9% and 85.7% for SRP and SRP + CHL, respectively. These findings were similar to those of Unsal *et al*.\[[@ref13]\] who studied the clinical effects of subgingival placement of 1% CHX gel in adult periodontitis patients over a period of 12 weeks. There was a 54.6% mean reduction in gingival score from baseline to the 90^th^ day for CHL, which was statistically highly significant. In smokers, the mean reduction in gingival score from day '0' to the 90^th^ day was 84.5%, 63.6%, and 66.7% for SRP, SRP + CHL, and CHL, respectively, which was highly significant for SRP and SRP + CHL and significant for CHL. On comparison of non-smokers vs smokers, the mean difference in GI between day '0' and the 90^th^ day was statistically significant for SRP and SRP + CHL, and statistically not significant for CHL. In non-smokers, the mean gain in RAL from day '0' to the 90^th^ day was 24.7% and 18.5% for SRP and SRP + CHL, respectively. These findings were similar to those of Soskolne *et al*.\[[@ref14]\] who studied the changes in probing depth following 2 years of periodontal maintenance therapy including an adjunctive controlled-release biodegradable CHX chip. There was a 20.4% mean gain in RAL from baseline to the 90^th^ day for CHL, which was statistically highly significant. In smokers, the mean gain in RAL from day '0' to the 90^th^ day was 23.7%, 21%, and 20.4% for SRP, SRP + CHL, and CHL, respectively, which was highly significant. On comparison of non-smokers vs smokers, the mean difference in RAL between day '0' and the 90^th^ day for SRP, SRP + CHL, and CHL was statistically not significant. On comparison of non-smokers with smokers, *Fusobacterium nucleatum*, *Porphyromonas gingivalis*, and *Tannerella forsythia* showed reductions that were statistically not significant.
The findings for the three bacteria are similar to those of Daneshmand *et al*.\[[@ref15]\]

CONCLUSION {#sec1-5}
==========

On comparison of smokers and non-smokers, in the SRP group non-smokers showed a higher reduction in BI and GI, and smokers showed a higher reduction in PI. There was no significant gain in RAL in either smokers or non-smokers. In the SRP + CHL group, non-smokers showed a higher reduction in BI and GI, and smokers showed a higher reduction in PI. There was no significant gain in RAL in either smokers or non-smokers. In the CHL group, both smokers and non-smokers showed a non-significant reduction in BI, GI, and RAL, but smokers showed a significant reduction in PI as compared with non-smokers.

The authors would like to thank Mr. Anurag Singh (Chairman), Dr. Snehalata Chaudhary (Secretary) and Dr. Praveen Mehrotra (Principal) of the college (SPPGIDMS, Lucknow) for their contribution and support in this original study. We would also like to thank all the faculty members of the Department of Periodontology and Implantology, especially Dr. (Prof.) K. K. Gupta and Dr. (Prof.) Pradeep Tandon.

**Source of Support:** Nil

**Conflict of Interest:** None declared.
Q: Override ActiveRecord::Base find method (to accept a non-default id field as a search parameter)

I have followed this tutorial on how to accept a not-only-numeric primary key id when creating instances of my ModelName in my Ruby on Rails application. Everything is okay, but there is a paragraph:

Be aware that Product.find won't work anymore, and other Rails helpers that rely on id will stop functioning. If you really want that, you need to override more methods and this seems too much of a pain for me. So I'd highly recommend you to leave #id as is.

The question is: when I am trying to get an instance of my model by using the .find() method in my ModelNameController, it doesn't work (I think that's because .find() searches by the numeric id field). I have this piece of code:

def set_model_name
  @model_name = ModelName.find(params[:hashid])
end

where :hashid is a parameter that is a string (I'd like to use a string instead of a number). How could I solve my problem? One of the solutions would be overriding ActiveRecord::Base's .find() method. Thanks in advance!

A: You do not need to override the default behaviour of find. Instead, you can use the find_by method:

def set_model_name
  @model_name = ModelName.find_by(hashid: params[:hashid])
end

Note that, unlike find, find_by returns nil when no matching record exists; use find_by! if you want an ActiveRecord::RecordNotFound exception raised instead.
Q: How can I multiply matrices by overloading the * operator, when the operator doesn't match the operands? I have two matrices which should multiply together by overloading * operator in the constructor class, but the problem here is that no operator [] matches these operands. Why? I saw videos and asked my classmates multiple times and tried my own way, but I can't make it work. I only get this error! This is the code I have a problem with: The Constructor Code: I made two ways to make this code works. The result should store at cell matrix or the new matrix: Matrix operator*(const Matrix &matrix1, const Matrix &matrix2) { if (matrix1.Cols != matrix2.Rows) { throw("Error"); } cell.resize(matrix2.Cols); // one way to call Matrix res(matrix1.Rows, matrix2.Cols, 1.0); // second way to call for (int i = 0; i < matrix1.Rows; i++) { cell[i].resize(matrix1.Rows); for (int j = 0; j < matrix2.Cols; j++) { double value_of_elements; for (int k = 0; k = matrix1.Cols; k++) { res[i][j] += matrix1[i][k] * matrix2[i][j];// 1. metod value_of_elements += matrix1[i][k] * matrix2[i][j];// 2. metod } cell[i][j]+=value_of_elements; } } return res; } The Header Code: The header code normally I don't have unless some modification should be made. friend Matrix operator*(const Matrix &matrix1, const Matrix &matrix2); The source Code: This is where the code is tested: try { Matrix m1(3, 3, 1.0); Matrix m2(3, 4, 1.0); std::cout << "m1*m2:" << m1 * m2 << std::endl;// this si where the matrix should be multiplied here; } catch (std::exception &e) { std::cout << "Exception: " << e.what() << "!" << std::endl; } catch (...) { std::cout << "Unknown exception caught!" << std::endl; } system("pause"); return 0; } The result: The result should be this: m1*m2:[3, 3, 3, 3 3, 3, 3, 3 3, 3, 3, 3] What I get is an error; the cause of error are that res[i][j], matrix1[i][k] etc. 
do not have an operator[] defined for these operands:

Error C2065 'cell': undeclared identifier 71 matrix.cpp
Error C2065 'cell': undeclared identifier 74 matrix.cpp
Error C2065 'cell': undeclared identifier 81 matrix.cpp
Error C2088 '[': illegal for class 79 matrix.cpp
Error C2088 '[': illegal for class 78 matrix.cpp
Error C2676 binary '[': 'Matrix' does not define this operator or a conversion to a type acceptable to the predefined operator 78 matrix.cpp
Error C2676 binary '[': 'const Matrix' does not define this operator or a conversion to a type acceptable to the predefined operator 78 matrix.cpp
Error C2676 binary '[': 'const Matrix' does not define this operator or a conversion to a type acceptable to the predefined operator 79 matrix.cpp
Error (active) E0020 identifier "cell" is undefined 71 Matrix.cpp
Error (active) E0349 no operator "[]" matches these operands 78 Matrix.cpp
Error (active) E0349 no operator "[]" matches these operands 78 Matrix.cpp
Error (active) E0349 no operator "[]" matches these operands 78 Matrix.cpp
Error (active) E0349 no operator "[]" matches these operands 79 Matrix.cpp
Error (active) E0349 no operator "[]" matches these operands 79 Matrix.cpp

A: Assuming that the class Matrix has a member vector<vector<double>> cell, here is a sample that multiplies the matrices: Matrix operator*(const Matrix &matrix1, const Matrix &matrix2) { if (matrix1.Cols != matrix2.Rows) { throw("Error"); } Matrix res(matrix1.Rows, matrix2.Cols, 1.0); for (int i = 0; i < matrix1.Rows; i++) { for (int j = 0; j < matrix2.Cols; j++) { double value_of_elements = 0; for (int k = 0; k < matrix1.Cols; k++) // note: k < Cols, not k = Cols value_of_elements += matrix1.cell[i][k] * matrix2.cell[k][j]; res.cell[i][j] = value_of_elements; } } return res; } There were three problems. First, the class Matrix doesn't have an operator[]; that was solved by accessing the member cell directly. Second, the variable value_of_elements was not initialized, making the result undefined.
Third, the multiplication itself was wrong in two ways: the inner loop's condition was written k = matrix1.Cols (an assignment, which is always true) instead of k < matrix1.Cols, so the loop ran past the end of the row; and it multiplied matrix1[i][k] by matrix2[i][j], staying on row i of matrix2, whereas a dot product needs row i of matrix1 against column j of matrix2, i.e. matrix2[k][j].
What If We Don't Resign Hibbert AND Hill? (Another option) That would be a ton of money to throw at Nash and Kaman. Just throwing stuff out there. Is there anybody that would prefer Nash/Kaman over Hill/Hibbert? That would definitely be more of a short-term solution, and I'm still not sure Nash would come for any amount of money.

Re: What If We Don't Resign Hibbert AND Hill? (Another option) I think that we'd just have to match what the Raptors are offering to have a fighting chance....$12 mil per year / 3 seasons. With that said....I think that we can match Hibbert's offer, let GH go ( yes, maddening ) and then have enough space to make a run at Nash with the same offer ( if not slightly more ) that the Raptors gave him. However, the ideal situation would be to offer a $12 mil per year / 3 season deal to Nash ( see if he bites ) and then match both Hibbert and GH. Is it possible to make an offer to Nash for $12 mil per year while matching both GH and Hibbert? I'm thinking send out Hansbrough to the Bobcats for a future 1st round pick without getting anything back?

Ash from Army of Darkness: Good...Bad...I'm the guy with the gun. This is David West, he is the Honey Badger, West just doesn't give a *****....he's pretty bad *ss cuz he has no regard for any other Player or Team whatsoever.

Re: What If We Don't Resign Hibbert AND Hill? (Another option) I agree, I'd let Hill go; I have a feeling he will stop the Pacers from signing any meaningful free-agency acquisition.
Although someone will have to confirm....I thought that if we end up matching a $8 mil per year offer for GH ( yes, that seems high ) and the MAX offer for Hibbert....there'd still be roughly $7 to 8 mil in CapSpace that we can spend on Free Agents....which I think can be devoted to a single FA. On top of that...if we go after a PG or PF...we can send out DC or Hansbrough to help with the Capspace. Last edited by CableKC; 07-02-2012 at 02:41 AM.

Ash from Army of Darkness: Good...Bad...I'm the guy with the gun. This is David West, he is the Honey Badger, West just doesn't give a *****....he's pretty bad *ss cuz he has no regard for any other Player or Team whatsoever.

Re: What If We Don't Resign Hibbert AND Hill? (Another option) Although someone will have to confirm....I thought that if we end up matching a $8 mil per year offer for GH ( yes, that seems high ) and the MAX offer for Hibbert....there'd still be roughly $7 to 8 mil in CapSpace that we can spend on Free Agents....which I think can be devoted to a single FA. On top of that...if we go after a PG or PF...we can send out DC or Hansbrough to help with the Capspace. Bingo... people are overreacting in thinking Hibbert's contract would handcuff us... we could sign another key FA, and make trades.

Re: What If We Don't Resign Hibbert AND Hill? (Another option) I think that we'd just have to match what the Raptors are offering to have a fighting chance....$12 mil per year / 3 seasons. With that said....I think that we can match Hibbert's offer, let GH go ( yes, maddening ) and then have enough space to make a run at Nash with the same offer ( if not slightly more ) that the Raptors gave him. However, the ideal situation would be to offer a $12 mil per year / 3 season deal to Nash ( see if he bites ) and then match both Hibbert and GH. Is it possible to make an offer to Nash for $12 mil per year while matching both GH and Hibbert?
I'm thinking send out Hansbrough to the Bobcats for a future 1st round pick without getting anything back? I would much rather have GH signed to a long-term contract at $8 million than have an aging and declining Nash for $12 million. Nash would be a year-or-two band-aid. Hill could be part of the team for years to come and he can play two positions.......

Re: What If We Don't Resign Hibbert AND Hill? (Another option) I would much rather have GH signed to a long-term contract at $8 million than have an aging and declining Nash for $12 million. Nash would be a year-or-two band-aid. Hill could be part of the team for years to come and he can play two positions....... I definitely agree. And to add to that, our style of play does not play to Nash's strengths. He needs to go to a fast-paced team that places little emphasis on defense, or post play. He was at his best when he played with 3 other 3pt shooters, and Amare in the Pick N Roll. These ARE NOT strengths of the Pacers. Would he help our offense for 2 or 3 years? Yes, but we would have to completely overhaul our offensive system, change our defensive principles, and those system changes don't play to the strengths of our personnel.

Re: What If We Don't Resign Hibbert AND Hill? (Another option) I'm not sure where to post this, but a player who is a UFA that no one has mentioned: Camby. I realize he's long in the tooth, but he could be used as a filler replacement for Hibbert until something better comes along. He rebs, blocks shots, plays "D", and can get you some points. I can't imagine his cost is going to be that high. He can tutor Plumlee as well. He'd be good as just a b/u C who can give you rebs, "D", and patrol the paint. Something to consider.

Re: What If We Don't Resign Hibbert AND Hill?
(Another option) We don't resign either so we can throw money at Old, Ancient, and/or Moses, then I lead the torch and pitchfork mob towards BLFH.

"Nobody wants to play against Tyler Hansbrough NO BODY!" ~ Frank Vogel "And David put his hand in the bag and took out a stone and slung it. And it struck the Philistine on the head and he fell to the ground. Amen. " Want your own "Just Say No to Kamen" from @mkroeger pic? http://twitpic.com/a3hmca

Re: What If We Don't Resign Hibbert AND Hill? (Another option) We don't resign either so we can throw money at Old, Ancient, and/or Moses, then I lead the torch and pitchfork mob towards BLFH. I never expected some to be enthralled with the idea. Your view is duly noted. Although you might want to do some checking b4 you make a comment about age, it helps. 2011-12 season stats: Camby in 24 MPG ... 9 rebs... 1.8 BS; Hibbert in 30 MPG ... 8.8 rebs... 1.7 BS. That's in 5+ fewer minutes than Hibbert. He'd have been the top rebounder and shot blocker on the Pacers last year, AND he's 12 years older than Hibbert! I guess you missed the part about him being signed as a b/u too? Is your idea of a rookie who has never laced up his sneakers on an NBA court the ideal b/u C? Or maybe you like the often-injured Pendergraph better.

Re: What If We Don't Resign Hibbert AND Hill? (Another option) I have a real hard time thinking we should throw money at players who are OLDER than our head coach, hence Old (Kaman), Ancient (Nash), and/or Moses (Camby). (And yes, I know Nash and Camby are the same age, I had to differentiate.)

"Nobody wants to play against Tyler Hansbrough NO BODY!" ~ Frank Vogel "And David put his hand in the bag and took out a stone and slung it. And it struck the Philistine on the head and he fell to the ground. Amen. " Want your own "Just Say No to Kamen" from @mkroeger pic? http://twitpic.com/a3hmca
Media caption: George Hamilton apologises for Cookstown disco crush comment

Northern Ireland's police chief has apologised for describing officers' actions on the night of the Greenvale Hotel crush as "brave". George Hamilton met the family of Morgan Barnard, 17, who died in the tragedy in Cookstown, County Tyrone. "No public commentary by me or any police officer will detract from the independent investigation," he said. In April, he said officers were brave but there were "questions to answer" as they held back to await support. Lauren Bullock, 17, and Connor Currie, 16, also died on the night in March. Morgan Barnard's family said it had found the chief constable's comments extremely hurtful, and had asked for the private meeting. The deaths happened as hundreds of young people were queuing to get into a St Patrick's Day disco. An investigation by the Police Ombudsman into the initial police response to the incident is being carried out. It was previously revealed that the first officers who arrived at the scene of the tragedy withdrew to await support.

Image caption (copyright Euphoria Allstar Cheerleading/Family/Edendork GAC): Lauren Bullock, 17, Morgan Barnard, 17, and 16-year-old Connor Currie died after the incident at the hotel on 17 March.

"I expressed my deep regret if any comment that I have made in relation to the incident has caused the family any further distress," said Mr Hamilton. "The investigation into the circumstances that led to the deaths of Morgan Barnard, Lauren Bullock and Connor Currie continues and we are very grateful to all the witnesses who have come forward with information."
Q: Debug assertion failed, C++ vector subscript out of range I know this question has been asked before but even after looking at all the others I can't seem to figure this out. I am getting "vector subscript out of range" for the following code: double forward_price(int number_divs, std::vector<double> *dividends, std::vector<double> *time_dividends) { int i = number_divs - 1; for (; i >= 0 & (*time_dividends)[i] > 0.0;) i--; for (; i >= 0 & (*time_dividends)[i] > 0.0; i--) { forward_px -= (*dividends)[i]; } return forward_px; } int number_divs = 3; std::vector<double> dividends = { .5, .6, .58 }; std::vector<double> time_dividends = { .04, .198, .6 }; double forward_div = forward_price(3, &dividends, &time_dividends); As far as I can tell it is coming from the second for statement, and it works when I change that one to i >= 1. I'm able to call time_dividends[0] and dividends[0], so I can't tell why this isn't working. A: The single & in i >= 0 & (*time_dividends)[i] > 0.0 is a "bitwise AND"; you probably want the "logical AND" &&. With &, both operands are always evaluated, so once i has been decremented to -1 the expression (*time_dividends)[i] still indexes the vector at -1, which is what triggers the out-of-range assertion. && short-circuits: when i >= 0 is false, the subscript on the right-hand side is never evaluated.
Neuropsychiatric manifestations and their outcomes in chronic hypocalcaemia. Hypocalcaemia is an established cause of neurological and psychiatric disease with numerous clinical manifestations. The aim of the study was to determine the outcome of severe neuropsychiatric manifestations of chronic hypocalcaemia after correction of calcium levels. Clinical and laboratory data of 22 patients seen between 1999 and 2009 were retrospectively analysed. Calcium, magnesium, phosphorus, albumin and parathormone values were measured in all cases. All patients except infants under one year of age had computed tomography (CT) scans of the head. Most patients (n = 19; 86%) presented with generalised tonic clonic convulsions while three had seizures with psychiatric manifestations. Movement disorders were present in 4 patients and one had candida meningitis. Nineteen of the 22 patients had primary hypoparathyroidism, of which one had associated mucocutaneous candidiasis. One had pseudohypoparathyroidism and two had vitamin D deficiency. All patients improved with calcitriol and calcium treatment. Twelve of the 14 patients with convulsions could be taken off anticonvulsants. Hemiballismus disappeared in one patient, choreiform movements in another, and dystonia in two patients. Psychiatric manifestations improved but did not disappear in the three patients who had them. Adult patients with seizures or neuropsychiatric manifestations should have calcium levels checked. Seizure disorders due to chronic hypocalcaemia had an excellent prognosis on correction of serum calcium levels. Movement disorders improved markedly. Psychiatric manifestations did not improve substantially on correction of serum calcium levels.
Despite the challenges they face, the transgender community can prove to be a great source of inspiration. Here we list 10 of the most influential transgender advocates of all time: notable trans people who have left, or are making, a mark on society today. 1. Caitlyn Jenner Credit: Annie Leibovitz - vanityfair.com Formerly known as Bruce Jenner, the famous stepfather of the Kardashian clan appeared as a woman for the first time, on the June 2015 cover of US magazine Vanity Fair. The iconic cover was shot by esteemed photographer Annie Leibovitz. 2. Laverne Cox Since playing a role in the successful series Orange is the New Black, Laverne has put her newfound celebrity status to good use. The outspoken actress is not shy of voicing her opinion and support towards the equal rights of the transgender community. 3. Chaz Bono Born Chastity Sun Bono to famous parents Sonny and Cher, Chaz's journey is probably considered one of the most memorable stories of all time. Coming to terms with the fact that he felt different to his peers, Chaz made the courageous step of undergoing gender reassignment from 2008 to 2010. 4. Carmen Carrera After appearing on RuPaul's Drag Race, this trans model catapulted to reality TV fame. Carmen has appeared on various magazine covers and starred in interviews alongside other famous transgender celebrities including Laverne Cox. 5. Candis Cayne Playing Carmelita on prime time series Dirty Sexy Money, Candis was the first transgender actress to play the role of a recurring trans character. 6. Fallon Fox After coming out as transgender in 2013, Fallon became the first ever transgender fighter in the history of mixed martial arts. Now that's a true fighter. 7. Isis King Another one to prove that beauty comes in all different forms, Isis became the first ever transgender contestant in America's Next Top Model history. 8.
Renee Richards Renee made trans rights history in 1977 when she won the ability to compete in professional tennis tournaments as a woman. 9. Lana Wachowski Alongside her brother Andy, Lana has become a well-respected name in Hollywood, producing and directing a number of films including the Matrix series and Cloud Atlas. 10. Alexis Arquette Since identifying as a female from a young age, Alexis has always been a familiar face within the transgender community. Enjoying a variety of big screen roles as an actress, probably the one that springs to mind most is Arquette: She's My Brother, detailing her journey from male to female.
import {Direction, Directionality} from '@angular/cdk/bidi'; import { DOWN_ARROW, END, HOME, LEFT_ARROW, PAGE_DOWN, PAGE_UP, RIGHT_ARROW, UP_ARROW, } from '@angular/cdk/keycodes'; import {dispatchFakeEvent, dispatchKeyboardEvent} from '@angular/cdk/testing/private'; import {Component, ViewChild} from '@angular/core'; import {waitForAsync, ComponentFixture, TestBed} from '@angular/core/testing'; import {MatNativeDateModule} from '@angular/material/core'; import {AUG, DEC, FEB, JAN, JUL, JUN, MAR, MAY, NOV, OCT, SEP} from '@angular/material/testing'; import {By} from '@angular/platform-browser'; import {MatCalendarBody} from './calendar-body'; import {MatYearView} from './year-view'; describe('MatYearView', () => { let dir: {value: Direction}; beforeEach(waitForAsync(() => { TestBed.configureTestingModule({ imports: [ MatNativeDateModule, ], declarations: [ MatCalendarBody, MatYearView, // Test components. StandardYearView, YearViewWithDateFilter, YearViewWithDateClass, ], providers: [ {provide: Directionality, useFactory: () => dir = {value: 'ltr'}} ] }); TestBed.compileComponents(); })); describe('standard year view', () => { let fixture: ComponentFixture<StandardYearView>; let testComponent: StandardYearView; let yearViewNativeElement: Element; beforeEach(() => { fixture = TestBed.createComponent(StandardYearView); fixture.detectChanges(); let yearViewDebugElement = fixture.debugElement.query(By.directive(MatYearView))!; yearViewNativeElement = yearViewDebugElement.nativeElement; testComponent = fixture.componentInstance; }); it('has correct year label', () => { let labelEl = yearViewNativeElement.querySelector('.mat-calendar-body-label')!; expect(labelEl.innerHTML.trim()).toBe('2017'); }); it('has 12 months', () => { let cellEls = yearViewNativeElement.querySelectorAll('.mat-calendar-body-cell')!; expect(cellEls.length).toBe(12); }); it('shows selected month if in same year', () => { let selectedEl = 
yearViewNativeElement.querySelector('.mat-calendar-body-selected')!; expect(selectedEl.innerHTML.trim()).toBe('MAR'); }); it('does not show selected month if in different year', () => { testComponent.selected = new Date(2016, MAR, 10); fixture.detectChanges(); let selectedEl = yearViewNativeElement.querySelector('.mat-calendar-body-selected'); expect(selectedEl).toBeNull(); }); it('fires selected change event on cell clicked', () => { let cellEls = yearViewNativeElement.querySelectorAll('.mat-calendar-body-cell'); (cellEls[cellEls.length - 1] as HTMLElement).click(); fixture.detectChanges(); let selectedEl = yearViewNativeElement.querySelector('.mat-calendar-body-selected')!; expect(selectedEl.innerHTML.trim()).toBe('DEC'); }); it('should emit the selected month on cell clicked', () => { let cellEls = yearViewNativeElement.querySelectorAll('.mat-calendar-body-cell'); (cellEls[cellEls.length - 1] as HTMLElement).click(); fixture.detectChanges(); const normalizedMonth: Date = fixture.componentInstance.selectedMonth; expect(normalizedMonth.getMonth()).toEqual(11); }); it('should mark active date', () => { let cellEls = yearViewNativeElement.querySelectorAll('.mat-calendar-body-cell'); expect((cellEls[0] as HTMLElement).innerText.trim()).toBe('JAN'); expect(cellEls[0].classList).toContain('mat-calendar-body-active'); }); it('should allow selection of month with less days than current active date', () => { testComponent.date = new Date(2017, JUL, 31); fixture.detectChanges(); testComponent.yearView._monthSelected({value: JUN, event: null!}); fixture.detectChanges(); expect(testComponent.selected).toEqual(new Date(2017, JUN, 30)); }); describe('a11y', () => { it('should set the correct role on the internal table node', () => { const table = yearViewNativeElement.querySelector('table')!; expect(table.getAttribute('role')).toBe('presentation'); }); describe('calendar body', () => { let calendarBodyEl: HTMLElement; let calendarInstance: StandardYearView; beforeEach(() => { 
calendarInstance = fixture.componentInstance; calendarBodyEl = fixture.debugElement.nativeElement.querySelector('.mat-calendar-body') as HTMLElement; expect(calendarBodyEl).not.toBeNull(); dir.value = 'ltr'; fixture.componentInstance.date = new Date(2017, JAN, 5); dispatchFakeEvent(calendarBodyEl, 'focus'); fixture.detectChanges(); }); it('should decrement month on left arrow press', () => { dispatchKeyboardEvent(calendarBodyEl, 'keydown', LEFT_ARROW); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2016, DEC, 5)); dispatchKeyboardEvent(calendarBodyEl, 'keydown', LEFT_ARROW); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2016, NOV, 5)); }); it('should increment month on left arrow press in rtl', () => { dir.value = 'rtl'; dispatchKeyboardEvent(calendarBodyEl, 'keydown', LEFT_ARROW); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2017, FEB, 5)); dispatchKeyboardEvent(calendarBodyEl, 'keydown', LEFT_ARROW); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2017, MAR, 5)); }); it('should increment month on right arrow press', () => { dispatchKeyboardEvent(calendarBodyEl, 'keydown', RIGHT_ARROW); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2017, FEB, 5)); dispatchKeyboardEvent(calendarBodyEl, 'keydown', RIGHT_ARROW); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2017, MAR, 5)); }); it('should decrement month on right arrow press in rtl', () => { dir.value = 'rtl'; dispatchKeyboardEvent(calendarBodyEl, 'keydown', RIGHT_ARROW); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2016, DEC, 5)); dispatchKeyboardEvent(calendarBodyEl, 'keydown', RIGHT_ARROW); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2016, NOV, 5)); }); it('should go up a row on up arrow press', () => { dispatchKeyboardEvent(calendarBodyEl, 'keydown', UP_ARROW); fixture.detectChanges(); 
expect(calendarInstance.date).toEqual(new Date(2016, SEP, 5)); calendarInstance.date = new Date(2017, JUL, 1); fixture.detectChanges(); dispatchKeyboardEvent(calendarBodyEl, 'keydown', UP_ARROW); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2017, MAR, 1)); calendarInstance.date = new Date(2017, DEC, 10); fixture.detectChanges(); dispatchKeyboardEvent(calendarBodyEl, 'keydown', UP_ARROW); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2017, AUG, 10)); }); it('should go down a row on down arrow press', () => { dispatchKeyboardEvent(calendarBodyEl, 'keydown', DOWN_ARROW); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2017, MAY, 5)); calendarInstance.date = new Date(2017, JUN, 1); fixture.detectChanges(); dispatchKeyboardEvent(calendarBodyEl, 'keydown', DOWN_ARROW); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2017, OCT, 1)); calendarInstance.date = new Date(2017, SEP, 30); fixture.detectChanges(); dispatchKeyboardEvent(calendarBodyEl, 'keydown', DOWN_ARROW); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2018, JAN, 30)); }); it('should go to first month of the year on home press', () => { calendarInstance.date = new Date(2017, SEP, 30); fixture.detectChanges(); dispatchKeyboardEvent(calendarBodyEl, 'keydown', HOME); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2017, JAN, 30)); dispatchKeyboardEvent(calendarBodyEl, 'keydown', HOME); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2017, JAN, 30)); }); it('should go to last month of the year on end press', () => { calendarInstance.date = new Date(2017, OCT, 31); fixture.detectChanges(); dispatchKeyboardEvent(calendarBodyEl, 'keydown', END); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2017, DEC, 31)); dispatchKeyboardEvent(calendarBodyEl, 'keydown', END); fixture.detectChanges(); 
expect(calendarInstance.date).toEqual(new Date(2017, DEC, 31)); }); it('should go back one year on page up press', () => { calendarInstance.date = new Date(2016, FEB, 29); fixture.detectChanges(); dispatchKeyboardEvent(calendarBodyEl, 'keydown', PAGE_UP); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2015, FEB, 28)); dispatchKeyboardEvent(calendarBodyEl, 'keydown', PAGE_UP); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2014, FEB, 28)); }); it('should go forward one year on page down press', () => { calendarInstance.date = new Date(2016, FEB, 29); fixture.detectChanges(); dispatchKeyboardEvent(calendarBodyEl, 'keydown', PAGE_DOWN); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2017, FEB, 28)); dispatchKeyboardEvent(calendarBodyEl, 'keydown', PAGE_DOWN); fixture.detectChanges(); expect(calendarInstance.date).toEqual(new Date(2018, FEB, 28)); }); }); }); }); describe('year view with date filter', () => { it('should disable months with no enabled days', () => { const fixture = TestBed.createComponent(YearViewWithDateFilter); fixture.detectChanges(); const cells = fixture.nativeElement.querySelectorAll('.mat-calendar-body-cell'); expect(cells[0].classList).not.toContain('mat-calendar-body-disabled'); expect(cells[1].classList).toContain('mat-calendar-body-disabled'); }); it('should not call the date filter function if the date is before the min date', () => { const fixture = TestBed.createComponent(YearViewWithDateFilter); const activeDate = fixture.componentInstance.activeDate; const spy = spyOn(fixture.componentInstance, 'dateFilter').and.callThrough(); fixture.componentInstance.minDate = new Date(activeDate.getFullYear() + 1, activeDate.getMonth(), activeDate.getDate()); fixture.detectChanges(); expect(spy).not.toHaveBeenCalled(); }); it('should not call the date filter function if the date is after the max date', () => { const fixture = TestBed.createComponent(YearViewWithDateFilter); 
const activeDate = fixture.componentInstance.activeDate; const spy = spyOn(fixture.componentInstance, 'dateFilter').and.callThrough(); fixture.componentInstance.maxDate = new Date(activeDate.getFullYear() - 1, activeDate.getMonth(), activeDate.getDate()); fixture.detectChanges(); expect(spy).not.toHaveBeenCalled(); }); }); describe('year view with custom date classes', () => { let fixture: ComponentFixture<YearViewWithDateClass>; let yearViewNativeElement: Element; let dateClassSpy: jasmine.Spy; beforeEach(() => { fixture = TestBed.createComponent(YearViewWithDateClass); dateClassSpy = spyOn(fixture.componentInstance, 'dateClass').and.callThrough(); fixture.detectChanges(); let yearViewDebugElement = fixture.debugElement.query(By.directive(MatYearView))!; yearViewNativeElement = yearViewDebugElement.nativeElement; }); it('should be able to add a custom class to some month cells', () => { let cells = yearViewNativeElement.querySelectorAll('.mat-calendar-body-cell'); expect(cells[0].classList).toContain('even'); expect(cells[1].classList).not.toContain('even'); }); it('should call dateClass with the correct view name', () => { expect(dateClassSpy).toHaveBeenCalledWith(jasmine.any(Date), 'year'); }); }); }); @Component({ template: ` <mat-year-view [(activeDate)]="date" [(selected)]="selected" (monthSelected)="selectedMonth=$event"></mat-year-view>` }) class StandardYearView { date = new Date(2017, JAN, 5); selected = new Date(2017, MAR, 10); selectedMonth: Date; @ViewChild(MatYearView) yearView: MatYearView<Date>; } @Component({ template: ` <mat-year-view [activeDate]="activeDate" [dateFilter]="dateFilter" [minDate]="minDate" [maxDate]="maxDate"></mat-year-view>` }) class YearViewWithDateFilter { activeDate = new Date(2017, JAN, 1); minDate: Date | null = null; maxDate: Date | null = null; dateFilter(date: Date) { if (date.getMonth() == JAN) { return date.getDate() == 10; } if (date.getMonth() == FEB) { return false; } return true; } } @Component({ template: 
`<mat-year-view [activeDate]="activeDate" [dateClass]="dateClass"></mat-year-view>` }) class YearViewWithDateClass { activeDate = new Date(2017, JAN, 1); dateClass(date: Date) { return date.getMonth() % 2 == 0 ? 'even' : undefined; } }
Role of atrial natriuretic peptide in the natriuretic response to central volume expansion induced by head-out water immersion in sodium-retaining cirrhotic subjects. It is possible that abnormalities in atrial natriuretic peptide may be involved in the pathogenesis of sodium retention in edema states. We performed a study in a group of 12 sodium-retaining cirrhotic subjects to determine the role of this peptide in mediating differences in the natriuretic response to central volume expansion induced by head-out water immersion. Each patient was maintained for seven days on a 20-mmol sodium intake, and then studied on both control and immersion days. On each day, measurements of the following were obtained: plasma atrial natriuretic peptide, hematocrit, electrolytes, creatinine, plasma renin activity, serum aldosterone, urinary cyclic guanosine monophosphate (cGMP), blood pressure, and pulse rate. In six subjects, immersion resulted in a marked natriuresis sufficient to induce negative sodium balance by the third hour, and these subjects were termed "responders." In these six patients, baseline pre-immersion levels of plasma renin activity and serum aldosterone were all below 3 ng/liter/second and 4 nmol/liter, respectively. In the other six subjects, the natriuretic response to immersion was markedly blunted and insufficient to induce negative sodium balance, and these subjects were termed "non-responders." In these subjects, baseline pre-immersion levels of plasma renin activity and aldosterone were all above 3.5 ng/liter/second and 5 nmol/liter, respectively, and were significantly elevated compared with the responders, and compared with the normal range for control subjects consuming the same sodium intake. In both groups of cirrhotic subjects, baseline levels of plasma atrial natriuretic peptide and cGMP excretion were significantly and comparably elevated compared with the normal range for control subjects ingesting the same sodium intake. 
Despite the marked difference in the natriuretic response to immersion in both responders and non-responders, there was a significant and comparable further elevation of plasma atrial natriuretic peptide and urinary cGMP excretion during immersion, compared with the control day. These results suggest that the relative resistance to the natriuretic action of atrial natriuretic peptide in the non-responders compared with the responders is mediated by anti-natriuretic factors acting at a level parallel with or beyond atrial natriuretic peptide release or coupling to its cGMP-linked receptors.
Whatcom County and the state's ecology department added the two controversial questions to the proposed coal terminal's environmental impact study. By Floyd McKay, Crosscut News.

Gateway Pacific, the giant export terminal north of Bellingham, took a major step forward Thursday as developers agreed to a $7.2 million contract for an environmental study on what would become Washington's first coal-export terminal. The new agreement, with the engineering firm CH2M Hill, follows on the heels of an earlier $1.9 million deal to conduct public meetings and prepare the scope of review. That brings the total environmental review cost for Gateway Pacific to $9,089,911, according to Whatcom County officials who will supervise the contract. SSA Marine and BNSF Railway — both signed Thursday — will deal with subcontractors who will assess the project's impacts on human and animal health, marine life, wetlands, railway and shipping traffic and Native American culture. Whatcom County posted the contracts here on Friday.

In what could be a precedent-setting proceeding, Whatcom County and the Washington Department of Ecology have also called for a review of the impacts of increased railroad traffic across the state, as well as a study on the impact on climate change of burning the coal in Asia. The call for a railway study brought objections from BNSF, which inserted statements in the contract: “BNSF Railway reserves its right to challenge the (state) EIS and any related agency actions at any time and in any forum . . . BNSF Railway in no way consents to jurisdiction of Ecology or County over any BNSF Railway action . . .” The railroad, which would haul the coal from Wyoming's Powder River Basin to the Gateway site, insists that federal law and agencies govern its operations, under the Interstate Commerce section of the U.S. Constitution. The U.S. Army Corps of Engineers has opted for a narrower scope of environmental review, excluding rail traffic and climate change.
The Corps, however, has sole jurisdiction to work with the Lummi Nation, which has raised strong objections to the terminal's potential impact on fishing and cultural rights. The site was once a Lummi fishing and hunting ground, and its waters are used by tribal fishers.

Although the contracts call for the complex studies to be completed by April 30, 2015, nearly all proceedings thus far have taken longer than expected. The contracts allow for extensions if the studies are not completed on time. Once the Draft EIS is finished, it will face public hearings and action by several agencies, the first of which is likely to be the Whatcom County Council.

SSA Marine has been working for an export terminal at Cherry Point, north of Bellingham, for nearly three decades. An earlier proposal, smaller and without coal, was approved by Whatcom County in 1997, but encountered environmental objections which resulted in a negotiated agreement in 1999. This plan is more than six times larger and is currently focused on coal. Peabody Coal and Cloud Peak Mining have already secured commitments if the terminal proceeds.

Gateway Pacific would accept some 48 million tons of coal a year, drawing nearly 1,000 ships through Puget Sound annually. Coal trains, which would run through the state from Spokane to the Columbia Gorge and up Western Washington, would number about nine loaded and nine empty trains a day, each a mile and a half long.

A similar contract is being readied for the Millennium terminal proposed at Longview on the Columbia River. It is also expected to have a broad scope of environmental review. A third export terminal, considerably smaller, has been proposed for the Columbia at Boardman, upriver from Portland. That environmental review is less sweeping than the ones in Washington.

Amid all the debate about the risk of coal trains spreading coal dust into areas near the railroad tracks, it's often forgotten that the subject is controversial even within the industry.
How to control coal dust—or whether it can be done at all to a meaningful degree—has been the subject of a long-running dispute between those who ship the coal and those who carry it: the coal companies or utilities that ship the coal are on one side, and the railroads that carry it are on the other.

Sightline Institute has a new accounting of Northwest oil train projects.

by Eric de Place

Sightline is re-releasing a popular report: The Northwest's Pipeline on Rails. It's the most comprehensive regional analysis of plans to ship crude oil by train. Moving large quantities of oil by rail would represent a major change for the Northwest's energy economy, and the plans now in development put the region's communities at risk.

Why does it matter? If all of the projects were built and operated at full capacity, they would put an estimated 11 loaded mile-long trains per day on the Northwest's railway system. Many worry about the risk of oil spills from thousands of loaded oil trains that may soon traverse the region each year. A string of high-profile oil train explosions has raised widespread concern about the risks of moving crude oil by rail through populated areas. States and local governments across North America are beginning to seek more information about oil shipments and demand stricter tank car standards from federal regulators.

Taken together, the oil-by-rail projects planned for the Northwest would be capable of delivering enough fuel to exceed the region's oil refining capacity. Ironically, two of the facilities that would handle oil by rail were originally built to supply renewable fuels. The projects are designed to transport fuel from the Bakken oil formation in North Dakota, but the infrastructure could also be used to export Canadian tar sands oil.
In fact, if all of the oil-by-rail projects were built, they would be capable of moving 785,000 barrels per day—that's more oil capacity than either of the controversial pipelines planned in British Columbia, and nearly as much as the planned Keystone XL pipeline. On Puget Sound, three of the region's five refineries already receive oil-by-rail shipments, and the other two are planning new facilities. Three proposals for Grays Harbor would move oil along the Washington coast. And on the Columbia River, one port terminal is already receiving oil-by-rail shipments, while officials at Vancouver are planning by far the region's largest facility.

by Floyd McKay, Crosscut.com

They call it "The Funnel," a 70-mile confluence of BNSF rail tracks that feeds nearly 50 trains daily into Spokane. According to an exhaustive research study, as many as 82 additional coal and oil trains could cascade into The Funnel in another decade.

The study warns that the "heavy traffic ahead" could damage both agriculture and intermodal shipping that must compete with coal and oil trains on a rail system that faces limits both around Spokane and across the state.

Big Energy's race to the Pacific has already generated a lot of controversy west of the Cascades, but impacts on Eastern Washington, the Columbia Gorge and Montana are likely to be even more significant, according to the report for the Western Organization of Resource Councils, based in Billings. (The council describes itself as a regional network of grassroots groups dedicated to building "a democratic, sustainable and just society through community action.") The research was done by Terry Whiteside and Gerald Fauth, both with deep backgrounds in transportation.

The report finds there will be sharp growth in the volume of U.S. coal exports through the Pacific Northwest. By 2023, the Northwest could export 170 million tons if all proposed or expanded ports go ahead; 2012 saw 11.8 million tons of U.S. coal exports, all via Canada.
Coal trains (full and empty) would go from 7 a day currently to 52 to 62 daily in 2023. Oil trains, still relatively few, are seen as quickly reaching 22 a day — nearly half would go to Vancouver, Wash., where a proposed Tesoro-Savage terminal is now being studied by the state Energy Facility Site Evaluation Council.

The export of crude oil is barred by federal law; the Bakken crude would go from trains to West Coast refineries or to terminals such as Vancouver for barging to refineries. But efforts to lift the export ban have been discussed in Congress.

The potential of 85 daily energy trains would be added to a system already nearing capacity in several key sections. Not all proposed terminals will be approved, the authors note, but even if 75 percent are, the traffic would be very heavy. Their report is the first comprehensive study since Bakken oil entered the Big Energy mix.

The authors raise the potential impact on Northwest industries of coal and oil trains dominating the rail infrastructure and, in essence, bullying smaller industries off the routes that many have used for generations:

As a result of the high volume and profitable revenue, [Powder River Basin to Pacific Northwest] export coal movements and Bakken oil trains to the [Pacific Northwest] would likely be favored by the railroads over other types of existing railroad traffic. The remaining capacity available to other railroad shippers would be limited, constrained, and more expensive. ... Other freight shippers would likely see increased costs and higher railroad rates as a result of rail congestion and the limitations on available rail capacity. Railroad transit times would likely increase for other railroad traffic as a result of congestion.

BNSF has consistently maintained that it can handle additional traffic without shorting longtime customers. Then-CEO Matt Rose in November 2012 told Puget Sound Business Journal, "Why would we want to haul one type of freight at the expense of another?
The answer would be, we're going to handle all customers' business — that's in our own self-interest. We think with long-term planning, and working with WSDOT, and the other agencies we deal with, and providing proper capital, we can do that."

Whiteside and Fauth say Rose understates the potential traffic. The matter of effects on existing industries has not been widely discussed by regional business and industry leaders, who have either supported the export plans or, more frequently, kept out of the controversy. Chambers of Commerce, predictably, have led the drum rolls, in several instances in an alliance with construction and shipping unions.

LONGVIEW, WA - Today, the Washington State Department of Ecology and Cowlitz County announced a broad scope for their Environmental Impact Statement (EIS) for the proposed coal export terminal in Longview in southwest Washington State. If built, it would be the largest coal export terminal in North America, exporting up to 44 million metric tons of coal per year to Asia.

"It's great to see the Dept. of Ecology and Cowlitz County using their authority to raise questions about the vast threats of coal export," said Gayle Kiser, a local Longview resident and president of Landowners and Citizens for a Safe Community. "This broad scope of the environmental and health review by the agencies reflects our Northwest values and common sense. The entire state of Washington, including Cowlitz County residents, would face impacts from coal export. Coal export would pollute our air and water, and halt the flow of traffic in our towns.
Taxpayers and local governments can't afford to put the blinders on for coal export; our agencies cannot either."

The agencies will take a broad look at the impacts of the proposed terminal through the EIS, including coal dust around the terminal; rail traffic and coal dust in Montana, Idaho and the Columbia River Gorge; and the effects of coal combustion in China on Washington state, in particular carbon and mercury pollution. The Army Corps of Engineers has yet to announce its scope for Longview but took a very narrow one with the Cherry Point terminal.

"The Spokane City Council previously voted unanimously to have our voice heard in the building of coal export facilities, and it's great news that our state agency listened," said Ben Stuckart, Spokane City Council President. "Spokane has much to lose, and little to gain, by allowing all these new coal trains through our town. Such an increase would harm our air quality, transportation systems, and emergency response. Today is a great step in the right direction for Spokane."

The Dept. of Ecology, Cowlitz County and U.S. Army Corps of Engineers received over 215,000 public comments and heard from thousands of citizens last fall during the public comment period. That brings the total to 370,000 comments that have been submitted on coal export proposals in the Northwest. More than 60 local governments, including 28 local jurisdictions, submitted official comments on the proposed Longview terminal. Over 160 elected officials, 500 businesses, and 600 health professionals have expressed concern or opposition to coal export.

"I'm pleased that the State of Washington is including rail impacts to Montana.
These trains don’t just materialize at the Washington border, and the impacts increased coal traffic would have on emergency response times and air quality in Montana cities and towns is significant,” said Dawson Dunning of Montana, whose family has ranched near the proposed Otter Creek coal mine for generations. “We still have a ways to go to make sure that our ranching and agricultural interests are taken under full consideration in light of the drastic impacts increased mining would have on Montana and Wyoming. Any review of these ports needs to take a harder look at the survival of ranching communities and economies like ours.” The Longview terminal is one of three remaining proposals in Washington and Oregon; three proposals have been pulled off the table in the last two years. The proponents of the terminal include Ambre Energy, its American subsidiary Millennium Bulk Logistics, and Arch Coal. The Longview coal export proposal has a rocky history. In 2011, a legal challenge exposed internal documents showing that Ambre and their US subsidiary Millennium Bulk Logistics lied to Cowlitz County and state officials about the size of their project, claiming it would ship five million tons per year when they planned a project more than 10 times that size. The release of the Longview scope comes on the heels of the Oregon Dept. of Environmental Quality yesterday announcing that they will require Ambre Energy to do an additional water quality certification process at their proposed terminal in Boardman, OR on the Columbia River. 
Much of the oil traveling by train to the profusion of new oil-by-rail terminals is shipped in what one Chicago-area leader called the "Ford Pinto of railroad cars." These are the soda-can-shaped tank cars, DOT-111s, built to standards in effect as recently as 2011 that have a "high incidence of failure during accidents." If used to ship crude oil, their design flaws pretty much guarantee that a serious train derailment will lead to oil spills or massive explosions.

One summer night in 2013, a rail accident involving DOT-111s resulted in a catastrophic explosion that killed 47 people in a small town in Quebec. In the months that followed, DOT-111s carrying oil unleashed towering explosions in Alabama, North Dakota, and New Brunswick.

These mishaps were not accidents so much as they were the logical consequence of a sea change in the way that we transport crude oil. A few years ago, a sudden oil boom from shale geologies, such as the Bakken formation of western North Dakota, caught almost everyone by surprise. With few good options for moving the abundant newfound oil to market, companies turned to railroads in a big way: shipments of crude oil by rail spiked, and then spiked again.

Yet shippers are moving oil largely in the old DOT-111 tank cars that for more than 20 years we've known are unsafe. In fact, since 1991, the National Transportation Safety Board (NTSB) has issued several crash investigation and safety recommendation reports involving tank cars, documenting the inadequacies of the DOT-111 standard.

Things came to a head after a high-profile collision in 2009, when a slow-moving train composed of DOT-111 cars hauling ethanol derailed at a road crossing in Cherry Valley, Illinois. The resulting fireball fatally burned a passenger and seriously injured three others in vehicles waiting at the crossing. Local officials had to evacuate residents within a half-mile of the incident.
In its subsequent report on the Cherry Valley explosion, the NTSB once again documented the inability of DOT-111s to withstand the forces of accidents even when traveling at low speeds. Investigators ticked off a long list of known problems: the thinness of the DOT-111 metal shell, lack of shielding for tank ends, weak housings for top fittings, tanks that don't separate from rail car frames during a crash, causing them to rip open, outlet valves that open when handles get caught by objects during a crash, and bottom outlet valves that are difficult to protect. The flaws were so numerous and so severe that the agency urged tank car owners to retrofit all existing tank cars carrying ethanol and crude oil.

The NTSB recommendations were largely ignored. Tank cars are regulated not by the NTSB but by another government body, the US Pipeline and Hazardous Materials Safety Administration (PHMSA), based on standards developed by a private industry group, the Association of American Railroads. (Transport Canada also plays a role in regulatory development.) After the Cherry Valley incident, PHMSA issued new standards to increase the crashworthiness of the tank cars, but only for those ordered after October 2011.

So today, oil trains are mostly composed of the older, flawed DOT-111 tank cars.
According to the railroad industry, 92,000 DOT-111 tank cars are used to move flammable liquids.

Yet simply adding new tank cars to the mix of unsafe cars doesn't work, according to the NTSB, because the "safety benefits [are] not realized if old and new tank cars are commingled." In other words, if a unit train with old and new tank cars derails, the older DOT-111s will almost certainly breach and explode, taking out the newer DOT-111s as well.

Plus, even the newer DOT-111s with thicker shells and shielded ends still have an Achilles' heel: bottom outlet valves, "which have been prone to failure in derailment accidents." During a derailment, when a tank car skids along the ground, the bottom outlet valve's operating levers are bent and pulled, causing the valve to open, or the valve is sheared off altogether. In the Cherry Valley derailment, for example, bottom outlet valves in three tank cars opened and released most, if not all, of the ethanol from those cars. The NTSB found that the bottom outlet valve handle breakaway design in use "has been shown to be of limited effectiveness in preventing product releases from bottom outlets" and that existing standards and regulations for the protection of bottom outlet valves on tank cars "are insufficient to ensure that the valves remain closed during accidents."

It's a risk that has been recognized for many years. In fact, based on NTSB recommendations, members of the Chemical Manufacturers' Association voluntarily upgraded, in the early 1990s, the tank cars hauling hazardous chemicals (DOT-105s and DOT-112s) to eliminate bottom outlet valves because of their inherent danger. But crude oil, even the notoriously combustible oil from the Bakken region, need not be moved in these safer tank cars.

Bottom outlet valves may be the issue to watch. If new oil-by-rail facilities support only tank cars that can be unloaded by a bottom outlet valve, they will guarantee the presence of a needlessly dangerous design for oil trains.
After each oil train derailment, the railroad industry has pointed out that only a tiny percentage of all rail shipments of hazardous materials result in a release caused by a train accident. These industry statistics are at least partially bogus, as EarthFix reporter Tony Shick demonstrated, but in a sense it doesn't matter. With DOT-111 tank cars carrying volatile liquids, any accident rate greater than zero is too high.

To eliminate the risk of a catastrophic explosion, every trip hauling Bakken crude or ethanol has to be perfect. The tracks can never be tampered with; no auto or truck can ever stall in a crossing (or be left on the track maliciously); no mix-ups in communication can ever occur; no mudslide can hit a train. There is no margin for error. Because if a train with older DOT-111 tank cars derails and piles up, or if multiple car-to-car impacts ensue, the tank cars will "almost always" be breached. Even with newer DOT-111 tank cars, the risk is not reduced if they are mixed in with the older version. And a train composed solely of newer tank cars still has failure-prone bottom outlet valves on each and every tank car.

There is a fix for all this: temporarily decommission the outdated DOT-111s pending their upgrade, and run oil only in new or retrofitted tank cars without bottom outlet valves. In the next post, we'll explain why this hasn't happened—and who's behind it.
Woodcote

Woodcote is a village and civil parish in South Oxfordshire, southeast of Wallingford and northwest of Reading, Berkshire. It is in the Chiltern Hills, and the highest part of the village is above sea level. Woodcote lies between the Goring Road and the A4074. It is centred on the village green and Church Farm, with the village hall centred on the crossroads.

History

Prehistoric artefacts have been found in the area, including a polished hand-axe from about 3000 BC, found in the nearby hamlet of Exlade Street and on show in Reading Museum, and a 28 cm Romano-Celtic carved stone head, probably 1st–2nd century, with typical protruding eyes, exaggerated lips and flattened nose. The folds of skin on the neck and the musculature at the back of the head have been carefully detailed. It is of white oolite limestone; it was found at Wayside Green, Woodcote, and is now in Reading Museum (Ref 401-78).

The toponym Woodcote means "cottage in the wood". Woodcote was first documented in 1109, when it was a dependent settlement of South Stoke, which in turn was a possession of Eynsham Abbey. At the time of the Hundred Rolls in 1279, Woodcote had 14 freeholders and 20 tenants. Woodcote's population grew thereafter but then declined, perhaps as a result of the Black Death. In 1366, as a result of depopulation, 15 virgates of land at Woodcote were vacant.

Woodcote Manor may date from the 12th century. In 1550 it was called Rawlins Manor. There is a Jacobean barn in the grounds of Woodcote House. Woodcote House itself is a Georgian country house built in 1733. It was remodelled by the architect Detmar Blow in 1910. Since 1942 it has been the premises of The Oratory School, a Roman Catholic day and boarding independent school.

Woodcote used to hold an annual sheep fair on the first Monday after St Leonard's Day (6 November).
The earliest known record of it is from early in the 18th century, but the link with the feast day of the parish's patron saint suggests the fair may have begun in the Middle Ages. The fair was still being held in 1852.

Woodcote farmed largely on an open field system, with five open fields, until 1853, when an Act of Parliament enabled an enclosure award for South Stoke and Woodcote. Woodcote provided the common pasture for the whole of South Stoke parish, while South Stoke beside the River Thames provided most of the parish's hay meadow.

In the 20th century Woodcote outgrew South Stoke. By 1920 most residents worked outside the parish, many commuting to either Reading or an RAF station at Goring Heath. Woodcote won the Oxfordshire Village of the Year title for 2008.

Churches

By 1406 the parish of St. Andrew, South Stoke had at Woodcote a dependent chapel that served both Woodcote and Exlade Street. The chapel was dedicated to St. Leonard, and there is a record from 1467 of John Chedworth, Bishop of Lincoln, issuing a licence for services at it. Architectural evidence suggests that the chapel, which had an apsidal chancel, was much older and probably dated from the 12th century.

The people of Woodcote and Exlade Street could not afford to pay a priest to serve at the chapel, and in 1597 it was recorded that the vicar of South Stoke held services at St. Leonard's only on Christmas Day, Easter Day and a few other days each year. Some worshippers made the journey to South Stoke to go to church, but most preferred the shorter journey to SS Peter and Paul in the adjacent parish of Checkendon. The law obliged everyone to worship in their own parishes, so since 1595 the Rector of Checkendon had prosecuted people from Exlade Street and Woodcote in the local archdeacon's court for coming to his church. In response the faithful of Exlade Street and Woodcote petitioned John Whitgift, Archbishop of Canterbury, for permission to worship at Checkendon.
Whitgift granted the request, so long as they continued to attend their parish church in South Stoke four times a year. In 1653 the faithful of Woodcote and Exlade Street petitioned for St. Leonard's to be made a separate parish, but their request was not granted.

In 1845–46 St. Leonard's was rebuilt to the designs of the Gothic Revival architect H.J. Underwood. Of the original building little survives except the outer flintwork of the chancel walls. St. Leonard's parish is now a member of The Langtree Team Ministry: a Church of England benefice that also includes the parishes of Checkendon, Ipsden, North Stoke, Stoke Row and Whitchurch-on-Thames. Woodcote also has Roman Catholic and Methodist churches.

Schools

Langtree School, The Oratory School and Woodcote Primary School are all in the village. Langtree School is a comprehensive school and recently became a DfES Specialist Performing Arts College. Woodcote Breakfast Club is based in Langtree School, and Woodcote After School Club is based in the primary school.

There are two pre-schools. The Cabin pre-school was founded by Mrs Rose Hunt in 1974. It occupied two previous homes until 1986, when Mrs Bella Saunders, the chairperson at the time, and the management committee began raising funds for a new building. £10,000 was raised in just twelve months. The current building was installed in 1987, during the Christmas holidays, within the grounds of Langtree School. In September 1996 the name was changed from The Cabin Playschool to The Cabin Pre-School.

Amenities

Woodcote has two shops – Londis and Co-op – and two pubs, The Red Lion and The Black Lion. The village post office closed in 2017. There is a children's playground, built in October 2006, beside the main village green, which is next to the village hall. A basketball net is also available. Woodcote has a Women's Institute and a Goring and Woodcote Lions Club. Woodcote is surrounded in many parts by woodland. There are many country footpaths in the area.
Sport

Woodcote / Stoke Row Football Club currently has three teams. The First team plays in the Premier Division of the Thames Valley League; the Reserve team plays in Thames Valley League Division Two, and the Youth team plays in South Chiltern Minor League Division One. The First Team manager is Sam Tucker. Home kit colours are black and white stripes. The away kit is red and white. Woodcote Cricket Club currently plays in the Berkshire Cricket League Premier Division.

Woodcote Rally

Each year Woodcote hosts a steam, vintage and veteran transport and real ale festival, the proceeds of which are donated to local charities and organisations; over the years the rally has raised more than £450,000. The rally includes a funfair.
It’s an old and familiar saying: “If you lie down with dogs, you’ll wake up with fleas.” Not to mention: Poisoned, fake-brand THC vapes killing our kids via the vaping lung illness outbreak of last summer. (Oh. You missed that foreign-sourced public health crisis?) Plus floods of counterfeit nicotine e-cigarette brands habitualizing high-schoolers and undermining heavy-handed efforts at home to keep the products away from kids. Thousands of dangerous, fraudulent, mis- and wrongly-labeled, and even banned products swamping the world’s largest retail platform – and driven to the top of listings, claiming a coveted but misleading “Amazon Choice,” through lies, bogus sales and bribes. Decades of systematic cheating on trade agreements hollowing out America’s industrial base. Blatant and ubiquitous identity theft, cyberfraud and cyberattacks – including, ironically, unrelenting unleashing of computer “viruses.” Sponsorship of rogue nations rushing headlong to develop nukes that can take out Los Angeles. Of course, pandemics that, in flea-like fashion, rapidly infest the entire globe. And resulting record stock-market crashes, free falls in energy and other commodity markets, plummeting bond yields, and maybe, a recession that could undo the unparalleled job gains of the last few years. Hey, Wall Street! Ya maybe paying attention yet? One hates to say “We told you so.” But we told you so: that there was a plethora of grounds beyond illegal trade practices not to do business with China. And with the coronavirus and its far-reaching effects across our public health, society and now economy, we’ve just been reminded of another reason – one bringing new meaning to the term “economic contagion.” By the way – before we get started on empty charges of “racism,” the opening aphorism does not imply in any way that Chinese people are “dogs.” Although to say that their government and business leadership are such would be to insult every canine species. 
What might have been your first hint that China’s tyrants are not reliable, benign and mutually beneficial business partners? The tanks running down protesting students even as their dispatchers were seeking to normalize relations and gain access to our markets? Demonstrations of benevolence in blowing up churches, enslaving pastors, harvesting religious dissidents’ organs, imprisoning ethnic minorities in concentration camps and deploying expropriated technology to create the world’s most pervasive surveillance state? The massive and abiding competitive advantage gained by forcing laborers to toil hundreds of hours of overtime for months on end – under constant watch (even in the toilet) and in unsafe, unsanitary working and living conditions? Rampant looting and forced surrender of prized intellectual property as a price of doing business? Previous scandals involving tainted crayons, toys, lumber, drywall and personal care products? The lack of financial safeguards that one day could catch up the entire world in the collapse of a Potemkin village economic system? Or how about earlier global epidemics such as SARS and other various forms of flu that leaked into the world from a medieval public health infrastructure and unconstrained and disgusting dietary habits – specifically, unregulated meat markets peddling civet cats, pangolins and the like? However serious it may turn out to be, this flareup has given all of us, including Wall Street, a serious and painful wake-up call: that coronavirus and the resulting panics are what happens when a nation – seemingly inextricably – links its economy and entire way of life to a country whose system is completely inimical. But also an opportunity to prove the coupling is extricable after all – by pressing, Hillary Clinton-style, the famous “reset” button, this time with China. Perhaps spelled right (however it’s rendered in Mandarin). And policy-wise, in an opposite, decidedly less accommodative direction. 
Now that this frightening outbreak has captured our collective consciousness, the president should seize the opportunity to take his tough posture vis-à-vis the Chinese to a whole new level – and announce that for the sake of our economic wellbeing, national security, public health and consumer safety, America is indeed going to recast our relationship. That we will progressively wean ourselves off the Middle Kingdom's brutal, criminal dictatorship; its crooked, immoral, hazardous and larcenous business activities; its unhygienic practices; and its antagonistic rhetoric and actions. That, until and unless China cleans up its act, we will promote and even mandate the shift of business and commerce to safer, saner, more reliable, less hostile and most important, more humane locations – including and especially the good old US of A. In short, that America is going to stop lying down with depraved, despotic dogs and waking up fleeced — in every sense.
Fourteen three-year-olds converge in Italy for this Saturday's Grade One Italian Derby. The race will be run at one mile and one half over a good turf course, and the purse is $750,000. None of the horses running in this spot have experience over the distance, and many have never run over an off track, making this a wide-open affair. Let's take a look at them.

#1 Dark Silver Add 16/1
2017 Record – 3-1-0-0
Highest Level of Win – Listed
Pedigree – By Monaco Counsul out of the Smarty Jones x Adjuticating mare Silver Ad. The youngest of her four foals, he is also her only stakes winner.
Analysis – Dark Silver Add began the year with a strong wire-to-wire victory in a listed at one mile. The race was run on an off track and he earned a career-best speed of 97. Since then he was tenth in a grade three at one mile and then tenth again in the mile and one half Grade One Sydney Derby. He did not come anywhere near the early lead in either of those starts. The off track here might help him move up, but he needs to show a complete reversal of recent form to factor.

#2 Bummer of A Summer 7/1
2017 Record – 3-1-2-0
Highest Level of Win – Listed
Pedigree – By Darci Brahma out of the grade one winning Storm Cat x Mr. Prospector mare Cruel Summer. Cruel Summer never ran beyond a mile and three sixteenths and did her best running closer to a mile. She has produced five foals to race; the three oldest are all stakes placed, and Bummer of a Summer is the only stakes winner.
Analysis – Bummer of a Summer is making his graded debut in this spot. In his last two starts he won a listed at a mile and one sixteenth in March and was second in a mile and one eighth listed in April. He ran 98 and 97 speeds respectively in those races. He's never run further than that mile and one eighth race, and he's never tried an off course before. He's a horse who usually sits well off the pace, but he'll need to keep in touch with the leaders and not fall too far behind here.
#3 Sealandia Wisen 23/1
2017 Record – 2-0-1-0
Highest Level of Win – Allowance
Pedigree – by Wiesenpfad out of the Sulamani x Galileo mare Warwhatisitgoodfor. This is her first stakes placed runner from four foals.
Analysis – Sealandia Wisen is taking a lot of chances in this race. It is his first route start, his first graded start and his first start on an off track. His best race came in his 2017 debut, when he finished second by a neck in a seven and one half furlong listed. He ran a career-best 94 speed in that start, closing from last to just miss the win. He ran back under the same conditions in his next start, again falling back early but failing to mount a rally in the stretch. Stranger things have happened in the sim, but a win by this guy would be a big windfall for his connections.

#4 Tete Cittagazze 7/1
2017 Record – 4-2-0-2
Highest Level of Win – Allowance
Pedigree – by Shamardal out of the grade three winning Teofilo x Cozzene mare Tete Felicia. Tete Felicia scored her biggest win at a mile and one quarter but raced well at this distance and even hit the board in longer races. This is her only foal. Tete Felicia was produced by a grade one winning mare who won at distances up to a mile and one quarter but did not hit the board in any tries at longer distances.
Analysis – Tete Cittagazze broke his maiden in his second start over a good turf track. He ran career-best 96 speeds in a February allowance at a mile and three sixteenths and again in his most recent start, a listed at a mile and one quarter. His pedigree suggests that the distance is in reach, but jumping up in class, especially after only being stakes placed, is a big task.

#5 Moon’s Lakehouse 11/1
2017 Record – 3-2-0-1
Highest Level of Win – Allowance
Pedigree – by Malibu Moon out of the stakes winning Dynaformer x Sadler’s Wells mare Constance. Constance won up to a mile and one quarter on the turf; she never raced farther. This is her only foal.
Analysis – Moon’s Lakehouse made his debut over a yielding surface, where he finished fifth. It is difficult to say whether the track was to blame for that performance, as it was run at six furlongs and his form has improved in route races. He’s won his last two starts, a maiden and an allowance race, with 95 and 97 speeds respectively. The maiden came at a mile and one sixteenth, which is the furthest distance he’s ever run. This is a difficult spot to make a stakes debut, especially considering he is also adding a ton of distance. The pedigree potential is there for a distance runner, but until he does it, it’s just potential.

#6 Global Horn 7/2
2017 Record – 3-2-1-0
Highest Level of Win – G2
Pedigree – by Golden Horn out of the Kingmambo x Rahy mare Global King. This is her third graded winning foal, after a grade three winner by Mineshaft and another by Street Cry.
Analysis – Global Horn has really turned a corner this year. He was a solid two-year-old but looks like he may be an exceptional three-year-old. He’s won his last two starts, a listed at a mile and one sixteenth and the Grade Two South African Classic at a mile and one eighth. He ran a huge 108 speed in that start, in which he sat just off the early leaders before cruising to the lead and winning easily. He’s never run on an off track, and while his sire won the Epsom Derby, his dam and her other two graded winning offspring did not fare well over a mile and one quarter.

#7 Spirit Journey 10/1
2017 Record – 2-0-1-0
Highest Level of Win – Listed
Pedigree – Invincible Spirit x Rainbow Quest x Nureyev
Analysis – Spirit Journey ended last season with a residency restricted listed win going one mile. He had a three month holiday before returning in another mile listed, where he ran sixth but with a career-best 99 speed. He’s made two starts over off tracks and was second both times. This includes his most recent race, the Grade Two No Allies at a mile and five sixteenths, in what was a sneaky good race.
He raced in third in the early going before mounting a rally. The race was won by the early leader, but Spirit Journey was closing gamely and held off the late closers.

#8 Rocco Vino 41/1
2017 Record – 5-0-2-2
Highest Level of Win – Allowance
Pedigree – by Le Havre out of a Rock of Gibraltar x Phone Trick mare. He is the best of her four foals. Rocco Vino is from the family of grade one winner Waymart by Statue of Liberty. Waymart did his best running in middle distances on dirt.
Analysis – Rocco Vino needs a big turnaround in form to factor here. He has a career-best speed of 82, a number which he has run twice this year, both times in one mile races. The furthest he has ever run is a mile and one sixteenth, and he’s won twice at that distance in HOT races. He made his debut on a good turf surface and finished sixth that day. Rocco Vino ran in stakes races twice as a two-year-old but failed to hit the board. His pedigree is not one that you would normally see in mile and one half horses.

#9 Wolferts Roost 14/1
2017 Record – 3-0-1-2
Highest Level of Win – Maiden
Pedigree – by Gleneagles out of the Kingmambo x Danzig mare Mushy Taters. Mushy Taters produced the grade one winner Tizater by Tiznow, who earned her biggest score going a mile and one half over a good turf course.
Analysis – Wolferts Roost made his stakes debut in his last start, the Grade Two No Allies at a mile and five sixteenths over a good turf course. He ran a 90 speed while finishing third, beaten six lengths. Previously he ran a 95 speed when second in an allowance at a mile and three sixteenths. This colt obviously has talent and a really nice pedigree (although Gleneagles did his best running at a mile), but this is a big step up. If he runs like his sister Tizater he could win here, but that’s a big if.

#10 Galileo Figura 12/1
2017 Record – 3-1-2-0
Highest Level of Win – Maiden
Pedigree – by Galileo out of the grade one winning Kingmambo x Seattle Slew mare Bella Figura.
She did her best running around a mile and one quarter on dirt. Her five other foals include two listed winners and a grade three winner by Roderic O’Connor.
Analysis – This may be the best bred colt in this contest, but with only a maiden win to his credit he needs that pedigree to kick into gear big time. After losing both 2016 starts, Galileo Figura broke his maiden in his 2017 debut going a mile and one eighth. He jumped straight to stakes company, finishing second in a mile and three sixteenths listed with a 95 speed and then running second again in a listed at a mile and one sixteenth with a 92 speed. This colt should sit off the pace, but not too far back. At 12/1, if you like the pedigree potential, he’s a juicy option.

#11 Phatranger 8/1
2017 Record – 2-0-0-0
Highest Level of Win – G1
Pedigree – by Bushranger out of a Green Desert x Mr. Prospector mare. The dam of fourteen foals, she counts Phatranger as her first grade one winner, but she also has a grade two winner by Bluegrass Cat and three stakes winners.
Analysis – Last season Phatranger won the Grade One Duetoburst Stakes. He also has a listed win at a mile and one sixteenth last November, but he has failed to hit the board in three subsequent starts. Phatranger did run a career-best 97 speed in December when seventh in the Grade One Futurity. Still, his 2017 efforts leave a lot to be desired.

#12 English Moon 10/1
2017 Record – 2-0-0-1
Highest Level of Win – Allowance
Pedigree – English Channel x Montjeu x Dansili
Analysis – English Moon broke his maiden going a mile last September and followed that with an allowance win over the same distance. After a disappointing effort in a listed he was gelded. He returned to the races going one mile, where he finished fourth with a 95 speed. He tried a sprint in his most recent start, finishing third in a listed at seven and one half furlongs. English Channel did win the Breeders’ Cup Turf, and Montjeu of course was a classic winner at this distance.
#13 Furious Fast 11/1
2017 Record – 3-2-1-0
Highest Level of Win – Allowance
Pedigree – Fastnet Rock x Nureyev x Raise A Native
Analysis – Furious Fast comes into this race with steady improvement in all three of his 2017 starts. He started with a maiden win at a mile and one eighth and then demolished an allowance field at a mile and three sixteenths. Most recently he ran third in the mile and one quarter Grade Three Force Stakes. That day he closed from last to be beaten only two lengths.

#14 Over Jo 10/1
2017 Record – 3-0-1-0
Highest Level of Win – Listed
Pedigree – by Twice Over out of a Johannesburg x Nureyev mare. This is the second stakes winner out of the mare from four foals.
Analysis – Over Jo broke the triple digit speed barrier in his last start, the Grade Three Doctor Is In Memorial at a mile and one sixteenth. He was second by two lengths in that start after trailing in last early on. Last year he won a local-bred listed going six furlongs. Twice Over was best at a mile and one quarter and Johannesburg tends to be a sprint influence, so the pedigree is mixed. He’s never run on an off track, so that’s another question mark for him.

Good Luck to All!
DIY CB2 Marlow Vase Look-A-Like Hack 01.26.2016 Let me begin by asking this: where do you get your inspiration? Do you get it primarily from Pinterest? Maybe from shopping at a favorite thrift store or antique mall? For me personally, I find inspiration from all of the above, but I also grab quite a bit of creative insight from the pages of high end catalogs. Even though I rarely splurge on new pieces from stores like Pottery Barn and West Elm, I love leafing through the pages of their mailers to get ideas on styling vignettes, art pieces that I can recreate, and patterns that I can reimagine. Case in point, the Marlow Vase from CB2. Lately, I’ve been fixated on stocking up on vessels for styling things like coffee tables and our fireplace mantel, so when I caught sight of the pretty black and white Marlow Vase in the latest CB2 catalog, it was love at first sight. But then I saw the price tag and my heart fell to the floor. Happily, I was able to get the look for less than $10, and I’m sharing the hack with you below. The original vase from CB2 sports concentric rows of angular black triangles, so the first step was to cut one triangle out of my sheet of peel-and-stick vinyl. I noticed that the triangles on the CB2 vase were technically scalene triangles—in other words, they had three unequal sides and one right angle. So I cut my first triangle to match. Note: feel free to track down pre-cut black triangle stickers if you’re able to. Then, I layered the first cut triangle on top of an uncut section of the vinyl sheet, and treated it like a stencil to cut out the rest of my triangles, one at a time. In the end, I cut 20 triangles, but the amount you’ll need to cut will depend on the size of your own white vase, and the size of your first cut triangle. My triangles measured about 2-1/2-inches-by-2-inches-by-1-inch. With all of your triangles cut, peel off the backer paper and stick them in rows along the side of your white vase. 
Make sure to press down on all sides and in the center to remove air bubbles that might form during placement. Although the original vase from CB2 showed a stair-stepped pattern of triangles, my vase was a little smaller, so that pattern wouldn’t have been quite as visible. Instead, I went with a symmetrical pattern of corner-to-corner triangles all the way up. I was surprised to find that this project took less than 20 minutes to complete, and it truly captures the cool, modern look of the original CB2 vase that I loved so much. I couldn’t be happier to have my new vase in hand, with most of my money still in my pocket.
Q: Project Euler 25 infinite loop

I'm working on Project Euler 25. I worked out how to do Fibonacci and I'm using BigInteger. My program seems to be running in an infinite loop (or so I think). Could it be that it's just taking a long time, or is it actually in an infinite loop? Can someone point me in the right direction so I can fix it?

import java.math.BigInteger;

public class Problem25 {

    public static void main(String[] args) {
        getTerm(0);
    }

    public static void getTerm(int start) {
        BigInteger var = BigInteger.ZERO;
        BigInteger var2 = BigInteger.valueOf(start);
        int counter = 0;
        while (true) {
            BigInteger temp = var.add(var2);
            var = var2;
            var2 = temp;
            counter++;
            if (var.toString().length() > 1000) {
                System.out.print(counter);
            }
        }
    }
}

EDIT: Sorry, people. I thought I had a break; statement in there, but thanks for your responses.

A: You have no condition for terminating the loop:

while (true) { // << always true ;P
    BigInteger temp = var.add(var2);
    var = var2;
    var2 = temp;
    counter++;
    if (var.toString().length() > 1000) {
        System.out.print(counter);
    }
}

So it is an infinite loop. You have two (or even more) options:

Specify in the while (condition) what must hold for the loop to continue for another round.

Add a break; statement to stop the loop when a certain condition evaluates to true.

A: getTerm(0);

Shouldn't this be getTerm(1)? Also, MByD's answer is right, but this is also a critical problem: with a start of 0, both values stay zero forever, so the length check can never trigger and your program will never output.
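Putting both answers together (seed the sequence with 1, and stop the loop once the target length is reached), a fixed version might look like the sketch below; the class and method names are illustrative, not from the original post:

```java
import java.math.BigInteger;

public class Problem25Fixed {

    // Returns the index of the first Fibonacci term (F1 = 1, F2 = 1)
    // whose decimal representation has at least `digits` digits
    // (for digits >= 2).
    public static int firstTermWithDigits(int digits) {
        BigInteger prev = BigInteger.ONE; // F(1)
        BigInteger curr = BigInteger.ONE; // F(2)
        int term = 2;
        while (curr.toString().length() < digits) {
            BigInteger next = prev.add(curr);
            prev = curr;
            curr = next;
            term++;
        }
        return term;
    }

    public static void main(String[] args) {
        // Project Euler 25 asks for the first term with 1000 digits.
        System.out.println(firstTermWithDigits(1000)); // prints 4782
    }
}
```

The loop now has a real exit condition, and seeding both terms with BigInteger.ONE avoids the getTerm(0) trap, where the sequence stays at zero forever.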
Imaging the in vivo fate of human T cells following transplantation in immunoincompetent mice - implications for clinical cell therapy trials. Many forms of adoptive T cell therapy are on the verge of being translated to the clinic. To gain further insight into their immunomodulating functions and to optimize future clinical trials, it is essential to develop techniques to study their homing capacity. CD4+ T cells were labeled using [(111)In]oxine, and the radioactive uptake was determined in vitro before intravenous injection into immunodeficient mice. In vivo biodistribution of [(111)In]oxine-labeled cells or tracer alone was subsequently measured by μSPECT/CT and organ distribution. CD4+ T cells incorporated [(111)In]oxine with a higher labeling yield using Ringer-Acetate compared to 0.9% NaCl. Cellular viability after labeling with [(111)In]oxine was not compromised using less than 0.4 MBq per million cells. After intravenous infusion, CD4+ T cells preferentially homed to the liver (p<0.01) and spleen (p<0.05). This study presents a protocol for labeling T cells with [(111)In]oxine with preserved viability and for in vivo tracking by SPECT for up to 8 days, which can easily be translated to clinical cell therapy trials.
The Blog

The Tax Bill Would Bankrupt Graduate Students and Halt Scientific Progress

Official White House Photo by Lawrence Jackson

Some of our loyal readers may have noticed this column has had an irregular publication schedule lately. This is because I wanted to give everyone a fresh update from the Society for Neuroscience's 2017 annual meeting, the largest gathering of neuroscientists anywhere, where over 30,000 researchers convene every year to discuss the most fascinating and cutting-edge research. Unfortunately, that update will have to wait another week, because today I feel compelled to use my platform to talk about the tax bill currently making its way through Congress. This bill, if passed, would effectively make graduate school impossible for all but the independently wealthy, and would decimate the structure of science as we know it. I typically keep this column apolitical, as my goal is to spread interesting neuroscience knowledge to everyone rather than wading into the political thicket; had this bill been proposed by the other side of the aisle, I would take equal issue. This overarching legislation aims in part to simplify taxes to, as its proponents so often put it, fit on "the back of a postcard." One such "simplification" is the repeal of Section 117(d)(5), a tiny piece of the tax code that makes a huge difference to graduate students. In most STEM graduate programs, students have their tuition waived and are awarded a modest stipend of approximately $20,000-$30,000 per year to focus on their research. Under the current tax code, graduate students are taxed only on their stipends, which makes sense, as this is the only money they actually take home. In the tax bill just approved by the House, this exemption is removed. That means a catastrophic increase in tax burden for all STEM graduate students. Let's take an average graduate student in Columbia's Neurobiology PhD program.
Their take-home income is just under $30,000 from their stipend, but Columbia's tuition (which, again, a graduate student never sees or pays) is nearly $50,000. If the Senate passes the current version of this bill, graduate students will see a tripling of their tax burden, an increase of over $10,000. Essentially, by trying to simplify the tax code, this bill would prevent all but the wealthiest graduate students from pursuing higher education. While some universities may be able to increase stipends to compensate, most cannot afford to. Graduate students are the backbone of labs, and their projects make up the bulk of research happening in the US; without them, there is no science as we know it. Without this tiny line of tax code, programs will slash acceptances, US science productivity will plummet, and the hundreds of innovations that have made us a superpower will grind to a halt.

Like all of STEM, neuroscience relies on the productive output of graduate students. While we are on the cusp of incredible breakthroughs in understanding the brain — many of which could lead to cures for heartbreaking diseases — none of that is possible if this tax bill passes in its current form. This is bigger than politics, and this is bigger than just science. This is about ensuring that the United States continues to be the world's leader in innovative scientific and technological breakthroughs. If you enjoy the tiny computer in your pocket, have yourself been or known someone helped by modern medicine, or believe in the necessity of scientific progress, please take the time to speak out against this bill and ensure that if it progresses, it does so without this provision. You can find your representative's information here; ask them to oppose the repeal of Section 117(d)(5) within the Tax Cuts and Jobs Act.

Next week, I promise we'll be back to our regularly scheduled programming with some fun new neuroscience findings.
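The arithmetic behind that "tripling" claim can be sketched with a quick back-of-the-envelope calculation. The figures below are assumptions for illustration only: approximate 2017 single-filer brackets and the 2017 standard deduction plus personal exemption (about $10,400 combined); the stipend and tuition numbers are the article's Columbia example.

```java
// Back-of-the-envelope sketch of a grad student's federal tax bill,
// before and after taxing the tuition waiver. Bracket thresholds and
// the combined deduction/exemption are approximate 2017 figures,
// assumed here for illustration only.
public class GradTaxSketch {

    // {upper bound of bracket, marginal rate}
    static final double[][] BRACKETS = {
        {9325, 0.10}, {37950, 0.15}, {91900, 0.25}, {Double.MAX_VALUE, 0.28}
    };

    public static double tax(double income) {
        double taxable = Math.max(0, income - 10400); // deduction + exemption
        double owed = 0, lower = 0;
        for (double[] b : BRACKETS) {
            // Tax only the slice of taxable income that falls in this bracket.
            owed += b[1] * Math.max(0, Math.min(taxable, b[0]) - lower);
            if (taxable <= b[0]) break;
            lower = b[0];
        }
        return owed;
    }

    public static void main(String[] args) {
        double stipend = 30000, tuition = 50000;
        double current = tax(stipend);            // taxed on take-home pay only
        double proposed = tax(stipend + tuition); // tuition waiver counted as income
        System.out.printf("current: ~$%.0f, proposed: ~$%.0f, increase: ~$%.0f%n",
                current, proposed, proposed - current);
    }
}
```

Under these assumptions the bill grows from roughly $2,500 to roughly $13,000, an increase north of $10,000, consistent with the article's claim.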
using System;
using System.Collections.Generic;
using System.Reflection;
using System.Xml.Serialization;
using NewLife.Reflection;

namespace NewLife.RocketMQ.Protocol
{
    /// <summary>Send-message request header</summary>
    public class SendMessageRequestHeader
    {
        #region Properties
        /// <summary>Producer group</summary>
        [XmlElement("a")]
        public String ProducerGroup { get; set; }

        /// <summary>Topic</summary>
        [XmlElement("b")]
        public String Topic { get; set; }

        /// <summary>Default topic</summary>
        [XmlElement("c")]
        public String DefaultTopic { get; set; }

        /// <summary>Queue count for the default topic</summary>
        [XmlElement("d")]
        public Int32 DefaultTopicQueueNums { get; set; }

        /// <summary>Queue ID</summary>
        [XmlElement("e")]
        public Int32 QueueId { get; set; }

        /// <summary>System flag</summary>
        [XmlElement("f")]
        public Int32 SysFlag { get; set; }

        /// <summary>Born timestamp, in milliseconds</summary>
        [XmlElement("g")]
        public Int64 BornTimestamp { get; set; }

        /// <summary>Flag</summary>
        [XmlElement("h")]
        public Int32 Flag { get; set; }

        /// <summary>Properties (Tags/Keys, etc.)</summary>
        [XmlElement("i")]
        public String Properties { get; set; }

        /// <summary>Reconsume count</summary>
        [XmlElement("j")]
        public Int32 ReconsumeTimes { get; set; }

        /// <summary>Unit mode</summary>
        [XmlElement("k")]
        public Boolean UnitMode { get; set; }
        #endregion

        #region Methods
        /// <summary>Get the property dictionary, keyed by each property's XmlElement name</summary>
        /// <returns></returns>
        public IDictionary<String, Object> GetProperties()
        {
            var dic = new Dictionary<String, Object>();
            foreach (var pi in GetType().GetProperties())
            {
                // Skip indexers and properties marked [XmlIgnore]
                if (pi.GetIndexParameters().Length > 0) continue;
                if (pi.GetCustomAttribute<XmlIgnoreAttribute>() != null) continue;

                var name = pi.Name;
                var att = pi.GetCustomAttribute<XmlElementAttribute>();
                if (att != null && !att.ElementName.IsNullOrEmpty()) name = att.ElementName;

                dic[name] = this.GetValue(pi);
            }
            return dic;
        }
        #endregion
    }
}
The air battle evens out today, with Larry losing ten Sallys and 3 Oscars offset by the loss of just one Hurricane. Still, the damage is rising at Rangoon, and if I don't get a respite soon it may become untenable for a/c. If that happens, I will have no choice but to run for it. CV Enterprise has sortied for the SoPac. Lexington is only about a week away. Saratoga is fast approaching the Bay Area.

I get the breather I wanted at Rangoon. Runway 43, Service 34. It's still a razor's edge here, but if I get a couple more days of respite it may be ok. IJA has landed at Surigao and suffers horrible disablements - the invasion bonus is indeed long gone. The 6th Marine RGT (with Brett Castlebury) has sailed for Hilo to unite the 2nd Marine Division. The 27th ID and 164th Inf RGT plus four BF and an Air HQ are loading at San Diego, eventually bound for Noumea. CV Enterprise is on her way back to the SoPac, as is CV Hornet. CV Saratoga arrived SF on the 2nd and will go to Alameda for upgrade. CV Lexington will be ready in nine days. CV Yorktown is three days out of San Diego for her refit. So in roughly three weeks I will have all five carriers upgraded and ready for action. Oh, and TBFs are now being produced, so one of the two CVs just arriving at WCUSA will likely take new TB back with her (perhaps both). With all this action, I hope to be in a position by 1 June to start minor offensive actions.

Larry opts for sweeps over Rangoon today and achieves a 2:1 loss ratio in his favor. There are now five IJA units at Pegu, so I pulled the a/c out. That weakens Rangoon a bit in AS strength as well. There are at least 22 IJA units approaching from the east. No sense in letting units die in place for no real gain. I sent a BF to Prome, which should help me get the damaged FS out of Rangoon when the time comes. There were five dud TT attacks today by USN fleet subs. The last one was the hardest. Still, nice to know where Sho is at the moment. I suspect this is a precursor to a Java invasion.
Again, another turn with lots of dud TT. My one working torpedo was fired by my old friend Trusty. One fewer I-boat to worry about. IJA does a BOMB attack at Pegu and has a base AS advantage of 1100 to 330. Here's hoping that supplies will need a few days to trickle in. I just need Pegu to hold for two days while I evacuate Rangoon. As bad as that AS advantage is, there are no fewer than 15 additional units inbound to Pegu. Larry brought the house to Burma, it appears. Come on Monsoon, don't fail me now.

Remember that even if they block the rail line east of Rangoon, units can rail to Prome. From there British units can strat-road move up to the next rail station, while the Commonwealth units will have to change to move mode and go on foot.

That is exactly what I am doing. I am railing three BF and the two BGDs that recombine into a full ID to Prome. From there they will move by road to Magwe, where they will hopefully be airlifted out to the Bengal region. All that is dependent on how fast the IJA pushes past Pegu and Rangoon proper. I am not sure. I am really surprised how many units Larry brought here considering all of the DEI west of Sumatra and south of Kendari is still mine. If he tarries much longer, I may reinforce Timor. The problem with that is that there are Netties at Kendari.

The first attack at Pegu comes off 1 to 1. I suffer 600 casualties to Larry's 400. A disproportionate number of my losses were destroyed squads. The BF has arrived at Prome. I now have a way to get any damaged a/c at Rangoon out. More dud TT attacks this turn. At least I can say my subs are in the right locales. I forgot to mention, Larry invaded/occupied Lae about a week ago. The rest of the PNG bases have been occupied since then. A slow turn.
The only action of note is another deliberate attack at Pegu, which again came off 1 to 1. Unfortunately I suffered 2300 casualties to Larry's 200. The base should fall next turn. One BGD is free of Rangoon. A second will depart next turn. Once those two units are safe up north (if that happens), I will begin the mad rush with everybody else. Also, Larry broke one unit off and moved it to Toungoo. I have a small BTN of troops there. If they get routed, all of Burma will collapse. CV Yorktown began her upgrade this turn. She is my last carrier to enter her April upgrade.

Pegu falls and the Burma Div is creamed. I have two BGDs and two BF at Prome. The RAF is pulled out. I left two sq of AVG in place as their withdrawal date is approaching. Prolly pull them as soon as the IJA arrives. Toungoo held against an ARM RGT (barely). Two more units are approaching. I fear this is about to turn into a rout. I have four sq of transports moved up to Chittagong. As soon as the units start arriving at Magwe, they will begin the evacuation.

This operation is the best organized Larry has pulled off thus far. He not only brought enough, he brought enough to overwhelm the defense. Kudos to my opponent on this one.

The last little bit of troops just don't want to unload at Ramree Island. I have only a BF there right now and want to get some infantry in there ASAP. ENG unloading at Exmouth. This base will begin the process of being fully developed as soon as everyone is ashore.

A bad day. Toungoo fell to a two-ARM-RGT assault. Larry put another armor unit between Rangoon and Prome, so no more STRAT move out of Rangoon. I ordered all a/c out of Burma. The roof's caving in and I don't want anybody getting caught under it when it finally collapses. I had sortied the last two RNN CLs to Balikpapan two turns ago. They only ran afoul of a few MTB and only hit one. The Netties found them this turn and they are both sunk.

A very quiet turn. The IJA has arrived at Chuhsien with five units.
I have a base AS of 975 behind level three forts, but supplies are very short. Larry continues to poke around China looking for a soft spot. Quiet in Burma. Something odd happened. The IJA had landed two RGT at Cebu. This is the only place where I still have supplies in the PI. Thinking the units might be highly disrupted, I ordered the Filipino ID to shock attack. Nothing happened. When I opened the turn file, the IJA was gone. I asked Larry if he had evacuated. I am concerned we may have hit some sort of game glitch. BTW, we are still using q6, I believe.

Quiet turn. The IJA units at Chuhsien have a base AS of only 275 vs mine of 975. However, I am very short on supplies. Larry verified that the units landed at Cebu were indeed evacuated (whew). Seems the disruption was so high he was afraid they might get destroyed. Very happy we hadn't hit a game glitch.

The retreat in Burma is proceeding. The 16th Indian Div is beginning to be lifted out this turn. VF-42 on USS Yorktown upgrades to F4F-4 - the last CV unit to make that jump. There are 12 TBF in the pool. Lexington is due out of upgrade in 2 days. Likely I will send her with TBD. Yorktown and Saratoga should sail with TBF. Conversely, I might send Hornet and/or Enterprise to Auckland and have them upgrade their VT first.

* IJN SCTF is in the Java Sea. Dutch AF bombs it to no avail
* IJA arrives at vacant Oosthaven. Likely to be occupied next turn, meaning the troops retreating from Benkoelen are now cut off
* IJA increases strength at Chuhsien but still trails the Chinese by greater than a 1:2 margin in base AS
* IJA has two INF RGT at Sabang slowly grinding the Dutch defenders down
* IJA paras remain at Padang
* Sub I-4 has been having a field day NE of Hiva Oa. My ASW forces are unable to deter it
* The 32nd ID is on its way to Oz
* 8th PG arrives Sydney next turn. More BFs are right behind the planes
* Exmouth goes to Level 1 Port. All troops now unloaded
* Engineers have arrived at Unmak Island.
For the third time, IJN carriers raid Colombo. I only lost a small number of AMc, ML, and HDML. The Japanese air groups are trashed by the RAF. Cursor intel is once again useless as to the composition of the CVTF. I watched the combat replay twice and only detected Akagi and Kaga air groups in the attack, but I suspect that is not very reliable. The RAF scrambled far more fighters in the second attack than the first. Again, the raid is heavily weighted with TB. The RN CVs are at Madras. I ordered them to sea to take up a position just SE of Ceylon. I suspect the IJN will withdraw, but if they don't, the RN may be able to penetrate the CAP. If there are indeed two IJN carriers, air losses indicate half of their fighters are gone, as well as the bulk of two TB squadrons. Assuming an extra TB squadron is aboard (based on the raid composition), I figure the IJN has at best 30 Zeroes, 18 Vals, and 25 Kates. Still a formidable force to be sure, but I am willing to sacrifice an RN carrier or two for a chance at crippling one or two IJN carriers in return.

Ok, so I chickened out. The cursor intel is likely wrong. Either the a/c numbers are overstated or there are three carriers vice two. I think the a/c numbers are the most likely error, but I don't want to risk the carrier count being wrong. Just doing the math tells me Akagi + Kaga combined can hold 162 airframes. If the numbers from last turn are correct, that means 108 a/c remain. If Larry overloaded the flight decks to the maximum amount possible without decreased flight ops, there could be an additional 16 a/c. Now that's all conjecture, but I don't think my three RN CVs could handle even the 108 a/c; ergo, I retreat. The only other action is that the IJA has arrived at Rangoon with five units. I suspect he will hold off on attacking until the remainder of the stack arrives.

One thing I did this turn was look over the map to scrounge up some fighters for Noumea.
Unfortunately, all I have is about seven VMF spread between PH and the WC. So either I weaken currently held positions or just settle for those units. The problem is they all need F4F-4 for upgrade, and I used all those up upgrading the VFs. It will take a while to build up the pool. I looked at the reinforcement queue, and there is not a lot of help coming in that department for another three or four months. I could weaken PH a bit to augment the FS; the 300-plus fighters I have there are probably overkill.

I am officially annoyed. You can see from the image that a convoy got jumped at Unmak Island. There is a CVE-based TF off to the south plus at least one additional SCTF. Larry landed at Amchitka Island, which he will take this coming turn. I never pay that much attention to NoPac this early in the game. All the ships in my TF were sunk. There was an odd collection of very short-ranged xAP plus some AM and PC. Fortunately, the troops they were transporting had all already unloaded. This is now going to force me to pay attention to this section of the map. This annoys me because Larry still hasn't landed on Java, Timor or any points in between. If this goes on much longer, I may move a large force to Timor that may make it almost impossible to take. I have the 32nd ID in Oz now and another ID inbound (40th). The 41st ID should be released in two more turns. If I move all that force into the DEI, it becomes very hard for Larry to dislodge it. If he doesn't move soon, he may regret it.

The IJN carriers withdraw from the NoPac. The AMC is still chasing convoys around but hasn't hit any. I have an INF RGT assigned to NoPac. I think I may use APDs to try to move it to Adak. This is assuming the Japanese Row Boat Corps doesn't take it first. Cursor intel says four units, 8k troops at Amchitka now. Next turn I will finally have enough PP to free up the 41st ID. It is at San Diego in strat mode, so it can be sent to Oz immediately. Three large BF and two AA units are unloading at Sydney.
They will be sent to Cairns to build that base up. The 1st Burma Div will move to Prome next turn. From there it will go to Magwe and then be lifted out. I saved it for two reasons: it is trashed, and it does not withdraw. The 16th Ind BGD is almost all at Chittagong now. The Gurkha BGD goes next. I now have two CVs in the SoPac with a third on the way. Saratoga and Yorktown are still about two weeks away from being out of the yard.

An IJN CVTF has appeared SW of the Andaman Islands. It is safe to assume they are chasing the RN CVTF. There are 14 hexes between the two forces. Even at full speed, they could at best make up 5 hexes. The RN will make port at Madras next turn. There are plenty of LBA fighters there to protect them. The IJA attempted another deliberate attack at Rangoon, which came off at 1 to 2 and cost the IJA 1800 casualties against 1500 for the Allies. More importantly, an additional 41 IJA engineer squads were destroyed. The IJA armored spearhead is now just one hex away from Meiktila (SP?). The rest of the units, including almost all the BF, are getting clear. A RNN sub attacked a TF just north of the Makassar Straits. Could this be the Java invasion? I included as much of the theater as I could in the image to give my readers an idea how little Larry has conquered thus far.

I was also perusing the intel reports and got some nuggets. First, BB Hyuga and CA Kinugasa are both listed as sunk now for over two months. Second, sad to say, the AVG version of Greg Boyington is KIA (died fighting over Rangoon with two kills). That's okay, because he is coming back in the USMC version - odd, that sounds vaguely familiar.

The above-mentioned TF passes over no fewer than three Allied subs with no attacks and is now in the Java Sea between Bandjermasin and Soerabaja. I suspect it will hit Bandjermasin next turn. The IJA tries a deliberate attack at Chuhsien, which comes off 1:2 and costs the Japanese 1900 casualties offset by 1200 Allied.
If I had some supply in this hex, it would probably be a stalemate with present forces. Without supply, the Chinese defense will likely crumble soon. The I-boat off Auckland again sinks two xAKs. An I-boat and a possible SSX at Sydney were harassed by Allied ASW. The I-boat previously seen off Perth is gone - that boat may have been roughed up by LBA ASW.

Busy turn. The IJN CVTF is spotted ENE of Java. My PT boats intercepted a TF just south of Bandjermasin. Not much damage to the transports, but the TF retreated from combat towards Balikpapan. A Falcon FB group put one 50 kg bomb into an AK as well (the Dutch finally hit something, but it's with a lousy 50 kg bomb!!!). There is a third TF west of Soerabaja; I suspect this is a SCTF. NE of Oosthaven, the handwriting is on the wall for my isolated stack. A Deliberate Attack causes 300 casualties. At Rangoon, another Deliberate Attack: forts are reduced, but another "ouch" moment for IJA engineers. The unidentified TF is a MSW TF which arrives at Semerang this turn. It tangles with a RNN PT boat TF, which only results in me losing two PTs. Dozens of Dutch Martin bombers sortie and miss. The Falcon squadron at Soerabaja gets two more 50 kg hits. At least one Japanese ship sinks this turn. An IJN SCTF based around two BBs bombards Bandjermasin, and the net result is the BF there is DESTROYED!!! That is a first for me, where a naval bombardment eliminates a unit.

Elsewhere on the map, a TF leaves Sydney to pick up two RGTs of the Americal Division and a tank BTN. The last RGT plus 27th ID is about a week out of Auckland. I should have three CVs to escort this group into Noumea. Now that I am not saving PP to free up the 41st ID, I am able to free two CBs and a BF this turn. The CBs go south while the BF will eventually head to Umnak Island. I will use the two remaining CVs that are at the WC to escort the BF and some infantry up to the NoPac. I set my first offensive objective today, as 2nd Marine Div (with CPT Brett Castlebury) is set to Rennell Island as its new target.
Before that, I plan to occupy and build up the New Hebrides and likely Ndeni. The IJA comes ashore at Semerang. Lots of disablements on the combat report. Two additional TFs with transports are approaching; no idea what the final force will look like as of yet. There were nearly 50 sorties by Martin bombers today for no hits. PG Isabel died an honorable death trying to interfere. USS Salmon found a working torpedo and hit a big tanker off Balikpapan; it likely will make port safely. Exmouth is approaching a Level 2 port, which should allow me to operate subs there effectively. The Gurkha BGD is nearly lifted out of Magwe. The evacuation process went nearly flawlessly. An IJA ARM unit has arrived at Prome, cutting the Prome-Magwe road. All other units will have to fight their way out. All the BFs have made it to the safety of NE Burma, so I should not lose any of those precious units.

The Dutch AF impales itself against the IJN CVTF. One attack had 30 Martin bombers go in for no hits! No attacks at all against the invasion force. Interestingly, the CVTF is made up of four CVLs plus one or two CVs. Where are the rest of the IJN CVs? Semerang falls. My PT boats did put a TT into an IJN CL, but that is all. My Dutch stack NE of Oosthaven retreated (????). Huh? The Japanese hold all the bases in southern Sumatra. How did these guys find a retreat path? Must be hexside control. If that weren't bad enough, Rangoon falls under a Shock Attack and Prome falls to the Imperial Guards. This effectively traps the 1st Burma Div and 4 Ind BGDs that were holding Rangoon. I will try to bushwhack them out but am not keeping my hopes up. The Gurkha BGD has been lifted out except for one ENG vehicle and eight 18 pdr guns. The fragment will try to get out across country.

The turn starts with SS Shark attempting to hit a DD in a REPL TF. The TTs fail to detonate and Shark suffers moderate damage. I-121 was roughed up by a RN DD off Diamond Harbor. One hit was penetrating and gave an internal explosion critical message.
I suspect the sub is a goner. The Dutch AF continues to try to interfere with the landings on Java. They didn't go after the CVTF this turn, instead focusing on a SCTF with two BBs in it. No hits, lots of damaged Dutch a/c. I think the total sorties off Java were approaching 70 attack aircraft. I got four 50 kg hits on the Japanese LSD, which has heavy fires. Two xAPs also got hit. A RNN sub put four TTs into a large xAP at Semerang; I believe this ship sank. Djakarta was occupied. Ships are moving into position to move a stout force to Noumea. CV Saratoga will leave the yard in five days. She will be paired with Yorktown to escort reinforcements to the Dutch Harbor/Umnak area.
# Translation of Odoo Server. # This file contains the translation of the following modules: # * website_twitter # # Translators: # Pedro Filipe <pedro2.10@hotmail.com>, 2019 # Martin Trigaux, 2019 # Pedro Castro Silva <pedrocs@exo.pt>, 2019 # Diogo Fonseca <dsf@thinkopensolutions.pt>, 2019 # Manuela Silva <manuelarodsilva@gmail.com>, 2020 # msgid "" msgstr "" "Project-Id-Version: Odoo Server saas~12.4\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2019-08-12 11:33+0000\n" "PO-Revision-Date: 2019-08-26 09:16+0000\n" "Last-Translator: Manuela Silva <manuelarodsilva@gmail.com>, 2020\n" "Language-Team: Portuguese (https://www.transifex.com/odoo/teams/41243/pt/)\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: \n" "Language: pt\n" "Plural-Forms: nplurals=2; plural=(n != 1);\n" #. module: website_twitter #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "" "<i class=\"fa fa-arrow-right\"/>\n" " Show me how to obtain the Twitter API key and Twitter API secret" msgstr "" #. module: website_twitter #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "" "<span class=\"o_form_label\">Twitter Roller</span>\n" " <span class=\"fa fa-lg fa-globe\" title=\"Values set here are website-specific.\" groups=\"website.group_multi_website\"/>" msgstr "" #. module: website_twitter #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "<strong>Callback URL: </strong>Leave blank" msgstr "" #. module: website_twitter #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "<strong>Description: </strong> Odoo Twitter Integration" msgstr "" #. module: website_twitter #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "<strong>Name: </strong> Odoo Twitter Integration" msgstr "" #. 
module: website_twitter #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "<strong>Website: </strong>" msgstr "<strong>Site da Web: </strong>" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_res_config_settings__twitter_api_key #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "API Key" msgstr "Chave API" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_res_config_settings__twitter_api_secret #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "API secret" msgstr "" #. module: website_twitter #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "" "Accept terms of use and click on the Create your Twitter application button " "at the bottom" msgstr "" #. module: website_twitter #: code:addons/website_twitter/models/res_config_settings.py:16 #, python-format msgid "" "Authentication credentials were missing or incorrect. Maybe screen name " "tweets are protected." msgstr "" #. module: website_twitter #: model:ir.model,name:website_twitter.model_res_config_settings msgid "Config Settings" msgstr "Definições de Configuração" #. module: website_twitter #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "" "Copy/Paste Consumer Key (API Key) and Consumer Secret (API Secret) keys " "below." msgstr "" #. module: website_twitter #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "Create a new Twitter application on" msgstr "Crie uma nova aplicação do Twitter em" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_website_twitter_tweet__create_uid msgid "Created by" msgstr "Criado por" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_website_twitter_tweet__create_date msgid "Created on" msgstr "Criado em" #.
module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_website_twitter_tweet__display_name msgid "Display Name" msgstr "Nome a Exibir" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_res_config_settings__twitter_screen_name #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "Favorites From" msgstr "" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_website__twitter_screen_name msgid "Get favorites from this screen name" msgstr "Obter favoritos a partir deste nome de tela" #. module: website_twitter #: code:addons/website_twitter/models/res_config_settings.py:48 #, python-format msgid "HTTP Error: Something is misconfigured" msgstr "" #. module: website_twitter #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "How to configure the Twitter API access" msgstr "Como configurar o acesso à API do Twitter" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_website_twitter_tweet__id msgid "ID" msgstr "ID" #. module: website_twitter #: code:addons/website_twitter/models/res_config_settings.py:59 #, python-format msgid "Internet connection refused" msgstr "Ligação da Internet recusada" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_website_twitter_tweet____last_update msgid "Last Modified on" msgstr "Última Modificação em" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_website_twitter_tweet__write_uid msgid "Last Updated by" msgstr "Última Atualização por" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_website_twitter_tweet__write_date msgid "Last Updated on" msgstr "Última Atualização em" #. 
module: website_twitter #: code:addons/website_twitter/models/res_config_settings.py:61 #: code:addons/website_twitter/models/res_config_settings.py:62 #, python-format msgid "Please double-check your Twitter API Key and Secret!" msgstr "" #. module: website_twitter #: code:addons/website_twitter/controllers/main.py:27 #, python-format msgid "" "Please set a Twitter screen name to load favorites from, in the Website " "Settings (it does not have to be yours)" msgstr "" "Por favor, defina um nome de ecrã do Twitter para carregar os favoritos, nas" " «Definições» do site da Web (este não tem que ser o seu)" #. module: website_twitter #: code:addons/website_twitter/controllers/main.py:23 #, python-format msgid "Please set the Twitter API Key and Secret in the Website Settings." msgstr "" "Por favor, defina a «Chave» de API do Twitter e o «Segredo» nas definições " "do ''site'' da Web." #. module: website_twitter #. openerp-web #: code:addons/website_twitter/static/src/js/website.twitter.editor.js:21 #, python-format msgid "Reload" msgstr "Recarregar" #. module: website_twitter #: code:addons/website_twitter/models/res_config_settings.py:18 #, python-format msgid "" "Request cannot be served due to the applications rate limit having been " "exhausted for the resource." msgstr "" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_website_twitter_tweet__screen_name msgid "Screen Name" msgstr "Nome de Ecrã" #. module: website_twitter #: model:ir.model.fields,help:website_twitter.field_res_config_settings__twitter_screen_name msgid "" "Screen Name of the Twitter Account from which you want to load favorites.It " "does not have to match the API Key/Secret." msgstr "" "Nome de Ecrã da Conta do Twitter a partir da qual deseja carregar os " "favoritos. Este não tem de corresponder com a Chave/Segredo da API." #.
module: website_twitter #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "Switch to the API Keys tab: <br/>" msgstr "" #. module: website_twitter #: code:addons/website_twitter/models/res_config_settings.py:21 #, python-format msgid "" "The Twitter servers are up, but overloaded with requests. Try again later." msgstr "" #. module: website_twitter #: code:addons/website_twitter/models/res_config_settings.py:22 #, python-format msgid "" "The Twitter servers are up, but the request could not be serviced due to " "some failure within our stack. Try again later." msgstr "" #. module: website_twitter #: code:addons/website_twitter/models/res_config_settings.py:17 #, python-format msgid "" "The request is understood, but it has been refused or access is not allowed." " Please check your Twitter API Key and Secret." msgstr "" #. module: website_twitter #: code:addons/website_twitter/models/res_config_settings.py:15 #, python-format msgid "" "The request was invalid or cannot be otherwise served. Requests without " "authentication are considered invalid and will yield this response." msgstr "" #. module: website_twitter #: code:addons/website_twitter/models/res_config_settings.py:14 #, python-format msgid "There was no new data to return." msgstr "" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_website_twitter_tweet__tweet_id msgid "Tweet ID" msgstr "Tweet ID" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_website_twitter_tweet__tweet msgid "Tweets" msgstr "Tweets" #. module: website_twitter #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "Twitter API Credentials" msgstr "" #. module: website_twitter #: model:ir.model.fields,help:website_twitter.field_website__twitter_api_key msgid "Twitter API Key" msgstr "Chave API Twitter" #. 
module: website_twitter #: model:ir.model.fields,help:website_twitter.field_website__twitter_api_secret msgid "Twitter API Secret" msgstr "Segredo API Twitter" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_website__twitter_api_key msgid "Twitter API key" msgstr "Chave API Twitter" #. module: website_twitter #: model:ir.model.fields,help:website_twitter.field_res_config_settings__twitter_api_key msgid "Twitter API key you can get it from https://apps.twitter.com/" msgstr "Pode obter a chave API do Twitter em https://apps.twitter.com/" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_website__twitter_api_secret msgid "Twitter API secret" msgstr "Twitter API segredo" #. module: website_twitter #: model:ir.model.fields,help:website_twitter.field_res_config_settings__twitter_api_secret msgid "Twitter API secret you can get it from https://apps.twitter.com/" msgstr "Pode obter o segredo API do Twitter em https://apps.twitter.com/" #. module: website_twitter #. openerp-web #: code:addons/website_twitter/static/src/xml/website.twitter.xml:40 #, python-format msgid "Twitter Configuration" msgstr "Configuração Twitter" #. module: website_twitter #: code:addons/website_twitter/models/res_config_settings.py:62 #, python-format msgid "Twitter authorization error!" msgstr "Erro Autorização do Twitter!" #. module: website_twitter #: code:addons/website_twitter/models/res_config_settings.py:20 #, python-format msgid "Twitter is down or being upgraded." msgstr "" #. module: website_twitter #: code:addons/website_twitter/models/res_config_settings.py:19 #, python-format msgid "" "Twitter seems broken. Please retry later. You may consider posting an issue " "on Twitter forums to get help." msgstr "" #. module: website_twitter #: model:ir.model.fields,field_description:website_twitter.field_res_config_settings__twitter_server_uri msgid "Twitter server uri" msgstr "" #. 
module: website_twitter #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "Twitter tutorial" msgstr "" #. module: website_twitter #: code:addons/website_twitter/controllers/main.py:37 #, python-format msgid "" "Twitter user @%(username)s has less than 12 favorite tweets. Please add more" " or choose a different screen name." msgstr "" "O utilizador do Twitter @%(username)s tem menos de 12 tweets favoritos. Por " "favor, adicione mais ou escolha um nome de ecrã diferente." #. module: website_twitter #. openerp-web #: code:addons/website_twitter/static/src/xml/website.twitter.xml:6 #, python-format msgid "Twitter's user" msgstr "" #. module: website_twitter #: model:ir.actions.server,name:website_twitter.ir_cron_twitter_actions_ir_actions_server #: model:ir.cron,cron_name:website_twitter.ir_cron_twitter_actions #: model:ir.cron,name:website_twitter.ir_cron_twitter_actions msgid "Twitter: Fetch new favorites" msgstr "Twitter: Obter novos favoritos" #. module: website_twitter #: code:addons/website_twitter/models/res_config_settings.py:58 #: code:addons/website_twitter/models/res_config_settings.py:59 #, python-format msgid "We failed to reach a twitter server." msgstr "" #. module: website_twitter #: model:ir.model,name:website_twitter.model_website #: model:ir.model.fields,field_description:website_twitter.field_website_twitter_tweet__website_id msgid "Website" msgstr "Website" #. module: website_twitter #: model:ir.model,name:website_twitter.model_website_twitter_tweet msgid "Website Twitter" msgstr "" #. module: website_twitter #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "https://apps.twitter.com/app/new" msgstr "https://apps.twitter.com/app/new" #. module: website_twitter #: model_terms:ir.ui.view,arch_db:website_twitter.res_config_settings_view_form msgid "with the following values:" msgstr "com os seguintes valores:"
Q: Why is my canvas not clearing? This is my first year in programming and I'm having problems clearing my gEnemies canvas. $(document).ready(function() { initStars(600); }); var FPS = 60; width = 300; height = 400; var gBackground = document.getElementById("canvas_background").getContext("2d"); var gPlayer = document.getElementById("canvas_player").getContext("2d"); var gEnemies = document.getElementById("canvas_enemies").getContext("2d"); var GUI = document.getElementById("canvas_ui").getContext("2d"); var bullets = []; var enemies = []; var shootTimer = 0; var maxShootTimer = 15; var Key = { up: false, down: false, left: false, right: false }; var player = { width: 16, height: 16, x: (width / 2) - 8, speed: 3, y: height - 20, canShoot: true, render: function() { gPlayer.fillStyle="aqua"; gPlayer.fillRect(this.x,this.y,this.width,this.height); }, tick: function() { if(Key.left && this.x > 0) this.x -= this.speed; if(Key.right && this.x < width - 20) this.x += this.speed; if(Key.space && this.canShoot) { this.canShoot = false; bullets.push(new Bullet(this.x,this.y - 4)); bullets.push(new Bullet(this.x + this.width,this.y - 4)); shootTimer = maxShootTimer; } } }; stars = []; addEventListener("keydown", function(e) { var keyCode = (e.keyCode) ? e.keyCode : e.which; switch(keyCode) { case 38: // up Key.up = true; break; case 40: // down Key.down = true; break; case 37: // left Key.left = true; break; case 39: // right Key.right = true; break; case 32: //spacebar Key.space = true; break; } }, false); addEventListener("keyup", function(e) { var keyCode = (e.keyCode) ? 
e.keyCode : e.which; switch(keyCode) { case 38: // up Key.up = false; break; case 40: // down Key.down = false; break; case 37: // left Key.left = false; break; case 39: // right Key.right = false; break; case 32: //spacebar Key.space = false; break; } }, false); function collision(obj1,obj2) { return ( obj1.x < obj2.x+obj2.width && obj1.x + obj1.width > obj2.x && obj1.y < obj2.y+obj2.height && obj1.y + obj1.height > obj2.y ); } function Star(x,y) { this.x = x; this.y = y; this.size = Math.random() * 2.5; this.render = function() { gBackground.fillStyle = "white"; gBackground.fillRect(this.x,this.y,this.size,this.size) }; this.tick = function() { this.y++; } } function createStars(amount) { for(i=0;i<amount;i ++) { stars.push(new Star(Math.random() * width, -5)); } } function initStars(amount) { for(i=0;i<amount;i++) { stars.push(new Star(Math.random()*width,Math.random()*height)); } } function Bullet(x,y) { this.x = x; this.y = y; this.width = 2; this.height = 12; this.speed = 5; this.render = function() { gPlayer.fillStyle = "red"; gPlayer.fillRect(this.x,this.y,this.width,this.height); }; this.tick = function() { if(this.y < -this.height) { var index = bullets.indexOf(this); bullets.splice(index,1); } this.y-=this.speed; }; } function Enemy(x,y) { this.x = x; this.y = y; this.width = 16; this.height = 16; this.speed = 0.5; this.render = function() { gEnemies.fillStyle = "red"; gEnemies.fillRect(this.x,this.y,this.width,this.height); }; this.tick = function() { this.y += this.speed; } } function render() { gBackground.clearRect(0,0,width,height); gPlayer.clearRect(0,0,width,height); gEnemies.clearRect(0,0,this.width,this.height); player.render(); for(x=0;x<8;x++) { for(y=0;y<8;y++) { enemies.push(new Enemy(x,y)); } } for(i in enemies) enemies[i].render(); for(i in stars) { stars[i].render(); } for(i in bullets) bullets[i].render(); } function tick() { createStars(1); player.tick(); for(i in enemies) enemies[i].tick(); for(i in stars) stars[i].tick(); for(i in 
bullets) bullets[i].tick(); if(shootTimer <= 0) player.canShoot = true; shootTimer--; } setInterval(function() { render(); tick(); }, 1000/FPS ); canvas { position: absolute; top: 0; left: 0; } #canvas_background { background: black; } <!DOCTYPE html> <html> <head> <title> Game </title> </head> <body> <canvas id='canvas_background' width='300' height='400'></canvas> <canvas id='canvas_player' width='300' height='400'></canvas> <canvas id='canvas_enemies' width='300' height='400'></canvas> <canvas id='canvas_ui' width='300' height='400'></canvas> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.2/jquery.min.js"></script> </body> </html> Everything is working properly. I can move the square and shoot bullets. However, the enemies canvas is not clearing properly. To be clear, this is how I'm trying to clear the Enemies Canvas: gEnemies.clearRect(0,0,width,height); Why is the Enemies Canvas not clearing? A: It is clearing. The problem is you keep creating 64 new enemies every time you render: for(x=0;x<8;x++) { for(y=0;y<8;y++) { enemies.push(new Enemy(x,y)); } } Add this line to your render function and you'll see what I mean: console.log('enemies='+enemies.length);
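A minimal, framework-free sketch of the fix (my own illustration, not the asker's code verbatim: the `Enemy` fields are trimmed to position and size, the 20-pixel grid spacing is an assumed layout, and the canvas calls are stubbed so the spawning logic stands alone): create the 8x8 grid once at startup instead of inside `render`, so the cleared canvas isn't immediately repopulated with 64 fresh enemies every frame.

```javascript
// Enemies are created ONCE, not on every frame.
var enemies = [];

function Enemy(x, y) {
  this.x = x;
  this.y = y;
  this.width = 16;
  this.height = 16;
}

// Call this once at startup (e.g. next to initStars(600)).
function initEnemies() {
  for (var gx = 0; gx < 8; gx++) {
    for (var gy = 0; gy < 8; gy++) {
      enemies.push(new Enemy(gx * 20, gy * 20)); // 20 px spacing is assumed
    }
  }
}

// render() now only clears and redraws the existing enemies;
// the canvas context is optional here so the logic runs headless.
function render(ctx) {
  if (ctx) ctx.clearRect(0, 0, 300, 400);
  for (var i = 0; i < enemies.length; i++) {
    // ctx.fillRect(enemies[i].x, enemies[i].y, enemies[i].width, enemies[i].height);
  }
}

initEnemies();
```

Incidentally, the original `gEnemies.clearRect(0,0,this.width,this.height)` only works by accident: `width` and `height` were assigned without `var` and so became globals on `window`, which is what `this` points at there. Using plain `width`/`height`, as in the question's last snippet, is the safer form.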
--- abstract: 'We investigate a dynamical mechanism to generate fermion mass in the Randall-Sundrum background. We consider four-fermion interaction models where the fermion field propagates in an extra-dimension, i.e. the bulk four-fermion interaction model. It is assumed that two types of fermions with opposite parity exist in the bulk. We show that electroweak-scale mass is dynamically generated for a specific fermion anti-fermion condensation, even if all the scale parameters in the Lagrangian are set to the Planck scale.' title: ' Dynamical origin of low-mass fermions in Randall-Sundrum background' --- Kenji Fukazawa$^{\small 1}$, Tomohiro Inagaki$^{\small 2}$, Yasuhiko Katsuki$^{\small 3}$, Taizo Muta$^{\small 4}$ and Kensaku Ohkura$^{\small 5}$\ Introduction ============ The Standard Model (SM) provides a remarkably successful description of known phenomena. On the other hand the SM has an unsatisfactory feature which is called the hierarchy problem: the disparity between the Planck scale and the electroweak scale. One of the solutions to the hierarchy problem is found in higher-dimensional theories [@arkani]. In that scenario the SM scale can be obtained from the ratio between the Planck scale and the size of the extra dimension. Randall and Sundrum proposed an alternative approach to solve the hierarchy problem in a five-dimensional curved spacetime [@randall]. They considered the five-dimensional anti-de Sitter spacetime compactified on an orbifold, $S^1/Z_{2}$, and two 3-branes existing at the orbifold fixed points. The 3-branes are four-dimensional subspaces embedded in the five-dimensional spacetime. It was shown that the spacetime metric satisfies the Einstein equation and the electroweak scale $O$(TeV) is derived from the Planck scale $M_{P}$ without introducing any very large parameters. In the beginning it was considered that the SM particles are confined to the 3-brane.
But there is a possibility that the SM particles also propagate in the extra dimension, i.e. the bulk SM. Goldberger and Wise pointed out that the bulk scalar fields have modes whose mass terms are exponentially suppressed on a brane, just like the brane particles [@goldberger]. We can identify the lightest modes of the bulk fields with the SM particles. There are a lot of possibilities for putting the SM particles in the bulk [@chang; @hewett]. In the Randall-Sundrum background some mechanisms are proposed to generate an extremely light fermion mass like a neutrino [@grossman]. It is the standard scenario that an elementary Higgs boson induces particle masses through the Higgs mechanism. Dynamical mass generation, for example top quark condensation [@miransky] in extra dimensions [@dobrescu], is an appealing alternative scenario. It has been known that the spacetime curvature plays an important role in dynamical symmetry breaking [@ina]. A four-fermion interaction model is studied in the RS background as a prototype model of dynamical symmetry breaking [@abe]. A dynamical origin for the low-mass fermion is found by assuming the existence of a strong interaction between the bulk fermion and the brane fermion. Gauge theories are also considered in the RS background by using the Schwinger-Dyson equation [@inagaki]. It is pointed out that the strong interaction naturally appears in the gauge theory. It is found that the SM scale is obtained on the brane from the bulk QCD coupled with the brane fermion. The dynamical origin of the low-mass fermion is discussed in some models with brane fermions. In the present paper we study the possibility of generating the low fermion mass dynamically, starting from a theory with only the bulk fermion. Assuming a four-fermion-like interaction between the bulk fermions, we evaluate the phase structure and the natural mass scale of the model. This paper is organized as follows: In section 2 we explain our setup and model.
In section 3 we analyze the effective potential and fermion mass spectra. In section 4 we make a comment on the solution for the hierarchy problem in our model. In section 5 we give the summary and discussions. Bulk Four-Fermion Model ======================= As is known, chiral symmetry prohibits a Dirac mass term in four-dimensional spacetime. The Dirac mass term is generated through breaking of the chiral symmetry. Especially in a four-fermion model a composite operator of fermion and anti-fermion, $\bar{\psi}\psi$, may develop a non-vanishing vacuum expectation value and the chiral symmetry is broken dynamically [@Nam]. Here we study a bulk four-fermion model in the RS spacetime. The RS spacetime is the five-dimensional spacetime which is compactified on an orbifold $S^{1}/Z_{2}$ of radius $r_c$. The metric of the RS spacetime is described by $$G_{AB} = diag (e^{-2 k|y|} \eta_{\mu \nu}, -1 ),$$ where $y$ is the coordinate of the extra-dimensional direction. There is no chiral symmetry in the RS spacetime. To see this we consider a free bulk fermion theory, $$S_{F}= \int d^4x dy \sqrt{G} \left[ \bar{\psi} i\Gamma^{\bar{A}}e_{\bar{A}}^{A} (\partial_{A}+ \frac{1}{8} \omega_{A}^{\bar{B}\bar{C}}[\Gamma_{\bar{B}},\Gamma_{\bar{C}}] ) \psi \right], \label{action1}$$\ where $e_{\bar{A}}^{A}$ is the inverse of the vierbein and $\omega_{A}^{\bar{B}\bar{C}}$ is the spin connection. We denote the five-dimensional Dirac $\gamma$ matrix by $\Gamma_{\bar{A}}=(\gamma_{\mu},i \gamma_5)$. The action is invariant under $y \rightarrow y\pm 2\pi r_c$, and therefore we can restrict $y$ to $-\pi r_c < y \le \pi r_c$. The action is also invariant under $y \leftrightarrow -y$. The fermion field, $\psi$, is even or odd under the five-dimensional parity transformation, $$\begin{aligned} \left\{ \begin{array}{l} \psi(y)\rightarrow \gamma_{5} \psi(-y) = \psi(y) ; even, \\ \psi(y)\rightarrow \gamma_{5} \psi(-y) = - \psi(y) ; odd.
\\ \end{array} \label{parityap} \right.\end{aligned}$$ For even-parity fermion five-dimensional spinor fields can be expanded in terms of Kaluza-Klein(K-K) modes, $$\begin{aligned} \psi(x,y)&=& \psi_{R}(x,y)+ \psi_{L}(x,y) \nonumber \\ &=& \sum_{n=0}^{\infty} \psi_{R}^{(n)}(x)g_{R}^{(n)}(y) +\psi_{L}^{(n)}(x) g_{L}^{(n)}(y). \label{kkap}\end{aligned}$$ The parity transformation (\[parityap\]) for even case gives $$\begin{aligned} \left\{ \begin{array}{l} g_{R}^{(n)}(y) = g_{R}^{(n)}(-y), \\ g_{L}^{(n)}(y) = -g_{L}^{(n)}(-y). \\ \end{array} \right. \label{kkbcap}\end{aligned}$$ From the periodicity on y and the latter equation of (\[kkbcap\]) we can easily see $g_{L}^{(n)}(0) = g_{L}^{(n)}(\pi r_c) = 0$. For the bases which diagonalize the Lagrangian in terms of the K-K modes, the action (\[action1\]) reads $$S_{F}= \int d^4x \bar{\psi}_{R}^{(0)} i \partial_{\mu} \gamma^{\mu} \psi_{R}^{(0)} + \sum_{n=1}^{\infty} \bar{\psi}^{(n)} (i \partial_{\mu} \gamma^{\mu}-m_n) \psi^{(n)} ,$$ where $\psi^{(n)}=\psi_{R}^{(n)}+\psi_{L}^{(n)}$. The boundary condition $g_{L}^{(n)}(\pi r_{c})=0$ yields $ m_n= k \pi n / (e^{k \pi r_{c}}-1)$ [@chang]. The properly normalized mode functions are given as follows: $$\begin{aligned} \left\{ \begin{array}{l} g_{R}^{(0)}= \sqrt{\frac{k}{1-e^{-k \pi r_c}}} e^{-\frac{1}{2}k \pi r_c} e^{\frac{1}{2} k|y|}, \ \ g_{L}^{(0)} = 0, \\ g_{R}^{(n)}= \sqrt{\frac{2k}{1-e^{-k \pi r_c}}} e^{-\frac{1}{2}k \pi r_c} e^{\frac{1}{2} k|y|} \ (n \geq 1), \\ g_{L}^{(n)}= \sqrt{\frac{2k}{1-e^{-k \pi r_c}}} e^{-\frac{1}{2}k \pi r_c} e^{\frac{1}{2} k|y|} \ (n \geq 1). \\ \end{array} \right. \label{rskkap}\end{aligned}$$ Since $g_{L}^{(0)}=0$, only the right handed zero mode survives and other modes are vector-like. 
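As a rough numerical illustration (our own estimate, using the K-K mass formula above): taking the bulk scale at the Planck scale, $k \sim M_{P} \sim 10^{19}$ GeV, and a modest radius, $k r_c \simeq 12$ so that $k \pi r_c \simeq 37$, one finds $$m_1 = \frac{k \pi}{e^{k \pi r_{c}}-1} \simeq \pi k \, e^{-k \pi r_c} \sim 10^{19}\ {\rm GeV} \times 10^{-16} \sim O({\rm TeV}),$$ i.e. the first K-K excitation lands near the electroweak scale without any large input parameters.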
When the five-dimensional parity is odd, the spinor field is expanded as $$\begin{aligned} \psi(x,y)&=& \psi_{R}(x,y)+ \psi_{L}(x,y) \nonumber \\ &=& \sum_{n=0}^{\infty} \psi_{R}^{(n)}(x)g_{L}^{(n)}(y) +\psi_{L}^{(n)}(x) g_{R}^{(n)}(y), \nonumber\end{aligned}$$ where the mode functions $g_R$ and $g_L$ are defined in (\[rskkap\]). For both fermions only one of the zero modes survives. It is always massless because there are no chiral partners in the induced four-dimensional spacetime. We would like to pursue the scenario that chiral symmetry breaking generates the mass of fermions. To realize the chiral symmetry in four-dimensional spacetime it is necessary to introduce five-dimensional parity-even fermions as chiral partners in addition to the parity-odd fermions. It is possible that the right and left handed zero modes form the mass term via the four-fermion interaction [@chang]. The induced four-dimensional Lagrangian of the bulk four-fermion model is given by $${\cal L} = \int dy \sqrt{G} [\bar{\psi}_{1} i \partial_A \Gamma^{A} \psi_{1} + \bar{\psi}_{2} i \partial_A \Gamma^{A} \psi_{2} - \hat{\lambda} (\bar{\psi}_1 \psi_2)(\bar{\psi}_2 \psi_{1})], \label{lagrangian1}$$ where $\psi_1$ and $\psi_2$ are parity-even and parity-odd fermions, respectively. Because the natural scale of the bulk is the Planck scale, $M_p$, it is natural to take $\hat{\lambda}$ as $O(1/M_p^3)$. This Lagrangian is invariant under a discrete chiral transformation, $$\begin{aligned} \left\{ \begin{array}{lll} \psi_{1}(x,y) &=& \gamma_5 \psi_1(x,-y), \\ \psi_{2}(x,y) &=& - \gamma_5 \psi_2(x,-y), \\ \end{array} \right.\end{aligned}$$ which prohibits the mass term $m \bar{\psi_1} \psi_2$.
Introducing the auxiliary field, $\sigma \sim \bar{\psi}_1 \psi_2$, we rewrite the Lagrangian; $$\begin{aligned} {\cal L} = \int dy \sqrt{G} \left[ \left( \begin{array}{cc} \bar{\psi}_1 & \bar{\psi}_2 \\ \end{array} \right) \left[ \begin{array}{cc} i \partial_A \Gamma^A & -\sigma^{*} \\ \sigma & i \partial_A \Gamma^A \\ \end{array} \right] \left( \begin{array}{c} \psi_1 \\ \psi_2 \\ \end{array} \right) -\frac{| \sigma |^2}{\hat{\lambda}} \right]. \label{lagrangian2}\end{aligned}$$ After applying the K-K mode expansion the Lagrangian reads $$\begin{aligned} {\cal L} &=& \bar{\psi}_{1R}^{(0)} i \gamma^{\mu} \partial_{\mu} \psi_{1R}^{(0)} +\bar{\psi}_{2L}^{(0)} i \gamma^{\mu} \partial_{\mu} \psi_{2L}^{(0)} \nonumber \\ \nonumber \\ &+& \hspace{-0.5cm} \sum_{1 \leq n} \left[\bar{\psi}_{1R}^{(n)} i \gamma^{\mu} \partial_{\mu} \psi_{1R}^{(n)} +\bar{\psi}_{1L}^{(n)} i \gamma^{\mu} \partial_{\mu} \psi_{1L}^{(n)} +\bar{\psi}_{2R}^{(n)} i \gamma^{\mu} \partial_{\mu} \psi_{2R}^{(n)} +\bar{\psi}_{2L}^{(n)} i \gamma^{\mu} \partial_{\mu} \psi_{2L}^{(n)}\right] \nonumber \\ &+& \hspace{-0.5cm} \sum_{0 \leq m,n} \left( \begin{array}{cccc} \bar{\psi}_{1R}^{(m)} & \bar{\psi}_{2R}^{(m)} & \bar{\psi}_{1L}^{(m)} & \bar{\psi}_{2L}^{(m)} \\ \end{array} \right) {\Large M} \left( \begin{array}{c} \psi_{1R}^{(n)} \\ \psi_{2R}^{(n)} \\ \psi_{1L}^{(n)} \\ \psi_{2L}^{(n)} \\ \end{array} \right) - \int dy \sqrt{G} \left[\frac{|\sigma|^2}{\hat{\lambda}}\right] , \label{lagrangian3}\end{aligned}$$ where $M$ is the fermion mass matrix which is given by\ $$\begin{aligned} {\Large M} &=& \left( \begin{array}{cc} 0 & M_1 \\ M_2 & 0 \\ \end{array} \right): \nonumber \\ M_1 &=& \left( \begin{array}{cc} \int\! dy \sqrt{G}[ g_{R}^{(m) *} \partial_{y} g_{L}^{(n)}] & - \int\! dy \sqrt{G}[ g_{R}^{(m) *} \sigma^{*} g_{R}^{(n)}] \\ - \int\! dy \sqrt{G}[ g_{L}^{(m) *} \sigma g_{L}^{(n)}] & \int\! 
dy \sqrt{G}[ g_{L}^{(m) *} \partial_{y} g_{R}^{(n)}] \\ \end{array} \right), \\ M_2 &=& \left( \begin{array}{cc} - \int\! dy \sqrt{G}[ g_{L}^{(m) *} \partial_{y} g_{R}^{(n)}] & - \int\! dy \sqrt{G}[ g_{L}^{(m) *} \sigma^{*} g_{L}^{(n)}] \\ - \int\! dy \sqrt{G}[ g_{R}^{(m) *} \sigma g_{R}^{(n)} ] & - \int\! dy \sqrt{G}[ g_{R}^{(m) *} \partial_{y} g_{L}^{(n)}] \\ \end{array} \right).\\\end{aligned}$$ Since the RS spacetime has no translational invariance along the y direction, the vacuum expectation value $\langle \sigma \rangle$ can depend on y. It is determined by minimizing the induced four-dimensional effective potential. Phase Structure in RS Background ================================ We evaluate the induced four-dimensional effective potential and calculate the vacuum expectation value $\langle \sigma \rangle$ in the leading order of the 1/N expansion. The effective four-dimensional action is obtained by performing the integration over all the fermion fields, $$S_{F}=\ln \det \left[i\partial_{\mu} \gamma^{\mu} + M[\sigma,\sigma^{*}] \right]- \int d^4x \int dy \sqrt{G}\left[\frac{|\sigma|^2}{\hat{\lambda}}\right].$$ Taking $\sigma$ to be independent of x, we obtain the effective potential, $$\begin{aligned} V_{eff}[\sigma,\sigma^*]&=&-\frac{1}{16\pi^2} {\rm Tr} \left[ \Lambda^4 \ln[1+\frac{M^2}{\Lambda^2}] -M^4 \ln[1+\frac{\Lambda^2}{M^2}]+M^2 \Lambda^2 \right] \nonumber \\ &+&\int dy \sqrt{G}\left[\frac{|\sigma|^2}{\hat{\lambda}}\right] \label{epotential1}\end{aligned}$$ with a momentum cutoff $\Lambda$; the trace is taken over the K-K modes. Here we restrict ourselves to the following two possibilities, $\langle \sigma \rangle = v$ and $\langle \sigma \rangle = v e^{k |y|}$. Such restricted functions may not minimize the effective potential.
But note that if such a function with non-vanishing $v$ minimizes the effective potential even within this restricted class, we can conclude that there is a more stable state than the symmetric state, $\langle \sigma \rangle =0$. In other words the chiral symmetry is broken down. $\langle \sigma \rangle =v$ case -------------------------------- First we assume that the vacuum expectation value $\langle \sigma \rangle$ is independent of y, $\langle \sigma \rangle =v$. In general we cannot diagonalize the Lagrangian (\[lagrangian3\]) and therefore we rely on numerical analysis. The effective potential is expressed as $$V_{eff}(v)=\frac{v^2}{\lambda}-\frac{1}{16\pi^2} {\rm Tr} \left[ \Lambda^4 \ln[1+\frac{M(v)^2}{\Lambda^2}] -M(v)^4 \ln[1+\frac{\Lambda^2}{M(v)^2}]+M(v)^2 \Lambda^2 \right].$$ Here $M(v)$ is a matrix whose components are $M_{mn} (0 \le m,n \le 2N_{KK})$ with $$\begin{aligned} M_{00} &=& \frac{v}{a}\ln[1+a], \\ M_{0n} &=& M_{n0} = \frac{\sqrt{2}v}{a} \left[ \cos[\frac{n\pi}{a}] \left\{C_{i}(\frac{n\pi(1+a)}{a})-C_{i}(\frac{n \pi}{a}) \right\} \right. \\ && \left. + \sin[\frac{n\pi}{a}] \left\{S_{i}(\frac{n\pi(1+a)}{a})-S_{i}(\frac{n \pi}{a}) \right\} \right] \ \ \ (1 \le n \le N_{KK}),\\ M_{0n} &=& M_{n0} = 0 \ \ \ (N_{KK}+1 \le n \le 2N_{KK}),\\ M_{mn} &=& \frac{v}{a} \left[ \cos[\frac{(m+n)\pi}{a}] \left\{C_{i}(\frac{(m+n)\pi (1+a)}{a})-C_{i} (\frac{(m+n)\pi}{a})\right\} \right. \\ && \left. + \sin[\frac{(m+n)\pi}{a}] \left\{S_{i}(\frac{(m+n)\pi (1+a)}{a})-S_{i} (\frac{(m+n)\pi}{a})\right\} \right. \\ && \left. + \cos[\frac{(m-n)\pi}{a}] \left\{C_{i}(\frac{(m-n)\pi (1+a)}{a})-C_{i} (\frac{(m-n)\pi}{a})\right\} \right. \\ && \left.
+ \sin[\frac{(m-n)\pi}{a}] \left\{S_{i}(\frac{(m-n)\pi (1+a)}{a})-S_{i} (\frac{(m-n)\pi}{a})\right\} \right] \\ && (1 \le m,n \le N_{KK}), \\ M_{mn} &=& M_{nm} = m_{n} \delta_{m,n} \ \ \ (1 \le m \le N_{KK}, N_{KK}+1 \le n \le 2 N_{KK}), \\ M_{mn} &=& \frac{v}{a} \left[ \cos[\frac{(m-n)\pi}{a}] \left\{C_{i}(\frac{(m-n)\pi (1+a)}{a})-C_{i} (\frac{(m-n)\pi}{a})\right\}\right. \\ && \hspace{-2cm} \left. + \sin[\frac{(m-n)\pi}{a}] \left\{S_{i}(\frac{(m-n)\pi (1+a)}{a})-S_{i} (\frac{(m-n)\pi}{a})\right\} \right. \\ && \hspace{-2cm} \left. - \cos[\frac{(m+n-2N_{KK})\pi}{a}] \left\{C_{i}(\frac{(m+n-2N_{KK}) \pi (1+a)}{a})-C_{i} (\frac{(m+n-2N_{KK})\pi}{a})\right\}\right. \\ && \hspace{-2cm} \left. - \sin[\frac{(m+n-2N_{KK})\pi}{a}] \left\{S_{i}(\frac{(m+n-2N_{KK})\pi (1+a)}{a})-S_{i} (\frac{(m+n-2N_{KK})\pi}{a})\right\} \right] \\ && \ \ \ (N_{KK}+1 \le m,n \le 2 N_{KK}), \\\end{aligned}$$ where $a=e^{k\pi r_c}-1$, $m_n = n k\pi/a$ and $\lambda = 4k\hat{\lambda}/(1-e^{-4k\pi r_c})$, and $C_i$ and $S_i$ are the cosine-integral and sine-integral functions, respectively. We take $N_{KK}$ as $a \frac{k\pi}{\Lambda}+1$ so that K-K modes heavier than $\Lambda$ in the unbroken phase are neglected. For small $N_{KK}$ we can calculate the effective potential numerically. Through numerical inspection we find that the chiral symmetry is broken down above a critical coupling. The phase transition from the symmetric phase to the broken phase is of the second order. In Fig. \[critical\] the critical coupling constant is drawn as the dotted line against $N_{KK}$ with $k=\Lambda=M_{P}$ fixed. $\langle \sigma \rangle = v e^{k |y|}$ case ------------------------------------------- Next we consider the case $\langle \sigma \rangle = v e^{k |y|}$. In this case we can easily diagonalize the mass term and perform the y integration in Eq. (\[lagrangian3\]).
The Lagrangian reduces to $$\begin{aligned} {\cal L} &=& -\frac{1}{\lambda} v^2 + \left( \begin{array}{cc} \bar{\psi}_{1R}^{(0)} & \bar{\psi}_{2L}^{(0)} \\ \end{array} \right) \left( \begin{array}{cc} i \gamma^{\mu} \partial_{\mu} & -v \\ - v & i \gamma^{\mu} \partial_{\mu} \\ \end{array} \right) \left( \begin{array}{c} \psi_{1R}^{(0)} \\ \psi_{2L}^{(0)} \\ \end{array} \right) \nonumber \\ &&\hspace{-1cm} + \sum_{n \geq 1} \left( \begin{array}{c} \bar{\psi}_{1R}^{(n)} \\ \bar{\psi}_{2R}^{(n)} \\ \bar{\psi}_{1L}^{(n)} \\ \bar{\psi}_{2L}^{(n)} \\ \end{array} \right)^{T} \left( \begin{array}{cc|cc} i \partial_{\mu} \gamma^{\mu} & 0 & m_n & -v \\ 0 & i \partial_{\mu} \gamma^{\mu} & -v & m_n \\ \hline m_n & - v & i \partial_{\mu} \gamma^{\mu} & 0 \\ -v & m_n & 0 & i \partial_{\mu} \gamma^{\mu} \end{array} \right) \left( \begin{array}{c} \psi_{1R}^{(n)} \\ \psi_{2R}^{(n)} \\ \psi_{1L}^{(n)} \\ \psi_{2L}^{(n)} \\ \end{array} \right) , \label{lagrangian4}\end{aligned}$$ where $\lambda$ is $2 k \hat{\lambda}/(1-e^{-2k \pi r_c})$ and $m_n=n k \pi / (e^{k \pi r_c}-1)$. The fermion mass matrix $M$ has the form $$\begin{aligned} M= \left( \begin{array}{cc} 0& \begin{array}{ll} m_{n}&-v\\ -v&m_{n}\\ \end{array} \\ \begin{array}{ll} m_{n}&-v\\ -v&m_{n}\\ \end{array} &0\\ \end{array} \right).\end{aligned}$$ The fermion masses are given by the eigenvalues of $M$, i.e. $v$ and $|v\pm \frac{nk\pi}{a}|$. In the leading order of the $1/N$ expansion the effective potential for the Lagrangian (\[lagrangian4\]) is given by $$\begin{aligned} &&V_{eff} = \frac{1}{\lambda}v^2-\frac{1}{16\pi^2}\left[ \Lambda^4 \ln\left[1+\frac{v^2}{\Lambda^2} \right] -v^4\ln\left[1+\frac{\Lambda^2}{v^2}\right] +v^2\Lambda^2 \right] \nonumber \\ && \hspace{-0.5cm} -\frac{1}{16\pi^2}\sum_{n < N_{KK}} \left[ \Lambda^4 \ln \left[1+\frac{v_{+}^2}{\Lambda^2} \right] -v_{+}^4 \ln \left[1+\frac{\Lambda^2}{v_{+}^2}\right] +v_{+}^2 \Lambda^2 \right. \nonumber \\ && \hspace{-0.5cm} \left.
+ \Lambda^4 \ln \left[ 1+\frac{v_{-}^2}{\Lambda^2} \right] -v_{-}^4 \ln\left[1+\frac{\Lambda^2} {v_{-}^2} \right] +v_{-}^2\Lambda^2 \right], \label{epotential}\end{aligned}$$ where we define $v_{+} \equiv v+\frac{nk\pi}{a}$ and $v_{-} \equiv v-\frac{nk\pi}{a}$. Differentiating the effective potential, we obtain the gap equation $$\begin{aligned} \frac{\partial^2}{\partial v^2}V_{eff}(v)&=&\frac{2}{\lambda} -\frac{1}{4\pi^2} \left[ 1+\frac{2v^2}{1+v^2}-3v^2 \ln \left[1+\frac{1}{v^2} \right] \right]\\ -\frac{1}{4\pi^2} \sum_{n \le N_{KK}} && \hspace{-1cm} \left[ 2+\frac{2v_{+}^2}{1+v_{+}^2}-3v_{+}^2 \ln \left[1+\frac{1}{v_{+}^2} \right] +\frac{2v_{-}^2}{1+v_{-}^2}-3v_{-}^2 \ln \left[1+\frac{1}{v_{-}^2} \right] \right]\\ &=&0.\end{aligned}$$ For each numerical value of the coupling constant $\lambda$ we calculate the effective potential (\[epotential\]). We find that a second order phase transition takes place, and the chiral symmetry is broken down above a critical value of the coupling constant. The critical coupling constant $\lambda_{cr}$ is obtained by solving $\left[ \partial^2 V_{eff}(v)/\partial v^2 \right]_{v=0}=0$: $$\lambda_{cr}=8\pi^2 \left[ 1+ \sum_{n \le N_{KK}} \left\{ 2+ \frac{4(nk\pi)^2}{a^2+(nk\pi)^2}-6\left(\frac{nk\pi}{a}\right)^2\ln\left[ \frac{a^2+(nk\pi)^2}{(nk\pi)^2} \right] \right\} \right]^{-1}. \label{criticalcoupling1}$$ ![Critical coupling constant[]{data-label="critical"}](cr-curv.eps){width="11cm"} In Fig. 1 the critical coupling constant is shown as the solid line against $N_{KK}$ with $k = \Lambda = M_{P}$ fixed. In regions $II$ and $III$ the chiral symmetry is broken. The most remarkable feature shows up in region $II$: there the y-dependent state is more stable than the y-independent state. Natural Mass Scale ================== What is the natural mass scale for the lightest fermion in the bulk four-fermion model? The only mass scale in the bulk is the Planck scale, $M_{P}$.
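As a cross-check, the sum entering the critical coupling (\[criticalcoupling1\]) can be evaluated numerically. The sketch below is ours, not the authors' code; it works in units $k=\Lambda=1$, writes the K-K masses as $m_n = n\pi/a$, and applies the inverse to the full bracket, as required by the gap equation at $v=0$:

```python
import math

def lambda_cr(N_KK, a):
    """Critical coupling in units k = Lambda = 1, summing the per-mode
    bracket 2 + 4 m^2/(1+m^2) - 6 m^2 ln(1 + 1/m^2) over m = n*pi/a."""
    s = 1.0  # the zero-mode contribution
    for n in range(1, N_KK + 1):
        m = n * math.pi / a  # n-th K-K mass in cutoff units
        s += 2.0 + 4.0 * m**2 / (1.0 + m**2) - 6.0 * m**2 * math.log(1.0 + 1.0 / m**2)
    return 8.0 * math.pi**2 / s

# More K-K modes below the cutoff -> smaller critical coupling,
# matching the downward trend of the solid line in Fig. 1.
assert 0 < lambda_cr(100, 100 * math.pi) < lambda_cr(10, 10 * math.pi)
```

The choice $a = N_{KK}\pi$ in the assertion is an illustrative assumption that spreads the K-K masses evenly over $(0, \Lambda]$.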
We take all the mass scales in the bulk to be $O(M_{P})$, i.e. $k\simeq \Lambda\simeq M_{P}$, and set $kr_c \simeq 12$, which gives $N_{KK}\simeq 10^{16}$. For such a large $N_{KK}$ the critical coupling (\[criticalcoupling1\]) becomes inversely proportional to $N_{KK}$ in the $\langle \sigma \rangle = v e^{ky}$ case, $$\hat{\lambda}_{cr} \rightarrow \frac{2\pi^2}{(1-\ln2) k^2\Lambda N_{KK}} \simeq \frac{20}{\Lambda^3} e^{-k \pi r_c},$$ which is much smaller than the natural scale $1/M_{P}^3$. For the $\langle \sigma \rangle = v$ case the critical coupling constant stays almost constant, $\hat{\lambda}_{cr} \Lambda^3 \simeq O(30)$, in our numerical analysis, Fig. 1. It is larger than the natural scale. Therefore we conclude that the four-fermion coupling at the natural scale, $\hat{\lambda}\simeq 1/M_{P}^3$, is located in region $II$ of Fig. 1. In this region the effective potential for $\langle \sigma \rangle = v e^{ky}$ is lower than that for the y-independent vacuum. If the y-dependent vacuum, $\langle \sigma \rangle \simeq v e^{ky}$, is the true vacuum of the theory, the masses of the fermions are given by $v$ and $|v \pm nk\pi/a|$. One of the fermions necessarily has a mass below $k\pi/a\sim O(M_{EW})$ independently of the value of $v$. Thus the lightest fermion mass is generated dynamically at the electroweak scale, $k\pi/a$, even if the vacuum expectation value $v$ is at the Planck scale $M_{P}$. A light fermion thus exists in the bulk four-fermion model. It is one of the dynamical realizations of the RS mechanism [@inagaki; @randall2]. Summary and Discussion ====================== We have investigated the bulk four-fermion model in the RS spacetime. We assume the existence of two kinds of bulk fermions with different parity, which is necessary to realize the chiral symmetry in the induced four-dimensional model. The effective potential is calculated in the induced model.
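The light-fermion argument above is purely arithmetic: whatever $v$ is, the tower of masses $|v \pm n k\pi/a|$ contains a level within half a K-K spacing of $v$. A minimal sketch (illustrative numbers only; `spacing` stands for $k\pi/a$):

```python
def lightest_kk_mass(v, spacing):
    """Smallest value among {v} and {|v - n*spacing| : n >= 1}, i.e. the
    distance from v to the nearest multiple of the K-K spacing."""
    n_star = round(v / spacing)
    return min(abs(v - n * spacing) for n in (max(n_star - 1, 0), n_star, n_star + 1))

# The lightest mass is bounded by half a spacing, however large v is --
# the mechanism behind the dynamically generated light fermion.
assert lightest_kk_mass(1.0, 0.3) <= 0.15
assert lightest_kk_mass(123.456, 0.001) <= 0.0005
```

In the model the spacing is $k\pi/a \sim M_{EW}$ while $v$ can sit at $M_P$, so the bound lands the lightest fermion at the electroweak scale.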
Evaluating the minimum of the effective potential, we found that the vacuum depending on the extra direction y is more stable than the y-independent one in a certain region of the coupling constant. If we take all the mass scales in the bulk to be at the Planck scale, the lightest fermion mass is generated dynamically at the electroweak scale. Therefore a light fermion is obtained in our model. This shows the possibility of building a realistic model which may solve the hierarchy problem dynamically in the RS spacetime. In the present analysis we restricted ourselves to special forms for the y-dependence of the vacuum expectation value. Our solution may not be the true minimum of the effective potential. To find the true minimum, the effective potential (\[epotential1\]) should be calculated for a general y-dependent $\langle \sigma \rangle$. It is also interesting to calculate the stress tensor in our model and solve the Einstein equation, since a y-dependent $\langle \sigma \rangle$ naturally changes the spacetime structure. In the RS spacetime there are two 3-branes at the orbifold fixed points, $y=0$ and $y=\pi r_c$. Radiative corrections from the brane fields can affect the vacuum expectation value. But the mass scale of the $y=\pi r_c$ brane is the electroweak scale, $M_{EW}$. Since the influence of the brane fields is of order $O(M_{EW})$, the mass of the lightest fermion remains at the electroweak scale. A SUSY extension of our model is also interesting. We have two kinds of fermions in our model, and in the RS spacetime we can construct an N=2 SUSY model, which automatically includes two kinds of fermions [@buchbinder]. Acknowledgments {#acknowledgments .unnumbered} =============== The authors would like to thank H. Abe and S. D. Odintsov for useful discussions. [99]{} I. Antoniadis, Phys. Lett. B [**246**]{}, 377 (1990);\ N. Arkani-Hamed, S. Dimopoulos, and G. Dvali, Phys. Lett. [**B429**]{}, 263 (1998); Phys. Rev. [**D59**]{}, 086004 (1999). L.
Randall, R. Sundrum, Phys. Rev. Lett. [**83**]{}, 3370 (1999). W. D. Goldberger and M. B. Wise, Phys. Rev. Lett. [**83**]{}, 4922 (1999). S. Chang, J. Hisano, H. Nakano, N. Okada, M. Yamaguchi, Phys. Rev. [**D62**]{}, 084025 (2000). H. Davoudiasl, J. L. Hewett, T. G. Rizzo, Phys. Rev. [**D63**]{}, 075004 (2001). Y. Grossman, M. Neubert, Phys. Lett. [**B474**]{}, 361 (2000);\ N. Arkani-Hamed, S. Dimopoulos, G. R. Dvali, J. March-Russell, Phys. Rev. [**D65**]{}, 024032 (2002). V. A. Miransky, M. Tanabashi and K. Yamawaki, Phys. Lett. [**B221**]{}, 177 (1989); Mod. Phys. Lett. [**A4**]{}, 1043 (1989);\ C. T. Hill and E. H. Simmons, Phys. Rept. [**381**]{}, 235 (2003). B. A. Dobrescu, Phys. Lett. [**B461**]{}, 99 (1999);\ H. Cheng, B. A. Dobrescu and C. T. Hill, Nucl. Phys. B [**589**]{}, 249 (2000);\ N. Arkani-Hamed, H. Cheng, B. A. Dobrescu and L. J. Hall, Phys. Rev. [**D62**]{}, 096006 (2000);\ H. Abe, H. Miguchi and T. Muta, Mod. Phys. Lett. A15, 445 (2000);\ A. B. Kobakhidze, Phys. Atom. Nucl. [**64**]{}, 941 (2001) \[Yad. Fiz.  [**64**]{}, 1010 (2001)\];\ M. Hashimoto, M. Tanabashi and K. Yamawaki, Phys. Rev. [**D64**]{}, 056003 (2001); hep-ph/0304109; V. Gusynin, M. Hashimoto, M. Tanabashi and K. Yamawaki, Phys. Rev. [**D65**]{}, 116008 (2002); T. Inagaki, T. Muta and S. D. Odintsov, Mod. Phys. Lett. [**A8**]{} 2117 (1993); Prog. Theor. Phys. Suppl. [**127**]{} 93 (1997);\ E. Elizalde, S. D. Odintsov and Yu. I. Shilnov, Mod. Phys. Lett. [**A9**]{}, 913 (1994);\ T. Inagaki, S. Mukaigawa and T. Muta, Phys. Rev. [**D52**]{}, 4267 (1995);\ K. Ishikawa, T. Inagaki and T. Muta, Mod. Phys. Lett. [**A11**]{}, 939 (1996);\ T. Inagaki, Int. J. Mod. Phys. [**A11**]{} 4561 (1996). H. Abe, T. Inagaki, T. Muta, in [*Fluctuating Paths and Fields*]{}, edited by W. Janke, A. Pelster, H.-J. Schmidt, and M. Bachmann (World Scientific, Singapore, 2001);\ N. Rius, V. Sanz, Phys. Rev. [**D64**]{}, 075006 (2001). H. Abe, T. Inagaki, Phys. Rev. [**D66**]{}, 085001 (2002);\ H. Abe, K. 
Fukazawa, and T. Inagaki, Prog. Theor. Phys. [**107**]{}, 1047 (2002);\ H. Abe, hep-ph/0307004. Y. Nambu and G. Jona-Lasinio, Phys. Rev. [**122**]{}, 345 (1961). L. Randall, R. Sundrum, Phys. Rev. Lett. [**83**]{}, 4690 (1999);\ H. Davoudiasl, J. L. Hewett, T. G. Rizzo, Phys. Rev. Lett. [**84**]{}, 2080 (2000); Phys. Lett. [**B473**]{}, 43 (2000). I. L. Buchbinder, T. Inagaki and S. D. Odintsov, Mod. Phys. Lett. [**A12**]{}, 2271 (1997);\ T. Inagaki, S. D. Odintsov and Y. I. Shil’nov, Int. J. Mod. Phys. [**A14**]{}, 481 (1999).
Barbet of Liège ( Luikse Barbet ) The Barbet of Liège – also known as the Barbet Liégeois, Lütticher Barbet, Barbet in Liegi, or Льежский барбет – is a pigeon breed developed in the Liège region of Belgium since the 1900s, and is reported to be a cross between the French Owl and other short-billed pigeons. The varieties incorporated into this Owl pigeon type have the typical appearance of an owl pigeon, with a frill of feathers on the front of the neck, a short beak, and a slightly upright posture. It is large for the owl group.
How It Is To Be John Project Description Actually, these days it’s really difficult to be me. Because I am being something that I don’t want to be in order to pay the bills. If I’m being me then I’d be doing scientific research every single day. Then I wouldn’t be doing street performance. I would invent a washing machine that can dry the clothes within seconds after washing them. And use technologies from the pyramids of Bosnia to trick seeds to grow in a different month than they’re supposed to. It would not be about making money but instead I would like to live in a society where we are working for the benefit of mankind rather than for the benefit of ourselves. As long as we have money existent, we’re working for the benefit of ourselves, which defeats the object of humanity. Which is to work together. How can we work together if we’re all fighting for our survival? John – the Scottish street performing scientist (met in the streets of Athens)
Friday, September 20, 2013 Introducing Indie Band Oregon Trail, the Hardest Videogame Ever MECC's elementary school computer lab classic The Oregon Trail probably taught its users relatively little about American History. The lasting impression the game did leave was the idea that choices have consequences and that, sometimes, bad things just happen, regardless of how well you plan for them. When you lost those oxen by deciding to ford the river rather than take the ferry (a test to see if you understood that, on four legs, oxen are only about four feet tall), that was on you. But there was very little you could do about that dysentery epidemic later on. Oregon Trail was a simulation of people setting out into a world that was arbitrarily cruel and attempting to overcome it. That basic design element of Oregon Trail can be applied to a game that simulates something relevant to this blog: the life of a relatively unknown touring indie band. The same pioneer spirit can be found in a band setting out into the unknown on its first tour. Many risks and uncertainties await, but so does the potential for success. In the following design document, I will outline the fundamentals of making such an Oregon Trail-like game revolving around the lives of touring musicians. The Characters In the original Oregon Trail, only the occupation of the player character was important. This game instead takes an approach similar to RPG games like the original Final Fantasy in crafting a five-person party from the following character classes: The slacker: Consistently broke, no one is actually sure how he manages to sustain his lazy but leisurely life of avoiding work as much as possible. His primary commitment is to the scene, and people view his rejection of social norms in favor of following his passions as endearing. His draw is generally 19-year-olds who are insistent on never ever selling out.
The BoBo: Short for "Bohemian Bourgeois", the BoBo comes from a privileged background and was brought up being exposed to a variety of culture which helps him add a layer of sophistication to the band's image and draw out more refined audiences. He is not arrogant about his upbringing and views his bandmates as equals. However, he lives in constant fear of another BoBo who went to the same prep school as him outing the extent of his privilege on Tumblr and possibly killing the band's credibility. The Virtuoso: A classically trained musician who views being in the band as an escape from the rigid dogmatism of the institution. However, he often comes to the conclusion that his bandmates' approach to writing music is trite and uninspired. Whenever he contributes ideas, his non-Virtuoso bandmates think they are too weird. The Virtuoso draws similarly minded music nerds, often drum majors, who are obsessed with irregular time signatures and changes in modality. He listens to John Cage and gamelan ensembles when he drives the van. Most of his bandmates think this is weird, but the BoBo, being a natural consumer of culture, seeks to understand it. The Working Stiff: These kinds of people are oftentimes unable to play local shows because they can't get the time off from their soul-crushing part-time jobs. The upcoming tour may guarantee that they won't even have that job when they return. Their draw is often limited because they commit more time to trying to pay rent than being scene celebrities, but sometimes they use music as an efficient outlet for their pent-up frustration, cultivating an angry mystique that can seem captivating to those who don't know that they work at an ice cream shop. The Merch Guy: A young person, eager to step their rep up in the scene, who views going on the road with the one band that managed to get out of this crummy town as a unique privilege. 
Band members generally view him as an extra source of tour funding and a person who will carry things when they don't feel like it. Over time, he will discover that his local heroes are flawed and fragile human beings and decide never to go on another tour again. Gender: I used "he" as the pronoun in most of these cases, however all character classes can be women as well. The dimension that this adds to the game is that people in the music scene will be needlessly obsessed with the character not possessing a Y chromosome, sometimes treating her with either blatant misogyny, or well-intended but flawed praise that tokenizes her as a "girl who rocks!" and gets the band on Lilith Fair-like fests that may pigeonhole female musicians more than empower them.
Determination of estrogenic potential in waste water without sample extraction. This study describes the modification of the ER-Calux assay for testing water samples without sample extraction (NE-(ER-Calux) assay). The results are compared to those obtained with ER-Calux assay and a theoretical estrogenic potential obtained by GC-MSD. For spiked tap and waste water samples there was no statistical difference between estrogenic potentials obtained by the three methods. Application of NE-(ER-Calux) to "real" influent and effluents from municipal waste water treatment plants and receiving surface waters found that the NE-(ER-Calux) assay gave higher values compared to ER-Calux assay and GC-MSD. This is explained by the presence of water soluble endocrine agonists that are usually removed during extraction. Intraday dynamics of the estrogenic potential of a WWTP influent and effluent revealed an increase in the estrogenic potential of the influent from 12.9 ng(EEQ)/L in the morning to a peak value of 40.0 ng(EEQ)/L in the afternoon. The estrogenic potential of the effluent was <LOD (<0.68 ng(EEQ)/L). The overall reduction in estrogenic potential was 92-98%. Daytime estrogenic potential values varied significantly.
<?xml version="1.0" encoding="utf-8"?> <!-- Copyright (c) .NET Foundation and contributors. All rights reserved. Licensed under the Microsoft Reciprocal License. See LICENSE.TXT file in the project root for full license information. --> <tableDefinitions xmlns="http://schemas.microsoft.com/wix/2006/tables"> <tableDefinition name="NetFxNativeImage"> <columnDefinition name="NetFxNativeImage" type="string" length="72" primaryKey="yes" modularize="column" category="identifier" description="The primary key, a non-localized token."/> <columnDefinition name="File_" type="string" length="72" modularize="column" keyTable="File" keyColumn="1" category="identifier" description="The assembly for which a native image will be generated."/> <columnDefinition name="Priority" type="number" length="2" minValue="0" maxValue="3" description="The priority for generating this native image: 0 is synchronous, 1-3 represent various levels of queued generation."/> <columnDefinition name="Attributes" type="number" length="4" minValue="0" maxValue="2147483647" description="Integer containing bit flags representing native image attributes."/> <columnDefinition name="File_Application" type="string" length="72" modularize="column" nullable="yes" category="formatted" description="The application which loads this assembly."/> <columnDefinition name="Directory_ApplicationBase" type="string" length="72" modularize="column" nullable="yes" category="formatted" description="The directory containing the application which loads this assembly."/> </tableDefinition> </tableDefinitions>
Q: Union of a set of collections I'm given the following definition asked to prove the following theorem: Definition: Let $X$ be a set and suppose $C$ is a collection of subsets of $X$. Then $\cup \mathbf{C}=\{x \in X : \exists C\in \mathbf{C}(x\in C)\}$ Theorem: Let $\mathbf{C,D}$ be collections of subsets of a set $X$. Prove that $\cup ( \mathbf{C} \cup \mathbf{D}) = (\cup \mathbf{C}) \cup (\cup\mathbf{D})$ By my reading of the definition, I run into two problems: Firstly I think there is a type error since $ ( \mathbf{C} \cup \mathbf{D}) = \cup\{\mathbf{C}, \mathbf{D}\}$ is not a collection of subsets of $X$ (i.e. it is not a set whose elements are subsets of $X$); instead it is a set whose elements are collections of subsets of $X$. Second, if we ignore the type error and plug into the definition, we get: $\cup \{\mathbf{C}, \mathbf{D}\} =\{x\in X:\exists C\in \{\mathbf{C}, \mathbf{D}\}(x\in C)\}=\{x\in X:x\in \mathbf{C} \lor x \in \mathbf{D}\}$, however, since $\boldsymbol{C},\boldsymbol{D}$ are collections, all their elements are sets. Since $x$ is not a set, $x\notin\boldsymbol{C}\land x\notin\boldsymbol{D}$. Thus $\cup \{\mathbf{C}, \mathbf{D}\}=\emptyset$ However this can't be right since the other side of the equality; $(\cup \mathbf{C}) \cup (\cup\mathbf{D})\neq \emptyset$ in general. What am I missing? A: Essential are the points: $a\in\cup b\iff \exists x[a\in x\in b]$ $a\cup b$ is an abbreviation of $\cup\{a,b\}$ (so that $x\in\cup\{a,b\}\iff x\in a\vee x\in b$) On your first point: $\mathbf C$ and $\mathbf D$ are collections of subsets of $X$. Then $\{\mathbf C,\mathbf D\}$ is a set whose elements are collections of subsets of $X$. 
Consequently $\cup\{\mathbf C,\mathbf D\}$ is a set whose elements are subsets of $X$, because: $$x\in\cup\{\mathbf C,\mathbf D\}\text{ iff }x\in\mathbf C\text{ or }x\in\mathbf D$$ It must be proved that: $$\cup(\mathbf C\cup\mathbf D)=(\cup\mathbf C)\cup(\cup\mathbf D)$$ or equivalently that: $$\cup\cup\{\mathbf C,\mathbf D\}=\cup\{\cup\mathbf C,\cup\mathbf D\}\tag1$$ Equivalent are the following statements: $x\in\cup\cup\{\mathbf C,\mathbf D\}$ $\exists A\in\cup\{\mathbf C,\mathbf D\}[x\in A]$ $\exists A[[A\in\mathbf C\vee A\in\mathbf D]\wedge x\in A]$ $\exists A[x\in A\in\mathbf C]\vee\exists A[x\in A\in\mathbf D]$ $x\in\cup\mathbf C\vee x\in\cup\mathbf D$ $x\in\cup\{\cup\mathbf C,\cup\mathbf D\}$ This proves $(1)$
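For finite collections the identity can also be sanity-checked computationally. A small Python sketch (ours, purely illustrative; frozensets stand in for the subsets of $X$):

```python
def big_union(coll):
    """cup(coll) = {x : there exists A in coll with x in A}."""
    out = set()
    for A in coll:
        out |= A
    return out

# C and D are collections of subsets of X = {1, 2, 3, 4}.
C = {frozenset({1, 2}), frozenset({3})}
D = {frozenset({2, 4})}

# cup(C ∪ D) = (cup C) ∪ (cup D), as in statement (1).
assert big_union(C | D) == big_union(C) | big_union(D) == {1, 2, 3, 4}
```

Here `C | D` is the set union $\mathbf C\cup\mathbf D$ of the two collections, mirroring the left-hand side of the theorem.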
Thursday, June 12, 2014 Law & Order “Repeat To Fade” was the final episode of series 8, and possibly the final episode of the show. Earlier this month, ITV announced that Law & Order UK would “rest” for now, this being the last episode for the foreseeable future, and that Bradley Walsh was leaving the show. This was sad news for Law & Order fans both in the UK and in the US, the UK show being the only way US viewers can get a “fix” of new episodes in their beloved “mothership” Law & Order format. (Hopefully series 8 will air on BBC America now that it finished its UK run but as of right now there is no word on that.) Bradley Walsh leaving the show is a huge loss. Bradley has been with Law & Order UK since the very beginning, and his character Ronnie Brooks is clearly the most interesting and most layered. It seemed when Ronnie was “born” into the Law & Order universe he was modeled after Law & Order’s Lennie Briscoe, but over the years Ronnie developed into his own unique character. I can’t say whether the show deciding to “rest” has anything to do with Bradley’s exit, but his character has been so integral that finding someone to replace him could be a daunting task. I can only hope that someday Law & Order UK will return. The show was produced by Kudos, who did an exceptional job in delivering a high quality episode each week. Law & Order UK is probably the most visually interesting of all the shows in the Law & Order brand. Each scene was framed, lit, and filmed beautifully. Even the most common location shots were staged just right to bring a scene to life. It also featured consistently excellent writing and a great cast. I am going to miss this show tremendously. “Repeat to Fade” was based on the original Law & Order episode “Marathon” (season 10, episode 6) where Detective Lennie Briscoe insisted that a suspect confessed to him, and only Lennie heard that confession.
“Repeat to Fade” had an identical theme but the story still felt fresh and interesting. In this case, Ronnie also has to deal with a new boss, Elizabeth Flynn, who replaced DI Wes Leyton after his murder. Things get off to a rocky start, and things don’t get better when only Ronnie hears a confession from a suspect in a high profile murder case. Viewers feel like they know Ronnie so we already trust that he did hear what he said he heard. But his own boss DI Flynn thinks he is a dinosaur and her boss, Commander Stone, is ready to put Ronnie out to pasture. It’s a huge embarrassment that Flynn is publicly tough on knife crime and her team can’t seem to nail the person who killed a woman with a knife. The legal case hits a brick wall when the limited evidence they have is circumstantial. Thankfully, the Ronnie we know and love remains diligent and manages to find the one shred of evidence that can put the case together and to put a young killer behind bars. Even though Ronnie was offered a job by Flynn and Stone which would move him off the street, we never hear if Ronnie has accepted the job. I assume that, with Ronnie working so passionately to (successfully) close his current case, that he had no intention of leaving. Now that Bradley Walsh has decided to exit the show, and the show won’t be returning for the foreseeable future, we can only wish that Ronnie is happy doing whatever Ronnie has decided to do. Many thanks to Bradley Walsh for making the beloved Ronnie Brooks come to life and for making viewers truly care about him. At the Farmer’s Market in Southwark on Saturday, December 13th, DS Ronnie Brooks and DS Joe Hawkins help a woman who had been stabbed as emergency medical people race to the scene. Ronnie asks Joe who called it in, and Joe explains it was PCS Lennie and he is giving a statement to a uniform and that SOCO is on the way. The woman is Sally Carlow, she is 24 years old and she was carrying some sweets and her purse has 40 quid in it. 
The officer tending to the woman tells the detectives Sally was already gone but she will pronounce her dead at the hospital. Joe and Ronnie speak with witnesses, who didn’t see Sally get knifed; they just saw a guy running away. A man said he was wearing a black leather jacket but a woman says it was blue. He also wore a red football shirt with a logo, but the witnesses differ on the name of the team. Ronnie speaks with another witness who saw the man flee on a scooter, a Typhoon 50cc, but he did not see the registration. He wore a white helmet with a green and red stripe, like his flag. At the mortuary on the same day, Ronnie and Joe hear there were no identification markings for the knife, which went straight through the femoral artery. The angle of the wound indicates the attacker was 5’4” or 5’5”. They wonder if the killer was a woman or a kid. Later, at MIU central headquarters, Joe, playing darts, asks Ronnie how come so many witnesses can see so many different things. Ronnie thinks it is because it happened so fast and it takes a while to sink in, the suspect goes missing, and they are stuck talking to the Italian tourist board. As Ronnie tries to get Joe back to the darts, Kayla tells them that CCTV is on its way in and she will track the red scooter. Ronnie tells her the full reg is the priority. As they go back to darts, they notice a news conference on the TV with their new “guv,” DI Elizabeth Flynn, who started Monday. She talks about knife crime, which is her priority. They ask her about Sally’s stabbing, which happened an hour prior, and she looks caught off guard. Ronnie says “ouch,” adding that Flynn was bitten by her own sound bite, saying she is being a plonk and she knows it.
But Flynn has just entered the room and hears this, and when Joe thinks Ronnie got the term plonk wrong, Flynn corrects him and says Ronnie did get it right, it means “person of little or no knowledge,” and it was Ronnie’s generation that called them “dinosaurs” before they realized they had brains as well as tits. Ronnie says he was just saying… and Flynn cuts him off and says she knows exactly what he was saying, and fighting knife crime is not just a sound bite for her. She tells Joe to go and get the eyewitness reports and, when he is finished with his game of darts, there is a murder for him to solve. As Flynn leaves, Joe turns to Ronnie and says he likes her, then throws his dart and leaves. Afterwards, walking outside, Ronnie says he will not have her accusing him of being a dinosaur, and Joe replies he doesn’t think it is him he needs to be telling. Ronnie replies he’s looking for another chance to make a first impression. Joe gets a phone call from Kayla, who tells him the owner of the scooter is at Chadwicke Estate. At the home of Thomas King at Chadwicke Estate later in the day, they find that his red scooter was nicked that morning. He didn’t report it because the police haven’t responded to his complaints before. They explain the stabbing, and Thomas asks if they think it was him because he is short and more suspicious. Joe replies in this investigation, yes. Thomas says someone stole his scooter and dumped it, and tells them it is at the other side of the estate. Ronnie and Joe find the burned scooter and Joe thinks they won’t get anything off it. Thomas also tells them his crash helmet was lying next to it. Back at MIU, as Flynn sets up her office, the detectives explain Thomas works at the local café and they confirmed he was there. Ronnie shows her the bagged helmet and says Lilly will take some prints from it. She tells them if there is any news to let her know, and to think of her as the third man on the team.
Ronnie smirks and looks at Joe, and Flynn asks if that is a problem. Ronnie says no, not at all. She replies good, she likes to keep her hands dirty. Kayla enters and informs them she tracked down Sally’s next of kin; her dad Albert is downstairs, and Sally has a son, Jack. They speak with Albert Carlow while Jack waits in another room. He wants answers. Sally had been in London two years and she came here to teach art to kids. Ronnie thinks this was random but asks if anyone wanted to hurt Sally or had a grudge. Albert says no, she only had time for Jack and the kids at school. Joe asks for a list of her friends in the area. Ronnie explains they have lots of eyewitness statements they are working on. Ronnie promises they will get him, and Albert asks how he tells Jack he lost his mother. Ronnie and Albert enter the room where Jack is waiting. Ronnie explains he is a policeman who doesn’t wear a uniform and he has been around a long time, like a dinosaur. He says there are things he doesn’t understand in this world, and sometimes bad things happen to good people. He can’t explain it, but today something sad has happened and… Joe watches from outside the room as Ronnie delivers the news. At the forensics lab on Sunday, December 14, Lilly says the owner’s prints were on the helmet but she found another partial thumbprint and she will email what she has. Ronnie thanks her for coming in and she says no problem, anything for him. The final episode of series 8 of Law & Order: UK, scheduled for Wednesday 11 June (9pm on ITV), will be the last to be transmitted for the foreseeable future, ITV and producers Kudos announced today. The hugely popular series starring Bradley Walsh, Ben Bailey Smith, Peter Davison, Georgia Taylor and Dominic Rowan is to be rested by the channel. “There may well come a time when we re-visit Law & Order: UK,” said ITV’s Director of Drama Commissioning Steve November.
“For the moment we’ll be resting the series whilst we continue to refresh our drama slate,” he added. The move coincides with Bradley Walsh’s decision to depart the successful crime drama to pursue other projects, both in drama and entertainment. “Ronnie Brooks is one of my best friends,” said Bradley. “It’s been an absolute pleasure to inhabit Ronnie’s Mac for as long as I have. Eight series is a wonderful achievement for everyone involved in the production. This has been one of the hardest decisions I have ever had to make. I hope one day to revisit him, but for now I’d like the opportunity to pursue other drama projects which ITV are developing,” he added. “Don’t forget you have one more chance to watch Ronnie in action on 11 June. I’d really love fans of the series, old and new, to watch the final episode to give the series a fitting and proper send off.” Created by acclaimed US show runner Dick Wolf and based on the US franchise, Law & Order is one of the most successful American primetime television franchises and has become a firm favourite with the ITV audience since first broadcasting in 2009. Law & Order: UK is produced by Kudos, a Shine Group company, Wolf Films and NBC Universal, with Executive Producers Jane Featherstone and Alison Jackson on behalf of Kudos and Dick Wolf for Wolf Films, with Jane Dauncey producing series 8. “It’s been a privilege for Kudos to produce Law & Order: UK over the last eight series,” said Jane Featherstone. “Its success and huge audience appeal over all of these years is a testament to the cast, crew and production team who have worked tirelessly to bring such great drama to air,” said Jane. Monday, June 2, 2014 UNIVERSAL CITY, Calif. — June 2, 2014 — NBC has announced premiere dates for its fall schedule, which include No. 1 broadcast program “Sunday Night Football,” the return of No. 1 reality series “The Voice,” No.
1 new series “The Blacklist,” and the debuts of highly anticipated comedy “Marry Me” and drama “The Mysteries of Laura.” “Sunday Night Football” launches its campaign on Thursday, Sept. 4 when the Green Bay Packers travel to the Super Bowl champion Seattle Seahawks (8 p.m. ET/5 p.m. PT). Three days later on Sunday, Sept. 7, the Indianapolis Colts will play at the AFC champion Denver Broncos (8:20 p.m. ET/5:20 p.m. PT). The game will be preceded by the season premiere of “Football Night in America” (7 p.m. ET/4 p.m. PT). “The Biggest Loser,” in which contestants lose weight in the hope of restarting their lives, returns for its 16th season Thursday, Sept. 11 (8-10 p.m. ET/PT). “The Blacklist” returns for its second season Sept. 22 (10-11 p.m.) following “The Voice.” “The Blacklist,” which stars Emmy winner James Spader as “Red” Reddington, was the top new series last season and helped propel NBC to No. 1 in the 18-49 demo for the first time in 10 years. Following a two-hour episode of “The Voice” on Tuesday, Sept. 23, “Chicago Fire” (10-11 p.m.) returns for its third season as the heroic men and women of the Windy City’s fire department consistently put their lives on the line in order to save others. Wednesday, Sept. 24 marks the debut of the new Debra Messing series “The Mysteries of Laura” (8-9 p.m.). In this breezy new drama, Messing plays an NYPD detective who must balance her professional life as a cop with her home life as a mother of unruly twin boys while on the cusp of divorce. On that same night, and back in their familiar Wednesday timeslots, “Law & Order: SVU” (9-10 p.m.), starring Mariska Hargitay, returns for an astonishing 16th season while “Chicago P.D.” (10-11 p.m.) comes back for its second season. NBC’s beloved family drama “Parenthood” begins its sixth and final season Thursday, Sept. 25 (10-11 p.m.). Comedy “Bad Judge” begins Thursday, Oct. 2 (9-9:30 p.m.)
with Kate Walsh starring as a judge who enjoys living on the wild side, but is one of the most respected jurists when behind the bench. Following “Bad Judge” that night, “A to Z” (9:30 p.m.) makes its debut. The comedy series, which stars Ben Feldman and Cristin Milioti, chronicles the relationship of a young couple from the first time they meet. Also on the comedy front, “Marry Me” — from “Happy Endings” executive producer David Caspe, about how a couple’s engagement gets off to a rough start — launches Tuesday, Oct. 14 (9-9:30 p.m.). That is immediately followed by Jason Katims’ second-year series “About a Boy” (9:30 p.m.), which was one of the most successful new comedies of last season. Fan favorite “Grimm” begins its fourth campaign Friday, Oct. 24 (9-10 p.m.), and that will be immediately followed by new series “Constantine” (10-11 p.m.). “Constantine” is based on the wildly popular DC Comics series “Hellblazer” and stars Matt Ryan as master of the occult John Constantine. As a reminder, “The Blacklist” will return to NBC’s primetime lineup with a two-part episode beginning Sunday, Feb. 1 immediately following the Super Bowl. Part two will air Thursday, Feb. 5 (9-10 p.m.) in its regular midseason timeslot.
Protein carboxymethylation during in vitro culture of human peripheral blood monocytes and pulmonary alveolar macrophages. Protein carboxymethylase (PCM) activity was evaluated for long-term in vitro cultures of human peripheral blood monocytes and pulmonary alveolar macrophages. Both cell types exhibited increases in endogenous (without addition of the exogenous substrate, gelatin) and total (with gelatin) enzyme activity with increased time in culture. Monocytes developed increased activity after a 5-day lag period; three- to four-fold increases over day 1 values occurred in both total and endogenous specific activity. In contrast, PCM activity increased for pulmonary alveolar macrophages (PAM) without a detectable lag period. Although the increase in endogenous activity of 10–14-day PAM cultures was similar to comparable-age monocyte cultures, total enzyme activity increased only two-fold above day 1 values. The observation of changes in PCM endogenous specific activity in monocyte cultures may reflect alterations in enzyme activity and/or levels of endogenous methyl-acceptor proteins.
CUSTOM BUILD Custom builds are designed around your lifestyle and space requirements and provide endless possibilities. We love building custom projects! Our tiny homes and other movable structures (commercial, office, retail, etc.) are typically 14-40ft in length and are either 8.5'-10.5' wide with fender/deck-over/gooseneck configurations. There are many factors to consider, whether you are building a residential home or another type of movable structure, and we can help guide you through the process. The design team at California Tiny House are experts in maximizing livable space and storage, while balancing design elements. Are you ready to start a Custom Build? Starting a design with California Tiny House is easy: Fill out the form below and let us know what you want to build (we are up for any type of project). Some of the most common home requests are shown below, so if you don't see an option you are interested in, please add it to the comments. Once you submit your form, our team will contact you and the design process can begin! Once we get the basic details we can start custom drawings and create something awesome just for you. For all custom projects there is an initial design fee of $1,500, which will go toward the final purchase of your build. Let's get started!
Recent reviews Fabulous 8.8 From 154 reviews Feung Nakorn Balcony Rooms and Cafe, Bangkok Exceptional 10.0 Stay here! Absolutely lovely. Courtyard is a beautiful setting for breakfast. Close to everything we wanted to see during our short stay in Bangkok. Wish we could have stayed longer. Very helpful staff. Good breakfast. Dec 20, 2017 Verified Hotels.com guest review A Traveler, US 2 night trip Feung Nakorn Balcony Rooms and Cafe, Bangkok Very Good 8.0 Love it, not for everyone I thought my stay was close to perfect. I have stayed here several times and the neighborhood is transforming into something very special. When I say not for everyone, the location of this hotel is in one of Bangkok's oldest neighborhoods. If you want or need to go for a more modern shopping experience, it's quite inconvenient and expensive. On the other hand, if you are comfortable in Asia and want a very affordable and comfortable place to explore the old city, Feung Nakorn is perfect. Dec 7, 2017 Verified Hotels.com guest review Robert Gala, US 3 night trip Feung Nakorn Balcony Rooms and Cafe, Bangkok Very Good 8.0 2nd time around This is a love it or hate it location. Walking distance to many of the "attractions," klongs and sois filled with street food and traditional markets. If you are a seasoned traveler you'd probably love it. On the other hand you will find few if any western style restaurants or shopping. If this is your thing I would suggest staying elsewhere. This was our 2nd time here so that should speak for itself. The price is also very reasonable. Dec 3, 2016 Verified Hotels.com guest review A Traveler, US 4 night romance trip Feung Nakorn Balcony Rooms and Cafe, Bangkok Very Good 8.0 Nice Boutique Hotel Big room, great location for sightseeing, friendly staff. Apr 6, 2016 Verified Hotels.com guest review A Traveler, US 3 night trip with friends Feung Nakorn Balcony Rooms and Cafe, Bangkok Exceptional 10.0 DO NOT MISS THIS HOSTEL/HOTEL!!
The fiancé and I stayed in a 4-bed dorm (2 bunk beds) that sat near a ledge overlooking the minibar and koi pond. The hostel was VERY clean with incredibly nice staff. Their little orange cat that roams around is adorable and well taken care of (unlike a lot of strays). This is by far the best hostel I have ever stayed in.

Pros:
- Great food and even greater prices! Breakfast is good. Better than a lot of restaurants in the area
- Extremely friendly staff
- Very clean
- Air conditioning was awesome, especially if you like to take naps midday
- Great atmosphere and friendly environment
- CHEAP CHEAP CHEAP!!

Cons:
- Shared bathroom with men and women. Can be a bit uncomfortable if not used to it
In vitro antioxidant potentials of traditional Chinese medicine, Shengmai San and their relation to in vivo protective effect on cerebral oxidative damage in rats. The preventive effects of Shengmai San (SMS), a traditional Chinese herbal medicine (TCM), were studied on cerebral ischemia-reperfusion injury in rats as a model of antioxidant-based composite therapy. Two biochemical indicators of oxidative damage, thiobarbituric acid reactive substance (TBARS) formation and glutathione peroxidase (GPX) loss, were measured in the brain after forebrain ischemia-reperfusion treatment, and both were inhibited in all rats administered SMS (15 g original herbs/kg) 2 h before the ischemia-reperfusion. Histochemical study of the brain slice using TTC staining revealed that the SMS effectively reduced the infarct area caused by the cerebral ischemia-reperfusion. The antioxidant potentials of SMS preparations were determined in vitro by five different assay methods and were related to the in vivo effectiveness of SMS in protection against brain damage. Inhibitory effect on TBARS formation in vivo showed better correlation with superoxide radical scavenging and DPPH quenching activity in vitro than with the other in vitro antioxidant indicators. On the other hand, the in vivo prevention of GPX activity loss showed better correlation with in vitro crocin bleaching inhibition than with the other in vitro antioxidant indicators. It was also suggested that the in vitro TBARS inhibitory activity of SMS is not a good indicator for predicting the in vivo effectiveness of SMS on inhibition of either TBARS formation or GPX activity loss.
Kill-A-Watt promotes efficiency Residents of University Village are getting the opportunity to monitor and reduce their carbon footprints. A pilot program named Kill-A-Watt is coming to UM. Coupled with GreenU initiatives, it will provide students living at the UV with incentives to monitor their energy use and establish more energy-efficient lifestyles. Senior Sean Ahearn, vice president of Kill-A-Watt, is currently working with Ian McKeown, GreenU sustainability coordinator, to facilitate the program. Every month, UV residents who participate in the Kill-A-Watt competition will receive a free energy statement and they will be provided with tips on how to save energy, which in turn will lower their bill. “The building that saves the most will get a pizza party,” Ahearn said. “Then individuals from that building will get a chance to receive other great prizes.” The Kill-A-Watt Program can already be found at various educational institutions in the state of Florida, including Florida International University and University of Central Florida. Both schools have reportedly saved thousands of dollars on their energy bills each semester. The program is looking to expand onto other campuses, including the University of Florida. “We’re looking to get this energy program started this semester,” McKeown said. The launch date has yet to be determined, but many students living in the UV are excited about the prospects of reducing their carbon footprints. “Saving energy is important to me and I think it’s great that so many people on campus care,” said junior Arthur Affleck, a UV resident. The Miami Hurricane is the student newspaper of the University of Miami in Coral Gables, Fla. The newspaper is edited and produced by undergraduate students at UM and is published semi-weekly on Mondays and Thursdays during the regular academic year.
The Wharton One-Acts have become an annual event, as much enjoyed for their atmospheric setting as for the plays themselves. This season Shakespeare & Company has moved the One-Acts from Edith Wharton's erstwhile summer home to the Company's new Salon Theater in the Springlawn Mansion, another historically and architecturally significant "summer cottage." Entering the spacious lobby -- its magnificent curved stairway and Lenox Mountain vistas visible from the Theater, the Dining Room backed by stone terraces -- one has the feeling that the Mount's Parlor Theater has been brought to Kemble Street on a magic carpet, undergoing a face lift along the way. The new Salon is configured exactly like the Plunkett Street space so that the audience still flanks the actors at close range, but in more comfortable seats and more pristine and airy surroundings. The general excitement of the Company's historic expansion has had a very positive ripple effect on the plays and the players. The current One-Acts are the best and most substantial I've seen, and so are the performances. Both are smartly directed by Normy Noël. Dennis Krausnick's adaptation of Henry James's comedy of American vs. English manners, An International Episode, is the longer and more intricate of the paired plays. It spans a one-year period in 1875 and moves through three locales. The action begins in the New York City office of Jack Westgate (Jeffrey Kent), an American tycoon. From there it's on to the Newport country house where he has sent two young English aristocrats, Lord Lambeth (Ethan Flower) and Percy Beaumont (John Rahal Sarrouf), to be entertained by his wife Katherine (Corinna May), her "bluestocking" sister Betsy Alden (Kate Holland), and his office assistant and Betsy's would-be suitor Willie Woodly (Ben Lambert). The last two scenes skip a year and take us to a London hotel. (The scene and time change and length could easily, and perhaps more advantageously, make this a stand-alone two-acter.)
The hotel is temporary home to the American sisters, whose European visit rekindles the abruptly aborted relationships (a romance between Lord Lambeth and Betsy, and the battle of wits and clashing opinions between Katherine and Percy). Katherine, who's proudly and defiantly American, does not share her sister's admiration of England and Englishmen, although Percy suggests "Perhaps we disagree in order to agree". The pragmatic Englishman has some of the play's sharpest lines, as when he tells his love-smitten aristocratic friend "Love only happens to shepherds and coal miners". Company veteran Corinna May, who has turned in one stellar performance after another, does not disappoint as the wary Katherine Westgate. Another Shakespeare & Company regular, Diane Prusha, is also at hand. If her double portrayal of a flustered Midwestern matron visiting the Westgate country house and a regal British duchess is any indication, Prusha has moved to a new level in her career. As for John Rahal Sarrouf, the flair for comedy he displayed in previous One-Acts has become even sharper. The company newcomers, Ethan Flower and Kate Holland, also acquit themselves most impressively. Flower, besides being an attractive leading man, is a good comedian. The opening scene when he and Sarrouf arrive in the Westgate office gets An International Episode off to a hilarious start. While the props for both plays are minimal, the costumes are luscious and true to the period of each. The Wharton play, The Rembrandt, brings five of the actors from the James play back on stage. This Edwardian farce is shorter, lighter, an episode more than a play. But the adaptation and direction and the three leads have unearthed the comic potential in the story of Miles Hackett (Sarrouf again, and brilliantly funny), a museum curator who is manipulated by his philanthropic and exuberant cousin Eleanor (Holland) into implying that the painting the impoverished Mrs.
Fontage (Prusha, in yet another amusing role) thinks is an unsigned Rembrandt might fetch $1,000. His little deception leads to more complications. It also reintroduces Corinna May as the deliciously over-the-top Mrs. Crozier. She is Hackett's boss, back from a trip to India which has filled her with fervor for Far Eastern religion. She also brings news that ends Hackett's disaster-filled day on a bright note. While the set consists of a few simple props, the costumes for both plays are luscious and true to the period of each. Speaking of costumes, there is a group of elegantly attired mannequins in the lobby who look as if they had stepped right out of the Gilded Age when Springlawn was built. They are, in fact, replicas of characters from past productions in the Wharton parlor -- not just the One-Acts but the full-fledged productions like House of Mirth and Custom of the Country. Could the carefully scripted and staged Wharton One-Acts 2001 be a signal of a transition to increasingly meatier matinee fare? Given the company's amazing leap forward, I'm willing to bet my intermission cookies that it is.
Q: jQuery draggable div I want to create draggable content like the iOS Facebook chat feature: if you drag the content less than 50% of the way from the right edge, it snaps back to the right; if you drag it more than 50% from the right, it goes to the left. I have this: HTML: <div class="chat-head"> <a class="bg"></a> <div class="message"> <p>Content</p></div> </div> jQuery: $(function() { $( ".chat-head" ).draggable(); /* the markup has no #draggable id, so select the class instead */ }); I don't know how to create the auto-align to the left/right feature. A: http://jqueryui.com/draggable/#snap-to Snap the draggable to the inner or outer boundaries of a DOM element. Use the snap, snapMode (inner, outer, both), and snapTolerance (distance in pixels the draggable must be from the element when snapping is invoked) options.
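Note that the snap option only pulls the element to an edge once it is already within snapTolerance pixels of it; the 50% behaviour described in the question is usually done with draggable's stop callback, which measures where the drag ended and animates the element to the nearer edge. A minimal sketch of that approach (the snapLeft helper and the .chat-head wiring are illustrative, not from the original post):

```javascript
// Decide which edge a dragged element should land on.
// Kept as a pure function so the 50% rule can be checked without a DOM:
// if the element's center is left of the container's midpoint, snap to
// the left edge (left = 0); otherwise snap flush against the right edge.
function snapLeft(elemLeft, elemWidth, containerWidth) {
  var elemCenter = elemLeft + elemWidth / 2;
  return elemCenter < containerWidth / 2
    ? 0
    : containerWidth - elemWidth;
}

// Hypothetical jQuery UI wiring (browser only, shown for illustration):
// $(".chat-head").draggable({
//   containment: "parent",
//   stop: function (event, ui) {
//     var target = snapLeft(ui.position.left, $(this).outerWidth(),
//                           $(this).parent().width());
//     $(this).animate({ left: target }, 200);
//   }
// });

// A 100px-wide element dropped at left=100 in a 400px container has its
// center at 150, which is left of the 200px midpoint, so it snaps left.
console.log(snapLeft(100, 100, 400)); // 0
console.log(snapLeft(250, 100, 400)); // 300
```

Animating in the stop callback (rather than using the snap option) is what produces the Facebook-chat-head feel, since the element glides to the chosen edge no matter where the drag ends.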
Baselworld Preview: Bell & Ross Turn down bling, amplify trust. Aeronautical instruments are a biblical witness: truthful, clear; something to stake your life on. When Tech wasn’t the cheap all-consuming T-Rex it is now, pilots needed a sure way of knowing what they (and not the algorithm or SoC) needed to do to stay airborne. If you’re on a sortie behind enemy lines, you’d like true information so you can make it home in one piece. As a statement of intent, mil-spec lends the Bell & Ross brand an aura of desirability – all its watches comply with military specifications. Bell & Ross watches have been smartly associated with aviation from the relatively young brand’s conception in 1992 by industrial designer Belamich and his business partner, Rosillo. BRV-2: Devil in the Subtlest Details All Worked Out In coming up with the Heritage design concept of the Vintage collection, Bell & Ross studied, among other things, pocket watches from the Great War and flight instruments from back to the 1940s. The Vintage collection is now in its third generation, and the newest members feature a trademark black dial with sand-coloured 12, 3, 6 and 9 numerals, “as if aged by the patina of time”. They evoke the past to bring its tangibility, trust and gravitas to the attention-deficit era. The smaller 41mm polished/satin-finished steel case similarly speaks with quiet authority, its rounded lugs a match for the new metal bracelet with fine links. The retro look is also detailed in the ultra-curved finish of the sapphire crystal, a nod to vintage watches. An example of how clearly conceived and well-executed industrial design can be a competitive advantage. BRV2-92 Steel Heritage The Bell & Ross Vintage collection introduced in 2009 referenced “key eras in aviation history” and, more pertinently, unifies its models around a common design concept.
Regardless of shape, size or function, “they share the characteristic of expressing the passage of time through colours" and, as mentioned previously, "a patina that give them a vintage look”. This level of subtlety completes the design so that it pushes all the right buttons in the mind of the beholder. The primary design element is the combination of beige numerals on a black background. The collection has now been updated with the BR V2-92 and BR V2-94 Steel Heritage, inspired by 1960s aircraft instrument panels.
Author Topic: Back to front signature canes (Read 1274 times) Hi, I have two paperweights in my collection that both have back-to-front signature canes in them. One by Peter McDougall and the other Perthshire. Neither is highly collectable as they both are standard/mass-produced. I presume this is not as rare as I initially thought, especially in the unlimited weights, and that it does not really affect the value of the weights? I can only presume that PMCD has fused the four canes, as I cannot see how the whole lot could be placed the wrong way around? Thanks and kind regards. I suppose at some future date the ones with backwards sig canes might be worth a little more. Even though the weights you speak of are general range weights, I have difficulty thinking of them as 'mass produced'. Even though they are not limited, I doubt McDougall makes more than a couple hundred of each model in a given year. Perthshire, being a factory-sized operation, probably made more. I would say with the closure of Perthshire even their general range weights are highly collectible. They always get a lot of bidding activity on eBay. There are not many makers who make weights to the standard of perfection you see in Perthshire and McDougall. Logged I collect Scottish and Italian paperweights and anything else that strikes my fancy. Actually, the oddity of reversed letters might make collectors avoid the weights. I suppose you could add $5 to the value, but there is no "antique" value for either Perthshire or McDougall weights. And are you writing that the Perthshire has a reverse "P" cane in it? Regardless, they are still too new in terms of their making and collectibility. Maybe in 150 years, things might change. Just as the mid-19th-century weights from France now garner top dollar. As for "mass-produced," I agree that you can hardly call Perthshire or McDougall makers of "mass-produced" paperweights.
Thanks for the inputs so far! Dave, do you know if your friend paid above the normal price for that? I need this info to tell my grandchildren about my two weights (the grandkids should start appearing in about 20-30 years). I accept that the Perthshire etc. weights have no antique value, but as others have posted, I do not believe that detracts from their collectability. As a new collector myself I believe these are the weights that most people do actually collect, making them highly collectable! {One just has to have a look at the way auctions go on eBay to see this. I have seen Baccarat, Saint Louis, Clichy etc. weights sell with 5-10 (serious) bids and big bucks; the same holds true for the good modern millefiori weights such as Perthshire, JD, PMCD, Strathearn etc., however with affordable price tags. Of course when you start talking good lampwork (or some of the brilliant millefiori the abovementioned produce/d) the picture changes back to big bucks. As always I'm getting a bit carried away, but the point I am trying to make is that there are only so many weights made by the same people that one wants, especially in the same/similar pattern number, until you start looking for something different...and affordable in terms of your collection...and a reversed signature could be one of those things.} Kind regards...and happy May Day/Workers Day I have two of the same 'model' of a PMcD standard range because they look so different from each other. I don't suppose I would have done that if they had been terribly expensive. I just remembered that one of my PMcDs has a backwards sig cane. I took a piccie of it but the file is too large to upload. As tiny as those canes are, I would not be surprised that a lot of them end up going in backwards. Logged I collect Scottish and Italian paperweights and anything else that strikes my fancy. I have two of the same 'model' of a PMcD standard range because they look so different from each other.
I don't suppose I would have done that if they had been terribly expensive.

I have the same thing with 3 Perthshire weights... they all measure to a different width and look different from each other, so I could explain it to SWMBO! Unfortunately I also have a lot of Eastern weights that are very similar... anyone want an instant collection?

Unfortunately I also have a lot of Eastern weights that are very similar... anyone want an instant collection?

Hahahahaha. If we add yours and mine, Karel, we'd probably have enough for a museum. I think I last counted about 300 weights sitting in boxes not going anywhere, stuff that came when I bought ones I REALLY wanted... I can't decide what to do with them. Someone suggested lining the path with them in the garden. I thought I might give it a try on the rockery. I take a batch with me every time I do a boot sale, but these days nobody wants the basic stuff, even at a couple of quid, only millefiori weights (doesn't matter if they are Chinese, people just like these) or named ones.
Automated Home Finder - Real Estate & Homes

Garden of the Gods, Colorado

Automated Home Finder is the source for all of your real estate and home-buying needs. By combining the most advanced search tools with the top real estate agents in the industry, we give you the competitive advantage to reach your home-buying and real estate goals. Whether you are just looking for the newest MLS listings, want to dig deep into the latest stats and neighborhood trending reports, or see what schools are located in the area, we have the systems and experts to help guide you each step of the way. At Automated Home Finder, we have what you're looking for.

Greetings from AutomatedHomeFinder.com! Our comprehensive home and property search is free to use, and has a variety of ways to locate what you are looking for. Just Sign In and you can save property listings and specific real estate searches. You can also have new properties sent to your email as soon as they hit the market! View this help page for more information on using this website. Thank you and enjoy your search!

$579,000 - 388 Fairfield Lane, Louisville
3 Bedrooms, 4 Baths, 2,853 Sq. Feet, 0.12 Acres

No detail overlooked, no expense spared. This home is designed for those who appreciate quality. The floor plan offers a wide-open feel, natural light and architectural touches. The kitchen may be the crown jewel of this home. Custom cabinetry, 42" upper cabinets, stunning crown molding, beautiful stainless appliances, dazzling granite and so much more! Take a mini vacation in your main-floor master suite with a bath that feels like you are at a 5-star hotel. The mini master upstairs has its own full bath with custom features sure to delight you and your guests. Just off of the mini master is a delightful loft with built-in cabinetry; what a delightful space to relax or work! The beautifully finished basement offers a large family room, bedroom, office or craft room and a bath. Relax on your 15x23' deck!
Let the HOA do the MOWING & SNOW REMOVAL; take a walk or snuggle by the fireplace instead. You must see this amazing home today. After all, you deserve the Very Best!
Mike Hull and Miles Dieffenbach stayed together on their official recruiting visit to Penn State, and later became roommates. The two arrived at Penn State in 2010 as part of a big, heralded recruiting class that was expected to achieve great things. They developed bonds as scout-team players that year, which Hull remembers fondly. He also looks around now to see so many faces gone.

"It's definitely weird," the linebacker said. "When we came in, there were like 30 of us. That first year, we had a great time. It was a tight-knit group. Then things started to happen."

Penn State will recognize 17 seniors when it hosts Michigan State on Saturday in the annual Senior Day game. Among them are just five players remaining from the 2010 recruiting class that was considered the Big Ten's best. Hull, Dieffenbach, Brad Bars, C.J. Olaniyan and Zach Zwinak will appear at Beaver Stadium for the last time, with only four able to play against the Spartans. Zwinak's injury (along with one to former walk-on Ryan Keiser) was the last bit of attrition for a class that has dwindled since 2010.

Of the 19 players in that recruiting class, six completed their eligibility as Penn State football players. Along with the five who will be honored Saturday, defensive tackle DaQuan Jones graduated last year and made the Tennessee Titans' roster.

It was a promising group at the time, loaded with linemen and some highly recruited players. Three (Hull, Silas Redd and Paul Jones) were five-star prospects, according to Rivals.com. Twelve others received four stars. But then players began leaving for a variety of reasons. Redd, Rob Bolden, Khairi Fortt and Kevin Haplea transferred following the announcement of sanctions in 2012. Paul Jones, who was shifted from quarterback to tight end, left during the 2012 season. Others left for different schools. Two players gave up football but remained enrolled at Penn State. One player, Evan Hailes, was forced to quit playing for medical reasons.
"Before you know it, we're down to like 10 guys," Hull said. "It changed the off-the-field dynamic of hanging out with the guys and getting that team camaraderie. On the field it was just different. You didn't see as many familiar faces that you saw on scout team during your freshman year." But, as Hull and Dieffenbach bonded, so did their dwindling class. They lasted through the 2011 season, the death of the coach who recruited them, a new coach, the NCAA sanctions and another new coach this year. "We're very tight," Dieffenbach said. "We stuck through it, we've done well, even thrived with all the different coaches and everything we've been through. That made our bond very special." Penn State coach James Franklin said the group also helped make his transition smoother. Following last week's loss to Illinois, Franklin met with the captains (including Hull, Dieffenbach and Olaniyan) when the team returned Saturday night. He called the meeting productive and said this year's seniors have been "awesome" for team morale. "It's probably like this in a lot of professions, but you get frustrated or disappointed and you're going through some challenges or adversity," Franklin said. "As long as you're surrounded with really good people that care and are committed, you can talk through it, and you feel better. "I know they made me feel better; gave me some perspective on some things that was really valuable. I think these seniors have been unbelievable. I know myself, as well as the rest of the staff and the young guys, look up to them and are very, very thankful." Saturday's game will be one to watch, Dieffenbach said, because his class wants to go out "as a bunch of fighters." Teammates want that for the seniors as well. "There's only a few of them left," redshirt junior Anthony Zettel said. "For them, I want to give everything I have. These guys really did everything they could to help Penn State through the sanctions. I want to go out and turn it up for them." 
WHERE ARE THEY NOW? Of the 19 players in Penn State's 2010 recruiting class, five will take part in Senior Day on Saturday. Catching up with the group, with remaining players bold-faced. Brad Bars: Defensive lineman. Kyle Baublitz: Graduated this year, did not return for final season of eligibility. Rob Bolden: Transferred to LSU in 2012, then to Eastern Michigan in 2014.
In a large skillet over medium-high heat, warm the olive oil. Season the chicken with salt and pepper. Place the flour, eggs and panko in separate shallow bowls or plates; dredge the chicken in flour, then dip in the eggs and coat lightly with panko. Place the coated chicken into the warm pan and sauté, turning, until the chicken is done, about five minutes on each side. Remove the chicken from the pan and set aside, keeping warm.
Q: Query to get the number of times an item appears

I have a table like this in my db:

id --- owner --- product ---- type
0 --- john --- mustang ---- car
1 --- tim --- a360 ---- plane
2 --- john --- camry ---- car
3 --- dan --- a380 ---- plane
4 --- tim --- ninja ---- bike
5 --- dan --- accord ---- car

I'm trying to get the number of each type an owner has. Something like this:

John
Car = 2
Plane = 0
Bike = 0
-------------
Tim
Car = 0
Plane = 1
Bike = 1
-------------
Dan
Car = 1
Plane = 1
Bike = 0
-------------

I have been unable to solve this. Another issue is that my database is able to accept new types. For example, someone can add a bicycle as a type. Is there any way to do this?

A: To solve this problem, first you have to generate the output rows and then get the counts:

select o.owner, p.product, count(t.owner)
from (select distinct owner from table) o cross join
     (select distinct product from table) p left join
     table t
     on t.owner = o.owner and t.product = p.product
group by o.owner, p.product;

If you have reference tables for owners and products, then you can use these instead of the select distinct subqueries.

EDIT: Grouping by type is basically the same idea:

select o.owner, ty.type, count(t.owner)
from (select distinct owner from table) o cross join
     (select distinct type from table) ty left join
     table t
     on t.owner = o.owner and t.type = ty.type
group by o.owner, ty.type;
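To make the pattern concrete, here is a runnable sketch of the accepted approach using Python's built-in sqlite3 module. The table name `garage` is a stand-in (the answer writes `table` for brevity, which is a reserved word in most SQL dialects); the data matches the question.

```python
import sqlite3

# In-memory database seeded with the question's rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE garage (id INTEGER, owner TEXT, product TEXT, type TEXT);
    INSERT INTO garage VALUES
        (0, 'john', 'mustang', 'car'),
        (1, 'tim',  'a360',    'plane'),
        (2, 'john', 'camry',   'car'),
        (3, 'dan',  'a380',    'plane'),
        (4, 'tim',  'ninja',   'bike'),
        (5, 'dan',  'accord',  'car');
""")

# Cross join every owner with every type, then left join the real rows;
# COUNT(t.owner) counts only matches, so missing combinations yield 0.
rows = conn.execute("""
    SELECT o.owner, ty.type, COUNT(t.owner)
    FROM (SELECT DISTINCT owner FROM garage) o
    CROSS JOIN (SELECT DISTINCT type FROM garage) ty
    LEFT JOIN garage t ON t.owner = o.owner AND t.type = ty.type
    GROUP BY o.owner, ty.type
    ORDER BY o.owner, ty.type
""").fetchall()

for owner, typ, n in rows:
    print(owner, typ, n)
```

Because the owner and type lists are derived from the data itself, a newly added type (e.g. a bicycle) shows up in the output automatically with a count of 0 for owners who lack one.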
An employer identification number (EIN) is a nine-digit number assigned by the IRS. It is used to identify the tax accounts of employers and certain others who have no employees. The IRS uses the number to identify taxpayers who are required to file various business tax returns. EINs are used by employers, sole proprietors, corporations, partnerships, non-profit associations, trusts, estates of decedents, government agencies, certain individuals, and other business entities. If you already have an EIN and the organization or ownership of your business changes, you may need to apply for a new number. For more information, refer to Do You Need an EIN on IRS.gov or Publication 1635 (PDF), Understanding Your EIN.

Daily Limitation of an Employer Identification Number

To ensure fair and equitable treatment for all taxpayers, the IRS limits EIN issuance to one per responsible party per day. This limitation is applicable to all requests for EINs whether online, by fax or mail. We apologize for any inconvenience this may cause.

There are three ways to apply for an EIN:

Online
Fax, and
Mail

Online - The Internet is the preferred method to use when applying for an EIN. The online EIN application is available Monday - Friday, 7 a.m. to 10 p.m. Eastern time; visit IRS.gov. The information submitted is validated during the online session. Once completed, an EIN is issued immediately. Taxpayers who apply online have an option to view, print, and save their EIN assignment notice at the end of the session. (Authorized third party designees will receive the EIN; however, the EIN assignment notice will be mailed to the applicant.) The online application is available for all entities whose principal business, office or agency, or legal residence (in the case of an individual) is located in the United States or U.S. Territories. Additionally, the principal officer, general partner, grantor, owner, trustor, etc.
must have a valid taxpayer identification number (Social Security number, EIN, or individual taxpayer identification number) in order to use the online application. The online application is not available to third party designees or for entities with foreign addresses (including Puerto Rico). Visit IRS.gov and search the term EIN for more information.

By Fax - You may obtain an EIN by completing Form SS-4 (PDF), Application for Employer Identification Number, and faxing it to the IRS for processing. The IRS Fax-TIN numbers are provided in the Form SS-4 Instructions (PDF). An EIN applied for by fax will be issued within 4 business days. Fax-TIN is available 24 hours a day, 7 days a week.

By Mail - You may also obtain an EIN by completing the Form SS-4 and mailing it to the IRS service center address listed on the Form SS-4 Instructions (PDF). Ensure that the Form SS-4 contains all of the required information. Send your completed Form SS-4 at least 4 to 5 weeks before you need your EIN to file a return or make a deposit. An EIN will be assigned and mailed to you within 4 to 5 weeks.

International EIN Applicants

There are two ways for international applicants to apply:

Telephone
Mail

By Telephone - International applicants may call 267-941-1099 (not a toll-free number) 6 a.m. to 11 p.m. (Eastern time) Monday through Friday to obtain their EIN. The person making the call must be authorized to receive the EIN and answer questions concerning the Form SS-4 (PDF), Application for Employer Identification Number.

By Mail - Send the Form SS-4 to Internal Revenue Service Center, Attn: EIN International Operation, Cincinnati, OH 45999. Ensure that the Form SS-4 contains all of the required information. Send your completed Form SS-4 at least 4 to 5 weeks before you need your EIN to file a return or make a deposit. An EIN will be assigned and mailed to you within 4 to 5 weeks.
Evangelism 'must not be forced on others', says Archbishop of Canterbury Christian witness "must be both confident and humble", the Archbishop of Canterbury has said in an Easter letter to churches around the world. Archbishop Justin Welby said that although it was a Christian duty to share the faith, it was important that Christians season their message with "gentleness and respect" and not force their beliefs onto others. "Our proclamation of the hope which is ours in the Resurrection of Jesus Christ must be both confident and humble," he wrote. "In our complex and plural world our evangelism must not be forced on others, but as followers of Christ we have a duty to bear witness to our faith: to speak of hope for the world in the Resurrection of Christ, a message seasoned with gentleness and respect. "Our actions of love, compassion, respect and gentleness confirm that the message we share is indeed good news." He reflected on the need for Christians to bring a message of hope to communities as people around the world continue to suffer as a result of environmental damage, war, terrorism, and political and economic instability. In addition, he warned of the "twin threats of extremism and apathy". "Our world is in desperate need of hope. As Christians we have a message of sure and certain hope to proclaim," he wrote. The call for humility echoes a recent appeal made by the Archbishop specifically to British Christians to be sensitive to their country's colonial past and how it might affect their witness in communities that were part of the Empire. Delivering the Deo Gloria Trust lecture, Archbishop Welby said it was important that Christians engage in dialogue rather than monologue, and recognise the positive contributions of people who belong to different faiths. 
"How are British Christians heard when we talk of the claims of Christ by diaspora communities who have experienced abuse and exploitation by an empire that has seemed to hold the Christian story at the heart of its project?" he said. He added: "Let us never be guilty of demeaning the light that others have, just show them something of the light you know. "Let's tell people about Jesus and witness to what he has done for us, without feeling the need to presume to tell others what is wrong with their faith."
--- author: - 'J. Adams' - 'M.M. Aggarwal' - 'Z. Ahammed' - 'J. Amonett' - 'B.D. Anderson' - 'D. Arkhipkin' - 'G.S. Averichev' - 'S.K. Badyal' - 'Y. Bai' - 'J. Balewski' - 'O. Barannikova' - 'L.S. Barnby' - 'J. Baudot' - 'S. Bekele' - 'V.V. Belaga' - 'R. Bellwied' - 'J. Berger' - 'B.I. Bezverkhny' - 'S. Bharadwaj' - 'A. Bhasin' - 'A.K. Bhati' - 'V.S. Bhatia' - 'H. Bichsel' - 'A. Billmeier' - 'L.C. Bland' - 'C.O. Blyth' - 'B.E. Bonner' - 'M. Botje' - 'A. Boucham' - 'A.V. Brandin' - 'A. Bravar' - 'M. Bystersky' - 'R.V. Cadman' - 'X.Z. Cai' - 'H. Caines' - 'M. Calderón de la Barca Sánchez' - 'J. Castillo' - 'D. Cebra' - 'Z. Chajecki' - 'P. Chaloupka' - 'S. Chattopdhyay' - 'H.F. Chen' - 'Y. Chen' - 'J. Cheng' - 'M. Cherney' - 'A. Chikanian' - 'W. Christie' - 'J.P. Coffin' - 'T.M. Cormier' - 'J.G. Cramer' - 'H.J. Crawford' - 'D. Das' - 'S. Das' - 'M.M. de Moura' - 'A.A. Derevschikov' - 'L. Didenko' - 'T. Dietel' - 'S.M. Dogra' - 'W.J. Dong' - 'X. Dong' - 'J.E. Draper' - 'F. Du' - 'A.K. Dubey' - 'V.B. Dunin' - 'J.C. Dunlop' - 'M.R. Dutta Mazumdar' - 'V. Eckardt' - 'W.R. Edwards' - 'L.G. Efimov' - 'V. Emelianov' - 'J. Engelage' - 'G. Eppley' - 'B. Erazmus' - 'M. Estienne' - 'P. Fachini' - 'J. Faivre' - 'R. Fatemi' - 'J. Fedorisin' - 'K. Filimonov' - 'P. Filip' - 'E. Finch' - 'V. Fine' - 'Y. Fisyak' - 'K. Fomenko' - 'J. Fu' - 'C.A. Gagliardi' - 'J. Gans' - 'M.S. Ganti' - 'L. Gaudichet' - 'F. Geurts' - 'V. Ghazikhanian' - 'P. Ghosh' - 'J.E. Gonzalez' - 'O. Grachov' - 'O. Grebenyuk' - 'D. Grosnick' - 'S.M. Guertin' - 'Y. Guo' - 'A. Gupta' - 'T.D. Gutierrez' - 'T.J. Hallman' - 'A. Hamed' - 'D. Hardtke' - 'J.W. Harris' - 'M. Heinz' - 'T.W. Henry' - 'S. Hepplemann' - 'B. Hippolyte' - 'A. Hirsch' - 'E. Hjort' - 'G.W. Hoffmann' - 'H.Z. Huang' - 'S.L. Huang' - 'E.W. Hughes' - 'T.J. Humanic' - 'G. Igo' - 'A. Ishihara' - 'P. Jacobs' - 'W.W. Jacobs' - 'M. Janik' - 'H. Jiang' - 'P.G. Jones' - 'E.G. Judd' - 'S. Kabana' - 'K. Kang' - 'M. Kaplan' - 'D. Keane' - 'V.Yu. Khodyrev' - 'J. 
Kiryluk' - 'A. Kisiel' - 'E.M. Kislov' - 'J. Klay' - 'S.R. Klein' - 'A. Klyachko' - 'D.D. Koetke' - 'T. Kollegger' - 'M. Kopytine' - 'L. Kotchenda' - 'M. Kramer' - 'P. Kravtsov' - 'V.I. Kravtsov' - 'K. Krueger' - 'C. Kuhn' - 'A.I. Kulikov' - 'A. Kumar' - 'R.Kh. Kutuev' - 'A.A. Kuznetsov' - 'M.A.C. Lamont' - 'J.M. Landgraf' - 'S. Lange' - 'F. Laue' - 'J. Lauret' - 'A. Lebedev' - 'R. Lednicky' - 'S. Lehocka' - 'M.J. LeVine' - 'C. Li' - 'Q. Li' - 'Y. Li' - 'G. Lin' - 'S.J. Lindenbaum' - 'M.A. Lisa' - 'F. Liu' - 'L. Liu' - 'Q.J. Liu' - 'Z. Liu' - 'T. Ljubicic' - 'W.J. Llope' - 'H. Long' - 'R.S. Longacre' - 'M. López Noriega' - 'W.A. Love' - 'Y. Lu' - 'T. Ludlam' - 'D. Lynn' - 'G.L. Ma' - 'J.G. Ma' - 'Y.G. Ma' - 'D. Magestro' - 'S. Mahajan' - 'D.P. Mahapatra' - 'R. Majka' - 'L.K. Mangotra' - 'R. Manweiler' - 'S. Margetis' - 'C. Markert' - 'L. Martin' - 'J.N. Marx' - 'H.S. Matis' - 'Yu.A. Matulenko' - 'C.J. McClain' - 'T.S. McShane' - 'F. Meissner' - 'Yu. Melnick' - 'A. Meschanin' - 'M.L. Miller' - 'N.G. Minaev' - 'C. Mironov' - 'A. Mischke' - 'D.K. Mishra' - 'J. Mitchell' - 'B. Mohanty' - 'L. Molnar' - 'C.F. Moore' - 'D.A. Morozov' - 'M.G. Munhoz' - 'B.K. Nandi' - 'S.K. Nayak' - 'T.K. Nayak' - 'J.M. Nelson' - 'P.K. Netrakanti' - 'V.A. Nikitin' - 'L.V. Nogach' - 'S.B. Nurushev' - 'G. Odyniec' - 'A. Ogawa' - 'V. Okorokov' - 'M. Oldenburg' - 'D. Olson' - 'S.K. Pal' - 'Y. Panebratsev' - 'S.Y. Panitkin' - 'A.I. Pavlinov' - 'T. Pawlak' - 'T. Peitzmann' - 'V. Perevoztchikov' - 'C. Perkins' - 'W. Peryt' - 'V.A. Petrov' - 'S.C. Phatak' - 'R. Picha' - 'M. Planinic' - 'J. Pluta' - 'N. Porile' - 'J. Porter' - 'A.M. Poskanzer' - 'M. Potekhin' - 'E. Potrebenikova' - 'B.V.K.S. Potukuchi' - 'D. Prindle' - 'C. Pruneau' - 'J. Putschke' - 'G. Rakness' - 'R. Raniwala' - 'S. Raniwala' - 'O. Ravel' - 'R.L. Ray' - 'S.V. Razin' - 'D. Reichhold' - 'J.G. Reid' - 'G. Renault' - 'F. Retiere' - 'A. Ridiger' - 'H.G. Ritter' - 'J.B. Roberts' - 'O.V. Rogachevskiy' - 'J.L. Romero' - 'A. Rose' - 'C. 
Roy' - 'L. Ruan' - 'R. Sahoo' - 'I. Sakrejda' - 'S. Salur' - 'J. Sandweiss' - 'I. Savin' - 'P.S. Sazhin' - 'J. Schambach' - 'R.P. Scharenberg' - 'N. Schmitz' - 'K. Schweda' - 'J. Seger' - 'P. Seyboth' - 'E. Shahaliev' - 'M. Shao' - 'W. Shao' - 'M. Sharma' - 'W.Q. Shen' - 'K.E. Shestermanov' - 'S.S. Shimanskiy' - 'E. Sichtermann' - 'F. Simon' - 'R.N. Singaraju' - 'G. Skoro' - 'N. Smirnov' - 'R. Snellings' - 'G. Sood' - 'P. Sorensen' - 'J. Sowinski' - 'J. Speltz' - 'H.M. Spinka' - 'B. Srivastava' - 'A. Stadnik' - 'T.D.S. Stanislaus' - 'R. Stock' - 'A. Stolpovsky' - 'M. Strikhanov' - 'B. Stringfellow' - 'A.A.P. Suaide' - 'E. Sugarbaker' - 'C. Suire' - 'M. Sumbera' - 'B. Surrow' - 'T.J.M. Symons' - 'A. Szanto de Toledo' - 'P. Szarwas' - 'A. Tai' - 'J. Takahashi' - 'A.H. Tang' - 'T. Tarnowsky' - 'D. Thein' - 'J.H. Thomas' - 'S. Timoshenko' - 'M. Tokarev' - 'T.A. Trainor' - 'S. Trentalange' - 'R.E. Tribble' - 'O.D. Tsai' - 'J. Ulery' - 'T. Ullrich' - 'D.G. Underwood' - 'A. Urkinbaev' - 'G. Van Buren' - 'M. van Leeuwen' - 'A.M. Vander Molen' - 'R. Varma' - 'I.M. Vasilevski' - 'A.N. Vasiliev' - 'R. Vernet' - 'S.E. Vigdor' - 'Y.P. Viyogi' - 'S. Vokal' - 'S.A. Voloshin' - 'M. Vznuzdaev' - 'W.T. Waggoner' - 'F. Wang' - 'G. Wang' - 'G. Wang' - 'X.L. Wang' - 'Y. Wang' - 'Y. Wang' - 'Z.M. Wang' - 'H. Ward' - 'J.W. Watson' - 'J.C. Webb' - 'R. Wells' - 'G.D. Westfall' - 'A. Wetzler' - 'C. Whitten Jr.' - 'H. Wieman' - 'S.W. Wissink' - 'R. Witt' - 'J. Wood' - 'J. Wu' - 'N. Xu' - 'Z. Xu' - 'Z.Z. Xu' - 'E. Yamamoto' - 'P. Yepes' - 'V.I. Yurevich' - 'Y.V. Zanevsky' - 'H. Zhang' - 'W.M. Zhang' - 'Z.P. Zhang' - 'P.A. Zolnierczuk' - 'R. Zoulkarneev' - 'Y. Zoulkarneeva' - 'A.N. Zubarev' ---
Catechol-O-methyltransferase (COMT) pharmacogenetics in the treatment response phenotypes of major depressive disorder (MDD). Psychiatry is a specialty in which pharmacogenomics approaches are applied to the study of interindividual differences in response to antidepressants, with the aim of improving patient treatment. Major depressive disorder (MDD) is a common and complex disorder resulting from genetic and environmental interactions. Less than 40% of patients with MDD achieve remission, and even after several treatment trials, one in three patients does not fully recover from MDD. Many clinical and genomic association studies have suggested that the catechol-O-methyltransferase (COMT) gene region is an important genetic locus for psychiatric disorders, because of the proposed relationship between its function in catecholaminergic neurotransmission and both individual response to antidepressants and vulnerability to psychiatric disorders. Although a number of COMT single nucleotide polymorphisms (SNPs) have been observed, the Val108/158Met (rs4680) polymorphism in exon 4 results in a change in the enzyme structure and has been intensively investigated for its role in enzyme activity and in prefrontal cortex functions in cognition. As serotonin interacts with dopamine, and dopamine availability is influenced by COMT SNPs, this overview explores the association between the COMT gene and response to treatment, based on the various pharmacogenetics/pharmacogenomics studies of the COMT gene published to date.
/*
  Tencent is pleased to support the open source community by making
  Plato available.
  Copyright (C) 2019 THL A29 Limited, a Tencent company.
  All rights reserved.

  Licensed under the BSD 3-Clause License (the "License"); you may
  not use this file except in compliance with the License. You may
  obtain a copy of the License at

  https://opensource.org/licenses/BSD-3-Clause

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" basis,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
  implied. See the License for the specific language governing
  permissions and limitations under the License.

  See the AUTHORS file for names of contributors.
*/

#ifndef __PLATO_ALGO_KCORE_HPP__
#define __PLATO_ALGO_KCORE_HPP__

#include <cstdlib>
#include <cstdint>
#include <limits>
#include <vector>
#include <algorithm>
#include <functional>
#include <type_traits>

#include "glog/logging.h"
#include "gflags/gflags.h"
#include "boost/format.hpp"
#include "boost/iostreams/stream.hpp"
#include "boost/iostreams/filter/gzip.hpp"
#include "boost/iostreams/filtering_stream.hpp"

#include "plato/util/perf.hpp"
#include "plato/util/atomic.hpp"
#include "plato/graph/graph.hpp"
#include "plato/engine/dualmode.hpp"

namespace plato { namespace algo {

struct kcore_info_t {
  plato::vid_t v_count_;
  plato::eid_t e_count_;
};

enum class kcore_calc_type_t {
  SUBGRAPH = 1,
  VERTEX   = 2
};

class kcore_algo_t {
public:
  /*
   * Finds the coreness (shell index) of the vertices of the network.
   * Ref: Montresor A, De Pellegrini F, Miorandi D. Distributed k-core
   *      decomposition[J]. IEEE Transactions on Parallel and Distributed
   *      Systems, 2012, 24(2): 288-300.
   *
   * \tparam Graph     The graph type; the graph must be partitioned by
   *                   destination node
   * \tparam Callback  The type of the callback function
   *
   * \param graph_info  Graph information
   * \param incomings   Graph edges indexed by destination node
   * \param callback    The result callback function
   **/
  template <typename Graph, typename Callback>
  static dense_state_t<vid_t, typename Graph::partition_t>
  compute_shell_index(const graph_info_t& graph_info, Graph& incomings,
                      Callback&& callback);
};

template <typename Graph, typename Callback>
dense_state_t<vid_t, typename Graph::partition_t>
kcore_algo_t::compute_shell_index(const graph_info_t& graph_info,
                                  Graph& incomings, Callback&& callback) {
  using partition_t          = typename Graph::partition_t;
  using adj_unit_list_spec_t = typename Graph::adj_unit_list_spec_t;
  using bitmap_spec_t        = bitmap_t<>;
  using state_t              = dense_state_t<vid_t, partition_t>;

  constexpr bool is_seq =
      std::is_same<partition_t, sequence_balanced_by_destination_t>::value ||
      std::is_same<partition_t, sequence_balanced_by_source_t>::value;
  static_assert(is_seq, "kcore only support sequence partition now");

  plato::stop_watch_t watch;
  watch.mark("t0");
  watch.mark("t1");

  auto& cluster_info = cluster_info_t::get_instance();

  bitmap_spec_t active_current(graph_info.max_v_i_ + 1);
  bitmap_spec_t active_next(graph_info.max_v_i_ + 1);

  state_t estimate(graph_info.max_v_i_, incomings.partitioner());
  state_t coreness(graph_info.max_v_i_, incomings.partitioner());

  active_current.fill();
  size_t need_modified = graph_info.vertices_;
  size_t modified      = graph_info.vertices_;
  coreness.fill(std::numeric_limits<vid_t>::max());

  // init coreness with node's degree
  incomings.reset_traversal();
  #pragma omp parallel
  {
    size_t chunk_size = 64;
    while (incomings.next_chunk([&](vid_t v_i, const adj_unit_list_spec_t& adjs) {
      coreness[v_i] = adjs.end_ - adjs.begin_;
      return true;
    }, &chunk_size)) { }
  }

  int partitions    = cluster_info.partitions_;
  int partition_id  = cluster_info.partition_id_;
  vid_t avg_degrees = std::max(graph_info.edges_ / graph_info.vertices_, 1UL);
  auto& offsets = incomings.partitioner()->offset_;

  if (0 == cluster_info.partition_id_) {
    LOG(INFO) << "prepared for shell-index calculation done, cost: "
              << watch.showlit_seconds("t1");
  }

  std::vector<int> displs(partitions);
  std::vector<int> counts(partitions);
  for (int p_i = 0; p_i < partitions; ++p_i) {
    counts[p_i] = offsets[p_i + 1] - offsets[p_i];
    displs[p_i] = offsets[p_i];
  }
  CHECK(displs[partitions - 1] >= 0)
      << "Allgatherv exceed int32 limit. displs[partitions-1]: "
      << displs[partitions - 1];

  int epoch = 0;
  do {
    struct broadcast_msg_t {
      vid_t v_i;
      vid_t coreness;
    };

    watch.mark("t1");
    watch.mark("t2");

    std::shared_ptr<bitmap_spec_t> active_wrapper(&active_current,
                                                  [](bitmap_spec_t*) { });

    size_t lowbound = graph_info.vertices_ / avg_degrees /
                      (sizeof(broadcast_msg_t) / sizeof(vid_t));
    bool is_broadcast_sparse =
        (modified < lowbound / 8) ||
        ((sizeof(vid_t) * modified > 500 * MBYTES) && (modified < lowbound));

    if (is_broadcast_sparse) {  // sparse mode
      using broadcast_ctx_t = mepa_bc_context_t<broadcast_msg_t>;
      auto active_view = create_active_v_view(
          incomings.partitioner()->self_v_view(), active_current);
      broadcast_message<broadcast_msg_t, vid_t>(active_view,
        [&](const broadcast_ctx_t& ctx, vid_t v_i) {
          ctx.send(broadcast_msg_t { v_i, coreness[v_i] });
        },
        [&](int, broadcast_msg_t& msg) {
          estimate[msg.v_i] = msg.coreness;
          return 0;
        });
    } else {  // dense mode
      // MPI_IN_PLACE is slow for MPI_Allgatherv
      int rc = MPI_Allgatherv(&coreness[offsets[partition_id]],
                              offsets[partition_id + 1] - offsets[partition_id],
                              get_mpi_data_type<vid_t>(), &estimate[0],
                              counts.data(), displs.data(),
                              get_mpi_data_type<vid_t>(), MPI_COMM_WORLD);
      CHECK_EQ(MPI_SUCCESS, rc);
      MPI_Barrier(MPI_COMM_WORLD);
    }
    double broadcast_cost = watch.show("t2");
    watch.mark("t2");

    // update coreness
    modified      = 0;
    need_modified = 0;
    active_next.clear();

    vid_t actives = 0;
    // XXX(ced) Here we do not follow Montresor '12's algorithm (update a
    // node's coreness until no update can be performed);
    // if communication is a problem, refactor here.
    actives = 0;
    coreness.reset_traversal(active_wrapper);
    #pragma omp parallel reduction(+:actives)
    {
      size_t chunk_size = 64;
      vid_t __actives = 0;
      while (coreness.next_chunk([&](vid_t v_i, vid_t* pcrns) {
        static thread_local std::vector<vid_t> __count;

        // calculate h-index
        auto adjs = incomings.neighbours(v_i);
        vid_t adjcnt = adjs.end_ - adjs.begin_;
        if (0 == adjcnt) {
          *pcrns = 0;
          ++__actives;
          return true;
        }

        vid_t est = std::min(adjcnt, estimate[v_i]);
        __count.assign(est + 1, 0);
        for (auto it = adjs.begin_; it != adjs.end_; ++it) {
          ++__count[std::min(est, estimate[it->neighbour_])];
        }

        vid_t sum = 0;
        for (vid_t i = est; i > 0; --i) {
          sum += __count[i];
          if (sum >= i) {
            if (i < *pcrns) {
              *pcrns = i;
              for (auto it = adjs.begin_; it != adjs.end_; ++it) {
                if (!active_next.get_bit(it->neighbour_)) {
                  // get_bit is not an atomic op
                  active_next.set_bit(it->neighbour_);
                }
              }
              ++__actives;
              __atomic_fetch_add(&modified, 1, __ATOMIC_RELAXED);
            }
            break;
          }
        }
        return true;
      }, &chunk_size)) { }
      actives += __actives;
    }
    MPI_Allreduce(MPI_IN_PLACE, &modified, 1, get_mpi_data_type<size_t>(),
                  MPI_SUM, MPI_COMM_WORLD);

    if (actives) {
      coreness.reset_traversal(active_wrapper);
      #pragma omp parallel
      {
        size_t chunk_size = 64;
        while (coreness.next_chunk([&](vid_t v_i, vid_t* pcrns) {
          estimate[v_i] = *pcrns;
          return true;
        }, &chunk_size)) { }
      }
    }

    active_next.sync();
    need_modified = active_next.count();
    ++epoch;

    if (0 == cluster_info.partition_id_) {
      LOG(INFO) << "epoch: " << epoch << ", modified: " << modified
                << ", need_modified: " << need_modified
                << ", broadcast[" << is_broadcast_sparse << "] cost: "
                << broadcast_cost << "ms"
                << ", update cost: " << watch.showlit_seconds("t2")
                << ", one epoch cost: " << watch.showlit_seconds("t1");
    }
    std::swap(active_next, active_current);
  } while (need_modified);

  if (0 == cluster_info.partition_id_) {
    LOG(INFO) << "calculation done, cost: " << watch.showlit_seconds("t0")
              << ", now saving result to hdfs...";
  }

  return coreness;
}

}}  // plato algo

#endif
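The per-vertex update above is the h-index operator: a vertex's new coreness estimate is the largest i such that at least i of its neighbours have an estimate of at least i, starting from the degrees. The following is a hedged, single-machine sketch of that iteration in plain Python (no Plato, MPI, or OpenMP; the function names `h_index` and `shell_index` are illustrative, not part of the library), showing the same fixed point the distributed version converges to.

```python
from collections import defaultdict

def h_index(values):
    # Largest h such that at least h of the values are >= h.
    values = sorted(values, reverse=True)
    h = 0
    for i, v in enumerate(values, start=1):
        if v >= i:
            h = i
        else:
            break
    return h

def shell_index(edges):
    # Undirected adjacency from an edge list.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Initialize each estimate with the vertex degree, as the C++ code does.
    core = {v: len(adj[v]) for v in adj}
    changed = True
    while changed:
        changed = False
        for v in adj:
            est = h_index([core[u] for u in adj[v]])
            if est < core[v]:  # estimates only ever decrease
                core[v] = est
                changed = True
    return dict(core)

# Triangle A-B-C with a pendant vertex D attached to A:
# the triangle is the 2-core, D has coreness 1.
print(shell_index([("A", "B"), ("B", "C"), ("A", "C"), ("A", "D")]))
```

The distributed version differs mainly in bookkeeping: only vertices whose neighbours changed are re-evaluated (the active bitmaps), and estimates are exchanged between partitions via broadcast or MPI_Allgatherv rather than read from shared memory.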
# C# Language Design Notes for Oct 3, 2018

## Agenda

1. How is the nullable context expressed?
2. Async streams - which interface shape?

# Nullability context

In order to accommodate "null-oblivious" legacy source code while it is under transition, we want to allow regional changes to the context for how nullability is handled.

There are actually two interesting "nullability contexts":

1. Annotation context: should an unannotated type reference in the context be considered nonnullable or oblivious?
2. Warning context: If null annotations are violated within the context, should a warning be given?

We have learned the hard way that we cannot use regular attributes due to circularities in binding. Essentially, modifying semantics with attributes is a bad idea not just from a "moral" perspective, but, as it turns out, a technical one: you need to do binding to understand the attributes. If the attributes themselves affect binding (as these would), well hmmm.

This is causing us to rethink the context switching experience. We have three general candidate ideas for describing regional changes to the nullability context:

1. A new modifier
2. A "fake" or pseudo-attribute
3. Compiler directives

We don't have even a strawman-level idea for good modifier keywords, so we are going to drop that one from the discussion. For the two others, there are strawman proposals below for the purposes of discussion.

## Pseudo-attributes

The idea is to keep the currently implemented attribute-based user experience, but discover the attributes specially, in an earlier phase of the compiler, rather than through normal binding (where it is too late).

Nullable annotations and warnings are controlled by the same attribute, `[NonNullTypes]`. It has a positional boolean parameter that controls annotations, and defaults to true. It has an additional named boolean parameter that controls warnings and defaults to true.
The attributes override each other hierarchically, and can be applied at the module, type and member levels at least. Here is an example where nullable annotations are turned off for some members, and warnings are turned off for another member:

``` c#
[module:NonNullTypes]

public class Dictionary<TKey, TValue> : ICollection<KeyValuePair<TKey, TValue>>, IEnumerable<KeyValuePair<TKey, TValue>>, IEnumerable, IDictionary<TKey, TValue>, IReadOnlyCollection<KeyValuePair<TKey, TValue>>, IReadOnlyDictionary<TKey, TValue>, ICollection, IDictionary, IDeserializationCallback, ISerializable
{
    public Dictionary() { }
    public Dictionary(IDictionary<TKey, TValue> dictionary) { }
    public Dictionary(IEnumerable<KeyValuePair<TKey, TValue>> collection) { }
    public Dictionary(IEqualityComparer<TKey>? comparer) { }
    [NonNullTypes(false)] public Dictionary(int capacity) { }
    [NonNullTypes(false)] public Dictionary(IDictionary<TKey, TValue> dictionary, IEqualityComparer<TKey> comparer) { }
    [NonNullTypes(false)] public Dictionary(IEnumerable<KeyValuePair<TKey, TValue>> collection, IEqualityComparer<TKey> comparer) { }
    public Dictionary(int capacity, IEqualityComparer<TKey>? comparer) { }
    [NonNullTypes(warn = false)] protected Dictionary(SerializationInfo info, StreamingContext context) { }
}
```

Some consequences of the attribute being "fake":

- Argument expressions would have to be literals - there's generally a trade-off between expressiveness and effort - maybe we wouldn't even be able to allow the named parameter
- What are the implications for the semantic model in the Roslyn API? Should you be able to tell the difference?
## Directives

The idea is to control nullable annotations and warnings as separate `#`-prefixed compiler directives that apply lexically to all source code until undone by another directive:

- `#pragma warning disable null` and `#pragma warning restore null` for turning warnings off and on (simply using the `null` keyword as a special diagnostic name for the existing feature)
- `#nonnull disable` and `#nonnull restore` for turning nullable annotations off and on (reusing the `disable` and `restore` keywords from pragma warnings)

Here is the same example as above, using the directives approach:

``` c#
public class Dictionary<TKey, TValue> : ICollection<KeyValuePair<TKey, TValue>>, IEnumerable<KeyValuePair<TKey, TValue>>, IEnumerable, IDictionary<TKey, TValue>, IReadOnlyCollection<KeyValuePair<TKey, TValue>>, IReadOnlyDictionary<TKey, TValue>, ICollection, IDictionary, IDeserializationCallback, ISerializable
{
    public Dictionary() { }
    public Dictionary(IDictionary<TKey, TValue> dictionary) { }
    public Dictionary(IEnumerable<KeyValuePair<TKey, TValue>> collection) { }
    public Dictionary(IEqualityComparer<TKey>? comparer) { }

#nonnull disable
    public Dictionary(int capacity) { }
    public Dictionary(IDictionary<TKey, TValue> dictionary, IEqualityComparer<TKey> comparer) { }
    public Dictionary(IEnumerable<KeyValuePair<TKey, TValue>> collection, IEqualityComparer<TKey> comparer) { }
#nonnull restore

    public Dictionary(int capacity, IEqualityComparer<TKey>? comparer) { }

#pragma warning disable null
    protected Dictionary(SerializationInfo info, StreamingContext context) { }
#pragma warning restore null
}
```

A consequence:

- Might require a compiler switch for the global opt-in - which we were happy to get rid of when we first adopted the attribute approach.
## Discussion

The purpose of the feature is to toggle *contextual information* for a region of code: a) whether unannotated type references in the region should be interpreted as nonnullable, and b) whether warnings should be yielded for violations of nullable intent within the region.

It is uncommon for attributes to affect context. Modifiers sometimes do (`unsafe`, `checked`/`unchecked`), and directives sometimes do (`#pragma warning`). Attributes usually directly affect the entity to which they are attached, rather than elements (declarations, expressions, etc.) within it.

It is also uncommon - and usually considered undesirable - for attributes to affect semantics. In fact, it is affecting semantics that is causing the problem in the first place, because attributes also *depend* on semantics.

Syntactically, directives stand out from the syntax, and generally indent to the left margin. They are *about* the source code, not part of it. Attributes are enmeshed with the code itself, and stand out less. Which signal do we want to send? "The rules of the game have changed in this region of source code" vs. "This class or member is special with regard to nullability"?

Attributes are syntactically limited in their *granularity* - they can only apply to certain nodes in the syntax tree. Directives are free to appear between any two tokens (as long as there are line breaks between them), including at both a smaller scale (in between statements and expressions) and a larger scale (around multiple types or even namespaces) than attributes. They can also modify top-level non-metadata constructs such as using aliases. Wherever we *infer* obliviousness for a specific declaration, directives would let you make that same declaration explicit (albeit awkwardly), whereas attributes would generally be unable to. On the other hand, in practice your desired granularity would often be at the member or type level, where attributes would do just fine.
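As an illustration of the granularity difference, a directive can scope a using alias to the oblivious context, which no attribute can target. This is a sketch only, using the strawman `#nonnull` syntax proposed above (so it compiles on no real compiler), and the alias name `StringMap` is hypothetical:

``` c#
// Sketch: strawman #nonnull directive applied to a using alias.
// Type references in the alias are interpreted as oblivious.
#nonnull disable
using StringMap = System.Collections.Generic.Dictionary<string, string>;
#nonnull restore
```

An attribute cannot express this, because a using alias is not an attribute target.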
The ability of directives to turn off and then on *at different syntactic nesting levels* is just weird, and hardly useful. Maybe the natural prevention attributes provide against such anomalies is a good thing. Then again, other directives already have this ability, and that doesn't seem to cause trouble in practice.

Inside of method bodies, a change of the warning context seems more likely than a change of the annotation context.

Attributes are the means by which we would have to encode the context in metadata regardless. So using attributes in syntax would be more direct than having to come up with a scheme for generating attributes from weirdly placed directives. While changing context around a using alias or in the middle of a member body wouldn't need to have direct metadata effects, the same is not the case for a context change inside, say, a constraints clause. We would need to either invent an encoding scheme or report an error in such places.

Directives would more likely allow editor-config integration.

Attributes would maybe complicate the semantic model.

Finally, "pseudo-attributes", recognized specially by the compiler, are a new concept in the language. Is it worth it? In practice, though, the seams are mostly not going to show: a user won't generally need to worry that it's not a regular attribute. Conversely, `#nonnull` would be a new directive: is it worth it?

## Conclusion

Given all this, we are in favor of the directive approach. We will start from the strawman and refine it over the coming weeks.

# Async streams

The idea behind async streams is to make them analogous to synchronous `IEnumerable<T>`s, allowing them to be consumed by `foreach` in asynchronous methods, and produced by `yield return`ing asynchronous iterator methods.
The natural shape of the `IAsyncEnumerable<T>` interface is therefore simply a straightforwardly "async'ified" version of `IEnumerable<T>`, like this:

``` c#
public interface IAsyncEnumerable<out T>
{
    IAsyncEnumerator<T> GetAsyncEnumerator();
}

public interface IAsyncEnumerator<out T> : IAsyncDisposable
{
    ValueTask<bool> MoveNextAsync();
    T Current { get; }
}

public interface IAsyncDisposable
{
    ValueTask DisposeAsync();
}
```

This works well. The main difference (other than `Async` occurring in names) is that `MoveNextAsync` is asynchronous, so you need to await it to learn whether there is a next value, and before `Current` can be assumed to contain that value. Just as the semantics of `foreach` can be described (and *is* described in the language spec) as an expansion to a while loop, `foreach await` is an almost identical expansion, except with some `await`ing going on.

It is also generally quite efficient. The use of `ValueTask` instead of `Task` allows implementers (including the ones produced from iterators by the compiler) to avoid any allocations during iteration, storing any state needed to track the asynchrony alongside the iteration state in the enumerator object that implements `IAsyncEnumerator<T>` (which can often in turn be shared with the implementation of `IAsyncEnumerable<T>` itself). Thus, the number of allocations needed to iterate an `IAsyncEnumerable<T>` will typically be zero, and occasionally (in the case of concurrent iterations) one (for a second enumerator that can't be shared with the enumerable).

One problem remains, performance-wise, and it is shared with the original synchronous `IEnumerable<T>`: it requires *two* interface calls per loop iteration, whereas with clever tricks that could be brought down to "usually one" in cases where the next value is most often available synchronously.
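The `foreach await` expansion over this interface shape might look roughly as follows. This is a hedged sketch, not the exact spec wording; `source` and the loop body are placeholders:

``` c#
// Approximate lowering of `foreach await (var item in source) { ... }`,
// analogous to the spec's while-loop expansion of `foreach`.
var e = source.GetAsyncEnumerator();
try
{
    while (await e.MoveNextAsync())   // await to learn whether there is a next value
    {
        var item = e.Current;         // only now can Current be assumed valid
        // ...loop body...
    }
}
finally
{
    await e.DisposeAsync();           // asynchronous disposal via IAsyncDisposable
}
```

Each trip around the loop makes one `MoveNextAsync` call and one `Current` access, which is where the "two interface calls per iteration" cost comes from.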
The best `IAsyncEnumerator<T>` design we can come up with that satisfies those properties is the following:

``` c#
public interface IAsyncEnumerator<out T> : IAsyncDisposable
{
    ValueTask<bool> WaitForNextAsync();
    T TryGetNext(out bool success);
}
```

Here the `TryGetNext` method can be used to loop synchronously through all the readily available values. Only when it yields `false` do we need to fall back to an outer loop that awaits calls to `WaitForNextAsync()` until more data is available.

In the degenerate case where `TryGetNext` always yields `false`, this is not faster: you fall back to the outer loop and always have two interface calls. However, the more often `TryGetNext` yields `true`, the more often we can stay in the inner loop and skip an interface call.

The interface has several drawbacks, though:

- It is meaningfully different from the synchronous `IEnumerator<T>`
- `TryGetNext` is unidiomatic, in that it switches its success result and its value result in order to allow the type parameter to be covariant, making it annoying to consume manually
- The meaning of the two methods is less intuitively clear
- The double-loop consumption is more complicated

It's a dilemma between simple and fast. The performance benefit of the fast version can be up to 2x, when most elements are available synchronously and the work in the loop is small enough to be dominated by the interface calls. But it really is a lot harder to use manually. While that doesn't matter when you use `foreach` and iterators, there are still enough residual scenarios where you really need to produce or consume the interface directly. An example is the implementation of `Zip`.

There are ways to have the two approaches coexist. If we supported both of them statically, we'd end up exploding e.g. async LINQ with vast amounts of surface area. That'd be terrible and confusing. But there's also a dynamic approach.
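The "double loop" consumption the drawbacks list refers to might look like this. This is a hedged sketch against the proposed two-method shape; `e` is assumed to be such an enumerator, and the loop body is a placeholder:

``` c#
// Outer loop: await until more data is (or may be) available.
while (await e.WaitForNextAsync())
{
    // Inner loop: drain synchronously available values with a single
    // interface call each, no awaiting.
    while (true)
    {
        var item = e.TryGetNext(out bool success);
        if (!success) break;   // nothing ready; fall back to the awaiting loop
        // ...use item...
    }
}
```

When values are usually ready synchronously, execution stays in the inner loop at one interface call per element; only the occasional miss pays for the awaited outer call.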
Already today, many LINQ query operators are implemented to do type checks on their inputs to see if they support a faster implementation. For instance, `Count` on an array can just access the `Length` instead of iterating to count the elements.

Similarly, we could have a fast async enumerator interface that generally lives under the hood, but implementations of e.g. LINQ methods could type-check for the fast interface and use it if applicable. The code generated for `foreach` could even do that too, though that probably leads to complicated code. We can also generate iterators that implement both interfaces.

## Conclusion

We want to stick with the simple `IEnumerator<T>`-like interface in general. We'll keep the "fast under the hood" option in our back pocket and decide after the first preview whether to apply it, but the surface area that people see and trade in should be the simple, straightforward "port" of `IEnumerator<T>` to "async space".