Q: Package requirements openglucose

I am trying to install openglucose (https://blogs.gnome.org/xclaesse/2014/09/08/openglucose-again/#comments) on my Ubuntu machine, but I have issues with some dependencies. I don't know how to install the requested packages below. Can someone give some advice on how to do this? Thank you very much for your help.

configure: error: Package requirements ( glib-2.0 >= 2.40.0 gobject-2.0 >= 2.40.0 gio-2.0 >= 2.40.0 gusb >= 0.1.6 gtk+-3.0 = 3.10 webkit2gtk-3.0 >= WEBKIT_REQUIRED ) were not met: No package 'gusb' found No package 'gtk+-3.0' found No package 'webkit2gtk-3.0' found Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables OPENGLUCOSE_CFLAGS and OPENGLUCOSE_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details.

A: To get the necessary development files, you need to install libgusb-dev, libgtk-3-dev, and libwebkit2gtk-3.0-dev. Running

sudo apt-get install libgusb-dev libgtk-3-dev libwebkit2gtk-3.0-dev

will install the necessary packages for you.
Friday, November 6, 2009

Today on Kresta - Nov. 6, 2009
Talking about the "things that matter most" on Nov. 6

4:00 – God's Battalions: The Case for the Crusades
In God's Battalions, award-winning author Rodney Stark takes on the long-held view that the Crusades were the first round of European colonialism, conducted for land, loot, and converts by barbarian Christians who victimized the cultivated Muslims. To the contrary, Stark argues that the Crusades were the first military response to unwarranted Muslim terrorist aggression. Stark reviews the history of the seven major Crusades from 1095 to 1291, demonstrating that the Crusades were precipitated by Islamic provocations, centuries of bloody attempts to colonize the West, and sudden attacks on Christian pilgrims and holy places. Although the Crusades were initiated by a plea from the Pope, Stark argues that this had nothing to do with any elaborate design of the Christian world to convert all Muslims to Christianity by force of arms. Given current tensions in the Middle East and terrorist attacks around the world, Stark's views are a thought-provoking contribution to our understanding and are sure to spark debate.

4:40 – Iran Hostage Crisis Begins – 30 Years Ago This Week
It was 30 years ago this week that the Iran hostage crisis began. We talk with Mark Bowden, author of the definitive history of the Iran hostage crisis, America's first battle with militant Islam. On November 4, 1979, a group of radical Islamist students, inspired by the revolutionary Iranian leader Ayatollah Khomeini, stormed the U.S. embassy in Tehran. They took fifty-two Americans hostage, and kept nearly all of them hostage for 444 days. In Guests of the Ayatollah, Bowden tells this sweeping story through the eyes of the hostages, the soldiers in a new special forces unit sent to free them, their radical, naïve captors, and the diplomats working to end the crisis.
Bowden takes us inside the hostages' cells and inside the Oval Office for meetings with President Carter and his exhausted team. We travel to international capitals where shadowy figures held clandestine negotiations, and to the deserts of Iran, where a courageous, desperate attempt to rescue the hostages exploded into tragic failure. Bowden dedicated five years to this research, including numerous trips to Iran and countless interviews with those involved on both sides.

5:00 – Life After Death: The Evidence
Unlike many books about the afterlife, Life after Death makes no appeal to religious faith, divine revelation, or sacred texts. Drawing on some of the most powerful theories and trends in physics, evolutionary biology, science, philosophy, and psychology, Dinesh D’Souza shows why the atheist critique of immortality is irrational and draws the striking conclusion that it is reasonable to believe in life after death. He concludes by showing how life after death can give depth and significance to this life, a path to happiness, and reason for hope.

5:40 – Fort Hood Shooter – Mainstream Media Refuses to Discuss Mounting Evidence of Radical Islam
Major Nidal Malik Hasan, a U.S. Army psychiatrist, murdered at least twelve people and wounded twenty-one inside Fort Hood in Texas yesterday, while, according to eyewitnesses, "shouting something in Arabic while he was shooting." One man says his daughter heard the shooter exclaim "Allah Akbar" as he opened fire. Yesterday morning, neighbors said, Hasan handed out Qurans and donated his furniture to anyone who would take it. He was also a member of a Homeland Security panel advising President Obama. We look at all of the evidence with Robert Spencer.
# RCSid $Id: README.txt,v 1.2 2018/12/04 00:32:20 greg Exp $

ref/			Subdirectory containing reference outputs for comparison
Makefile		Scene testing dependencies
README.txt		This file
test.txt		Some text for use in *text primitives
render.opt		A few global rendering options used by most rad input files
fish.vf			View parameters for standard fisheye view away from window
inside.vf		View parameters for default interior view towards window
combined.rif		A combined scene file including every primitive
dielectric.rif		Test of dielectric material in window and mesh instance
glass.rif		Test of glass pane in window and material mixture
inst.rif		Test of octree instances and ashik2 material
mesh.rif		Test of Radiance triangle mesh instances and view types
mirror.rif		Test of mirror (virtual sources) and antimatter
mist.rif		Test of mist (participating medium) in combination with spotlight
mixtex.rif		Test of BSDF types and photon-mapping
patterns.rif		Test of various pattern types and alias behavior
prism1.rif		Test of prism1 type (single virtual source)
prism2.rif		Test of prism2 type (dual virtual sources)
tfunc.rif		Test of BRTDfunc, transfunc and mkillum
trans.rif		Test of anisotropic types, plasfunc and metfunc, mixtures
trans2.rif		Test of trans2 and mkillum (again)
basic.mat		Basic materials used by all models
chrome.mat		Chrome material used in mesh model
gold.mat		Gold material used in prism1 and prism2 models
mixtex.mat		Nine test materials used in mixtex model
patterns.mat		Nine test materials and aliases used in patterns model
antimatter_portal.rad	Cut-away hole seen in mirror model
ball_in_cube.rad	Example of dielectric and interface materials
ballcompare.rad		Combo test of *func materials, mixtures, and text for trans model
blinds.rad		BSDF from XML file with geometry placed by pkgBSDF
bubble_in_cube.rad	Example of colored dielectric and interface
closed_end.rad		Cap for far end of space when there is no window
combined_scene.rad	Combination of all our test scenes without front caps
constellation.rad	A set of ten simple light bulbs in a circle
dielectric_pane.rad	Dual-surface dielectric window for testing
diorama_walls.rad	The walls of our deep test room, minus both ends
disks.rad		A generic set of nine disks for material testing
front_cap.rad		Wall away from window (used except in combined scene)
glass_ill.rad		Recorded output from mkillum run of glass window
glass_pane.rad		Bluish glass pane for window
glowbulb.rad		Test of glow material with non-zero influence radius
gymbal.rad		Example of BRTDfunc type, cylinders and tubes
illum_glass.rad		Input used to create glass_ill.rad
illum_tfunc.rad		Mkillum input used in tfunc model
illum_trans2.rad	Mkillum input used in trans2 model
mirror.rad		Example of mirror on wall
mist.rad		Example of mist (participating medium) inside test space
porsches.rad		Two instances of porsche.octf with original material and ashik2
prism1.rad		Example of rectangular prism1 pane in window
prism2.rad		Example of rectangular prism2 pane in window
rect_fixture.rad	A single rectangular light fixture with distribution
rect_opening.rad	Rectangular opening in window wall
saucer.rad		A shaped disk for material testing
sawtooth.rad		Object with sawtooth profile for genBSDF input (not referenced)
spotcones.rad		Cones enclosing spotlight regions for testing mist type
spotlights.rad		Color spotlights
sunset_sky.rad		Captured sky with disk covering sun position
torus.rad		Psychedelic donut testing material mixtures and glow
trans_pane.rad		A yellowish diffusing test of trans type
vase.rad		Instance of vase triangle mesh
woman.rad		Instance of "woman" triangle mesh
vase.rtm		Vase triangle mesh with local texture map
woman.rtm		Woman triangle mesh with local texture map
porsche.octf		Frozen octree of Porsche with materials
aniso.cal		Ward-Geisler-Moroder-Duer anisotropic reflection model
bumpypat.cal		Used by texdata type in patterns model
climit.cal		Standard color handling for local texture map used by vase.rad
diskcoords.cal		Coordinates used for Shirley-Chiu BSDF data
fisheye.cal		Mapping for captured sunset image
maxang.cal		Used for transdata material in mixtex.mat
prism.cal		Standard angle calculation for prismatic glazing (prism1 & prism2)
flower.hdr		Image of flower used in various *pict types
sunset.hdr		Captured sunset for environment map out window
vase.hdr		Vase local texture map
flower.dat		Flower.hdr converted to a 10x10 gray data file
glass_illB.dat		Distribution computed by mkillum for glass_ill.rad
glass_illG.dat		Distribution computed by mkillum for glass_ill.rad
glass_illR.dat		Distribution computed by mkillum for glass_ill.rad
rect_fixture.dat	Rectangular light fixture output distribution
sawtooth.dat		BRDF of sawtooth profile material converted to a data file
tcutoff.dat		Used for transdata material in mixtex.mat
Mathematical model of pyruvate kinase of chicken erythrocytes. The PK of mammals differs in molecular structure and kinetic properties from the PK of avian hepatocytes (1). Our studies show that the same isoenzyme of PK occurs in both red blood cells and hepatocytes. The PK of human erythrocytes belongs to the L-type, that of chicken erythrocytes to the K-type. Both isoenzymes are characterized by cooperative regulation. Allosteric behavior is more pronounced for the chicken enzyme than for that of human erythrocytes. Chicken erythrocyte PK is activated by serine and exhibits a high FDP affinity.
Malignant lymphoma of the tonsil in a patient with Behçet's disease. Several connective tissue diseases such as rheumatoid arthritis and polymyositis are associated with cancer. In contrast, cancer is rarely reported in patients with Behçet disease. We report a case of lymphoma during the course of Behçet disease. Etiopathogenic factors are discussed. A 46-year-old man with a 14-year history of Behçet disease was diagnosed with non-Hodgkin malignant lymphoma of the right tonsil. He met international criteria for Behçet disease, which manifested as refractory oral ulcers requiring dapsone treatment. He achieved a complete remission of the lymphoma after three chemotherapy courses and local radiation therapy (45 Gy). At last follow-up 4 years later, he was still in complete remission.
Response to single and divided doses of Shiga toxin-1 in a primate model of hemolytic uremic syndrome. Postdiarrheal hemolytic uremic syndrome is caused by Shiga toxin (Stx)-producing Escherichia coli. It was shown previously that the baboon, like the human, has glycolipid receptors for Stx in the gut and the kidney and that a single 50- to 200-ng/kg intravenous dose of purified Stx-1 results in thrombocytopenia, hemolytic anemia, and renal thrombotic microangiopathy. For further characterization of factors that modulate disease expression, the baboon's response to the intravenous administration of 100 ng/kg Stx-1 given either rapidly as a single bolus or slowly as four 25-ng/kg doses at 12-h intervals was compared. Animals that received the Stx-1 as a single dose developed thrombocytopenia, schistocytosis, and acute renal failure. Urinary but not plasma tumor necrosis factor-alpha concentrations rose significantly by 6 h and then declined rapidly. Urinary and plasma interleukin-6 concentrations rose later. Glomeruli showed reduced patency of capillary loops, fragmented red blood cells, fibrin and platelet microthrombi, necrosis and detachment of endothelial cells, and accumulation of flocculent material in subendothelial spaces. Damage to tubular epithelium and peritubular capillary endothelium also was seen. Animals that received four divided doses of Stx-1 developed no clinical or histologic features of hemolytic uremic syndrome. It is concluded that in the primate model, disease expression is modulated by the rate of Stx administration, and it is speculated that in the human, the rate of Stx absorption from the gut is one determinant of disease severity.
Q: How can I examine a (simple) server side application on my computer?

I'm developing an HTTP server-side application and have only three questions left to finish the program. I'm a web developer, so I'm looking for advice from a network programmer (the programming language isn't important, only the concepts). My program is a simple Windows application that runs on a proxy server and implements an algorithm that caches the most-browsed website. Questions: 1. Do I need to create log files and access-log files to do that? 2. I'm developing the app on my computer using IP address 127.0.0.1 and port 8080, and it runs; will it also run on the actual proxy server? 3. How can I examine the application on the virtual-machine Windows server (somebody told me to install an Apache server)? Briefly, the project is: an HTTP listener listens to the traffic and stores the request URLs; duplicates are removed from the URLs, because we want to count the internal URLs (which have different extensions) of the most-browsed site; all parent URLs with different extensions are counted; the most-visited URL is found; and the page at that URL is cached. Is my idea sound? Thank you, best regards.

A: Answers: 1. No, you don't necessarily need to create a file; you can store your data in memory if you want. 2. If your proxy server is set up like your development machine, yes; if it is significantly different, your program will not run. 3. One way is to log in to the virtual-machine Windows server directly; another is to make your application write its data to a place that is more convenient. And yes, your idea is sound. Good luck with your project.
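The counting steps described in the question (store URLs, deduplicate, count per site, pick the busiest) can be sketched in Python. This is only an illustration of the logic, not part of the asker's Windows program; the function name and the sample URLs are made up for the example.

```python
from collections import Counter
from urllib.parse import urlparse

def most_browsed_site(request_urls):
    """Return (host, distinct_url_count) for the most-browsed site.

    Mirrors the question's steps: deduplicate the request URLs,
    then count the distinct internal URLs per host and return the
    host with the largest count.
    """
    distinct = set(request_urls)                        # step: remove duplicates
    per_host = Counter(urlparse(u).netloc for u in distinct)
    return per_host.most_common(1)[0]                   # busiest host first

# Hypothetical traffic: example.com has two distinct pages, example.org one.
urls = [
    "http://example.com/index.html",
    "http://example.com/about.html",
    "http://example.com/index.html",   # duplicate, dropped by the set
    "http://example.org/",
]
print(most_browsed_site(urls))  # -> ('example.com', 2)
```

A real proxy would then fetch and store the pages under the winning host; this sketch only covers the selection step.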
Easing The Mid-Life Squeeze
Tips to Alleviate the Pressures of Life-Changing Moments
By Paul Jarvis, CFP®
AreaVoices Financial Planning Blog

The mid-life crisis: it happens to almost everyone. That shiny new convertible at the dealership catches your eye; your college roommate arrives at the reunion with a much younger, more attractive spouse; or that vow you made to never consider plastic surgery now seems like a good idea. There are other kinds of mid-life crises, too – more common, and less spectacular than changing partners, profiles, or preferences. Unfortunately, these crises can be just as expensive, if not more so. What is it about middle age that brings on a full-blown identity crisis, not to mention all of the stress and expenses? So often we see individuals at an inflection point in their lives, looking for direction. We find that having an action plan makes decision-making easier, leading to the best possible financial outcome. Here are some simple but essential steps to help you get your finances ready for any mid-life scenario you might face.

PLAN: It is human nature to deal with things as they come, one at a time. The financial planning process helps us become aware of the “opportunity costs” of each financial and life decision we make. For this reason, it’s particularly important that mid-lifers – 40- or 50-year-olds – take a holistic approach in their planning process to build a comprehensive plan that will take into account all of the individual’s or family’s goals.

PRIORITIZE: Identifying and prioritizing these goals is one of the first and most important steps of financial planning. These priorities help determine where trade-offs may be necessary when shortfalls are identified in the plan.

PREPARE: It is important to address the potential financial costs of sudden or catastrophic events – a sudden death, property loss, or even a major market meltdown.
Covering yourself with the proper insurance coverage and risk management techniques should be one of a mid-lifer's first and non-negotiable priorities. Without taking these steps, all life goals may become wishful thinking.

PUT A CFP® PROFESSIONAL IN THE MIDDLE: Many times we may feel responsible for the financial obligations of family members older and younger than ourselves, as it can be too hard to say no to our loved ones. In these situations, the role of a facilitator – such as a CERTIFIED FINANCIAL PLANNER™ professional – can be invaluable, and can help a family mitigate the financial burden that might otherwise fall solely on the mid-lifer.

ABOUT: Paul Jarvis is a financial planner who loves challenging the status quo. He believes his clients are best served when they play an active role in the design and creation of their personal plans and strategies. Learn more about him and his expertise under About Me.
Magical Question Fun Time with Laura Walker This week, John sits down with the newly-minted Laura Walker (formerly Crocker), a Canadian Junior Champion, World silver and bronze medallist in Juniors and Mixed Doubles, respectively, and the first-ever winner of the Curling World Cup in Mixed Doubles, with her partner Kirk Muyres. Laura’s set to debut a brand-new squad this season, featuring a team of seasoned vets in Cathy Overton-Clapham, Lori Olson-Johns, and Laine Peters, as well as adding events to her Mixed Doubles calendar. Welcome to Magical Question Fun Time, the Curling Canada feature in which comedian John Cullen sits down with your favourite curlers for interviews like you have never seen. Each interview will feature eight questions: five standard questions for each curler, two questions specific to the featured curler, and one question provided by the previous curler interviewed. Laura Walker watches her rock during action at the 2018 World Cup of Curling in Suzhou, China (WCF/Céline Stucki photo) 1. What’s the nicest shot you’ve ever been a part of? Laura Walker: I have to say that there’s been quite a few, but I think because of the situation, I have to choose a shot from when I played juniors with Rachel Homan. I was the second, and we were playing the Canadian Junior final. It was a tight game, I think the sixth end (Editor’s note: it was), and we were without hammer. Rachel could have thrown a guard and it would’ve been a really easy force, but she decided she wanted to play a runback to lie three. John Cullen: And usually once Rachel decides something… LW: She’s doing it. Exactly. It was funny because I remember we call a timeout, and out comes Earle Morris, an absolute legend of a coach, and the first thing he says is, “throw the guard”. Rachel wasn’t having any of that. She’s one of those players, once she gets that utter determination in her eyes, you just know she’s going to make it. It’s what makes her so good. 
I would have never told Earle I wasn’t gonna do something. [laughs] Needless to say, she made it, we stole three, and that was a huge reason why we won. Laura (Crocker) Walker (left) and Emma Miskew (right) bring the stone into the house as skip Rachel Homan shouts to her front end at the 2010 Canadian Junior Curling Championships in Sorel-Tracy, Que. (Curling Canada/Michael Burns photo) 2. If there was an action figure made of you, what non-curling accessory would it come with? LW: I would come with two: a cat under one arm, and a coffee cup in the other hand. Right now with our schedule, Geoff (Walker, lead for Brad Gushue and Laura’s husband) and I can’t have a cat, so we foster them. When we’re home for two weeks at a time, we’ll foster cats, but it’s just so impossible not to have one. JC: I know some people who foster cats, and they all say it’s impossible not to keep them. LW: That’s what happened with our last cat who passed away about a year ago. She had no home, and we just fell in love with her. We tried to get her to stay with Geoff’s sister on her farm, but she just wasn’t an outdoor cat and so we kept her. She was so sweet. JC: And are you a weird coffee person? LW: Nah, just a straight coffee person. Just give me a cup of whatever you have. Maybe the odd latté, but I’m not going too crazy. And extra hot. Nothing worse than a cup of coffee that cools down too fast. 3. If you were forced to rob a bank, which two curlers—you can’t choose more than one teammate—would you choose to be on your squad, and what role would you play? LW: Well I definitely wouldn’t pick any of my teammates. [laughs] Laine would be way too scared, the other two just wouldn’t be great either. I’m actually gonna take two teammates, just not of mine. I’m going with Ben Hebert and Colton Flasch. JC: A fearsome duo, to be sure. LW: Well that’s the thing. They’re both huge and scary but secretly really good guys. I know they would save me if I got in trouble. 
[laughs] JC: I like that you say it’s secret that they’re good guys, like most people don’t really think they are. [laughs] LW: Well, maybe secret was the wrong word, but I think people just think they’re big guys, they’re good at sweeping, maybe they’re a bit scary. And Colton’s really quiet, but he’s a great guy. Maybe not the best dancer, though. [laughs] In China after the World Cup was over, we were dancing and he dropped me. I fell on top of a broom bag. [laughs] We need some work. LW: Well, I was reading Kaitlyn Jones’ interview from last season, and I was also a hostess at Boston Pizza, and can confirm we got the short end of the stick. I remember one time I spent an entire shift cleaning crayon off of high chairs. JC: A whole shift? It doesn’t seem that hard. LW: Well, I’ll come to your house with some crayons, colour on your stuff, and then we’ll see how you do. [laughs] The thing is, though, I just generally won’t do things I don’t like. So if I ever had a really bad job—Boston Pizza wasn’t that bad—I would just quit and do something else. Or try and find a different position in the company. JC: Have you quit something because of that? LW: Not really, but I did work for some fitness centres when I first moved to Edmonton, and realized the centres weren’t as busy in the morning. So I got switched to mornings so I could spend more time looking at Pinterest, you know…doing things I like. [laughs] 5. What’s a stupid thing you incorrectly believed was true for a long time? LW: So this is really funny because I knew you’d be asking this, and my first thought was, “I’m not gullible, so there’s no answer to this question.” Then I texted my parents. JC: The parents always bring the goods. LW: This is crazy. I actually believed this thing was true until this morning when I called my parents. So uhh…until I was 27, I believed my hamster lived for four years and was the same hamster. 
It turns out my pet hamster when I was kid died, and my parents replaced it without telling me. JC: [laughs] Wait, what? LW: Apparently the hamster died while I was at school, and my parents knew I would be devastated. So they went out and got me a new hamster before I got home. But here’s the thing: the new hamster was A DIFFERENT COLOUR. And I believed it. Until now. I’m 27. [laughs] JC: [laughs] Oh my God. This is unbelievable. How did they convince you it was the same hamster? LW: They told me that hamsters change colour when they grow up, and so they told me that he just finally changed his colour that day, because he was an adult now. I still can’t believe I’m saying this right now, and I never figured it out once I grew up. JC: But hamsters just live for like, two years on average. Did you never think like, “wow, I’ve got a super hamster here”? LW: That’s just it, I would BRAG to my friends about it. “Oh, your hamster died? Well, MY hamster has been alive for four years!” [laughs] Oh man. 6. Now we’re on to the Laura Walker-specific questions, and I think this one is pretty obvious and one that’s on everyone’s mind. You got married this summer. Why not take Geoff’s name but keep your own, and become Laura Crocker-Walker? LW: Well I feel like you saying that out loud, you just answered your own question. It sounds terrible. JC: What if I told you that I think you’re wrong, and it sounds amazing? LW: Well you’re not the person who has to live with it. It’s easy for you to say. I will say that my phone bill says it right now, and so does my Twitter handle. I did think that before I met Geoff, I would probably want to keep my name somehow, but with his last name being Walker, it just wasn’t possible. Crocker-Walker. It works, right? (Curling Canada/Michael Burns photo) JC: What if he did the 21st Century thing and also changed his last name, so you’d be Laura Crocker-Walker and he’d be Geoff Walker-Crocker? LW: If Geoff did that, I would do it. But he wouldn’t. 
He’s from Beaverlodge, Alberta. He’d never be able to go back there if he took my name. [laughs] 7. This one is a bit of a weird one. I know one of your closest friends is Dana Ferguson (from Chelsea Carey’s team), and last year, you surprised her with a trip to Las Vegas for her 30th birthday. Now, I’ve heard there’s a bit of a Hangover-esque story where you were lost to the group for a few hours? LW: [laughs] Oh no. I can’t tell this story on the Internet. JC: [laughs] Oh, come on. My readers will love it. LW: Whew. Ok. So we went out one night in Vegas, and coming back, we decided to get some food before bed, so we went to a restaurant. I had to go to the bathroom, so I got in there, and I don’t know exactly what happened, but there was a couch in there. You know how some fancy restaurants will have a couch in the bathroom? Well yeah. I fell asleep on it. JC: [laughs] In the bathroom? LW: Yes. In the bathroom. I’m still not sure to this day what made me decide to get on it. I woke up some time later and I guess the girls just assumed I went back to the hotel room, so they left. When I got out of the bathroom, they were gone, the restaurant was closed except for a few employees cleaning up, and my purse was gone. The girls had taken it with them. So I had to go to the hotel, beg the front desk to let me up to my room—I think they had to call Dana to confirm I wasn’t, like, a homeless person trying to find a room for the night—and they finally did it. I think all in all it was about an hour or so I was away, but it was rough. [laughs] 8. Now this question comes in from Kaitlyn Jones, and it’s a weird one. LW: It’s SO weird. JC: I’m sorry. I tried to tell her not to do it. But she wanted to, so here we are. Would you rather lick mustard off of a hobo’s foot, or bathe in ketchup for a year? LW: Ugh. Both options are so bad. Okay. If you could guarantee I wouldn’t get any diseases from the hobo’s foot, I would probably just do that and get it over with. 
The problem is, I shower a LOT. I can’t nap, so on the road for curling, I tend to shower before every game, just to reset and get ready to play. If I had to shower in ketchup like two or three times a day, I would be depressed for sure. JC: And smelly. Old ketchup is an awful smell. LW: It really is so bad. I like that we’re taking this seriously, considering every angle. JC: We have to! LW: If the foot was diseased though, I mean, I’d have to go ketchup. Hopefully we can find a healthy hobo so I can get the mustard thing over with. JC: Maybe a designer hobo, you know like one from a TV show? That seemed like a plotline in every 90s teen sitcom, that one of the teens just befriends a hobo for some reason, and they’re always super good-looking. JC: Thank you Laura. Next on the docket we actually have your mixed doubles partner, Kirk Muyres. Anything you’d like to ask him? LW: Ooooh. I have some dirt on him you can use for questions 6 and 7, but we can talk about that later. I’ll ask him: if you were forced to only choose one, would you rather play Men’s or Mixed Doubles for the rest of your career? I’m curious to see what he says. “Would you rather play Men’s or Mixed Doubles for the rest of your career? I’m curious to see what he says.” (WCF/Richard Gray photo) JC: Awesome! Thanks Laura! Best of luck with your new team this season, and with Kirk!
The Potholder Cafe is a restaurant located in Long Beach, California at 3700 East Broadway. This restaurant serves bowl of granola, chunky cluck, dreamer, denver, healthy tuna or turkey, j.r.’s flagship, and the couch. They also serve chicken fried steak, kyla’s egg sandwich, chicken burger, two... Delightful Crepes Cafe is a restaurant located in Long Beach, California at 1190 Studebaker Rd. This restaurant serves bowl of soup, fettuccine alfredo, la riviera, turkey club, vegetarian, cobb salad, and garlic bread. They also serve pasta salad, crêpe florentine, the italian, nutella crêpe, vegetarian penne, classic crêpe, and the delightful crêpe. They are open every day except Monday. Cafe Ambrosia is a restaurant located in Long Beach, California at 1923 East Broadway. They are open every day of the week. Chicken breast stuffed with fresh spinach, three cheeses and herbs. Top off this fabulous meal with one of our Greek wines. Read... The Local Spot is a restaurant located in Long Beach, California at 6200 Pacific Coast Highway. They are open every day of the week. Additional omelette items available: avocado $2.00, any meat $2.00, any veggie, sour cream or cheese (cheddar, jack, swiss, pepper jack or feta) $1.00. Made with 3 farm fresh eggs and topped with cheddar cheese. Served with home fries, brown rice or fruit,... Village Cafe is a restaurant located in Long Beach, California at 4148 North Viking Way. This restaurant serves ham, oatmeal, crispy chicken burger, huevos con chorizo, tuna, beefeater, and denver. They also serve chili size, canadian, sliced tomatoes, soup and 1/2 sandwich, international, hangover, and fries. They are open every day of the week. Berlin Bistro is a restaurant located in Long Beach, California at 420 East 4th Street. They are open every day of the week.
Berlin Bistro is perfectly located in the bohemian heart of the East Village Arts District in Long Beach, CA, and serves up a unique blend of healthy cuisine and bold brews in a contemporary bistro setting.

Royal Cup Cafe is a restaurant located in Long Beach, California at 994 Redondo Avenue. They are open every day of the week. Royal Cup Cafe is a local coffee shop in Long Beach and Torrance, and a favorite stop for college students, artists, and business lunches. We are proud of what the Royal Cup has become, and more importantly of how the community...
Q: Filter NSFileManager contents for a list of file 'patterns' using NSPredicate

I have an array of strings to use as a filter:

@[@"*.png", @".DS_Store", @"Foobar", @".jpg"]

How do I use this pattern list to filter out the contents of a folder using NSPredicate? This is what I have so far:

NSArray *contents = [fileManager contentsOfDirectoryAtPath:fullPath error:NULL];
NSArray *filter = @[@"*.png", @".DS_Store", @"Foobar", @".jpg"];
NSArray *filtered = [contents filteredArrayUsingPredicate:[NSPredicate predicateWithFormat:@"((pathExtension IN[cd] %@) OR (lastPathComponent IN[cd] %@))", filter, filter]];

But this doesn't give me the result I want. For example, png files are not filtered, because IN compares against the literal string @"*.png" rather than the bare extension. I also couldn't make it case-insensitive, so foobar is not filtered out.

A: Take a look at the Predicate Programming Guide. Match bare extensions (without the "*." prefix) with IN, and use the [c] modifier for case-insensitive matching:

NSArray *contents = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:@"/Users/new/Desktop/" error:nil];
NSArray *extensions = [NSArray arrayWithObjects:@"jpg", @"png", nil];
// [c] makes the comparisons case-insensitive, so "foobar" is excluded as well
NSArray *files = [contents filteredArrayUsingPredicate:[NSPredicate predicateWithFormat:@"!(pathExtension IN[c] %@) AND !(self LIKE[c] %@)", extensions, @"Foobar"]];
/**
 * Copyright (c) 2015-present, Facebook, Inc.
 * All rights reserved.
 *
 * This source code is licensed under the BSD-style license found in the
 * LICENSE file in the root directory of this source tree. An additional grant
 * of patent rights can be found in the PATENTS file in the same directory.
 *
 * @providesModule NetInfo
 * @flow
 */
'use strict';

var Map = require('Map');
var NativeModules = require('NativeModules');
var Platform = require('Platform');
var RCTDeviceEventEmitter = require('RCTDeviceEventEmitter');
var RCTNetInfo = NativeModules.NetInfo;

var DEVICE_REACHABILITY_EVENT = 'networkDidChange';

type ChangeEventName = $Enum<{
  change: string;
}>;

type ReachabilityStateIOS = $Enum<{
  cell: string;
  none: string;
  unknown: string;
  wifi: string;
}>;

type ConnectivityStateAndroid = $Enum<{
  NONE: string;
  MOBILE: string;
  WIFI: string;
  MOBILE_MMS: string;
  MOBILE_SUPL: string;
  MOBILE_DUN: string;
  MOBILE_HIPRI: string;
  WIMAX: string;
  BLUETOOTH: string;
  DUMMY: string;
  ETHERNET: string;
  MOBILE_FOTA: string;
  MOBILE_IMS: string;
  MOBILE_CBS: string;
  WIFI_P2P: string;
  MOBILE_IA: string;
  MOBILE_EMERGENCY: string;
  PROXY: string;
  VPN: string;
  UNKNOWN: string;
}>;

/**
 * NetInfo exposes info about online/offline status
 *
 * ```
 * NetInfo.fetch().done((reach) => {
 *   console.log('Initial: ' + reach);
 * });
 * function handleFirstConnectivityChange(reach) {
 *   console.log('First change: ' + reach);
 *   NetInfo.removeEventListener(
 *     'change',
 *     handleFirstConnectivityChange
 *   );
 * }
 * NetInfo.addEventListener(
 *   'change',
 *   handleFirstConnectivityChange
 * );
 * ```
 *
 * ### iOS
 *
 * Asynchronously determine if the device is online and on a cellular network.
 *
 * - `none` - device is offline
 * - `wifi` - device is online and connected via wifi, or is the iOS simulator
 * - `cell` - device is connected via Edge, 3G, WiMax, or LTE
 * - `unknown` - error case and the network status is unknown
 *
 * ### Android
 *
 * Asynchronously determine if the device is connected and details about that connection.
 *
 * Android Connectivity Types
 * - `NONE` - device is offline
 * - `BLUETOOTH` - The Bluetooth data connection.
 * - `DUMMY` - Dummy data connection.
 * - `ETHERNET` - The Ethernet data connection.
 * - `MOBILE` - The Mobile data connection.
 * - `MOBILE_DUN` - A DUN-specific Mobile data connection.
 * - `MOBILE_HIPRI` - A High Priority Mobile data connection.
 * - `MOBILE_MMS` - An MMS-specific Mobile data connection.
 * - `MOBILE_SUPL` - A SUPL-specific Mobile data connection.
 * - `VPN` - A virtual network using one or more native bearers. Requires API Level 21
 * - `WIFI` - The WIFI data connection.
 * - `WIMAX` - The WiMAX data connection.
 * - `UNKNOWN` - Unknown data connection.
 *
 * The remaining connectivity states are hidden by the Android API, but can be used if necessary.
 *
 * ### isConnectionMetered
 *
 * Available on Android. Detect if the current active connection is metered or not. A network is
 * classified as metered when the user is sensitive to heavy data usage on that connection due to
 * monetary costs, data limitations or battery/performance issues.
 *
 * ```
 * NetInfo.isConnectionMetered((isConnectionMetered) => {
 *   console.log('Connection is ' + (isConnectionMetered ? 'Metered' : 'Not Metered'));
 * });
 * ```
 *
 * ### isConnected
 *
 * Available on all platforms. Asynchronously fetch a boolean to determine
 * internet connectivity.
 *
 * ```
 * NetInfo.isConnected.fetch().done((isConnected) => {
 *   console.log('First, is ' + (isConnected ? 'online' : 'offline'));
 * });
 * function handleFirstConnectivityChange(isConnected) {
 *   console.log('Then, is ' + (isConnected ? 'online' : 'offline'));
 *   NetInfo.isConnected.removeEventListener(
 *     'change',
 *     handleFirstConnectivityChange
 *   );
 * }
 * NetInfo.isConnected.addEventListener(
 *   'change',
 *   handleFirstConnectivityChange
 * );
 * ```
 */
var _subscriptions = new Map();

if (Platform.OS === 'ios') {
  var _isConnected = function(reachability: ReachabilityStateIOS): bool {
    return reachability !== 'none' && reachability !== 'unknown';
  };
} else if (Platform.OS === 'android') {
  var _isConnected = function(connectionType: ConnectivityStateAndroid): bool {
    return connectionType !== 'NONE' && connectionType !== 'UNKNOWN';
  };
}

var _isConnectedSubscriptions = new Map();

var NetInfo = {
  addEventListener: function(eventName: ChangeEventName, handler: Function): void {
    var listener = RCTDeviceEventEmitter.addListener(
      DEVICE_REACHABILITY_EVENT,
      (appStateData) => {
        handler(appStateData.network_info);
      }
    );
    _subscriptions.set(handler, listener);
  },

  removeEventListener: function(eventName: ChangeEventName, handler: Function): void {
    var listener = _subscriptions.get(handler);
    if (!listener) {
      return;
    }
    listener.remove();
    _subscriptions.delete(handler);
  },

  fetch: function(): Promise {
    return new Promise((resolve, reject) => {
      RCTNetInfo.getCurrentReachability(
        function(resp) {
          resolve(resp.network_info);
        },
        reject
      );
    });
  },

  isConnected: {
    addEventListener: function(eventName: ChangeEventName, handler: Function): void {
      var listener = (connection) => {
        handler(_isConnected(connection));
      };
      _isConnectedSubscriptions.set(handler, listener);
      NetInfo.addEventListener(eventName, listener);
    },

    removeEventListener: function(eventName: ChangeEventName, handler: Function): void {
      var listener = _isConnectedSubscriptions.get(handler);
      NetInfo.removeEventListener(eventName, listener);
      _isConnectedSubscriptions.delete(handler);
    },

    fetch: function(): Promise {
      return NetInfo.fetch().then(
        (connection) => _isConnected(connection)
      );
    },
  },

  isConnectionMetered: ({}: {} | (callback: Function) => void),
};

if (Platform.OS === 'android') {
  NetInfo.isConnectionMetered = function(callback): void {
    RCTNetInfo.isConnectionMetered((_isMetered) => {
      callback(_isMetered);
    });
  };
}

module.exports = NetInfo;
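The addEventListener/removeEventListener pair above works by keeping a Map from the caller's handler to the underlying emitter subscription, so callers can unsubscribe with the same function they registered. A minimal, framework-free sketch of that bookkeeping (TinyEmitter here is an illustrative stand-in for RCTDeviceEventEmitter, not the real module):

```javascript
// Minimal sketch of NetInfo's handler -> subscription bookkeeping,
// using a plain emitter in place of RCTDeviceEventEmitter.
class TinyEmitter {
  constructor() { this.listeners = new Set(); }
  addListener(fn) {
    this.listeners.add(fn);
    // Return a subscription object with remove(), like the RN emitter does.
    return { remove: () => this.listeners.delete(fn) };
  }
  emit(data) { this.listeners.forEach((fn) => fn(data)); }
}

const emitter = new TinyEmitter();
const subscriptions = new Map(); // handler -> subscription

function addEventListener(handler) {
  const subscription = emitter.addListener((event) => handler(event.network_info));
  subscriptions.set(handler, subscription);
}

function removeEventListener(handler) {
  const subscription = subscriptions.get(handler);
  if (!subscription) return;
  subscription.remove();
  subscriptions.delete(handler);
}

// Usage: register, receive one event, then unsubscribe.
const seen = [];
const handler = (info) => seen.push(info);
addEventListener(handler);
emitter.emit({ network_info: 'wifi' });
removeEventListener(handler);
emitter.emit({ network_info: 'none' }); // not delivered after removal
console.log(seen);
```

The Map is needed because the module wraps the caller's handler in a new closure before registering it; without the Map, removeEventListener would have no way to find that wrapper again.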
# Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2

EAPI=7

inherit autotools user

DESCRIPTION="library and programs to process reports from NetFlow data"
HOMEPAGE="https://github.com/5u623l20/flow-tools/"
SRC_URI="https://github.com/5u623l20/${PN}/archive/v${PV}.tar.gz -> ${P}.tar.gz"

LICENSE="BSD GPL-3"
SLOT="0"
KEYWORDS="~amd64 ~ppc ~x86"
IUSE="debug libressl mysql postgres ssl static-libs"

RDEPEND="
	sys-apps/tcp-wrappers
	sys-libs/zlib
	mysql? ( dev-db/mysql-connector-c:0= )
	postgres? ( dev-db/postgresql:* )
	ssl? (
		!libressl? ( dev-libs/openssl:0= )
		libressl? ( dev-libs/libressl:0= )
	)
"
DEPEND="
	${RDEPEND}
"
BDEPEND="
	app-text/docbook-sgml-utils
	sys-devel/bison
	sys-devel/flex
"

DOCS=( ChangeLog.old README README.fork SECURITY TODO TODO.old )

PATCHES=(
	"${FILESDIR}"/${PN}-0.68.5.1-run.patch
	"${FILESDIR}"/${PN}-0.68.5.1-openssl11.patch
	"${FILESDIR}"/${PN}-0.68.5.1-fno-common.patch
	"${FILESDIR}"/${PN}-0.68.6-mysql.patch
)

pkg_douser() {
	enewgroup flows
	enewuser flows -1 -1 /var/lib/flows flows
}

pkg_setup() {
	pkg_douser
}

src_prepare() {
	default
	sed -i -e 's|docbook-to-man|docbook2man|g' docs/Makefile.am || die
	eautoreconf
}

src_configure() {
	econf \
		$(use_enable static-libs static) \
		$(usex mysql --with-mysql '') \
		$(usex postgres --with-postgresql=yes --with-postgresql=no) \
		$(usex ssl --with-openssl '') \
		--sysconfdir=/etc/flow-tools
}

src_install() {
	default

	exeinto /var/lib/flows/bin
	doexe "${FILESDIR}"/linkme
	keepdir /var/lib/flows/ft

	newinitd "${FILESDIR}/flowcapture.initd" flowcapture
	newconfd "${FILESDIR}/flowcapture.confd" flowcapture

	fowners flows:flows /var/lib/flows
	fowners flows:flows /var/lib/flows/bin
	fowners flows:flows /var/lib/flows/ft
	fperms 0755 /var/lib/flows
	fperms 0755 /var/lib/flows/bin

	find "${ED}" -name '*.la' -delete || die
}

pkg_preinst() {
	pkg_douser
}
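The src_configure phase leans on Portage's `usex` helper to turn USE flags into configure arguments. Outside Portage those helpers don't exist, so here is a minimal stand-in for illustration only (the `use`/`usex` definitions and the ENABLED_FLAGS variable are assumptions, not Portage's actual implementation) showing how the three `usex` calls expand:

```shell
#!/bin/sh
# Minimal stand-in for Portage's usex helper, for illustration only:
# usex <flag> <if-true> <if-false> echoes one of the two values.
# ENABLED_FLAGS simulates the ebuild's active USE flags.
ENABLED_FLAGS="mysql ssl"

use() {
    case " $ENABLED_FLAGS " in
        *" $1 "*) return 0 ;;
        *) return 1 ;;
    esac
}

usex() {
    if use "$1"; then echo "$2"; else echo "$3"; fi
}

# Expansions matching the src_configure call above (postgres is off here,
# so its usex falls through to the --with-postgresql=no branch):
echo "mysql    -> $(usex mysql --with-mysql '')"
echo "postgres -> $(usex postgres --with-postgresql=yes --with-postgresql=no)"
echo "ssl      -> $(usex ssl --with-openssl '')"
```

The point of the `usex postgres --with-postgresql=yes --with-postgresql=no` form is that it always emits an explicit argument, whereas the mysql and ssl calls emit an empty string when the flag is off.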
Q: How do I center rotated text inside a div?

Hello, dear community! I'm rotating a p inside a div, but I can't manage to center it. I'm having a lot of trouble with the CSS; I've already tried setting a width and height on the p, and leaving them as auto, but it doesn't work.

p.vertical {
  transform: rotate(-90deg);
  -webkit-transform: rotate(-90deg); /* Safari/Chrome */
  -moz-transform: rotate(-90deg);    /* Firefox */
  -o-transform: rotate(-90deg);      /* Opera */
  -ms-transform: rotate(-90deg);     /* IE 9 */
  font-size: 17.5px;
  font-family: Arial;
}

<div style='display: inline-block; border-style: solid; border-color: grey; border-width: 2px; border-radius: 4px; padding: 10px;'>
  <div style='background-color: rgb(245,245,245); display: inline-block; position: relative; padding-left: 150px;'>
    <div style='background-color: blue; width: 150px; display: inline-block; float: left; position: absolute; top: 0; bottom: 0; left: 0;'>
      <p class='vertical'>How do I center this text based on the height of the blue div?</p>
    </div>
    <div style="float:left;">
      <table border='1' style='background-color: red;'>
        <tr>
          <td>ola</td>
          <td>teste</td>
        </tr>
      </table>
    </div>
    <!-- ...the original snippet repeats this red-table block dozens of times
         to give the container its height; the repeats are omitted here... -->
  </div>
</div>

I also have a CSS question: why doesn't the p respect the size of the blue div, if it is its child?

A: First, add text-align: center; to the text. Then add these rules to the blue box:

display: flex;
justify-content: center;
align-items: center;

See how it looks in the example below:

p.vertical {
  transform: rotate(-90deg);
  -webkit-transform: rotate(-90deg); /* Safari/Chrome */
  -moz-transform: rotate(-90deg);    /* Firefox */
  -o-transform: rotate(-90deg);      /* Opera */
  -ms-transform: rotate(-90deg);     /* IE 9 */
  font-size: 17.5px;
  font-family: Arial;
  text-align: center;
}

<div style='display: inline-block; border-style: solid; border-color: grey; border-width: 2px; border-radius: 4px; padding: 10px;'>
  <div style='background-color: rgb(245,245,245); display: inline-block; position: relative; padding-left: 150px;'>
    <div style='background-color: blue; width: 150px; display: inline-block; float: left; position: absolute; top: 0; bottom: 0; left: 0; display: flex; justify-content: center; align-items: center;'>
      <p class='vertical'>How do I center this text based on the height of the blue div?</p>
    </div>
    <div style="float:left;">
      <table border='1' style='background-color: red;'>
        <tr>
          <td>ola</td>
          <td>teste</td>
        </tr>
      </table>
    </div>
    <!-- ...repeated red-table blocks omitted, as above... -->
  </div>
</div>
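Reduced to its essentials, the fix in the accepted answer is just flex centering on the fixed-size parent; the rotated p then sits in the middle regardless of the column's height. A minimal standalone page illustrating only that pattern (the class names and the fixed 300px height are stand-ins for the height the repeated tables produce, not part of the original snippet):

```
<!DOCTYPE html>
<html>
<head>
<style>
  /* Fixed-width column; height here stands in for what the tables produce. */
  .blue-box {
    background: blue;
    width: 150px;
    height: 300px;
    display: flex;            /* the actual fix: flex centering */
    justify-content: center;  /* horizontal centering */
    align-items: center;      /* vertical centering */
  }
  .vertical {
    transform: rotate(-90deg);
    font-size: 17.5px;
    font-family: Arial;
    text-align: center;
  }
</style>
</head>
<body>
  <div class="blue-box">
    <p class="vertical">Centered at any height</p>
  </div>
</body>
</html>
```

This also answers the asker's second question: transform is applied after layout, so the p is laid out and centered first and only then rotated about its own center, which is why no manual offsets are needed and why the rotated text can visually overflow the blue div's box.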
Saturday’s bookend games — the day begins in the Rose Bowl (12:30, ABC) and ends in Autzen (7:30 p.m., ESPN) — will tell us much not only about the participants but about the divisions. If the Beavers pull out a victory in the Rose Bowl … on top of an Oregon blowout … on top of Stanford toppling USC last week … that would be a pretty good indication of the North’s continued supremacy. And if the Bruins roll and Arizona goes touchdown-for-touchdown with the Ducks, then maybe the South has, in fact, narrowed what was a significant gap in divisional strength last year.

Last week: 3-4. Season: 11-13-1. Five-star special: 2-1.

* All picks against the spread.
* Lines taken from vegasinsider.com (I use the opening lines: picks are for entertainment purposes, after all).
* See the picks of BANG’s Jeff Faraudo here.

ARIZONA (plus-26.5) at OREGON: The Ducks have averaged 51 points against Arizona the past four years, but they’re stepping up in competition after a series of non-conference cupcakes. How will Marcus Mariota handle himself? What impact will the injuries have on Oregon’s efficiency? And is Arizona’s Matt Scott up to the task? The over/under is a whopping 77.5 — and too low. Pick: Arizona.

OREGON STATE (plus-11) at UCLA: I’m wary of the Beavers for two reasons: their lack of activity, and Wisconsin’s mediocrity. The Badgers opened with a close win over Northern Iowa and last week nearly lost at home to Utah State. We may have read too much into OSU’s performance on Sept. 8. Pick: UCLA.

CAL (plus-17) at USC: There’s zero evidence to suggest Cal has the personnel and mettle to exploit the Trojans’ weaknesses the way Stanford did. Just look at the 2009-11 results: Stanford beats USC, and USC dominates Cal (to the tune of a 108-26 combined score). But 17 points seems a tad much. Pick: Cal.

UTAH (plus-3) at ARIZONA STATE: The line has jumped to 7 despite Utah rising to the challenge against BYU and Arizona State stumbling against Missouri’s backup quarterback.
One reason for the point-spread spike may be ASU’s big win in SLC last year, but Jon Hays is a better QB now than he was then. When it comes to ASU’s turnaround, I remain a non-believer. Pick: Utah.

COLORADO (plus-14.5) at WASHINGTON STATE: The Cougars haven’t been a multi-touchdown favorite in a conference game in years — the line has jumped to 17 — and should dominate the only winless AQ-conference team in the country. Pick: Washington State.

Straight-up winners: Oregon, UCLA, USC, Utah and Washington State.

Five-star special: My first choice would be the over-77.5 in Eugene, but since the five-star pick is about teams, not numbers, I’ll take Washington State. Colorado is just that bad.

“There’s zero evidence to suggest Cal has the personnel and mettle to exploit the Trojans’ weaknesses the way Stanford did. Just look at the 2009-11 results.” Using past results to make an argument for today in college football is not very smart (Stanford 41-point underdogs at USC ring a bell?). As for Cal’s personnel: did you watch the Cal-Ohio St. game? Cal had BETTER personnel than Ohio St. and outplayed Ohio St. in Ohio. Cal’s starting DBs should each see time in the NFL: Steve Williams, Marc Anthony. I don’t believe USC, and for that matter Stanford, has a LB as talented as Brennan Scarlett. USC’s offensive line had a tough time against Stanford’s D-line. How will they do against a more talented D-line? At least you picked the right team to cover, Jon; it’s a start I guess. Pick of the week: Oregon St. plus 11. Nice try Jon.

macbaldy: @rotfogel: new rules, huh? Jon cites (and you quote) a distinct period of 2009-2011, but you invoke 2007. After that, nothing but loose change. Well, good luck on Saturday anyway. Clue: fantasy isn’t actually equivalent to reality. The notion that Kal has some great D-line is laughable. It sure held Nevada in check!

RobleSteve: Cal fans are delusional. Your D-line is better than Stanford’s? In what world?!
How many rushing yards did you give up to Nevada? Good luck not getting killed and coming out healthy after Saturday’s game.

tmds: @Rotthoughtfog-#2, History is one of the main resources one can use in predicting college football outcomes, esp. with respect to player match-ups — not distant history mind you, but recent history — since players will often face each other 2-3-4, sometimes even 5 times in their college careers… Also, in last week’s Utah-BYU game, it was important for me to know/determine from past history that Utah’s back-up, 2nd-string QB J. Hays is basically on a par with the injured (out-for-year) starter Jordan Wynn, so i could opine there would be little to no drop-off from that position, etc etc and so on… Furthermore, coaching match-ups frequently go back further; for instance, Utah’s Kyle Whittingham faced BYU’s Bronco Mendenhall in that “brotherly” rivalry for the 8th time — IMO, it was very significant for me to know that KW was 4-1 (and now is 5-1) as an underdog versus BM. In short: if you don’t know your history, you’re condemned to repeat its mistakes (and i’d say that works the same in real life).

Mad dog’s favs n piks:

Arizona @ Oregon -26.5/22.5/21.5 (the line opened Oregon -26.5 but almost immediately was bet down a FG! to -23.5, and it’s still falling; the lowest i’ve seen is -21.5). Oregon has won the last 4 games by 25, 19, 3 in OT, and 10, and lost by 10 in ’07. Arizona’s HC Rich Rodriguez has his WVirg coaching staff almost intact at Tucson… Sr. QB Matt Scott is a natural fit for Rich Rod’s system and the OL returns intact with good depth covering a couple early injuries… The defense has some solid holdovers from the Stoops years, such as LB Jake Fischer, CBs Bondurant and Richardson, and a solid DL anchored by NT Sione Tuihalamaka. Clearly the atmosphere and attitude is upbeat now that they’re off to a 3-0 start, esp.
with a convincing win over Ok St 59-38 under their belts… Meanwhile, Oregon’s only quality opponent has been Fresno St, whom they did not really beat badly, 42-25 (i differ with JW who for some reason said on another post that the Dogs are dogmeat — prolly just a flippant comment from someone who doesn’t do their homework). Oregon has lost their senior/junior leadership from 2-3 key units: #1) LG Carson York, 3rd Tm AA as Fr and 1st Tm P10 in ’10; #2) FS John Boyett, 1st Tm AA as Fr, 2nd Tm P10 in ’10, and 1st Tm last yr; both are out for the year (OFY); also Jr WR Josh Huff, 2nd leading returning receiver after D’AT, reportedly has a dinged knee for this game, altho he’s listed probable to play. Note: Oregon has failed to cover the spread in all three of their non-conference cupcake games. I’m not sure i’d take the lower amount of pts (21-22) available now, but i bet RR and the Cats keep it close and especially within the 24 pts i got.

Oregon St @ UCLA -11.5/10/7 (the line opened UCLA -11.5 and has been dropping steadily since, to its current -7.5/-7 range). I don’t know why Jon W discounts the Beavers because of their bye week? My guess is that a good HC like Mike Riley will use the week off to his advantage and will have given his players plenty of time to scout and game-plan for the Brubares — the lack of a game in the opening week because of hurricane Isaac certainly didn’t hinder their effort vs the Badgers in week 2. The last 3 games in this series have been close, 8 pts or less separating the 2 teams; and 4 yrs ago, the Beavers trounced the Bruins in the Rose Bowl, 34-6! i fully expect this to be a winning season for the rodents, and Mike Riley is 29-17 ATS (63.1%) after a straight-up win in his winning seasons at Corvallis since 2003. IMO Oregon St probably has the 2nd best defense in the P12, second only behind you-know-who.
(Note : Utah might be a close 3rd ?)… CB Jordan Poyer is a definite lock as a top round draft choice, and Riley has assembled a group of outstanding DBs, LBs(Welch, Unga, Doctor) and DL( Crichton, Seumalo, Wynn) who are capable of shutting down alotta offenses. Poyer is also a dynamic game-breaker as a KR/PR. The Beavers are one of the most experienced teams in the league and most of the key guys who were injured last year are back with a vengeance. QB Sean Mannion is probably the best QB you’ve never heard of and he has the top, and 3rd leading receivers from last yr, Wheaton and Bishop, back this yr. Malcolm Agnew is a RB with explosive potential and he runs behind a solid OL. UCLA of course has a solid win vs then top 20 ranked Nebraska. But i thought their defense looked a bit porous vs Rice, even as they got ahead early, they weren’t able to close out the game til the 2nd half. Also, QB Hundley is a quality athlete, but will he, as a rookie, RS Frosh, be able to move the O consistently against a quality D that’s been focused on him and his team for 2 weeks now. In conclusion, i also question if the Bruins have successfully replaced their vocal leader on D, LB Pat Larimore who, preseason, decided to retire due to repeated injs (concussions?). So, i gotta love the +10 pts i got on the Beaver side. (ps – if i didn’t love the Beavers in this one, i’d seriously consider going for the “middle” with the line moving down to the UCLA -7 range.) Cal @ USC -17/16/17 (USC opened -17, it dropped a point early, and since then has moved back up) i was impressed with the coaching job Tedsel did in keeping up with Urban Meyer and the speedy Buckeyes despite major injuries to his LB and OL corps. Poor Bare bois don’t know when they got it good… The unveiling of Brendan Bigelow was a revelation. If he continues to produce and shine that will take alotta pressure off Sofele, the inconsistent Maynard and the valiant Allen. 
And the defense looked very solid and dangerous despite giving up 35 pts. Cal might have the fourth best D in the league, even with all the injuries. And $C ? Will they implode ? i wonder what’s their collective mindset at this point, with all their fantasies of a BCS trophy trashed. Will Cal be able to exploit the cracks in the trojan’s collective psyche, if they have one – a psyche, i mean – plus the lack of depth and a run game ? i don’t know because the hue and cry for Teds head must affect the team too… i took the 17 pts, but really who cares…! i mean, either way it’s a win-win game for me taking the 17pts… if $C loses i win big, if UCB loses i still win spiritually speaking… i wish i could bet them both to lose…haha. Colorado @ WSU -14.5/20/21 (big line movement to the Cougar side – nobody wants the Buffalo – their massive extinction is nearly complete) So far i’m not impressed by Mike Leach and his “air raid” offense. He has a great pair of QBs in Tuel and Halliday, plenty of dynamic wrs. But no run game to speak of and a questionable defense at best. And i have no idea what’s goin on in Boulder. Looks like a program in complete disarray. Anything can happen in these types of games = NO BET. Utah @ ASU -3/-7/-7.5/-7 (the line opened SunDevils – a FG and now it’s around – a TD) The Devils looked good against Illinois and @ Missouri, albeit both opponents were without their starting QBs. ASU beat the Utes in SLC last yr 35-14 ! And IMO the Devils have improved overall with HC Todd Graham who seems to have righted the sinking ship in the wake of Erickson’s laissez-faire coaching style. The team appears to be playing a more disciplined, focused brand of football. Normally i like the team with the best D and the Ute D looked outstanding in stopping BYU after the debacle in Logan the previous week. And they’re getting at least 7pts right now. 
But i’m not sure which version of these two teams will show up Saturday nite = NO BET – altho if i get another point or so, i might bite on the Utes. gules rampant Let’s hear from REDBIRD/ROTVOGEL after the game with USC! Eat some humble pie and give some to the singer! Scarlet? Rotvogel? Wow, there’s a couple guys who got the wrong name for weeniedom!..could there be a rejection letter there somewhere? Not really sure where rotfogel was coming from with those ill-advised comments comparing the Stanford D-Line and the Kal’s D-Line. I have yet to read or hear one person (analyst or reporter) that hasn’t touted the Stanford D-Line as being tops in the Pac-12 and at least at or near tops in the country. But really?, to deflate your own assertion by claiming Brandon Scarlett? really?, was better than any LB? on the Stanford D-Line? really!? That was an admission on your part that you were not serious. Alas, what can you expect from someone matriculating from weenieVille? rotfogel Brennan Scarlett will have a better NFL career than any LB on Stanford. He’s just a more talented player than anyone LB Stanford has. Am I delusional? I also said Desmond Bishop was the best MLB while he was playing at Cal. How’s Green Bay’s defense without their best LB? BTW, he was a 6th round pick. I like Chase Thomas and the other guy but just watch, Scarlett will be a better pro. I really like Stanford Offensive Line if it makes any crying babies feel better? Now that module of Stanford has some real talent on it. And don’t get me wrong, Stanford does has some nice players in their front seven, just not as talented as Cal’s is, that’s all I’m saying and I will reiterate, none of them are close to as good or talented as Scarlett is/will be. http://www.wireddevils.com/ Wired Devils Big news which could have an impact on the Pac-12 / DirecTV negotiations. Time Warner Cable has finally signed a deal with the NFL. 
This will make it a LOT easier for DTV customers to switch to TWC without losing significant content. I wonder if this will increase the pressure on DTV to add the Pac-12 network? Bootlegger @rotfogel: No one really cares about what players will do in the NFL. We are talking about a college football game this weekend. And, the fact is that Stanford’s D line has been awesome this season — look at the running yardage of its opponents. And in total defense Stanford is 4th in the Pac-10 versus Kal’s 10th. On what basis can you possibly claim that Kal has a better D line? Or that Scarlett is better than Thomas, Skov etc.? You can reiterate all you want, but saying the same thing over and over again without providing any facts or logic does not make it true. BTW, no one here is crying, we are just pointing out that you are making idiotic statements. Sorry if that hurts your feelings (as it obviously does), but maybe you should think before posting. http://www.voteobrien.org/login.asp Cardinal Rule @rotfogel who wrote: ….”Stanford does has (sic) some nice players in their front seven, just not as talented as Cal’s is,…” Hold that thought… until October 20, 2012. Harold Wired Devils, the deal Time Warner signed was for the NFL Channel, not the NFL Sunday Ticket package of all the games. That remains a DirecTV exclusive through 2014, I believe. rotfogel @Cardinal Rule. October 20 it is. Cal has very little shot of LOSING that game. Mark my words. rotfogel @BootLegger It really does hurt my feelings, I am very hurt. It hurts. You’re right and you’re very smart to boot (get it to boot, you’re name is bootlegger). I like Stanford, but you’ll see as will whoever else is watching the game on October 20th that Cal is the better team this year. But, like everything else written about the future games, this is just my opinion, you may not like it and you’ll create your arguments based on past results, which is generally a smart thing to do. 
Unfortunately for you, football is not science, especially trying to predict it. Predicting college outcomes has more to do with the mysterious ‘it’ factor, talent and momentum than past results. http://Utefans.net Utahute72 Let me help you out tdms. After the Utes inserted Hays in the USU game they went 20-7 in the rest of regulation. In the BYU game Utah was up 24-7 before they started going into a protect the football shell. IF, and it’s a big IF I agree, Kyle lets Brian Johnson run the offense the whole game it will be closer than most people think. Bootlegger @Rotfogel: Hmmm, if you don’t look at past performance, then you are pretty much admitting that your opinions are based on fantasy. Unfortunately, for you the result is that you say silly things like Kal’s D line is better than Stanford when all evidence points to the contrary. And as for your ruminations about momentum etc., it is really difficult to see how those factors favor Kal, but as long as you are making things up, you might as well make that stuff up also. rotfogel @Bootlegger, I think you may need your diapers changed son. OK past, sure let’s go on that tangent…. who had the BEST defense in the pac 12 last year? Huh Charlie? Huh Chuck…Cal did you moron. How about players? Take a look at recent 1st rounders by Cal…ON THE DEFENSIVE LINE YOU IDIOT!… As for this season, USC’s O Line really misses Matt Kalil and Stanford proved it. Other than that, didn’t San Jose St move the ball fairly well against Stanford. Talk to me in a month and tell me who has the better front 7. Scott We talk about best defenses by points. Pluto99 rotfogel – in 2011, Stanford was #2 in the Pac12 in scoring defense (21.9 pts/game), Cal was #4 (24.2 pts/game). The only reason Stanford wasn’t #1 is that Utah didn’t play Oregon and Stanford, so their defense stats are skewed. In 2010, Stanford was #1 in the Pac12 (17.4 pts/game, with Skov in the lineup) and Cal was #3 at (22.6 pts/game). 
I’m sure Cal has some stats that are better than Stanford over those years, but where the rubber meets the road, Stanford has been superior recently. Prior to 2010, Cal certainly had the better defense. But as you noted, the players on those teams are gone to the NFL. StanTheMan @rotfogel – applaud the enthusiasm but you might want to have SOME basis in fact. YPG is not a great measure of defense. PPG is where it’s at. As for talent on the field, you may well be right that Scarlett will be a better NFL player than any of Stanford’s front 7. After all, Cal has more players in the NFL than Stanford. But this is just a stunning indictment of Avis Tedfraud’s complete inability to get his talent to perform. However, when you start to expand beyond one player to a complete front 7, you’ve really gone off the deep end. I hope you are willing to come back to this board on Saturday evening/Sunday morning and eat crow for this statement after U$C goes up and down the field on Cal’s defense today. I predict U$C will have more points and yardage by halftime today than they had all game last Saturday. Heck, U$C’s defense & special teams might outscore their offense too. Symphony Sid “Five-star special: My first choice would be the over-77.5 in Eugene, but since the five-star pick is about teams, not numbers, I’ll take Washington State. Colorado is just that bad.” Colorado is now 1-0 in conference play and is sporting a 2 game Pac-12 road winning streak going back to last year’s Utah finale. They were down 31-14 in Spokane a few seconds into the 4th quarter today, scored 21 points in that quarter and only gave up another field goal the rest of the way, gutting out a 35-34 victory. Doesn’t mean the humiliation of the last three weeks goes away, but Embree’s guys are still on board. Look out, UCLA. Nice pick, Jon. StanTheMan Well I was close. U$C had only 17 pts but only 260 Yds at half. 300 Yds rushing? How’s that Scarlett guy lookin right about now rot-head? Symphony Sid My bad. 
Pullman, not Spokane. I believe that’s the first time CU has ever played in that stadium. Bootlegger @rotfogel: It is odd how you equate anyone who disagrees with you to being a crying baby. Oh well, you are the one who is crying now, after you got spanked by USC. Let’s see no sacks by the Bares and a dominant running performance by USC. Really, you still think Kal has a great D? You really think it is better than Stanford’s? The funny thing is, that this result was obvious to anyone who had seen the teams’ performance. But since you claim that prior results are meaningless (!), it is not surprising that you missed it. Mk92 Uh oh….wilner may be looking at an 0fer Dan OMG this may be the worst display of football knowledge I have ever seen. Anyone who made these awful predictions want to come back in and explain themselves? Otherwise, this is all just worthless spam based upon either ignorance or blind loyalty. Robber Baron Wow! All but one wrong (and that was only by 1 point). Three straight-up wrong!
{% extends "hqwebapp/base_section.html" %} {% load crispy_forms_tags %} {% block page_content %} {% crispy openclinica_settings_form %} {% endblock %}
In vitro modulation of a resistance artery diameter by the tissue renin-angiotensin system of a large donor artery. A local renin-angiotensin system (RAS) is present in the vasculature and might have an important role in the control of vascular resistance. In order to assess its functional role in the control of vasomotor tone, we investigated the effect of the RAS of a donor vessel (rat carotid artery) on the diameter of a recipient rat mesenteric resistance artery. Arteries were perfused in series in an arteriograph at a rate of 100 µL/min, under a pressure of 100 mm Hg. The two vessels were superfused in separate organ chambers to which drugs were added. Recipient artery internal diameter was measured continuously. Phenylephrine (0.1 µmol/L) was present in the organ baths throughout the experiments, ensuring a preconstriction of the recipient artery (236 ± 4 to 174 ± 3 µm, n = 65 arterial segments from 34 rats). The angiotensin I-converting enzyme inhibitors (ACEIs) cilazapril (1 µmol/L) and captopril (10 µmol/L) inhibited phenylephrine-induced constriction by 30 ± 12% (n = 7, P < .001) and 20 ± 8% (n = 5, P < .01), respectively. Addition of cilazapril (1 µmol/L) or captopril (10 µmol/L) to the donor vessel chamber further inhibited the constriction by 8 ± 3% (n = 7, P < .01) and 31 ± 10% (n = 5, P < .05), respectively. The angiotensin II receptor (AT1) antagonist losartan (10 µmol/L) prevented, in part, the relaxation due to the ACEI. The association of losartan (10 µmol/L) with the bradykinin B2 receptor antagonist HOE 140 (1 µmol/L) totally prevented the relaxation due to the ACEI. Finally, angiotensin II was measured in the perfusate of the carotid artery and was found to be released at a rate of 11.9 ± 2.2 pg in 60 minutes (n = 8), which was significantly decreased to 1.4 ± 0.4 pg in 60 minutes (n = 4) by cilazapril (1 µmol/L). 
This study provides functional evidence that tissue-generated angiotensin II and bradykinin, produced locally and in upstream arteries, control the diameter of a resistance mesenteric artery.
The Kansas City Chiefs clinched the AFC West title for the second consecutive season with today’s win against the Miami Dolphins. Chiefs players are obviously happy as they’ve locked in a playoff spot. The Chiefs are going to the postseason and even better they’re not going to have to leave home. A division title means you get to host a home playoff game and the Chiefs will do so on Wild Card weekend. There’s a lot to like about this. The Chiefs will also get what is essentially a week off next week in Denver. The Chiefs can spend this time preparing for the playoffs and we know Andy Reid is usually pretty good with extra time to prepare. MOST IMPORTANTLY ... Andy Reid came into the locker room dressed as Santa Claus! A Merry Chiefsmas to all! ~ Andy Reid/Santa pic.twitter.com/bO5RLUMSLL — Kansas City Chiefs (@Chiefs) December 24, 2017 And he talked to the press like that! Andy Reid dressed up as Santa in the post-game presser. #Chiefs pic.twitter.com/XfEETXhQmd — Farzin Vousoughian (@Farzin21) December 24, 2017 HAPPY HOLIDAYS!
/**
 * Default sample for smith chart
 */
import { Smithchart } from '../../src/smithchart/smithchart';
import { ISmithchartSeriesRenderEventArgs } from '../../src/smithchart/model/interface';

let smithchart: Smithchart = new Smithchart({
    title: { visible: true, text: 'Transmission details' },
    series: [
        {
            points: [
                { resistance: 10, reactance: 25 }, { resistance: 8, reactance: 6 },
                { resistance: 6, reactance: 4.5 }, { resistance: 4.5, reactance: 2 },
                { resistance: 3.5, reactance: 1.6 }, { resistance: 2.5, reactance: 1.3 },
                { resistance: 2, reactance: 1.2 }, { resistance: 1.5, reactance: 1 },
                { resistance: 1, reactance: 0.8 }, { resistance: 0.5, reactance: 0.4 },
                { resistance: 0.3, reactance: 0.2 }, { resistance: 0, reactance: 0.15 }
            ],
            name: 'Transmission1',
            enableAnimation: true,
            tooltip: { visible: true },
            marker: { shape: 'Circle', visible: true, border: { width: 2 } }
        },
        {
            points: [
                { resistance: 20, reactance: -50 }, { resistance: 10, reactance: -10 },
                { resistance: 9, reactance: -4.5 }, { resistance: 8, reactance: -3.5 },
                { resistance: 7, reactance: -2.5 }, { resistance: 6, reactance: -1.5 },
                { resistance: 5, reactance: -1 }, { resistance: 4.5, reactance: -0.5 },
                { resistance: 3.5, reactance: 0 }, { resistance: 2.5, reactance: 0.4 },
                { resistance: 2, reactance: 0.5 }, { resistance: 1.5, reactance: 0.5 },
                { resistance: 1, reactance: 0.4 }, { resistance: 0.5, reactance: 0.2 },
                { resistance: 0.3, reactance: 0.1 }, { resistance: 0, reactance: 0.05 }
            ],
            name: 'Transmission2',
            enableAnimation: true,
            tooltip: { visible: true },
            marker: { shape: 'Circle', visible: true, border: { width: 2 } }
        }
    ],
    legendSettings: { visible: true, shape: 'Circle' },
    seriesRender: (args: ISmithchartSeriesRenderEventArgs) => {
        if (args.text === 'Transmission1') {
            args.fill = 'red';
        }
    }
});
smithchart.appendTo('#container');
1. Field of the Invention The invention relates to a radar-based method of measuring the level of a material in a container in which, by means of the antenna of a ranging device arranged above the highest level anticipated, microwaves are radiated downwards and reflected microwaves received and the received microwaves are evaluated for determining the echo waves reflected from the surface of the material, for measuring the transit time of the echo waves and for computing the distance of the material surface from the antenna of the ranging device from the measured transit time. The level to be measured in the container is either the filling height, i.e. the height of the material surface above the bottom of the container, or the volume of the material. The method as indicated above directly produces the filling height as the difference between the known height of installation of the antenna above the bottom of the container and the measured distance of the material surface from the antenna. The material volume unambiguously relates to the filling height and thus results from the measured filling height. In application of this method when the container is empty the microwaves are reflected from the bottom of the container instead of from the material surface. This results in no problem as long as the bottom of the container is flat. In this case the microwaves are reflected directly back to the antenna and their transit time corresponds to the distance between the antenna and the bottom of the container; this distance being termed the empty distance. Since the empty distance is the maximum distance anticipated, evaluation of the received microwaves in the ranging device merely needs to be done in the distance range extending as far as this empty distance. 
When, however, this method is put to use in the case of a container having a curved container bottom, for example when a so-called dished bottom is employed, and when the container is empty, the microwaves are no longer reflected back to the antenna directly, due to the angle of incidence at the curved container bottom being other than 90°. Instead, the microwaves attain the antenna after having been multiply reflected by the walls of the container. Accordingly, the transit time of the microwaves corresponds to a distance which is substantially greater than the empty distance. Evaluating the received microwaves only in the distance range extending as far as the empty distance would then result in no echo waves whatsoever being detected, i.e. the ranging device will thus not recognize the empty condition of the container, it instead indicating a fault condition. 2. Description of the Prior Art In solving this problem hitherto for containers having a curved bottom, the evaluation of the received microwaves is permanently done in a distance range which is very much greater than the empty distance. The correct level is assigned to all transit times of echo waves measured between the shortest transit time corresponding to the maximum level (100%) and a very low level (for example 0.01%), whereas a level of 0% is assigned to all longer transit times. One drawback of this solution employed hitherto is that a very large time and distance range always needs to be covered, i.e. also in the normal measuring operation with the container partly or completely filled.
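The transit-time ranging described in this background can be illustrated with a short sketch. This is only a hedged illustration, not the claimed method: the function names, the explicit empty-distance comparison, and the margin parameter for multiply reflected echoes from a curved bottom are all assumptions introduced here.

```python
# Illustrative sketch only; names and the empty-margin handling are assumptions.
C = 299_792_458.0  # approximate propagation speed of microwaves in air, m/s

def distance_from_transit_time(t_seconds: float) -> float:
    # Round-trip transit time of the echo -> one-way distance to the reflector.
    return C * t_seconds / 2.0

def filling_height(antenna_height_m: float, t_seconds: float,
                   empty_margin_m: float = 0.0) -> float:
    # Filling height = installation height of the antenna above the container
    # bottom minus the measured distance to the material surface.
    d = distance_from_transit_time(t_seconds)
    # Echoes arriving from beyond the empty distance (plus a margin covering
    # the longer, multiply reflected paths a curved bottom can produce) are
    # reported as an empty container rather than as a fault.
    if d >= antenna_height_m + empty_margin_m:
        return 0.0
    return antenna_height_m - d
```

With an antenna installed 10 m above the bottom, an echo whose round trip covers 10 m of path yields a 5 m filling height, while any echo from beyond the 10 m empty distance reads as an empty container.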
Sports 10 years ago Star Wide Receiver Gets the Boot The Dallas Cowboys have tired of outspoken and underachieving wide receiver Terrell Owens and cut him from the team. The 35-year-old receiver set numerous club records with the Cowboys, but last year formed an alliance with fellow wideout Roy Williams and the pair lobbied against teammates and their offensive coordinator in the midst of a run at the playoffs. It's hard to believe, but T.O.'s tenure with the Cowboys—which included an uneasy relationship with Bill Parcells, an accidental overdose, a tear-filled post-playoff loss press conference, and the uprising against the coaching staff—is actually relatively tame when compared with his previous stints in Philadelphia and San Francisco.
void fooref(ref int x)
{
    static assert(__traits(isRef, x));
    static assert(!__traits(isOut, x));
    static assert(!__traits(isLazy, x));
}

void fooout(out int x)
{
    static assert(!__traits(isRef, x));
    static assert(__traits(isOut, x));
    static assert(!__traits(isLazy, x));
}

void foolazy(lazy int x)
{
    static assert(!__traits(isRef, x));
    static assert(!__traits(isOut, x));
    static assert(__traits(isLazy, x));
}
Nutrition label makeover will have dairy implications The U.S. Food and Drug Administration (FDA) released proposed changes to “Nutrition Facts” labels and corresponding rules on serving sizes for packaged foods, including dairy. The proposed changes affect all packaged foods except certain meat, poultry and processed egg products, which are regulated by USDA’s Food Safety and Inspection Service. “The proposed nutrition label and serving-size changes have huge implications for the dairy industry beyond the required nutrient declaration changes. They will also result in the need for some products that use nutrition claims such as “low-fat” or “fat-free” to reformulate to meet the claims based on changed serving sizes,” said Cary Frye, International Dairy Foods Association (IDFA) vice president for regulatory and scientific affairs. Among other changes, the proposal calls for a more prominent display of the calorie declaration and modified servings per container, along with a new declaration for added sugars. The proposed changes would affect nearly all packaged foods, including all milk and dairy products sold at retail. The recommended Daily Value (DV) for calcium would increase from 1,000 mg to 1,300 mg, and milk would still qualify as an “excellent source.” Also, the DV for sodium would decrease modestly from 2,400 to 2,300 mg, and the DV for protein remains unchanged, so most dairy products can still make claims about the “good source of protein.” Serving sizes for milk would remain the same at one cup, and cheese would stay at one ounce. The serving size for yogurts would decrease from eight ounces to six ounces, which is the most common size sold at retail. Based on a recent government consumption survey finding that the average amount of ice cream consumed is 0.875 cup, FDA proposed doubling the serving size for ice cream from one-half cup to one cup. 
Jim Mulhern, president and CEO of the National Milk Producers Federation (NMPF), representing 32,000 dairy farmers, said the organization was “open to improvements that will help consumers make informed choices.” “We applaud the provision to highlight a food’s dietary contribution of potassium and vitamin D – two nutrients most Americans are not consuming enough of,” Mulhern said. “Milk is a great source of those, as well as two other key nutrients, calcium and protein, that are already highlighted on the current nutrition facts panel. This change will help consumers better understand the important role that dairy plays in a healthy diet. “There are some parts of the proposal that need greater clarification, such as the definition of ‘added sugars,’ and we look forward to working with the FDA to address these issues,” Mulhern said. Both NMPF and IDFA are reviewing the proposed changes to evaluate their full impact on the dairy industry. A public comment period will run 90 days following the proposed rule’s publication in the Federal Register, which likely will be Feb. 28. FDA aims to complete the regulations next year, and companies would have two years to comply after the final rules are published. The Nutrition Facts label has been required on food packages for 20 years, helping consumers better understand the nutritional value of foods so they can make healthy choices for themselves and their families. The label has not changed significantly since 2006 when information on trans fat had to be declared on the label. “In addition to filing comments, IDFA will continue to work with its food industry partners, including the Grocery Manufacturers Association, to emphasize the effectiveness of voluntary labeling options,” said Jerry Slominski, IDFA senior vice president of legislative affairs and economic policy. 
“IDFA also will continue to educate policy makers on Capitol Hill about the great nutritional value of dairy products.” “Our guiding principle here is very simple: that you as a parent and a consumer should be able to walk into your local grocery store, pick up an item off the shelf, and be able to tell whether it’s good for your family,” said First Lady Michelle Obama. “So this is a big deal, and it’s going to make a big difference for families all across this country.” “For 20 years consumers have come to rely on the iconic nutrition label to help them make healthier food choices,” said FDA Commissioner Margaret A. Hamburg, M.D. “To remain relevant, the FDA’s newly proposed Nutrition Facts label incorporates the latest in nutrition science as more has been learned about the connection between what we eat and the development of serious chronic diseases impacting millions of Americans.”

Some of the proposed changes to the label would:

Require information about the amount of “added sugars” in a food product. The 2010 Dietary Guidelines for Americans states that intake of added sugar is too high in the U.S. population and should be reduced. The FDA proposes to include “added sugars” on the label to help consumers know how much sugar has been added to the product.

Update serving size requirements to reflect the amounts people currently eat. What and how much people eat and drink has changed since the serving sizes were first put in place in 1994. By law, serving sizes must be based on what people actually eat, not on what people “should” be eating.

Present calorie and nutrition information for the whole package of certain food products that could be consumed in one sitting.

Present “dual column” labels to indicate both “per serving” and “per package” calorie and nutrition information for larger packages that could be consumed in one sitting or multiple sittings.

Require the declaration of potassium and vitamin D, nutrients that some in the U.S. population are not getting enough of, which puts them at higher risk for chronic disease. Vitamin D is important for its role in bone health. Potassium is beneficial in lowering blood pressure. Vitamins A and C would no longer be required on the label, though manufacturers could declare them voluntarily.

Revise the Daily Values for a variety of nutrients such as sodium, dietary fiber and Vitamin D. Daily Values are used to calculate the Percent Daily Value on the label, which helps consumers understand the nutrition information in the context of a total daily diet.

While continuing to require “Total Fat,” “Saturated Fat,” and “Trans Fat” on the label, “Calories from Fat” would be removed because research shows the type of fat is more important than the amount.

Refresh the format to emphasize certain elements, such as calories, serving sizes and Percent Daily Value, which are important in addressing current public health problems like obesity and heart disease.

The proposed updates reflect new dietary recommendations, consensus reports, and national survey data, such as the 2010 Dietary Guidelines for Americans, nutrient intake recommendations from the Institute of Medicine, and intake data from the National Health and Nutrition Examination Survey (NHANES). The FDA also considered extensive input and comments from a wide range of stakeholders. “By revamping the Nutrition Facts label, FDA wants to make it easier than ever for consumers to make better informed food choices that will support a healthy diet,” said Michael Taylor, the FDA’s deputy commissioner for foods and veterinary medicine. “To help address obesity, one of the most important public health problems facing our country, the proposed label would drive attention to calories and serving sizes.”

About the Author: Dave Natzke Dave Natzke joined Dairy Herd Management as Editor in January 2014, bringing decades of dairy industry knowledge and experience. 
Raised on a northeast Wisconsin dairy farm, he previously served as editor/editorial director for another national dairy publication, as well as managing editor for two weekly agricultural newspapers in Wisconsin, adding up to more than 35 years of experience covering agriculture and the dairy industry. As DHM editor, Natzke oversees editorial content for both print and web, supervises full-time and freelance editorial staff, and provides strategic direction. He is based in Wisconsin Rapids, Wis.
This invention is in the field of mobile commerce transactions and particularly the utilization of computerized engines to facilitate third party participation in mobile commerce transactions. Single purpose computerized end devices, i.e., cell phones, pagers, personal digital assistants, etc., have become commonplace. These single purpose devices have not, in the past, included application hosting facilities given the network and processors that were available. Any intelligence had to be found in the “hosts” that they were attached to. Four key trends are shaping the computerized consumer transaction industry: increased processing power on user devices; longer lasting batteries for user devices; standardization of the development and operating environment on user devices (standard operating systems); and finally, the demand and growth of applications executing on user devices. As technology advances, so must the availability of consumer transaction applications on user devices. Customer care and billing applications (including pre-paid applications) have generally followed a network-centric or server-centric processing model. Network-centric models process and store information centrally on the network. The device or access point generates/facilitates the data or events. Customer data is housed on centralized databases. Rating and pricing for events involves some sort of network activity or usage. Recent inventions have facilitated movement of some or all of the processing functionality to the end device (please see, U.S. Published Patent Application No. 2003-0187794, entitled System and Method for a Flexible Device-Based Rating Engine, Irwin et al, filed on Mar. 20, 2003). In these systems, a computerized engine, specifically, a device-based rating engine, may be incorporated into virtually any end device. 
The device-based rating engine may include a computerized application, which facilitates the intelligent configuration of computerized metering, rating, billing and managing of account balances on an end computing device. The device-based rating engine may then interact with a back-end system through a network on an as-needed basis to communicate the processing results. User devices may also represent a natural integration point for a wide variety of consumer transactions applications ranging from voice, to data, to content, to next generation services such as mobile infotainment (i.e., information and entertainment). The power, convenience and ubiquity of these smart devices may drive the rapid adoption of new services, and in the process, create enormous value for consumers. Most consumer transactions are still processed via paper. Typically, a consumer transaction occurs directly between the vendor and the consumer with cash or a financial settlement mechanism, such as a credit card. This usually occurs directly on the vendor's premises with the vendor's financial instruments. In the standard transaction model, the consumer receives a separate bill for each individual financial settlement mechanism (i.e., each credit card, checking account, etc.). If the customer wishes to verify the billing charges itemized on each statement, then the customer must maintain a copy of the sales receipt for each transaction. Furthermore, if the customer wishes to return a good or service, then the customer must sort through his or her receipts to find the receipt for the specific transaction. Mobile commerce transactions may occur on a mobile carriers' network infrastructure and may not require use of the vendor's financial instruments. Nevertheless, consumers still receive separate bills for each individual financial settlement mechanism and may still be forced to retain copies of receipts (paper or electronic). 
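The metering, rating, and as-needed back-end synchronization described above can be sketched in a few lines. This is a hypothetical illustration, not the patented implementation: the class name, rate table, and sync method are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceRatingEngine:
    """Toy model of an on-device rating engine: events are metered and
    rated locally against a prepaid balance, and rated events are handed
    to a back-end system only when a sync is requested."""
    balance: float                                # prepaid balance held on the device
    rate_table: dict                              # event type -> price per unit
    pending: list = field(default_factory=list)   # rated events awaiting sync

    def rate_event(self, event_type: str, units: float) -> float:
        """Meter and rate one event locally, debiting the on-device balance."""
        charge = self.rate_table[event_type] * units
        if charge > self.balance:
            raise ValueError("insufficient prepaid balance")
        self.balance -= charge
        self.pending.append({"type": event_type, "units": units, "charge": charge})
        return charge

    def sync_to_backend(self) -> list:
        """On an as-needed basis, hand rated events to the back-end and clear them.
        In a real system this batch would be transmitted over the network."""
        batch, self.pending = self.pending, []
        return batch

engine = DeviceRatingEngine(balance=10.0, rate_table={"sms": 0.05, "data_mb": 0.02})
engine.rate_event("sms", 2)             # debits ~0.10 locally, no network call
engine.rate_event("data_mb", 50)        # debits ~1.00 locally
print(round(engine.balance, 2))         # 8.9
print(len(engine.sync_to_backend()))    # 2
```

The point of the sketch is the division of labor: rating happens entirely on the device, and the network is touched only to communicate results.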
Additionally, the network providers usually do not actively participate in mobile commerce transactions, but rather merely facilitate the transaction by serving as a network pipe through which opaque data may travel. For this reason, the reality for network infrastructure providers, such as wireless carriers as well as Mobile Virtual Network Operators (MVNOs), is that they are cut out of the value chain during financial transactions that do not directly involve either the carrier-managed balance or the carrier's transmission network. Moreover, they are essentially providing a mobile commerce platform from which others will benefit.
Our best books are on display at the Chicago Rare Book Center, 703 Washington St., Evanston, IL 60202. Books are first editions unless otherwise noted. Dust jackets mentioned when present. All items returnable (within 10 days). Payment should accompany orders; institutions will be billed. UPS is $7.00 for the first book and $1.00 for each book thereafter. Foreign orders sent by Priority Mail; postage will be based on cost. Credit cards, checks, IMOs, and PayPal are all welcome.
A report over the weekend, which was the result of a state filing, showed that some Republican members of the Senate may have been receiving stipends that they did not rightfully deserve. Sen. Pam Helming (R-54) fired back at the accusations, which named her as a recipient of a check for her role as Vice Chairman of the Crimes and Corrections Committee. That stipend would reach $12,500 in 2017. “I’ve received a couple checks, but they haven’t been cashed,” Sen. Helming explained in a phone conversation Monday afternoon. “I have not, and will not be, accepting a stipend for serving as Vice Chair of the Crimes and Corrections Committee.” It’s a practice that has always had a cloud of confusion around it. According to filings, senators not actually serving as committee chairpersons have received stipends that are typically reserved for those individuals. Senate Republicans have said that there’s nothing illegal about the practice itself. Sen. Helming said that her office is working with the New York State Comptroller’s Office to determine what they can do legally with the checks. “I’m not comfortable with accepting a check like that,” she added during the conversation on Monday. She said that her office would explore donating the funds to a non-profit if the Comptroller’s Office would not accept them. Sen. Patrick Gallivan (R-59) chairs the committee for which Sen. Helming received the stipends. She said her office would be releasing a full statement later in the day on Monday.
Digital currency [Figure: Taxonomy of money, based on "Central bank cryptocurrencies" by Morten Linnemann Bech and Rodney Garratt] Digital currency (digital money, electronic money or electronic currency) is a type of currency available in digital form (in contrast to physical, such as banknotes and coins). It exhibits properties similar to physical currencies, but can allow for instantaneous transactions and borderless transfer-of-ownership. Examples include virtual currencies and cryptocurrencies[1] and central bank issued money accounted for in a computer database (including digital base money). Like traditional money, these currencies may be used to buy physical goods and services, but may also be restricted to certain communities such as for use inside an online game[2] or social network.[3] Digital currency is a money balance recorded electronically on a stored-value card or other devices. Another form of electronic money is network money, allowing the transfer of value on computer networks, particularly the Internet. Electronic money is also a claim on a private bank or other financial institution such as bank deposits.[4] Digital money can either be centralized, where there is a central point of control over the money supply, or decentralized, where the control over the money supply can come from various sources. History In 1983, a research paper by David Chaum introduced the idea of digital cash.[5] In 1990, he founded DigiCash, an electronic cash company, in Amsterdam to commercialize the ideas in his research.[6] It filed for bankruptcy in 1998.[7][8] e-gold was the first widely used Internet money, introduced in 1996, and grew to several million users before the US Government shut it down in 2008. Users of the e-gold mailing list used the term "digital currency" to describe peer-to-peer payments in various instruments.
[9][6] In 1997, Coca-Cola offered buying from vending machines using mobile payments.[10] PayPal launched its USD-denominated service in 1998.[11] In 2009, bitcoin was launched, which marked the start of decentralized blockchain-based digital currencies with no central server and no tangible assets held in reserve. Also known as cryptocurrencies, blockchain-based digital currencies proved resistant to attempts by governments to regulate them, because there was no central organization or person with the power to turn them off.[12] Origins of digital currencies date back to the 1990s dot-com bubble. Another known digital currency service was Liberty Reserve, founded in 2006; it let users convert dollars or euros to Liberty Reserve Dollars or Euros and exchange them freely with one another at a 1% fee. Several digital currency operations were reputed to be used for Ponzi schemes and money laundering, and were prosecuted by the U.S. government for operating without MSB licenses.[13] Q coins, or QQ coins, were used as a type of commodity-based digital currency on Tencent QQ's messaging platform and emerged in early 2005. Q coins were so effective in China that they were said to have had a destabilizing effect on the Chinese yuan due to speculation.[14] Recent interest in cryptocurrencies has prompted renewed interest in digital currencies, with bitcoin, introduced in 2008, becoming the most widely used and accepted digital currency.
Comparisons Digital versus virtual currency According to the European Central Bank's 2015 "Virtual currency schemes – a further analysis" report, virtual currency is a digital representation of value, not issued by a central bank, credit institution or e-money institution, which, in some circumstances, can be used as an alternative to money.[15] In the previous report of October 2012, virtual currency was defined as a type of unregulated, digital money, which is issued and usually controlled by its developers, and used and accepted among the members of a specific virtual community.[16] According to the Bank for International Settlements' November 2015 "Digital currencies" report, it is an asset represented in digital form and having some monetary characteristics.[17] Digital currency can be denominated in a sovereign currency and issued by an issuer responsible for redeeming the digital money for cash. In that case, digital currency represents electronic money (e-money). Digital currency denominated in its own units of value, or with decentralized or automatic issuance, is considered a virtual currency. As such, bitcoin is a digital currency but also a type of virtual currency. Bitcoin and its alternatives are based on cryptographic algorithms, so these kinds of virtual currencies are also called cryptocurrencies. Digital versus traditional currency Most of the traditional money supply is bank money held on computers. This is also considered digital currency. One could argue that our increasingly cashless society means that all currencies are becoming digital, but they are not presented to us as such.[18] Types of systems Centralized systems Mobile digital wallets A number of electronic money systems use contactless payment transfer in order to facilitate easy payment and give the payee more confidence in not letting go of their electronic wallet during the transaction.
In January 2010, Venmo launched as a mobile payment system through SMS, which transformed into a social app where friends can pay each other for minor expenses like a cup of coffee, rent, or a share of a restaurant bill when someone forgets their wallet.[20] It is popular with college students, but has some security issues.[21] It can be linked to a bank account or credit/debit card, or hold a loaded value to limit the amount of loss in case of a security breach. Credit cards and non-major debit cards incur a 3% processing fee.[22] On September 19, 2011, Google Wallet was released in the United States to make it easy to carry all of one's credit/debit cards on a phone.[23] The UK's O2 launched O2 Wallet[25] at about the same time. The wallet can be charged from regular bank accounts or cards and discharged at participating retailers using a technique known as 'money messages'. The service closed in 2014. Virtual currency A virtual currency was defined in 2012 by the European Central Bank as "a type of unregulated, digital money, which is issued and usually controlled by its developers, and used and accepted among the members of a specific virtual community".[16] The US Department of Treasury in 2013 defined it more tersely as "a medium of exchange that operates like a currency in some environments, but does not have all the attributes of real currency".[31] The key attribute a virtual currency does not have, according to these definitions, is the status of legal tender. Law Since 2001, the European Union has implemented the E-Money Directive "on the taking up, pursuit and prudential supervision of the business of electronic money institutions", last amended in 2009.[32] Doubts about the real nature of EU electronic money have arisen, since calls have been made in connection with the 2007 EU Payment Services Directive in favor of merging payment institutions and electronic money institutions.
Such a merger could mean that electronic money is of the same nature as bank money or scriptural money. Securities and Exchange Commission guidance New York state regulation In July 2014, the New York State Department of Financial Services proposed the most comprehensive regulation of virtual currencies to date, commonly called BitLicense.[37] Unlike the US federal regulators, it gathered input from bitcoin supporters and the financial industry through public hearings and a comment period until 21 October 2014 to customize the rules. The proposal, per the NY DFS press release, "sought to strike an appropriate balance that helps protect consumers and root out illegal activity".[citation needed] It has been criticized by smaller companies for favoring established institutions, and Chinese bitcoin exchanges have complained that the rules are "overly broad in its application outside the United States".[38] Hong Kong’s Octopus card system: Launched in 1997 as an electronic purse for public transportation, it is the most successful and mature implementation of contactless smart cards used for mass transit payments. After only 5 years, 25 percent of Octopus card transactions were unrelated to transit, and the card was accepted by more than 160 merchants.[40] London Transport’s Oyster card system: Oyster is a plastic smartcard which can hold pay-as-you-go credit, Travelcards and Bus & Tram season tickets. An Oyster card can be used to travel on bus, Tube, tram, DLR, London Overground and most National Rail services in London.[41] Japan’s FeliCa: A contactless RFID smart card, used in a variety of ways such as in ticketing systems for public transportation, e-money, and residence door keys.[42] The Netherlands' Chipknip: As an electronic cash system used in the Netherlands, all ATM cards issued by the Dutch banks had value that could be loaded via Chipknip loading stations. For people without a bank account, pre-paid Chipknip cards could be purchased at various locations in the Netherlands.
As of January 1, 2015, it is no longer possible to pay with Chipknip.[43] In March 2018, the Marshall Islands became the first country to issue its own cryptocurrency and certify it as legal tender; the currency is called the "sovereign".[45] Canada The Bank of Canada has explored the possibility of creating a version of its currency on the blockchain.[46] The Bank of Canada teamed up with the nation’s five largest banks and the blockchain consulting firm R3 for what was known as Project Jasper. In a simulation run in 2016, the central bank issued CAD-Coins onto a blockchain similar to Ethereum.[47] The banks used the CAD-Coins to exchange money the way they do at the end of each day to settle their master accounts.[47] China A deputy governor at the central bank of China, Fan Yifei, wrote that "the conditions are ripe for digital currencies, which can reduce operating costs, increase efficiency and enable a wide range of new applications".[47] According to Fan Yifei, the best way to take advantage of the situation is for central banks to take the lead, both in supervising private digital currencies and in developing digital legal tender of their own.[48] Denmark The Danish government proposed getting rid of the obligation for selected retailers to accept payment in cash, moving the country closer to a "cashless" economy.[49] The Danish Chamber of Commerce is backing the move.[50] Nearly a third of the Danish population uses MobilePay, a smartphone application for transferring money.[49] Ecuador A law passed by the National Assembly of Ecuador gives the government permission to make payments in electronic currency and proposes the creation of a national digital currency. "Electronic money will stimulate the economy; it will be possible to attract more Ecuadorian citizens, especially those who do not have checking or savings accounts and credit cards alone.
The electronic currency will be backed by the assets of the Central Bank of Ecuador", the National Assembly said in a statement.[51] In December 2015, Sistema de Dinero Electrónico ("electronic money system") was launched, making Ecuador the first country with a state-run electronic payment system.[52] Germany The German central bank is testing a functional prototype for the blockchain technology-based settlement of securities and transfer of centrally-issued digital coins.[53][54] India Unified Payments Interface (UPI) is an instant real-time payment system developed by the National Payments Corporation of India facilitating inter-bank transactions. The interface is regulated by the Reserve Bank of India and works by instantly transferring funds between two bank accounts on a mobile platform. UPI is built on the Immediate Payment Service (IMPS) for transferring funds. Being a digital payment system, it is available 24/7, including public holidays. Unlike traditional mobile wallets, which take a specified amount of money from the user and store it in their own accounts, UPI withdraws and deposits funds directly from the bank account whenever a transaction is requested. It uses a Virtual Payment Address (a unique ID provided by the bank), an Account Number with IFS Code, a Mobile Number with MMID (Mobile Money Identifier), an Aadhaar Number, or a one-time-use Virtual ID. A UPI PIN (a personal identification number that one creates on the bank's UPI app) is required to confirm each payment.
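The contrast drawn above between stored-value wallets and UPI's direct account-to-account debits can be sketched as a toy model. This is not the real UPI protocol: the Bank class, the VPA names, and the method names are all invented for illustration, and the NPCI switch, inter-bank messaging, and UPI PIN verification are omitted.

```python
class Bank:
    """Toy bank that resolves Virtual Payment Addresses (VPAs) and moves
    funds between accounts immediately, UPI-style, with no stored-value
    wallet in between."""
    def __init__(self):
        self.accounts = {}  # account number -> balance
        self.vpa_map = {}   # VPA -> account number

    def register_vpa(self, vpa: str, account: str, balance: float):
        self.accounts[account] = balance
        self.vpa_map[vpa] = account

    def upi_transfer(self, from_vpa: str, to_vpa: str, amount: float):
        """Resolve both VPAs and debit/credit the bank accounts directly
        at transaction time (no pre-loaded wallet balance)."""
        src = self.vpa_map[from_vpa]
        dst = self.vpa_map[to_vpa]
        if self.accounts[src] < amount:
            raise ValueError("insufficient funds")
        self.accounts[src] -= amount
        self.accounts[dst] += amount

bank = Bank()
bank.register_vpa("alice@examplebank", "ACC001", 500.0)
bank.register_vpa("bob@examplebank", "ACC002", 100.0)
bank.upi_transfer("alice@examplebank", "bob@examplebank", 150.0)
print(bank.accounts["ACC001"], bank.accounts["ACC002"])  # 350.0 250.0
```

The design point the sketch captures is that nothing is parked in an intermediary wallet: the sender's bank balance is debited and the receiver's credited in the same step.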
Russia Government-controlled Sberbank of Russia owns Yandex.Money, an electronic payment service and digital currency of the same name.[56] Russia’s President Vladimir Putin has signed off on regulation of ICOs and cryptocurrency mining by July 2018.[57] South Korea South Korea plans a national digital currency using a blockchain.[58] The chairman of South Korea’s Financial Services Commission (FSC), Yim Jong-yong, announced that his department will "lay the systemic groundwork for the spread of digital currency."[58] South Korea has already announced plans to discontinue coins by the year 2020.[59] Sweden Sweden is in the process of replacing all of its physical banknotes, and most of its coins, by mid-2017. However, the new banknotes and coins of the Swedish krona will probably be circulating at about half the 2007 peak of 12,494 kronor per capita. The Riksbank is planning to begin discussions of an electronic currency issued by the central bank, which "is not to replace cash, but to act as complement to it".[60] Deputy Governor Cecilia Skingsley states that cash will continue to decline in use in Sweden, and while it is currently fairly easy to get cash in Sweden, it is often very difficult to deposit it into bank accounts, especially in rural areas. No decision has yet been made about creating an "e-krona". In her speech, Skingsley states: "The first question is whether e-krona should be booked in accounts or whether the e-krona should be some form of a digitally transferable unit that does not need an underlying account structure, roughly like cash." Skingsley also states: "Another important question is whether the Riksbank should issue e-krona directly to the general public or go via the banks, as we do now with banknotes and coins." Other questions remain to be addressed, such as interest rates: should they be positive, negative, or zero? Switzerland In 2016, a city government first accepted digital currency in payment of city fees.
Zug, Switzerland, added bitcoin as a means of paying small amounts, up to 200 SFr., in a test and an attempt to advance Zug as a region that is advancing future technologies. In order to reduce risk, Zug immediately converts any bitcoin received into the Swiss currency.[61] Swiss Federal Railways, the government-owned railway company of Switzerland, sells bitcoins at its ticket machines.[62] UK The Chief Scientific Adviser to the UK government advised his Prime Minister and Parliament to consider using a blockchain-based digital currency.[63] The chief economist of the Bank of England, the central bank of the United Kingdom, proposed abolition of paper currency. The Bank has also taken an interest in bitcoin.[47][64] In 2016 it embarked on a multi-year research programme to explore the implications of a central bank issued digital currency.[39] The Bank of England has produced several research papers on the topic. One suggests that the economic benefits of issuing a digital currency on a distributed ledger could add as much as 3 percent to a country's economic output.[47] The Bank said that it wanted the next version of the bank’s basic software infrastructure to be compatible with distributed ledgers.[47] Ukraine The National Bank of Ukraine is considering the creation of its own issuance/turnover/servicing system for a blockchain-based national cryptocurrency.[65] The regulator also announced that blockchain could be a part of a national project called "Cashless Economy".[65] Adoption by financial actors Government attitude dictates the tendency among established heavy financial actors, which are both risk-averse and conservative. None of these offered services around cryptocurrencies, and much of the criticism came from them.
The first mover among these has been Fidelity Investments: Boston-based Fidelity Digital Assets LLC will provide "enterprise-grade custody solutions, a cryptocurrency trading execution platform and institutional advising services 24 hours a day, seven days a week designed to align with blockchain's always-on trading cycle".[66] It will work with Bitcoin and Ethereum, with general availability scheduled for 2019. Hard vs. soft digital currencies Hard electronic currency does not have the ability to be disputed or reversed when used. It is nearly impossible to reverse a transaction, justified or not. It is very similar to cash. Advantages of this system include it being cheaper to operate, and transactions being instantaneous. Western Union, KlickEx and Bitcoin are examples of this type of currency.[67] Soft electronic currencies are the opposite of hard electronic currencies. Payments can be reversed. Usually, when a payment is reversed there is a "clearing time." This can take 72 hours or more. Examples of soft currencies are PayPal and any type of credit card. A hard currency can be "softened" with a third-party service.[67] Criticism Many existing digital currencies have not yet seen widespread usage, and may not be easily used or exchanged. Banks generally do not accept or offer services for them.[68] There are concerns that cryptocurrencies are extremely risky due to their very high volatility[69] and potential for pump-and-dump schemes.[70] Regulators in several countries have warned against their use and some have taken concrete regulatory measures to dissuade users.[71] The non-cryptocurrencies are all centralized.
As such, they may be shut down or seized by a government at any time.[72] The more anonymous a currency is, the more attractive it is to criminals, regardless of the intentions of its creators.[72] Forbes writer Tim Worstall has written that the value of bitcoin is largely derived from speculative trading.[73] Bitcoin has also been criticised for its energy-inefficient SHA-256-based proof of work.[74]
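The SHA-256-based proof of work referred to in that criticism can be demonstrated in a few lines: a miner must search for a nonce whose hash of (block data + nonce) falls below a difficulty target, which is exactly the repeated-hashing work that makes mining energy-intensive, while verifying a found nonce takes a single hash. This is a minimal sketch with a toy difficulty, not Bitcoin's actual block header format.

```python
import hashlib

def proof_of_work(data: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so that SHA-256(data + nonce) has at least
    difficulty_bits leading zero bits. Expected work: ~2**difficulty_bits
    hash attempts; verification needs only one hash."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Mining is expensive (many hash attempts)...
nonce = proof_of_work(b"block header", 12)
# ...but verification is cheap (one hash):
digest = hashlib.sha256(b"block header" + nonce.to_bytes(8, "big")).digest()
assert int.from_bytes(digest, "big") < 2 ** (256 - 12)
```

Real Bitcoin difficulty requires on the order of 2**70+ attempts per block, which is the source of the energy criticism; the 12-bit target here finishes in milliseconds.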
Colorectal cancer screening--optimizing current strategies and new directions. The first evidence that screening for colorectal cancer (CRC) could effectively reduce mortality dates back 20 years. However, actual population screening has, in many countries, halted at the level of individual testing and discussions on differences between screening tests. With a wealth of new evidence from various community-based studies looking at test uptake, screening-programme organization and the importance of quality assurance, population screening for CRC is now moving into a new realm, promising better results in terms of reducing CRC-specific morbidity and mortality. Such a shift in the paradigm requires a change from opportunistic, individual testing towards organized population screening with comprehensive monitoring and full-programme quality assurance. To achieve this, a combination of factors--including test characteristics, uptake, screenee autonomy, costs and capacity--must be considered. Thus, evidence from randomized trials comparing different tests must be supplemented by studies of acceptance and uptake to obtain the full picture of the effectiveness (in terms of morbidity, mortality and cost) the different strategies have. In this Review, we discuss a range of screening modalities and describe the factors to be considered to achieve a truly effective population CRC screening programme.
On Wednesday night’s episode of The Daily Show with Jon Stewart, Liam Neeson (star of the Taken movies, Schindler’s List, and Battleship) revealed why he was a little “pissed off” at New York City Mayor Bill de Blasio. The 61-year-old actor and New York resident wasn’t mad at the new mayor for his socialist past or his leadership during winter storms. Neeson was upset over horses. “He wants to close this horse and carriage industry in New York,” Neeson said, referring to the mayor’s goal to replace “inhumane” carriage horses with “vintage tourist-friendly vehicles in parks.” Neeson also accused animal rights groups of spreading “false information” about the treatment of the horses in the city. (Neeson, whose close friend is a New York horse and carriage owner, previously wrote an intensely punctuated open letter to de Blasio on how he was “appalled to learn of [de Blasio’s] intent to obliterate one of the most deep rooted icons of our city!”) Horse-drawn carriages have attracted controversy due to accusations of excessive harm to the animals. Carriage drivers of course vowed to fight a ban. Here is a clip of Stewart and Neeson’s mini-debate, via TMZ: This isn’t the only cause Neeson is passionate about. The actor—recently famous for playing a good-natured CIA torturer who massacres ethnic stereotypes who kidnap his daughter—has a long history of working with the United Nations Children’s Fund (UNICEF), including his work as a Goodwill Ambassador and his participation in a campaign to combat violence against children. And he once stripped almost completely naked to raise money for breast cancer research. But of all his causes, this one might be getting him the most press.
For years, Neeson has been vocal on the issue of New York’s horse-drawn carriages, to the point that the Daily Caller asked in January, “Will Liam Neeson stand in the way of Bill de Blasio’s horse carriage ban?” PETA has slammed Neeson over this. “Liam Neeson…has PETA wondering if one of his horses might have kicked him in the head,” the organization wrote. In 2009, he issued a letter to city officials to rage against the “coordinated attempt by animal activists and a certain Queens council member to ban the industry from the city.” Here’s part of the letter, which you can read while keeping Neeson’s voice in mind: As a horse lover and rider, I am deeply disturbed by the unnecessary and misguided political and extreme rhetoric against the horse-drawn carriage industry and feel obliged to counter this action. The horse-drawn carriage business is an iconic part of this city, employing hundreds of dedicated, hard-working men and women, caring for well-bred, well-trained horses and attracting tourists to New York City for over 100 years. As a proud New York resident, I have personally enjoyed the beauty of Central Park on a daily basis for many years, and these horses are an undeniable integral part of that experience. The notion that a well-nourished horse pulling a carriage through Central Park is considered cruelty may fit in with animal activists’ extremist view, but not with the rest of us. Surely we have a responsibility to protect commerce, especially one with such history, and one I truly feel helps define this city. May pragmatism prevail. In 2009, Neeson made another appearance on The Daily Show—and discussed horses and carriages: Neeson was also the star of the 2012 film The Grey, which was criticized by animal rights activists for smearing wolves as brutal and ravenous human-killers. Mayor de Blasio’s office did not immediately respond to a request for comment, perhaps out of fear of Liam Neeson.
Vote for Thursday’s anime here! Hey everyone! We showed episode 1 of Durarararararararararararararararararararara!!x2 Shou today for Shounen Week! Summary from MAL: “Half a year after the turmoil that rocked the […]
1. Field of the Invention The invention relates to a system for recording/reproducing signals on/from a magnetic tape in a cassette, which system comprises standard apparatuses and standard cassettes which are adapted to each other in conformity with a specific standard, which standard apparatuses comprise at least one standard locating pin for correctly positioning a standard cassette, and which standard cassettes have at least one standard locating aperture in which the standard locating pin is engageable. 2. Description of Related Art An example of a system of the type referred to above is the well-known Compact Cassette system in conformity with the international standard IEC 94-7. This system has been enormously successful, which has led to the sale of billions of cassettes and correspondingly huge quantities of associated apparatuses in the more than twenty years it has been on the market. Just like any other standard system the Compact Cassette system leaves only limited room for improvements and innovations. This is because the standard prescribes the principal electrical, magnetic, mechanical and format parameters, so that within the standard there are no or only limited possibilities of deviating from these parameters which are essential for the compatibility between the cassettes and apparatuses belonging to the system. The Compact Cassette system has been designed for analog recording and reproducing of audio signals. In recent years there has been an enormous evolution in digital technologies for recording and reproducing audio signals. Digital technologies have enabled a substantially higher quality of sound reproduction to be achieved, so that in this respect the Compact Cassette system may be regarded as outmoded. 
In order to meet the consumer's demand for a system of recording/reproducing digital audio signals on magnetic-tape cassettes, new systems have been proposed in conformity with a new standard, comprising novel cassettes and associated novel apparatuses. In principle, old apparatuses and old cassettes in conformity with an existing standard might be used for the novel digital audio system, requiring the necessary modifications to the apparatuses and perhaps the use of a different magnetic tape. However, the resulting confusion amongst consumers, who would no longer know which cassettes to use in which apparatuses, would be likely to invoke an antagonistic response from the consumer, not only with respect to the new but even with respect to the old system. The new standard deviates from the old standard. This makes it necessary for the consumer to purchase new equipment with associated new cassettes. This represents a considerable investment, in particular because the manufacture of both the new equipment and the new cassettes requires investments from the manufacturer, leading to higher prices, in particular upon the introduction of the new system. The manufacturer needs time to gain experience with the manufacture of the new products. Therefore, cutbacks in costs through rationalisation and scaled-up production are to be envisaged only after a certain period of time. For basically the same reasons, new systems, such as new digital audio systems but also other new systems, may fail already in the initial stage. The large initial investments made in the development and the start of production may then be regarded as lost, at least partly.
Q: Displaying the logged-in user's image always shows the Rows[0] image

When I log in I want to display the image of the current user. I use Rows[0], but when I log in as the user at row[1] it still shows me the row[0] image. How do I find out which row belongs to the user who is logged on? I am a beginner here, so take it slow.

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    if (Session["Username"] != null)
    {
        String a = ConfigurationManager.ConnectionStrings["MyDatabase"].ConnectionString;
        using (SqlConnection con = new SqlConnection(a))
        {
            DataTable dt = new DataTable();
            SqlDataAdapter comanda = new SqlDataAdapter("SELECT * FROM Register", con);
            comanda.Fill(dt);
            if (dt.Rows.Count > 0)
            {
                emailutilizator.Text = dt.Rows[0]["Email"].ToString();
                if (dt.Rows[0]["ImageData"].ToString().Length > 1)
                {
                    Image1.ImageUrl = dt.Rows[0]["ImageData"].ToString();
                }
                else
                {
                    Image1.ImageUrl = "~/images/defaultuserimg.png";
                }
            }
        }
    }
}
```

The table looks like this:

```csharp
CREATE TABLE [dbo].[Register] (
    [Uid]       INT            IDENTITY (1, 1) NOT NULL,
    [Username]  NVARCHAR (MAX) NULL,
    [Email]     NVARCHAR (MAX) NULL,
    [Password]  NVARCHAR (MAX) NULL,
    [ImageData] NVARCHAR (MAX) NULL,
    PRIMARY KEY CLUSTERED ([Uid] ASC)
);
```

A: First, add a WHERE condition to your SQL statement so that only the logged-in user's row is returned:

```csharp
SqlDataAdapter comanda = new SqlDataAdapter(
    "SELECT * FROM Register WHERE Username = @username", con);
comanda.SelectCommand.Parameters.AddWithValue("@username", Session["Username"].ToString());
```

Then Rows[0] will always be the current user's row. Because the query is parameterized, this solution also protects against SQL injection.
Why is Najib the only one of six Prime Ministers to sanction, condone and defend the totally indefensible rabid racist statement of one of his Ministers? The rabid racist statement by the Minister for Agriculture and Agro-based Industry Datuk Seri Ismail Sabri Yaakob calling on Malay consumers to boycott Chinese businesses has snowballed from the aberration of one errant Minister to a crisis of an entire errant Cabinet of 35 Ministers because of the extraordinary and outrageous decision by the Cabinet to sanction, condone and defend Ismail’s racist fulminations. Today is the 112th anniversary of the birth of the first Prime Minister, Tunku Abdul Rahman. I have no doubt that if the Ismail Sabri episode had happened in Tunku’s time as Prime Minister, Ismail would have been made to apologise for his racist fulminations and might even have been sacked from Tunku’s Cabinet. This was why in my statement six days ago on 2nd February, I had said: “If a past Minister had done what Ismail did under the first three Prime Ministers, Tunku Abdul Rahman, Tun Razak and Tun Hussein Onn, he would have been sacked on the spot immediately after the expression of such racist sentiments, for it would be conclusive proof of his total unsuitability to continue as a Cabinet Minister in a plural society. “I think such a Minister would have been sacked by Tun Dr. Mahathir during his 22 years of premiership although Mahathir may now look for excuses to come to Ismail’s defence or rescue.” Mahathir had proven me wrong, for he had publicly expressed his disapproval of Ismail’s racist fulminations, rightly pointing out that the rising cost of living had nothing to do with ethnicity. But Mahathir had tried to soften the blow of his disapproval by saying: “Maybe he didn’t think (about) what he said. 
If he thinks carefully, he would not say boycott Chinese goods.” Mahathir said Malay and Indian businesses were also not reducing their prices and not all traders are Chinese, stressing “This is not about Chinese or Malay, this is about oil prices going down, but goods prices are not going down.” I do not think that under Tun Abdullah’s premiership there would have been a place for such a rank racist in the Cabinet. Why then is Najib the only one of six Prime Ministers to sanction, condone and defend Ismail’s racist fulminations when the other five Prime Ministers before him would never have done such a thing? Is this because he is the weakest Prime Minister in the nation’s 58-year history? All the other 34 Ministers have done a great disservice to their Ministerial offices and their own credibility and reputation in lining up behind Ismail to deny the undeniable and defend the indefensible: that the Agriculture and Agro-based Industry Minister had crossed the line of what is permissible for a Minister by making a rabid racist call on a matter which has nothing to do with ethnicity. Ismail had in fact betrayed his oath of Ministerial office, to serve all Malaysians regardless of race, religion or region! In sanctioning, condoning and defending Ismail’s racist fulminations, all the other 34 Cabinet Ministers, regardless of their political party of origin, have also betrayed their oath of Ministerial office! The Cabinet on Wednesday (Feb. 11) should revisit the Ismail Sabri case, and it must be made clear to Ismail that he should unconditionally retract and apologise for this baseless racist fulmination, or he should be sacked from the Cabinet, if not as a decision of the Prime Minister, then as a decision of the Cabinet by way of a Cabinet resolution! Malaysia cannot have a Minister who lives in the world of “double think”, “double talk” and “double act”! Now the country is also in danger of having a Cabinet of “double think”, “double talk” and “double act”. 
Wednesday is the last opportunity for the Cabinet to prove that the 34 Ministers have pulled back from the precipice and have not joined the ranks of Ismail Sabri in “double think”, “double talk” and “double act”. But rest assured, even if there is a Minister or a Cabinet of “double think”, “double talk” and “double act”, the overwhelming majority of Malaysians, regardless of race, religion or region, will not be subdued or cowed into a people of “double think”, “double talk” and “double act”, for they will continue to be decent, sensible and moderate human beings who will not allow the rhetoric and politics of hate, intolerance, bigotry and extremism to govern their lives!
Q: Empty all input fields in a tr in jQuery?

    <tr>
        <td>X</td>
        <td><input type="text" value="blabla"></td>
        <td><input type="text" value="blabla"></td>
        <td><input type="text" value="blabla"></td>
        <td><input type="text" value="blabla"></td>
    </tr>
    <tr>
        <td>X</td>
        <td><input type="text" value="again"></td>
        <td><input type="text" value="again"></td>
        <td><input type="text" value="again"></td>
        <td><input type="text" value="again"></td>
    </tr>

How can I, in jQuery, make clicking the X empty the input fields that are inside that tr?

A: Change the cell with X to this:

    <td><a href="#" class="click-here">X</a></td>

Then put this inside script tags on the page:

    jQuery(document).ready(function() {
        jQuery('.click-here').click(function() {
            jQuery(this).parents('tr').find('input').val('');
        });
    });

UPDATE: Made the jQuery selector based on a class name rather than an ID so it's more reusable.
Q: What are the differences between the various boost ublas sparse vectors?

In boost::numeric::ublas there are three sparse vector types. I can see that mapped_vector is essentially a std::map from index to value, which treats all not-found values as 0 (or whatever the common value is). But the documentation is sparse (ha ha) on information about compressed_vector and coordinate_vector. Is anyone able to clarify? I'm trying to figure out the algorithmic complexity of adding items to the various vectors, and also of dot products between two such vectors. A very helpful answer offered that compressed_vector is very similar to compressed_matrix. But it seems that, for example, compressed row storage is only for storing matrices, not just vectors. I see that unbounded_array is the storage type, but I'm not quite sure what the specification is for that, either. If I create a compressed_vector with size 200,000,000 but with only 5 non-zero locations, is this less efficient in any way than creating a compressed_vector with size 10 and 5 non-zero locations? Many thanks!

A: Replace "matrix" with "vector" and you have the answers: http://www.guwi17.de/ublas/matrix_sparse_usage.html
This book amply fulfils the authors' intent of defining a practical approach to every aspect of drug-resistant tuberculosis (TB), especially multi-drug-resistant (MDR) and extensively drug-resistant (XDR) TB. The book details various aspects of drug-resistant TB, encompassing management, current status, treatment strategies, and associated national and international programmes, *etc*., discussed across 27 chapters. It covers the diagnosis and management of MDR and XDR TB both at the public health care facility, as per the National TB Control Programme Guidelines, and as individual case-based management by private providers. The authors' own experiences in managing TB patients as experts in the field have also been incorporated at appropriate places for easy understanding. Of particular interest to readers is chapter 12, where MDR TB is discussed with a case-based approach. Furthermore, chapter 15, where the current status of XDR TB is discussed, is of value as it gives a holistic view and updates the reader about the current status of XDR TB globally. Along the same lines, tuberculosis control in India, the related framework, enrolment programmes, *etc*. are discussed in chapter 25. Overall, this is a well-written and easily readable textbook, useful for students, experts and private practitioners managing TB cases in their day-to-day practice.
Last Thursday, the nation watched with a mix of amusement and horror as the House Benghazi committee spent 11 hours grilling Hillary Clinton on a bizarre farrago of issues, many of which bore only tangential connection to the Benghazi attack. Over the past few weeks, the political narrative seems to have shifted from "Clinton in trouble" to "congressional witch hunt seeks to take down Clinton." Between McCarthy's accidental truth telling, an ex-staffer confirming the worst reports about the committee, and another House Republican conceding the obvious, it has become clear that the Benghazi committee is a thoroughly partisan political endeavor. Opinion has turned, but Republicans are trapped.
Argument from fallacy An argument from fallacy, also known as the fallacy fallacy, is a formal fallacy which occurs when one analyzes an argument and assumes that, because the argument contains a logical fallacy, the conclusion of that argument must be false. It is also commonly referred to as the fallacist's fallacy. Form The form of the argument from fallacy requires a meta-argument, that is, an argument about the claims of an argument. The issue here is that while the presence of a fallacy is sufficient to render argument A invalid, it does not make its conclusion C false. Rather, the truth value of C is unknown, because there is no valid argument as to whether C is true or false.
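For concreteness, the schema referred to above can be sketched as follows (my reconstruction of the standard presentation, writing A for the analyzed argument and C for its conclusion):

```latex
\[
\begin{array}{l}
\text{Argument $A$ concludes $C$.}\\
\text{Argument $A$ contains a logical fallacy.}\\
\therefore\ \lnot C
\quad\text{(invalid: the truth value of $C$ remains unknown)}
\end{array}
\]
```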
Effect of regeneration and hyperplasia on levels of DNA base oxidation in rat liver. Elevations of oxidatively modified DNA bases have been associated with a variety of carcinogens and tumor promoters, and implicated in causation of cancer. Since carcinogen exposure can induce cell proliferation, the relationship between induction of cell proliferation and levels of DNA base oxidation was examined. Cell proliferation was induced in livers of male F344 rats by stimuli of either regeneration or hyperplasia. Levels of DNA base oxidation were evaluated by measuring 8-OH-deoxyguanosine/deoxyguanosine (8-OHdG/dG) ratios by HPLC in enzymatic digests of DNA isolates. Despite induction of cell proliferation, hepatic levels of 8-OHdG/dG were not increased at 1, 2, 3 or 5 days after any of these treatments. Results of the present work suggest that the mechanism of elevated levels of DNA base oxidation is not directly related to induction of cell proliferation.
That is, we WOULD, if it was ever to be truly investigated, which it won’t be. The Donks, for obvious reasons, don’t want any digging around in their Tranzi Dream Organization and the Republicans, led by George the Wimp’s “New Tone in Washington”, won’t be pushing too hard either. Expect this whole thing to die down in a few months with the usual wiping it off on a couple of unimportant janitorial positions in the UN while George lines up to lick their sweaty nutsacks again.
Molecular systematics of two sister clades, the Fusarium concolor and F. babinda species complexes, and the discovery of a novel microcycle macroconidium-producing species from South Africa. Multilocus DNA sequence data were used to investigate species identity and diversity in two sister clades, the Fusarium concolor (FCOSC) and F. babinda species complexes. Of the 109 isolates analyzed, only 4 were received correctly identified to species and these included 1/46 F. concolor, 1/31 F. babinda, and 2/3 F. anguioides. The majority of the F. concolor and F. babinda isolates were received as F. polyphialidicum, which is a heterotypic synonym of the former species. Previously documented from South America, Africa, Europe, and Australia, our data show that F. concolor is also present in North America. The present study expands the known distribution of F. babinda in Australia to Asia, Europe, and North America. The molecular phylogenetic results support the recognition of a novel Fusarium species within the FCOSC, which is described and illustrated here as F. austroafricanum, sp. nov. It was isolated as an endophyte of kikuyu grass associated with a putative mycotoxicosis of cattle and from plant debris in soil in South Africa. Fusarium austroafricanum is most similar morphologically to F. concolor and F. babinda but differs from the latter two species in producing (i) much longer macroconidia in which the apical cell is blunt to slightly papillate and the basal cell is only slightly notched and (ii) macroconidia via microcycle conidiation on water agar. BLASTn searches of the whole genome sequence of F. austroafricanum NRRL 53441 were conducted to predict mycotoxin potential, using genes known to be essential for the synthesis of several mycotoxins and biologically active metabolites. Based on the presence of intact gene clusters that confer the ability to synthesize mycotoxins and pigments, we analyzed cracked corn kernel cultures of F. 
austroafricanum via liquid chromatography-mass spectrometry (LC-MS) but failed to detect these metabolites in vitro.
Oh my god. The Power Rangers reboot has officially announced that the show’s classic, scenery-chewing villainess Rita Repulsa will be played by Elizabeth Banks. This is either going to be ridiculous, or a chance to see Banks go absolutely full-on bonkers. As one of the primary antagonists of Mighty Morphin’ Power Rangers, Rita Repulsa has long been rumored to be part of the Power Rangers adaptation. But now it’s finally been confirmed with Banks’ casting, joining Naomi Scott, Ludi Lin, Becky G, RJ Cyler, and Dacre Montgomery, who will play the titular Rangers. There’s not much out there about her role, but we can presume that she wants to conquer Earth, because that’s what Rita Repulsa always wants. Banks herself took to Twitter to confirm the casting, quoting Rita’s classic, cheese-oozing declaration from the original opening to Mighty Morphin’ Power Rangers. You know, I’m looking over those old rumors of Rita Repulsa wanting to rob banks on a quest for gold that floated around last year, and I’m trying to imagine Elizabeth Banks coveting gold. This could be an amazing disaster. [THR]
Q: A domain I've never heard of resolves to my website

I've discovered (via looking at mod_pagespeed cache entries) that a completely random domain I've never heard of before is resolving to my website. If I visit this domain, my website loads. The DNS for that domain is pointing to my server's IP. Right now in my vhost config I have *:80, which I'm guessing is where I'm going wrong. I immediately changed this to example.com:80, where example.com is my domain, assuming this would mean the server would only respond to and fulfil requests for my domain name, rather than any request on port 80.

My original vhost config:

    <VirtualHost *:80>
        DocumentRoot "/var/www/example.com"
        <Directory "/var/www/example.com">
            Order allow,deny
            Allow from all
            AllowOverride all
        </Directory>
    </VirtualHost>

My new attempted config:

    Listen 80
    ServerName example.com

    <VirtualHost example.com:80>
        DocumentRoot "/var/www/example.com"
        <Directory "/var/www/example.com">
            Order allow,deny
            Allow from all
            AllowOverride all
        </Directory>
    </VirtualHost>

When I tried to restart Apache with the new config I got the following error:

    * Restarting web server apache2
    [Fri Mar 28 08:55:47.821904 2014] [core:error] [pid 5555] (EAI 2)Name or service not known: AH00549: Failed to resolve server name for 152.155.254.241 (check DNS) -- or specify an explicit ServerName
    (98)Address already in use: AH00072: make_sock: could not bind to address [::]:80

Note: The IP beginning 152 in the above error has been slightly edited, but the original wasn't my server's IP address anyway.

Can anyone offer advice on this issue? Is the domain (actually there are a couple) that is resolving to my website innocently just the previous user of this dedicated server, whose DNS is simply still pointing to it? How can I resolve the Apache virtual host config issue? Any other advice is welcome. Thanks.

A: There's probably no harm in having those other domains pointing to your host, except of course that it increases the load on your server. 
If you want to block them, set up new virtual hosts for them:

    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName example.com
        # example.com configuration
    </VirtualHost>

    <VirtualHost *:80>
        ServerName baddomain.com
        Deny from all
    </VirtualHost>

Instead of Deny from all you could use Redirect permanent /error.html to show them a custom error message. You could repeat the second VirtualHost for each domain you want to block, or if there are a lot of them, put it first to make it the default VirtualHost, and make exceptions for your domain(s):

    NameVirtualHost *:80

    <VirtualHost *:80>
        # default VirtualHost
        Deny from all
    </VirtualHost>

    <VirtualHost *:80>
        ServerName example.com
        # example.com config
    </VirtualHost>

As for your error messages: it seems that Apache couldn't resolve the hostname example.com when it started, or couldn't find your ServerName directive; I'm not sure why. The second error says that port 80 is already in use on your host. Did you finish shutting down all of the previous instances of Apache?

A: Apache uses the first domain you define as a sort of default. If you want to serve up myowndomain.com with the content you desire and give all other domains some other behavior (perhaps redirecting to the corresponding page on your preferred domain), define the "catchall" domain first, handle its traffic appropriately (I recommend redirecting to your real domain), and then define your real domain etc. with subsequent VirtualHosts.

A: Regarding "where the heck did this other name for my website come from": anyone can put any A record they like at any name below a domain they control. You can investigate a little by using whois to find out who has registered the second-level domain containing the offending DNS name.
Q: Javascript variable scope when dynamically referencing functions

I'm trying to create a function that creates a "proxy" around an object's methods. This "proxy" will be used to handle web requests and then call the target method. The proxy function looks similar to this:

    var proxy = function(c) {
        var proxy = {};
        for (var member in c) {
            var args = c[member].toString()
                .match(/function\s*\w*\s*\((.*?)\)/)[1]
                .split(/\s*,\s*/);
            proxy[member] = function(params) {
                var methodArgs = args.map(function(argName) {
                    return params[argName];
                });
                return c[member].apply(c, methodArgs);
            };
        }
        return proxy;
    };

So if I have this original controller,

    var c = {
        sum: function(x, y) { return x + y; },
        multiply: function(x, y) { return x * y; }
    };

calling proxy(c) on it will return a proxy object with sum() and multiply() functions. However, because of the scope of the member variable in the proxy() function, it will always call the last enumerated function in c, in this case multiply().

    var cProxy = proxy(c);
    // this should call c.sum, but instead calls c.multiply
    cProxy.sum({ x: 3, y: 8 });

How would I reference the right function in proxy() so that the right method gets called?

A: The following worked for me; just create a closure over member:

    var proxy = function(c) {
        var proxy = {};
        for (var member in c) {
            !function(member) {
                var args = c[member].toString()
                    .match(/function\s*\w*\s*\((.*?)\)/)[1]
                    .split(/\s*,\s*/);
                proxy[member] = function(params) {
                    var methodArgs = args.map(function(argName) {
                        return params[argName];
                    });
                    return c[member].apply(c, methodArgs);
                };
            }(member);
        }
        return proxy;
    };

    console.log(cProxy.sum({ x: 3, y: 8 }));      // returns 11
    console.log(cProxy.multiply({ x: 3, y: 8 })); // returns 24
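The accepted answer's fix works because the immediately-invoked function gives each loop iteration its own binding of member. As an aside (an illustration I am adding, not part of the original answer), the capture problem and its two standard fixes can be reduced to a few lines, independent of the proxy code:

```javascript
// The bug: `var key` is a single binding shared by every closure
// created in the loop, so each closure sees the final value of `key`.
function buildWithSharedBinding(obj) {
  var fns = {};
  for (var key in obj) {
    fns[key] = function () { return key; }; // all closures share `key`
  }
  return fns;
}

// Fix 1: an IIFE copies the current value into a per-iteration
// parameter `k` (this is the approach used in the answer above).
function buildWithIife(obj) {
  var fns = {};
  for (var key in obj) {
    (function (k) {
      fns[k] = function () { return k; };
    })(key);
  }
  return fns;
}

// Fix 2: `const`/`let` in the loop head creates a fresh binding per
// iteration (ES2015+), so no wrapper function is needed.
function buildWithLet(obj) {
  const fns = {};
  for (const key in obj) {
    fns[key] = function () { return key; };
  }
  return fns;
}

const src = { sum: 1, multiply: 2 };
console.log(buildWithSharedBinding(src).sum()); // "multiply" - the bug
console.log(buildWithIife(src).sum());          // "sum"
console.log(buildWithLet(src).sum());           // "sum"
```

In ES2015 and later, the let/const form is usually preferred over the IIFE, since the language itself creates the fresh per-iteration binding.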
using Cirrious.CrossCore;
using Cirrious.CrossCore.IoC;

namespace CollectABull.Core
{
    public class App : Cirrious.MvvmCross.ViewModels.MvxApplication
    {
        public override void Initialize()
        {
            CreatableTypes()
                .EndingWith("Repository")
                .AsInterfaces()
                .RegisterAsLazySingleton();

            CreatableTypes()
                .EndingWith("Service")
                .AsInterfaces()
                .RegisterAsLazySingleton();

            RegisterAppStart<ViewModels.HomeViewModel>();
        }
    }
}
Several licensed marijuana producers have penned a letter to Ottawa, urging the federal government to allow them to brand their products and provide medical cannabis on a tax-free basis. The seven producers – Tilray, Tweed, Mettrum, CannTrust, Green Organic Dutchman Holdings, RedeCan Pharm and Delta 9 Bio-Tech – are lobbying the government ahead of the week of April 10, when legislation legalizing recreational use of the drug is expected to be introduced. A federal task force has recommended requiring plain packaging for cannabis and advertising restrictions similar to those placed on the tobacco industry. But in their letter, the licensed producers argue that preventing them from branding their products will make it tougher for them to compete with black market operations such as illegal dispensaries. "Brands allow professional companies to separate themselves from less scrupulous competitors," says Brendan Kennedy, president of B.C.-based marijuana producer Tilray. Some health advocates have argued that restricting branding and advertising is necessary in order to ensure that users are aware of possible health risks associated with the substance. Another concern is that cannabis producers could use advertising to compel widespread usage of the drug, similar to what occurred with tobacco and alcohol in the past. But the licensed producers say they aren't looking to lure people into consuming marijuana. Instead, they wish to use branding and in-store advertising to educate users about various strains and their impacts, according to the letter. "No one in this industry is looking to repeat the same mistakes as tobacco or alcohol," says Kennedy. "No one wants to see a Joe Camel of this industry." Cannabis producers also take issue with the task force's recommendation that medical and recreational cannabis be taxed the same amount. 
This would "unduly burden" medical cannabis patients, according to the companies, who argue that medical cannabis should be sold tax-free. "Other pharmaceutical products aren't taxed," says Kennedy.
San Bernardo (Madrid Metro) San Bernardo is a station on Line 2 and Line 4 of the Madrid Metro. It is located in fare Zone A. History The station opened on 21 October 1925 as part of Line 2. On 24 March 1944 it was extended to serve Line 4. References Category:Madrid Metro stations Category:1925 establishments in Spain Category:Railway stations opened in 1925
Q: Should I pass a managed entity to a method that requires a new transaction?

My application loads a list of entities that should be processed. This happens in a class that uses a scheduler:

    @Component
    class TaskScheduler {

        @Autowired
        private TaskRepository taskRepository;

        @Autowired
        private HandlingService handlingService;

        @Scheduled(fixedRate = 15000)
        @Transactional
        public void triggerTransactionStatusChangeHandling() {
            taskRepository.findByStatus(Status.OPEN).stream()
                    .forEach(handlingService::handle);
        }
    }

My HandlingService processes each task in isolation, using the REQUIRES_NEW propagation level:

    @Component
    class HandlingService {

        @Transactional(propagation = Propagation.REQUIRES_NEW)
        public void handle(Task task) {
            try {
                processTask(task); // here the actual processing would take place
                task.setStatus(Status.PROCCESED);
            } catch (RuntimeException e) {
                task.setStatus(Status.ERROR);
            }
        }
    }

The code works only because I started the parent transaction in the TaskScheduler class. If I remove the @Transactional annotation, the entities are no longer managed and the update to the task entity is not propagated to the DB. I don't find it natural to make the scheduled method transactional. From what I see I have two options:

1. Keep the code as it is today. Maybe it's just me and this is a correct approach. This variant has the fewest trips to the database.

2. Remove the @Transactional annotation from the scheduler, pass the id of the task, and reload the task entity in the HandlingService. 
    @Component
    class HandlingService {

        @Autowired
        private TaskRepository taskRepository;

        @Transactional(propagation = Propagation.REQUIRES_NEW)
        public void handle(Long taskId) {
            Task task = taskRepository.findOne(taskId);
            try {
                processTask(task); // here the actual processing would take place
                task.setStatus(Status.PROCCESED);
            } catch (RuntimeException e) {
                task.setStatus(Status.ERROR);
            }
        }
    }

This has more trips to the database (one extra query per element), but it can be executed using @Async.

Can you please offer your opinion on which is the correct way of tackling this kind of problem, perhaps with another method that I didn't know about?

A: If your intention is to process each task in a separate transaction, then your first approach actually does not work, because everything is committed at the end of the scheduler transaction. The reason for that is that in the nested transactions the Task instances are basically detached entities (the Sessions started in the nested transactions are not aware of those instances). At the end of the scheduler transaction, Hibernate performs a dirty check on the managed instances and synchronizes the changes with the database. This approach is also very risky, because there may be trouble if you try to access an uninitialized proxy on a Task instance in the nested transaction. And there may be trouble if you change the Task object graph in the nested transaction by adding to it some other entity instance loaded in the nested transaction (because that instance will be detached when control returns to the scheduler transaction). On the other hand, your second approach is correct and straightforward, and it helps avoid all of the above pitfalls. Only, I would read the ids and commit the transaction (there is no need to keep it suspended while the tasks are being processed). The easiest way to achieve that is to remove the @Transactional annotation from the scheduler and make the repository method transactional (if it isn't transactional already). 
If (and only if) the performance of the second approach is an issue, then as you already mentioned you could go with asynchronous processing, or even parallelize the processing to some degree. Also, you may want to take a look at extended sessions (conversations); maybe you will find them suitable for your use case.

A: The current code processes the task in the nested transaction, but updates the status of the task in the outer transaction (because the Task object is managed by the outer transaction). Because these are different transactions, it is possible that one succeeds while the other fails, leaving the database in an inconsistent state. In particular, with this code, completed tasks remain in status OPEN if processing another task throws an exception, or if the server is restarted before all tasks have been processed. As your example shows, passing managed entities to another transaction makes it ambiguous which transaction should update those entities, and is therefore best avoided. Instead, you should pass ids (or detached entities), and avoid unnecessary nesting of transactions.
Watchmen Mask And Symbol Black T-Shirt 100% cotton, standard men's fit. Please note: This item ships via standard/ground shipping within the USA ONLY, separately from the rest of your order. No express mail services or international shipping are available for this item. Please allow 3-5 business days for this item to ship.
Q: `\pounds` misbehaving with the `eulervm` package even in T1 encoding

I am teaching a probability course involving lots of gambling and hence, since I'm in the UK, I would like to use £ in some formulae. I use utf8x input encoding, T1 font encoding and the eulervm maths fonts. Nevertheless, £ inside math mode comes out as a dollar sign. Here's a minimal working example:

    \documentclass{article}
    \usepackage[T1]{fontenc}
    \usepackage[utf8x]{inputenc}
    \usepackage[small]{eulervm}
    \begin{document}
    A gambler wins $£1$ with probability $p$ and loses $£1$ with
    probability $1-p$.
    \end{document}

Commenting out the line which loads the eulervm package works, but I really like the eulervm fonts, so any help would be immensely appreciated.

A: I would recommend using utf8 instead of utf8x, and leaving £ outside of math mode, where it doesn't belong. If you want to use it in math mode, you have to teach LaTeX how to handle it.

Method A (utf8 option):

    \usepackage[utf8]{inputenc}
    \usepackage{amsmath}
    \DeclareUnicodeCharacter{00A3}{\text{\textsterling}}

Method B (utf8x option):

    \usepackage[utf8x]{inputenc}
    \usepackage{amsmath}
    \makeatletter
    \uc@dclc{163}{default}{\text{\textsterling}}
    \makeatother
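For completeness, here is Method A spliced into the original example (my assembly of the two snippets above into one document; the individual lines are exactly those given in the question and answer):

```latex
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}   % utf8 instead of utf8x
\usepackage{amsmath}          % provides \text
\usepackage[small]{eulervm}
% Make the pound sign usable in math mode via its text-mode glyph:
\DeclareUnicodeCharacter{00A3}{\text{\textsterling}}
\begin{document}
A gambler wins $£1$ with probability $p$ and loses $£1$ with
probability $1-p$.
\end{document}
```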
Q: Prove that $\mathbb L^{-1}\{p^k\}=0$

There is a question in my book at the end of which it is written that $\mathbb L^{-1}\{p^k\}=0$ for $k = 0, 1, 2, \dots$. But we know that $\mathbb L\{0\} = 0$, so the Laplace transformation is not one-to-one; how then does the inverse Laplace transformation exist? How do I prove $\mathbb L^{-1}\{p^k\}=0$?

A: (Writing $s$ for the transform variable $p$.) We know
$$\mathbb L[\delta(t)]=1,\qquad \mathbb L[f'(t)]=s\,\mathbb L[f(t)](s)-f(0^+).$$
Taking $f = \delta$ gives $\mathbb L[\delta'(t)] = s - \delta(0^+)$, so
$$\mathbb L^{-1}[s]=\mathbb L^{-1}\!\left[\mathbb L[\delta'(t)]+\delta(0^+)\right]=\delta'(t),$$
since $\delta(0^+)=0$. Similarly,
$$\mathbb L^{-1}(s^k)=\delta^{(k)}(t).$$
However, we use Laplace transforms on functions which are non-zero only for $t>0$, which is why we then take
$$\mathbb L^{-1}(s^k)=\left[\delta^{(k)}(t)\right]_{t>0}=0.$$
Search This Blog Subscribe to this blog Follow by Email Joy in the Journey I have a confession: I have ALWAYS hated the phrase "Bloom where you're planted." I mean REALLY hated it. Possibly because I have always had some issues with contentment. I remember my mom saying this to me and it would make me so mad! It makes me laugh to think about that now, but I still have some difficulty with this cliche. I currently live in my hometown of Warner Robins, Georgia. I lived here for my entire life until I left for college. I was adamant that I would never return to live here. Although I enjoyed my high school days/friends, I hated this town. It is an Air Force town, although that doesn't have anything to do with my distaste for the town. My major issues stem from the fact that there is simply nothing to do -- no culture, no attractions (unless you count the Museum of Aviation), and pretty much no fun. As a high school student the only place we had to hang out was Steak 'n Shake -- pretty sad. As an adult, there is still not much to do. So why are we here? When we moved here from Florida, I was able to get a substantial pay raise for a lateral move and the cost of living is much much lower (due to the fact that no one is fighting to move here). My parents still live here, and my grandparents lived much closer to Georgia than to Florida. Because my grandfather had cancer, moving here allowed us to spend a lot more time with him in his final years. That was a blessing. We moved Carlos' parents here because we were able to sell their house (which had a mortgage) and buy them a house here outright. That has been a blessing for them. Due to the lower cost of living, we have been able to start a family with a lot less stress; not to mention the fact that now both sets of grandparents live in the same town as us as support. Also, I really enjoy where I teach - my colleagues are great and there is a great family atmosphere among the staff. 
All that to say, I should be quite content here but I'm not. I still struggle with the whole "Bloom where you're planted" concept. I guess I have "The grass is always greener on the other side of the fence" syndrome. Enter the phrase "Joy in the Journey" - this phrase I can accept. I like it. It speaks to me. My goal is to pursue joy in my every day life. To enjoy the small things. Like I said before, contentment doesn't come easily for me, so I believe that joy is a choice, and it is a choice I want to make every day. I want to find the joy in my journey - even if RIGHT NOW I am not exactly where I want to be. Because it's a journey, and I'm constantly moving toward where I want to be. "Bloom where you're planted" has a sense of permanence - you are stuck - you have roots. A journey makes one think of moving forward - the exact opposite of being stuck. Maybe the phrases essentially mean the same thing. Regardless, I will continue to seek joy in the journey and will refuse to bloom where I'm planted! Comments Popular posts from this blog There's only one real joy in a home renovation . . . and that is FINISHING a project! :) We live in a fixer-upper, and there is a certain pleasure in making a home exactly what you want it to be. There are many challenges and stressful times, but being able to say 'WE ARE DONE!' is an amazing feeling. (Side note: It is really my husband who is able to say that - as he is the one who completed this bathroom project - one that was MUCH bigger than we were anticipating!) Before: Our original plan was to scrape the paint and save what was there. We love all things mid-century and thought we could make the pink and black tile work for us. So we scraped. And scraped. And scraped. After we scraped the paint off of the wall tile we realized that there was a reason that the tile had been painted. It just wasn't in great shape, so Carlos decided it all had to come down! Imagine this: Your town is in shambles due to war and fighting. 
Schools and hospitals are routinely attacked, and your kids can't sleep at night due to the nightmares they have. Your husband didn't come home one day, and you were told that he was arrested, but no one knows anything more than that. One day it gets to be too much, and you gather what material possessions you can, take your children and leave. You don't know where you are going, but you know you can't stay here and survive. You go to a refugee camp where there is shelter and some food assistance, but you are still financially responsible for some things, and there isn't work for you to do. You hear of a better place somewhere else. You still haven't heard about your husband. You have no idea if he is even still alive. Do you stay where you are? Do you move on to somewhere that might have more opportunities?

As the days tick forward toward my 40th birthday (1.5 years away), I am working on checking off my "40 before 40" bucket list items! One thing I wanted to do was donate my hair. I had wanted to donate at one extreme haircut in the past, but because my hair was layered it wasn't long enough. The high school where I work hosts a hair drive for Pantene Beautiful Lengths every year, so a little more than a year ago I decided I would grow my hair out to donate. I loved my long hair, but the short 'do is great for summer, and guess what? Hair grows! :) I'll have long hair again before you know it!
The Biggest Loser 12 Walk Out

Courtney Crozier speaks out on the walkout and the most positive episode this season. Hear what Courtney Crozier has to say about the recent Biggest Loser walkout. Plus, she's as relieved as the rest of us to finally see something positive come out of this season as the contestants go home for 18 days.

User Feedback (Page 1 of 1, 2 total comments)

Linda: Why won't they announce who walked off and who is coming back? Personally, Conda HAD to have a person to hate in EVERY episode. She is EVIL and MEAN. It should not be about BULLYING, it should be about hope and weight loss... posted Mar 4th, 2012 8:13 am

Amias: Courtney, there are two sides to every story, especially in the context of a heavily edited, producer-driven reality show. However, as your experience on the ranch exemplified, BL is about losing weight, not winning money. I think the negative reactions are because people feel like contestants this season have reacted in an ugly fashion to things, like contestants returning, that are common. Furthermore, as you say, you should support EVERYONE on their journey to weight loss, and it just doesn't seem like the BL 13 people are looking to do that. posted Feb 29th, 2012 7:11 pm

This week Starbucks launched a whole line of new foods and beverages, all inspired by customer ideas submitted at My Starbucks Idea. Customers have been clamoring for healthier food choices on the website, and Starbucks has delivered with a new line of healthy smoothies (a personal star from me because I have not been able to find a commercial... 
Holiday Survival Guide: How Not to Gain Weight Over the Holidays

Susan is a NESTA Certified Fitness Nutrition Coach, a regular contributor on Gyminee.com, and writes about fitness and nutrition via her blog, Catapult Fitness Blog. Gyminee is the premier fitness social network for detailed tracking, online accountability, and motivation. Whether you are trying to lose weight or get fit, it's time to...

Change the way you think about meatloaf with the Turkey Mini-Meatloafs from Biggest Loser's Tara Costa. You'll use a variety of warm spices to give flavor to the lean ground turkey and finish with a sweet and spicy sugar-free topping.

Wake up to fluffy, homemade blueberry pancakes, without any of the dieter's guilt. Made with whole grains, nutrition-packed blueberries and low-fat buttermilk, mornings will taste a little brighter when you start your days with these soon-to-be favorites.

Canned pumpkin is a pantry must-have. Low in fat, high in fiber and beta carotene, its flavor makes any meal seem special. Easy to make with ingredients on hand, and suitable for a myriad of diets, including diabetic and vegetarian!

The information provided within this site is strictly for the purposes of information only and is not a replacement or substitute for professional advice, a doctor's visit or treatment. The content provided on this site should serve, at most, as a companion to a professional consult. It should under no circumstance replace the advice of your primary care provider. You should always consult your primary care physician prior to starting any new fitness, nutrition or weight loss regimen.
Q: C++ DirectShow filter private interface

I am using the DirectShow sample filter ezrgb24 and trying to expose its private interface. The header iez.h declares the interface as follows:

    DEFINE_GUID(IID_IIPEffect, 0xfd5010a3, 0x8ebe, 0x11ce, 0x81, 0x83, 0x00, 0xaa, 0x00, 0x57, 0x7d, 0xa1);

    DECLARE_INTERFACE_(IIPEffect, IUnknown)
    {
        STDMETHOD(get_IPEffect) (THIS_
            int *effectNum,       // The current effect
            REFTIME *StartTime,   // Start time of effect
            REFTIME *Length       // Length of effect
        ) PURE;

        STDMETHOD(put_IPEffect) (THIS_
            int effectNum,        // Change to this effect
            REFTIME StartTime,    // Start time of effect
            REFTIME Length        // Length of effect
        ) PURE;
    };

After building the DLL, I registered it from a command window. How should I call get_IPEffect() or put_IPEffect() from my own project? I wrote the code below, but it did not work:

    DEFINE_GUID(IID_IIPEffect, 0xfd5010a3, 0x8ebe, 0x11ce, 0x81, 0x83, 0x00, 0xaa, 0x00, 0x57, 0x7d, 0xa1);
    DEFINE_GUID(CLSID_ImageEffect, 0x8B498501, 0x1218, 0x11CF, 0xAD, 0xC4, 0x00, 0xA0, 0xD1, 0x00, 0x04, 0x1B);

    IBaseFilter *pImageEffect = NULL;
    chr = CoCreateInstance(CLSID_ImageEffect, NULL, CLSCTX_INPROC_SERVER, IID_IBaseFilter, (void**) &pImageEffect);
    chr = pGraph->AddFilter(pImageEffect, L"RGB Resizer");

    IIPEffect *pEZrgb24 = NULL;
    chr = pImageEffect->QueryInterface(IID_IIPEffect, (void **) &pEZrgb24);

On the QueryInterface line the compiler reports "'IIPEffect' : undeclared identifier". How should I declare it?

A: Thanks RomarR and Wimmel. I included iez.h in my project and it is working fine now. The DEFINE_GUID macros only define the GUID constants; the IIPEffect type itself comes from the DECLARE_INTERFACE_ block, so that header must be visible in any translation unit that names the interface.
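Since the sample itself only builds on Windows, here is a portable, self-contained sketch of the same COM pattern (hypothetical stand-ins for GUIDs and COM plumbing, no Windows headers) that shows why the calling code must see the interface declaration before QueryInterface can hand back a usable pointer:

```cpp
#include <cstring>

// Simplified stand-ins for COM machinery (hypothetical; real COM uses GUIDs
// and HRESULTs). The point mirrors the fix above: QueryInterface can only be
// called through a pointer type whose *declaration* is visible -- which is
// exactly why including iez.h resolved the "undeclared identifier" error.

using IID = const char*;  // a string keeps this sketch portable

struct IUnknownLike {
    virtual long QueryInterface(IID iid, void** out) = 0;
    virtual ~IUnknownLike() = default;
};

// The private effect interface, as the filter's header would declare it.
struct IIPEffect : public IUnknownLike {
    virtual long get_IPEffect(int* effectNum, double* start, double* length) = 0;
    virtual long put_IPEffect(int effectNum, double start, double length) = 0;
};

static IID IID_IIPEffect = "IID_IIPEffect";

// A filter class implementing the private interface.
class ImageEffect : public IIPEffect {
    int effect_ = 0;
    double start_ = 0.0, length_ = 0.0;
public:
    long QueryInterface(IID iid, void** out) override {
        if (std::strcmp(iid, IID_IIPEffect) == 0) {
            *out = static_cast<IIPEffect*>(this);
            return 0;   // S_OK
        }
        *out = nullptr;
        return -1;      // E_NOINTERFACE
    }
    long get_IPEffect(int* n, double* s, double* l) override {
        *n = effect_; *s = start_; *l = length_; return 0;
    }
    long put_IPEffect(int n, double s, double l) override {
        effect_ = n; start_ = s; length_ = l; return 0;
    }
};

// Client-side usage mirroring the question's code path: start from a generic
// base pointer, query for the private interface, then call its methods.
int query_and_set_effect() {
    ImageEffect filter;
    IUnknownLike* pFilter = &filter;   // what the graph would hand back
    IIPEffect* pEffect = nullptr;
    long hr = pFilter->QueryInterface(IID_IIPEffect, (void**)&pEffect);
    if (hr != 0 || pEffect == nullptr) return -1;
    pEffect->put_IPEffect(2, 0.0, 5.0);   // switch to effect #2
    int n; double s, l;
    pEffect->get_IPEffect(&n, &s, &l);
    return n;
}
```

None of this code would compile without the IIPEffect declaration in scope, which is the same constraint the real DirectShow client hits when iez.h is not included.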
Q: Is it possible to disable CSS and JavaScript aggregation by role?

Is it possible to disable CSS and JavaScript aggregation by role? That way, during content editing or administration, the user is not affected and can check the CSS directly, while end users receive cached, aggregated files as usual. If the cache lifetimes are long enough, the editing can be completed before any changes interfere with other users.

A: To do exactly what you want, implementing hook_css_alter() and hook_js_alter() is the way to go: set the preprocess key to FALSE for every file when the current user has the role in question.

For an out-of-the-box solution that is fairly close to what you want, https://www.drupal.org/project/advagg/ can do it. Give that role the "bypass advanced aggregation" permission, and then those users can add ?advagg=-1 to the end of the URL to see the change.

I will also mention that using a dev environment when altering CSS/JS is ideal.
<!DOCTYPE html> <html data-require="math math-format expressions graphie interactive"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>Graphing and solving systems of inequalities</title> <script data-main="../local-only/main.js" src="../local-only/require.js"></script> </head> <body> <div class="exercise"> <div class="problems"> <div> <div class="vars"> <div data-ensure="abs(YINT_1 + SLOPE_FRAC_1[0]) &lt;= 10 &amp;&amp; abs(YINT_2 + SLOPE_FRAC_2[0]) &lt;= 10"> <var id="X">randRangeNonZero(-5, 5)</var> <var id="Y">randRange(-5, 5)</var> <div data-ensure="abs(YINT_1 - YINT_2) &gt; 3"> <var id="YINT_1">randRangeExclude(-5, 5, [0, Y])</var> <var id="YINT_2">randRangeExclude(-5, 5, [0, Y])</var> </div> <var id="SLOPE_1">(Y - YINT_1) / X</var> <var id="SLOPE_2">(Y - YINT_2) / X</var> <var id="SLOPE_FRAC_1">toFraction(SLOPE_1, 0.001)</var> <var id="SLOPE_FRAC_2">toFraction(SLOPE_2, 0.001)</var> </div> <var id="PRETTY_SLOPE_1">fractionVariable(SLOPE_FRAC_1[0], SLOPE_FRAC_1[1], "x")</var> <var id="PRETTY_SLOPE_2">fractionVariable(SLOPE_FRAC_2[0], SLOPE_FRAC_2[1], "x")</var> <var id="MULT_1">randRangeNonZero(-3, 3)</var> <var id="MULT_2">randRangeNonZero(-3, 3)</var> <var id="A_1">SLOPE_FRAC_1[0] * -MULT_1</var> <var id="A_2">SLOPE_FRAC_2[0] * -MULT_2</var> <var id="B_1">SLOPE_FRAC_1[1] * MULT_1</var> <var id="B_2">SLOPE_FRAC_2[1] * MULT_2</var> <var id="C_1">SLOPE_FRAC_1[1] * YINT_1 * MULT_1</var> <var id="C_2">SLOPE_FRAC_2[1] * YINT_2 * MULT_2</var> <var id="STD_FORM_1">randFromArray([true, false])</var> <var id="STD_FORM_2">randFromArray([true, false])</var> <var id="COMP_1">randFromArray(["&lt;", "&gt;", "≤", "≥"])</var> <var id="COMP_2">randFromArray(["&lt;", "&gt;", "≤", "≥"])</var> <var id="STD_FORM_COMP_1">B_1 &lt; 0 ? { "&lt;": "&gt;", "&gt;": "&lt;", "≤": "≥", "≥": "≤" }[COMP_1] : COMP_1</var> <var id="STD_FORM_COMP_2">B_2 &lt; 0 ? 
{ "&lt;": "&gt;", "&gt;": "&lt;", "≤": "≥", "≥": "≤" }[COMP_2] : COMP_2</var> <var id="LESS_THAN_1">COMP_1 === "&lt;" || COMP_1 === "≤"</var> <var id="LESS_THAN_2">COMP_2 === "&lt;" || COMP_2 === "≤"</var> <var id="INCLUSIVE_1">COMP_1 === "≥" || COMP_1 === "≤"</var> <var id="INCLUSIVE_2">COMP_2 === "≥" || COMP_2 === "≤"</var> <var id="EDGE_POINT" data-ensure="abs(EDGE_POINT[0]) < 10 && abs(EDGE_POINT[1]) < 10">(function() { // Create a point on one of the lines var minN = -floor((9 + X) / abs(SLOPE_FRAC_1[1])); var maxN = floor((9 - X) / abs(SLOPE_FRAC_1[1])); var n = randRange(minN, maxN); if (rand(2) &lt; 1) { return [X + n * SLOPE_FRAC_1[1], Y + n * SLOPE_FRAC_1[0]]; } else { return [X + n * SLOPE_FRAC_2[1], Y + n * SLOPE_FRAC_2[0]]; } })()</var> <!-- !/3 of the time one of the points should be on the line --> <var id="POINT">randFromArray([ [randRangeExclude(-9, 9, [-1, -2]), randRangeExclude(-9, 9, [-1, -2])], [randRangeExclude(-9, 9, [-1, -2]), randRangeExclude(-9, 9, [-1, -2])], EDGE_POINT ]) </var> <var id="POINT_SOLUTION">(function() { var p = POINT[1]; var p1 = SLOPE_1 * POINT[0] + YINT_1; var p2 = SLOPE_2 * POINT[0] + YINT_2; return ((COMP_1 === "&lt;" &amp;&amp; p &lt; p1) || (COMP_1 === "≤" &amp;&amp; p &lt;= p1) || (COMP_1 === "&gt;" &amp;&amp; p &gt; p1) || (COMP_1 === "≥" &amp;&amp; p &gt;= p1)) &amp;&amp; ((COMP_2 === "&lt;" &amp;&amp; p &lt; p2) || (COMP_2 === "≤" &amp;&amp; p &lt;= p2) || (COMP_2 === "&gt;" &amp;&amp; p &gt; p2) || (COMP_2 === "≥" &amp;&amp; p &gt;= p2)); })()</var> </div> <p class="question"> Graph the following system of inequalities: </p> <div class="problem"> <p data-if="STD_FORM_1"> <code>\blue{<var>expr([ "+", [ "*", A_1, "x" ], [ "*", B_1, "y" ] ])</var> <var>STD_FORM_COMP_1</var> <var>C_1</var>}</code> </p><p data-else=""> <code>\blue{y <var>COMP_1</var> <var>PRETTY_SLOPE_1</var> + <var>YINT_1</var>}</code> </p> <p data-if="STD_FORM_2"> <code>\green{<var>expr([ "+", [ "*", A_2, "x" ], [ "*", B_2, "y" ] ])</var> 
<var>STD_FORM_COMP_2</var> <var>C_2</var>}</code> </p> <p data-else=""> <code>\green{y <var>COMP_2</var> <var>PRETTY_SLOPE_2</var> + <var>YINT_2</var>}</code> </p> <form> <span class="hint_blue" style="width: 40px">Inequality 1:</span> <input onclick="javascript: KhanUtil.currentGraph.graph.shadetop1 = !KhanUtil.currentGraph.graph.shadetop1; KhanUtil.currentGraph.graph.update(); " type="button" value="Shade other side"> <ul class="inequalities-one-line-radios"> <li> <label class="hint_blue"> <input checked name="dashradio1" onclick="javascript: KhanUtil.currentGraph.graph.dasharray1 = ''; KhanUtil.currentGraph.graph.update(); " type="radio" value="solid"> Solid line </label> </li> <li> <label class="hint_blue"> <input name="dashradio1" onclick="javascript: KhanUtil.currentGraph.graph.dasharray1 = '- '; KhanUtil.currentGraph.graph.update(); " type="radio" value="dashed"> Dashed line </label> </li> </ul> <br> <span class="hint_green" style="width: 40px">Inequality 2:</span> <input onclick="javascript: KhanUtil.currentGraph.graph.shadetop2 = !KhanUtil.currentGraph.graph.shadetop2; KhanUtil.currentGraph.graph.update(); " type="button" value="Shade other side"> <ul class="inequalities-one-line-radios"> <li> <label class="hint_green"> <input checked name="dashradio2" onclick="javascript: KhanUtil.currentGraph.graph.dasharray2 = ''; KhanUtil.currentGraph.graph.update(); " type="radio" value="solid"> Solid line </label> </li> <li> <label class="hint_green"> <input name="dashradio2" onclick="javascript: KhanUtil.currentGraph.graph.dasharray2 = '- '; KhanUtil.currentGraph.graph.update(); " type="radio" value="dashed"> Dashed line </label> </li> </ul> </form> <div class="graphie inequalities-padding" id="grid"> graphInit({ range: 11, scale: 20, axisArrows: "&lt;-&gt;", tickStep: 1, labelStep: 1, gridOpacity: 0.05, axisOpacity: 0.2, tickOpacity: 0.4, labelOpacity: 0.5 }); label( [ 0, -11 ], "y", "below" ); label( [ 11, 0 ], "x", "right" ); addMouseLayer(); graph.pointA = 
addMovablePoint({ coord: [ -5, 5 ], snapX: 0.5, snapY: 0.5, normalStyle: { stroke: KhanUtil.BLUE, fill: KhanUtil.BLUE } }); graph.pointB = addMovablePoint({ coord: [ 5, 5 ], snapX: 0.5, snapY: 0.5, normalStyle: { stroke: KhanUtil.BLUE, fill: KhanUtil.BLUE } }); graph.pointC = addMovablePoint({ coord: [ -5, -5 ], snapX: 0.5, snapY: 0.5, normalStyle: { stroke: KhanUtil.BLUE, fill: KhanUtil.BLUE } }); graph.pointD = addMovablePoint({ coord: [ 5, -5 ], snapX: 0.5, snapY: 0.5, normalStyle: { stroke: KhanUtil.BLUE, fill: KhanUtil.BLUE } }); graph.set = raphael.set(); graph.update = function() { graph.set.remove(); if ( abs( graph.pointB.coord[0] - graph.pointA.coord[0] ) &gt; 0.001 ) { var slope = ( graph.pointB.coord[1] - graph.pointA.coord[1] ) / ( graph.pointB.coord[0] - graph.pointA.coord[0] ); var yint = slope * ( 0 - graph.pointA.coord[0] ) + graph.pointA.coord[1]; var shadeEdge = ( ( graph.pointA.coord[0] &lt; graph.pointB.coord[0] ) ? graph.shadetop1 : !graph.shadetop1 ) ? 11 : -11; style({ stroke: BLUE, strokeWidth: 2, strokeDasharray: graph.dasharray1 }, function() { graph.set.push( line( [ -11, -11 * slope + yint ], [ 11, 11 * slope + yint ] ) ); }); style({ fill: BLUE, stroke: null, opacity: KhanUtil.FILL_OPACITY }, function() { graph.set.push( path([ [ 11, shadeEdge ], [ 11, 11 * slope + yint ], [ -11, -11 * slope + yint ], [ -11, shadeEdge ] ]) ); }); } else { // vertical line var x = graph.pointA.coord[0]; var shadeEdge = ( ( graph.pointB.coord[1] &lt; graph.pointA.coord[1] ) ? graph.shadetop1 : !graph.shadetop1 ) ? 
11 : -11; style({ stroke: BLUE, strokeWidth: 2, strokeDasharray: graph.dasharray1 }, function() { graph.set.push( line( [ x, -11 ], [ x, 11 ] ) ); }); style({ fill: BLUE, stroke: null, opacity: KhanUtil.FILL_OPACITY }, function() { graph.set.push( path([ [ x, -11 ], [ x, 11 ], [ shadeEdge, 11 ], [ shadeEdge, -11 ] ]) ); }); } if ( abs( graph.pointD.coord[0] - graph.pointC.coord[0] ) &gt; 0.001 ) { var slope = ( graph.pointD.coord[1] - graph.pointC.coord[1] ) / ( graph.pointD.coord[0] - graph.pointC.coord[0] ); var yint = slope * ( 0 - graph.pointC.coord[0] ) + graph.pointC.coord[1]; var shadeEdge = ( ( graph.pointC.coord[0] &lt; graph.pointD.coord[0] ) ? graph.shadetop2 : !graph.shadetop2 ) ? 11 : -11; style({ stroke: GREEN, strokeWidth: 2, strokeDasharray: graph.dasharray2 }, function() { graph.set.push( line( [ -11, -11 * slope + yint ], [ 11, 11 * slope + yint ] ) ); }); style({ fill: GREEN, stroke: null, opacity: KhanUtil.FILL_OPACITY }, function() { graph.set.push( path([ [ 11, shadeEdge ], [ 11, 11 * slope + yint ], [ -11, -11 * slope + yint ], [ -11, shadeEdge ] ]) ); }); } else { // vertical line var x = graph.pointC.coord[0]; var shadeEdge = ( ( graph.pointD.coord[1] &lt; graph.pointC.coord[1] ) ? graph.shadetop2 : !graph.shadetop2 ) ? 11 : -11; style({ stroke: GREEN, strokeWidth: 2, strokeDasharray: graph.dasharray2 }, function() { graph.set.push( line( [ x, -11 ], [ x, 11 ] ) ); }); style({ fill: GREEN, stroke: null, opacity: KhanUtil.FILL_OPACITY }, function() { graph.set.push( path([ [ x, -11 ], [ x, 11 ], [ shadeEdge, 11 ], [ shadeEdge, -11 ] ]) ); }); } graph.set.toBack(); }; graph.showCorrect = function() { graph.pointA.setCoord([ 0, YINT_1 ]); graph.pointB.setCoord([ SLOPE_FRAC_1[1], YINT_1 + SLOPE_FRAC_1[0] ]); graph.pointC.setCoord([ 0, YINT_2 ]); graph.pointD.setCoord([ SLOPE_FRAC_2[1], YINT_2 + SLOPE_FRAC_2[0] ]); graph.shadetop1 = graph.pointA.coord[0] &gt; graph.pointB.coord[0] ? 
LESS_THAN_1 : !LESS_THAN_1; graph.shadetop2 = graph.pointC.coord[0] &gt; graph.pointD.coord[0] ? LESS_THAN_2 : !LESS_THAN_2; if ( INCLUSIVE_1 ) { graph.dasharray1 = ''; $( 'input[name=dashradio1][value=solid]' ).attr( 'checked', true ); } else { graph.dasharray1 = '- '; $( 'input[name=dashradio1][value=dashed]' ).attr( 'checked', true ); } if ( INCLUSIVE_2 ) { graph.dasharray2 = ''; $( 'input[name=dashradio2][value=solid]' ).attr( 'checked', true ); } else { graph.dasharray2 = '- '; $( 'input[name=dashradio2][value=dashed]' ).attr( 'checked', true ); } graph.update(); }; // A and B can't be in the same place graph.pointA.onMove = function( x, y ) { if ( x != graph.pointB.coord[0] || y != graph.pointB.coord[1] ) { graph.pointA.setCoord([ x, y ]); graph.update(); return true; } else { return false; } } graph.pointB.onMove = function( x, y ) { if ( x != graph.pointA.coord[0] || y != graph.pointA.coord[1] ) { graph.pointB.setCoord([ x, y, ]); graph.update(); return true; } else { return false; } } // C and D can't be in the same place graph.pointC.onMove = function( x, y ) { if ( x != graph.pointD.coord[0] || y != graph.pointD.coord[1] ) { graph.pointC.setCoord([ x, y ]); graph.update(); return true; } else { return false; } } graph.pointD.onMove = function( x, y ) { if ( x != graph.pointC.coord[0] || y != graph.pointC.coord[1] ) { graph.pointD.setCoord([ x, y, ]); graph.update(); return true; } else { return false; } } graph.dasharray1 = ""; graph.dasharray2 = ""; graph.shadetop1 = true; graph.shadetop2 = false; graph.update(); graph.pointA.toFront(); graph.pointB.toFront(); graph.pointC.toFront(); graph.pointD.toFront(); </div> <p> Is <code>(<var>POINT[0]</var>, <var>POINT[1]</var>)</code> a solution to this system of the inequalities? 
</p> <div class="render-answer-area-here"></div> </div> <div class="solution" data-type="multiple"> <div class="instruction"> <ul> <li> <label> <input id="yes" checked name="isSolution" type="radio"> <span>Yes</span> </label> </li> <li> <label> <input id="no" name="isSolution" type="radio"> <span>No</span> </label> </li> </ul> </div> <div class="sol" data-type="custom"> <div class="instruction"></div> <div class="guess">[ graph.pointA.coord, graph.pointB.coord, graph.pointA.coord[0] &gt; graph.pointB.coord[0] ? graph.shadetop1 : !graph.shadetop1, graph.dasharray1 === "- " ? false : true, graph.pointC.coord, graph.pointD.coord, graph.pointC.coord[0] &gt; graph.pointD.coord[0] ? graph.shadetop2 : !graph.shadetop2, graph.dasharray2 === "- " ? false : true, $("input[name='isSolution']:checked").attr("id")] </div> <div class="validator-function"> if (_.isEqual(guess, [[-5,5],[5,5],false,true,[-5,-5],[5,-5],true,true])) { return ""; } var slope1 = ( guess[1][1] - guess[0][1] ) / ( guess[1][0] - guess[0][0] ); var yint1 = slope1 * ( 0 - guess[0][0] ) + guess[0][1]; var slope2 = ( guess[5][1] - guess[4][1] ) / ( guess[5][0] - guess[4][0] ); var yint2 = slope2 * ( 0 - guess[4][0] ) + guess[4][1]; return POINT_SOLUTION === (guess[8] === 'yes') &amp;&amp; (abs(SLOPE_1 - slope1) &lt; 0.001 &amp;&amp; abs(YINT_1 - yint1) &lt; 0.001 &amp;&amp; guess[2] === LESS_THAN_1 &amp;&amp; guess[3] === INCLUSIVE_1 &amp;&amp; abs(SLOPE_2 - slope2) &lt; 0.001 &amp;&amp; abs(YINT_2 - yint2) &lt; 0.001 &amp;&amp; guess[6] === LESS_THAN_2 &amp;&amp; guess[7] === INCLUSIVE_2) || (abs(SLOPE_2 - slope1) &lt; 0.001 &amp;&amp; abs(YINT_2 - yint1) &lt; 0.001 &amp;&amp; guess[2] === LESS_THAN_2 &amp;&amp; guess[3] === INCLUSIVE_2 &amp;&amp; abs(SLOPE_1 - slope2) &lt; 0.001 &amp;&amp; abs(YINT_1 - yint2) &lt; 0.001 &amp;&amp; guess[6] === LESS_THAN_1 &amp;&amp; guess[7] === INCLUSIVE_1) </div> <div class="show-guess"> graph.pointA.setCoord( guess[0] ); graph.pointB.setCoord( guess[1] ); 
graph.pointC.setCoord( guess[4] ); graph.pointD.setCoord( guess[5] ); graph.shadetop1 = graph.pointA.coord[0] &gt; graph.pointB.coord[0] ? guess[2] : !guess[2]; graph.shadetop2 = graph.pointC.coord[0] &gt; graph.pointD.coord[0] ? guess[6] : !guess[6]; if ( guess[3] ) { graph.dasharray1 = ""; $( "input[name=dashradio1][value=solid]" ).attr( "checked", true ); } else { graph.dasharray1 = "- "; $( "input[name=dashradio1][value=dashed]" ).attr( "checked", true ); } if ( guess[7] ) { graph.dasharray2 = ""; $( "input[name=dashradio2][value=solid]" ).attr( "checked", true ); } else { graph.dasharray2 = "- "; $( "input[name=dashradio2][value=dashed]" ).attr( "checked", true ); } graph.update(); </div> </div> </div> </div> </div> <div class="hints"> <div> <p> Let's first graph the boundary lines. If an inequality is in slope-intercept form, we can use it to determine the slope and the <code>y</code>-intercept of the line. If an inequality isn't in slope-intercept form, let's convert it to this form. </p> <p> The inequality signs in each inequality then tell us which side of the lines are to be shaded and whether the lines are solid or dashed. </p> <p> Finally, we can see whether or not our given point lies in the shaded area representing both inequalities to check if it's a solution of the system. </p> </div> <div> <div data-if="STD_FORM_1"> <p> The first inequality, <code>\blue{<var>expr(["+", ["*", A_1, "x"], ["*", B_1, "y"]])</var><var>STD_FORM_COMP_1</var><var>C_1</var>}</code>, isn't in slope-intercept form. 
Let's convert it: </p> <p><code> \qquad\begin{eqnarray} <var>expr(["+", ["*", A_1, "x"], ["*", B_1, "y"]])</var> &amp;<var>STD_FORM_COMP_1</var>&amp; <var>C_1</var> \\ <var>coefficient(B_1)</var>y &amp;<var>STD_FORM_COMP_1</var>&amp; <var>expr([ "+", [ "*", -A_1, "x" ], C_1 ])</var> \\ y &amp;<var>COMP_1</var>&amp; <var>PRETTY_SLOPE_1</var> + <var>YINT_1</var> \end{eqnarray} </code></p> <p> Now we see the slope of the line is <code>\pink{<var>fractionReduce(SLOPE_FRAC_1[0], SLOPE_FRAC_1[1])</var>}</code> and its <code>y</code>-intercept is <code>\purple{(0, <var>YINT_1</var>)}.</code> </p> </div> <p data-else=""> The first inequality, <code>\blue{y <var>COMP_1</var> <var>PRETTY_SLOPE_1</var> + <var>YINT_1</var>}</code>, is in slope-intercept form: the slope of the line is <code><var>fractionReduce(SLOPE_FRAC_1[0], SLOPE_FRAC_1[1])</var></code> and its <code>y</code>-intercept is <code>(0, <var>YINT_1</var>).</code> </p> <p> According to the slope, we know the line also passes through <code>(0 \pink{+ <var>SLOPE_FRAC_1[1]</var>}, <var>YINT_1</var> \purple{+ <var>SLOPE_FRAC_1[0]</var>}) = (<var>SLOPE_FRAC_1[1]</var>, <var>YINT_1 + SLOPE_FRAC_1[0]</var>) </code>. </p> <div class="graphie" data-update="grid"> graph.pointA.setCoord([0, YINT_1]); graph.pointB.setCoord([SLOPE_FRAC_1[1], YINT_1 + SLOPE_FRAC_1[0]]); graph.update(); </div> </div> <div> <div data-if="STD_FORM_2"> <p> The second inequality, <code>\green{<var>expr(["+", ["*", A_2, "x"], ["*", B_2, "y"]])</var><var>STD_FORM_COMP_2</var><var>C_2</var>}</code>, isn't in slope-intercept form. 
Let's convert it: </p> <p><code> \qquad\begin{eqnarray} <var>expr(["+", ["*", A_2, "x"], ["*", B_2, "y"]])</var> &amp;<var>STD_FORM_COMP_2</var>&amp; <var>C_2</var> \\ <var>coefficient(B_2)</var>y &amp;<var>STD_FORM_COMP_2</var>&amp; <var>expr([ "+", [ "*", -A_2, "x" ], C_2 ])</var> \\ y &amp;<var>COMP_2</var>&amp; <var>PRETTY_SLOPE_2</var> + <var>YINT_2</var> \end{eqnarray} </code></p> <p> Now we see the slope of the line is <code>\pink{<var>fractionReduce(SLOPE_FRAC_2[0], SLOPE_FRAC_2[1])</var>}</code> and its <code>y</code>-intercept is <code>\purple{(0, <var>YINT_2</var>)}.</code> </p> </div> <p data-else=""> The second inequality, <code>\green{y <var>COMP_2</var> <var>PRETTY_SLOPE_2</var> + <var>YINT_2</var>}</code>, is in slope-intercept form: the slope of the line is <code><var>fractionReduce(SLOPE_FRAC_2[0], SLOPE_FRAC_2[1])</var></code> and its <code>y</code>-intercept is <code>(0, <var>YINT_2</var>).</code> </p> <p> According to the slope, we know the line also passes through <code>(0 \pink{+ <var>SLOPE_FRAC_2[1]</var>}, <var>YINT_2</var> \purple{+ <var>SLOPE_FRAC_2[0]</var>}) = (<var>SLOPE_FRAC_2[1]</var>, <var>YINT_2 + SLOPE_FRAC_2[0]</var>) </code>. </p> <div class="graphie" data-update="grid"> graph.pointC.setCoord([0, YINT_2]); graph.pointD.setCoord([SLOPE_FRAC_2[1], YINT_2 + SLOPE_FRAC_2[0]]); graph.update(); </div> </div> <div> <div data-if="LESS_THAN_1"> <p data-if="INCLUSIVE_1"> According to the sign of the first inequality, <code>\blue{y <var>COMP_1</var> <var>PRETTY_SLOPE_1</var> + <var>YINT_1</var>}</code>, its solution set lies <em>below</em> the boundary line and the line should be <em>solid</em>. </p><p data-else=""> According to the sign of the first inequality, <code>\blue{y <var>COMP_1</var> <var>PRETTY_SLOPE_1</var> + <var>YINT_1</var>}</code>, its solution set lies <em>below</em> the boundary line and the line should be <em>dashed</em>. 
</p> </div><div data-else=""> <p data-if="INCLUSIVE_1"> According to the sign of the first inequality, <code>\blue{y <var>COMP_1</var> <var>PRETTY_SLOPE_1</var> + <var>YINT_1</var>}</code>, its solution set lies <em>above</em> the boundary line and the line should be <em>solid</em>. </p><p data-else=""> According to the sign of the first inequality, <code>\blue{y <var>COMP_1</var> <var>PRETTY_SLOPE_1</var> + <var>YINT_1</var>}</code>, its solution set lies <em>above</em> the boundary line and the line should be <em>dashed</em>. </p> </div> <div data-if="LESS_THAN_2"> <p data-if="INCLUSIVE_2"> According to the sign of the second inequality, <code>\green{y <var>COMP_2</var> <var>PRETTY_SLOPE_2</var> + <var>YINT_2</var>}</code>, its solution set lies <em>below</em> the boundary line and the line should be <em>solid</em>. </p><p data-else=""> According to the sign of the second inequality, <code>\green{y <var>COMP_2</var> <var>PRETTY_SLOPE_2</var> + <var>YINT_2</var>}</code>, its solution set lies <em>below</em> the boundary line and the line should be <em>dashed</em>. </p> </div><div data-else=""> <p data-if="INCLUSIVE_2"> According to the sign of the second inequality, <code>\green{y <var>COMP_2</var> <var>PRETTY_SLOPE_2</var> + <var>YINT_2</var>}</code>, its solution set lies <em>above</em> the boundary line and the line should be <em>solid</em>. </p><p data-else=""> According to the sign of the second inequality, <code>\green{y <var>COMP_2</var> <var>PRETTY_SLOPE_2</var> + <var>YINT_2</var>}</code>, its solution set lies <em>above</em> the boundary line and the line should be <em>dashed</em>. </p> </div> <div class="graphie" data-update="grid"> graph.shadetop1 = graph.pointA.coord[0] &gt; graph.pointB.coord[0] ? LESS_THAN_1 : !LESS_THAN_1; graph.shadetop2 = graph.pointC.coord[0] &gt; graph.pointD.coord[0] ? 
LESS_THAN_2 : !LESS_THAN_2; if (INCLUSIVE_1) { graph.dasharray1 = ''; $('input[name=dashradio1][value=solid]').attr('checked', true); } else { graph.dasharray1 = '- '; $('input[name=dashradio1][value=dashed]').attr('checked', true); } if (INCLUSIVE_2) { graph.dasharray2 = ''; $('input[name=dashradio2][value=solid]').attr('checked', true); } else { graph.dasharray2 = '- '; $('input[name=dashradio2][value=dashed]').attr('checked', true); } graph.update(); </div> </div> <div> <div class="graphie" data-update="grid"> style({ stroke: RED, fill: RED }, function() { circle(POINT, 0.2); label(POINT, "\\red{(" + POINT[0] + ", " + POINT[1] + ")}", "right"); }); </div> <p data-if="POINT_SOLUTION"> We can see that the point <code>\red{(<var>POINT[0]</var>, <var>POINT[1]</var>)}</code> lies in the shaded area representing <em>both</em> inequalities, which means that it <em>is</em> a solution of the system represented by the graph. </p><p data-else=""> We can see that the point <code>\red{(<var>POINT[0]</var>, <var>POINT[1]</var>)}</code> does not lie in the shaded area representing <em>both</em> inequalities, which means that it <em>is not</em> a solution of the system represented by the graph. </p> </div> </div> </div> </body> </html>
Illinois Rep. Cheri Bustos weighed in on the hotly debated "Medicare-for-all" bill — a sweeping overhaul of the nation’s health care system — on Tuesday, shrugging it off as just one idea. Bustos, who was elected chairwoman of the Democratic Congressional Campaign Committee (DCCC) for the House of Representatives in late November, said in an interview with The Hill that the estimated $33 trillion price tag was "a little scary" and suggested there may be alternative options.

“The Green New Deal is an idea. ‘Medicare-for-all’ is an idea. But there are many others that are out there,” Bustos told the publication.

"Medicare-for-all" would expand benefits beyond what is already offered under former President Barack Obama's Affordable Care Act. It would require significant tax increases, since the government would essentially take over premiums now paid by employers and individuals as it replaces the private health insurance industry.

“What do we have — 130 million-something Americans who get their health insurance through their work? The transition from what we have now to 'Medicare-for-all,' it’s just hard to conceive how that would work. You have so many jobs attached to the health care industry," Bustos commented.

On her campaign website, Bustos touts her previous career in the health industry, working for "one of the nation’s largest non-denominational, non-profit health care systems" to help families find affordable coverage. She worked in the health field "before, during and after the passage of the Affordable Care Act," her biography states.

A study released last summer by the Mercatus Center at George Mason University estimated it would cost $32.6 trillion ($3.26 trillion per year) over 10 years. 
For comparison, the federal budget proposal for fiscal year 2019 was $4.4 trillion, the Congressional Budget Office states.

However, Vermont Sen. Bernie Sanders, who first drafted the proposal, has blasted the Mercatus Center's analysis as “grossly misleading and biased."

More than 100 House Democrats, including a handful of 2020 presidential hopefuls, have already agreed to co-sponsor the legislation — which is strongly opposed by President Trump and the GOP — that would move the U.S. to a virtual single-payer system. House Speaker Nancy Pelosi, D-Calif., has yet to endorse the bill but indicated she would allow hearings on the legislation to proceed. Bustos said she, too, would be open to holding discussions on "Medicare-for-all" in the near future.

“The vast majority of Democrats in the U.S. House of Representatives want to see us fix the Affordable Care Act and make it functional ... so we can protect people with pre-existing conditions and so people have affordable health care," she told The Hill.

Fox News' Adam Shaw contributed to this report.
Yes, I know, it's about WitchCraft, not Wicca, but the title is staying. Basically my failures in WitchCraft, the notes of a fumbling pagan; you might even call it an advice column :) Maybe I'll add a 2nd chapter to explain some stuff... Please R+R.

I wrote this with the intention of it being a duet; I have the music in my head... Anyways, it's basically the norm: seeing your crush or your first love be in love with someone else who treats them like crap, and you know in your heart you were the best for

Moderate language. More musings on being a Wiccan in a Catholic school - but no offense intended to anyone! Maybe not the cause, but the feelings do apply to everyone at one point or another! Happy New Year!

Hey, y'all, I wrote this for school and even though my teacher didn't like it, my mom did. She wants me to submit it to a publisher, but I'm not sure. I'd really appreciate your honest opinion, please?
Calgary MP Rob Anders, known for his contentious public comments, is asking Canadians to oppose transgender rights Bill C-279, dubbed by critics the “Bathroom Bill.” Sponsored by B.C. NDP MP Randall Garrison, the bill proposes recognizing gender identity and gender expression as protected groups under Canada’s Criminal Code hate crimes section. It would also offer these individuals protection under the Canadian Human Rights Act. In a petition posted on his website, Anders says the bill’s aim “is to give transgendered men access to women’s public washroom facilities.” “[It] is the duty of the House of Commons to protect and safeguard our children from any exposure and harm that will come from giving a man access to women’s public washroom facilities,” the petition reads. Jan Buterman, a transgender advocate, told the CBC that Anders’ interpretation of the bill is “ludicrous.” “The suggestion that this is somehow some … conspiracy of trans people to sneak into bathrooms deliberately to harm people, it’s ludicrous,” Buterman said.
2015 FIFA Women's World Cup qualification – UEFA Group 5

The 2015 FIFA Women's World Cup qualification UEFA Group 5 was a UEFA qualifying group for the 2015 FIFA Women's World Cup. The group comprised Albania, Belgium, Greece, Netherlands, Norway and Portugal. The group winners qualified directly for the 2015 FIFA Women's World Cup. Among the seven group runners-up, the four best (determined by records against the first-, third-, fourth- and fifth-placed teams only, for balance between different groups) advanced to the play-offs.

Standings

Results

All times are CEST (UTC+02:00) during summer and CET (UTC+01:00) during winter.

Goalscorers

13 goals: Vivianne Miedema
12 goals: Aline Zeler
10 goals: Tessa Wullaert
9 goals: Renée Slegers
8 goals: Caroline Graham Hansen
6 goals: Isabell Herlovsen
5 goals: Manon Melis, Ada Hegerberg, Maren Mjelde
4 goals: Lieke Martens, Kristine Wigdahl Hegland, Elise Thorsnes
3 goals: Lien Mermans, Mandy van den Berg, Laura Luís, Carolina Mendes, Jéssica Silva
2 goals: Tinne De Caigny, Lorca Van De Putte, Sophia Koggouli, Eshly Bakker, Anouk Dekker, Emilie Haavi, Ingvild Stensland, Vanessa Rodrigues
1 goal: Albina Rrahmani, Aurora Serenaj, Furtuna Velaj, Janice Cayman, Maud Coutereels, Cécile de Gernier, Davina Philtjens, Elke Van Gorp, Sofia Pelekouda, Christina Kokoviadou, Dimitra Panteliadou, Tessel Middag, Sherida Spitse, Daniëlle van de Donk, Claudia van den Heiligenberg, Nora Holstad Berge, Melissa Bjånesøy, Marit Fiane Christensen, Ida Elise Enget, Solveig Gulbrandsen, Carole Costa, Edite Fernandes, Cristiana Garcia, Vanessa Malho, Mónica Mendes, Cláudia Neto, Regina Pereira
1 own goal: Lucie Gjini (playing against the Netherlands), Ezmiralda Franja (playing against Greece), Efrosini Xera (playing against Portugal)

References

External links: Women's World Cup – Qualifying round Group 5, UEFA.com

Category:2013 in Norwegian women's football
Category:2014 in Norwegian women's football
Category:2013–14 in Dutch women's football
Category:2014–15 in Dutch women's football
Category:2013–14 in Belgian football
Category:2014–15 in Belgian football
Category:2013–14 in Greek football
Category:2014–15 in Greek football
Category:2013–14 in Portuguese women's football
Category:2014–15 in Portuguese women's football
Category:2013–14 in Albanian football
Category:2014–15 in Albanian football
Tobler-moan: UK fans bare sweet teeth over scaled-down chocolate bar

Reuters Staff

ZURICH (Reuters) - British fans of Toblerone chocolate bars have bared their sweet teeth over a cost-cutting move to space out the distinctive jagged peaks on versions of the Swiss treat sold in the UK. The scaled-down version was prompted by higher commodity prices and had nothing to do with the British pound’s plunge in value since Britons voted in June to exit the European Union, manufacturer Mondelez International (MDLZ.O) said. All the same, a Tobler-moan broke out on social media in Britain as it was the third case in a month in which UK brands have taken steps - including hefty price rises - to offset higher costs for their products in the wake of the Brexit vote. “This must be up there with the dumbest corporate decisions of all time,” Toblerone customer Michal Tat posted. “You have a somewhat premium chocolate bar which is very well known for its distinctive shape, and to save money you change the shape? Now you have a premium-priced product that looks like a weird knock-off of itself... Shame on you, Mondelez.” “It’s not as if people eat Toblerone every day. You could literally double the price and people would still buy it. Fools,” posted Nicholas Barker. Mondelez reduced the weight of a version of Toblerone sold to British discounter Poundland to 150 grams from 170 grams by spacing its triangular chocolate peaks out more widely. Another altered version, lightened to 360 grams from 400 grams, is sold in stores other than Poundland, a Toblerone spokeswoman said. While denying that the reductions were related to any consequences of Brexit, Mondelez said on Tuesday that Toblerone bars would continue to be sold elsewhere without changes. “We always work hard to ensure we offer value for money for our consumers, but like many other companies, unfortunately we are experiencing higher costs for many ingredients,” the Toblerone spokeswoman said.
“We carry these costs for as long as possible, but to ensure Toblerone remains on shelf, is affordable and retains the iconic shape we all know and love, we have had to reduce the weight of this particular bar (for the UK market).” Mondelez exports Toblerone to 120 countries from a Swiss plant in Bern. Its main sales channel is duty-free outlets. Sugar SBc1 prices have risen about 45 percent this year. Milk prices have also started to rise, boosted by a pick-up in demand and tighter supplies in the EU. Cocoa prices CCc2 have been weaker this year but remain comparatively high after hitting a more than four-year peak late last year. Economists believe that sterling’s slump since the June vote - it is down about 19 percent against the dollar and 16 percent against the euro - will lead to higher prices in Britain despite fierce competition between supermarkets. Unilever (ULVR.L) was the first to move with an attempt to impose 10 percent rises on a host of big brands like savory spread Marmite, Pot Noodle and Magnum ice cream last month, triggering a dispute with supermarket group Tesco (TSCO.L). A bag of Britain’s biggest-selling potato chips is set to rise by 10 percent after maker Walkers said this week the sterling slump had pushed up manufacturing costs.
Formulate Fabric Make a stunning impression with Formulate™ fabric 10ft inline displays. Available in Essential, Master and Designer series collections, these 10ft inline displays are lightweight, highly portable, and easy to assemble using push-button connectors and pull-over fabric graphics.
Health officials are urging all patients who were exposed prior to March 4 to get tested. The last time we visited Washington state in these posts, there was a significant outbreak of measles in the county adjacent to the blue wonderland of Portland, Oregon. Now, public health officials have warned that more than 1,000 students have a potential risk for HIV and hepatitis B and C after they received dental care at 12 schools in and around Seattle due to improperly sterilized dental tools. The utensils, which were used to treat 1,250 kids at the schools’ dental clinics, weren’t properly sterilized, KING-TV reported. The King County Department of Health said the risk was low, but recommended students who may have been exposed get tested, according to KING 5. Neighborhood Care, which operates the clinics for low-income families, said in a statement in part, “We immediately re-trained all school-based dental staff in sterilization processes and policies. We will also reassure that all new and current dental assistants across the Neighborcare Health organization are following sterilization procedures.” Contracting HIV (Human Immunodeficiency Virus that causes AIDS) leads to a lifetime of antiviral drug treatments. Hepatitis B and C are liver diseases caused by viruses and can readily spread from exposure to contaminated blood or needles. The notice covers specific schools in the Seattle area. The affected schools in Seattle are Denny International Middle School, Chief Sealth International High School, Van Asselt Elementary, Mercer Middle School, Roxhill Elementary, West Seattle Elementary, Highland Park Elementary, Madison Middle School, Beacon Hill International and Bailey Gazert Elementary. The two Vashon Island schools are Chautauqua Elementary and McMurray Middle School. 
Neighborcare Health said that during the period in question, the handpieces were cleaned with a germicidal disinfectant that kills pathogens associated with hepatitis B, hepatitis C, and HIV – but some handpieces did not undergo required heat sterilization. All other instruments used during dental procedures were properly sterilized. Officials with the King County Department of Health say the infection risk is low and that the department has not yet received any reports of infected patients. ‘We are sincerely sorry for any distress this incident may have caused our patients, their families, and our partners,’ Neighborcare said in a statement. ‘We are working to be transparent in our understanding of what happened, the actual risk to potentially affected patients, and how we can ensure that this incident will not happen again.’ I would like to remind the health officials in Washington that the early-stage symptoms of HIV, as well as hepatitis B and C, may be mistaken for a severe cold or the flu. So, the statement that there are “no reports” of illness is meaningless until testing confirms that none of the children have an infection. In fact, public health officials have urged all patients who were exposed prior to March 4, the date the sterilization process was corrected, to get tested.
// Licensed to Elasticsearch B.V. under one or more contributor
// license agreements. See the NOTICE file distributed with
// this work for additional information regarding copyright
// ownership. Elasticsearch B.V. licenses this file to you under
// the Apache License, Version 2.0 (the "License"); you may
// not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

package mage

import (
	"path/filepath"

	"github.com/pkg/errors"

	devtools "github.com/elastic/beats/v7/dev-tools/mage"
)

const (
	// configTemplateGlob matches Auditbeat modules' config file templates.
	configTemplateGlob = "module/*/_meta/config*.yml.tmpl"
)

// OSSConfigFileParams returns the parameters for generating OSS config.
func OSSConfigFileParams() devtools.ConfigFileParams {
	params, err := configFileParams(devtools.OSSBeatDir())
	if err != nil {
		panic(err)
	}
	return params
}

// XPackConfigFileParams returns the parameters for generating X-Pack config.
func XPackConfigFileParams() devtools.ConfigFileParams {
	params, err := configFileParams(devtools.OSSBeatDir(), devtools.XPackBeatDir())
	if err != nil {
		panic(err)
	}
	return params
}

func configFileParams(dirs ...string) (devtools.ConfigFileParams, error) {
	var globs []string
	for _, dir := range dirs {
		globs = append(globs, filepath.Join(dir, configTemplateGlob))
	}

	configFiles, err := devtools.FindFiles(globs...)
	if err != nil {
		return devtools.ConfigFileParams{}, errors.Wrap(err, "failed to find config templates")
	}
	if len(configFiles) == 0 {
		return devtools.ConfigFileParams{}, errors.Errorf("no config files found in %v", globs)
	}
	devtools.MustFileConcat("build/config.modules.yml.tmpl", 0644, configFiles...)

	p := devtools.DefaultConfigFileParams()
	p.Templates = append(p.Templates, devtools.OSSBeatDir("_meta/config/*.tmpl"))
	p.Templates = append(p.Templates, "build/config.modules.yml.tmpl")
	p.ExtraVars = map[string]interface{}{
		"ArchBits": archBits,
	}
	return p, nil
}

// archBits returns the bit width of the given GOARCH architecture value.
// This function is used by the auditd module configuration templates to
// generate architecture-specific audit rules.
func archBits(goarch string) int {
	switch goarch {
	case "386", "arm":
		return 32
	default:
		return 64
	}
}
Love yourself enough to set boundaries. Your time and energy are precious and only you get to choose how you use them. Let's take care of ourselves and each other in these hard times. ♥️

All players who fulfil the requirements of this challenge in the next 7 days will get a share in the prize pool! There is no limit to the number of winners, and the more players get involved, the higher the prize pool will be!

Requirements:
- Win a bet in Mines with the following settings: 11 gems, 9 mines (1010.26x multiplier) and the given S pattern.
- Minimum bet: 0.00000100 0.00002800 0.00003800 0.00015000 3.60000000 0.03700000 0.53000000
- Bet must have been made after the commencement of this promotion.
- 1 valid entry per household.
- Hidden bets mode must be disabled during the whole duration of the promotion.
- Do not change your linked account during the whole duration of the promotion.
- 10+ post count on the forum.

Prize pool(s) (based on the number of unique players):
- Minimum prize pool: 0.01 BTC
- If over 30 participants: 0.03 BTC
- If over 40 participants: 0.05 BTC
- If over 50 participants: 0.06 BTC
- If over 200 participants: 0.08 BTC

How to enter: Respond to this topic, and link your bet IDs using the link function in the text editor. (Paste your bet ID, then highlight it and click on the link bet button, as shown in the image below.)
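For anyone who wants to sanity-check the math: the quoted 1010.26x multiplier is consistent with the usual Mines payout formula, assuming a standard 25-tile board and a 99% payout factor (both are my assumptions here, not something stated in the promotion):

```python
from math import comb

def mines_multiplier(tiles=25, mines=9, gems=11, payout=0.99):
    """Payout multiplier for revealing `gems` safe tiles in Mines.

    P(win) = C(tiles - mines, gems) / C(tiles, gems); the multiplier is
    the payout factor divided by P(win). The 25-tile board and 0.99
    payout factor are assumed, not taken from the promotion text.
    """
    p_win = comb(tiles - mines, gems) / comb(tiles, gems)
    return payout / p_win

print(round(mines_multiplier(), 2))  # → 1010.26
```

With 9 mines on 25 tiles there are only 16 safe tiles, so surviving 11 picks is roughly a 1-in-1020 shot, which is where the four-digit multiplier comes from.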
import { Component, Input } from '@angular/core';
import { takeUntil } from 'rxjs/operators';

import { ButtonState, SearchBoxState, NavbarService } from '../../services/navbar.service';
import { WithUnsubscribe } from '../../../utils/mixins/with-unsubscribe';
import { AuthService } from '../../../shared/services/auth.service';
import { Route } from '../../nav-menu/models';

@Component({
  selector: 'cs-navbar',
  templateUrl: './navbar.component.html',
  styleUrls: ['./navbar.component.scss'],
})
export class NavbarComponent extends WithUnsubscribe() {
  @Input() public routes: Route[];
  @Input() public currentRoute: Route;
  @Input() public currentSubroute: Route;

  public searchBoxOpen = false;
  public searchBoxState: SearchBoxState = this.toolbar.defaultSearchBoxState;
  public buttonState: ButtonState;

  constructor(private auth: AuthService, private toolbar: NavbarService) {
    super();
    this.toolbar.searchBoxState$.pipe(takeUntil(this.unsubscribe$)).subscribe(searchBoxState => {
      this.searchBoxState = searchBoxState;
    });
    this.toolbar.buttonState$.pipe(takeUntil(this.unsubscribe$)).subscribe(buttonState => {
      this.buttonState = buttonState;
    });
  }

  public onSearchBoxClose() {
    this.searchBoxState.query = '';
    this.searchBoxState.event(this.searchBoxState.query);
    this.searchBoxOpen = false;
  }
}
Gibbons' delay complicates O's roster By Spencer Fordin / MLB.com | FORT LAUDERDALE, Fla. -- The Orioles received a stay of discipline for designated hitter Jay Gibbons on Friday, a ruling that complicates the team's Opening Day roster and forces a last-minute decision on the veteran's status. Gibbons had previously been scheduled to serve a 15-game suspension for the purchase of performance-enhancing drugs, but the league owners and the Major League Baseball Players Association agreed Friday to postpone the discipline until after they've completed further negotiations on the Joint Drug Agreement. Kansas City outfielder Jose Guillen also had his suspension delayed. "I'm supposed to be suspended in three days, and now I'm not," said Gibbons, explaining his perspective on the late-breaking development. "Put it this way: I'm cautiously optimistic. At least they're talking." Gibbons was originally disciplined on Dec. 7 after a Sports Illustrated article reported that he'd received 10 shipments of performance-enhancing drugs from Signature Pharmacies between October 2003 and July 2005. Andy MacPhail, Baltimore's president of baseball operations, said he's not sure exactly how to read the latest ruling. MacPhail said he's not sure if the league will allow for another stay of discipline or make up its mind within 10 days. He also said that the late decision means that the Orioles may have an even harder time filling out their bench. When Baltimore expected Gibbons to begin the year on the restricted list, Tike Redman and Scott Moore positioned themselves as favorites for a reserve job. The Orioles adjusted by outrighting Redman off the roster Friday afternoon, but Moore will travel north with the team and won't make the roster unless he does so at Gibbons' expense. "From a club standpoint, the known is preferable to the unknown," said MacPhail.
"But having some experience in it, I understand that sometimes you can't get it all resolved to your satisfaction at a particular time. We're working our roster down. We have until Sunday. We'll try to figure out what makes the most sense and try to prepare." Gibbons, one of the longest-serving Orioles, batted .230 with six home runs last season. The Orioles have made noise about preferring a reserve who can play more than one defensive position, which has led to speculation that Gibbons may get released. In that case, Baltimore would have to eat two years and nearly $12 million of his contract. MacPhail hasn't expressed an opinion one way or the other, but he said ownership likely wouldn't stand in his way. "We haven't made a decision," MacPhail said in the moments after Baltimore's 4-3 win on Friday. "I haven't determined it at this point. I was waiting to see what the ruling was going to be. Now that we know, we have a couple of days to finalize our roster, and that's what we'll do. We'll weigh the pluses and minuses on what makes the most sense for us and go forward." Moore has proven adept at both infield corners, and manager Dave Trembley has even tried to use him in the outfield and at second base. Gibbons, on the other hand, is blocked at both first base and right field. If he's going to earn playing time this season, the left-handed hitter would have to do it at either left field or designated hitter. For now, though, Gibbons is just trying to figure out how the latest ruling affects his season. He had already accepted his suspension and was prepared to miss the first two weeks of the season. "Obviously, you feel a little more comfortable when you know what's going on in all sorts of realms," Gibbons said. "But, you know, it's part of the game. I was fully prepared to serve my suspension. If it doesn't happen, it doesn't happen." As for the roster spot, MacPhail said the Orioles would likely take all the time allotted to them.
Baltimore doesn't have to make a decision on the final spot until 3 p.m. on Sunday. The Orioles may make a waiver claim before then -- which could further complicate the process, but MacPhail doesn't expect to come to a hasty conclusion. "We'll take advantage of the 48 hours that we have," he said. "I'm certainly one that takes full use of his time. Things evolve. Generally, there is some activity at this time, with guys going back and forth, and some things happening. We'll take a look at the landscape, and use the time that is afforded us and make a decision." Spencer Fordin is a reporter for MLB.com. This story was not subject to the approval of Major League Baseball or its clubs.
--- abstract: 'Type Ia supernovae (SNe Ia) exhibit a wide diversity of peak luminosities and light curve shapes: the faintest SNe Ia are $10$ times less luminous and evolve more rapidly than the brightest SNe Ia. Their differing characteristics also extend to their stellar age distributions, with fainter SNe Ia preferentially occurring in old stellar populations and vice versa. In this Letter, we quantify this SN Ia luminosity – stellar age connection using data from the Lick Observatory Supernova Search (LOSS). Our binary population synthesis calculations agree qualitatively with the observed trend in the $> 1 \,$Gyr-old populations probed by LOSS if the majority of SNe Ia arise from prompt detonations of sub-Chandrasekhar mass white dwarfs (WDs) in double WD systems. Under appropriate assumptions, we show that double WD systems with less massive primaries, which yield fainter SNe Ia, interact and explode at older ages than those with more massive primaries. We find that prompt detonations in double WD systems are capable of reproducing the observed evolution of the SN Ia luminosity function, a constraint that any SN Ia progenitor scenario must confront.' author: - 'Ken J. Shen' - Silvia Toonen - Or Graur title: 'The Evolution of the Type [Ia ]{}Supernova Luminosity Function' --- Introduction {#sec:intro} ============ Type Ia supernovae (SNe Ia) are often referred to as “standard candles.” However, their intrinsic light curves vary significantly: bright SN 1991T-like SNe Ia are $10$ times more luminous and evolve more slowly than the faint SN 1991bg-likes (see @taub17a for a review). The relationship between intrinsic luminosity and light curve shape is often referred to as the [@phil93a] relation, and it forms the basis for the use of SNe Ia as cosmological distance indicators. 
Brighter and fainter SNe Ia also differ in their host galaxy distributions: bright SNe Ia occur more often in low mass spiral galaxies, while faint SNe Ia prefer high mass ellipticals [@hamu95a; @sull06a; @grau17b]. While the range of progenitor metallicities may account for some of the dispersion in the Phillips relation, no amount of metallicity variation can account for the entire SN Ia luminosity range for any progenitor scenario [@tbt03; @shen17b]. Thus, studies have suggested that the difference in host galaxy distributions of SN Ia subtypes is due to the differing ages of the underlying stellar populations. Linking stellar age to SN luminosity for Chandrasekhar-mass (${ M_{\rm Ch} }$) explosion models has not been extensively studied (for one example, see @wang14a) and appears difficult, if not impossible, to achieve. Adjusting various quantities (e.g., the density at which the deflagration transitions to a detonation or the number of initial deflagration kernels) does not produce the relatively tight correlation of the Phillips relation and also fails to yield the low luminosity, rapidly evolving SN 1991bg-likes (@sim13a [@blon17a]; although see @hoef17a). Since ${ M_{\rm Ch} }$ explosions do not reproduce the full range of the Phillips relation, connecting the stellar age to the various SN Ia subtypes is as yet impossible within the ${ M_{\rm Ch} }$ paradigm. Furthermore, it is not obvious why the deflagration-to-detonation transition density or number of ignition kernels would change with age. Note that the category of ${ M_{\rm Ch} }$ explosion models includes both standard “single degenerate” scenarios (e.g., @wi73) as well as “double degenerate” scenarios (e.g., @webb84) for which the ignition occurs at the center of a super-${ M_{\rm Ch} }$ merger remnant, as these have the same explosion mechanism and similar radiative output. 
At first glance, prospects appear better for sub-${ M_{\rm Ch} }$ explosion models, in which the luminosity of the SN Ia is directly related to the mass of the exploding WD [@sim10; @blon17a; @shen17b], a quantity that could conceivably vary with stellar age. Naïvely, it seems obvious that the masses of exploding sub-${ M_{\rm Ch} }$ WDs decrease with age, because WD masses are directly related to main sequence masses, which are inversely related to main sequence lifetimes, and thus dimmer SNe Ia would occur in older stellar populations as observed. However, half of all SNe Ia occur $> \unit[1]{Gyr}$ after their progenitor systems form (e.g., @maoz14a and references therein), much longer than the main sequence lifetimes of the stars that produce the $ \gtrsim 0.85 { \, M_\sun }$ WDs that yield SNe Ia. For sub-${ M_{\rm Ch} }$ explosions produced by double WD binaries, either by double detonations [@guil10] or direct carbon ignitions [@pakm10], the age of the system at the time of interaction is instead dominated by the gravitational wave inspiral timescale, which is itself a complicated outcome of multiple phases of stable and unstable mass transfer prior to the formation of the double WD system. Note that sub-${ M_{\rm Ch} }$ double detonation explosions may also occur in single degenerate systems in which the donor is a non-degenerate helium-rich star (e.g., @wtw86) or in triple star systems [@kush13a]; however, because predicted rates from these systems are much lower than the SN Ia rate [@geie13a; @toon17b], we restrict ourselves throughout the rest of this work to sub-${ M_{\rm Ch} }$ explosions in isolated double WD systems.
In this Letter, for the first time, we quantify the evolution of exploding WD masses and resulting SN Ia subtypes for sub-${ M_{\rm Ch} }$ double WD progenitors and compare to observational constraints.[^1] In §\[sec:loss\], we describe our basis for comparison: SN Ia subtypes and stellar age distributions inferred from the Lick Observatory Supernova Search (LOSS) survey. In §\[sec:seba\], we detail the methodology by which we derive the theoretical SN Ia subtype evolution from the `SeBa` binary population synthesis code. We conclude and outline future work in §\[sec:conc\]. Observed evolution of the luminosity function {#sec:loss} ============================================= During its first decade of operations, LOSS discovered more than 1000 SNe in the 14,882 galaxies it surveyed (e.g., @leam11a [@li11b]). [@li11b] constructed a volume-limited subsample that included 180 SNe and SN impostors. All SNe were classified spectroscopically, and individual SN light curves were used to calculate completeness corrections. The resulting sample is complete for SNe Ia out to 80 Mpc. The SNe in this volume-limited sample were recently reclassified, based on additional data and an updated understanding of SN physics, but SNe Ia were unaffected [@grau17a; @grau17b; @shiv17a]. The LOSS volume-limited sample is homogeneous, well-characterized, and spectroscopically complete. However, LOSS targeted massive, luminous galaxies, so that low-luminosity galaxies and SN 1991T-like SNe Ia, which are known to preferentially occur in these galaxies, are underrepresented. With this in mind, we restrict our comparisons to the galactic ages $ > \unit[1]{Gyr}$ that are well-sampled in LOSS. Future work will use data from volume-limited samples that include more SNe Ia in low-luminosity galaxies, which will allow us to better probe the early evolution of the luminosity function. 
Of the 74 SNe Ia in the updated volume-limited sample, we use the 70 SNe Ia that were classified as “normal,” SN 1991bg-like, SN 1991T-like, or SN 1999aa-like. We exclude SNe 1999bh, 2002es, 2005cc, and 2005hk, which were classified as either SN 2002es-like or SN 2002cx-like. Instead of relying on the discrete spectroscopic classifications of the SNe, we use the continuous and extinction-independent scale afforded by the $ { \Delta {\rm m}_{15}(B) }$ parameter, which measures the decrease in $B$-band magnitudes between peak and $ \unit[15]{d}$ after peak. Through the @phil93a width-luminosity relation, this parameter is a good proxy for the intrinsic luminosity of a SN Ia. Fifty-four SNe have $ { \Delta {\rm m}_{15}(B) }$ measurements performed by different groups [@hick09a; @cont10a; @gane13a]. Twenty-six SNe did not have enough points on their light curves to fit for $ { \Delta {\rm m}_{15}(B) }$ (J. M. Silverman and W. Zhang, private communication). To fill in these missing values, we perform a linear fit between the extant $ { \Delta {\rm m}_{15}(B) }$ values and the light-curve template number assigned to each LOSS SN by @li11b. Next, we estimate the ages of the SN host galaxies by making use of the correlation between a galaxy’s age and its stellar mass (e.g., @gall08a). We acknowledge that this relationship has large variance and that, furthermore, the average galaxy age is at best a rough proxy for the SN Ia progenitor’s age. We leave a more accurate derivation of SN Ia progenitor age to future work. LOSS estimated host-galaxy stellar masses based on their $B$- and $K$-band luminosities [@leam11a], but four of our host galaxies lack such estimates; they are assigned stellar masses using the method outlined by [@grau17b]. These masses are then used to estimate stellar ages using Sloan Digital Sky Survey (SDSS) data (@york00; @gall08a and private communication; @calu14a). 
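The gap-filling step described above, a linear fit between the measured ${ \Delta {\rm m}_{15}(B) }$ values and the light-curve template numbers assigned by @li11b, can be sketched as follows; the template numbers and ${ \Delta {\rm m}_{15}(B) }$ values below are invented illustrative data, not the LOSS measurements:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

# Invented (template number, dm15) pairs standing in for the SNe with
# well-sampled light curves; the real LOSS values are not reproduced here.
templates = [1, 3, 5, 7, 9, 11]
dm15 = [0.85, 1.0, 1.2, 1.4, 1.6, 1.85]

slope, intercept = linear_fit(templates, dm15)

# Fill in dm15 for SNe that only have a template number assigned.
for t in (2, 8):
    print(t, round(slope * t + intercept, 2))  # about 0.92 and 1.52 mag
```

The same procedure with any least-squares routine would do; the point is simply that the template number acts as a stand-in light-curve-shape parameter for the 26 SNe lacking direct ${ \Delta {\rm m}_{15}(B) }$ fits.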
We can further refine our stellar age estimates by also using the morphological information of the galaxies. [@gonz15a] present luminosity-weighted ages for a range of galaxy masses and Hubble types using data from the Calar Alto Legacy Integral Field Area (CALIFA) survey. We interpolate among their results and apply a constant $+0.35$ dex correction to convert from luminosity- to mass-weighted ages [@godd17a], which are more appropriate for the $> \unit[1]{Gyr}$ progenitors we consider. In the following section, we compare theoretical CDFs of SN Ia luminosities to observed CDFs for binned ages inferred from both methods. Theoretical evolution of the luminosity function {#sec:seba} ================================================ In order to predict the evolution of SN Ia subtypes from binary population synthesis calculations, we must construct a mapping from exploding WD mass, $M_1$, to ${ \Delta {\rm m}_{15}(B) }$, our observational proxy. Radiative transfer simulations of a suite of sub-${ M_{\rm Ch} }$ explosions were first performed by [@sim10]. Recently, @shen17b [hereafter, S17] reexamined the subject using more precise detonation calculations and found significant differences in the nucleosynthetic products. In complementary work, @blon17a [hereafter, B17] used a simplified nuclear network but improved upon the radiative transfer by employing a non-local thermodynamic equilibrium (non-LTE) code; they also found significant differences compared to [@sim10]. None of the aforementioned studies was able to completely reproduce the Phillips relation: [@sim10] and S17 derived light curves confined to high values of ${ \Delta {\rm m}_{15}(B) }$, and while B17 found a good match to the Phillips relation in the high luminosity, low ${ \Delta {\rm m}_{15}(B) }$ regime, they were unable to achieve the high values of ${ \Delta {\rm m}_{15}(B) }$ at faint luminosities. 
However, there are good reasons to believe that a combination of S17’s nucleosynthesis and a non-LTE radiative transfer calculation like B17’s will reproduce the Phillips relation. S17’s more detailed nucleosynthesis does not differ too substantially from that of B17 for higher WD masses $\simeq 1.1 { \, M_\sun }$, so a combination of the two improvements will not significantly alter B17’s good agreement with observations of bright SNe Ia. At lower WD masses $ \leq 0.9 { \, M_\sun }$, S17’s nucleosynthesis produces $\sim 3$ times more $^{56}$Ni than B17’s. Thus, a similar amount of $^{56}$Ni is produced in an explosion with a smaller ejecta mass, which implies a more rapid light curve evolution and higher values of $ { \Delta {\rm m}_{15}(B) }$ at low luminosities, pushing B17’s non-LTE calculations in the right direction. Confirmation of the ability of sub-${ M_{\rm Ch} }$ explosions to reproduce the entirety of the Phillips relation awaits future calculations combining detailed nucleosynthesis with non-LTE radiative transfer. For the remainder of this work, we assume that this effort will be successful and construct an appropriate mapping of exploding WD mass to $ { \Delta {\rm m}_{15}(B) }$. We assume SN 1991bg-likes with ${ \Delta {\rm m}_{15}(B) }= \unit[2.0]{mag} $ are produced by the explosions of $0.85 { \, M_\sun }$ WDs, as found by S17. At the opposite end, we adjust B17’s results to account for the slightly boosted $^{56}$Ni production found by S17, so that $1.15 { \, M_\sun }$ explosions yield light curves with ${ \Delta {\rm m}_{15}(B) }= \unit[0.7]{mag}$. Above $1.15 { \, M_\sun }$, we extend the mapping with an ad hoc linear relation between WD mass and ${ \Delta {\rm m}_{15}(B) }$. Finally, in between $0.85$ and $1.15 { \, M_\sun }$, we roughly convolve B17’s non-LTE radiation transport results with S17’s nucleosynthesis. This leads to the mapping shown in Figure \[fig:dm15map\]. 
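As a rough numerical sketch, a piecewise-linear version of this mapping built from only the two anchor points quoted above can be written as follows; this is an illustration under those assumptions, not the mapping of Figure \[fig:dm15map\], which convolves the B17 and S17 results and is only approximately linear between the anchors:

```python
def dm15_of_mass(m1):
    """Sketch of the exploding WD mass -> dm15(B) mapping.

    Anchors quoted in the text: a 0.85 Msun WD gives dm15 = 2.0 mag
    (SN 1991bg-like) and a 1.15 Msun WD gives dm15 = 0.7 mag. Here we
    simply interpolate and extrapolate linearly through these two points.
    """
    m_lo, dm_lo = 0.85, 2.0
    m_hi, dm_hi = 1.15, 0.7
    slope = (dm_hi - dm_lo) / (m_hi - m_lo)  # mag per solar mass, negative
    return dm_lo + slope * (m1 - m_lo)

print(round(dm15_of_mass(1.0), 2))  # a mid-range 1.0 Msun WD -> 1.35 mag
```

Any monotonic mapping with these endpoints would preserve the qualitative result of the next section: lower-mass primaries translate to larger ${ \Delta {\rm m}_{15}(B) }$, i.e. fainter, faster-evolving SNe Ia.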
![Assumed mapping of $M_1$ to ${ \Delta {\rm m}_{15}(B) }$ (*solid line*). A combination of the results from [@shen17b] (*dotted line*) and [@blon17a] (*dashed line*) is used to infer the mapping.[]{data-label="fig:dm15map"}](dm15map){width="\columnwidth"} We now turn to a theoretical prediction for the evolution of the exploding WD mass using the `SeBa` binary population synthesis code [@port96a; @toon12a]. We employ `SeBa` to simulate a large number of binaries focusing on those that lead to a merger between two WDs. The simulations include stellar evolution and interactions such as mass transfer and accretion, angular momentum loss, and gravitational wave emission. We only consider double WD progenitors that explode promptly as sub-$ { M_{\rm Ch} }$ detonations, before they can evolve into super-${ M_{\rm Ch} }$ remnants. We are agnostic as to the exact explosion mechanism, as long as it occurs shortly after the onset of mass transfer and in such a way that the light curve of the SN Ia is primarily determined by $M_1$, the mass of the more massive WD, which we constrain to be a C/O WD. Explosion mechanisms that fit these criteria can occur in merging double WD systems via “dynamically-driven double degenerate double detonations” [@guil10] or direct carbon ignitions [@pakm10]. Stably mass-transferring double WD systems may also lead to double detonation SNe Ia [@bild07], but recent work suggests that even extreme mass ratio double WD systems will merge unstably [@shen15a; @brow16b], so we continue under this assumption for simplicity. The `SeBa` simulations used here are based on the primary $\alpha \gamma$-Abt model in [@toon17a]. In this model, the common envelope (CE) prescription is tuned to best reproduce the observed double WD population [@nele00a; @toon12a]. The $\gamma$-CE prescription [@nele00a] is applied with $\gamma=1.75$, unless the binary contains a compact object or the CE is triggered by a tidal instability. 
In the latter case, the classical $\alpha$-CE prescription is applied [@pacz76; @webb84], with $\alpha\lambda=2$. The initial orbital separations follow a power-law distribution with an exponent of $-1$ [@abt83a]. For further information, see [@toon17a] and references therein. Note that while we show results using the $\gamma$-formalism in this Letter, the trends remain if we exclusively use the $\alpha$-prescription with $\alpha\lambda=2$. The retention efficiency of helium has been updated with respect to [@toon17a]. Based on recent modeling of helium accretion onto WDs [@pier14a; @broo16a], we assume that WDs accrete helium conservatively when the logarithm of the mass transfer rate is between $$\begin{aligned} \log_{10} \left( \frac{ \dot{M}_{\rm upper}}{M_\odot / {\rm yr}} \right) &=& -7.226 + 2.504 \left( \frac{ M_{\rm WD}}{M_\odot} \right) \nonumber \\ && -0.805 \left( \frac{ M_{\rm WD}}{M_\odot} \right)^2\end{aligned}$$ and $$\begin{aligned} \log_{10} \left( \frac{ \dot{M}_{\rm lower}}{M_\odot / {\rm yr}} \right) &=& -8.918+ 4.099 \left( \frac{ M_{\rm WD}}{M_\odot} \right) \nonumber \\ &&-1.232 \left( \frac{ M_{\rm WD}}{M_\odot} \right)^2 ,\end{aligned}$$ where $M_{\rm WD}$ is the mass of the accreting WD. Outside of this regime, the accretion is assumed to be completely non-conservative. The updated helium retention efficiency leads to less WD mass growth compared to previous assumptions [@kato99a; @bour13a; @ruit13a]. ![Primary and secondary WD masses at merger for short ($ \unit[1-3]{Gyr}$; *red circles*) and long ($ \unit[6-14]{Gyr}$; *green triangles*) delay times. We assume binaries above the solid line explode as SNe Ia.[]{data-label="fig:m1m2"}](m1m2){width="\columnwidth"} Figure \[fig:m1m2\] shows the primary and secondary WD masses at the time of merger for short and long delay times. It is clear that there is an overabundance of $\sim 0.875 { \, M_\sun }+ 0.825 { \, M_\sun }$ mergers in the old population compared to the young population. 
These primary masses are what we assume lead to SN 1991bg-like SNe; thus, if the currently theoretically uncertain criterion for which mergers lead to subluminous SNe includes only these binaries with relatively massive secondaries, the theoretical $ { \Delta {\rm m}_{15}(B) }$ distribution will shift toward subluminous SNe in older populations. So as to maximize SN 1991bg-likes in old populations while including as many SNe Ia overall as possible, we impose a quadratic minimum secondary mass as shown by the solid line in Figure \[fig:m1m2\]. While ad hoc, there is a physical basis for our chosen criterion. More massive secondaries yield more directly impacting accretion streams, and more massive primaries have higher gravitational potentials. Both of these effects lead to higher temperature hotspots during the merger, which more easily initiate detonations, suggesting a minimum secondary mass that varies inversely with primary mass. We note that the often-used $M_1+M_2 > { M_{\rm Ch} }$ constraint does not reproduce the observed luminosity function evolution; such a constraint yields too many subluminous SNe Ia in young stellar populations. ![Time between birth and merger vs. initial separation for $5.5 { \, M_\sun }+3.5 { \, M_\sun }$ binaries. Separations that lead to helium star – sub-giant mergers are shown in red; separations that yield double WD mergers are shown in black.[]{data-label="fig:tvsa"}](tvsa){width="\columnwidth"} In order to understand the relative overproduction of WD binaries with masses $\sim 0.875 { \, M_\sun }+ 0.825 { \, M_\sun }$ in the older population, we consider the evolution of main sequence binaries with masses $5.5 { \, M_\sun }+3.5 { \, M_\sun }$, which are the main progenitors of these double WD systems. Figure \[fig:tvsa\] shows the time between the birth of a $5.5 { \, M_\sun }+3.5 { \, M_\sun }$ binary and the merger of its two components vs. initial separation. 
For initial separations $< 19 \, R_\odot$, the secondary star fills its Roche lobe as it crosses the Hertzsprung gap before the primary becomes a WD, resulting in a helium star – sub-giant merger. For wider initial separations, this mass transfer occurs later, when the primary is already a WD, and leads to a common envelope and a surviving double WD binary whose separation and gravitational inspiral time are correlated with the initial separation. Such systems with merger times $\unit[1-3]{Gyr}$ do exist and will lead to subluminous SNe Ia in young populations, but they are significantly outnumbered by those with merger times $ \unit[6-14]{Gyr}$; thus, we find more faint SNe in old stellar populations. ![Cumulative distribution functions of ${ \Delta {\rm m}_{15}(B) }$ from the LOSS data (*dashed lines*, §\[sec:loss\]) for different age bins as labeled, compared to `SeBa` CDFs (*solid lines*, §\[sec:seba\]). The LOSS CDFs in the top panel use relations derived from SDSS data to estimate ages from galaxy masses; stellar ages in the bottom panel are inferred from galaxy masses and morphologies using data from the CALIFA survey. The youngest age bin’s theoretical CDF does not have an observational counterpart. (The data used to create the observational CDFs in this figure are available in the online journal.)[]{data-label="fig:cdfs"}](dm15CDF){width="\columnwidth"} The resulting theoretical CDFs for four age bins are shown in Figure \[fig:cdfs\]. The CDFs are significantly different from one another and in qualitative agreement with the observed CDFs from LOSS: younger stellar populations host fewer dim SNe Ia than older populations. Quantitative discrepancies certainly exist between the theoretical and observed CDFs. However, given the approximations in our analysis, our goal in this Letter is to merely demonstrate that double WD mergers have the capability to explain the evolution of the SN Ia luminosity function. 
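Schematically, each theoretical CDF is an empirical distribution of ${ \Delta {\rm m}_{15}(B) }$ obtained by pushing the simulated primary masses in a delay-time bin through the mass-to-${ \Delta {\rm m}_{15}(B) }$ mapping. A minimal sketch, with invented toy masses standing in for the actual `SeBa` merger catalog and a linear stand-in for the mapping:

```python
def dm15_of_mass(m1, m_lo=0.85, d_lo=2.0, m_hi=1.15, d_hi=0.7):
    """Linear stand-in for the mass -> dm15(B) mapping (an assumption)."""
    return d_lo + (d_hi - d_lo) / (m_hi - m_lo) * (m1 - m_lo)

def empirical_cdf(m1_values):
    """Return sorted dm15 values and their cumulative fractions."""
    dm15 = sorted(dm15_of_mass(m) for m in m1_values)
    n = len(dm15)
    return dm15, [(i + 1) / n for i in range(n)]

# Invented toy populations: the old bin carries an overabundance of
# ~0.875 Msun primaries, as in Figure 2; real inputs would be the
# delay-time-binned primary masses from the SeBa merger catalog.
young = [1.10, 1.05, 0.95, 1.00, 1.15]   # 1-3 Gyr (illustrative)
old = [0.875, 0.875, 0.90, 1.05, 0.875]  # 6-14 Gyr (illustrative)

x_young, cdf_young = empirical_cdf(young)
x_old, cdf_old = empirical_cdf(old)
```

With this construction, the old population's CDF reaches large ${ \Delta {\rm m}_{15}(B) }$ (subluminous events) with much higher probability than the young population's, reproducing the qualitative trend in Figure \[fig:cdfs\].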
Note that the lack of young, low-luminosity galaxies in the LOSS sample precludes a comparison to the theoretical CDF of the youngest age bin. The overall SN Ia rates from our binary population synthesis calculations range from $10.0 \times 10^{-15} \, M_\odot^{-1} {\rm \, yr^{-1}}$ at $\unit[1-3]{Gyr}$ after birth to $7.3 \times 10^{-15} \, M_\odot^{-1} {\rm \, yr^{-1}}$ at $\unit[6-14]{Gyr}$ after birth. These rates are $3-10$ times lower than the observed delay time distribution [@maoz17b]. However, this disagreement is within current uncertainties given the similar factor of a few discrepancy between the observed and theoretical local double WD space density [@maoz17a; @toon17a]. Conclusions {#sec:conc} =========== In this Letter, we have shown that prompt detonations in double WD systems can qualitatively explain the time evolution of the SN Ia luminosity function. Given the many approximations we have made, precise agreement between theory and observations is not expected and indeed is not achieved; we simply demonstrate a proof of concept. The largest observational uncertainties relate to our derivation of stellar ages from global galaxy properties such as mass and morphology. Future work can improve these age estimates by including information, particularly star formation proxies, local to the SN Ia site. Furthermore, upcoming surveys such as the Zwicky Transient Facility and the Large Synoptic Survey Telescope will greatly increase the numbers of SNe Ia, reducing Poisson errors and allowing more finely grained age bins, particularly for the low-mass, young galaxies not probed by LOSS. The theoretical side of this work relies on several assumptions that will be improved in the near future. A combination of more precise detonation simulations and non-LTE radiative transfer calculations is currently underway and will better quantify the mapping between exploding WD mass and ${ \Delta {\rm m}_{15}(B) }$. 
Future merger simulations will determine the minimum secondary mass that can trigger the primary WD to explode, obviating the need to impose an ad hoc constraint. Furthermore, concrete progress is being made in modeling common envelopes, which will reduce one of the largest binary population synthesis uncertainties. A more quantitative study measuring and reproducing the evolution of the SN Ia luminosity function awaits these and other improvements. Our work in this Letter simply demonstrates that prompt detonations in double WD systems have the capacity to match this evolution, a constraint that any progenitor scenario attempting to explain the majority of SNe Ia must confront. We gratefully acknowledge Samaya Nissanke and the organizers of the Physics of Extreme Gravity Stars workshop, where some of this work was carried out. We thank Alison Miller, Peter Nugent, and Mark Sullivan for helpful discussions and Anna Gallazzi for sharing data. KJS receives support from the NASA Astrophysics Theory Program (NNX15AB16G and NNX17AG28G). OG is supported by an NSF Astronomy and Astrophysics Fellowship under award AST-1602595. ST gratefully acknowledges support from the Netherlands Research Council NWO (VENI grant 639.041.645).
Abt, H. A. 1983, ARA&A, 21, 343
Bildsten, L., Shen, K. J., Weinberg, N. N., & Nelemans, G. 2007, ApJ, 662, L95
Blondin, S., Dessart, L., Hillier, D. J., & Khokhlov, A. M. 2017, MNRAS, 470, 157
Bours, M. C. P., Toonen, S., & Nelemans, G. 2013, A&A, 552, A24
Brooks, J., Bildsten, L., Schwab, J., & Paxton, B. 2016, ApJ, 821, 28
Brown, W. R., Kilic, M., Kenyon, S. J., & Gianninas, A. 2016, ApJ, 824, 46
Calura, F., Menci, N., & Gallazzi, A. 2014, MNRAS, 440, 2066
Contreras, C., Hamuy, M., Phillips, M. M., et al. 2010, AJ, 139, 519
Gallazzi, A., Brinchmann, J., Charlot, S., & White, S. D. M. 2008, MNRAS, 383, 1439
Ganeshalingam, M., Li, W., & Filippenko, A. V. 2013, MNRAS, 433, 2240
Geier, S., Marsh, T. R., Wang, B., et al. 2013, A&A, 554, A54
Goddard, D., Thomas, D., Maraston, C., et al. 2017, MNRAS, 466, 4731
González Delgado, R. M., García-Benito, R., Pérez, E., et al. 2015, A&A, 581, A103
Graur, O., Bianco, F. B., Huang, S., et al. 2017, ApJ, 837, 120
Graur, O., Bianco, F. B., Modjaz, M., et al. 2017, ApJ, 837, 121
Guillochon, J., Dan, M., Ramirez-Ruiz, E., & Rosswog, S. 2010, ApJ, 709, L64
Hamuy, M., Phillips, M. M., Maza, J., et al. 1995, AJ, 109, 1
Hicken, M., Challis, P., Jha, S., et al. 2009, ApJ, 700, 331
Hoeflich, P., Hsiao, E. Y., Ashall, C., et al. 2017, ApJ, 846, 58
Kato, M., & Hachisu, I. 1999, ApJ, 513, L41
Kushnir, D., Katz, B., Dong, S., Livne, E., & Fernández, R. 2013, ApJ, 778, L37
Leaman, J., Li, W., Chornock, R., & Filippenko, A. V. 2011, MNRAS, 412, 1419
Li, W., Leaman, J., Chornock, R., et al. 2011, MNRAS, 412, 1441
Maoz, D., & Graur, O. 2017, ApJ, 848, 25
Maoz, D., & Hallakoun, N. 2017, MNRAS, 467, 1414
Maoz, D., Mannucci, F., & Nelemans, G. 2014, ARA&A, 52, 107
Nelemans, G., Verbunt, F., Yungelson, L. R., & Portegies Zwart, S. F. 2000, A&A, 360, 1011
Paczyński, B. 1976, in IAU Symp. 73, The Structure and Evolution of Close Binary Systems, ed. P. Eggleton, S. Mitton, & J. Whelan (Dordrecht: Reidel), 75
Pakmor, R., Kromer, M., Röpke, F. K., et al. 2010, Nature, 463, 61
Phillips, M. M. 1993, ApJ, 413, L105
Piersanti, L., Tornambé, A., & Yungelson, L. R. 2014, MNRAS, 445, 3239
Piro, A. L., Thompson, T. A., & Kochanek, C. S. 2014, MNRAS, 438, 3456
Portegies Zwart, S. F., & Verbunt, F. 1996, A&A, 309, 179
Ruiter, A. J., Sim, S. A., Pakmor, R., et al. 2013, MNRAS, 429, 1425
Shen, K. J. 2015, ApJ, 805, L6
Shen, K. J., Kasen, D., Miles, B. J., & Townsley, D. M. 2017, submitted (arXiv:1706.01898)
Shivvers, I., Modjaz, M., Zheng, W., et al. 2017, PASP, 129, 054201
Sim, S. A., Röpke, F. K., Hillebrandt, W., et al. 2010, ApJ, 714, L52
Sim, S. A., Seitenzahl, I. R., Kromer, M., et al. 2013, MNRAS, 436, 333
Sullivan, M., Le Borgne, D., Pritchet, C. J., et al. 2006, ApJ, 648, 868
Taubenberger, S. 2017, arXiv:1703.00528
Timmes, F. X., Brown, E. F., & Truran, J. W. 2003, ApJ, 590, L83
Toonen, S., Hollands, M., Gänsicke, B. T., & Boekholt, T. 2017, A&A, 602, A16
Toonen, S., Nelemans, G., & Portegies Zwart, S. 2012, A&A, 546, A70
Toonen, S., Perets, H. B., & Hamers, A. S. 2017, arXiv:1709.00422
Wang, B., Justham, S., Liu, Z.-W., et al. 2014, MNRAS, 445, 2340
Webbink, R. F. 1984, ApJ, 277, 355
Whelan, J., & Iben, I., Jr. 1973, ApJ, 186, 1007
Woosley, S. E., Taam, R. E., & Weaver, T. A. 1986, ApJ, 301, 601
York, D. G., Adelman, J., Anderson, J. E., Jr., et al. 2000, AJ, 120, 1579
[^1]: We note that [@ruit13a] and [@piro14b] also studied the SN Ia luminosity function but did not analyze its evolution with time.
Sirhan Sirhan, the assassin of RFK, reportedly stabbed in California prison Sirhan Sirhan, the man convicted of murdering Robert F. Kennedy in 1968, was hospitalized after being stabbed Friday afternoon at a San Diego-area prison. A source confirmed to the Associated Press that the victim was the man serving a life sentence in connection with the death of Kennedy, a U.S. Senator from New York. Kennedy was assassinated at the Ambassador Hotel in Los Angeles in 1968, shortly after winning California's Democratic presidential primary. Sirhan was reported to be in stable condition after the attack. The circumstances that led to the stabbing were not immediately known. RFK GRANDDAUGHTER SAOIRSE KENNEDY HILL, 22, DEAD AT FAMILY COMPOUND Sirhan Sirhan was convicted of the 1968 assassination of Democratic presidential candidate Robert Kennedy. (Reuters) "The suspect in the attack has been identified and placed in the prison's administrative segregation unit pending an investigation," the California Department of Corrections and Rehabilitation said, according to Los Angeles-based KABC-TV. Sirhan is housed at the Richard J. Donovan Correctional Facility. This is a developing story. Check back for updates. The Associated Press contributed to this report.
INCLUDE_DIRECTORIES( ${BULLET_PHYSICS_SOURCE_DIR}/src ${BULLET_PHYSICS_SOURCE_DIR}/examples/ThirdPartyLibs ${BULLET_PHYSICS_SOURCE_DIR}/examples/ThirdPartyLibs/glad ) IF(NOT WIN32 AND NOT APPLE) INCLUDE_DIRECTORIES( ${BULLET_PHYSICS_SOURCE_DIR}/examples/ThirdPartyLibs/optionalX11 ) ADD_DEFINITIONS(-DGLEW_STATIC) ADD_DEFINITIONS("-DGLEW_INIT_OPENGL11_FUNCTIONS=1") ADD_DEFINITIONS("-DGLEW_DYNAMIC_LOAD_ALL_GLX_FUNCTIONS=1") ADD_DEFINITIONS("-DDYNAMIC_LOAD_X11_FUNCTIONS=1") ENDIF() ADD_DEFINITIONS( -DGLEW_STATIC -DGWEN_COMPILE_STATIC -D_HAS_EXCEPTIONS=0 ) FILE(GLOB gwen_SRCS "*.cpp" "Controls/*.cpp" "Controls/Dialog/*.cpp" "Controls/Dialogs/*.cpp" "Controls/Layout/*.cpp" "Controls/Property/*.cpp" "Input/*.cpp" "Platforms/*.cpp" "Renderers/*.cpp" "Skins/*.cpp") FILE(GLOB gwen_HDRS "*.h" "Controls/*.h" "Controls/Dialog/*.h" "Controls/Dialogs/*.h" "Controls/Layout/*.h" "Controls/Property/*.h" "Input/*.h" "Platforms/*.h" "Renderers/*.h" "Skins/*.h") ADD_LIBRARY(gwen ${gwen_SRCS} ${gwen_HDRS}) IF (BUILD_SHARED_LIBS) IF(WIN32 OR APPLE) target_link_libraries(gwen ${OPENGL_gl_LIBRARY}) ENDIF() ENDIF() INSTALL(TARGETS gwen RUNTIME DESTINATION bin LIBRARY DESTINATION lib${LIB_SUFFIX} ARCHIVE DESTINATION lib${LIB_SUFFIX})
Dissociable effects of reward magnitude on fronto-medial theta and FRN during performance monitoring. Reward processing is influenced by reward magnitude, as previous EEG studies have shown changes in the amplitude of the feedback-related negativity (FRN) and reward positivity (RewP), or in the power of fronto-medial theta (FMθ). However, it remains unclear whether these changes are driven by increased reward sensitivity, altered reward predictions, enhanced cognitive control, or a combination of these effects. To address this question, we asked 36 participants to perform a simple gambling task in which feedback valence (reward vs. no-reward), magnitude (small vs. large reward), and expectancy (expected vs. unexpected) were manipulated in a factorial design, while 64-channel EEG was recorded concurrently. We performed standard ERP analyses (FRN and RewP) as well as time-frequency decompositions (FMθ) of feedback-locked EEG data. Subjective reports showed that large rewards were more liked and more expected than small ones. At the EEG level, increasing magnitude led to a larger RewP irrespective of expectancy, whereas the FRN was not influenced by this manipulation. In comparison, FMθ power was overall increased when reward magnitude was large, except when the reward was unexpected. These results show dissociable effects of reward magnitude on the RewP and FMθ power. Further, they suggest that, although a large reward magnitude boosts reward processing (RewP), it can nonetheless undermine the need for enhanced cognitive control (FMθ) when the reward is unexpected. We discuss these new results in terms of an optimistic bias or positive mood resulting from increased reward magnitude.
# 20.30.15. The INFORMATION_SCHEMA INNODB_SYS_TABLESPACES Table The `INNODB_SYS_TABLESPACES` table stores information about `InnoDB` tablespaces and can be queried through `INFORMATION_SCHEMA`. Table 20.15. Columns of `INNODB_SYS_TABLESPACES` <table> <thead> <tr> <th scope="col">Column name</th> <th scope="col">Description</th> </tr> </thead> <tbody> <tr> <td scope="row"><code class="literal">SPACE</code></td> <td>The tablespace ID.</td> </tr> <tr> <td scope="row"><code class="literal">NAME</code></td> <td>The database and table name (for example, world_innodb\city).</td> </tr> <tr> <td scope="row"><code class="literal">FLAG</code></td> <td>Whether the table was created with the statement CREATE TABLE ... DATA DIRECTORY (0 = false, 1 = true).</td> </tr> <tr> <td scope="row"><code class="literal">FILE_FORMAT</code></td> <td>The tablespace file format (for example, [Antelope]() or [Barracuda]()). The data in this column is interpreted from the tablespace flags information that resides in the .ibd file. For more information about `InnoDB` file formats, see [Section 5.4.7, “InnoDB File-Format Management”]().</td> </tr> <tr> <td scope="row"><code class="literal">ROW_FORMAT</code></td> <td>The tablespace row format (for example, Compact or Redundant). The data in this column is interpreted from the tablespace flags information that resides in the .ibd file.</td> </tr> <tr> <td scope="row"><code class="literal">PAGE_SIZE</code></td> <td>The tablespace page size. The data in this column is interpreted from the tablespace flags information that resides in the .ibd file.</td> </tr> <tr> <td scope="row"><code class="literal">ZIP_PAGE_SIZE</code></td> <td>The tablespace zip (compressed) page size. The data in this column is interpreted from the tablespace flags information that resides in the .ibd file.</td> </tr> </tbody> </table> **Notes**: - You must have the `PROCESS` privilege to query this table. - Because the tablespace flags are always zero for all Antelope file formats (unlike table flags), there is no way to determine from this integer flag whether the tablespace row format is Redundant or Compact. As a result, the possible values of the ROW_FORMAT column are “Compact or Redundant”, “Compressed”, or “Dynamic”.
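A minimal example query against this table (the tablespace name pattern below is hypothetical; note that the `PROCESS` privilege is required):

```sql
-- List the metadata columns described above for matching tablespaces.
SELECT SPACE, NAME, FLAG, FILE_FORMAT, ROW_FORMAT, PAGE_SIZE, ZIP_PAGE_SIZE
  FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES
 WHERE NAME LIKE 'world_innodb%';
```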
708 F.Supp.2d 95 (2010) Robert HOCHSTADT and Edward Hazelrig, Jr., on behalf of themselves, the Boston Scientific Corp. 401(k) Retirement Savings Plan, and all others similarly situated, Plaintiffs, v. BOSTON SCIENTIFIC CORP. et al., Defendants. Civil Action No. 08-12139-DPW. United States District Court, D. Massachusetts. April 27, 2010. *97 Stuart J. Baskin, Kirsten Nelson Cunha, Christopher R. Fenton, John Gueli, Shearman & Sterling LLP, Lori G. Feldman, Milberg LLP, New York, NY, for Plaintiffs. Joy Hochstadt, Joy Hochstadt, P.C., New York, NY, Anne Hoffman, Lynch, Brewer, Hoffman & Fink LLP, Boston, MA, for Defendants. MEMORANDUM AND ORDER DOUGLAS P. WOODLOCK, District Judge. Before me is a motion seeking preliminary review[1] of a settlement agreement resolving two putative class actions[2] against Boston Scientific Corporation ("Boston Scientific" or the "Company") and alleged fiduciaries[3] of Boston Scientific Corporation's 401(k) Retirement Savings Plan (the "Plan") (collectively, the "Defendants"). Both class actions are based on the allegation that Defendants breached their fiduciary duty to the Plan and to the Plan's participants, in violation of the Employee Retirement Income Security Act ("ERISA"), by imprudently selecting Boston Scientific stock as an investment despite their knowledge that the stock price was artificially inflated. The proposed settlement class consists of participants in the Plan whose individual Plan accounts held an interest in Boston Scientific common stock at any time between May 7, 2004 and January 26, 2006 (the "Class Period"). For the reasons set forth more fully below, I will certify the settlement class and authorize the publication of the proposed class notice. *98 I. BACKGROUND A. Facts Boston Scientific develops, manufactures, and distributes medical devices used in the cardiovascular and endosurgery health care arena. 
During the Class Period, Boston Scientific administered the Plan in the interest of its participants. The Plan qualifies as an "employee pension benefit plan" within the meaning of ERISA § 3(2)(A), 29 U.S.C. § 1002(2)(A). Participants in the Plan make voluntary contributions and the Company makes matching contributions. Throughout the Class Period, participants could contribute to the Plan between 1% and 25% of their pre-tax annual compensation and between 1% and 10% of their compensation on an after-tax basis each year. Effective January 1, 2005, the Company provided a matching contribution equal to 200% of the employee's contribution for up to 2% of the employee's earnings, plus 50% of the next 4% of the employee's earnings. During the Class Period, the Plan offered approximately ten separate investment options, including Boston Scientific stock. Plaintiffs[4] allege that Defendants issued several misleading public disclosures, which caused the Company stock to be inflated artificially during the Class Period (the "Inflation Claim"). Plaintiffs further contend that, despite their knowledge that Boston Scientific stock was not a prudent investment for the Plan, Defendants continued to accept the Company matching contributions in Boston Scientific stock throughout the Class Period, thereby causing losses to the Plan's participants (the "Prudence Claim"). The alleged misleading disclosures are based on four events.[5] First, Plaintiffs contend that Defendants failed to disclose adequately the 1998 investigation conducted by the Department of Justice ("DOJ") concerning defective NIR stents. This investigation led the DOJ to file a civil complaint in 2005 charging Boston Scientific with distributing in interstate commerce medical devices that were altered and misbranded between 1998 and 2005 and resulted *99 in the payment by Boston Scientific of $74 million. 
Second, Plaintiffs allege that Defendants misrepresented the seriousness of the litigation with Medinol Ltd., one of Boston Scientific's suppliers, as to which Defendants agreed to pay a $750 million settlement in 2005. Third, Plaintiffs contend that Defendants failed to disclose adequately concerns associated with the 2004 recall of TAXUS stent systems when Defendants knew or should have known before the recalls took place that the TAXUS product contained dangerous manufacturing defects, which would lead to massive liabilities adversely affecting the Company stock. The last subject as to which Defendants allegedly made misrepresentations concerns a series of "warning letters" sent between 2005 and 2006 by the U.S. Food and Drug Administration ("FDA") to Boston Scientific in connection with FDA violations by several of its manufacturing facilities. B. Procedural History In January 2006, Plaintiffs Douglas Fletcher, Michael Lowe, Jeffrey Klunke, and Robert Hochstadt each filed separate class action complaints against Defendants. The four complaints were consolidated before Judge Tauro on April 3, 2006; a consolidated complaint was subsequently filed by Plaintiffs. In re Boston Scientific Corp. ERISA Litig., Civil Action No. 06-cv-10105-JLT (D.Mass.) ("ERISA I"). On October 10, 2006, Defendants filed a motion to dismiss the Consolidated Complaint. Judge Tauro denied Defendants' motion in significant part on August 27, 2007.[6]In re Boston Scientific Corp. ERISA Litig., 506 F.Supp.2d 73 (D.Mass. 2007). Thereafter, the parties commenced fact discovery. On March 12, 2008, Plaintiffs Fletcher, Lowe, Klunke and Hochstadt moved to certify the class under Federal Rule of Civil Procedure 23(a) and (b)(1). Klunke and Hochstadt later withdrew from the litigation. However, on June 30, 2008, Hochstadt filed a motion to intervene and asked to be reappointed as a class representative. 
On November 3, 2008, Judge Tauro denied the motion for class certification and Hochstadt's motion to intervene; he then dismissed the case because Plaintiffs Fletcher and Lowe lacked Article III standing. In re Boston Scientific Corp. ERISA Litig., 254 F.R.D. 24 (D.Mass. 2008). On December 2, 2008, Plaintiffs Fletcher, Lowe, and Hochstadt filed a notice of appeal. In re Boston Scientific Corp. ERISA Litig. (1st Cir. No. 08-2568). The matter is currently stayed in the First Circuit, pending settlement developments. On December 24, 2008, Plaintiffs Hochstadt and Hazelrig filed the instant action, Hochstadt et al. v. Boston Scientific Corp. et al., Civil Action 08-cv-12139-DPW (D.Mass.) ("ERISA II"), seeking to sidestep the standing issue and the problem of Hochstadt's failure to reenter the case through intervention, which together ended ERISA I before Judge Tauro. Thereafter, the parties resumed fact discovery where they left off in ERISA I. In doing so, the parties agreed that all documents produced in ERISA I would be deemed produced in ERISA II. Under the auspices of Settlement Counsel for the First Circuit, counsel for Plaintiffs Fletcher, Lowe, and Hazelrig and the Defendants in September 2009 agreed to settle ERISA I and ERISA II (collectively, *100 the "ERISA Actions") for an amount of $8.2 million (the "Settlement Amount") to be paid in cash by Boston Scientific and its insurer, St. Paul Mercury Insurance Company. On December 1, 2009, Plaintiffs Fletcher, Lowe, and Hazelrig filed a motion for preliminary review, see Note 1 supra, of the Proposed Settlement Agreement, contending that the agreement was an excellent result for the Settlement Class on whose behalf the ERISA Actions were brought. In their motion, they also sought certification of a mandatory class under Rule 23(b)(1) on the basis that the ERISA Actions involved Defendants' Plan-wide conduct and relief was sought on behalf of the Plan as a whole. 
Plaintiff Hochstadt filed an opposition to this motion by Plaintiffs Fletcher, Lowe, and Hazelrig, alleging that Fletcher and Lowe had been found to lack standing to settle the ERISA Actions, that the settlement amount was insufficient and that the non-opt-out provision, the proposed plan of allocation and the class notice were inappropriate. I held a hearing on January 13, 2010 in connection with my preliminary review of the Proposed Settlement Agreement. During that hearing, I requested the parties to file supplemental briefing on three points: (1) the participation of Plaintiffs Fletcher and Lowe in the class settlement in light of Judge Tauro's decision that they lacked Article III standing, (2) the proposed plan of allocation, which at the time did not separately take into consideration discrete disclosure events that occurred during the Class Period, and (3) the publication of the report of the independent fiduciary Plaintiff Hazelrig and Defendants agreed to retain to review the settlement agreement. On February 17, 2010, Plaintiff Hazelrig and Defendants submitted the Amended Stipulation and Agreement of Settlement (the "Amended Settlement Agreement") now before me. Pursuant to this amended agreement, Plaintiffs Fletcher and Lowe are now excluded from the Settlement Class, leaving Hazelrig the only settlement class representative.[7] The Amended Settlement Agreement incorporates a revised plan of allocation (the "Revised Plan of Allocation"), a copy of which is attached hereto as Exhibit A, which now provides for the payment of settlement proceeds to class members separately based upon discrete disclosure events that occurred during the Class Period. A revised class notice has been prepared reflecting the Revised Plan of Allocation (the "Revised Class Notice"). 
In addition, Plaintiff Hazelrig and the Defendants have committed to ensure that the report of the independent fiduciary they will retain will be made publicly available at least thirty days before the deadline for objecting to the Amended Settlement Agreement. *101 II. PRELIMINARY CLASS CERTIFICATION Before preliminarily determining whether the settlement is fair, I must determine whether to certify the class for settlement purposes. The Amended Settlement Agreement defines the Settlement Class as a non-opt-out class consisting of: [A]ll Participants in the Plan for whose individual accounts the Plan held an interest in Boston Scientific common stock at any time during the Class Period. Excluded from the Proposed Class are Douglas Fletcher, Michael Lowe, Defendants, members of the Defendants' Immediate Families, any officer, director or principal stockholder of Boston Scientific under Section 16 of the Securities Exchange Act of 1934, any entity in which a Defendant has a controlling interest, and their heirs, Successors-In-Interest, or assigns (in their capacities as heirs, Successors-In-Interest, or assigns). Am. Settlement Agreement, ¶ 1.1.29. I first address the standing issue before turning to the requirements of Federal Rule of Civil Procedure 23. A. Standing In a class action lawsuit, as in every law suit, "Article III standing is a `threshold requirement,' and the representative plaintiff must demonstrate personal injury in fact to certify a class." In re Boston Scientific ERISA Litig., 254 F.R.D. at 28. As discussed above, Judge Tauro previously dismissed class certification in ERISA I because the proposed class representatives, Fletcher and Lowe, failed to demonstrate individual injury in fact and therefore lacked Article III standing. Id. at 28-32. Plaintiff Hochstadt initially relied on Judge Tauro's ruling to show that Plaintiffs Fletcher and Lowe lacked standing to settle the ERISA Actions. 
Given that Plaintiffs Fletcher and Lowe are now excluded from the Settlement Class, see Notes 2 and 7 supra, the only issue at this point is therefore whether Plaintiff Hazelrig has adequate standing to settle the ERISA Actions as class representative. Hochstadt did not specifically address Hazelrig's standing in his initial briefing. Rather he merely contended, without adducing any evidence, that "the presently `proposed' representatives are not representative because they did not lose money[,] were not injured." For his part, Hazelrig asserted without evidentiary support, that he has constitutional and statutory standing to maintain and settle the ERISA Actions because he has suffered a compensable loss. At a further hearing in this matter on April 21, 2010, I directed counsel for Hazelrig to make a submission demonstrating that Hazelrig in fact has a compensable loss, which would support his standing to act as class representative. Counsel has submitted a Declaration from Candace L. Preston, CFA, who assisted crafting the Revised Plan of Allocation. Ms. Preston opined that Hazelrig would have a likely recovery of approximately $1,970.00. This is sufficient to establish standing at this stage.[8] For purposes of this preliminary review, I find that Plaintiff Hazelrig has the requisite standing to settle the ERISA Actions and will therefore focus my analysis on whether *102 the Rule 23 requirements are met in this case. B. Rule 23 1. Legal Standard In order to certify a class, "[a] district court must conduct a rigorous analysis of the prerequisites established by Rule 23." Smilow v. Sw. Bell Mobile Sys., Inc., 323 F.3d 32, 38 (1st Cir.2003) (citing Gen. Tel. Co. v. Falcon, 457 U.S. 147, 161, 102 S.Ct. 2364, 72 L.Ed.2d 740 (1982)). In doing so, "the question is not whether the plaintiff or plaintiffs have stated a cause of action or will prevail on the merits, but rather whether the requirements of Rule 23 are met." Waste Mgt. Holdings, Inc. v. 
Mowbray, 208 F.3d 288, 298 (1st Cir.2000) (quoting Eisen v. Carlisle & Jacquelin, 417 U.S. 156, 178, 94 S.Ct. 2140, 40 L.Ed.2d 732 (1974)). "[W]hen confronted with a request for settlement-only class certification, a district court need not inquire whether the case, if tried, would present intractable management problems, for the proposal is that there be no trial." Id. (quoting Amchem Prods., Inc. v. Windsor, 521 U.S. 591, 620, 117 S.Ct. 2231, 138 L.Ed.2d 689 (1997)). Nevertheless, "[w]hen a settlement class is proposed, it is incumbent on the district court to give heightened scrutiny to the requirements of Rule 23 in order to protect absent class members." In re Lupron Mktg. and Sales Practices Litig., 228 F.R.D. 75, 88 (D.Mass.2005) (citing Amchem, 521 U.S. at 620, 117 S.Ct. 2231). "This cautionary approach notwithstanding, the law favors class action settlements." Id. (citing City P'ship Co. v. Atl. Acquisition Ltd. P'ship, 100 F.3d 1041, 1043 (1st Cir.1996)). To obtain class certification, the plaintiff must establish the Rule 23(a) requirements of numerosity, commonality, typicality, and adequacy of representation and demonstrate that the action may be maintained under Rule 23(b)(1), (2), or (3). See Smilow, 323 F.3d at 38 (citing Amchem, 521 U.S. at 614, 117 S.Ct. 2231). Here, Hazelrig seeks to obtain class certification pursuant to Rule 23(b)(1). 2. Rule 23(a) In light of the First Circuit's instruction in Smilow that the court to which a settlement is tendered conduct a "rigorous analysis of the prerequisites established by Rule 23," I address each of the Rule 23 requirements, although only the typicality and the adequacy requirements appear to be in dispute. a. Numerosity In order to satisfy Rule 23(a)(1)'s numerosity requirement, Plaintiff must demonstrate that "the class [would be] so numerous that joinder of all members is impracticable." FED.R.CIV.P. 23(a)(1).
This requirement is easily met here because the Settlement Class consists of approximately 12,000 Boston Scientific employees who held Boston Scientific stock in their Plan accounts during the Class Period. b. Commonality Rule 23(a)(2)'s commonality requirement is satisfied when "there are questions of law or fact common to the class." FED. R. CIV. P. 23(a)(2). "While at least one common issue of fact or law at the core of the action must shape the class, Rule 23(a) does not require that every class member share every factual and legal predicate of the action." In re Lupron, 228 F.R.D. at 88. "The threshold of commonality is not a difficult one to meet." In re Relafen Antitrust Litig., 231 F.R.D. 52, 69 (D.Mass.2005). In this case, there are a number of common issues of fact and law among the Settlement Class members that bear upon *103 establishing the Defendants' liability, as well as Plaintiffs' entitlement to damages. Such questions concern Defendants' alleged breaches of fiduciary duties under ERISA and their impact on the price of Boston Scientific stock. Accordingly, I conclude that the commonality requirement of Rule 23(a)(2) is satisfied. c. Typicality The typicality requirement set forth in Rule 23(a)(3) requires that "the claims or defenses of the representative parties are typical of the claims or defenses of the class." FED.R.CIV.P. 23(a)(3). "The representative plaintiff satisfies the typicality requirement when its injuries arise from the same events or course of conduct as do the injuries of the class and when plaintiff's claims and those of the class are based on the same legal theory." In re Credit Suisse-AOL Sec. Litig., 253 F.R.D. 17, 23 (D.Mass.2008). The typicality inquiry "is designed to align the interests of the class and the class representatives so that the latter will work to benefit the entire class through the pursuit of their own goals." In re Prudential Ins. Co. of Am. Sales Practice Litig., 148 F.3d 283, 311 (3d Cir.1998).
"Rule 23(a)(3), however, does not require that the representative plaintiff's claims be identical to those of absent class members." In re Credit Suisse, 253 F.R.D. at 23. Here, Plaintiff Hazelrig was a Boston Scientific employee and his claim arises from the fact that he held Boston Scientific stock in his Plan account during the Class Period. Contrary to Hochstadt's allegations, Hazelrig's claim is therefore based on the same basic legal theory as the claims of all other class members. This fact is sufficient to support a finding of typicality because I need not determine that Plaintiff Hazelrig's claims and the claims of the Settlement Class are precisely aligned as to all issues in order to find that Hazelrig has satisfied his burden with respect to the typicality requirement. See In re Tyco Int'l, Ltd. Multidistrict Litig., No. MD-02-1335-PB, 2006 WL 2349338, at *6 (D.N.H. Aug. 15, 2006). Under these circumstances, for purposes of settlement only, I conclude that the claims asserted by Plaintiff Hazelrig are sufficiently typical of the claims of the Settlement Class as a whole to satisfy Rule 23(a)(3). d. Adequacy The final requirement articulated in Rule 23(a)(4) requires that the proposed class representatives "fairly and adequately protect the interests of the class." FED. R.CIV.P. 23(a)(4). This entails a two-prong showing: "The moving party must show first that the interests of the representative party will not conflict with the interests of any of the class members, and second, that counsel chosen by the representative party is qualified, experienced and able to vigorously conduct the proposed litigation." Andrews v. Bechtel Power Corp., 780 F.2d 124, 130 (1st Cir. 1985). The first prong of the test seeks to ensure that the interests of the class representatives are aligned with the interests of absent class members.
For essentially the same reasons that Plaintiff Hazelrig's claims are "typical" of the claims of the Settlement Class, I find that, for purposes of the settlement,[9] Plaintiff Hazelrig's interests do not conflict with the interests of other class members. Cf. In re Credit *104 Suisse, 253 F.R.D. at 22 (noting that "[t]he requirements of typicality and adequacy tend to merge"). I also find that, unlike Hochstadt's new counsel,[10] Plaintiff Hazelrig's counsel have demonstrated that they are qualified, experienced, and are fully prepared to represent the Settlement Class to the best of their abilities. Accordingly, I conclude that the adequacy requirement is satisfied in this case. In sum, all of the Rule 23(a) requirements are met. 3. Rule 23(b) Plaintiff Hazelrig seeks class certification under Rule 23(b)(1)(B).[11] In addition to satisfying the four elements set forth in Rule 23(a), Rule 23(b)(1) provides that a class action may be maintained if: (1) prosecuting separate actions by or against individual class members would create a risk of: . . . (B) adjudications with respect to individual class members that, as a practical matter, would be dispositive of the interests of the other members not parties to the individual adjudications or would substantially impair or impede their ability to protect their interests. FED.R.CIV.P. 23(b)(1)(B). Because Rule 23(b)(1) does not provide opt-out protections, *105 class actions brought under this rule "are often referred to as `mandatory' class actions." Ortiz v. Fibreboard Corp., 527 U.S. 815, 833 n. 13, 842, 119 S.Ct. 2295, 144 L.Ed.2d 715 (1999). Plaintiff Hazelrig argues that certification of a non-opt-out class under Rule 23(b)(1) is appropriate in this case because the ERISA Actions involve Defendants' Plan-wide conduct and relief is sought on behalf of the Plan as a whole under ERISA § 502(a)(2), 29 U.S.C. § 1132(a)(2).
In making this argument, Hazelrig relies on the assumption that "[s]uits brought pursuant to this provision are derivative in nature; those who bring suit do so on behalf of the plan and the plan takes legal title to any recovery." Evans v. Akers, 534 F.3d 65, 70 n. 4 (1st Cir.2008) (citing Mass. Mut. Life Ins. Co. v. Russell, 473 U.S. 134, 141, 105 S.Ct. 3085, 87 L.Ed.2d 96 (1985)). Generally, an action "charging `a breach of trust by an indenture trustee or other fiduciary similarly affecting the members of a large class' of beneficiaries, requiring an accounting or similar procedure `to restore the subject of the trust'" is a classic example of the type of case appropriate for certification under Rule 23(b)(1)(B). Ortiz, 527 U.S. at 833-34, 119 S.Ct. 2295 (quoting FED.R.CIV.P. 23 advisory committee's notes). Not surprisingly, therefore, "[i]n light of the derivative nature of ERISA § 502(a)(2) claims, breach of fiduciary duty claims brought under § 502(a)(2) are paradigmatic examples of claims appropriate for certification as a Rule 23(b)(1) class, as numerous courts have held." In re Schering Plough Corp. ERISA Litig., 589 F.3d 585, 604 (3d Cir. 2009); Evans v. Akers, No. 04-11380-WGY, slip op. at 4 (D.Mass. Oct. 7, 2009) (finding class certification appropriate under Rule 23(b)(1)(B) because "[g]iven the Plan-representative nature of Named Plaintiffs' breach of fiduciary duty claims, there is a risk that failure to certify the Settlement Class would leave future plaintiffs without relief"); Stanford v. Foamex L.P., 263 F.R.D. 156, 174 (E.D.Pa.2009) ("because of the unique and representative nature of an ERISA § 502(a)(2) suit, numerous courts have held class certification proper pursuant to Rule 23(b)(1)(B)"); In re Nortel Networks Corp. ERISA Litig., No.
03-MD-01537, 2009 WL 3294827, at *15 (M.D.Tenn.2009) (finding class certification appropriate under Rule 23(b)(1)(B) because "[i]f individual adjudications would be dispositive of the interests of other Plan Participants, it would be better for those Plan Participants to be members of a class"); Jones v. NovaStar Fin., Inc., 257 F.R.D. 181, 193 (W.D.Mo.2009) (certifying a class under Rule 23(b)(1)(B) because "[g]iven that [named plaintiff]'s claim seeks `Plan-wide relief, there is a risk that failure to certify the class would leave future plaintiffs without relief'"); In re Merck & Co., Inc. Sec., Derivative & ERISA Litig., MDL No. 1658, 2009 WL 331426, at *10 (D.N.J. Feb. 10, 2009) (finding class certification appropriate under Rule 23(b)(1)(B) because "[i]f the prudence claims proceeded individually, and one court removed a Plan fiduciary, this would be, as a practical matter, dispositive of the interests of the other Plan members in that particular regard"); In re Tyco Int'l, Ltd. Multidistrict Litig., 2006 WL 2349338, at *7 ("the majority of courts have concluded that certification under 23(b)(1)(B) is proper" for ERISA fiduciary class actions).[12] Given that the present case involves an ERISA § 502(a)(2) claim brought on behalf *106 of the Plan and alleging breaches of fiduciary duty on the part of Defendants that will, if true, be the same with respect to every class member, I find that Rule 23(b)(1)(B) is clearly satisfied. Accordingly, I conclude that the Settlement Class should be certified. C. Subclasses Plaintiff Hochstadt argues that subclasses should be created to reflect the greater needs of those who are "retired and approaching retirement." Under Rule 23(c)(5), "[w]hen appropriate, a class may be divided into subclasses that are each treated as a class under this rule." FED.R.CIV.P. 23(c)(5). "Subclasses must be created when differences in the positions of class members require separate representatives and separate counsel."
MANUAL FOR COMPLEX LITIGATION (FOURTH) § 21.23 (2004). Subclassing may also provide "structural guaranties that a proposed settlement is fair." Natchitoches Parish Hosp. Serv. Dist. v. Tyco Int'l, Ltd., 247 F.R.D. 253, 269 (D.Mass.2008) (citing 1 HERBERT B. NEWBERG & ALBA CONTE, NEWBERG ON CLASS ACTIONS § 3.31 (4th ed. 2002) ("When the class members are united in interest on the liability issues but have potential conflicts regarding the nature of the relief or the division of a monetary award, the court may avoid the potential conflict by creating subclasses")). I reject Hochstadt's contention that subclasses should be created in this case because the needs-based subclasses he proposes would not treat all class members in an equitable manner and would make the distribution of the Settlement Amount unduly complicated. Hochstadt cites no case law, and my research has not identified any, in which a court has certified subclasses based on the personal needs of the class members rather than on their losses, which are at the core of their claims. Hochstadt does not, in any event, offer a workable way to create subclasses to reflect the greater needs of those who are "retired and approaching retirement." Under these circumstances, I overrule Hochstadt's objection based upon a failure to create subclasses in the present case. Moreover, to the degree that the differentiation of claims that subclassing allows is appropriate, I find the Revised Plan of Allocation serves that purpose adequately. III. PRELIMINARY FAIRNESS DETERMINATION A. Legal Standard Pursuant to Rule 23(e), "[t]he claims, issues, or defenses of a certified class may be settled, voluntarily dismissed, or compromised only with the court's approval." FED.R.CIV.P. 23(e). When approving a settlement: [T]he judge is required to scrutinize the proposed settlement to ensure that it is fair to the persons whose interests the court is to protect. Those affected may be entitled to notice and an opportunity to be heard.
This usually involves a two-stage procedure. First, the judge reviews the proposal preliminarily to determine whether it is sufficient to warrant public notice and a hearing. If so, the final decision on approval is made after the hearing. *107 MANUAL FOR COMPLEX LITIGATION (FOURTH) § 13.14 (2004). Therefore, before making a final decision on the "approval" of a settlement, a court must first make a "preliminary determination on the fairness, reasonableness, and adequacy of the settlement terms." Id. § 21.632. While "policy encourages settlements, the burden remains on the proponents to show that the settlement is reasonable." Nat'l Ass'n of Chain Drug Stores v. New England Carpenters Health Benefits Fund, 582 F.3d 30, 44 (1st Cir. 2009) (internal citations omitted). "Rule 23's reasonableness standard has been given substance by case law offering laundry lists of factors, most of them intuitively obvious and dependent largely on variables that are hard to quantify." Id. Nonetheless, there is generally a presumption in favor of the settlement "[i]f the parties negotiated at arm's length and conducted sufficient discovery." In re Pharm. Indus. Average Wholesale Price Litig., 588 F.3d 24, 32-33 (1st Cir.2009) (citing City P'ship, 100 F.3d at 1043). More specifically, a presumption of fairness attaches to the court's preliminary fairness determination when "(1) the negotiations occurred at arm's length; (2) there was sufficient discovery; (3) the proponents of the settlement are experienced in similar litigation; and (4) only a small fraction of the class objected." In re Lupron Mktg. and Sales Practices Litig., 345 F.Supp.2d 135, 137 (D.Mass.2004) (quoting In re Gen. Motors Corp. Pick-Up Truck Fuel Tank Prods. Liab. Litig., 55 F.3d 768, 785 (3d Cir. 1995)). B. Application to the Amended Settlement Agreement 1.
The Negotiations Occurred at Arm's Length Plaintiff Hochstadt argues that the settlement agreement was the result of collusion and did not occur at arm's length. The record shows otherwise. Settlement discussions began on February 10, 2009, during the pre-argument settlement conference at the Court of Appeals, although the parties did not reach an agreement at that stage. The parties resumed settlement discussions on or about August 14, 2009. During the month that followed, they exchanged multiple settlement proposals and involved Paul W. Sandman, Boston Scientific's general counsel, in the settlement negotiations. On or about the evening of September 11, 2009, the parties reached a compromise to settle the ERISA Actions. Thereafter, the parties' counsel spent two months detailing the settlement terms. Under these circumstances, I find that the time spent and the efforts made by the parties on both sides during the settlement negotiations are persuasive indicators that the Amended Settlement Agreement was not the result of collusion but rather the result of negotiations conducted at arm's length. Accordingly, I conclude that this requirement is satisfied. 2. Sufficient Discovery Was Conducted Plaintiff Hochstadt alleges that the proponents of the settlement cannot justify the Settlement Amount because discovery has not been completed. In doing so, Hochstadt misconceives the applicable standard, which does not require that discovery be completed, but rather that sufficient discovery be conducted to make an intelligent judgment about settlement. Applying this standard, I find that sufficient discovery was undertaken. Fact discovery began in 2006 with ERISA I. When the parties engaged in fact discovery for ERISA II, they resumed fact discovery *108 where they left off in ERISA I and adopted an aggressive schedule.
The parties' counsel agreed that all documents produced in ERISA I would be deemed produced in ERISA II and reviewed over three million pages of documents produced by Defendants. Fact discovery also involved the depositions of several witnesses, including the Plan administrator and an outside consultant to the Plan. Accordingly, I find that, given the thorough investigation of the facts over the last four years, this case is at a stage where both the court and counsel are able to evaluate the merits of the claims. 3. The Proponents of the Settlement Are Experienced in Similar Litigation The experience of co-lead counsel in this case is apparent from their prosecution of the ERISA Actions over the last four years. They have worked on these actions from the inception of their pre-filing factual investigation, filed the successive complaints, and conducted discovery. In addition, they have significant experience in ERISA and related investor disputes and in class action litigation generally. Plaintiff Hochstadt recognizes that, in his words, "[t]here is no doubt that proposing counsel teams have extensive experience in the field." Accordingly, I am satisfied that the proponents of the settlement are experienced in similar litigation. 4. Number of Objections Because the notice to the Settlement Class has not yet been issued, this factor can only be assessed preliminarily based on the objection of Plaintiff Hochstadt. To date, Hochstadt is the only known objector to the Amended Settlement Agreement. However, when the notice is issued, other putative class members will be given a full opportunity to develop and lodge any objection. In doing so, putative class members will be able to rely on the independent fiduciary's report, which will be made publicly available at least thirty days before the expiration of the deadline for objecting to the Amended Settlement Agreement.
Whether a significant number of the class members will ultimately object to the Amended Settlement Agreement when it is proposed for final approval will be further discussed at that time. At this point, there is but one objector, a party who has variously sought, then abandoned, and then again sought representative status. Under these circumstances and for purposes of this preliminary review, I find the Amended Settlement Agreement to be fair, adequate and reasonable as a general proposition. I turn now to specific objections. IV. OBJECTIONS[13] A. Objection to the Proposed Settlement Amount Hochstadt first objects to the Settlement Amount, as defined in paragraph 1.1.37 of the Amended Settlement Agreement, which he considers to be "inadequate." I find, however, that the amount of $8.2 million offered by Defendants is reasonable in light of the risks of continuing litigation. As noted by Plaintiff Hazelrig, ERISA I has been dismissed and the outcome of the appeal from that dismissal remains unknown. Both ERISA I, if reinstated, and ERISA II could face significant legal and factual hurdles in obtaining any recovery greater than the Settlement *109 Amount. In addition, I find the Settlement Amount to be reasonable in light of the best possible recovery. While co-lead counsel preliminarily estimated damages at approximately $160 million based on a "lost opportunity" theory, that theory was rejected by Judge Tauro in ERISA I in favor of the "out-of-pocket" theory, thereby reducing class members' prospect for damages. See In re Boston Scientific Corp. ERISA Litig., 254 F.R.D. at 28-32. This prospect of more limited damages is also reinforced by the fact that no loss was apparently sustained as a result of the litigation with Medinol, Ltd. or the DOJ investigation, because the stock price did not decline when settlements with Medinol and the DOJ were disclosed.
Damages obtainable at trial would at best be limited to stock declines associated with the TAXUS recalls and the FDA warning letters, which approximate $30 million. Moreover, it bears noting, see Note 5 supra, that I have today granted summary judgment for Defendants in the parallel securities litigation on the TAXUS recall claim and that Plaintiffs in that litigation had earlier abandoned the FDA warning and other claims. In short, the amount of $8.2 million, which represents approximately 27% of the more conservatively estimated $30 million loss, is plainly reasonable in a disputable matter such as this. In sum, the risks of continuing litigation and the best possible recovery make it uncertain, if not unlikely, that Defendants would ever be required to pay more through further litigation than they are willing to pay now. Accordingly, I find the Settlement Amount to be reasonable. B. Objection to the Revised Plan of Allocation[14] Hochstadt also objects to the proposed distribution of the Settlement Amount.[15] He argues that the Settlement Amount should be allocated to provide greater benefits to "those retired and approaching retirement." As with the settlement itself, "the plan of allocation must be fair, reasonable, and adequate." In re Tyco Int'l, Ltd. Multidistrict Litig., 535 F.Supp.2d 249, 262 (D.N.H.2007). For the reasons discussed in Section II.C. supra with respect *110 to subclasses, I disagree with Hochstadt that the proposed Plan of Allocation should be based on the needs of the members of the Settlement Class rather than on their losses. Furthermore, I find the Revised Plan of Allocation to be reasonable because it provides for payments of settlement proceeds to class members based upon discrete disclosure events that occurred during the Class Period.
Under this Revised Plan of Allocation, Plan participants who held and purchased Boston Scientific stock in the Plan prior to the various disclosures of the TAXUS stent problems and FDA issues but did not sell until after such disclosures were made have "Recognizable Claims," whereas those who sold before the disclosures were made are not entitled to receive any settlement proceeds. In addition, damages are limited to the market loss a Plan participant actually incurred; therefore, if the Plan participant had a market gain, he will be deemed not to have suffered any damages. Under these circumstances and for purposes of this preliminary review, I find the Revised Plan of Allocation to be fair, reasonable, and adequate. In addition, I note that, as part of its review of the Amended Settlement Agreement, the independent fiduciary will specifically review the allocation formula for reasonableness and the results of that review will be available before the final fairness hearing is conducted. C. Objection to the Revised Class Notice Finally, Hochstadt argues that the proposed class notice is inappropriate because it does not clearly and concisely recite the distribution of the Amended Settlement Amount or any method for calibrating the apportionment of the proceeds in light of the age of the class members. While it is not mandatory, "the court may direct appropriate notice to the class" certified under Rule 23(b)(1). FED. R.CIV.P. 23(c)(2)(A). Contrary to Hochstadt's allegations, I find the Revised Class Notice, as defined in Exhibit B.1 of the Amended Settlement Agreement, to be appropriate because it clearly provides background information on the ERISA Actions, accurately recites the legal rights and options of the Settlement Class, and fully explains the Revised Plan of Allocation of the Settlement Amount in light of the discrete disclosure events that occurred during the Class Period. V.
CONCLUSION For the reasons set forth more fully above, I GRANT class certification of the Settlement Class, as defined in paragraph 1.1.29 of the Amended Settlement Agreement, and AUTHORIZE the publication of the Revised Class Notice, as defined in Exhibit B.1 of the Amended Settlement Agreement. A Final Fairness Hearing will be held before me in Courtroom 1 of the John Joseph Moakley Courthouse in Boston at 2:30 p.m., August 5, 2010. NOTES [1] By its terms, the motion seeks "preliminary approval" of the settlement agreement. As will be discussed below, the approval of a settlement agreement is a two-step process, which first requires the court to make a preliminary determination regarding the fairness, reasonableness, and adequacy of the settlement terms. MANUAL FOR COMPLEX LITIGATION (FOURTH) § 13.14 (2004). It is only after the second step, the fairness hearing, has taken place, however, that the court may "approve" the settlement agreement. Id.; see generally PRINCIPLES OF THE LAW OF AGGREGATE LITIG. § 3.03, cmt. a (Proposed Final Draft, Apr. 1, 2009, approved in substance at the American Law Institute's 2009 Annual Meeting) (discussing preliminary review process). Accordingly, I have replaced the term "approval" with the term "review" for this step in the process. [2] The settlement agreement before me would effectively resolve two parallel class actions based on the same allegations: (A) Hochstadt et al. v. Boston Scientific Corp. et al., Civil Action No. 08-cv-12139-DPW (D.Mass.) ("ERISA II"), in which Plaintiff Hazelrig is a representative party and (B) In re Boston Scientific Corp. ERISA Litig., Civil Action No. 06-cv-10105-JLT (D.Mass.) ("ERISA I") (appeal currently pending, In re Boston Scientific Corp. ERISA Litig. (1st Cir. No.
08-2568)), in which the named representative parties—who were found not to have standing in ERISA I, as a consequence of which they have no standing in ERISA II under principles of res judicata—are purporting to prosecute on behalf of other ERISA II class members. As noted at Note 7 infra, the named representative parties in ERISA I have entered into a separate individual settlement as to their claims. [3] The individual defendants include James R. Tobin, Peter M. Nicholas, John E. Abele, Joel L. Fleishman, Ernest Mario, Ph.D., Uwe E. Reinhardt, John E. Pepper, Ursula M. Burns, Marye Anne Fox, Ph.D., Ray J. Groves, N.J. Nicholas, Jr., Senator Warren B. Rudman, Lawrence C. Best, Robert G. MacLean, Lucia L. Quinn, Paul W. Sandman, Richard A. Duffy, Warren Clark III, Rose Marie Brana Haslinger, Jamie Rubin, John and Jane Does 1-10. [4] The term "Plaintiffs" refers in this section to the Plaintiffs both in ERISA I and in ERISA II, see Note 2 supra, because the complaints in both actions are based on identical underlying allegations. [5] Parallel to this litigation, several related securities fraud cases filed in 2005 against Boston Scientific were consolidated into a single case before Judge Tauro on February 15, 2006. In re Boston Scientific Corp. Sec. Litig., Civil Action No. 05-11934. Lead Plaintiff Mississippi Public Employees' Retirement System, a pension fund, subsequently filed a Consolidated Amended Complaint, alleging false and misleading statements in connection with the same four events at issue in the present case: (1) the Department of Justice investigation into a 1998 product recall, (2) the civil lawsuit with Medinol Ltd., (3) the introduction and subsequent recall of the TAXUS stents, and (4) investigations and warnings by the U.S. Food and Drug Administration regarding Boston Scientific's manufacturing facilities. Judge Tauro dismissed the action on June 21, 2007. In re Boston Scientific Corp. Sec. Litig., 490 F.Supp.2d 142 (D.Mass.2007).
Lead Plaintiff filed an appeal limited to the TAXUS stent claims, and the First Circuit reversed dismissal with regard to those claims and remanded the case on April 16, 2008. Miss. Pub. Employees' Ret. Sys. v. Boston Scientific Corp., 523 F.3d 75 (1st Cir. 2008). The securities fraud litigation was transferred to me on remand. Thereafter, Lead Plaintiff filed a Second Consolidated Amended Complaint and a motion for class certification, which I granted on March 10, 2009. In re Boston Scientific Corp. Sec. Litig., 604 F.Supp.2d 275 (D.Mass.2009). I have this day, however, granted summary judgment for the Defendants as to the remaining TAXUS claims. In re Boston Scientific Corp. Sec. Litig., No. 05-11934-DPW (D.Mass. filed Apr. 27, 2010). [6] Plaintiffs' ERISA § 502(a)(3) claims were dismissed, however. In re Boston Scientific Corp. ERISA Litig., 506 F.Supp.2d 73, 76 (D.Mass.2007). [7] Fletcher and Lowe are entering into a separate settlement agreement with the Defendants to resolve their individual claims, including dismissal of the currently-pending appeal of ERISA I. Plaintiff Hazelrig and Defendants report in their briefing that the additional payment Defendants are making in connection with the separate Fletcher/Lowe settlement will not affect the amount being paid under the proposed class settlement. I have reviewed the Fletcher/Lowe settlement agreement and find that their settlement does not derogate from or otherwise adversely affect the proposed Class Settlement before me. I have directed the settling parties to file the Fletcher/Lowe Settlement Agreement, with the amount of financial consideration redacted, because my review of that Agreement as a whole—but not the amount of financial consideration—has played a role in my preliminary review of the Class Settlement. The redacted Fletcher/Lowe Settlement Agreement is docketed as No. 71. 
As to Hochstadt's separate appeal, it will presumably be mooted if the non-opt-out Class Settlement before me is ultimately approved. [8] Hochstadt has challenged the Preston Declaration as inadequate because "rather than providing the underlying factual information, the Declaration presents [only] Preston's ultimate conclusion that Hazelrig has a recognized claim." While I do not find Hazelrig's objection compelling at this stage, I will permit him to engage in limited discovery regarding the issue, should he seek to do so. [9] When initially opposing class certification, Defendants argued that Plaintiffs could not satisfy the adequacy requirement because inherent conflicts existed between the interests of the class representatives and the interests of other class members. Specifically, Defendants contended that class members had different and conflicting interests as to "(i) when, why, and the extent to which the price of Boston Scientific stock was artificially inflated, (ii) when and why it became an imprudent investment that should have been eliminated as a Plan investment option, and (iii) what alternative investment should have been offered in its place." In making this argument, Defendants relied on Langbecker v. Elec. Data Sys. Corp., 476 F.3d 299 (5th Cir. 2007), where a divided Fifth Circuit panel in a case similar to this vacated a district court's decision to certify a class action on the ground, among others, that "[s]ubstantial conflicts exist[ed] among the class members, raising questions about the adequacy of the lead Plaintiff's ability to represent the class." Id. at 315. In particular, the Fifth Circuit found that participants in the company were affected by changes in their company's stock price in "dramatically different ways" because some of the class members had made money while others had lost money and because there were significant variations among class members concerning their optimal breach dates and resulting maximum recovery. Id. 
The existence of similar detailed intraclass conflicts has not been raised or argued by Plaintiff Hochstadt. Rather, Hochstadt merely asserts "the presently `proposed' representatives are not [adequate] representative because they did not lose money[,] were not injured." See Note 8 supra and accompanying text. At this stage of the proceeding, I am satisfied that the Revised Plan of Allocation adequately addresses any problems of intraclass conflict. [10] Hochstadt suggests that his new counsel, who is apparently one of his relatives, be named interim class counsel because Plaintiff Hazelrig's counsel is purportedly inadequate. I find Hazelrig's counsel more than adequate to the task; consequently, I need not address the adequacy of Hochstadt's counsel. [11] While Plaintiff Hazelrig purports to seek class certification under Rule 23(b)(1)(A) and (B), I find that class certification is inappropriate under Rule 23(b)(1)(A) because Hazelrig only seeks monetary damages as opposed to equitable relief. See In re Tyco Int'l, Ltd., No. MD-02-1335-PB, 2006 WL 2349338, at *3 (D.N.H. Aug. 15, 2006) ("Certification under Rule 23(b)(1)(A) is . . . not appropriate in an action for damages.") (quoting Zinser v. Accufix Research Inst., Inc., 253 F.3d 1180, 1193 (9th Cir.2001)); Johnson v. Geico Cas. Co., 673 F.Supp.2d 255, 270 (D.Del.2009) ("Certification under Rule 23(b)(1)(A) is generally inappropriate where the primary relief sought is monetary damages."); but see Stanford v. Foamex L.P., 263 F.R.D. 156, 173 (E.D.Pa.2009) (certifying class under Rule 23(b)(1)(A) because "[t]he issue is not whether plaintiff seeks primarily monetary damages; rather, the focus of a Rule 23(b)(1)(A) analysis is on whether separate actions could lead to adjudications that establish `incompatible standards of conduct for the party opposing the class.'") (quoting FED.R.CIV.P. 23(b)(1)(A)).
Finding Rule 23(b)(1)(A) inappropriate to the proposed settlement, I will therefore focus my analysis on Rule 23(b)(1)(B). [12] When initially opposing class certification, Defendants argued that LaRue v. DeWolff, Boberg & Assocs., Inc., 552 U.S. 248, 128 S.Ct. 1020, 169 L.Ed.2d 847 (2008) precluded class certification under Rule 23(b)(1)(B) because Plan participants could now pursue individual account actions against Plan fiduciaries. Though this argument has some support, In re First Am. Corp. ERISA Litig., 258 F.R.D. 610, 622 (C.D.Cal.2009) (holding class certification inappropriate under Rule 23(b)(1)(B) because "LaRue cures any concern that the potential class members' claims would essentially be disposed of by this litigation"), I find it to be unpersuasive at this stage. See Stanford, 263 F.R.D. at 174 (discussing LaRue) (certifying class under Rule 23(b)(1)(B) because "[t]he availability of an individual account claim under § 502(a)(2) does not alleviate the concerns cited by the numerous courts that have certified ERISA class actions pursuant to Rule 23(b)(1)(B) in situations where claims on behalf of the Plan are identical to those on behalf of an individual account."). [13] Because Hochstadt is the only known objector at this stage, I address only his objections without prejudice to further consideration of these and other objections when confronting the question of final approval in connection with the Fairness Hearing. [14] As noted, the plan of allocation has been revised in response to my inquiries at the initial preliminary review hearing. These revisions do not, however, entirely obviate Hochstadt's objection. [15] The revised plan of allocation provides the following damage calculation formula: "For each Participant, the Administrator shall determine the Participant's `Recognized Claim' approximating the damages the Participant allegedly suffered." Am. Settlement Agreement Ex. D., § II.C. 
In doing so, the Administrator will take into consideration for each Participant the disclosure of discrete events. Id. Additionally, the following limit will be imposed on Recognized Claims: "For each Participant, the Administrator shall determine their overall market loss (the `Net Market Loss') for the entire Class Period, as follows:

Net Market Loss = A + B - C - D,

where for each Participant's account:

A = the dollar value of the Boston Scientific shares held in the Boston Scientific Stock Fund, if any, held for the Participant on the first day of the Class Period (valued at $39.50 per share);

B = the dollar value, if any, of all of the purchases of interests in the Boston Scientific Stock Fund during the Class Period as of the time of the purchase(s);

C = the dollar value, if any, of all dispositions of interests in the Boston Scientific Stock Fund during the Class Period as of the time of the sale(s); and

D = the dollar value of the Boston Scientific shares held in the Boston Scientific Stock Fund, if any, held for the Participant on the close of trading on the last day of the Class Period (valued at $21.87 per share)."

Id. § II.D. The entire Revised Plan of Allocation is attached as Exhibit A to this Memorandum and Order.
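As a purely illustrative aid (not part of the settlement papers), the Net Market Loss arithmetic can be sketched in a few lines of Python. The participant figures in the example are hypothetical; only the two per-share valuations ($39.50 and $21.87) come from the plan text.

```python
# Illustrative sketch of the Net Market Loss cap from the Revised Plan of
# Allocation: Net Market Loss = A + B - C - D. The participant figures below
# are hypothetical; only the two per-share valuations come from the plan text.

OPENING_PRICE = 39.50   # per-share value on the first day of the Class Period
CLOSING_PRICE = 21.87   # per-share value at the close of the last day

def net_market_loss(opening_shares, purchases, dispositions, closing_shares):
    """A + B - C - D for a single participant's account."""
    a = opening_shares * OPENING_PRICE   # A: opening holdings
    b = purchases                        # B: dollar value of purchases
    c = dispositions                     # C: dollar value of dispositions
    d = closing_shares * CLOSING_PRICE   # D: closing holdings
    return a + b - c - d

# Hypothetical participant: 100 shares at open, $1,000 purchased,
# $500 disposed of, 120 shares held at close.
loss = net_market_loss(100, 1000.0, 500.0, 120)
```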
Eagan Oil Change

Changing your oil is a necessity, not a luxury, and most automobile owners' manuals recommend an oil change every 3,500 miles. The demands your car puts on your engine's oil are very unforgiving, and eventually the oil loses its ability to lubricate, cool, clean and flow freely. In fact, not changing your oil regularly will substantially reduce your engine's life span, and oil is much cheaper than steel.
Why Supergirl And Jimmy Olsen Will Probably Never Get Together

Supergirl doesn’t hit the schedule until the end of the month, but we’ve already been hearing plenty about the series, including the flirtation between lead Kara Danvers, a.k.a. Supergirl, and James Olsen, who doesn’t really like the Jimmy moniker in this new TV reimagining. At New York Comic-Con, actor Mehcad Brooks explained that he’s there to protect Kara rather than to strike up a romance with the young magazine worker-turned-superheroine. Here’s what the actor had to say: There’s definitely an attraction between the two [James and Kara] but there’s kind of a bro code with Superman. If Superman asked you to go check in on his cousin, you can’t just go hitting on Superman’s cousin, because you start getting hit on by Superman – and that’s not a good thing. He will heat vision you and freeze breath you to death. When Supergirl kicks off, Kara Danvers is already aware of the powers she and her powerful cousin, Superman, have as a result of coming to Earth from another planet. Still, in a lot of ways, she’s an awkward young professional, willing to jump for more work given to her by powerful individuals. She rocks the dorky glasses and has fights with her sister just like a normal person, and she’s interested in romance just like most people living and working in the city in their twenties. Unfortunately, it looks like a potentially blossoming romance between Kara and her coworker Olsen--who will have just started to work for Cat Grant’s media empire CatCo at the start of the series--will remain only a flirtation for the foreseeable future. According to the interview with Screenrant, Brooks says Olsen is meant to be a much more mature character than Kara on the hit series. He was Superman’s peer before he even met her, and he knows what the hero life can be like. So, he will help to take care of the heroine as she faces new challenges during Season 1. He’s very protective of her.
He’s very protective but he also allows her the room to grow. Being Supergirl is just the metaphor for all of us being our higher selves. If Supergirl continues as a series, down the line the Jimmy and Kara flirtation could turn into something more obviously romantic, but that really, really doesn't seem to be the direction the CBS drama wants to go in. Unfortunately for those who were hoping for a little more romantic flair to the relationship, it looks like you are out of luck. But there are plenty of fish in the sea, and I'm sure Supergirl will introduce us to some of them. We can officially meet Kara and Jimmy on the show rather soon. Supergirl is expected to hit the schedule on Monday, October 26 at 8:30 p.m. ET after The Big Bang Theory and in subsequent weeks will air at 8 p.m. ET. For more premiere dates, check out our fall TV premiere schedule. And check out what we know about Supergirl, here.
India Kumbha Mela in Allahabad

Maha Kumbha Mela is a mass gathering of Hindu devotees. The holiest pilgrimage in the world happens every 12 years (Maha Kumbha) and every 6 years (Ardh Kumbha). The devotees gather for only one purpose: the holy bath in the divine river waters. The festival is celebrated at four places in India: Ujjain (on the banks of the River Kshipra), Nashik (on the banks of the River Godavari), Haridwar (on the banks of the River Ganga), and Allahabad (Prayag, the confluence of the Ganga, Yamuna, and Saraswathi).
Q: How to group text files according to the parameters of the first line in C#

I managed to merge several text files from the same directory into a single final text file, grouping equal codes and summing their respective quantities, using the following code (credits to my friend Vitor Mendes):

Dictionary<string, int> valores = new Dictionary<string, int>();
string diretorio = @"C:\teste";
string[] listaDeArquivos = Directory.GetFiles(diretorio);

if (listaDeArquivos.Length > 0)
{
    string caminhoArquivoDestino = @"C:\teste\saida.txt";
    FileStream arquivoDestino = File.Open(caminhoArquivoDestino, FileMode.OpenOrCreate);
    arquivoDestino.Close();
    List<string> linhasDestino = new List<string>();

    foreach (string caminhoArquivo in listaDeArquivos)
    {
        foreach (var linhaArquivoAtual in File.ReadAllLines(caminhoArquivo))
        {
            string id = linhaArquivoAtual.Substring(0, linhaArquivoAtual.Length - 3);
            string quantidade = linhaArquivoAtual.Substring(linhaArquivoAtual.Length - 3, 3);

            if (valores.ContainsKey(id))
                valores[id] = valores[id] + Convert.ToInt32(quantidade);
            else
                valores.Add(id, Convert.ToInt32(quantidade));
        }
    }

    File.WriteAllLines(caminhoArquivoDestino, valores.Select(x => x.Key + x.Value.ToString("000")).ToArray());
}

The first line of each text file contains 2 identification parameters separated by a semicolon. For example:

Contents of Arq1.txt
000032;30032014
123456010
654321020

Contents of Arq2.txt
000032;30032014
123456005
654321005

Contents of Arq3.txt
000033;23052014
123456050
654321020

Contents of Arq4.txt
000033;23052014
123456020
654321005

Contents of Arq5.txt
000033;20052014
123456001
654321002

Contents of Arq6.txt
000033;20052014
123456009
654321008

When grouping these files, the program should generate different final files according to the parameters of the first line.
For these example files, the final result should be the following files:

ArqFinal00003320052014.txt
123456010
654321010

ArqFinal00003323052014.txt
123456070
654321025

ArqFinal00003230032014.txt
123456015
654321025

In other words, the program should group the files according to the first line, creating different final files.

A: As in the example in this answer, a dictionary is the solution for grouping items. In this case you now have two levels of grouping, so nested dictionaries are needed: an outer dictionary keyed by the output file name, whose values are inner dictionaries keyed by code, with the summed quantities as values. The code is far from optimal, but it is tested and does what you want. The comments were added for didactic purposes and do not reflect the way I normally comment code.

using System;
using System.IO;
using System.Collections.Generic;
using System.Linq;

public class MergeFiles
{
    public static void Main(string[] args)
    {
        // Structure with unique string keys and, for each, a dictionary of code -> quantity
        var itens = new Dictionary<string, Dictionary<string, int>>();
        int resultado;

        // Take every file with a .txt extension available in the directory
        foreach (var arquivo in Directory.GetFiles(@"C:\teste", "*.txt"))
        {
            var chaveArquivo = "";

            // Read every line of each file individually
            foreach (var linha in File.ReadAllLines(arquivo))
            {
                if (linha.Substring(6, 1) == ";") // Is this the first (header) line?
                {
                    // Take the first 6 characters plus the following 8, skipping the ";"
                    chaveArquivo = linha.Substring(0, 6) + linha.Substring(7, 8);

                    if (!itens.ContainsKey(chaveArquivo))
                    {
                        // Add a key not yet present in the dictionary
                        itens.Add(chaveArquivo, new Dictionary<string, int>());
                    }
                }
                else
                {
                    var chave = linha.Substring(0, 6); // First 6 characters: the code
                    // Next 3 characters converted to a number (0 if the parse fails)
                    var valor = (int.TryParse(linha.Substring(6, 3), out resultado) ? resultado : 0);

                    if (itens[chaveArquivo].ContainsKey(chave))
                    {
                        // Add this line's value to the already-existing key in the dictionary
                        itens[chaveArquivo][chave] = itens[chaveArquivo][chave] + valor;
                    }
                    else
                    {
                        // Add a key not yet present in the dictionary
                        itens[chaveArquivo].Add(chave, valor);
                    }
                }
            }
        }

        // Create the grouped files, writing every entry of the dictionary back in the original line format via LINQ
        foreach (var arquivo in itens)
        {
            File.WriteAllLines("ArqFinal" + arquivo.Key + ".txt",
                arquivo.Value.Select(item => item.Key + item.Value.ToString("000")).ToArray());
        }
    }
}

I put it on GitHub for future reference.
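As a cross-check of the grouping logic (not part of the original answer), the same two-level grouping can be sketched in Python, applied in memory to the sample contents from the question; file I/O is omitted, and the dictionary literal below stands in for reading the six files.

```python
# Sketch of the nested-dictionary grouping, applied in memory to the sample
# file contents from the question (file I/O omitted for brevity).

files = {
    "Arq1.txt": ["000032;30032014", "123456010", "654321020"],
    "Arq2.txt": ["000032;30032014", "123456005", "654321005"],
    "Arq3.txt": ["000033;23052014", "123456050", "654321020"],
    "Arq4.txt": ["000033;23052014", "123456020", "654321005"],
    "Arq5.txt": ["000033;20052014", "123456001", "654321002"],
    "Arq6.txt": ["000033;20052014", "123456009", "654321008"],
}

groups = {}  # header key -> {code -> summed quantity}
for lines in files.values():
    header = lines[0].replace(";", "")        # e.g. "000032" + "30032014"
    per_code = groups.setdefault(header, {})
    for line in lines[1:]:
        code, qty = line[:6], int(line[6:])   # 6-char code + 3-digit quantity
        per_code[code] = per_code.get(code, 0) + qty

# Rebuild the output lines, zero-padding quantities back to 3 digits.
outputs = {
    "ArqFinal" + key + ".txt": [f"{code}{qty:03d}" for code, qty in codes.items()]
    for key, codes in groups.items()
}
```

Running this reproduces exactly the three expected output files listed in the question.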
Quick Tips – local honey The use of local honey has been recommended as a non-medicinal way of treating allergy. As it turns out, it can either help or hurt. In order for “local honey” to actually help, it has to be taken in incrementally increasing doses, much the same way an allergy shot is built up. The benefit is extremely modest. Remember, bees carry entomophilous pollen, whereas anemophilous (airborne) pollen accounts for most allergies. Just using regular amounts of honey on cereal or in tea can worsen allergy due to the random exposure to the pollen it contains.
Ecotoxicology of polychlorinated biphenyls in fish--a critical review. Polychlorinated biphenyls (PCBs) are widespread persistent anthropogenic contaminants that can accumulate in tissues of fish. The toxicity of PCBs and their transformation products has been investigated for nearly 50 years, but there is a lack of consensus regarding the effects of these environmental contaminants on wild fish populations. The objective of this review is to critically examine these investigations and evaluate publicly available databases for evidence of effects of PCBs in wild fish. Biological activity of PCBs is limited to a small proportion of PCB congeners [e.g., dioxin-like PCBs (DL-PCBs)] and occurs at concentrations that are typically orders of magnitude higher than PCB levels detected in wild fish. Induction of biomarkers consistent with PCB exposure (e.g., induction of cytochrome P450 monooxygenase system) has been evaluated frequently and shown to be induced in fish from some environments, but there does not appear to be consistent reports of damage (i.e., biomarkers of effect) to biomolecules (i.e., oxidative injury) in these fish. Numerous investigations of endocrine system dysfunction or effects on other organ systems have been conducted in wild fish, but collectively there is no consistent evidence of PCB effects on these systems in wild fish. Early life stage toxicity of DL-PCBs does not appear to occur at concentrations reported in wild fish embryos, and results do not support an association between PCBs and decreased survival of early life stages of wild fish. Overall, there appears to be little evidence that PCBs have had any widespread effect on the health or survival of wild fish.
What Is A Vape Pen? A vape pen (also known as a vaporizer pen, vapor pen, or pen vaporizer) is a compact, pen-shaped vaporizer used for vaping on-the-go. The vape pen is one of the most popular types of vaporizers, coming in a variety of types from leading vape manufacturers like KandyPens, Grenco Science, and Yocan. Pen vapes are generally conduction vaporizers, meaning they heat material directly against the heating element. This means a vape pen may not have the kind of vapor purity that a portable vaporizer or desktop vape has, but it is certainly more discreet, easy-to-use, and affordable in most cases. Vape pens generally consist of a mouthpiece, atomizer, and vaporizer battery, however each of the elements of a vape pen can vary from unit to unit. The length and width of a vapor pen mouthpiece decides the resistance level of each hit. Vape pen atomizers feature different materials and designs that affect their vapor production. And vaporizer pen batteries can boast precision or preset temperature, a variety of power capacities, and single or multiple button controls. Assembling a vape pen is easy. Vapor pens generally have standard 510-threaded connections. This makes it simple to replace broken pieces or clean the vape pen. What Do Vape Pens Offer? Vape pens offer a discreet vaping experience. Ever feel uncomfortable vaping in public? Vaporizer pens ease that pain point by allowing you to vape without drawing attention to your habit. The vape pen is shaped like an office pen, slender and sleek. In fact, vape pens look more like e-cigarettes than portable vapes, giving them an incognito quality. Unlike portable vapes, which aren’t pen-shaped, the vapor pen won’t burden your pocket, and can be concealed almost completely in your hand. Pen vapes also enable you to vape wax concentrates, dry herb, and eliquid on-the-fly. The vapor pen accommodates busy commuter lifestyles and 9-to-5 schedules, as well as casual situations. 
You can use a vape pen on your lunch break, or simply step outside for a hit. You can also easily pass a vaporizer pen around at a party, or pick it up for a quick toke at home. This is the main difference between vape pens and desktop vapes, which are not portable. Lastly, vape pens can pack powerful vapor production that defies their small size. Vape pen atomizers with double or triple heating rods create impressively sized clouds of vapor. Vape pen batteries can harness incredible power, and yet unleashing that energy is simple. Just a few clicks, and your vape pen is ready. Vape pens are slim and stealthy on-the-go devices. To see the best vape pen options, check out our highest rated pen vapes! How To Use A Vape Pen Vape pens are very simple to operate, generally featuring an easy-to-use single button control. A certain number of clicks will turn the vaporizer pen on or off, while a different number of clicks will toggle through vaping temperatures. Loading a vape pen usually involves filling the heating chamber (or the atomizer) with dry herb or wax. E-juice vaporizer pens generally utilize e-liquid cartridges. Every model of pen vaporizer can vary in design, operation, and features. While portable vaporizers and mod vapes have precision temperature control, most vape pens feature preset vaping temperatures. Vape pens can be optimized for the most popular vaping temperatures, depending on the material the vape pen is designed to vaporize. Most vaporizer pens use lithium-ion batteries, which harness much power in a compact unit. Charging a vape pen is simple as well. Many vaporizer pens feature micro-USB charging so you can power up from almost anywhere, like a smartphone. Vape Pen Types Dry Herb Vape A dry herb vape pen vaporizes cannabis. Herb vape pens like the Atmos Jump vape dry herb with rapid heat-up times and offer a sleek, urban design and quality herb vaporization at an affordable price for a vape pen.
Dry herb vape pens are easy to conceal in your hands and pocket, making for a stealthier session than a pipe or cone. Concentrate Pen A concentrate pen vape enables portable vaping of waxy oils. Concentrate vape pens offer a discreet way to dab on-the-fly. Most concentrate pens are pen-shaped, but devices like the HoneyStick Elf utilize powerful mod batteries enabling intensely flavored, large-sized vapor clouds. Vapor pens for concentrates range from affordable to high-priced depending on the model. Popular concentrate pens include the KandyPens Elite, considered one of the best vape pens, and the G Pen Nova, both of which offer quality concentrate vaping with sleek designs. A concentrate vape pen puts the power of a concentrate rig in your pocket, often at less than half the price. E-Liquid Vape Pen E-liquid vape pens vaporize a liquid solution containing concentrates from dry herb. Eliquid can contain nicotine, dry herb/concentrates, or neither--and at varying potency. This is why e-liquid vaporizer pens are helpful in quitting smoking. Smokers may use an ejuice vape pen to wean themselves off nicotine addiction by gradually reducing the nicotine potency of their eliquid cartridges. How Vape Pens Work Vape pens can feature a variety of atomizers that vaporize material using a heat element, which is usually a rod with coils wrapped around it. The atomizer acts as the vape pen heating chamber, which features a single rod, double rod, triple rod, or a bucket-like coilless design. Vaporizer pen atomizers can heat up rapidly, which is ideal for vaping on-the-go. To load a pen vaporizer, material is dabbed into the chamber, where it sits directly on the coils. In other words, most vapor pens are conduction vaporizers, rather than convection units. While convection vaporizers heat materials with hot air that’s blown through the heating chamber, conduction vape pens directly heat the material on the surface of the heating element.
Users who value vapor purity over all else should find a portable vaporizer with convection heating, or a desktop vaporizer. Vaporizer pens are better suited for decent hits on-the-fly. That said, vape pen vapor is much purer than the smoke produced by water pipes, joints, and pipes. When the vape pen is loaded, the user clicks the single button interface, which heats the coil inside the atomizer. Some vape pens have one temperature, others have multiple presets. Most vapor pens have click controls. For example, with many vape pens, five clicks powers the vaporizer pen on and three clicks toggles temperature. Holding the button down heats the vape pen. At that point, the vaporizer pen coil heats materials at a temperature beneath the point of combustion, creating smoother, smokeless hits. Vape pen heat-up time is usually under a minute.
Q: RaisePropertyChanged broadcast using incorrect Messenger

In ViewModelLocator:

SimpleIoc.Default.Register<IMessenger, Messenger>();

In a property on a view model:

RaisePropertyChanged<string>("CurrentDrug", oldValue, value, true);

In View model #2:

this.messengerService.Register<PropertyChangedMessage<string>>(
    this, this.HandleDrugChangedMessage);

View model #2 never receives the broadcast message. However, if I change view model #2 to:

Messenger.Default.Register<PropertyChangedMessage<string>>(
    this, this.HandleDrugChangedMessage);

then all is fine. What do I do so that the messengerService passed into the view models is the one that broadcasts the message, not Messenger.Default? To show how the messenger service is passed:

public class ViewModelNumberOne : ViewModelBase
{
    // dependency
    private IMessenger messengerService;

    // constructor
    public ViewModelNumberOne(IMessenger messengerService)
    {
        this.messengerService = messengerService;
    }
}

public class ViewModelNumberTwo : ViewModelBase
{
    // dependency
    private IMessenger messengerService;

    // constructor
    public ViewModelNumberTwo(IMessenger messengerService)
    {
        this.messengerService = messengerService;
    }
}

A: Currently, when you register the IMessenger interface, you are registering the concrete type itself, not a specific instance of that concrete type. Use:

SimpleIoc.Default.Register<IMessenger>(() => Messenger.Default);

With the above syntax, when SimpleIoc resolves IMessenger, it will do so with the Messenger.Default instance, preserving it as a singleton. This will allow you to use your injected instance.
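The root cause is instance identity rather than anything specific to MVVM Light's API. A minimal, library-agnostic sketch (the `Messenger` class below is a hypothetical stand-in, not the MVVM Light type) shows why a freshly constructed messenger never sees broadcasts sent through the shared default:

```python
# Minimal illustration of why registering a *type* rather than a shared
# *instance* breaks pub/sub: two messenger objects do not share subscribers.
# The Messenger class here is a hypothetical stand-in, not the MVVM Light API.

class Messenger:
    default = None  # the shared "Messenger.Default" instance

    def __init__(self):
        self.handlers = []

    def register(self, handler):
        self.handlers.append(handler)

    def send(self, message):
        for handler in self.handlers:
            handler(message)

Messenger.default = Messenger()

received = []

# Wrong: the container constructs a fresh Messenger per registration...
fresh = Messenger()
fresh.register(received.append)
Messenger.default.send("CurrentDrug changed")   # broadcast goes to Default
# ...so `received` is still empty: the subscriber registered elsewhere.

# Right: resolve the messenger to the shared default instance instead.
Messenger.default.register(received.append)
Messenger.default.send("CurrentDrug changed")   # now the handler fires
```

This mirrors the fix in the answer: the factory lambda makes the container hand out the same instance that RaisePropertyChanged broadcasts through.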
Introduction {#S1} ============ Low oxygen (O~2~) levels (hypoxia) characterize the microenvironment of many solid tumors, occurring as a consequence of structurally disorganized blood vessels and tumor growth that exceeds the rate of vascularization. Hypoxia is common within breast cancers, which have a median O~2~ concentration of 1.4%, as compared to \~9.3% for normal breast tissue ([@B1]). In response to hypoxia, cells express genes that are essential for their survival. In tumor cells, this O~2~-regulated gene expression leads to more aggressive phenotypes, including those that increase the ability of cells to resist therapy, recruit a vasculature and metastasize ([@B2]--[@B4]). Accordingly, there is a growing body of evidence correlating tumor hypoxia with poor clinical outcome for patients with a variety of cancers ([@B5]--[@B7]). O~2~ availability has also been shown to regulate immune editing, allowing cancer cells to evade the immune system *via* a variety of mechanisms ([@B8]). For example, hypoxia upregulates hypoxia inducible factor 1-alpha (HIF1α)-dependent ADAM10 expression resulting in MHC class I polypeptide-related sequence A (MICA) shedding from the surface and decreased lysis of tumor cells ([@B9]). While many studies have focused on myeloid-derived suppressor cells or conventional CD8+ T cells ([@B8]), so far none have considered the impact of tumor hypoxia on gamma delta T cells (γδTcs). While γδTc kill cancer cell lines, derived from both hematological and solid tumors alike \[reviewed in Ref. ([@B10])\], it is unclear whether they are still active cancer killers when confronted with the harsh and immunosuppressive tumor microenvironment (TME) ([@B10]--[@B13]). We have focused on breast cancer, since there have been conflicting reports in the literature with respect to γδTc function in this disease. 
While *in vitro* studies clearly show that γδTc are able to kill breast cancer cell lines MDA-MB231, MCF-7, and T47D ([@B14]--[@B16]), it is unclear as to whether γδTc retain their cytotoxic properties once exposed to the breast tumor TME ([@B11]). Here, we set out to determine how γδTc behave under low O~2~, a TME factor likely encountered by γδTc in many malignancies. Carbonic anhydrase IX (CAIX) is a transmembrane protein that catalyzes the reversible hydration of carbon dioxide. It is expressed in response to hypoxia and is thus used as a surrogate marker for hypoxia ([@B17]). High CAIX expression indicates poor prognosis in many cancers, including breast cancer ([@B18]--[@B20]). Breast cancer cell lines express MICA, a ligand for the natural killer group 2, member D (NKG2D) receptor expressed by γδTc and implicated in γδTc cytotoxicity ([@B21]--[@B25]). Thus, we have further explored the integral role for NKG2D/MICA in γδTc cytotoxicity against breast cancer cell lines under hypoxia and normoxia. Since γδTc are being developed for cancer immunotherapy ([@B26]--[@B31]), and have shown both safety and even some efficacy---despite advanced disease stage---in a Phase I trial for breast cancer ([@B32]), it is imperative that we learn how the TME impacts the function of γδTc infiltrating breast and other solid tumors. Materials and Methods {#S2} ===================== Ethics Statement {#S2-1} ---------------- This study was carried out in accordance with the recommendations of the Research Ethics Guidelines, Health Research Ethics Board of Alberta---Cancer Committee with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Health Research Ethics Board of Alberta---Cancer Committee. 
Patients and Tissues {#S2-2} -------------------- We assessed 17 surgically resected breast tumors from cancer patients diagnosed at the Cross Cancer Institute, Edmonton, AB, Canada from 1997 to 1998. Patient and tumor characteristics are listed in Table [1](#T1){ref-type="table"}.

###### Characteristics of breast cancer cohort.

  *n* = 17                         *n* (%)   Median (range)
  -------------------------------- --------- ----------------
  Age at diagnosis                            51 (40--69)
  Histology
   Invasive ductal carcinoma       14 (82)
   Invasive non-ductal tubular     1 (6)
   Invasive non-ductal mucinous    1 (6)
   Non-invasive                    1 (6)
  Tumor size (cm)                             1.4 (0.2--5.5)
   \<2                             11 (65)
   2--5                            4 (24)
   \>5                             1 (6)
   Not specified                   1 (6)
  Tumor grade
   1/3                             4 (24)
   2/3                             5 (29)
   3/3                             8 (47)
  Nodal status
   Positive                        9 (53)
   Negative                        8 (47)
  Estrogen receptor
   Positive                        12 (71)
   Negative                        3 (18)
   Not available                   2 (12)
  Progesterone receptor
   Positive                        10 (59)
   Negative                        5 (29)
   Not available                   2 (12)
  -------------------------------- --------- ----------------

Immunohistochemistry {#S2-3} -------------------- Anti-human T cell antigen receptor (TCR)δ staining was performed as reported ([@B33]). Briefly, 4.5 µm serial sections from formalin-fixed paraffin-embedded tumors were melted on a slide warmer at 60°C for a minimum of 10 min followed by de-paraffinization using a fresh Xylenes (Thermo Fisher Scientific, Burlington, ON, Canada) bath. Sections were then hydrated with a series of graded ethanol (100, 95, 70, and 60%) followed by brief incubation in water, then tris-buffered saline plus 0.05% Tween-20 (TBST). Antigen retrieval was performed at 100°C for 20 min in target retrieval solution pH 9 (DAKO North America, Carpinteria, CA, USA). After cooling to room temperature, tissues were circled with an ImmEdge pen (Vector Laboratories, Burlingame, CA, USA) and blocked with Peroxidase Block (DAKO) for 5 min. Slides were washed in TBST for 5 min then blocked with Protein Block Serum Free (DAKO) for 10 min.
Protein block was gently removed and replaced with 1:150 dilution of mouse monoclonal anti-human TCRδ antibody (clone H-41, Santa Cruz Biotechnology, Dallas, TX, USA) or 1:50 dilution of rabbit monoclonal anti-human CAIX \[clone EPR4151(2), abcam, Cambridge, MA, USA\] or corresponding isotype control diluted to the same antibody concentration; all dilutions were made in antibody diluent (DAKO). Known positive controls and isotype controls were included with each batch to ensure quality control of staining. Sections were incubated in a humidified chamber for 30 min at 25°C. Slides were then rinsed and washed five times in TBST for 5 min. Slides were then incubated with 100 µl secondary antibody, labeled polymer---horseradish peroxidase (HRP) anti-mouse or---HRP anti-rabbit (DAKO), for 60 min at room temperature in the humidified chamber. Washing was done as before, and then slides were treated with 75--100 µl 3,3′-diaminobenzidine chromogen solution (DAKO) for 8--10 min before the reaction was stopped by rinsing in water. Hematoxylin (DAKO) counterstaining was performed, slides were rinsed in water and then dehydrated using a series of graded ethanol (60, 70, 95, and 100%). Slides were then cleared with Xylenes, dried and coverslips mounted with VectaMount permanent mounting medium (Vector Laboratories). Assessment of CAIX Expression and γδTc Infiltration {#S2-4} --------------------------------------------------- Light microscopy and semi-quantitative scoring for CAIX was performed by a single pathologist; scores were 0, no staining; 1, weak and/or very focal staining; 2+, strong but focal staining; and 3, strong and extensive staining. Serial sections stained for TCRγδ and CAIX were scanned. Areas of CAIX-positivity and negativity were defined, and images from slides superimposed to enable counting of γδTc in CAIX-positive and -negative areas. Five consecutive areas within each region were quantified for the frequency of γδTc infiltration. 
Primary γδTc {#S2-5} ------------ We established and maintained primary human γδTc cultures as described ([@B34]). Briefly, healthy donor blood was diluted with phosphate buffered saline (PBS) and peripheral blood mononuclear cells (PBMCs) isolated using density gradient separation (Lymphoprep, Stem Cell Technologies, Vancouver, BC, Canada). PBMCs were cultured in a humidified atmosphere at 37°C with 5% CO~2~ at 1 × 10^6^ cells/ml in RPMI complete medium containing 1 µg/ml Concanavalin A (Sigma-Aldrich, Oakville, ON, Canada), 10% fetal bovine serum (FBS), 1× MEM NEAA, 10 mM HEPES, 1 mM Sodium Pyruvate (all Invitrogen, Burlington, ON, Canada), and 10 ng/ml recombinant human interleukin (IL)-2 and IL-4 (Miltenyi Biotec, Auburn, CA, USA). Cells were counted and viability assessed *via* Trypan Blue Exclusion Assay (Invitrogen/Thermo Fisher Scientific, Waltham, MA, USA); fresh medium and cytokines added to adjust density to 1 × 10^6^ cells/ml every 3--4 days. After 1 week, αβ T cells were labeled with anti-TCRαβ PE antibodies (BioLegend, San Diego, CA, USA) and anti-PE microbeads (Miltenyi Biotec), and depleted after filtering (50 µm Cell Trics filter, Partec, Görlitz, Germany) and passing over an LD depletion column (Miltenyi Biotec). γδTcs, which did not bind to the column, were further cultured in complete medium plus cytokines (as above). For cytotoxicity and blocking experiments, γδTc cultures were used on days 19--21, as they were most cytotoxic then. Some hypoxia experiments were done at earlier time points. Donor cultures are identified as follows: donor number culture letter-culture day; thus, 7B-13 = the second culture derived from donor 7 on day 13. Culture purities and subset compositions are shown in Table S1 in Supplementary Material. 
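The routine readjustment of cultures to 1 × 10^6^ cells/ml every 3--4 days amounts to a simple volume calculation; a minimal sketch, where the function name and example numbers are illustrative assumptions, not from the paper:

```python
# Sketch of the culture feeding step: every 3-4 days, fresh medium (plus
# cytokines) is added to return the culture to 1 x 10^6 cells/ml.
# The function name and example numbers are illustrative assumptions.

def medium_to_add(viable_cells, current_volume_ml, target_density=1e6):
    """Volume (ml) of fresh medium needed to reach the target density
    (cells/ml); 0 if the culture is already at or below target density."""
    return max(0.0, viable_cells / target_density - current_volume_ml)

# e.g., 30 million viable cells counted in 20 ml -> bring up to 30 ml total
add_ml = medium_to_add(30e6, 20.0)  # -> 10.0
```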
Breast Cancer Cell Lines {#S2-6}
------------------------

Human breast carcinoma cell lines, MCF-7 and T47D, were obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA) and maintained as per ATCC guidelines. For surface marker staining of breast cancer cell lines, cells were harvested by washing with PBS followed by dissociation in Accutase (Sigma-Aldrich) for 20 min at 37°C.

Hypoxia Experiments {#S2-7}
-------------------

To examine the effects of hypoxia, cells were cultured in O~2~ concentrations as indicated for 40--48 h using an X3 Xvivo Closed Incubation System (BioSpherix). After incubation under normoxic or hypoxic conditions, cell culture supernatants were collected, chilled on ice, and then frozen at −80°C until further analysis; harvested cells were used in cytotoxicity assays or stained for flow cytometric analysis. In some cases, cells were cold harvested, pellets frozen on dry ice, and stored at −80°C until lysis for Western blotting.

Flow Cytometry {#S2-8}
--------------

### Antibodies {#S2-8-1}

For surface marker staining of γδTc, the following anti-human antibodies from BioLegend, unless otherwise indicated, were employed: TCRγδ PE (clone B1, 1:25); TCR Vδ1 FITC (Miltenyi, clone REA173, 1:10); TCR Vδ2 PerCP (clone B6, 1:25); NKG2D APC (BD Biosciences, Mississauga, ON, Canada, 1:25); CD56 FITC (clone MEM-188, 1:5); CD69 AF700 (clone FN50, 1:4); CD94 FITC (clone DX22, 1:5); CD95 APC (clone DX2, 1:100); HLA ABC PE (clone W6/32, 1:10); FasL PE (clone NOK-1, 1:5); and CD40L APC (clone 24--31, 1:5). Anti-human MICA/B PE (BioLegend, clone 6D4, 0.1 µg) was used to stain breast cancer cell lines.

### Surface Marker Staining {#S2-8-2}

Gamma delta T cells and breast cancer cell lines were adjusted to 10 × 10^6^ cells/ml and stained with 1 μl/10^6^ cells Zombie Aqua fixable viability dye in PBS (ZA, BioLegend) for 15--30 min at room temperature in the dark.
γδTc were stained directly with fluorochrome-conjugated antibodies diluted in FACS buffer \[PBS containing 1% FBS and 2 mM EDTA (Invitrogen)\] as indicated above. Breast cancer cell lines at 10 × 10^6^ cells/ml were blocked in FACS buffer containing 50 µl/ml Trustain FcX (BioLegend) and incubated on ice for 30 min prior to antibody incubation. After blocking, cells were centrifuged and supernatants removed, leaving 10 µl FACS buffer plus block/10^6^ cells. Antibodies and FACS buffer were added to a final volume of 20 µl, and cells were incubated on ice for 15--20 min, followed by washing. All cells were fixed in FACS buffer containing 2% paraformaldehyde (Sigma-Aldrich), stored at 4°C, and acquired within 1 week.

### Flow Cytometer Specifications {#S2-8-3}

Cells were analyzed using a FACS CANTO II (Becton Dickinson, Mississauga, ON, Canada) equipped with an air-cooled 405-nm solid state diode, 30 mW fiber power output violet laser, with 450/50 and 510/50 band pass (BP) \[502 long pass (LP) detector\] filters; a 488-nm solid state, 20-mW blue laser with 530/30 BP (502 LP), 585/42 BP (556 LP), 670 LP (655 LP), and 780/60 BP (735 LP) filters; and a 633-nm HeNe, 17-mW red laser with 660/20 BP and 780/60 BP (735 LP) filters. Calibration was performed with CS&T beads (Becton Dickinson, Mississauga, ON, Canada). Live singlets were gated based on forward and side-scatter properties. Fluorescence minus one (FMO) controls were used to set gates. Analysis was performed using FlowJo^©^ software (Tree Star, Ashland, OR, USA, Version 10.0.8r1).

### Cytokine Arrays {#S2-8-4}

The Proteome Profiler Human Cytokine Array Kit, Panel A (R&D Systems, Minneapolis, MN, USA) was used to detect proteins secreted by γδTc cultured under normoxic or hypoxic conditions. Undiluted culture supernatants were used in these assays, which were carried out according to the manufacturer's instructions.
Analysis of the resulting films was done as follows: pixel intensities were measured using FIJI software (ImageJ Version 2.0.0-rc-15/1.49m) with a consistent circular region of interest; measured values from duplicate spots were subtracted from 255. The average intensity from the two negative spots was subtracted from all values to obtain net values. The intensities of the six reference spots (positive controls) were averaged and a multiplier was defined for each array (normalized to the array with the lowest pixel intensity). Values were adjusted accordingly and then values for the duplicates were averaged. Finally, ratios were calculated for each cytokine, normalized to normoxia.

### ELISAs {#S2-8-5}

Aliquots (1--2 ml) of culture supernatants stored at −80°C were thawed on ice. Halt™ Protease and Phosphatase Inhibitor Cocktail (PIC, Thermo Fisher Scientific) was added to samples prior to use in ELISAs or further storage at 4°C. The following ELISA kits were used: the ELISA MAX Deluxe kit for regulated on activation, normal T cell expressed and secreted (RANTES/CCL5) (BioLegend); the Human macrophage inflammatory protein 1α (MIP1α) and Human CD40L Quantikine ELISA kits (R&D Systems); and the Human MICA ELISA Kit (abcam). For RANTES and CD40L ELISAs, culture supernatant samples were diluted up to 16-fold to obtain readings within range (1:2, 1:4, 1:8, 1:16). For MIP1α ELISAs, samples were diluted up to 1:20. For MICA ELISAs, culture supernatants stored at −80°C were thawed overnight at 4°C, then 4 ml were applied to Amicon Ultra-4 10 K spin columns (Merck-Millipore, Carrigtwohill, Ireland), which were subsequently centrifuged at 3,000 *g* for 2 h at 12°C. Concentrated medium was then transferred into 1.5 ml Eppendorf tubes and diluted to 200 µl, and 20 µl of a 1:10 dilution of PIC was added. For the ELISA, 100 µl per well was assayed in duplicate. All ELISAs were done according to the manufacturer's instructions.
Absorbance at 450 and 550 nm was measured using a FLUOstar Omega plate reader (BMG Labtech, Offenburg, Germany) with Omega Software version 5.11. The 450-nm readings were corrected by subtracting the corresponding 550-nm reference readings, and a linear regression fit of the standard curve was used for concentration calculations. ELISA data were normalized to γδTc cell numbers and culture volumes.

### Immunoblotting {#S2-8-6}

Cell lysates were prepared by mixing γδTc with M-PER Mammalian Protein Extraction Reagent (Thermo Fisher Scientific) containing PIC at 10 µl lysis buffer per million γδTc, followed by incubation at room temperature for 10 min. Lysates were then centrifuged at 13,000 rpm for 15 min at 4°C, after which supernatants were transferred to fresh tubes and 5× reducing sample buffer \[0.0625 M Tris/HCl pH 6.8, 2% SDS, 20% glycerol, 0.05% β-mercaptoethanol, 0.025% (w/v) Bromophenol Blue\] was added. Samples were boiled for 5 min, cooled, and briefly centrifuged in a benchtop centrifuge prior to running on 10 or 12% SDS-PAGE gels. Proteins were transferred onto Immobilon-FL PVDF membranes (Millipore) using the Trans-Blot Turbo Transfer System (Bio-Rad, Mississauga, ON, Canada). The high molecular weight (MW) program was used when transferring proteins for HIF1α detection; otherwise, the mixed MW program was used. Membranes were blocked for 40 min in 3% milk in TBST, followed by overnight incubation in primary antibody baths at 4°C. After washing, membranes were incubated with the corresponding species-specific HRP-labeled secondary antibody for 1 h, followed by further washing and then detection using Clarity™ Western ECL Substrate (Bio-Rad).
Primary antibodies were diluted in PBS containing 2% bovine serum albumin and 0.05% sodium azide at the following dilutions: 1:500 mouse anti-human HIF-1α (clone MOP1, BD Biosciences); 1:2,000 goat anti-human CCL3/MIP1α (R&D Systems); 1:1,000 mouse anti-human/primate CCL5/RANTES (Clone \#21418, R&D Systems); 1:500 mouse anti-human CD40 ligand/TNFSF5 (Clone \#40804, R&D Systems); 1:2,000 rabbit anti-human β-Actin (Cell Signaling Technologies, Danvers, MA, USA). Secondary antibodies were diluted in blocking buffer as follows: 1:10,000 goat anti-mouse IgG HRP (Bio-Rad); 1:20,000 goat anti-rabbit IgG HRP (Bio-Rad); and 1:1,000 donkey anti-goat IgG HRP (R&D Systems). ### Quantification of Bands on Western Blots {#S2-8-7} Band intensities for CD40L, MIP1α, and RANTES were measured using FIJI software (ImageJ Version 2.0.0-rc-15/1.49m) on converted grayscale images using consistent rectangular regions of interest. Measured values for bands and background (region of same size beneath each band) were subtracted from 255, then background was subtracted from bands to obtain net values for protein bands of interest and loading control bands (actin). The ratios of protein bands to loading control bands were then calculated. In the case of CD40L and RANTES, these values were multiplied by 10 to obtain values between 0.1 and 10. For calculation of induction, hypoxia values were divided by normoxia values, and average values for each protein were plotted. Calculations were done in Microsoft Excel version 15.3 (Microsoft, Redmond, WA, USA). Cytotoxicity Assays {#S2-9} ------------------- ### Target Cell Labeling With Calcein AM (CalAM) {#S2-9-1} As per the manufacturer's instructions, target cells were labeled with 5 µM CalAM (Invitrogen/Thermo Fisher Scientific). Cells were diluted to 30,000 cells/100 μl medium for cytotoxicity assays. 
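The densitometry arithmetic described above for cytokine array spots and Western blot bands (inverting 8-bit mean gray values, subtracting local background, normalizing to the β-actin loading control, and taking the hypoxia:normoxia ratio) can be sketched as follows; all pixel values are invented for illustration:

```python
# Sketch of the densitometry arithmetic: FIJI mean gray values on 8-bit
# images are inverted (255 - value) so that darker bands/spots give larger
# numbers, then local background is subtracted. All numeric values below
# are invented for illustration only.

def net_intensity(raw, background_raw):
    """Inverted band intensity minus inverted local background."""
    return (255 - raw) - (255 - background_raw)

def normalized_band(raw, background_raw, actin_raw, actin_bg_raw):
    """Band intensity expressed relative to the beta-actin loading control."""
    return net_intensity(raw, background_raw) / net_intensity(actin_raw, actin_bg_raw)

# Hypothetical hypoxia vs. normoxia lanes (band, background, actin, actin bg):
hyp = normalized_band(55, 235, 95, 235)    # darker band under hypoxia
norm = normalized_band(145, 235, 95, 235)  # fainter band under normoxia
fold_induction = hyp / norm                # 2.0-fold induction under hypoxia
```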
### Blocking Antibodies {#S2-9-2}

The following anti-human antibodies were used: LEAF purified anti-NKG2D (BioLegend, Clone 1D11); anti-human CCL3/MIP1α (R&D Systems); anti-human/primate CCL5/RANTES (Clone \#21418, R&D Systems); and anti-human CD40 ligand/TNFSF5 (Clone \#40804, R&D Systems). Mouse IgG (Sigma-Aldrich) was used as a control.

### Blocking/Cytotoxicity Assay {#S2-9-3}

For blocking and cytotoxicity assays, γδTc were re-suspended at 6 × 10^6^ cells/ml in complete medium: RPMI 1640 plus 10% heat-inactivated FBS; 10 mM HEPES; 1× MEM NEAA; 1 mM sodium pyruvate; 50 U/ml penicillin--streptomycin; and 2 mM [l]{.smallcaps}-glutamine, all purchased from Invitrogen. Blocking antibodies were added at 6 µg mAb per 600 µl cell suspension/test in Eppendorf tubes; cells were then plated at 100 μl/well in a 96-well round-bottomed plate and incubated at 37°C for 30 min. Thereafter, 100 µl CalAM-labeled targets were added. For cytotoxicity assays, the effector:target (E:T) ratio is indicated; blocking assays were done at 20:1. Co-cultures were incubated at 37°C for 4 h, after which plates were centrifuged and supernatants transferred to black clear-bottom 96-well (flat) plates (Costar, VWR International, Edmonton, AB, Canada). CalAM fluorescence was then detected on a FLUOstar Omega fluorimeter (BMG Labtech). Controls were untreated and IgG-treated cells (for blocking assays), CalAM-labeled target cells incubated alone (spontaneous release), and 0.05% Triton-X 100 (Thermo Fisher Scientific)-treated cells (maximum release). Percent lysis was calculated as \[(test − spontaneous release)/(maximum − spontaneous release)\] × 100%.
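The percent-lysis formula above translates directly into code; a minimal sketch with made-up fluorescence readings:

```python
# Sketch of the calcein-AM (CalAM) release calculation:
# [(test - spontaneous release)/(maximum - spontaneous release)] x 100%.
# Fluorescence readings below are invented for illustration only.

def percent_lysis(test, spontaneous, maximum):
    """Percent specific lysis from CalAM fluorescence readings."""
    return (test - spontaneous) * 100.0 / (maximum - spontaneous)

# spontaneous = targets alone; maximum = Triton X-100-treated targets
lysis = percent_lysis(test=4500.0, spontaneous=1500.0, maximum=11500.0)  # -> 30.0
```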
### Statistics {#S2-9-4}

The following tests were used to determine significance: paired one-tailed Student's *t*-tests \[Figures [2](#F2){ref-type="fig"}A,B only, Microsoft Excel version 15.3 (Microsoft, Redmond, WA, USA)\]; paired two-tailed Student's *t*-tests \[Figures [2](#F2){ref-type="fig"}C--K, Prism 7.0 for Mac OSX (GraphPad Software, San Diego, CA, USA)\]; one-way ANOVA (Figure [4](#F4){ref-type="fig"}, Prism); and Shapiro--Wilk normality tests followed by two-way ANOVA (Figures [1](#F1){ref-type="fig"}E, [3](#F3){ref-type="fig"}, [5](#F5){ref-type="fig"}, and [6](#F6){ref-type="fig"}, Prism). Sidak's pairwise multiple comparison *post hoc* tests were performed alongside ANOVA analyses. The threshold for significance was set at *P* \< 0.05; asterisks indicate degrees of significance as defined in the figure legends.

![Gamma delta T cells (γδTcs) are present in areas of hypoxia in estrogen receptor positive (ER+) breast tumors. Serial sections from ER+ breast tumors were stained for carbonic anhydrase IX (CAIX) and T cell antigen receptor δ. **(A)** Example of CAIX-positive staining at 400× magnification from case 14; **(B)** CAIX-negative field of view (FOV) from the same slide as in **(A)**; **(C)** γδTc in the same area as **(A)** at 1,000× magnification; and **(D)** γδTc in the same area as **(B)** at 1,000× magnification. Scale bars are 50 µm. Brown indicates positive staining. **(E)** Parallel staining for γδTc and CAIX suggests that γδTc infiltration increases in hypoxic regions. CAIX scoring is indicated below the case numbers: 0 = no staining; 1 = weak and/or very focal staining; 2+ = strong but focal staining; and 3 = strong and extensive staining.
Quantification and statistical analysis of γδTc frequency in CAIX-positive versus -negative regions (blue and red bars, respectively) reveal significantly increased γδTc infiltration in hypoxic regions (two-way ANOVA, \*\*\*\**P* \< 0.0001).](fimmu-09-01367-g001){#F1} Results {#S3} ======= γδTc Can Be Found in Hypoxic Regions in Breast Cancer Cases {#S3-1} ----------------------------------------------------------- In order to determine whether γδTc are present in areas of hypoxia in breast tumors, we performed immunohistochemistry to detect the hypoxia marker CAIX and γδTc using single stains of serial sections from a panel of 17 breast tumors (Table [1](#T1){ref-type="table"}). Examples from one case (case 14) are shown (Figures [1](#F1){ref-type="fig"}A--D), including images of a CAIX-positive region (Figure [1](#F1){ref-type="fig"}A), an area with no appreciable CAIX positivity (Figure [1](#F1){ref-type="fig"}B), and increased magnification of γδTc found in the same region depicted in Figure [1](#F1){ref-type="fig"}A (Figure [1](#F1){ref-type="fig"}C) and Figure [1](#F1){ref-type="fig"}B (Figure [1](#F1){ref-type="fig"}D). Of these 17 cases, 47% (8/17) stained positively for CAIX. In CAIX-negative cases, there was little γδTc infiltration; however, when γδTc were quantified in CAIX-positive versus CAIX-negative areas of breast tumors, γδTc frequency was greater in hypoxic regions, significantly so in three cases in particular (Figure [1](#F1){ref-type="fig"}E, cases 13, 14, and 17, *P* \< 0.0001). Images for cases 13 and 17 are in Figure S1 in Supplementary Material. In our cohort, 71% (12/17) of tumors were estrogen receptor positive (ER+); most ER+ cases were CAIX-negative (Figure [1](#F1){ref-type="fig"}E, ER status indicated below case numbers). 
Exposure to Hypoxia Reduces γδTc Density {#S3-2} ---------------------------------------- Given the co-localization of γδTc and CAIX in breast tumors, we measured the effects of hypoxia on γδTc viability and density *in vitro*. We cultured γδTc for 12--19 days, then subjected them to 48 h in hypoxic (2% O~2~) or normoxic (20% O~2~) conditions. We found that exposure to hypoxia had variable effects on γδTc viability (Figure [2](#F2){ref-type="fig"}A, *P* = 0.08), and significantly decreased cell density (Figure [2](#F2){ref-type="fig"}B, *P* = 5.7 × 10^−4^). Immunophenotyping was performed using flow cytometric analyses of activation markers including γδTCR, NKG2D, CD56, CD69, CD95, CD40L, and HLA ABC as well as the inhibitory markers FasL and CD94. γδTc were stained with live/dead ZA prior to surface marker staining. Median fluorescence intensity values (MFIs) of hypoxia and normoxia samples were divided by the MFI of FMO controls to obtain fold-change values. Surface markers on γδTc cultures subjected to 48 h 20 or 2% O~2~ were not significantly different (Figures [2](#F2){ref-type="fig"}C--K). ![Gamma delta T cell (γδTc) viability and proliferation under hypoxia/normoxia differ, but overall surface marker expression is not significantly impacted by oxygen levels. **(A)** Viability of γδTc cultured under 20% versus 2% O~2~ for 48 h beginning on culture days 12--19, for eight cultures from seven different donors, assessed *via* Trypan Blue exclusion. Donor numbers are given with A and B indicating different cultures from the same donor; numbers after the hyphen are the culture days on which the experiment was begun. **(B)** Cell density assessment from experiment shown in **(A)**. 
**(C--K)** The indicated surface markers were assessed by flow cytometric analysis \[**(C,D)** *n* = 5; **(E--J)** *n* = 4; **(K)** *n* = 3 different donor cultures\].](fimmu-09-01367-g002){#F2} MIP1α, RANTES, and CD40L Are Secreted by γδTc in Hypoxia {#S3-3} -------------------------------------------------------- Culture supernatants from three different donor γδTc cultures subject to 40 h of normoxia or hypoxia were analyzed by cytokine array. While IL-8 appears elevated in the cumulative results graph depicted here (Figure [3](#F3){ref-type="fig"}A), this cytokine was only greatly increased under hypoxia in one of three experiments (Figure S2B in Supplementary Material, *P* \< 0.0001), was moderately increased in one experiment (Figure S2A in Supplementary Material, *P* \< 0.05), and not significantly elevated in the third experiment (Figure S2C in Supplementary Material). Due to significant variation among donor cultures, cumulative results reveal significantly increased secretion of only CD40 ligand (CD40L or CD154) under hypoxia compared to normoxia (Figure [3](#F3){ref-type="fig"}A, *P* = 0.0472). However, in all three individual cytokine arrays, significantly increased secretion of MIP1α \[or CCL3 = chemokine (C--C motif) ligand 3\], RANTES (or CCL5), and CD40L under hypoxia compared to normoxia was observed (Figures S2A--C in Supplementary Material). Note that equal cell numbers were plated, and relative values at 2 and 1% O~2~ were normalized to normoxia without taking harvested cell numbers into account. Considering the decrease in γδTc densities observed under hypoxia, this suggests an even greater effect would be observed if comparing the output of equal cell numbers. ![Hypoxia induces secretion of macrophage inflammatory protein 1α (MIP1α), CCL5/regulated on activation, normal T cell expressed and secreted (RANTES), and CD40L/TNFSF5 by gamma delta T cells (γδTcs). 
**(A)** Culture supernatants from γδTc subjected to 40 h at 20 or 1% O~2~ were analyzed by cytokine array. Cumulative results of three independent experiments for a panel of cytokines that were differentially secreted by γδTcs under hypoxia compared to normoxia are shown. Error bars are SEM; A.U. = arbitrary units; **(B)** ELISA validation of RANTES cytokine results shown in **(A)** for three independent experiments carried out at 20 and 1% O~2~ for 40 h; **(C)** RANTES ELISA for eight hypoxia experiments carried out for 48 h at 20 and 2% O~2~; **(D)** MIP1α ELISA for the same experiments shown in **(B)**; **(E)** CD40L ELISA for culture 6A-16 subject to 48 h 20 or 2% O~2~, and two of the experiments shown in **(B,D)**. Statistical analyses for **(A--E)**: two-way ANOVA, \**P* \< 0.05, \*\**P* \< 0.01, \*\*\**P* \< 0.001, \*\*\*\**P* \< 0.0001; **(F--H)** Western blot analysis of lysates from γδTc cultures subject to 20, 2, and/or 1% O~2~ for 48 h as indicated. γδTc culture identification is given above the blots and molecular weight (MW) markers are shown on the left; corresponding β-actin loading controls are shown in the bottom panels; relative band intensities were quantified and are indicated in arbitrary units; **(F)** three examples shown for detection of hypoxia inducible factor 1-alpha (HIF1α) (*n* = 6, 5 different donors) and CD40L (*n* = 8, 7 γδTc cultures from six donors); **(G)** MIP1α (*n* = 7, 6 γδTc cultures from five donors); **(H)** RANTES (*n* = 7); and **(I)** induction of proteins in **(F**--**H)** was determined by dividing protein band intensities from hypoxic samples by their corresponding normoxia control, and averaging these values. Error bars are SD.](fimmu-09-01367-g003){#F3} ELISA validation for expression of RANTES, MIP1α, and CD40L was performed with culture supernatants from three different γδTc cultures (Figures [3](#F3){ref-type="fig"}B--E, hypoxia = 1 or 2% O~2~ as indicated). 
For RANTES expression, an additional eight experiments were assayed, for secretion over 48 h at 20 or 2% O~2~ (Figure [3](#F3){ref-type="fig"}C). In this case, and in contrast to the cytokine array data, ELISA values were normalized to cell numbers. Significantly increased secretion of these cytokines by γδTc was observed when cells were cultured in hypoxia compared to normoxia (asterisks indicate significance). A wide range of average secreted RANTES levels was observed, ranging from 93 to 521 pg/million γδTc in normoxia to 431 to 856 pg/million γδTc under hypoxia; the average ratio hypoxia:normoxia is indicated above the bars (Figures [3](#F3){ref-type="fig"}B--E). Likewise, secreted MIP1α and CD40L levels were quantified for three independent experiments using ELISA (Figures [3](#F3){ref-type="fig"}D,E). MIP1α levels ranged from 152 to 394 pg/million γδTc in normoxia to 1,406 to 2,509 pg/million γδTc under hypoxia, with fold changes from 4.0 to 14.2 (Figure [3](#F3){ref-type="fig"}D). Similarly, CD40L secretion by γδTc increased significantly when cultured in low O~2~, with 2% O~2~ in one experiment yielding an average of 171 pg CD40L/million γδTc in hypoxia, a 4.9-fold increase over just 35 pg CD40L/million γδTc in normoxia (Figure [3](#F3){ref-type="fig"}E). Two experiments conducted with 1% O~2~ yielded a wide range of CD40L secretion by γδTc in both conditions (Figure [3](#F3){ref-type="fig"}E, 120--395 and 536--653 pg CD40L/million γδTc in normoxia and hypoxia, respectively). Western blotting was done to verify induction of HIF1α in γδTc under hypoxia, and also to investigate whether intracellular levels of CD40L, MIP1α, and RANTES reflected those of secreted proteins (Figures [3](#F3){ref-type="fig"}F,G). HIF1α was clearly induced in γδTc at 2 and 1% O~2~ in all cases; three examples from six independent experiments with five donor cultures are shown (Figure [3](#F3){ref-type="fig"}F, top panel, compare lane 1 versus 2 and 3, 4 versus 5, and 6 versus 7). 
CD40L appears visibly increased in hypoxia samples for γδTc culture 6A-16 (Figure [3](#F3){ref-type="fig"}F, middle panel, compare lane 1 versus 2 and 3), and quantification suggests this is also the case for the other two donor cultures shown (lane 4 versus 5 and lane 6 versus 7). Note that several forms of CD40L are evident here, which were included in the quantification of bands. Of eight experiments with seven γδTc cultures from six donors, intracellular CD40L was clearly visibly increased in three (38%). HIF1α and CD40L blots originated from the same gel, which was transferred and then cut at 75 kDa; thus, the β-actin loading control serves for both (Figure [3](#F3){ref-type="fig"}F, lower panel). MIP1α levels were not consistently higher in γδTc subject to hypoxia versus normoxia (Figure [3](#F3){ref-type="fig"}G, representative of seven experiments with six γδTc cultures from five donors), as demonstrated by very similar quantification values within each experiment. By contrast, RANTES was typically induced by hypoxia, with higher protein levels evident in cellular lysates from γδTc cultured in 1 or 2% O~2~ compared to normoxia (Figure [3](#F3){ref-type="fig"}H, compare lane 1 versus 2 and 3, 4 versus 5, and 8 versus 9; *n* = seven independent experiments, seven donors, induction clear in six, unclear in one). Longer exposure of this blot also revealed RANTES induction in lane 7 versus 6 (Figure S3 in Supplementary Material). Full scans of Western blots can be found in Figure S4 in Supplementary Material. The average induction of CD40L, MIP1α, and RANTES in γδTc under hypoxia relative to normoxia was calculated using Western blot band intensity values, and confirmed elevated levels of intracellular CD40L and RANTES, but not MIP1α, under hypoxia (Figure [3](#F3){ref-type="fig"}I). 
NKG2D Expressed on γδTc and MICA/B on Breast Cancer Targets Are Critical for γδTc Killing {#S3-4} ----------------------------------------------------------------------------------------- MCF-7 and T47D are estrogen receptor (ER) positive luminal A breast carcinoma cell lines ([@B35]). Both of these cell lines express MICA/B on the surface as identified by flow cytometric analysis (Figures [4](#F4){ref-type="fig"}A,B). Blocking NKG2D on γδTc significantly decreased lysis of MCF-7 (Figure [4](#F4){ref-type="fig"}C, one-way ANOVA versus IgG control, *P* \< 0.0001, representative of four independent experiments, *n* = 4) and T47D (Figure [4](#F4){ref-type="fig"}D, *P* = 0.0002, *n* = 5). Likewise, blocking the NKG2D ligand MICA/B on targets prevented MCF-7 and T47D cell lysis (Figures [4](#F4){ref-type="fig"}C,D, both *P* \< 0.0001, *n* = 2 and 3, respectively). By contrast, no decrease in cell lysis of either line was observed when γδTc were pre-incubated with antibodies against MIP1α, RANTES, or CD40L (Figures [4](#F4){ref-type="fig"}E,F, *n* = 3 and 2, respectively). Since antibodies were not washed away prior to co-incubation with targets, blocking should have been effective against both membrane-bound and soluble proteins. Thus, it appears that MIP1α, RANTES, and CD40L are not directly involved in γδTc cytotoxicity against MCF-7 or T47D. ![Natural killer group 2, member D (NKG2D) on gamma delta T cells (γδTcs) and MHC class I polypeptide-related sequence A (MICA)/B on breast cancer cell lines mediate γδTc cytotoxicity. Flow cytometric analysis of **(A)** MCF-7 (*n* = 4) and **(B)** T47D (*n* = 2) confirms that both cell lines express MICA/B. **(C)** Cytotoxicity assays in which NKG2D on γδTcs or MICA/B on MCF-7 cells are blocked with antibodies confirm γδTc recognition of breast cancer targets *via* this receptor/ligand interaction (*n* = 3, representative of three independent experiments with three different donor cultures). 
**(D)** Blocking assays as in **(C)** using T47D targets (*n* = 3). **(E)** Blocking macrophage inflammatory protein 1α (MIP1α), CCL5/regulated on activation, normal T cell expressed and secreted (RANTES), and CD40L/TNFSF5 does not decrease lysis of MCF-7 (*n* = 3 independent experiments with two different donor cultures) or **(F)** T47D (*n* = 2). Statistical analyses for **(C--F)**: one-way ANOVA, \*\*\**P* \< 0.001, \*\*\*\**P* \< 0.0001.](fimmu-09-01367-g004){#F4} γδTc Cytotoxicity Against MCF-7 and T47D Targets Is Enhanced in Hypoxia {#S3-5} ----------------------------------------------------------------------- Cytotoxicity experiments were performed in which γδTc effectors and breast cancer cell lines were pre-incubated for 48 h under normoxia or hypoxia (2% O~2~) and then co-cultured at 1:1, 10:1, and 20:1 E:T ratios in parallel under normoxia or hypoxia, as per target pre-incubation conditions, for 4 h. Pre-incubation in hypoxia enhanced γδTc cytotoxicity against MCF-7 targets cultured in normoxia (Figures [5](#F5){ref-type="fig"}A,B). In a representative example, significantly increased MCF-7 cell lysis was observed at 20:1 (Figure [5](#F5){ref-type="fig"}A, *P* = 0.0005); when data from all six experiments performed with day 21 γδTc from five different donors (six different cultures) were compiled and subject to statistical analysis, this result was confirmed (Figure [5](#F5){ref-type="fig"}B, *P* = 0.007). Likewise, γδTc cultured in hypoxia were better able to kill T47D cultured in normoxia (Figures [5](#F5){ref-type="fig"}C--D). In an example representative of five experiments with day 21 γδTc from four different donors, target cell lysis was significantly increased at all E:T ratios tested (Figure [5](#F5){ref-type="fig"}C, *P* \< 0.01); analysis of compiled results from all five experiments revealed significantly increased lysis of targets by hypoxia-treated γδTc at 1:1 and 20:1 E:T (Figure [5](#F5){ref-type="fig"}D, *P* \< 0.05). 
![Enhanced cytotoxicity of gamma delta T cells (γδTcs) cultured in hypoxia. Cytotoxicity assays comparing γδTc cultured in 20% (red bars) or 2% O~2~ (blue bars) 48 h prior to co-culture with breast cancer target lines cultured at 20% O~2~. **(A)** A representative example of γδTc targeting MCF-7 cells; **(B)** compiled results from six independent experiments with γδTc cultures from five different donors targeting MCF-7; **(C)** a representative example with T47D targets; **(D)** compiled results from five independent experiments with γδTc cultures from four different donors targeting T47D. Two-way ANOVA, \**P* \< 0.05, \*\**P* \< 0.01, \*\*\**P* \< 0.001.](fimmu-09-01367-g005){#F5} Breast Cancer Targets in Hypoxia Are Resistant to γδTc Killing due to MICA Shedding {#S3-6} ----------------------------------------------------------------------------------- As outlined above, cytotoxicity experiments were performed in which breast cancer cell lines were pre-incubated for 48 h under normoxia or hypoxia (2% O~2~) and then co-cultured with γδTc at 1:1, 10:1, and 20:1 E:T in parallel under normoxia or hypoxia for 4 h. In most cases (4/6, 67%), pre-incubation in hypoxia induced MCF-7 resistance to γδTc cytotoxicity (Figures [6](#F6){ref-type="fig"}A--C). In a representative example from an experiment performed with γδTc culture 4B-21, significantly decreased MCF-7 cell lysis was observed at 10:1 (Figure [6](#F6){ref-type="fig"}A, *P* = 0.0054) and 20:1 (Figure [6](#F6){ref-type="fig"}A, *P* = 0.0119). By contrast, in two experiments with two different γδTc cultures from the same donor, no resistance was observed; one example is shown in which MCF-7 cultured under hypoxia appeared to be more susceptible to γδTc killing (Figure [6](#F6){ref-type="fig"}B, *P* \< 0.0001 at 1:1 and 10:1). 
When data from five experiments performed with day 21 γδTc from five different donors were compiled and subject to statistical analysis, the overall effect of hypoxia inducing MCF-7 resistance was confirmed (Figure [6](#F6){ref-type="fig"}C, *P* = 0.0011). Likewise, T47D cultured in hypoxia were more resistant to γδTc killing at 20:1 than those cultured in normoxia (Figure [6](#F6){ref-type="fig"}D, *P* = 0.0043), although the 1:1 result is opposite (*P* = 0.0076); these compiled results were from four experiments conducted with four different γδTc donor cultures. Flow cytometric analysis of MICA/B surface expression on breast cancer lines subjected to 48 h normoxia or hypoxia revealed no significant change in MFI; representative examples are shown for MCF-7 (Figure [6](#F6){ref-type="fig"}E, *n* = 4) and T47D (Figure [6](#F6){ref-type="fig"}F, *n* = 2). Of note, Accutase was used for dissociation of these adherent cell lines, out of concern for potential trypsin sensitivity of surface MICA/B that might have confounded our results. Supernatants from MCF-7 and T47D subject to 48 h 20 or 2% O~2~ were subject to MICA ELISA (Figure [6](#F6){ref-type="fig"}G). MICA could not be detected in supernatants directly, thus samples were concentrated and MICA ELISA was repeated. MICA in T47D remained below the detection limit; however, after normalization to cell numbers, a significant increase in secreted MICA by MCF-7 cells under hypoxia was observed in 3/4 experiments (Figure [6](#F6){ref-type="fig"}G, \*\*\**P* = 0.0005, \*\*\*\**P* \< 0.0001). These results match those observed in cytotoxicity experiments, with ELISA from MCF-7 targets used in cytotoxicity assays with 4B-21 showing increased MICA secretion under hypoxia that fits with the observed resistance to γδTc cytotoxicity in Figure [6](#F6){ref-type="fig"}A. 
Likewise, no difference in MICA secretion was observed in MCF-7 targets under 20 or 2% O~2~ subject to cytotoxicity assays with γδTc culture 10B-21, which also showed no MCF-7 resistance to γδTc killing in Figure [6](#F6){ref-type="fig"}B. Thus, resistance to γδTc killing appears to be correlated with MICA secretion by breast cancer targets. Despite enhanced cytotoxicity of γδTc cultured under 2% compared to 20% O~2~ against targets cultured under normoxia (Figure [5](#F5){ref-type="fig"}), they are unable to overcome resistance exhibited by MCF-7 under 2% O~2~, as revealed by analysis of five compiled experiments comparing γδTc cultured under 20 or 2% O~2~ against MCF-7 cells cultured in hypoxia (Figure [6](#F6){ref-type="fig"}H). ![Breast cancer cell lines pre-incubated in hypoxia are resistant to gamma delta T cell (γδTc) killing. Cytotoxicity assays comparing the ability of γδTc cultured under normoxia to target breast cancer target lines cultured at 20% O~2~ (red bars) or 2% O~2~ (blue bars) for 48 h prior to co-culture under hypoxia; **(A)** a representative example in which MCF-7 cells were resistant to γδTc killing (4B-21); **(B)** an example in which MCF-7 cells cultured under 2% O~2~ were susceptible to γδTc killing (10B-21); **(C)** compiled results from five independent experiments with γδTc cultures from five different donors targeting MCF-7; **(D)** compiled results from four experiments with four different donor-derived γδTc cultures targeting T47D; **(E)** surface expression of MHC class I polypeptide-related sequence A (MICA)/B on T47D remains unchanged under hypoxia versus normoxia; **(F)** surface expression of MICA/B on MCF-7 is not differentially impacted by hypoxia versus normoxia; **(G)** MICA ELISA on concentrated supernatants of MCF-7 from experiments in **(A)**; **(H)** compiled results from five independent experiments with γδTc cultures from five different donors cultured at 20% O~2~ or 2% O~2~ targeting MCF-7 cultured under hypoxia for 
48 h prior to co-culture under hypoxia. Two-way ANOVA, \**P* \< 0.05, \*\**P* \< 0.01, \*\*\**P* \< 0.001, \*\*\*\**P* \< 0.0001.](fimmu-09-01367-g006){#F6} Discussion {#S4} ========== Gamma delta T cells are being developed as immunotherapeutic agents for a variety of cancer indications, and clinical trials (Phase I/II) have thus far shown excellent safety profiles ([@B36]). Yet, they are known to embody remarkable functional plasticity, dependent on the environment in which they find themselves ([@B24], [@B37]--[@B39]). Thus, it is important to explore the function of γδTc infiltrating solid tumors, some of which may be hypoxic. In our small cohort of 17 breast cancer cases, 47% of tumors contained areas of CAIX positivity indicating hypoxia (Figure [1](#F1){ref-type="fig"}). The CAIX-negative cases were 89% ER+ (Figure [1](#F1){ref-type="fig"}E, cases 1--9; case 7 was of unknown ER status); of ER+ cases, 76% were CAIX negative. This confirms reports showing up to 80% CAIX negativity in studies assessing ER+ breast tumors; in these cases, CAIX negativity correlated with low histological grade ([@B40]). While our cohort was admittedly small, the very low levels of γδTc infiltrates in CAIX-negative tumors, correlated with low histological grade, confirm results showing that levels of γδTc infiltration correlate positively with higher histological grades ([@B41]). Unfortunately, our cohort size was too limited to determine whether γδTc infiltration correlated with patient outcome. We did, however, find γδTc in areas of hypoxia in some tumors. While we did not have the power in our study, or *in vivo* functional data, to claim that γδTc are preferentially attracted to hypoxic regions, our results at least provide an indication that γδTc can be found in hypoxic areas of tumors, and that studying their function under low O~2~ is worthwhile.
As CAIX is more strongly associated with triple negative breast cancers (TNBC) ([@B18], [@B42]), future studies of γδTc and hypoxia should focus on a larger cohort of TNBC patients. Indeed, the groundwork for such studies has been laid by Hidalgo and colleagues, who recently reported on the pattern of distribution of γδTc in TNBC ([@B43]). It was unsurprising that γδTc density decreased under hypoxia (Figure [2](#F2){ref-type="fig"}), as terminally differentiated γδTc stop proliferating to become cytotoxic ([@B44]), and hypoxia enhanced γδTc cytotoxicity (Figure [5](#F5){ref-type="fig"}). Delayed cell-cycle progression was also noted in a study on PBMC in hypoxia ([@B45]). To our knowledge, the only study of γδTc in the context of hypoxia showed that circulating γδTc in patients with obstructive sleep apnea had elevated intracellular tumor necrosis factor alpha (TNFα) and IL-8 levels, increased TNFα and L-selectin-mediated adhesion properties, and enhanced cytotoxicity against endothelial cells compared to those isolated from healthy donors ([@B46]). While that study compared freshly isolated blood-derived γδTc from patients and healthy donors, we used healthy donor-derived *in vitro* expanded γδTc for our experiments, which potentially accounts for the differing results. TNFα secretion was not impacted by hypoxia in our study, as no differential effects were detected by cytokine array (data not shown). While we did observe strongly elevated hypoxia-induced IL-8 in the supernatant of one of the three γδTc cultures subjected to cytokine array analysis (Figure S2 in Supplementary Material), this was not the case for the other two cultures. More significant were cytokine array data pointing to increased secretion of RANTES, MIP1α, and CD40L by γδTc under low O~2~ compared to normoxia, which were confirmed by subsequent ELISAs (Figures S2A--C in Supplementary Material; Figures [3](#F3){ref-type="fig"}B--E).
Intracellular protein levels induced by hypoxia matched ELISA results only in the case of RANTES (Figure [3](#F3){ref-type="fig"}H); the same could not be said for CD40L and MIP1α, where hypoxia treatment did not appear to increase intracellular levels (Figures [3](#F3){ref-type="fig"}F,G), and surface expression of CD40L was variable (Figure [2](#F2){ref-type="fig"}K). Since blocking these proteins appeared to have no impact on γδTc cytotoxicity against breast cancer target lines (Figures [4](#F4){ref-type="fig"}E,F), they must have an indirect function related to enhanced cytotoxicity of γδTc under hypoxia. Human memory Vγ2Vδ2 cells were reported to store cytoplasmic RANTES that was secreted rapidly in response to TCR signaling, but little MIP1α protein was found in these cells ([@B47]). RANTES is a chemokine employed to recruit antigen presenting cells, such as dendritic cells ([@B48], [@B49]), and thus speaks to the anti-tumor function of γδTc in hypoxia, though breast tumors may use this to their own advantage to promote malignancy ([@B50]). RANTES and MIP1α expression were also reported to aid Vδ1 cell suppression of HIV replication ([@B51]). CD40 ligation is thought to enhance the immunogenicity of tumors ([@B52]), thus γδTc may secrete CD40L in order to better "see" tumor targets. CD40L may also inhibit growth of CD40-expressing tumors directly ([@B52]--[@B55]). Further investigation will be required to determine the functions served by these cytokines with respect to γδTc targeting solid tumors. A study of the Vγ9Vδ2 γδTc subset in the context of breast cancer suggested that surface levels of MICA/B on breast cancer target cell lines were associated with γδTc cytotoxicity against these lines; however, direct blocking assays were not carried out ([@B16]). Both MCF-7 and T47D cells expressed surface MICA/B, in contrast to an earlier report suggesting a lack of MICA/B expression on MCF-7 ([@B56]). 
If trypsin was used to dissociate MCF-7 in that study, it might explain their inability to detect MICA/B; to avoid this issue, we used Accutase to dissociate our adherent cell lines, as detachment of cells is gentler and protects most surface epitopes. We have confirmed the involvement of NKG2D on γδTc and MICA/B on MCF-7 and T47D in cytotoxicity of γδTc against breast tumor targets (Figure [4](#F4){ref-type="fig"}), although differences in the ability of γδTc to kill targets pre-incubated in hypoxia or normoxia do not appear to be related to surface levels of MICA (Figure [6](#F6){ref-type="fig"}). One mechanism of hypoxia-mediated tumor evasion is MICA shedding ([@B57]). MICA downregulation related to shedding under hypoxia, as well as downregulated expression of NKG2D on PBMCs incubated with culture supernatants of prostate cancer cells exposed to hypoxia---abrogated upon incubation with MICA blocking antibodies---has been reported ([@B58]). MICA shedding is not a universal evasion mechanism employed by all cancer cells, however, as glioblastoma cell lines did not shed MICA, although this study was only carried out under normoxia ([@B59]). While we assume that soluble MICA may bind NKG2D and block or downregulate this receptor to prevent γδTc recognition of breast cancer targets, a recent report suggests that, in mice, soluble NKG2D might activate NK cells and aid in tumor eradication, but this anti-tumor effect has yet to be shown in humans or with γδTc ([@B60]). By contrast, soluble MIC was shown to decrease γδTc cytotoxicity in pancreatic cancer ([@B61]) and has been implicated in evasion of human ovarian cancer cells from γδTc recognition ([@B21]). Thus, we were surprised that surface expression of MICA/B on MCF-7 and T47D breast cancer lines appeared unaffected by 48 h under hypoxia (Figure [6](#F6){ref-type="fig"}). 
However, MICA secretion did not correlate with MICA surface levels, as soluble MICA increased in the supernatants of MCF-7 cells cultured under hypoxia, while surface MICA levels remained unchanged (Figure [6](#F6){ref-type="fig"}). Thus, it appears that we would need to neutralize soluble MICA to improve γδTc cytotoxicity, since target surface expression did not appear to be affected by hypoxia. That said, we did not directly assess MICA expression during co-culture with γδTc, and it is possible that MICA was downregulated in the presence of γδTc, although the correlation between resistance to γδTc killing and soluble MICA levels in culture supernatants under hypoxia speaks against this (Figure [6](#F6){ref-type="fig"}). One way to overcome MICA shedding may be to increase nitric oxide signaling ([@B58]), although its impact on γδTc would have to be assessed. Although the γδTc tumor infiltrating lymphocytes (TIL) signature was deemed the most positive prognosticator across a range of cancers, including breast cancer ([@B62]), some reports suggest that γδTc may take on a regulatory phenotype within the breast TME ([@B41], [@B56], [@B63], [@B64]). In one study, γδTc TIL isolated from a breast tumor were expanded in high levels of IL-2 for several weeks prior to immunosuppression assays and proved to inhibit dendritic cell maturation and CD8+ T cell cytotoxicity ([@B56]); however, given the known functional plasticity of γδTc, such assays conducted on *ex vivo* expanded cells removed from the TME cannot inform the function of γδTc *in situ*. A positive correlation was observed between γδTc infiltration and breast cancer stage, leading the authors to suggest that γδTc may contribute to disease pathology; however, causality was not established ([@B41]). 
Although our cohort size was much smaller, we too observed a positive correlation between CAIX expression, indicating hypoxia---typically an indicator of cancer progression---and γδTc infiltration (Figure [1](#F1){ref-type="fig"}). This could just as easily indicate the greater need for γδTc attempting to eradicate disease. Our hypoxia experiments reveal enhanced cytotoxicity of γδTc exposed to 48 h of low O~2~, suggesting that γδTc are indeed able to kill in this environment (Figure [5](#F5){ref-type="fig"}). Soluble MICA appears to inhibit γδTc cytotoxicity against breast tumor targets in hypoxia and, despite their increased killing capacity under low O~2~, γδTc are unable to overcome resistance exhibited by MCF-7 under 2% O~2~ (Figure [6](#F6){ref-type="fig"}), a condition under which γδTc must operate within at least some parts of a tumor. Further studies will be required to definitively identify γδTc function in breast tumors *in situ*. Ethics Statement {#S5} ================ This study was carried out in accordance with the recommendations of the Research Ethics Guidelines, Health Research Ethics Board of Alberta---Cancer Committee with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Health Research Ethics Board of Alberta---Cancer Committee. Author Contributions {#S6} ==================== GS and L-MP contributed to research design. GS and ID conducted experiments; data analysis was carried out by GS, ID, and RL. GS wrote the manuscript; all authors provided feedback and approved the final version. Conflict of Interest Statement {#S7} ============================== The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. 
Flow cytometry was performed at the University of Alberta, Faculty of Medicine and Dentistry Flow Cytometry Facility, which received financial support from the Faculty of Medicine and Dentistry and the Canadian Foundation for Innovation (CFI) awards to contributing investigators. We thank Nidhi Gupta for assistance in obtaining the breast tumor tissues, and Achim Jungbluth for sharing his protocol for detection of γδTc by immunohistochemistry prior to its publication. **Funding.** This work has been funded by the London Regional Cancer Program, London, ON (Translational Breast Cancer Postdoctoral award to GS), the Cancer Research Society (CRSOG2013 to L-MP and GS), and the Canadian Breast Cancer Foundation (L-MP). Support was also provided by the Sawin-Baldwin Chair in Ovarian Cancer, Dr. Anthony Noujaim Legacy Oncology Chair, and Alberta Innovates Health Solutions Translational Health Chair to LP. ID has been supported by the Queen Elizabeth II Graduate Scholarship, the University of Alberta Doctoral Recruitment Scholarship, and the Alberta Cancer Foundation Antoine Noujaim Scholarship. Supplementary Material {#S9} ====================== The Supplementary Material for this article can be found online at <https://www.frontiersin.org/articles/10.3389/fimmu.2018.01367/full#supplementary-material>. ###### Click here for additional data file. ###### Click here for additional data file. ###### Click here for additional data file. ###### Click here for additional data file. ###### Click here for additional data file. 
Abbreviations {#S10} ============= BP, band pass; CAIX, carbonic anhydrase IX; CalAM, Calcein AM; CD40L, CD40 ligand (or CD154); E:T, effector:target ratio; ER, estrogen receptor; FBS, fetal bovine serum; FMO, fluorescence minus one; γδTcs, gamma delta T cells; HIF1α, hypoxia inducible factor 1-alpha; HRP, horseradish peroxidase; IL, interleukin; LP, long pass; MICA, MHC class I polypeptide-related sequence A; MIP1α, macrophage inflammatory protein 1α \[or CCL3 = chemokine (C--C motif) ligand 3\]; MFI, median fluorescence intensity; NKG2D, natural killer group 2, member D; O~2~, oxygen; PBMCs, peripheral blood mononuclear cells; PBS, phosphate buffered saline; PIC, protease and phosphatase inhibitor cocktail; PR, progesterone receptor; RANTES, regulated on activation, normal T cell expressed and secreted (or CCL5); TBST, tris-buffered saline plus 0.05% Tween-20; TCR, T cell antigen receptor; TIL, tumor infiltrating lymphocytes; TME, tumor microenvironment; TNFα, tumor necrosis factor alpha; ZA, Zombie Aqua fixable viability dye. [^1]: Edited by: Kenth Gustafsson, University College London, United Kingdom [^2]: Reviewed by: Tomasz Zal, University of Texas MD Anderson Cancer Center, United States; Christoph Wülfing, University of Bristol, United Kingdom [^3]: Specialty section: This article was submitted to T Cell Biology, a section of the journal Frontiers in Immunology
My Preemie Book Reviews Make yourself comfortable and browse through this comprehensive listing of my preemie book reviews. I have been reviewing books on pre-term birth, prematurity, and preemies since 1997, and the list of book reviews has grown quite long! Here is a listing of each of my preemie book reviews with a link to the review. While you are here, take advantage of the chance to meet the authors of these preemie books in exclusive preemie author interviews.
Authentic reproduction double-cloth jacketed wire. 22 (US) gauge solid copper, 600 V rated. Great for repairing vintage gear, but you can use it on new & homebrew projects as well. Also great for rewiring guitars, and the brown is spot on for rewiring old Hammonds. Bag of 5 feet (approx. 1.5 m). Select from an array of 7 different colors. If you order Green, you will receive 18 (US) gauge solid core heater wire. SOLD IN 5 FOOT INCREMENTS
Novel use of laparoscopic-guided TAP block in total laparoscopic hysterectomy. Transversus abdominis plane (TAP) block is a peripheral nerve block designed to anaesthetise the nerves supplying the anterolateral abdominal wall (T6 to L1). We introduced laparoscopic TAP block at Ninewells Hospital in 2014 and present a retrospective study assessing its efficacy. To our knowledge, limited study has been done on laparoscopic-guided TAP block, whilst abundant literature is available on ultrasound-guided TAP block. To evaluate the efficacy of laparoscopic-guided TAP block as postoperative analgesia following total laparoscopic hysterectomy (TLH), a retrospective study was done between November 2014 and October 2016 (24 months) comparing patients who had TLH with TAP block (Group 1; n = 45) and patients who had TLH without TAP block (Group 2; n = 31) in our gynaecology unit. Patients were identified from the theatre database. Data were collected from the clinical portal and medical notes. The data included demographic information, BMI, METS score, intra-operative opiate use, post-operative pain scores, opiate requirements and use of patient-controlled analgesia (PCA), total dose of opiates used and day of discharge. The outcomes were analysed using means, odds ratios (OR), the Mann-Whitney U-test and Fisher's exact or Chi-square test with 95% confidence intervals (CI). Patients in Group 1 were older (mean age of 64.4, range 38-87) when compared to Group 2 (mean age of 49.3, range 37-81). Groups 1 and 2 had comparable mean BMI (30.34 vs. 30.02) and METS scores (6.77 vs. 7.76). Mean post-operative pain scores were lower in Group 1 within 4 hours and in the periods of 4-12 hours, 12-24 hours and 24-48 hours post-op. A smaller proportion of patients in Group 1 required opiates post-operatively in all periods as compared to Group 2. This was statistically significant in the period of 12-24 hours post-op (OR 0.31, 95% CI 0.11-0.82; p = .01).
PCA use was significantly lower in Group 1 (OR 0.02, 95% CI 0.0014-0.46; p = .01). Group 1 had a lower mean total dose of opiates (27.182 mg, range 0-102 mg) than Group 2 (59.452 mg, range 0-240 mg), which was statistically significant (p < .0001). Average post-op hospital stay was 1.3 and 1.8 days in Groups 1 and 2, respectively. Laparoscopic-guided TAP block delivered as post-operative analgesia following TLH results in a reduced opiate requirement in the 12-24 hour post-operative period, reduced PCA use and a lower total dose of opiates used.
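For readers who want to reproduce the style of analysis quoted above (e.g. OR 0.31, 95% CI 0.11-0.82), an odds ratio with a Wald 95% confidence interval can be computed directly from a 2×2 contingency table. The counts below are hypothetical, chosen only to illustrate the calculation; they are not this study's raw data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table laid out as:
              outcome+  outcome-
    group 1      a         b
    group 2      c         d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: opiate use 12-24 h post-op, TAP block vs. no TAP block
or_, lo, hi = odds_ratio_ci(10, 35, 15, 16)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

The Wald interval on the log scale is the textbook large-sample approximation; with small cell counts (or any zero cell) an exact method such as Fisher's — which the study also used — is preferable.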
1. Field of the Invention The present invention relates to a seat slide structure for a vehicle, such as an automobile. 2. Description of the Related Art Heretofore, there has been proposed a seat slide structure for a vehicle, as disclosed, for example, in JP 2004-106713A. A seat slide structure disclosed in this publication comprises a screw rod (threaded rod) non-rotatably fixed to a lower rail, and a nut member screwed with the screw rod. A holding member is fixed to the upper rail. The holding member is formed by subjecting a plate-shaped body to bending. The holding member includes a front holding piece, a rear holding piece, and a connection piece connecting between respective base ends of the front and rear holding pieces, and has a U shape. The nut member is rotatably supported by a gearbox. The gearbox is installed between the front and rear holding pieces of the holding member. In other words, the nut member is held by the holding member through the gearbox (see FIG. 4 in the JP 2004-106713A). When the nut member is rotated and moved along an axial direction of the screw rod, the nut member presses the holding member frontwardly or rearwardly through the gearbox. Thus, the upper rail is moved frontwardly or rearwardly with respect to the lower rail. However, the holding member having a U shape involves the following problem. For example, as shown in FIG. 28, when a frontward or rearward force P is applied to an upper rail 100 (FIG. 28 illustrates an example where a rearward force is applied thereto), a holding member 101 is urged to be moved in the direction of the force P together with the upper rail 100, so that it presses a gearbox 102. The gearbox 102 receiving the force P presses a nut member 103. However, the pressed nut member 103 cannot be moved because it is screwed with a screw rod 104. Thus, the gearbox 102 and the holding member 101 receive a reaction force from the nut member 103. 
The reaction force is likely to cause deformation of the holding member 101, as shown in FIG. 28. As means for suppressing such deformation, for example, it is contemplated to increase a plate thickness of each of the gearbox 102 and the holding member 101. However, in this case, due to an increase in force required for bending, it becomes difficult to adequately form the gearbox 102 and the holding member 101. Moreover, due to an increase in size of the holding member, it becomes difficult to install the holding member in the upper or lower rail.
Q: Adding a unique id in the Rails 4 logger to identify the request and response

I have added the following logger in my worker file:

    @@logger = Logger.new "log/rest_client.log"
    @@logger.level = Logger::DEBUG

    begin
      @@logger.debug "The request #{url}"
      response = RestClient.get url
      @@logger.debug "Successful response #{response}"
    rescue => e
      @@logger.debug "Failure response #{e.message}"
    end

I want to identify the request and the response (success or failure) with a unique id in the logger (rest_client.log). Currently the log looks like:

    D, [2014-12-09T14:27:18.576498 #29871] DEBUG -- : The request https://api.bitfinex.com/v1/symbols
    D, [2014-12-09T14:27:21.547365 #29874] DEBUG -- : Successful response ["btcusd","ltcusd","ltcbtc","drkusd","drkbtc","th1btc"]

I want something unique to both the request and the response so I can tell that they belong together.

A: Since you control the contents of the log message at each step, you can create a unique id every time a request is made and include it in both messages. A timestamp-based id works for low request volumes; if requests can overlap within the same second, SecureRandom.uuid from the Ruby standard library is a safer choice. One possible solution (note that RestClient.get should be passed the url variable itself; wrapping it in single quotes as '#{url}' would send the literal string, since single-quoted strings are not interpolated):

    begin
      unique_id = "ID-" + Time.now.strftime("%Y%m%d-%H%M%S")
      @@logger.debug "#{unique_id}: The request #{url}"
      response = RestClient.get url
      @@logger.debug "#{unique_id}: Successful response #{response}"
    rescue => e
      @@logger.debug "#{unique_id}: Failure response #{e.message}"
    end

Log:

    D, [2014-12-09T14:27:18.576498 #29871] DEBUG -- : ID-20141209-142718: The request https://api.bitfinex.com/v1/symbols
    D, [2014-12-09T14:27:21.547365 #29874] DEBUG -- : ID-20141209-142718: Successful response ["btcusd","ltcusd","ltcbtc","drkusd","drkbtc","th1btc"]
Art lecturers affiliated to a tertiary institute observed the occurrence of negative emotional content and expression in the artwork of their students. These students were also inclined to manifest negative behavioural and interaction patterns. The lecturers appealed for this research as a method to determine what the content of the expressed artwork indicates. The request was for the research methods to proceed within an art framework in order for the research to be applied as a class project. The research proceeded with the use of art as a projection medium, applied during the process of facilitative interaction. The aim was to determine the degree to which the projected content in the young adults' artwork correlates with their personal life- and experiential world. The young adults were requested to write spontaneous sketches depicting the stories of their lives. The information was passed on to a graphologist for the analysis of their handwriting. The findings of the graphologist were later applied as external triangulation in order to verify the themes identified in the analysis. With the aid of art as projection medium during facilitative interaction, it was determined that the young adults struggle with unresolved trauma as a result of abuse. The exposure to abuse resulted in barriers influencing their relationships with others as well as themselves. The barriers manifest as experiences of pain and confusion; mistrust and isolation; aggression and depression. The research resulted in the development of a model for educational psychologists, equipping them to identify and address unresolved trauma with young adults through the use of art as projection medium during facilitative interaction. The development of the model proceeded in four stages. During stage one, concepts were identified, defined and classified after completion of the fieldwork. The sample included 30 respondents from different cultures, ranging from ages 18 to 24.
The collection of data proceeded with the use of art as projection medium, involving the following: a Gestalt-therapeutic exercise (the drawing of a rosebush), in-depth interviews, the analysis of cartoons, and the writing of spontaneous sketches on unlined paper for graphological analysis. The model of Guba was used to ensure trustworthiness in the qualitative methodology; this refers to the credibility, transferability, dependability and confirmability of the research. In stage two, the relationships between concepts were drawn, after which stage three followed, involving the description of the model. Guidelines for operationalising the model are stated in stage four. The model aims at the empowerment of young adults suffering from unresolved childhood trauma, with the use of art as projection medium during facilitative interaction. During this process the young adults are guided to an enhanced self-awareness in order for self-insight and self-empowerment to develop, so that mental health can be attained. The power of the model lies in the continuous plotting taking place through the use of art as projection medium during facilitative interaction.
Benoît Hamon, a leftist radical, and Manuel Valls, a centrist former prime minister, will face each other next week after beating five other candidates in the first round of France’s Socialist party primary on Sunday. According to partial results from more than a third of polling stations, Mr Hamon was leading with 35.21 per cent, with Mr Valls on 31.56 per cent. Mr Hamon, 49, who is often compared to Jeremy Corbyn and Bernie Sanders, proposes to give all adults monthly welfare payments of €750 (£650) regardless of their income. Arnaud Montebourg, who came third with more than 17 per cent of the vote, according to the partial results, urged his supporters to back Mr Hamon, which could considerably increase the candidate’s chances of victory. “We’ve got to finish with old recipes, old politics and old solutions that don’t work,” Mr Hamon said. Opinion polls suggest that Ms Le Pen will win the first round of the presidential election this spring, but will be beaten in the decisive second round by Mr Fillon. [Photo: Front National leader Marine Le Pen. Credit: Reuters] The Socialists’ traditional working-class base has largely switched allegiance to the hard-Right Front National under Ms Le Pen, who has moderated its image. Young, urban Left-wingers are moving to Emmanuel Macron, 39, an independent centrist who is also drawing support from the Right and is emerging as the third candidate in the presidential race. The outcome of the Socialist primary will make a big difference to Mr Macron. Analysts say Mr Valls would be a more formidable rival than Mr Hamon, but the former prime minister is a polarising figure whose tough line on security and business-friendly economic policies are unpopular with the Left wing of the party. Mr Hamon appeals to young leftists and the unemployed, but the estimated €300 billion (£260bn) cost of his monthly handouts has raised fears of higher taxes and economic disaster.
Pablo, 26, a Paris businessman, said: “Hamon was courageous to talk about the ‘universal income’ [monthly handouts]. It’s a measure that would touch millions of people’s lives and make a big difference for the young.” Guillaume Eustache, 37, a music industry executive, said: “I chose Valls because he’s the only Socialist with a chance of winning. He’s a statesman and a pragmatist, and he’s not making unrealistic promises.” Some of those who cast ballots in the Socialist primary said they were considering voting for Mr Macron in the presidential election. Hadrien Labeyrie, 37, a Paris engineer, who voted for Mr Valls “because of his experience and pragmatism”, said: “I’m waiting to see whether Mr Macron goes to the Right or the Left.” Reflecting the collapse of the mainstream Left across Europe and the rise of far-Right Eurosceptic populist parties, the first round of the Socialist primary kindled little public enthusiasm. The low turnout of less than two million was a blow for the party. More than double that number took part in the centre-Right primary in November, and it was also significantly lower than the turnout in the previous Socialist primary in 2011. Regardless of their candidate, the Socialists are predicted to take only about 10 per cent of the vote in the presidential election. Mr Macron, who served as economy minister under Mr Hollande but has never been a Socialist Party member, proposes to reform France’s ailing economy, but argues that painful public sector cuts planned by Mr Fillon are unnecessary. He has deliberately avoided revealing details of his policies but promises to unveil his manifesto in the coming weeks. Mr Macron has won over several Socialist heavyweights, including Gérard Collomb, a senator and the mayor of Lyon, who said: “France badly needs his daring and his energy to start making progress again.” Ségolène Royal, the mother of Mr Hollande’s four children and his environment minister, has hinted that she may back Mr Macron.
He has also attracted the support of prominent Right-wingers such as Alain Minc, who was a close advisor to the former conservative president, Nicolas Sarkozy.
/* * << * wormhole * == * Copyright (C) 2016 - 2017 EDP * == * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * >> */ import React from 'react' import PropTypes from 'prop-types' import Helmet from 'react-helmet' import Instance from '../../containers/Instance' import DataBase from '../../containers/DataBase' export class DataSystem extends React.Component { render () { const { router } = this.props return ( <div> <Helmet title="Data System" /> <Instance router={router} /> <DataBase router={router} /> </div> ) } } DataSystem.propTypes = { router: PropTypes.any } export default DataSystem
In-N-Out Burger was founded in 1948 in Baldwin Park, California, by Harry and Esther Snyder as the first drive-thru hamburger stand. In-N-Out is a family-owned company with locations in three states (California, Arizona and Nevada) and is one of the favorite hangouts for Southern Californians. Interestingly, In-N-Out was slow to start as a chain and didn't expand much under Harry Snyder. However, following his death in 1976 his son Rich became CEO at 24, and he expanded the chain from 18 stores to 93 by the time of his own death in 1993. It was also during this time that Rich, working with his brother Guy, started what many people can see driving on Interstate 10: In-N-Out University, where managers are trained. Located in the same area is In-N-Out's headquarters, where a commissary was built to help the company keep direct control of all stores. The company also sells merchandise through two stores (Baldwin Park and Las Vegas, Nevada), as well as online and by mail order; one can pick up catalogs at any restaurant. Jack in the Box says, "We don't make it until you order it"; I suppose In-N-Out takes that literally. They proudly note that they always use fresh meat, and one can see employees peeling and slicing potatoes for the french fries. It's a bit of a wait, and you have to take a number when you order, but it's quite good, I assure you (although if you order a shake, expect it to be quite thick). They also cater special events; often at my brother's school's open house, they'll have one of their cookout trailers there selling food. The company has also come under fire, although nothing major has apparently occurred, for putting passages from the Holy Bible in the crimp on the bottom of their cups. This offends quite a few people. Although I personally dislike religion displayed anywhere but at a place of worship or on something personal, it is a private company and they have the right to do what they want.
Besides, it hasn't slowed down business; who has time to look under their cup to see what verse they got with lunch?

Menu
Shakes: Chocolate, Strawberry, Vanilla
Burgers: Double-Double, Hamburger, Cheeseburger
And of course... fries and soda.

Store Hours
Sunday through Thursday: 10:30 a.m.-1:00 a.m.
Friday and Saturday: 10:30 a.m.-1:30 a.m.
1. Field The invention, in certain embodiments, relates to communication systems which may employ transmission of feedback information, and responses thereto, on a wireless connection. 2. Description of the Related Art Wireless data traffic is projected to grow significantly. However, innovations in cellular air-interface design, culminating in the third generation partnership project (3GPP) long term evolution (LTE), provide spectral efficiency performance that may not be able to improve at a corresponding rate. To meet the growing traffic demand, other approaches may be used, such as increasing the cellular capacity per square meter by either shrinking cell sizes or acquiring additional spectrum. For example, smaller cells may be implemented through heterogeneous networks of picos and macros for carrier frequencies below 6 GHz, for example an LTE heterogeneous network (HetNet). Similarly, 500 MHz or more of spectrum is being made available below 5 GHz, which may help to meet the growing demand. This added spectrum, however, may also eventually be outpaced by the demand. Moreover, the available spectrum below 6 GHz is limited and there may be practical limits to how small cells can shrink. Thus, resources in frequencies above 6 GHz may be used to meet this demand for future (for example, beyond 4G) cellular systems. Unlike traditional cellular systems, electromagnetic (EM) waves in, for example, the millimeter bands (frequencies above 6 GHz) do not benefit from diffraction and dispersion, making it difficult for them to propagate around obstacles. Moreover, such millimeter waves also suffer higher penetration loss in some materials. For example, the penetration loss of a concrete block is 10 times higher at millimeter bands as compared to microwave bands. As a result, millimeter transmissions may be much more likely to encounter shadowing effects than microwave transmissions.
Millimeter transmissions may also have less favorable link budgets due to lower power amplifier (PA) output powers and greater pathloss at these higher frequencies. As a result, to provide sufficient coverage from each access point, for example a 100 meter radius, narrow directional antenna array beams may be used both at the access point (AP) and the user equipment (UE). The smaller wavelengths may allow for fabrication of much larger antenna arrays in much smaller areas than is typical at microwave bands. For example, arrays with as many as 8 to 32 elements providing 18 to 30 dB in link budget gain may be implemented. Reliance on these array gains can complicate link acquisition and maintenance. Traditional cellular systems, such as 3G LTE, cannot simply be upbanded and expected to function in the millimeter bands. For example, because of the large number of antennas, the beam created by the array may be fairly narrow. With narrow beams, the user equipment may lose radio connection to the access point in case of blockage or misalignment. Blockage may be due to, for example, obstruction between the user equipment and access point by objects, such as humans, trees, cars, or the like. Misalignment of the antenna array beams may be caused by wind-induced vibrations at the access point, or by changes in user orientation, for example, due to how a device is held. Current cellular radio standards such as 3GPP LTE provide solutions for frequency bands below 6 GHz, which have well-known propagation characteristics. An LTE system which is simply upbanded to 70 GHz would not provide adequate coverage or economy. LTE relies on radio wave diffraction around obstacles and therefore an LTE millimeter wave system would not achieve a reasonable coverage reliability target, for example 90% coverage reliability. Similarly, the power efficiency of semiconductor devices is reduced at frequencies above 10 GHz.
LTE, which employs OFDM modulation, conventionally requires a significant power amplifier (PA) backoff, making the solution less desirable at 70 GHz. Local area solutions such as IEEE 802.11ad and IEEE 802.15.3c exist and define air interfaces for local area access. The solutions are typically targeted at indoor deployments or at personal area networks. For example, 10 meter ranges are typically cited as a solution. For future (e.g., beyond 4G (B4G)) cellular systems, one access architecture for deployment of cellular radio equipment may employ millimeter wave (mmWave) radio spectrum. Example requirements for B4G include a peak data rate of 20-30 Gbps and latency of less than 1 ms. To allow this, several features may be required: very high bandwidth, very small subframe size, near line-of-sight with rapid site selection and collaboration, and narrow beam-width. There are two main issues related to latency, as discussed below. In Rel-10 LTE, user equipment category 8 is capable of supporting a maximum transport block size (TBS) of 2,998,560 bits per 1 ms transmission time interval (TTI), which is equivalent to 3 Gbps (assuming 5 carriers are aggregated using 8×8 multiple input multiple output (MIMO)). The processing time requirement for this user equipment may be 3 ms. In B4G, if the subframe size is reduced to 0.1 ms and the peak data rate is 30 Gbps, then the user equipment may be required to process the same maximum TBS as user equipment category 8 but within a 0.1 ms subframe. Using current technology, the processing time for this user equipment will remain 3 ms, which is significantly longer than the subframe length and therefore likely to introduce large latency when retransmission is required. Even with significant improvement in user equipment processing capability, the reduced processing time (e.g. 1 ms) may still be significantly larger than the subframe size and can lead to unnecessarily large latency.
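The throughput arithmetic above is easy to check. A minimal sketch (illustrative only; the numbers are the ones quoted in the text):

```python
def peak_rate_gbps(tbs_bits: float, tti_s: float) -> float:
    """Peak data rate implied by delivering one transport block per TTI."""
    return tbs_bits / tti_s / 1e9

# Rel-10 LTE UE category 8: max TBS of 2,998,560 bits per 1 ms TTI.
lte_rate = peak_rate_gbps(2_998_560, 1e-3)
print(round(lte_rate, 2))  # roughly 3.0 Gbps, as stated above

# B4G example: a 30 Gbps peak rate with a 0.1 ms subframe implies a
# transport block of comparable size handled in a tenth of the time.
b4g_tbs = 30e9 * 0.1e-3  # bits per 0.1 ms subframe
print(int(b4g_tbs))
```

Because the B4G transport block is roughly the same size as the LTE category 8 one, the 3 ms processing budget quoted above would span 30 of the shorter 0.1 ms subframes, which is the latency problem the text goes on to describe.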
Secondly, with mmWave and narrow beams, the user equipment may lose connection to the transmission point, for example due to blockage or misalignment of the antenna array beams caused by human movement or by wind-induced vibrations at the access point, and may require rapid site selection. However, the transmission/reception point may require feedback from the user equipment to determine that the connection has been lost. Traditionally, the eNB may wait for HARQ feedback from the user equipment to determine that the connection has been lost. However, this can take a long time due to the user equipment processing requirements discussed above.
Q: String was not recognized

I have 1 DataGridView with 3 columns:

Column1 = StartLoading
Column2 = FinishLoading
Column3 = TotalLoadingHours

If the user enters the date and time in the StartLoading and FinishLoading columns, the total hours from start to finish will show in the TotalLoadingHours column. My problem is, if the user enters the date and time ONLY in the StartLoading column, there is always an error showing up: "String was not recognized as a valid DateTime". Appreciate your help. Below is my code.

    Dim StartLoading As New System.DateTime
    Dim FinishLoading As New System.DateTime
    For x As Integer = 0 To LoadingStatusDataGridview.Rows.Count - 2
        Dim StartLoadingvalue As String = LoadingStatusDataGridview.Rows(x).Cells(1).Value.ToString()
        StartLoading = DateTime.Parse(StartLoadingvalue)
        Dim FinishLoadingvalue As String = LoadingStatusDataGridview.Rows(x).Cells(2).Value.ToString()
        FinishLoading = DateTime.Parse(FinishLoadingvalue)
        Dim TotalLoadingHours1 As TimeSpan = (FinishLoading - StartLoading)
        String.Format("{0:00}:{1:00}:{2:00}", TotalLoadingHours1.TotalHours, TotalLoadingHours1.Minutes, TotalLoadingHours1.Seconds)
        Dim TotalLoadingHours2 As TimeSpan = (DateTime.Now - StartLoading)
        String.Format("{0:00}:{1:00}:{2:00}", TotalLoadingHours2.TotalHours, TotalLoadingHours2.Minutes, TotalLoadingHours2.Seconds)
        If IsDBNull(LoadingStatusDataGridview.Rows(x).Cells(1).Value()) OrElse LoadingStatusDataGridview.Rows(x).Cells(1).Value() Is Nothing Then
            LoadingStatusDataGridview.Rows(x).Cells(3).Value() = Nothing
        ElseIf IsDBNull(LoadingStatusDataGridview.Rows(x).Cells(2).Value()) OrElse LoadingStatusDataGridview.Rows(x).Cells(2).Value() Is Nothing Then
            LoadingStatusDataGridview.Rows(x).Cells(3).Value() = TotalLoadingHours2
        Else
            LoadingStatusDataGridview.Rows(x).Cells(3).Value() = TotalLoadingHours1
        End If
    Next
End Sub

A: It's working now using the code below.
    Dim startloading As New System.DateTime
    Dim finishloading As New System.DateTime
    For x As Integer = 0 To LoadingStatusDataGridview.Rows.Count - 2
        Dim startCell As Object = LoadingStatusDataGridview.Rows(x).Cells(1).Value
        Dim finishCell As Object = LoadingStatusDataGridview.Rows(x).Cells(2).Value
        ' A cell counts as filled only if it is neither DBNull, Nothing nor blank.
        ' Checking before calling DateTime.Parse is what avoids the
        ' "String was not recognized as a valid DateTime" exception.
        Dim hasStart As Boolean = Not IsDBNull(startCell) AndAlso startCell IsNot Nothing AndAlso startCell.ToString().Trim() <> ""
        Dim hasFinish As Boolean = Not IsDBNull(finishCell) AndAlso finishCell IsNot Nothing AndAlso finishCell.ToString().Trim() <> ""
        If Not hasStart Then
            ' No start time: nothing to compute.
            LoadingStatusDataGridview.Rows(x).Cells(3).Value = Nothing
        ElseIf Not hasFinish Then
            ' Finish time still empty: show the elapsed time so far.
            startloading = DateTime.Parse(startCell.ToString())
            Dim LoadingHours2 As TimeSpan = DateTime.Now - startloading
            LoadingStatusDataGridview.Rows(x).Cells(3).Value = String.Format("{0:00}:{1:00}:{2:00}", Math.Floor(LoadingHours2.TotalHours), LoadingHours2.Minutes, LoadingHours2.Seconds)
        Else
            ' Both times present: show the total loading duration.
            startloading = DateTime.Parse(startCell.ToString())
            finishloading = DateTime.Parse(finishCell.ToString())
            Dim LoadingHours1 As TimeSpan = finishloading - startloading
            LoadingStatusDataGridview.Rows(x).Cells(3).Value = String.Format("{0:00}:{1:00}:{2:00}", Math.Floor(LoadingHours1.TotalHours), LoadingHours1.Minutes, LoadingHours1.Seconds)
        End If
    Next
One of the new things that God is doing at Community is challenging us to be an even more generous church! And the biggest shift is an understanding that generosity is not something God wants from us; it’s something God wants for us. During the first two weekends in December, we called an Audible and shared a bigger vision of what we believe God is calling us to as a church. This vision includes a real strategy for accomplishing the following:

Eradicate Poverty
Help 1 Billion People Find Their Way Back to God

A vision that big requires commitment, sacrifice and generosity beyond what we have ever experienced. What are the barriers we need to overcome to accomplish this vision? We asked you that question and we heard your answers:

Debt
Budgeting
Generosity: Obligation vs. Opportunity

How can we begin to help each other move beyond these barriers to advance this vision? On February 2nd we will host the CCC Generosity Conference at the Yellow Box from 9am-1pm. Come join us and hear from James MacDonald, Senior Pastor at Harvest Bible Chapel, and Patrick Johnson, Vice President for Church Partners at the National Christian Foundation. The day is packed with powerful worship and practical breakout sessions (addressing the three things listed above!) as well as refreshments and a complimentary delicious lunch! Additionally, I will be delivering a message on Generosity With a Purpose. You won’t want to miss it! Complimentary lunch and programmed childcare with fun activities will be provided, so you must register to reserve your lunch and childcare spot. If you want to get more info or register, click HERE.
The long-term goal of this research is to provide greater insight into the role of CNS serotonin in human behavior. In the past few years, compelling evidence has been adduced to implicate serotonin in the mechanisms of action of potent hallucinogenic drugs, certain aspects of sleep, and in a variety of clinical psycho- and neuro-pathological disorders. The present research program seeks to clarify the functional role of CNS serotonin through animal experiments which analyze the basic component of the system, serotonin-containing neurons of the raphe nuclei, in freely moving cats.
--- abstract: 'We discuss the known evidence for the conjecture that the Dolbeault cohomology of nilmanifolds with left-invariant complex structure can be computed as Lie-algebra cohomology and also mention some applications.' address: | Dr. Sönke Rollenske\ Mathematisches Institut\ Rheinische Friedrich-Wilhelms-Universität Bonn\ Endenicher Allee 60\ 53115 Bonn, Germany author: - Sönke Rollenske title: 'Dolbeault cohomology of nilmanifolds with left-invariant complex structure' --- Introduction ============ Dolbeault cohomology is one of the most fundamental holomorphic invariants of a complex manifold $X$ but in general it is quite hard to compute. If $X$ is Kähler then this amounts to describing the decomposition of the de Rham cohomology $$H^k_{dR}(X,\IC)=\bigoplus_{p+q=k} H^{p,q}(X) =\bigoplus_{p+q=k}H^q(X, \Omega^p_X)$$ but in general there is only a spectral sequence connecting these invariants. One case where at least de Rham cohomology is easily computable is the case of nilmanifolds, that is, compact quotients of real nilpotent Lie groups. If $M=\Gamma\backslash G$ is a nilmanifold and ${\ensuremath{\mathfrak g}}$ is the associated nilpotent Lie algebra, Nomizu proved that we have a natural isomorphism $$H^*({\ensuremath{\mathfrak g}}, \IR) \isom H^*_{\mathrm{dR}}(M, \IR)$$ where the left hand side is the Lie-algebra cohomology of ${\ensuremath{\mathfrak g}}$. In other words, computing the cohomology of $M$ has become a matter of linear algebra. There is a natural way to endow an even-dimensional nilmanifold with an almost complex structure: choose any endomorphism $J:{\ensuremath{\mathfrak g}}\to {\ensuremath{\mathfrak g}}$ with $J^2=-\id$ and extend it to an endomorphism of $TG$, also denoted by $J$, by left-multiplication. Then $J$ is invariant under the action of $\Gamma$ and descends to an almost complex structure on $M$.
If $J$ satisfies the integrability condition $$\label{nijenhuis} [x,y]-[Jx,Jy]+J[Jx,y]+J[x,Jy]=0 \text{ for all } x,y \in {\ensuremath{\mathfrak g}}$$ then, by Newlander–Nirenberg [@Kob-NumII p.145], it makes $M_J=(M,J)$ into a complex manifold. In this survey we want to discuss the conjecture *The Dolbeault cohomology of a nilmanifold with left-invariant complex structure $M_J$ can be computed using only left-invariant forms.* This was stated as a question in [@cfgu00; @con-fin01] but we decided to call it Conjecture in the hope that it should motivate other people to come up with a proof or a counterexample. A more precise formulation in terms of Lie-algebra cohomology is given in Section \[reminder\]. Before concentrating on this topic we would like to indicate why nilmanifolds have attracted much interest over the last years. Their main feature is that the construction and study of left-invariant geometric structures on them usually boils down to finite dimensional linear algebra. On the other hand, the structure is sufficiently flexible to allow the construction of many exotic examples. We only want to mention the three most prominent in complex geometry: - If $G$ is abelian then $M_J$ is a complex torus. - The Iwasawa manifold $X=\Gamma\backslash G$ is obtained as the quotient of the complex Lie group $$G=\left\{ \begin{pmatrix}1 & z_1 &z_3\\ 0&1 &z_2\\ 0&0&1\end{pmatrix}\right\}\subset \mathrm{Gl}(3,\IC)$$ by the lattice $\Gamma=G\cap \mathrm{Gl}(3,\IZ[{{i}}])$ and as such is complex parallelisable. Nakamura studied its small deformations and thus showed that a small deformation of a complex parallelisable manifold need not be complex parallelisable [@nakamura75]. Observe that $X$ cannot be Kähler since $dz_3-z_2dz_1$ is a holomorphic 1-form that is not closed. 
- Kodaira surfaces, also known as Kodaira-Thurston manifolds, had appeared in Kodaira’s classification of compact complex surfaces as non-trivial principal bundles of elliptic curves over elliptic curves [@kodaira66] and were later considered independently by Thurston as the first example of a manifold that admits both a symplectic and a complex structure but no Kähler structure. In our context it can be described as follows: let $$G=\left\{ \begin{pmatrix}1 & \bar z_1 &z_2\\ 0&1 & z_1\\ 0&0&1\end{pmatrix}\mid z_1, z_2\in \IC\right\}\subset \mathrm{Gl}(3,\IC)$$ and $\Gamma=G\cap \mathrm{Gl}(3,\IZ[{{i}}])$. Then $G\isom \IC^2$ with coordinates $z_1, z_2$ and the action of $\Gamma$ on the left is holomorphic; the quotient is a compact complex manifold. If we set $\alpha= dz_1\wedge(d\bar z_2- z_1 d\bar z_1)$ then $\alpha+\bar \alpha$ is a left-invariant symplectic form on $G$ and thus descends to the quotient. In fact, the first example is the only nilmanifold that can admit a Kähler structure [@ben-gor88], so none of the familiar techniques available for Kähler manifolds will be useful in our case. Some more applications in complex geometry will be given in Section \[apps\]. Nilmanifolds also play a role in hermitian geometry [@ags01; @bdv09; @lauret06], riemannian geometry [@gromov78; @buser-karcher81], ergodic theory [@host-kra05], arithmetic combinatorics [@green-tao06], and theoretical physics [@MR2542937; @gmpt07]. In order to discuss the above conjecture on Dolbeault cohomology we start by sketching the proof of Nomizu’s theorem because some of the ideas carry over to the holomorphic setting. Then we recall the necessary details on Dolbeault cohomology to give a precise statement of the conjecture. It turns out that we are in a good position to prove the conjecture whenever we can inductively decompose the nilmanifold with left-invariant complex structure into simpler pieces.
This is due to Console and Fino [@con-fin01], generalising previous results of Cordero, Fernández, Gray and Ugarte [@cfgu00]. Section \[new\] contains the only new result in this article. We prove that the conjecture always holds true if we pass to a suitable quotient of the nilmanifold with left-invariant complex structure and also discuss some possible approaches to attack the general case. Notations --------- Throughout the paper $G$ will be a simply connected nilpotent real Lie-group with Lie-algebra ${\ensuremath{\mathfrak g}}$. Every nilpotent Lie group can be realised as a subgroup of the group of upper triangular matrices with 1’s on the diagonal. We will always assume that $G$ contains a lattice $\Gamma$ thus giving rise to a (compact) nilmanifold $M=\Gamma\backslash G$. Elements in ${\ensuremath{\mathfrak g}}$ will usually be interpreted as left-invariant vector fields on $G$ or on $M$. We restrict our attention to those complex structures on $M$ that are induced by an integrable left-invariant complex structure on $G$ and are thus uniquely determined by an (integrable) complex structure $J:{\ensuremath{\mathfrak g}}\to {\ensuremath{\mathfrak g}}$. The resulting complex manifold is denoted $M_J$. Note that even on a real torus of even dimension at least 6 there are many complex structures that do not arise in this way [@catanese02]. The group $G$ is determined up to isomorphism by the fundamental group of $M$ [@VinGorbShvart Corollary 2.8, p.45] and by abuse of notation we sometimes call ${\ensuremath{\mathfrak g}}$ the Lie-algebra of $M$. Real nilmanifolds and Nomizu’s result on de Rham cohomology =========================================================== The aim of this section is to prove Nomizu’s theorem. \[nomizu\] Let $M$ be a compact nilmanifold. 
Then the inclusion of left-invariant differential forms in the de Rham complex $$\Lambda^\bullet {\ensuremath{\mathfrak g}}^*\into \ka^\bullet(M)$$ induces an isomorphism between the Lie-algebra cohomology of ${\ensuremath{\mathfrak g}}$ and the de Rham cohomology of $M$, $$H^*({\ensuremath{\mathfrak g}}, \IR) \isom H^*_{\mathrm{dR}}(M, \IR).$$ Since some of the main results on Dolbeault cohomology discussed in the next section rely on similar ideas we will examine the proof in some detail: at its heart lies an inductive argument. Let $M=\Gamma\backslash G$ be a real nilmanifold with associated Lie algebra ${\ensuremath{\mathfrak g}}$ and let $\kz G$ be the centre of $G$. By [@Cor-Green p. 208], $\kz\Gamma=\Gamma\cap \kz G$ is again a lattice and the projection $G\to G/\kz G$ descends to a fibration $M\to M'$. The fibres are real tori $T=\kz G/\kz\Gamma$. Since elements in $\kz G$ commute with elements in $\Gamma$ their action descends to the quotient and $M\to M'$ is a principal $T$-bundle. To iterate this process we recall the following definition. \[ZgCg\] For a Lie-algebra ${\ensuremath{\mathfrak g}}$ we call $$\kz^0{\ensuremath{\mathfrak g}}:= 0, \qquad \kz^{i+1} {\ensuremath{\mathfrak g}}:= \{ x\in {\ensuremath{\mathfrak g}}\mid [x,{\ensuremath{\mathfrak g}}]\subset \kz^{i}{\ensuremath{\mathfrak g}}\}$$ the ascending central series and $$\kc^0{\ensuremath{\mathfrak g}}:={\ensuremath{\mathfrak g}}, \qquad \kc^{i+1}{\ensuremath{\mathfrak g}}:= [\kc^{i}{\ensuremath{\mathfrak g}}, {\ensuremath{\mathfrak g}}]$$ the descending central series of ${\ensuremath{\mathfrak g}}$. The Lie-algebra is called nilpotent if there is a $\nu\in \IN$ such that $\kz^\nu{\ensuremath{\mathfrak g}}={\ensuremath{\mathfrak g}}$, or equivalently $\kc^\nu{\ensuremath{\mathfrak g}}=0$. The minimal such $\nu=\nu({\ensuremath{\mathfrak g}})$ is called the index of nilpotency or step-length of ${\ensuremath{\mathfrak g}}$. 
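For a concrete example of these filtrations (the standard $3$-dimensional Heisenberg algebra, used here only as an illustration and not taken from the text above), let ${\mathfrak h}_3=\langle e_1,e_2,e_3\rangle$ with $[e_1,e_2]=e_3$ the only non-zero bracket. Then $$\kz^1{\mathfrak h}_3=\langle e_3\rangle,\quad \kz^2{\mathfrak h}_3={\mathfrak h}_3, \qquad \kc^1{\mathfrak h}_3=\langle e_3\rangle,\quad \kc^2{\mathfrak h}_3=0,$$ so ${\mathfrak h}_3$ is nilpotent with index of nilpotency $\nu=2$; the associated compact nilmanifold is the Heisenberg manifold, a circle bundle over a $2$-torus.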
The same definition can be made on the level of the Lie-group $G$ and the resulting sub-algebras and subgroups correspond to each other under the exponential map. Proceeding inductively, we can use the first filtration on ${\ensuremath{\mathfrak g}}$ to decompose $M$ geometrically; the second one induces a similar decomposition since $\kc^i{\ensuremath{\mathfrak g}}\subset \kz^{\nu-i}{\ensuremath{\mathfrak g}}$. More precisely, if we denote by $T_i$ the torus obtained as a quotient of $\kz^i G /\kz^{i+1} G$ by $\kz^i \Gamma /\kz^{i+1} \Gamma$ then there is a tower $$\label{tower} {\xymatrix{ T_1 \ar@{^(->}[r] & M_1 \ar[d]^{\pi_1}\\ T_2 \ar@{^(->}[r] & M_2\ar[d]^{\pi_2}\\ &\vdots\ar[d]\\ T_{\nu-1}\ar@{^(->}[r] & M_{\nu-1}\ar[d]^{\pi_{\nu-1}}\\ & M_\nu} }$$ and each $\pi_i:M_i\to M_{i+1}$ is a $T_i$-principal bundle. This geometric description is crucial in the proof of Nomizu’s Theorem. The underlying idea is quite simple: we perform induction over the index of nilpotency $\nu$. If $\nu=1$, i.e., ${\ensuremath{\mathfrak g}}$ is abelian, then $M$ is a torus and the result is well known. For the induction step, we consider $M$ as a principal torus bundle over a nilmanifold $M'$ with lower nilpotency index. Then we have to combine our knowledge of the cohomology of the fibre and of the base to describe the cohomology of the total space $M$. This is achieved by means of two spectral sequences, the Leray-Serre spectral sequence and the Serre-Hochschild spectral sequence. 
Let us work this out a bit more in detail starting on the geometric side: let $\ka^k(M)$ be the space of smooth differential $k$-forms on $M$ and consider the de Rham complex $$0\to \ka^0(M)\overset{d}{\longrightarrow}\ka^1(M)\overset{d}{\longrightarrow}\dots \overset{d}{\longrightarrow}\ka^{n}(M)\to 0.$$ The principal bundle $\pi: M\to M'$ with fibre $T$ induces an inclusion $\pi^*\ka^1(M')\into \ka^1(M)$ and thus a filtration of $\ka^k(M)$ whose graded pieces are generated by forms of the type $(\pi^*\alpha)\wedge \beta$ where $\beta$ is a differential form along the fibres. Decomposing also the differential and starting with the vertical component we have constructed a version of the Leray-Serre spectral sequence $$E_2^{p,q}=H^p(M', H^q(T, \IR))\implies H^{p+q}_{dR}(M).$$ In the general case, the $E_2$-term has to be interpreted as cohomology with values in a local system, but since we have a principal bundle with connected structure group the monodromy action on $H^q(T, \IR)$ is trivial and we have $E_2^{p,q}=H^p_{dR}(M')\tensor H^q_{dR}(T)$. Now we repeat the construction on the level of left-invariant forms. Consider $\Lambda^\bullet {\ensuremath{\mathfrak g}}^*$ as a subcomplex of the de Rham complex $(\ka^\bullet, d)$.
The differential of a $k$-form $\alpha$ can be defined entirely in terms of the Lie-bracket and the Lie-derivative as $$\begin{gathered} \label{ch-diff} (d_k\alpha)(x_1, \dots , x_{k+1}):=\sum_{i=1}^{k+1} (-1)^{i+1} x_i (\alpha(x_1, \dots ,\hat x_i, \dots , x_{k+1}))\\ +\sum_{1\leq i <j\leq k+1} (-1)^{i+j} \alpha([x_i,x_j], x_1, \dots, \hat x_i, \dots, \hat x_j,\dots , x_{k+1}).\end{gathered}$$ For left-invariant $\alpha\in \Lambda^k{\ensuremath{\mathfrak g}}^*$ and $x_i\in {\ensuremath{\mathfrak g}}$ it reduces to $$(d_k\alpha)(x_1, \dots , x_{k+1})=\sum_{1\leq i <j\leq k+1} (-1)^{i+j} \alpha([x_i,x_j], x_1, \dots, \hat x_i, \dots, \hat x_j,\dots , x_{k+1})$$ and the complex $(\Lambda^\bullet {\ensuremath{\mathfrak g}}^*, d)$ is defined purely algebraically. It is known as the Chevalley complex [@che-eil48] and computes the Lie-algebra cohomology of ${\ensuremath{\mathfrak g}}$ (see also [@Weibel Chapter 7]). If the fibration $\pi:M\to M'$ corresponds to the short exact sequence $$0\to {\mathfrak h}\to {\ensuremath{\mathfrak g}}\to {\ensuremath{\mathfrak g}}/{\mathfrak h}\to 0$$ where ${\mathfrak h}=\kz{\ensuremath{\mathfrak g}}$ as explained above then the dual sequence induces a filtration on the exterior powers $\Lambda^k{\ensuremath{\mathfrak g}}^*$ and we can organise the graded pieces into a spectral sequence, the Hochschild-Serre spectral sequence (see [@Weibel Section 7.5]), with $$\begin{gathered} E_0^{p,q}=\Lambda^p({\ensuremath{\mathfrak g}}/{\mathfrak h})^*\tensor \Lambda^q{\mathfrak h}^*\\ E_2^{p,q}=H^p({\ensuremath{\mathfrak g}}/{\mathfrak h}, H^q({\mathfrak h}))=H^p({\ensuremath{\mathfrak g}}/{\mathfrak h})\tensor H^q({\mathfrak h})\implies H^{p+q}({\ensuremath{\mathfrak g}}, \IR).\end{gathered}$$ The second description of the $E_2$-term holds in our setting since ${\mathfrak h}$ is contained in the centre of ${\ensuremath{\mathfrak g}}$, which corresponds to $\pi$ being a principal bundle.
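To illustrate the reduced formula (a standard textbook computation, added here only as an example), take the $3$-dimensional Heisenberg algebra ${\mathfrak h}_3=\langle e_1,e_2,e_3\rangle$ with only non-zero bracket $[e_1,e_2]=e_3$ and dual basis $e^1,e^2,e^3$. For a left-invariant $1$-form the formula gives $(d_1\alpha)(x_1,x_2)=-\alpha([x_1,x_2])$, hence $$de^1=de^2=0, \qquad de^3=-e^1\wedge e^2,$$ so $H^1({\mathfrak h}_3, \IR)=\langle e^1, e^2\rangle$ is $2$-dimensional, in accordance with the first Betti number of the Heisenberg nilmanifold.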
Now we deduce a proof of Nomizu’s theorem: we know the result for the torus and then proceed by induction on the nilpotency index. The inclusion $(\Lambda^\bullet{\ensuremath{\mathfrak g}}^*,d)\into (\ka^\bullet(M), d)$ is compatible with the filtrations we introduced and thus we get an induced homomorphism of spectral sequences. At the $E_2$ level this is $$H^p({\ensuremath{\mathfrak g}}/{\mathfrak h})\tensor H^q({\mathfrak h})\to H^p_{dR}(M')\tensor H^q_{dR}(T)$$ which is an isomorphism by induction hypothesis. Thus also in the limit we have the desired isomorphism $$H^*({\ensuremath{\mathfrak g}})\overset{\isom}{\longrightarrow}H^*_{dR}(M).$$ The statement we just proved extends to solvmanifolds, i.e., compact quotients of solvable groups, that satisfy the so-called Mostow condition [@mostow61]. The de Rham cohomology of more general solvmanifolds can be studied via an auxiliary construction due to Guan [@guan07] which was recently reconsidered by Console and Fino [@con-fin09]. Left-invariant complex structures and Dolbeault cohomology ========================================================== We start this section by recalling the definition of Dolbeault cohomology and giving the precise statement of the conjecture. Then we discuss to what extent the proof of Nomizu’s result, discussed in the preceding section, carries over to the holomorphic setting. After mentioning the openness result of Console and Fino we will also give some new results and discuss directions of future research. Reminder on Dolbeault cohomology {#reminder} -------------------------------- Recall that an (integrable) complex structure on a differentiable manifold $M$ is a vector bundle endomorphism $J$ of the tangent bundle which satisfies $J^2=-\id$ and the integrability condition . The endomorphism $J$ induces a decomposition of the complexified tangent bundle by letting pointwise ${{{T}^{1,0}}}M\subset T_\IC M=T M\tensor\IC$ be the ${{i}}$-eigenspace of $J$. 
Then the $-{{i}}$-eigenspace is ${{{T}^{0,1}}}M=\overline{{{{T}^{1,0}}}M}$. Note that ${{{T}^{1,0}}}M $ is naturally isomorphic to $(TM,J)$ as a complex vector bundle via the projection, and the integrability condition can be formulated as $[{{{T}^{1,0}}}M, {{{T}^{1,0}}}M]\subset {{{T}^{1,0}}}M$. The bundle of differential $k$-forms decomposes $$\Lambda^kT^*_\IC M=\bigoplus_{p+q=k}\Lambda^p {{{T^*}^{1,0}}}M\tensor \Lambda^q{{{T^*}^{0,1}}}M=\bigoplus_{p+q=k}\Lambda^{p,q}T^*M,$$ and we denote by $\ka^{p,q}(M)$ the $\kc^\infty$-sections of the bundle $\Lambda^{p,q}T^*M$, i.e., the global differential forms of type $(p,q)$. The integrability condition is equivalent to the decomposition of the differential $d=\del+\delbar$ and for all $p$ we get the Dolbeault complex $$(\ka^{p,\bullet}(M_J), \delbar): 0\to \ka^{p,0}(M)\overset\delbar\longrightarrow \ka^{p,1}(M)\overset\delbar\longrightarrow \dots$$ The Dolbeault cohomology groups $H^{p,q}(M)=H^q(\ka^{p,\bullet}(M), \delbar)$ are one of the most fundamental holomorphic invariants of $M_J$; from another point of view, the Dolbeault complex computes the cohomology groups of the sheaf $\Omega^p_{M_J}$ of holomorphic $p$-forms. In case $M$ is a nilmanifold and $J$ is left-invariant all of the above can be considered at the level of left-invariant forms. 
Decomposing ${\ensuremath{\mathfrak g}}^*_\IC={{{{\ensuremath{\mathfrak g}}^*}^{1,0}}}\oplus {{{{\ensuremath{\mathfrak g}}^*}^{0,1}}}$ and setting $\Lambda^{p,q}{\ensuremath{\mathfrak g}}^*=\Lambda^p{{{{\ensuremath{\mathfrak g}}^*}^{1,0}}}\tensor \Lambda^q{{{{\ensuremath{\mathfrak g}}^*}^{0,1}}}$ we get subcomplexes $$\label{inclusion} (\Lambda^{p,\bullet}{\ensuremath{\mathfrak g}}^*, \delbar)\into (\ka^{p,\bullet}(M_J), \delbar).$$ In fact, the left hand side has a purely algebraic interpretation worked out in [@rollenske09a]: ${{{{\ensuremath{\mathfrak g}}}^{0,1}}}$ is a Lie-subalgebra of ${\ensuremath{\mathfrak g}}_\IC$ and the adjoint action followed by the projection to the $(1,0)$-part makes ${{{{\ensuremath{\mathfrak g}}}^{1,0}}}$ into an ${{{{\ensuremath{\mathfrak g}}}^{0,1}}}$-module. Then the complex $(\Lambda^{p,\bullet}{\ensuremath{\mathfrak g}}^*, \delbar)$ computes the Lie-algebra cohomology of ${{{{\ensuremath{\mathfrak g}}}^{0,1}}}$ with values in $\Lambda^p{{{{\ensuremath{\mathfrak g}}^*}^{1,0}}}$ and we call $$H^{p,q}({\ensuremath{\mathfrak g}}, J)=H^q({{{{\ensuremath{\mathfrak g}}}^{0,1}}}, \Lambda^p{{{{\ensuremath{\mathfrak g}}^*}^{1,0}}})=H^q(\Lambda^{p,\bullet}{\ensuremath{\mathfrak g}}^*, \delbar)$$ the Lie-algebra Dolbeault cohomology of $({\ensuremath{\mathfrak g}}, J)$. We can now formulate the analogue of Nomizu’s theorem for Dolbeault cohomology as a conjecture. \[conj\] Let $M_J$ be a nilmanifold with left-invariant complex structure. Then the map $$\phi_J: H^{p,q}({\ensuremath{\mathfrak g}}, J)\to H^{p,q}(M_J)$$ induced by is an isomorphism. It is known that $\phi_J$ is always injective (see [@con-fin01] or [@rollenske09a]). We will accumulate evidence for the conjecture over the next sections and also explain which are the open cases. The inductive proof ------------------- In order to extend the idea of Nomizu’s proof to Dolbeault cohomology we need to have three ingredients: 1. 
Can we start the induction, i.e., can we express the Dolbeault cohomology of a complex torus as a suitable Lie-algebra cohomology? 2. Does the complex geometry of nilmanifolds allow us to proceed by induction? For example, is every nilmanifold with left-invariant complex structure a holomorphic principal bundle? 3. Are there spectral sequences that play the role of the Leray-Serre and Hochschild-Serre spectral sequence for (Lie-algebra) Dolbeault cohomology? It is well known that the first question has a positive answer (see e.g. [@Birkenhake-Lange p.15]). In our language, assume that ${\ensuremath{\mathfrak g}}$ is abelian and $J$ is a complex structure. Then the differential in the Lie-algebra Dolbeault complex $(\Lambda^{p, \bullet}{\ensuremath{\mathfrak g}}^*, \delbar)$ is trivial (being induced by the adjoint action) and thus $$H^{p,q}({\ensuremath{\mathfrak g}}, J)=\Lambda^{p,q}{\ensuremath{\mathfrak g}}^*=\Lambda^p{\ensuremath{\mathfrak g}}^*\tensor \Lambda^q\bar{\ensuremath{\mathfrak g}}^*=H^{p,q}(M_J).$$ Unfortunately, the answer to the second question is negative. We will discuss the geometry of nilmanifolds with left-invariant complex structure in Section \[geometry\] and see that nevertheless the inductive approach works in many important special cases. The positive answer to the third question, important for the induction step, has been worked out by Cordero, Fernández, Gray and Ugarte [@cfgu00] for principal holomorphic torus bundles and in greater generality by Console and Fino [@con-fin01]. The extra grading coming from the $(p,q)$-type of the differential forms makes the notation and the construction of the necessary spectral sequences more involved. For the usual Dolbeault cohomology of a holomorphic fibration the result goes back to Borel [@Hirzebruch Appendix II, Theorem 2.1]. 
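The abelian computation above is easy to sanity-check numerically: for a complex $n$-torus it gives $h^{p,q}=\binom{n}{p}\binom{n}{q}$, and the anti-diagonal sums recover the Betti numbers $\binom{2n}{k}$ of the real $2n$-torus by the Vandermonde identity. A quick plain-Python check (not part of the original text):

```python
from math import comb

def torus_hodge(n, p, q):
    # complex n-torus: dbar vanishes on invariant forms (g abelian), so
    # h^{p,q} = dim Lambda^p g* tensor Lambda^q gbar* = C(n,p) * C(n,q)
    return comb(n, p) * comb(n, q)

n = 3
betti = [sum(torus_hodge(n, p, k - p) for p in range(k + 1) if p <= n and k - p <= n)
         for k in range(2 * n + 1)]
# Hodge numbers sum to the Betti numbers of the real 6-torus
assert betti == [comb(2 * n, k) for k in range(2 * n + 1)]
print(betti)  # [1, 6, 15, 20, 15, 6, 1]
```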
\[indstep\] Let $M_J$ be a nilmanifold with left-invariant complex structure and let $\pi:M\to M'$ be a holomorphic fibration with typical fibre $F$ induced by a $\Gamma$-rational and $J$-invariant ideal ${\mathfrak h}\subset {\ensuremath{\mathfrak g}}$ (as explained in Section \[geometry\]). If for all $p,q$ we have $$H^{p,q}({\mathfrak h}, J\restr{\mathfrak h})\isom H^{p,q}(F)\text{ and }H^{p,q}({\ensuremath{\mathfrak g}}/{\mathfrak h}, J')\isom H^{p,q}(M'),$$ where $J'$ is the complex structure on ${\ensuremath{\mathfrak g}}/{\mathfrak h}$ induced by $J$, then also $$H^{p,q}({\ensuremath{\mathfrak g}}, J)\isom H^{p,q}(M).$$ Clearly, with the above proposition we can proceed inductively to compute the Dolbeault cohomology of iterated holomorphic principal bundles as we did in the real case. Unfortunately, considering principal holomorphic torus bundles is not enough, so we really need to decide when a nilmanifold with left-invariant complex structure admits a suitable fibration. ### When is a nilmanifold with left-invariant complex structure an iterated (principal) bundle? {#geometry} We have seen that we need to understand the geometry of nilmanifolds with left-invariant complex structure, in particular whether there are natural fibrations over nilmanifolds of smaller dimension. In general, the projections in the tower of (real) principal bundles will not be holomorphic, for example, the centre could be odd-dimensional. It would be convenient if we could detect fibrations of $M$ by studying only the Lie-algebra ${\ensuremath{\mathfrak g}}$. For the universal cover, i.e., the simply connected Lie group, this is easy: a fibration $G\to G'$ over another simply connected nilpotent Lie-group corresponds to a short exact sequence of Lie algebras $$0\to {\mathfrak h}\to {\ensuremath{\mathfrak g}}\to {\ensuremath{\mathfrak g}}'\to 0$$ or, in other words, to an ideal ${\mathfrak h}\subset {\ensuremath{\mathfrak g}}$.
Here we use that, by the Baker-Campbell-Hausdorff formula (see e.g. [@Knapp Section B.4]), the exponential map $\exp:{\ensuremath{\mathfrak g}}\to G$ is a diffeomorphism and hence every ideal induces a closed subgroup of $G$. If we look at a 2-dimensional torus $M=\IR^2/\IZ^2$ then every 1-dimensional subspace ${\mathfrak h}$ in the abelian Lie-algebra ${\ensuremath{\mathfrak g}}=\IR^2$ is an ideal. But there is some extra structure: a basis for the lattice (or, strictly speaking, the logarithm of this basis) generates a $\IQ$-vector space ${\ensuremath{\mathfrak g}}_\IQ\isom\IQ^2\subset {\ensuremath{\mathfrak g}}$ such that ${\ensuremath{\mathfrak g}}_\IQ\tensor\IR={\ensuremath{\mathfrak g}}$. Clearly, a 1-dimensional subgroup corresponding to ${\mathfrak h}\subset {\ensuremath{\mathfrak g}}$ closes to a circle in the quotient if and only if it has rational slope, i.e., if and only if ${\mathfrak h}\cap {\ensuremath{\mathfrak g}}_\IQ$ is a $\IQ$-vector space of dimension 1. The general case is captured in the following definition. Let ${\ensuremath{\mathfrak g}}$ be a nilpotent Lie-algebra. A *rational structure* for ${\ensuremath{\mathfrak g}}$ is a subalgebra ${\ensuremath{\mathfrak g}}_\IQ$ defined over $\IQ$ such that ${\ensuremath{\mathfrak g}}_\IQ\tensor \IR ={\ensuremath{\mathfrak g}}$. A subalgebra ${\mathfrak h}\subset {\ensuremath{\mathfrak g}}$ is said to be rational with respect to a given rational structure ${\ensuremath{\mathfrak g}}_\IQ$ if ${\mathfrak h}_\IQ:={\mathfrak h}\cap {\ensuremath{\mathfrak g}}_\IQ$ is a rational structure for ${\mathfrak h}$. If $\Gamma$ is a lattice in the corresponding simply connected Lie-group $G$ then its associated rational structure is given by the $\IQ$-span of $\log \Gamma$. A rational subspace with respect to this structure is called *$\Gamma$-rational*. \[Qstr\] One has to check that this is well defined, i.e., that the $\IQ$-span of $\log\Gamma$ gives a rational structure.
Indeed more is true: a nilpotent Lie-algebra admits a $\IQ$-structure if and only if the corresponding simply connected Lie-group contains a lattice [@Cor-Green Theorem 5.1.8]. This criterion makes it particularly simple to produce examples: given a nilpotent Lie-algebra ${\ensuremath{\mathfrak g}}$ with rational structure constants we know that there exists a lattice $\Gamma$ in the corresponding Lie-group $G$ and we get a compact nilmanifold $M=\Gamma\backslash G$. Since most properties of $M$ are encoded in ${\ensuremath{\mathfrak g}}$ there is usually no need to specify the lattice concretely. Coming back to the original problem we have [@Cor-Green Lemma 5.1.4, Theorem 5.1.11]: Let ${\mathfrak h}\subset {\ensuremath{\mathfrak g}}$ be an ideal. Then the fibration $G\to G/\exp{\mathfrak h}$ descends to a fibration of compact nilmanifolds $\pi:M\to M'$ if and only if ${\mathfrak h}$ is $\Gamma$-rational. In principle, all subspaces that are naturally associated to the Lie-algebra structure of ${\ensuremath{\mathfrak g}}$ are rational with respect to any rational structure in ${\ensuremath{\mathfrak g}}$. In particular this holds for the subspaces in the ascending and descending central series (Definition \[ZgCg\]) and intersections thereof [@Cor-Green p. 208]. If we add left-invariant complex structures, we would like the fibration $\pi:M_J\to M'_{J'}$ to be holomorphic as well, which, by left-invariance, is the same as saying that ${\ensuremath{\mathfrak g}}\to {\ensuremath{\mathfrak g}}'$ is complex linear or equivalently that ${\mathfrak h}$ is a complex subspace of $({\ensuremath{\mathfrak g}}, J)$. We have proved \[fibration\] Let $M_J$ be a nilmanifold with left-invariant complex structure. Then ${\mathfrak h}\subset {\ensuremath{\mathfrak g}}$ defines a holomorphic fibration $\pi:M_J\to M'_{J'}$ if and only if ${\mathfrak h}$ is a $J$-invariant and $\Gamma$-rational ideal in ${\ensuremath{\mathfrak g}}$.
It is time for an example that shows what can go wrong: \[badex\] We define a 6-dimensional Lie algebra ${\mathfrak h}_7$ with basis $e_1, \dots, e_6$ where, up to anti-commutativity, the only non-zero brackets are $$[e_1, e_2]=-e_4,\, [e_1, e_3]=-e_5,\, [e_2, e_3]=-e_6.$$ The vectors $e_4,\dots,e_6$ span the centre $\kz^1{\mathfrak h}_7=\kc^1{\mathfrak h}_7$. Since the structure equations are rational there is a lattice $\Gamma$ in the corresponding simply connected Lie-group $H_7$ and we can consider the nilmanifold $M=\Gamma\backslash H_7$. For $\lambda\in \IR$ we give a left-invariant complex structure $J_\lambda$ on $M$ by specifying a basis for the space of $(1,0)$-vectors: $$({{{{\mathfrak h}_7}^{1,0}}})_\lambda:=\langle X_1=e_1-ie_2, X_2^\lambda= e_3-i (e_4-\lambda e_1), X_3^\lambda=-e_5+\lambda e_4+ie_6\rangle$$ One can check that $[X_1, X_2^\lambda]=X_3^\lambda$ and, since $X_3^\lambda$ is contained in the centre, the complex structure is integrable. The largest complex subspace of the centre is spanned by the real and imaginary part of $X_3^\lambda$ since the centre has real dimension three. The simply connected Lie-group $H_7$ has a filtration by subgroups induced by the filtration $${\mathfrak h}_7\supset V_1=\langle \lambda e_2+e_3, e_4,Im(X_3^\lambda),Re(X_3^\lambda)\rangle\supset V_2=\langle Im(X_3^\lambda),Re(X_3^\lambda)\rangle \supset 0$$ on the Lie-algebra and, since all these are $J$-invariant, $H_7$ has the structure of a tower of principal holomorphic bundles with fibre $\IC$. In fact, using the results of [@ugarte07], a simple calculation shows that every complex structure on ${\mathfrak h}_7$ is equivalent to $J_0$. Now we take the compatibility with the lattice into account. The rational structure induced by $\Gamma$ coincides with the $\IQ$-algebra generated by the basis vectors $e_k$ and, by the criterion in Proposition \[fibration\], the fibrations on $H_7$ descend to the compact nilmanifold $M$ if and only if $\lambda$ is rational.
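The relation $[X_1, X_2^\lambda]=X_3^\lambda$ and the centrality of $X_3^\lambda$ can be verified mechanically from the structure constants. A small plain-Python sketch (the helper names are ours, not from the text); the bracket is extended bilinearly to the complexification:

```python
# Coefficient vectors over C for the complexification of h_7;
# structure constants: [e1,e2] = -e4, [e1,e3] = -e5, [e2,e3] = -e6.
def e(k):
    return tuple(1.0 if i == k - 1 else 0.0 for i in range(6))

def add(*vs):
    return tuple(sum(t) for t in zip(*vs))

def scal(c, v):
    return tuple(c * t for t in v)

def bracket(x, y):
    # bilinear extension of the three brackets above
    res = [0j] * 6
    for (i, j, k) in [(1, 2, 4), (1, 3, 5), (2, 3, 6)]:
        res[k - 1] -= x[i - 1] * y[j - 1] - x[j - 1] * y[i - 1]
    return tuple(res)

def close(u, v):
    return all(abs(a - b) < 1e-12 for a, b in zip(u, v))

for lam in (0.0, 1.0, 2 ** 0.5):  # rational and irrational parameters alike
    X1 = add(e(1), scal(-1j, e(2)))
    X2 = add(e(3), scal(-1j, add(e(4), scal(-lam, e(1)))))
    X3 = add(scal(-1.0, e(5)), scal(lam, e(4)), scal(1j, e(6)))
    assert close(bracket(X1, X2), X3)        # [X1, X2^lambda] = X3^lambda
    assert close(bracket(X1, X3), (0,) * 6)  # X3^lambda lies in the centre
    assert close(bracket(X2, X3), (0,) * 6)
```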
In fact, one can check that for $\lambda\notin \IQ$ the Lie-algebra ${\mathfrak h}_7$ does not contain any non-trivial $J$-invariant and $\Gamma$-rational ideals, so there is no holomorphic fibration at all over a nilmanifold of smaller dimension. To understand when there is a suitable tower of fibrations on a nilmanifold the following definitions turn out to be useful: \[stableseries\] Let ${\ensuremath{\mathfrak g}}$ be a nilpotent Lie-algebra with rational structure ${\ensuremath{\mathfrak g}}_\IQ$. We call an ascending filtration $$0=\ks^0{\ensuremath{\mathfrak g}}\subset \ks^1{\ensuremath{\mathfrak g}}\subset \dots \subset \ks^t{\ensuremath{\mathfrak g}}={\ensuremath{\mathfrak g}}$$ a *(complex) torus bundle series* with respect to a complex structure $J$ if for all $i=1,\dots, t$ $$\begin{gathered} \ks^i{\ensuremath{\mathfrak g}}\text{ is rational with respect to ${\ensuremath{\mathfrak g}}_\IQ$ and an ideal in }\ks^{i+1}{\ensuremath{\mathfrak g}}, \tag{$a$}\\ J\ks^i{\ensuremath{\mathfrak g}}=\ks^i{\ensuremath{\mathfrak g}},\tag{$b$}\\ \ks^{i+1}{\ensuremath{\mathfrak g}}/\ks^{i}{\ensuremath{\mathfrak g}}\text{ is abelian }.\tag{$c$} \intertext{If in addition} \ks^{i+1}{\ensuremath{\mathfrak g}}/\ks^{i}{\ensuremath{\mathfrak g}}\subset\kz({\ensuremath{\mathfrak g}}/\ks^{i}{\ensuremath{\mathfrak g}}),\tag{$c'$}\label{princ}\end{gathered}$$ then $(\ks^i{\ensuremath{\mathfrak g}})_{i=0,\dots, t}$ is called a *principal torus bundle series*. An ascending filtration $(\ks^i{\ensuremath{\mathfrak g}})_{i=0,\dots, t}$ on ${\ensuremath{\mathfrak g}}$ is said to be a *stable torus bundle series* for ${\ensuremath{\mathfrak g}}$, if $(\ks^i{\ensuremath{\mathfrak g}})_{i=0,\dots, t}$ is a torus bundle series for every complex structure $J$ and every rational structure ${\ensuremath{\mathfrak g}}_\IQ$ in ${\ensuremath{\mathfrak g}}$. If condition $(c')$ also holds then it is called a *stable principal torus bundle series*.
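For $\lambda=0$ the filtration of Example \[badex\] becomes $0\subset V_2=\langle e_5,e_6\rangle\subset V_1=\langle e_3,e_4,e_5,e_6\rangle\subset {\mathfrak h}_7$, and conditions $(a)$, $(b)$ and $(c)$ can be verified mechanically: every step is spanned by basis vectors, so rationality is automatic and membership reduces to a support check. A plain-Python sketch (all helper names are ours):

```python
# h_7 with brackets [e1,e2] = -e4, [e1,e3] = -e5, [e2,e3] = -e6 (indices 1..6)
def e(k):
    return [1.0 if t == k - 1 else 0.0 for t in range(6)]

def bracket(x, y):
    res = [0.0] * 6
    for (i, j, k) in [(1, 2, 4), (1, 3, 5), (2, 3, 6)]:
        res[k - 1] -= x[i - 1] * y[j - 1] - x[j - 1] * y[i - 1]
    return res

def J(v):
    # J_0 read off from the (1,0)-basis with lambda = 0:
    # Je1 = e2, Je2 = -e1, Je3 = e4, Je4 = -e3, Je5 = -e6, Je6 = e5
    a1, a2, a3, a4, a5, a6 = v
    return [-a2, a1, -a4, a3, a6, -a5]

# the filtration 0 = S^0, S^1 = V2, S^2 = V1, S^3 = h_7, each subspace
# recorded by the set of basis indices spanning it (hence rational)
S = [set(), {5, 6}, {3, 4, 5, 6}, {1, 2, 3, 4, 5, 6}]

def in_span(v, K):
    # membership in a coordinate subspace is just a support check
    return all(abs(c) < 1e-12 for t, c in enumerate(v) if (t + 1) not in K)

basis = [e(k) for k in range(1, 7)]
assert all(J(J(v)) == [-c for c in v] for v in basis)  # J^2 = -id

for i in (1, 2, 3):
    gens = [e(k) for k in S[i]]
    # (a): S^i is an ideal (here even in all of h_7)
    assert all(in_span(bracket(x, v), S[i]) for x in basis for v in gens)
    # (b): J-invariance of S^i
    assert all(in_span(J(v), S[i]) for v in gens)
    # (c): S^i / S^{i-1} abelian, i.e. [S^i, S^i] contained in S^{i-1}
    assert all(in_span(bracket(u, v), S[i - 1]) for u in gens for v in gens)
```

Note that the series has length $t=3$ although ${\mathfrak h}_7$ is only 2-step nilpotent.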
Geometrically, a principal torus bundle series induces the holomorphic analogue of the tower of real principal torus bundles described earlier. With a torus bundle series we get in some sense the opposite picture: we start by fibring $M$ over a complex torus with fibre a nilmanifold with left-invariant complex structure and then proceed by decomposing the fibre further. More precisely, the complex structure $J$ restricts to each of the sub-algebras $\ks^i{\ensuremath{\mathfrak g}}$, and since they are rational we get a nilmanifold with left-invariant complex structure $M_i=\ks^i\Gamma\backslash \ks^iG$ where $\ks^iG=\exp \ks^i{\ensuremath{\mathfrak g}}$ and $\ks^i\Gamma=\Gamma\cap \ks^iG$. Let $T_i$ be the complex torus associated to $\ks^{i}{\ensuremath{\mathfrak g}}/\ks^{i-1}{\ensuremath{\mathfrak g}}$ with the induced complex structure and lattice. The short exact sequences $$0\to \ks^{i-1}{\ensuremath{\mathfrak g}}\to\ks^{i}{\ensuremath{\mathfrak g}}\to \ks^{i}{\ensuremath{\mathfrak g}}/\ks^{i-1}{\ensuremath{\mathfrak g}}\to 0$$ give rise to holomorphic fibre bundles $$\label{wall} {\xymatrix{ M_{i-1} \ar@{^(->}[r] & M_i \ar[d]^{\pi_i}\\ & T_i}} \qquad \text{for } i=1, \dots, t$$ with $M_t=M$ and $M_1=T_1$. Note that these bundles cannot be principal bundles in general since the fibre is not a complex Lie group. Thus a torus bundle series gives an inductive decomposition of $M_J$ into complex tori. Considering the complex structure $J_0$ in Example \[badex\] we see that the length of a (principal) torus bundle series may be larger than the nilpotency index. The notions of stable (principal) torus bundle series appear to be quite strong but in [@rollenske09b] many examples of such have been produced.
For example, the classification of complex structures on Lie-algebras with $\dim \kc^1{\ensuremath{\mathfrak g}}=1$, worked out independently by several authors, shows that $0\subset \kz{\ensuremath{\mathfrak g}}\subset {\ensuremath{\mathfrak g}}$ is a stable principal torus bundle series [@rollenske09b Proposition 3.6]. The notion has the advantage of being independent of the chosen lattice and complex structure and allows us to give structural information valid for all nilmanifolds with left-invariant complex structure and Lie-algebra ${\ensuremath{\mathfrak g}}$. If we have a holomorphic decomposition as above then, by Proposition \[indstep\], the inductive approach works and we obtain If $M_J$ is a nilmanifold with left-invariant complex structure such that ${\ensuremath{\mathfrak g}}$ admits a (principal) torus bundle series with respect to $J$ then Conjecture \[conj\] holds for $M_J$. If ${\ensuremath{\mathfrak g}}$ admits a stable (principal) torus bundle series then Conjecture \[conj\] holds for every nilmanifold with left-invariant complex structure with Lie-algebra ${\ensuremath{\mathfrak g}}$. All possible types of nilmanifolds with left-invariant complex structure up to real dimension 4 were mentioned in the introduction – there are only complex tori and Kodaira surfaces, for which the conjecture is well known. In real dimension 6 there are only 34 isomorphism classes of nilpotent Lie-algebras and the 18 classes admitting a complex structure have been classified by Salamon [@salamon01]. We already met the Lie-algebra ${\mathfrak h}_7$ in Example \[badex\]. The first part of the following result, which implies the second, is contained in [@rollenske09b Section 4.2]. If $M_J$ is a nilmanifold of dimension at most six with Lie-algebra ${\ensuremath{\mathfrak g}}\ncong{\mathfrak h}_7$ then ${\ensuremath{\mathfrak g}}$ admits a stable (principal) torus bundle series and Conjecture \[conj\] holds for $M_J$.
Roughly half of the Hodge numbers of a nilmanifold $(\Gamma\backslash H_7, J)$ can be checked by hand to coincide with the predictions but the ones in the middle are not immediately accessible. The conjecture is known to be true in other important special cases. If $M_J$ is the quotient of a complex Lie group, i.e., $({\ensuremath{\mathfrak g}}, J)$ is a complex Lie algebra, then the tangent bundle of $M_J$ is holomorphically trivial and $M_J$ is complex parallelisable. This can be reformulated as $[Jx, y]=J[x,y]$ for all $x,y\in {\ensuremath{\mathfrak g}}$ or equivalently as $[{{{{\ensuremath{\mathfrak g}}}^{1,0}}}, {{{{\ensuremath{\mathfrak g}}}^{0,1}}}]=0$. Complex structures satisfying the opposite condition $[{{{{\ensuremath{\mathfrak g}}}^{1,0}}}, {{{{\ensuremath{\mathfrak g}}}^{1,0}}}]=0$ are called abelian (because ${{{{\ensuremath{\mathfrak g}}}^{1,0}}}$ is an abelian subalgebra of ${\ensuremath{\mathfrak g}}_\IC$). Such complex structures were introduced by Barberis [@barberis99] and come up in different contexts [@andrada-salamon05; @dotti-fino00]. In both cases it is straightforward to check that the ascending central series is a principal torus bundle series and thus we have \[acp\] If $M_J$ is a nilmanifold with left-invariant complex structure and $J$ is abelian or if $M_J$ is complex parallelisable then $M_J$ is an iterated principal holomorphic torus bundle and Conjecture \[conj\] holds for $M_J$. It was another insight of Console and Fino that the essential issue here is rationality of ideals: consider the descending central series adapted to $J$ defined by $$\kc^i_J({\ensuremath{\mathfrak g}})=\kc^i{\ensuremath{\mathfrak g}}+J\kc^i{\ensuremath{\mathfrak g}},$$ in other words $\kc^i_J{\ensuremath{\mathfrak g}}$ is the smallest $J$-invariant subspace of ${\ensuremath{\mathfrak g}}$ containing $\kc^i{\ensuremath{\mathfrak g}}$. Then, by [@con-fin01 Lemma 1], these subspaces satisfy conditions $(b)$ and $(c)$ of Definition \[stableseries\].
Thus they induce a decomposition of the universal cover $(G,J)$ as an iterated holomorphic bundle over complex vector spaces similar to the one described above. The decomposition of the universal cover descends to the compact manifold $M_J$ if and only if the subspaces $\kc^i_J{\ensuremath{\mathfrak g}}$ are rational. In particular this is the case if $J$ itself is rational, i.e., if $J$ maps ${\ensuremath{\mathfrak g}}_\IQ$ to itself. Thus we have \[rational\] If $J$ is rational then ${\ensuremath{\mathfrak g}}$ admits a torus bundle series adapted to $J$ and Conjecture \[conj\] holds for $M_J$. This result is very useful, since if one is looking for specific examples usually everything can be chosen to be rational. Console and Fino’s result on openness ------------------------------------- In the last section we have seen that we can compute Dolbeault cohomology with left-invariant forms whenever we have some control over the geometry of $M_J$. Using deformation theoretic methods one can go further. Recall that the datum of a complex structure $J:{\ensuremath{\mathfrak g}}\to {\ensuremath{\mathfrak g}}$ is equivalent to specifying the subspace ${{{{\ensuremath{\mathfrak g}}}^{1,0}}}\subset {\ensuremath{\mathfrak g}}_\IC$. So the set of left-invariant complex structures can be identified with the subset $$\kc({\ensuremath{\mathfrak g}})=\{ V\in \mathbb{G}r(n , {\ensuremath{\mathfrak g}}_\IC)\mid V\cap \bar V=0, [ V,V]\subset V\}$$ of the Grassmannian of half-dimensional subspaces of ${\ensuremath{\mathfrak g}}_\IC$. The first condition ensures that ${\ensuremath{\mathfrak g}}_\IC=V\oplus\bar V$ and the second that the complex structure $J_V$ with the corresponding eigenspace decomposition is integrable. The question when the universal cover decomposes as an iterated principal bundle as above has been studied by Cordero, Fernández, Gray and Ugarte. Such left-invariant complex structures are called nilpotent and an algebraic characterisation has been given in [@cfgu00].
Note that it is a hard problem to decide whether $\kc({\ensuremath{\mathfrak g}})\neq\varnothing$ for a given nilpotent Lie-algebra ${\ensuremath{\mathfrak g}}$. \[open\] Let $U\subset \kc({\ensuremath{\mathfrak g}})$ be the subset of left-invariant complex structures $J$ for which the inclusion $$\phi_J: H^{p,q}({\ensuremath{\mathfrak g}}, J)\into H^{p,q}(M_J)$$ is an isomorphism. Then $U$ is an open subset of $\kc({\ensuremath{\mathfrak g}})$. The strategy of the proof is to show that the dimension of the complement of $H^{p,q}({\ensuremath{\mathfrak g}}, J)$ in $H^{p,q}(M_J)$ is upper-semi-continuous and thus remains equal to zero in an open neighbourhood of any point $J$ where $\phi_J$ is an isomorphism. So to prove Conjecture \[conj\] it would be sufficient to show that, for each connected component of $\kc({\ensuremath{\mathfrak g}})$, the subset $U$ as in the Theorem is non-empty and closed. Unfortunately Hodge-numbers do behave badly when going to the limit, especially for non-Kähler manifolds, so closedness is very difficult. The set of rational complex structures is a good candidate to show that $U$ is non-empty and dense but it is not clear to me whether $\kc({\ensuremath{\mathfrak g}})$ always contains rational complex structures provided it is non-empty. Calculations suggest that this will not be the case but a concrete counterexample is complicated to write down. In Corollary \[acp\] we saw that the conjecture holds for abelian complex structures and complex parallelisable nilmanifolds. Small deformations of such structures have been studied in some detail; they are again left-invariant but in general neither abelian nor complex parallelisable (see Section \[defos\] and [@con-fin-poon06; @mpps06; @rollenske08a]). In this way we can get more examples of interesting complex structures where the conjecture still holds.
Some new results and open questions {#new} ----------------------------------- In this section we first present a result showing that any nilmanifold with left-invariant complex structure is not too far from satisfying Conjecture \[conj\]: it suffices to pass to a finite quotient. This result is new and might lead to a complete proof; we will discuss some possible approaches below. We first need a lemma that exploits the especially simple arithmetic of lattices in nilpotent Lie groups. \[biglatt\] Let ${\ensuremath{\mathfrak g}}$ be a nilpotent real Lie algebra, $\Gamma\subset G$ a lattice and ${\ensuremath{\mathfrak g}}_\IQ$ the rational structure associated to $\log\Gamma$. Then for any $x\in {\ensuremath{\mathfrak g}}_\IQ$ there exists a lattice $\Gamma'$ containing $\Gamma$ with finite index such that $\exp(x)\in \Gamma'$. Pick any lattice $\tilde\Gamma$ containing $\exp(x)$ and inducing the same rational structure in ${\ensuremath{\mathfrak g}}$ as $\Gamma$. This is possible by [@Cor-Green Lemma 5.1.10]. Then by [@Cor-Green Theorem 5.1.12] $\Gamma\cap\tilde\Gamma$ is a lattice in $G$ which is of finite index in both $\Gamma$ and $\tilde\Gamma$. If we define $\Gamma'$ to be the subgroup of $G$ generated by $\Gamma$ and $\tilde\Gamma$ then $\Gamma'$ is again discrete and contains both $\exp(x)$ and $\Gamma$. \[quotient\] Let $M_J=(\Gamma\backslash G, J)$ be a nilmanifold with left-invariant complex structure. Then there exists a lattice $\Gamma'\subset G$ with $\Gamma$ of finite index in $\Gamma'$ such that $$\phi_J:H^{p,q}({\ensuremath{\mathfrak g}}, J)\isom H^{p,q}(\Gamma'\backslash G,J).$$ In other words, given any nilmanifold with left-invariant complex structure $M_J$ there is a finite regular covering $\pi:M_J\to M'_J$ such that the conjecture holds for $M_J'$. Endow all involved bundles with left-invariant hermitian metrics. Then the Laplacian $\Delta_{\delbar}=\delbar\delbar^*+\delbar^*\delbar$ is a left-invariant elliptic differential operator on $G$.
Let $\kh(G):=\ker(\Delta_{\delbar})$ be the space of harmonic forms of type $(p,q)$ on $G$. We can take invariants under $G$ and $\Gamma$ respectively and get $$H^{p,q}(M)\isom \kh(G)^\Gamma\supset \kh(G)^G=H^{p,q}({\ensuremath{\mathfrak g}},J).$$ The last equality comes from the compatibility of the Hodge-decomposition with the subspace of left-invariant forms; this has been worked out in detail in [@rollenske09a]. We prove our claim by induction on $d:=\dim \kh(G)^\Gamma-\dim \kh(G)^G$. If $d=0$ we can take $\Gamma'=\Gamma$. If $d>0$ there exists an $\alpha\in \kh(G)^\Gamma$ and an open subset $U\subset G$ such that $$g^*\alpha\neq\alpha$$ for $g\in U$. Let ${\ensuremath{\mathfrak g}}_\IQ$ be the rational structure induced by $\log(\Gamma)\subset {\ensuremath{\mathfrak g}}$. Since the exponential map is a diffeomorphism the image of ${\ensuremath{\mathfrak g}}_\IQ$ is dense in $G$ and we can find an $x\in {\ensuremath{\mathfrak g}}_\IQ$ such that $\exp (x)\in U$. By Lemma \[biglatt\] we can find a lattice $\Gamma'\subset G$ containing $\Gamma$ with finite index such that $\exp(x)\in \Gamma'$; then $\alpha \notin \kh(G)^{\Gamma'}=H^{p,q}( \Gamma'\backslash G, J)$ and we conclude by induction. Proposition \[quotient\] suggested an approach that unfortunately did not prove successful. Assume we have constructed for a nilmanifold with left-invariant complex structure $M_J$ a lattice $\Gamma\subset\Gamma'$ as above and then manage to find a way to scale it down, i.e., to find a contracting automorphism $\mu$ of $G$ such that $\mu(\Gamma')=\tilde\Gamma'\subset \Gamma$. This is possible if ${\ensuremath{\mathfrak g}}$ is naturally graded but not in general [@dyer70]. On the level of real manifolds this corresponds to two regular coverings $$\tilde M'=\tilde\Gamma'\backslash G\to M \to M'=\Gamma'\backslash G$$ and a (different) isomorphism $\mu:M'\isom\tilde M'$.
If $\mu$ preserves the complex structure, i.e., $M'_J$ and $\tilde M'_J$ are isomorphic as complex manifolds then the injections $$H^{p,q}({\ensuremath{\mathfrak g}}, J)=H^{p,q}(M'_J)\into H^{p,q}(M_J)\into H^{p,q}(\tilde M'_J)=H^{p,q}({\ensuremath{\mathfrak g}}, J)$$ prove the conjecture for $M_J$. But this will generally not be the case, as can be worked out for the Lie-algebra given in Example \[badex\]. We have seen that Conjecture \[conj\] holds if we understand the complex geometry of a nilmanifold with left-invariant complex structure $M_J$. In addition we have the openness result of Console and Fino. Nevertheless the general case remains open. There are two other approaches one could try: in the proof of Proposition \[quotient\] we compared $G$-invariant and $\Gamma$-invariant $\Delta_\delbar$-harmonic differential forms on the universal cover $G$ after choosing some left-invariant hermitian structure. The study of this elliptic operator falls into the realm of harmonic analysis but there does not seem to be a general result that shows that $\Gamma$-invariant harmonic forms are $G$-invariant. One problem is again that $\Delta_\delbar$ does not need to have any compatibility with the natural filtrations on ${\ensuremath{\mathfrak g}}$ but working on $G$ we might avoid the issue of rationality. Going back to the compact manifold $M_J$ one might try to use some Weitzenböck formula to express $\Delta_\delbar$ in a different way. But since $M_J$ is in general not Kähler the Chern-connection compatible with the hermitian structure will differ from the Levi-Civita connection and again there does not seem to be an applicable general formula at the moment. In this context Gromov’s characterisation of nilmanifolds as almost flat manifolds [@gromov78] might play an important role. Applications {#apps} ============ As mentioned in the introduction, nilmanifolds can be a convenient source of examples in many contexts. 
Integrability conditions for additional left-invariant geometric structures usually boil down to linear algebra and thus one easily writes down interesting examples of complex, riemannian, hermitian or symplectic structures. Proceeding from the examples to general results is more difficult. Here we will discuss two further applications related to complex structures; references to other areas have already been given in the introduction. Prescribing cohomology behaviour and the Frölicher spectral sequence -------------------------------------------------------------------- If Conjecture \[conj\] holds for a nilmanifold with left-invariant complex structure $M_J$ the computation of its Dolbeault cohomology $H^{p,q}(M_J)=H^{p,q}({\ensuremath{\mathfrak g}}, J)$ is a matter of finite-dimensional linear algebra and can be taught to a computer algebra system. In addition this makes it possible to study the Frölicher spectral sequence $$E_1^{p,q}=H^{p,q}(M_J)\implies H^{p+q}_{dR}(M, \IC),$$ which measures the difference between Dolbeault cohomology and de Rham cohomology. This spectral sequence degenerates at $E_1$ for all compact complex surfaces but Cordero, Fernández, Gray and Ugarte showed in [@cfgu99], studying nilmanifolds, that for complex 3-folds the maximal non-degeneracy $E_2\ncong E_3=E_\infty$ is possible. Later we constructed a family $X_n\to T_n$ of principal torus bundles over tori such that $d_n\neq0$ for $X_n$ (see [@rollenske07a]). Probably the maximal non-degeneracy is possible in every dimension starting from 3, but concrete examples are still missing. If we ask in addition for simply connected manifolds there are only very few examples with non-zero higher differentials known [@pittie89]. The idea behind these examples is that if we write down some 1-forms and their differentials carefully enough we get a nilmanifold supporting these forms for free.
For example, let $V$, $W$ be two complex vector spaces and choose an arbitrary map $$\delta: W^* \to \Lambda^2 V^*\oplus ( V^*\tensor \bar V^*).$$ Setting ${{{{\ensuremath{\mathfrak g}}}^{1,0}}}=V\oplus W$ and ${\ensuremath{\mathfrak g}}_\IC:={{{{\ensuremath{\mathfrak g}}}^{1,0}}}\oplus \overline{{{{{\ensuremath{\mathfrak g}}}^{1,0}}}}$ we extend $\delta$ to a map $$d:{\ensuremath{\mathfrak g}}^*_\IC\to \Lambda^2{\ensuremath{\mathfrak g}}^*_\IC$$ which is zero on $V^*\oplus \bar V^*$ and $\delta+\bar\delta$ on $W^*\oplus \bar W^*$. There is a natural real vector space ${\ensuremath{\mathfrak g}}=\{z+\bar z\mid z\in {{{{\ensuremath{\mathfrak g}}}^{1,0}}}\}\subset {\ensuremath{\mathfrak g}}_\IC$ and via the identity $$d\alpha(x,y)=-\alpha([x,y])\quad \text{for $\alpha\in {\ensuremath{\mathfrak g}}^*$ and $x,y\in {\ensuremath{\mathfrak g}}$}$$ the vector space ${\ensuremath{\mathfrak g}}$ becomes a 2-step nilpotent Lie-algebra. The decomposition of ${\ensuremath{\mathfrak g}}_\IC$ defines an almost complex structure $J$ on ${\ensuremath{\mathfrak g}}$ which is integrable by our choice that $\delta$ has no component mapping to $\Lambda^2\bar V^*$. If we have chosen $\delta$ such that the structure constants of ${\ensuremath{\mathfrak g}}$ turn out to be rational there exists a lattice in the associated nilpotent Lie-group and we have constructed a nilmanifold $M_J$ with left-invariant complex structure. Nearly by definition $M_J$ is a principal holomorphic torus bundle over a torus, and thus we have not only prescribed the differentials of some 1-forms quite arbitrarily: in fact our datum encodes the whole cohomology algebra. Constructing nilmanifolds with higher nilpotency index in a similar way is more tedious since one has to take care of the Jacobi identity, equivalent to $d^2=0$, as well.
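The last point is easy to see in coordinates: in a 2-step algebra every bracket lands in the centre, so each summand $[x,[y,z]]$ of the Jacobi identity vanishes individually and no condition on $\delta$ arises. A randomized plain-Python check (dimensions and helper names are our choice):

```python
import random

random.seed(0)
DIM_V, DIM_W = 3, 2           # brackets of V-vectors land in the central part W
N = DIM_V + DIM_W

# antisymmetric structure constants C[i][j][k]: [v_i, v_j] = sum_k C[i][j][k] v_k,
# nonzero only for i, j indexing V and k indexing W, so the algebra is 2-step
C = [[[0.0] * N for _ in range(N)] for _ in range(N)]
for i in range(DIM_V):
    for j in range(i + 1, DIM_V):
        for k in range(DIM_V, N):
            C[i][j][k] = random.uniform(-1, 1)
            C[j][i][k] = -C[i][j][k]

def bracket(x, y):
    return [sum(C[i][j][k] * x[i] * y[j] for i in range(N) for j in range(N))
            for k in range(N)]

def basis(i):
    return [1.0 if t == i else 0.0 for t in range(N)]

# each Jacobi summand [x,[y,z]] vanishes exactly: [y,z] lies in W, W is central
defect = max(
    abs(t)
    for i in range(N) for j in range(N) for k in range(N)
    for t in bracket(basis(i), bracket(basis(j), basis(k)))
)
assert defect == 0.0
```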
Deformations of complex structures {#defos} ---------------------------------- Our main motivation to study Conjecture \[conj\] was the question whether small deformations of left-invariant complex structures remain left-invariant. Generalising results of Console, Fino and Poon [@con-fin-poon06] (see also [@mpps06]) we proved If Conjecture \[conj\] holds for a nilmanifold with left-invariant complex structure $M_J$ then all sufficiently small deformations of $J$ are again left-invariant complex structures. The idea of the proof is that small deformations of $J$ are controlled by the first and second cohomology groups of the holomorphic tangent bundle. By constructing a version of Serre-duality that works purely on the level of Lie-algebra cohomology one can represent the elements of $H^i(M_J, \mathcal{T}_{M_J})$ by left-invariant forms and the result follows by the standard inductive construction of the Kuranishi space [@kuranishi62]. The space of all integrable complex structures on a nilmanifold $M$ modulo orientation-preserving diffeomorphisms isotopic to the identity is called Teichmüller space $\mathfrak{T}(M)$. It is (locally) a complex analytic space, the germ at a fixed complex structure $J$ being the Kuranishi space of $(M,J)$. Thus the theorem says that, under the assumption of Conjecture \[conj\], the set of left-invariant complex structures is open in $\mathfrak{T}(M)$. If the Lie algebra ${\ensuremath{\mathfrak g}}$ of $M$ admits a stable (principal) torus bundle series (see Definition \[stableseries\]) then Conjecture \[conj\] holds for all left-invariant complex structures on ${\ensuremath{\mathfrak g}}$ and it is natural to ask whether the set of left-invariant complex structures is also closed. The starting point in this direction is Catanese’s result that all deformations in the large of a complex torus are complex tori [@catanese02].
Generalising results of Catanese and Frediani [@catanese04; @cat-fred06], this was extended in [@rollenske09b] to a large class of nilmanifolds with left-invariant complex structure. As an example we would like to mention that every deformation in the large of the Iwasawa manifold is a nilmanifold with left-invariant complex structure; in this case the topology of the space of left-invariant complex structures is known [@ket-sal04]. In this area many interesting questions remain open; we hope to address some of these in future work. Progress in the direction of Conjecture \[conj\] would strengthen our belief that the complex geometry of nilmanifolds with left-invariant complex structure can be completely understood via linear algebra. Acknowledgements {#acknowledgement .unnumbered} --------------- We would like to thank the organisers for the stimulating conference and the invitation to contribute to this volume. We enjoyed several discussions on this topic with Fabrizio Catanese and Uwe Semmelmann. Anna Fino provided some interesting references to the literature. Careful remarks by the referee helped to improve the presentation. During the preparation of this article the author was supported by the Hausdorff Centre for Mathematics in Bonn. [10]{} Abbena, E., Garbiero, S., Salamon, S.: Almost [H]{}ermitian geometry on six dimensional nilmanifolds. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) **30**(1), 147–170 (2001). Andrada, A., Salamon, S.: Complex product structures on [L]{}ie algebras. Forum Math. **17**(2), 261–295 (2005). Barberis, M.L.: Affine connections on homogeneous hypercomplex manifolds. J. Geom. Phys. **32**(1), 1–13 (1999). Barberis, M.L., Dotti, I.G., Verbitsky, M.: Canonical bundles of complex nilmanifolds, with applications to hypercomplex geometry. Math. Res. Lett. **16**(2), 331–347 (2009) Benson, C., Gordon, C.S.: Kähler and symplectic structures on nilmanifolds. 
Topology **27**(4), 513–518 (1988) Buser, P., Karcher, H.: Gromov’s almost flat manifolds, *Astérisque*, vol. 81. Société Mathématique de France, Paris (1981) Catanese, F.: Deformation types of real and complex manifolds. In: Contemporary trends in algebraic geometry and algebraic topology (Tianjin, 2000), *Nankai Tracts Math.*, vol. 5, pp. 195–238. World Sci. Publ., River Edge, NJ (2002) Catanese, F.: Deformation in the large of some complex manifolds. [I]{}. Ann. Mat. Pura Appl. (4) **183**(3), 261–289 (2004) Catanese, F., Frediani, P.: Deformation in the large of some complex manifolds. [II]{}. In: Recent progress on some problems in several complex variables and partial differential equations, *Contemp. Math.*, vol. 400, pp. 21–41. Amer. Math. Soc., Providence, RI (2006) Chevalley, C., Eilenberg, S.: Cohomology theory of [L]{}ie groups and [L]{}ie algebras. Trans. Amer. Math. Soc. **63**, 85–124 (1948) Console, S., Fino, A.: Dolbeault cohomology of compact nilmanifolds. Transform. Groups **6**(2), 111–124 (2001) Console, S., Fino, A.: On the de [R]{}ham cohomology of solvmanifolds (2009) Console, S., Fino, A., Poon, Y.S.: Stability of abelian complex structures. Internat. J. Math. **17**(4), 401–416 (2006) Cordero, L.A., Fern[á]{}ndez, M., Gray, A., Ugarte, L.: Compact nilmanifolds with nilpotent complex structures: [D]{}olbeault cohomology. Trans. Amer. Math. Soc. **352**(12), 5405–5433 (2000) Cordero, L.A., Fernandez, M., Ugarte, L., Gray, A.: Frölicher spectral sequence of compact nilmanifolds with nilpotent complex structure. In: New developments in differential geometry, Budapest 1996, pp. 77–102. Kluwer Acad. Publ., Dordrecht (1999) Corwin, L.J., Greenleaf, F.P.: Representations of nilpotent [L]{}ie groups and their applications. [P]{}art [I]{}, *Cambridge Studies in Advanced Mathematics*, vol. 18. Cambridge University Press, Cambridge (1990). Dotti, I.G., Fino, A.: Hypercomplex nilpotent [L]{}ie groups. 
In: Global differential geometry: the mathematical legacy of [A]{}lfred [G]{}ray ([B]{}ilbao, 2000), *Contemp. Math.*, vol. 288, pp. 310–314. Amer. Math. Soc., Providence, RI (2001) Dyer, J.L.: A nilpotent [L]{}ie algebra with nilpotent automorphism group. Bull. Amer. Math. Soc. **76**, 52–56 (1970) Fern[á]{}ndez, M., Ivanov, S., Ugarte, L., Villacampa, R.: Compact supersymmetric solutions of the heterotic equations of motion in dimension 5. Nuclear Phys. B **820**(1-2), 483–502 (2009). Graña, M., Minasian, R., Petrini, M., Tomasiello, A.: A scan for new N = 1 vacua on twisted tori, JHEP 05 (2007) 031, hep-th/0609124. Green, B., Tao, T.: Linear equations in primes (2006), arXiv:math/0606088v2. Gromov, M.: Almost flat manifolds. J. Differential Geom. **13**(2), 231–241 (1978). Guan, D.: Modification and the cohomology groups of compact solvmanifolds. Electron. Res. Announc. Amer. Math. Soc. **13**, 74–81 (electronic) (2007). Hirzebruch, F.: Topological methods in algebraic geometry. Classics in Mathematics. Springer-Verlag, Berlin (1995). Host, B., Kra, B.: Nonconventional ergodic averages and nilmanifolds. Ann. of Math. (2) **161**(1), 397–488 (2005). Ketsetzis, G., Salamon, S.: Complex structures on the [I]{}wasawa manifold. Adv. Geom. **4**(2), 165–179 (2004) Knapp, A.W.: Lie groups beyond an introduction, *Progress in Mathematics*, vol. 140, second edn. Birkhäuser Boston Inc., Boston, MA (2002) Kobayashi, S., Nomizu, K.: Foundations of differential geometry. [V]{}ol. [II]{}. Interscience Tracts in Pure and Applied Mathematics, No. 15. Interscience Publishers John Wiley & Sons, Inc., New York-London-Sydney (1969) Kodaira, K.: On the structure of compact complex analytic surfaces. [II]{}. Amer. J. Math. **88**, 682–721 (1966) Kuranishi, M.: On the locally complete families of complex analytic structures. Ann. of Math. (2) **75**, 536–577 (1962) Lange, H., Birkenhake, C.: Complex abelian varieties, *Grundlehren der Mathematischen Wissenschaften*, vol. 302. 
Springer-Verlag, Berlin (1992) Lauret, J.: A canonical compatible metric for geometric structures on nilmanifolds. Ann. Global Anal. Geom. **30**(2), 107–138 (2006) Maclaughlin, C., Pedersen, H., Poon, Y.S., Salamon, S.: Deformation of 2-step nilmanifolds with abelian complex structures. J. London Math. Soc. (2) **73**(1), 173–193 (2006) Mostow, G.D.: Cohomology of topological groups and solvmanifolds. Ann. of Math. (2) **73**, 20–48 (1961) Nakamura, I.: Complex parallelisable manifolds and their small deformations. J. Differential Geometry **10**, 85–112 (1975) Nomizu, K.: On the cohomology of compact homogeneous spaces of nilpotent [L]{}ie groups. Ann. of Math. (2) **59**, 531–538 (1954) Pittie, H.V.: The nondegeneration of the [H]{}odge-de [R]{}ham spectral sequence. Bull. Amer. Math. Soc. (N.S.) **20**(1), 19–22 (1989) Rollenske, S.: The [F]{}rölicher spectral sequence can be arbitrarily non-degenerate. Math. Ann. **341**(3), 623–628 (2008). Rollenske, S.: The [K]{}uranishi space of complex parallelisable nilmanifolds (2008). arXiv:0803.2048, to appear in JEMS. Rollenske, S.: Geometry of nilmanifolds with left-invariant complex structure and deformations in the large. Proc. Lond. Math. Soc. (3) **99**(2), 425–460 (2009). Rollenske, S.: Lie-algebra [D]{}olbeault-cohomology and small deformations of nilmanifolds. J. Lond. Math. Soc. (2) **79**(2), 346–362 (2009). Salamon, S.M.: Complex structures on nilpotent [L]{}ie algebras. J. Pure Appl. Algebra **157**(2-3), 311–333 (2001) Ugarte, L.: Hermitian structures on six-dimensional nilmanifolds. Transform. Groups **12**(1), 175–202 (2007) Vinberg, E.B., Gorbatsevich, V.V., Shvartsman, O.V.: Discrete subgroups of [L]{}ie groups. In: Lie groups and Lie algebras, II, *Encyclopaedia Math. Sci.*, vol. 21, pp. 1–123, 217–223. Springer, Berlin (2000) Weibel, C.A.: An introduction to homological algebra, *Cambridge Studies in Advanced Mathematics*, vol. 38. Cambridge University Press, Cambridge (1994)
Fighting Four Big Cancer Myths on World Cancer Day

What if everybody around the world stopped their busy lives for just a minute and thought about cancer? That’s exactly what the UICC hopes will happen today.

World Cancer Day — What It Is (www.worldcancerday.org)

The UICC, a multinational non-profit organization founded in 1933, launched World Cancer Day to raise awareness about the disease. The UICC’s ultimate goal is to eliminate cancers as life-threatening illnesses, but they have some work to do. Currently about 13 percent of all deaths worldwide are caused by cancer or related complications, and a large share of those are due to preventable behaviors including smoking, obesity, and alcohol consumption. In 2010, the worldwide cost of cancer — largely from productivity lost because of disability or premature death — was $290 billion. If we don’t change how we think about healthcare and sickness on an international level, that number is expected to swell to $458 billion by 2030 — with most of the bill falling on middle- and lower-income countries.

This year, World Cancer Day is focusing on the myths surrounding “the big C.”

Myth One: Cancer is just a health issue. Not true — in reality, cancer affects a country’s economy, society, and overall development.

Myth Two: Cancer is a disease of the wealthy, elderly, and developed countries. False — cancer can touch all people, regardless of socioeconomic status, nationality, and age.

Myth Three: Cancer is a death sentence. Wrong — these days, many cancers can be managed and cured with effective treatments.

Myth Four: Cancer is my fate. No — preventative measures can reduce the likelihood of many people developing cancer in the first place.

By addressing these misconceptions, the UICC hopes to separate the truth from the hyperbole and encourage more people to seek preventative measures, potentially saving millions of lives every year.

Get the Facts — How to Participate

So what can we do to help out? 
The easiest way to get involved is to sign the World Cancer Declaration, which outlines 11 steps to drastically reduce cancer’s reach by 2020. Measures include lowering rates of tobacco use and obesity worldwide, promoting vaccination programs for Hepatitis B and HPV, training health workers to specialize in cancer care, and educating the public about cancer. The next step is to get talking, in person and online, via Twitter, Facebook, and apps. By opening up about one of the world’s most serious health problems, we can help take the important steps necessary to dispel the stigma of cancer — and ultimately work toward the necessary cures. The American Cancer Society (@ACSGlobal) is hosting a Twitter chat starting at 11am Eastern time. Tweet with the hashtag #WorldCancerDay to show support, or install the “Cancer Myths vs. Facts” Facebook app. Are you participating in World Cancer Day this year? Share why this day is important to you in the comments below or tweet the author.
Q: How to improve runtime performance of reading file program

I'm currently trying to read 150 million lines (from a data file with bio-sequencing information) using Python. Currently, it's reading at 20,000 lines per second, which would take about an hour and a half, and I have to read through 20 of these files. Given that Python is a very high-level language, would it be better to use Java to read the files instead, or is the time difference not significant enough to warrant switching to another language? The current code I'm using is:

    lines_hashed = 0
    with open(CUR_FILE) as f:
        for line in f:
            cpg = line.split("\t")
            cpg_dict[cpg[0]] = ....data....
            print lines_hashed
            lines_hashed += 1

The print statement is there only as a sanity check that the program didn't stall anywhere. I'm assuming this is also slowing down the running time. Is there a way to check this without the print statement? Thanks.

A: Printing to the screen is expensive compared to disk reads. If you must check performance as you go along, only print something out every 1000 lines or more. As for using other languages, almost all languages call the OS to do the real work anyway.
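The answer's advice can be sketched as follows; this is a minimal illustration, and the function name, the progress interval, and the assumption that the key sits in the first tab-separated column are all made up for the example:

```python
import io

def build_cpg_dict(handle, progress_every=100_000):
    """Parse tab-separated lines into a dict keyed on the first column."""
    cpg_dict = {}
    for line_no, line in enumerate(handle, start=1):
        fields = line.rstrip("\n").split("\t")
        cpg_dict[fields[0]] = fields[1:]
        # Console I/O is slow relative to parsing, so report only occasionally.
        if line_no % progress_every == 0:
            print("%d lines hashed" % line_no)
    return cpg_dict

# Tiny in-memory stand-in for the real data file.
sample = io.StringIO("cpg1\ta\tb\ncpg2\tc\td\n")
cpg_dict = build_cpg_dict(sample)
```

Dropping the per-line print is usually the single biggest win; beyond that, profiling the loop (e.g. with the standard-library `cProfile` module) will tell you whether the remaining time goes to `split` or to the dictionary inserts before you reach for another language.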
Our 'Plain Barberry Knit' is only £22.00 in the sale! A real bargain considering it used to be £47.50. It looks fab with a little denim skirt in the summer or layered up when the temperature drops (sob). See what other treasures you can find... Discover more bargains> ... Summer's here! Our Senior Menswear Designer, Craig Osborne, tells us what's new for men this season. By nature, spring/summer represents the time of year when colour and pattern are widely accepted as part of your everyday wardrobe, as we move away from the muted tones of grey and navy traditionally associated with the colder months. This season, bright...